Webinar with Jean King: “Evaluation Capacity Building Through the Years”

November 2nd, 2016 by Tom Archibald

FREE WEBINAR on ECB!
The Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group (TIG) of the American Evaluation Association (AEA) is pleased to invite you to the first in a series of ECB webinars.

Jean King is a thought leader in ECB and has also contributed greatly to evaluation in Extension in her long and fruitful career at the University of Minnesota. In the webinar, she will reflect on her observations on the developments in ECB through the years. Don’t miss it!

Wednesday, November 9 from 1:00-2:00pm Eastern via WebEx. 

Click here to register.

Participatory Data Analysis

September 13th, 2016 by Tom Archibald

By Corey Newhouse (Public Profit) and Kylie Hutchinson (Community Solutions)

Earlier this year we held our first webinar on Participatory Data Analysis for Evaluators. In the field of evaluation, which is growing by leaps and bounds and continually innovating, there’s surprisingly little written about this area. Also known as data parties, sense-making sessions, results-briefings, and data-driven reviews, participatory data analysis plays an important role in promoting evaluation use. In this post we’ll describe briefly what participatory data analysis is, and several reasons why you should consider it seriously for your practice.

What is it?

Participatory data analysis can take many forms, but essentially it’s an opportunity for you the evaluator to consult with key stakeholders regarding the preliminary data and analyses. It’s an opportunity to see how stakeholders understand and interpret the data collected by the evaluation, and possibly an opportunity to learn important additional contextual information.

Why is it helpful?

  1. People support what they helped create.

This quote by Richard Beckhard[1] says it all. When stakeholders play an active role in interpreting the findings, we believe they are more likely to develop ownership of the evaluation and implement the recommendations later on. A 2009 survey[2] by Dreolin Fleischer and Tina Christie of Claremont Graduate University found that 86% of American Evaluation Association members believed that the involvement of stakeholders in the evaluation process was an influential or extremely influential factor in greater utilization. Who can say no to that?

  2. Every evaluator needs a reality check.

Participatory data analysis helps ensure not only that, as evaluators, we arrive at the correct conclusions, but also that our recommendations hit the mark. We're (usually) not program staff and lack the in-depth, day-to-day familiarity with a program that our clients have. We need their input to indicate which findings are the most meaningful and to suggest recommendations we might never have thought of on our own. Key stakeholders can suggest appropriate wording for these recommendations, and in the process we can ensure there is consensus on the conclusions.

  3. Ensure the evaluation reaches key stakeholders

Data parties are also a great opportunity to get stakeholder input on which forms of reporting are best for which stakeholders. They can tell us not only who should get the report and by when (to meet key decision-making cycles), but also who actually has the power to act. In this fast-paced, mobile age, evaluators need as much help figuring out how to reach their target audience as they can.

  4. Look Ma, I’m capacity-building!

A wonderful thing happens during participatory data analysis. At some point along the way, we’re building evaluation capacity in a hands-on and directly relevant way for our stakeholders.

  5. Avoid “gotcha” surprises for your client

Sometimes evaluations surface less-than-great findings for the client. Data parties are a good opportunity to share negative findings early, rather than saving the bad news for the final report – which can feel like a set-up for your client.

We have found data parties to be a great way to engage our clients in the sense-making process, which in turn yields more actionable recommendations and builds clients’ support for the evaluation as a whole. Data parties can be large affairs, with lots of people spending hours and hours poring over results. They can also be briefer, smaller sessions with just a few stakeholders and a few data points. The most important thing is to get the (data) party started!

Not sure where to begin? Check out Public Profit’s free guide, Dabbling in the Data. It has step-by-step instructions for 15 team-based data analysis activities that are just right for your next data party. Or download Kylie’s one-page cheat sheet on Data Parties. Party on!

 

[1] Richard Beckhard (1969). Organization development: strategies and models. Reading, Mass.: Addison-Wesley. p. 114.

[2] Fleischer, D.N., & Christie, C.A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2): 158-175.

 

Worthy and Effective Public Value Narratives

January 20th, 2016 by Tom Archibald

By Scott Chazdon, University of Minnesota Extension

In 2014, my evaluation colleagues and I began gathering stories about the impact Extension programs have on individuals and communities. Based initially on the Most Significant Change method (Dart & Davies, 2003), the project aimed to promote ongoing dialogue about Extension programming and help staff and stakeholders explore the changes that occur because of Extension programming.

Methodological Underpinnings

The Most Significant Change methodology is a participatory, story-based approach to evaluating Extension’s impact on participants and the public. We piloted the project across programming in the central region of Minnesota—15 counties that include the Twin Cities metropolitan area and surrounding suburban and rural counties.

The project uses a dialogical process; each submitted story is reviewed against a rubric. The project drew on Brinkerhoff’s Success Case Method (2002) to show evidence of program impact through rich and verifiable descriptions. This method does not replace other evaluation efforts, but stories can be powerful communication tools, especially when combined with other evaluation methods.

The Rubric

As a result of the central region project, we strengthened the rubric to ensure that public value is deeply ingrained in impact narratives. The rubric is based on our learning from the project as well as my own work on Evaluation for Public Value (Chazdon & Paine, 2014).

The elements of the new rubric are:

  1. Story demonstrates behavior changes that resulted from Extension programming. A strong narrative must incorporate evidence that the program has achieved its intended behavioral outcomes.
  2. Story demonstrates the trust and respect Extension has established with key audiences. Extension has built trust through long-standing relationships with key stakeholders. These aspects of programs are often overlooked and need to be incorporated into impact narratives.
  3. Story demonstrates Extension programs, staff, and volunteers meeting the needs of underrepresented populations. Part of the public value of a program is determining the audiences that most need the programming. These should be audiences that cannot otherwise receive the content through private sources.
  4. Story demonstrates Extension adapting to meet the changing needs of its key audiences. Public value also resides in staying current with traditional Extension audiences (farmers, youth, and conservation professionals) by addressing changing needs. This can include changing content due to new economic, environmental, or political contexts.
  5. Story demonstrates ways that Extension leverages organizations or partnerships to expand the delivery of research and education beyond initial program participants. Public value resides in the way Extension leverages its partnerships and collaborations to reach beyond its direct participants.
  6. Story demonstrates ways that Extension programming led to positive social, economic, environmental, cultural, health, or civic effects for public-serving organizations or communities. Public value resides in the “so what?”—the positive things that happen in families, organizations, and communities that can be attributed, at least in part, to Extension education. It is challenging to quantify these types of impacts, but systematic qualitative methods, such as Ripple Effects Mapping, can be very useful to document these effects.

These six aspects of public value are easy to teach and provide a useful framework for thinking about the public value of Extension education.

Moving Forward

As evaluators move into public value narratives, we must tread carefully with the communications staff in our organizations. Typically, they write impact stories, and they do it well! But they may not employ evaluative frameworks, such as this rubric, in doing so. To distinguish our work, we describe it as public value “narratives” rather than “stories.”

Moving forward, we continue to develop tools and training resources to support the writing of impact narratives. We have developed the following quick guide to composing a narrative:

  1. What was the presenting issue?
  2. Who was the target audience, and why?
  3. Why Extension? Credible information, research-based, trusted resource?
  4. What changes in behavior or action occurred as a result of the program? Include evaluation evidence.
  5. What were the broader impacts? Evidence of spillover, leveraging, ripples, return on investment, benefit-cost analysis?

We are also working to train new Extension educators in this narrative writing process. We hope to have a public value narrative contest as part of our annual professional development conference and use narratives for our reporting to the national land grant impacts database.

I am happy to share more information on our process and can be reached at schazdon@umn.edu.

Key References:

Brinkerhoff, R. O. (2002). The success case method. San Francisco: Berrett-Koehler.

Chazdon, S.A. & Paine, N. (2014). Evaluating for Public Value: Clarifying the Relationship Between Public Value and Program Evaluation.  Journal of Human Sciences and Extension, 2(2), 100-119. Retrieved from http://media.wix.com/ugd/c8fe6e_8b2458db408640e580cfbeb5f8c339ca.pdf.

Dart, J. & Davies, R. (2003). A dialogical, story-based evaluation tool: The Most Significant Change Technique. American Journal of Evaluation, 24(2), 137-155.

Franz, N. (2013). Improving Extension programs: Putting public value stories and statements to work. Journal of Extension, 51(3). Retrieved from http://www.joe.org/joe/2013june/tt1.php

Kalambokidis, L. (2011). Spreading the word about Extension’s public value. Journal of Extension, 49(2). Retrieved from http://www.joe.org/joe/2011april/a1.php.

Knowledge Sharing Toolkit. (2014). Most Significant Change. Retrieved from http://www.kstoolkit.org/Most+Significant+Change

Flexible Systematic Approaches Build Evaluation Capacity for Program Staff

July 29th, 2015 by Tom Archibald

By Celeste Carmichael, Program Development and Accountability Specialist, Cornell Cooperative Extension Administration

“Systematic approaches with flexibility built in to meet local needs”—that is how I would describe ideal program development resources for Extension programs. Most of our Extension Educators are busy with field responsibilities. To assist with implementation of best practices, resources need to be applicable to broad goals and easy to find, use, and adapt.

For Cornell Cooperative Extension (CCE), Qualtrics has proven to be a systematic yet flexible resource for supporting needs assessments and program evaluations. There are other options for survey development, but Qualtrics is supported at Cornell for faculty, staff, students, and CCE educators. We have also found Qualtrics to be a good match for anything from very simple to highly complex surveys, and it provides substantive data protection for respondents through secure servers. Another feature that makes Qualtrics very attractive is the ability to create a library of sample Cooperative Extension evaluation forms and questions to help Extension Educators get started with survey development.

Staff have reported that, because of time limitations, there are instances when evaluation measures are developed in haste just prior to a face-to-face event. When created in a hurry, questions might not reflect the intended program outcomes, and the resulting responses may not be as useful as they could have been otherwise. Staff also report that survey development can be stalled by small details that feel overwhelming when a survey must be developed in short order. Challenges noted include:

  • Getting the right survey look and feel
  • Developing questions and question order
  • Pilot testing questions
  • Understanding the overall evaluation questions for the program

To give more common programs a leg up on building evaluation forms, draft surveys that ask questions connected to how programs reach statewide outcomes are being developed and shared in the Qualtrics Cornell Cooperative Extension survey library. The draft surveys have a Cooperative Extension header and footer, appropriate question logic for typical programs, questions and blocks of questions that have been piloted, and questions related to behavioral aspirations and outcomes. Surveys from the library can be saved into a user’s personal library and adapted as needed. Additionally, individual survey questions can be found in the question bank library.

On using the libraries:

[Screenshot: CCE Qualtrics survey library]

Qualtrics users will note that “Library” is a tab in the Qualtrics menu where surveys can be saved into a user’s personal account and adapted. The data collected belong to the user’s personal account, not the library. A benefit of Qualtrics is its online documentation about using its features, including libraries.

Similar options for a systematic approach exist beyond Qualtrics, of course. The idea is simple—provide a starting point that gives all staff a baseline set of questions for collecting data about their programs. When the starting point is adaptable, it builds capacity for the program practitioner to grow into the evaluator role, adapting the questions to meet local needs. Where Qualtrics or another survey tool is not available, a virtual folder of adaptable documents can help local educators who are running similar types of programs build around common program outcomes and indicators.

No shoes? No shirt? No problem!

July 17th, 2015 by Tom Archibald

By Karen Ballard, Professor, Program Evaluation, University of Arkansas & NAEPSDP President-Elect

WHAT?  Be a part of the FREE virtual Program Evaluation Summer School, July 21st – 24th, 2015.

No travel funds?  Lots of questions? We have you covered.

The National Association of Extension Program and Staff Development Professionals (NAEPSDP) and the PSD/Southern Region Program Leadership Network are co-sponsoring this four-day webinar series.  The free live interactive sessions will consider some of Extension’s big issues . . .

Want to know how to produce webinars with wow?

Want to consider what the future may hold for Extension?

Want to know where to even start with program evaluation?

Want to understand what really matters with social media?

Pull up a chair . . . in your office or on the beach . . . Register and join us next week.

You can register for one or all of the educational sessions, Tuesday through Friday, July 21st – July 24th.

For more information related to topics and speakers, see the detailed program descriptions and registration links below, or visit https://naepsdp.tamu.edu/ to register.

 


 

Program Schedule

 

Tuesday, July 21st

Session Title:  Oh, What a Tangled Web…inar We Weave!

Presenters: Mary Poling and Dr. Julie Robinson

Session Description:

This session will look at the intricacies and continuous development of best practices for webinars and blended courses based on user feedback, instructor experiences, and evaluation results.

Participants will learn:

  • best practices for hosting a webinar.
  • best practices for conducting a webinar.
  • best practices for delivering a blended course.

Registration Link:  https://uaex.zoom.us/webinar/register/e0f3ccfc80a2c5c0d746f627e8486654

 

Wednesday, July 22nd

Session Title: The Art and Science of Environmental Scanning: Staying Real During Rapid Change

Presenters: Dr. Nancy Franz and Dr. Karen Ballard

Session Description:

This session examines trends and disruptive technologies that currently exist and/or are on the horizon for Extension. To plan responsively in this environment, Extension workers must anticipate these new developments. This session will engage participants in exploring strategies and methods Extension may need to adopt to ensure relevance and support from stakeholders. Participants will be invited to join the discussion to stimulate actions supporting the future of Extension.

Registration Link:  https://uaex.zoom.us/webinar/register/dc80b86017278299cde7dc3c8da9331e

 

Thursday, July 23rd

Session Title:  When Is a Program Ready for Replication and Rigorous Evaluation?

Presenters: Dr. Donna J. Peterson and Dr. Laura H. Downey

Session Description:

This session will explain the Systematic Screening and Assessment Method (SSA; Leviton, Khan, & Dawkins, 2010) and how it can be applied to Extension programs.  SSA includes environmental scanning methodology as well as evaluability assessment.  Participants will:

  • Learn the step-by-step process of conducting an environmental scan and evaluability assessment
  • Understand criteria used in an evaluability assessment
  • Be asked to apply the evaluability assessment method to a program of their own

Registration Link:  https://uaex.zoom.us/webinar/register/2c11f52eaef207f2d746f627e8486654

 

Friday, July 24th

Session Title:  Evaluation of Social Media Platforms for Extension Outreach and Education

Presenter: Amy Cole

Session Description:

This session will address whether, and which, social media tools may assist with effective Extension outreach and education for target audiences.  Participants will learn what research shows about audience demographics for key social media sites and the implications for Extension educators.  Strategies and successful current practices from multiple organizations will be shared to help participants identify effective social media methods that can be replicated.

Registration Link:  https://uaex.zoom.us/webinar/register/f2ab575c6aab73f47510d14dfea9e911

Reflections on a Year of Extension Program & Staff Development Work

May 5th, 2015 by Tom Archibald

By Diane Mashburn, Instructor-Program Planning, Evaluation, and Accountability, Program & Staff Development, University of Arkansas Division of Agriculture

In terms of Extension years, most people still view me as a newbie. I have been working for the University of Arkansas Division of Agriculture for almost six years. Up until last May, I was a 4-H agent in a rural county in Southeast Arkansas. May 1st marks the one-year anniversary of a huge change in my Extension career, moving from the county to Program and Staff Development. In Arkansas, I am tasked with providing leadership for our state reporting system, as well as the creation of the NIFA Report of Accomplishments and Plan of Work. When I have told those in other states about these responsibilities, I have gotten a number of different reactions, ranging from “bless your heart” to simple laughter. At that point, I knew I had better get to know others in Extension Program & Staff Development (PSD) and get their numbers programmed into my phone, quick.

In doing this, I learned about the National Association of Extension Program & Staff Development Professionals. I attended its annual conference in hopes of learning from others who are more experienced in the areas of evaluation and accountability. While in San Antonio, I definitely had my eyes opened to how broad a field we work in, despite being a subsection of the ever-growing field of Extension. I had the wonderful opportunity to pick the brains of some of our seasoned Extension professionals, as well as to share some of the experiences I had in my short time in PSD up to that point. Seeing how other states approach the same tasks, such as reporting and accountability, has given me such an appreciation of what we do in PSD and the hard work so many people have put into getting us to where we are today. It is fascinating to me to see how we are all working towards the same end goal of improving the lives of people, yet take very different paths.

Since then I have continued to take advantage of opportunities to learn from others, both in Arkansas and out.

As I celebrate my one year anniversary of being an “Instructor for Program Planning, Evaluation, and Accountability,” I want to share a few lessons I have learned about working with people in evaluation and accountability, including:

  • No matter how much you understand how important evaluation is, not everyone understands that.
  • One size (or method or explanation) does not fit all, especially when working with both Extension and Experiment Station faculty!
  • Sitting in your office yelling “Really??” at an email is not going to help someone improve their ability to report accurately. Bite your tongue and calmly pick up the phone to offer assistance, no matter how many times you have talked them through turning off the pop-up blocker…
  • Do not let “that’s how it’s always been done” be an excuse for not at least attempting to improve.
  • At the same time, do not ignore why “that’s how it’s always been done,” as there is lots to be learned there as well.
  • Feel free to ask questions and get opinions and feedback, but at the same time realize everyone’s idea may not work for your situation, program, or state.
  • Take advantage of every learning opportunity you have, no matter how long you have been with Extension.
  • Utilize the previous experiences and unique perspectives you bring to the table, as those help bring about some of the best solutions.

Are You Trying to Measure and Articulate the Value of Community Engagement?

January 13th, 2015 by Tom Archibald

By Nancy Franz, Professor Emeritus, School of Education, Iowa State University

I recently presented on this topic in the share fair at the National Association of Extension Program and Staff Development Professionals (NAEPSDP) conference. The presentation was based on an article I was asked to write for the Journal of Higher Education Outreach and Engagement in 2014 to share lessons learned from 100 years of measuring and articulating the value of Extension work. Over my 35 years with Extension, I’ve seen value expectations for Extension engagement change from providing private value for learners to now also having to show the public value of those efforts. I’ve also seen changes in how we measure value, including hiring evaluation experts, training all Extension educators to evaluate the impacts of programs (including use of the logic model), and an increased focus on using evidence-based programs and implementing them with high fidelity. Finally, I’ve experienced changes in how we articulate the value of Extension engagements as public funding for our work dwindles. As a result, we are creating a portfolio of educational projects and developing fewer comprehensive educational programs.

The lessons I believe we’ve learned as expectations about our value change include the need to:

  • Provide professional development opportunities for staff on measuring and articulating public value
  • Include the perspectives of economists, program evaluators, communicators, and community members in measuring and articulating public value
  • Develop public value statements and stories for substantive programs to share with stakeholders
  • Share resources and ideas through Kalambokidis’ public value blog and the Extension Public Value Network Facebook group
  • Engage early adopters in measuring and articulating Extension’s public value
  • Start with program evaluation at the beginning of a program or project rather than waiting until the end
  • Address issues with programs rather than focusing on ongoing educational activities
  • Include the ability to measure and articulate public value in position announcements, job descriptions, and performance reviews
  • Adopt planning, reporting, and promotion and tenure/performance review systems to better capture public value data
  • Designate a public value champion in the organization for strong support internally and externally

Three new articles (published and in press) that may help you with measuring and articulating Extension’s public value:

Measuring and Articulating the Value of Community Engagement: Lessons Learned from 100 Years of Extension Work by Nancy Franz in the Journal of Higher Education Outreach and Engagement 18(2)

Public-Interest Values and Program Sustainability: Some Implications for Evaluation Practice by Eleanor Chelimsky in the American Journal of Evaluation 35(4)

Programming for the Public Good: Ensuring a Public Value Through the Cooperative Extension Program Development Model by Nancy Franz in the Journal of Human Sciences and Extension (forthcoming, June 2015)

Upcoming series: Reflections from NAEPSDP2014

January 5th, 2015 by Tom Archibald

Happy New Year from the eXtension Evaluation Community of Practice (CoP)! As you may know, this year has been declared the International Year of Evaluation. This blog and this CoP can be one venue in which we celebrate advances in Extension evaluation as part of the worldwide celebration of and emphasis on evaluation.


To begin the year, we bring you a short series of reflections on the recent conference of the National Association of Extension Program and Staff Development Professionals (NAEPSDP), which took place in San Antonio in December. The conference was a great learning and networking opportunity.

In this short series, we will hear from Nancy Franz (Professor Emeritus, School of Education, Iowa State University) and Diane Mashburn (Program Planning, Evaluation and Accountability Coordinator, University of Arkansas Cooperative Extension).

I hope their excellent insights will prompt some good comments and discussion. Also, feel free to use the comments section to suggest future topics you would like to see addressed in this blog. We want to ensure that you find this blog useful and interesting. Let us hear from you!

 

AEA2014: “Right-sized” Evaluation

December 4th, 2014 by Tom Archibald

By Ben Silliman, Extension Specialist and Professor of Youth, Family, and Community Sciences at North Carolina State University

The thought that recurred for me throughout AEA14 in Denver was the importance of “right-sizing” evaluation. Not everybody needs to be an expert and not every program requires publishable evidence. This theme was apparent from the first morning when Melissa Cater and I hosted a roundtable on evaluating youth program quality. Leaders of many different youth organizations shared stories on how quality is defined, implemented, measured, and valued in a variety of contexts.

Two prominent themes were staff training and stakeholder support. Front-line staff who understand and practice developmentally-appropriate attitudes and skills at point-of-service promote a climate for positive youth development. Evaluation that empowers staff to understand and succeed with youth energizes and informs their work. Mastering a checklist or survey process without grasping its connection to people and programs is just “going through the motions.”

Stakeholders, especially funders, must understand that long-term investments in quality provide the best prospects for reaching performance benchmarks such as school success. Thus the first “right-sizing” is not about evaluation expertise or generating outcome data, but about rightly understanding and connecting with participants’ needs. NASCAR owners, who spend millions on high-performance drivers and equipment, understand that a race cannot be won without meticulous attention to “little things,” from the driver’s water bottle to the vehicle’s tire wear.

No matter what the program, staff, or stakeholders, “right-sizing” evaluation is about thinking and communicating. Many of this year’s presentations underlined the importance of evaluative thinking, including the disciplines of researching best practice, modeling paths toward outcomes, and reflecting on teachable moments with diverse stakeholders. Equally important is regular communication among program partners, interpretation of contexts, practices, and findings to diverse stakeholders, and growing through communities of practice with peers. To support Youth Program Quality evaluation, I am launching a resource web site here. The site also includes research and tools on Growth and Development and on Evaluation Capacity Building, including links to E-Basics Online Evaluation Training and Discussion forums on Evaluation and Youth Program Quality.

Conferences such as AEA are great for encouragement and insight, but once a year is “too low a dosage” to promote personal and professional growth. On my return flight I read Atul Gawande’s Better (2007, Picador), a popular book of stories on how evaluative thinking is improving health and medical care. From the first chapter he underlines the importance of diligence in attending to small actions and thinking about large systems. The closing chapter describes how under-resourced teams in Indian medical clinics finished their 12+ hour days by debriefing “lessons learned,” building resilience in themselves and their patients. He noted how well-resourced Western hospital staff often feel they have no time to reflect and learn together like those village teams.

As important as evaluation may be for accountability or funding, without an understanding of people’s needs and program practices, checklists and reports quickly become “the tail that wags the dog,” rather than the best way to tell that the dog is healthy, happy, and not ready to bite.

 

AEA2014: To Join or Not to Join AEA

November 17th, 2014 by Tom Archibald

By Pennie Crinion, Director of Program Planning and Evaluation, University of Illinois Extension

Ever find yourself wondering if you should renew an Extension professional association membership or join another one?  As an administrator for most of my career, I’ve seen the addition of three new national professional associations for Extension, bringing the total to seven, and have felt pressure in deciding how many to join.

Then, when I assumed my current position, which included leadership for evaluating programs, my predecessor impressed upon me the need to join the American Evaluation Association (AEA). So I registered for the Summer Evaluation Institute held in Atlanta but didn’t formally commit to AEA membership and national conference participation until 2013.

This year I found the conference theme, Visionary Evaluation for a Sustainable, Equitable Future, to be particularly interesting in light of concerns about protecting the environment, an issue with global impact.  Bob Willard, a leading expert on quantifying and promoting the business value of corporate sustainability strategies and a core faculty member of the International Society of Sustainability Professionals, provided the opening presentation.

His efforts to engage the business community in proactively avoiding risks and capturing opportunities through smart environmental, social, and governance strategies were insightful, and they reassured me that corporations are increasingly recognizing their place at the intersection of global economic, environmental, and equity issues. As Bob shared the business context for corporate social and environmental responsibility, he stressed the importance of standards and benchmarks in linking the corporate world to the field of evaluation.  He highlighted standards that encourage organizations to create positive environmental, social, and economic value so that we have the possibility of sustaining a global economy, society, and ecosystem.  I left convinced that the world-renowned evaluation experts in attendance and the members of AEA would rise to the opportunity he described.

As always, I also appreciated conference opportunities to view the poster session and the hundreds of choices offered in 15 concurrent session segments supported by 53 evaluation topical interest groups, including the Extension Education Evaluation group. Camaraderie with Extension colleagues, reasonable registration fees, and opportunities to visit with others in the evaluation field are other great features.

So you may be asking: what other benefits would I reap by joining AEA?  Here’s a list for your consideration.

  • Coffee Break demonstrations: 20-minute webinars designed to introduce audience members to new tools, techniques, and strategies in the field of evaluation.
  • Professional Development eStudy courses: three- or six-hour webinars provided by evaluation’s top presenters in multiple 90-minute sessions, allowing for in-depth exploration of hot topics and questions from the audience.
  • AEA’s active listserv, EVALTALK, with more than 2,000 subscribers from around the world who welcome questions and discussion on any topic related to evaluation.
  • AEA365, a blog dedicated to highlighting Hot Tips, Cool Tricks, Rad Resources, and Lessons Learned, with the goal of a daily post from and for evaluators around the globe.
  • Printed or online copies of the American Journal of Evaluation and New Directions for Evaluation.
  • Job posting opportunities.

So visit www.eval.org and explore AEA membership.