Hello, SOCW 6311 Scholars,
This week, we will begin an introduction to program evaluation. Program evaluation is a task that you may be called on to perform in your workplace, or that a community group may request of you, particularly if they provide some kind of social service or advocacy in the community.
The assignment is designed to place you in the role of an evaluator. Understanding the type of evaluation is key to success in this assignment, whether you select a product evaluation, which assesses the program's goals and its effectiveness in meeting those goals, or a process evaluation, which investigates the processes and methods for delivering services and the impact of those methods on recipients and others.
Please fill out the attached assignment handout as if you were placed in the role of a program evaluator.
In addition to the assignment, I am including a graphic that shows the difference between process and product evaluations, a handout on common questions addressed by an evaluation, and a handout on data collection methods. The common questions and data collection methods are also included on the last pages of the assignment for your convenience.
Assignment – Week 6:
Serving the Interests of Stakeholders in a Program Evaluation: Creating an Evaluation Plan
The objectives for this assignment:
- Describe a plan for evaluating a program with which you are familiar. You may use any type of program that serves the community in some way. The program does not have to be a social work program. For example, it may serve children, adults, or animals, or address housing, recreation, the environment, the political process (advocacy, voter rights, and so on), or any other area of interest. Use a different program for this assignment than you used for your discussion.
- Describe concerns that stakeholders may have about the program evaluation process and suggest a strategy for addressing them.
- Write the descriptions clearly, as if you were communicating with the stakeholders. One sure way to cause an evaluation to fail is to state things in such a way that stakeholders and program staff become confused about the goals of the evaluation, how it will be conducted, how the information will be used, and what exactly will be expected from them. That includes your social work colleagues!
- Be concise. Limit your answers to the questions and the tables to no more than four pages. Many proposals have word and page limits. Limiting the length of your response requires that your words be chosen carefully and be informative and accurate (recall thick descriptions from last week), without padding or fluff. Busy reviewers have no time for lengthy reports. Using fewer than four pages is even better, as long as you provide accurate, clear, and detailed information without cutting corners.
Please do not use a title page. Place your name and “Week 6 assignment” in the running head.
- Name a real or fictional agency program. Alternatively, you may describe a real program but change its name to avoid revealing confidential details.
- Present or create a general goal statement for the program that briefly describes its mission and the types of problems or conditions that the program is intended to address (2-3 sentences).
If you describe an existing program, use its detailed goal statement if you can find it on the supporting agency website. If the organization hosting the program does not have a specific goal statement for the program in their communications, write a 1-2 sentence goal statement that specifies the general purpose or goal of the program.
If you are describing a fictional program, create a 1-2 sentence goal statement that is compelling and descriptive; readers will not support the program if you do not catch their interest.
Example (adapted from Dress for Success-NY): “The goals of the Dress for Success-Walden Valley program are to enable disadvantaged women to thrive in the workplace and life. Dress for Success-Walden Valley promotes the economic independence of disadvantaged women by providing them with career guidance and assistance in projecting a professional appearance in a 4-week program sponsored by the Walden Valley YMCA.”
- Identify expected specific services and measurable outcomes that the program plans to achieve or accomplish in the community.
Example (adapted from Dress for Success-NY): Women mentors from business will provide support, information, and personalized guidance in career training and development in weekly one-hour sessions for three weeks. In the fourth week, mentors from cosmetology and fashion will provide a make-over for hair and make-up and assist with clothing selection from the Dress for Success wardrobe line appropriate for the participants’ selected workplace environments.
- Select three types of stakeholders who would ask questions about the program, either from Table 1 below (your choice) or other types of stakeholders that are unique to your program. You will enter them into Table 2.
Table 1. Types of Stakeholders
|Organization leaders / program directors|
|Prospective staff who support or would deliver the services|
|Prospective recipients, representatives of the target population|
|Implementers: those who are responsible for providing the services|
|Agency and program management and staff|
|Partners: those who support the people involved in the program and are invested in its success|
|Parents and families|
|Teachers and school personnel|
|Social service providers|
|Funders and donors|
|Policy and decision makers|
|Community leaders|
- Identify a general purpose for the evaluation based on the perspectives of the stakeholders that you selected. The Community Tool Box from the Center for Community Health and Development (n.d.) offers the following suggestions for conducting an evaluation that may be helpful to you. (You may have to use your educated imagination for this; the purposes should be reasonable.) You will enter the identified purposes in Table 2.
- To be accountable as a public operation
- To assist program providers to improve their services
- To assess the quality of programs
- To plan and implement new programs
- To increase knowledge about the social concern or population that prompted the development of the program
- To plan expansions of current programs
- To recommend changes in programs resulting from changes in social or other conditions
- For funders: to improve their skill in identifying and supporting programs that respond best to community needs
- To decide whether the program should continue, expand, change, or end
- To understand and plan for future funding needs

Adapted from the Center for Community Health and Development, Community Tool Box.
- List three specific questions relevant to the purpose(s) of the evaluation that your selected stakeholders would want answered. Ensure that each stakeholder, their purpose in obtaining an evaluation, and their questions are linked together logically, and enter them into Table 2. The handout Common stakeholder questions for evaluations – SOCW 6311 (attached to this email) may be helpful.
Table 2. Stakeholders, the purpose for requesting an evaluation, and relevant questions.
|Selected stakeholder type|Purpose for which they would seek an evaluation of your program|A question that each might ask you to answer through the evaluation|
- Recall the definitions and purposes of process and product evaluations. Select the appropriate form of evaluation for each question you entered in Table 2. Enter the question and the evaluation type in Table 3. You may use the same type of evaluation for each question, if appropriate.
For review, the definitions of each type of evaluation can be found in Chapter 3 of the W.K. Kellogg Foundation Evaluation Handbook (pp. 25–34); in Logan and Royse, Chapter 13 of B. Thyer (Ed.), The Handbook of Social Work Research Methods (2nd ed.), pp. 225–226; or in the description of these two types of evaluations by TSNE (https://www.tsne.org/blog/process-evaluation-vs-outcome-evaluation).
- Select TWO data collection methods you will use to find answers for each stakeholder question. (It’s always best to have a back-up plan!)
Choose data collection methods that will obtain the most detailed and accurate information within the constraints of your time and budget (use your imagination!) and will provide the most credible answers to your stakeholders.
See pages 143–160 of The Step-by-Step Guide to Evaluation (2017) for details on observation tools, or see the handout Common types of data collection methods with advantages and disadvantages at the end of this assignment. Interviews, if used, may be selected only once.
Table 3. Questions addressed in the proper evaluation format and data collection plans
|Question to be addressed in the evaluation (copy from Table 2)|Type of evaluation appropriate for answering the question|Data collection methods (list two for each question)|
- Describe two unexpected or interesting facts you learned about planning evaluations this week (2 sentences).
Cite any references or learning resources you have used, following the APA (7th ed.) formatting rules we have covered in the Scholarly Writing Goals in the discussions thus far in the class.
Assignments are due by 11:59 p.m. Mountain Time (MT) on Sunday (which is 1:59 a.m. Eastern Time (ET) Monday). The time stamp in the classroom will reflect Eastern Time (ET), regardless of your time zone. As long as your submission time stamp is no later than 1:59 a.m. Eastern Time (ET), you have submitted on time.
Center for Community Health and Development. (n.d.). Chapter 3, Section 36: Understanding community leadership, evaluators, and funders: What are their interests? Retrieved July 4, 2021, from the Community Tool Box: https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/interests-of-leaders-evaluators-funders/main
Logan, T. K. & Royse, D. (2010). Program evaluation studies. In B. Thyer (Ed.), The handbook of social work research methods (2nd ed.), (pp. 221–227). Sage. (PDF).
TSNE MissionWorks. (2018). Process evaluation vs. outcome evaluation. https://www.tsne.org/blog/process-evaluation-vs-outcome-evaluation
W.K. Kellogg Foundation. (2017). The step-by-step guide to evaluation: How to become savvy evaluation consumers (pp. 25–34, 143–160). https://www.wkkf.org/resource-directory/resources/2017/11/the-step-by-step-guide-to-evaluation–how-to-become-savvy-evaluation-consumers
Yank, J. R. (2021). Common evaluation questions for stakeholders – for SOCW 6311. Adapted from Krupar (2019). [Class handout]. Barbara Solomon School of Social Work, Walden University.
Two handouts are provided on the following pages:
- Common stakeholder questions for evaluations – SOCW 6311
- Data collection methods, advantages and disadvantages (Yank, 2021)
Common stakeholder questions for evaluations – SOCW 6311
Adapted from Krupar, A. (2019). Asking program evaluation questions. Retrieved from https://programs.online.american.edu/online-graduate-certificates/project-monitoring/resource/asking-program-evaluation-questions
Who are the program stakeholders and why should they be involved?
See https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/interests-of-leaders-evaluators-funders/main for an excellent explanation of the concerns of stakeholders such as community leaders, funders, and those conducting the evaluation.
What questions might the stakeholders ask of the program developers and evaluators?
Stakeholder questions will vary depending on the effect of the program on their work, relationships, goals, and concerns. The questions will also depend on the phase of the program under consideration (planning, implementation, completion, and dissemination of results). Stakeholders may request information on the methods of providing services (process evaluation), on the outcomes, effects, and impacts of those services (product evaluation), and on the methods involved in conducting the evaluation (questions for evaluators). The lists below contain only a few of the possible questions that stakeholders may have.
Process evaluation questions: How is the program working?
- What were the demographic and clinical characteristics of clients?
- How are potential participants recruited for the program?
- Is the target population adequately reached?
- Are the promised activities being conducted with the target population?
- Are there other populations the program should be working with?
- Are participants progressing toward their objectives? If so, how? If not, why not?
- What proportion of those who might need the activity, service, or agency actually use it?
- Is the target population affected by the project/program equitably or according to the plan, or do some groups receive more or better-quality services than others?
- What proportion of clients completed the program (or attend consistently) and what are the characteristics of those who dropped out?
- What do target participants think of the services? Are they satisfied?
- How is the project functioning from administrative, organizational, and/or personnel perspectives?
- Are program activities occurring according to plan? If not, why not?
- What methods are being used to monitor its operations such as service delivery, financial aspects, recruitment, and staffing? (observation, interview, survey, accounting practices, etc.)
- Are/Were resources sufficient and allocated properly?
- Do/Did the implementers have the right expertise to achieve the program goals?
- How many hours of programming were provided to each client individually and in groups?
- What proportion of program hours involved direct contact with the client compared to the proportion of hours spent interacting with others about the client, documentation, and other indirect services? (Adapted from American University, 2019)
Product/outcome evaluation questions: How well does/did the program achieve its outcomes?
- What are the outcomes of the program? Is the information accurate and inclusive of different perspectives?
- Are/were the project/program services/activities beneficial to the target population?
- Are/were the project/program services/activities beneficial to others besides the target population?
- Are the goals appropriate or should different goals have been selected?
- How were those directly involved with the participants affected by the program? These might include teachers, nurses, health care providers, local police, employers, parents, spouses, children, neighbors, and others who interact with the program participants.
- Do/did the program activities have any negative effects?
- Is the problem that the project/program attempted to address alleviated?
- Is the cost of the services or activities reasonable in relation to the benefits?
- Are there alternative approaches that could have the same outcomes with less cost?
- Were the finances for the program handled responsibly?
- What public policy changes could be made as a result of the outcomes from this program?
- What can be learned from the program? Should it be continued? Should anything be changed?
- Are outcomes, objectives, and goals of the program being achieved?
- Do any of the services/activities have negative effects for others in the community?
- Is the problem/need that the project/program intends to address alleviated?
- How cost effective is the program? Is the cost of services or activities reasonable in relation to the benefits? (Adapted from American University, 2019)
Concerns about the evaluation process are important to stakeholders as well. Different types of stakeholders have different perspectives on needs, methods, staffing, costs, and other components of the program. “Stakeholders must be part of the evaluation to ensure that their unique perspectives are understood. When stakeholders are not appropriately involved, evaluation findings are likely to be ignored, criticized, or resisted” (Center for Community Health and Development, n.d.).
Questions raised by stakeholders about the evaluation may include the following, adapted from lists provided by the Centers for Disease Control and Prevention and the Community Tool Box.
- Who is requesting the evaluation? Is the intent to comply with a funder’s request, help staff in making improvements, or measure the effect of the program?
- Who will decide what questions will be asked in the evaluation? (It’s impossible for most organizations to afford the time and resources for a full, comprehensive evaluation of every aspect of the program, so a few targeted areas are selected.)
- Will the evaluator(s) be associated with the organization that sponsors the program (possibility of bias), or will an outside, neutral evaluator be engaged to do the evaluation (may not understand issues in the community)?
- How much time, effort, and resources will the evaluation require?
- Who will have access to the report describing the results of the evaluation?
- How will the privacy of the beneficiaries/clients/participants be protected?
- What will be done with the evaluation results (program promotion, using information to provide support for funding requests, making requests for new or changed public policies, and others)?
- How will the respondents be selected for the evaluation to ensure that the evaluation provides an accurate picture of the positive and negative aspects of the program?
- Will staff be asked for their views? The beneficiaries? Members of outside systems (schools, law enforcement, employers, etc.)? Interested others (family, friends, therapists, case managers, etc.)?
- Is the intent of the evaluation to determine manageable changes for improvement when the program is offered again, or is the intent to determine if the program should continue?
- How will the information gained from respondents be analyzed?
- Who will perform the analysis of the data? Do they have knowledge of conditions in the community that prompted the development of the program?
- When can we expect the results of the evaluation?
- How will the results be conveyed to stakeholders?
American University (2019). Asking program evaluation questions. https://programs.online.american.edu/online-graduate-certificates/project-monitoring/resources/asking-program-evaluation-questions
Center for Community Health and Development. (n.d.). Chapter 36: Section 1. A Framework for Program Evaluation: A Gateway to Tools. https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/framework-for-evaluation/main
Krupar, A. (2019). Asking program evaluation questions. Retrieved from https://programs.online.american.edu/online-graduate-certificates/project-monitoring/resource/asking-program-evaluation-questions
Note: Krupar cites the following text as a basis for her paper: Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A systematic approach (6th ed.). Sage.
Cite this document as follows:
Yank, J. R. (2021). Common stakeholder questions for evaluations – SOCW 6311. Adapted from Krupar (2019). [Class handout]. Barbara Solomon School of Social Work, Walden University.
The handout on data collection methods, advantages and disadvantages is on the next page.
|Common types of data collection methods with advantages and disadvantages|
|Methods conducted by the researcher to gather data|Advantages|Disadvantages|
|Interviews|
Advantages:
(a) Interviewer can build rapport to elicit good information
(b) Interviewer can obtain detailed information about feelings, attitudes, and behaviors
(c) Interviewer can interpret information when a respondent has difficulty understanding
(d) High response rate
(e) Reading and writing are not obstacles to participation
Advantage of face-to-face interviews: interviewer can detect verbal and non-verbal cues
Advantage of phone interviews: interviewer might be able to contact difficult-to-reach individuals
Disadvantages:
(a) Labor intensive
(b) Costly to meet with individuals and to train interviewers
(c) Respondents may be reluctant to disclose highly sensitive information
(d) Interviewers can introduce bias by reacting to respondents’ remarks
(e) Data can be hard to code and measure
|Focus groups|
Advantages:
(a) Easy to organize
(b) Low cost
(c) Useful information can be obtained by observing group dynamics
(d) Group support may yield information that people would be unwilling to share with other methods
Disadvantages:
(a) The facilitator has to be skilled in eliciting information
(b) Discussion can be dominated by more vocal individuals or those presumed to be more knowledgeable or willing to engage in conflict
(c) Data can be hard to analyze
(d) The group is usually not representative of the wider population
|Case studies|
Researchers follow a person, small group, organization, or situation in depth to learn about the factors involved or the conditions affecting the person, group, or organization, or to observe the development of the issue and its effects over time.
Advantages:
(a) Provides a rich, detailed set of descriptions and the processes involved with a person, organization, or event
(b) Yields in-depth information about behavior and the environmental factors that influence it that cannot be obtained any other way
(c) May use many different types of data collection methods: observations, interviews, questionnaires, record reviews
Disadvantages:
(a) Costly and time-consuming
(b) There may be ethical issues related to privacy and researchers’ responses if illicit or potentially harmful situations are discovered
(c) Generalization may be difficult
|Observations, descriptions, tallies|
Advantages:
(a) Can be unobtrusive
(b) Access to “real life,” not merely self-reports, which may lack accuracy
(c) Can include context for observed behaviors
Disadvantages:
(a) Ethical concerns when collecting information without the respondent’s knowledge or consent
(b) Hawthorne effect: people change behavior when they know that they are being observed
(c) Requires several observers to assure reliability and correct errors
|Records review (analysis of records, documents, photos, videos, chart reviews, or data already collected for another purpose, such as attendance records, phone logs, census or demographic data, etc.)|
Advantages:
(a) Efficient: uses already-collected data
(b) Data from a large number of people can be obtained without inconveniencing them
(c) Usually inexpensive
Disadvantages:
(a) Sometimes there is difficulty in accessing data under others’ control
(b) It can be difficult to ensure the integrity of data collected by others unless there are reports about errors, missing data, problems collecting information, etc.
|Systematic reviews of the literature|
Reviews all available databases for relevant information; identifies previously published relevant research; assesses research methods; synthesizes results to present a summary of current knowledge, emphasizing research with the greatest support and impact.
Advantages:
(a) Identifies literature with which reviewers are unfamiliar
(b) Applies systematic, explicit, and rigorous methods to assess the quality of relevant data from as many research articles as possible
(c) Considered more reliable than reading a limited number of selected articles (reduces sampling bias)
Disadvantages:
(a) Very time-consuming
(b) Before beginning the study, researchers have to develop a research question and plan an objective method for evaluating quality
(c) Some articles or databases may not be easily accessible
(d) Quantifying results across different articles can be mathematically challenging
|Types of data collection that can be administered by the researcher in person, be self-administered, or be conducted over the Internet|
|Questionnaires: structured and unstructured|
Can be administered in person, by phone, or self-administered in paper or electronic formats.
Fixed alternative: two options, such as yes/no, approve/disapprove, etc.
Likert scales: multiple options from negative to positive, least to greatest
1–10 rating scales: multiple options
Advantages:
(a) More objective than interviews due to uniform questions
(b) Easier to summarize data
(c) Less costly than interviews
(d) Less time-consuming
(e) More people can be asked for responses (wider distribution)
(f) Anonymity may increase participation on questions pertaining to sensitive issues
Disadvantages:
(a) Low response rate
(b) Confusion or errors by respondents cannot be corrected in self-administered questionnaires
(c) Items may not have the same meaning to all respondents
(d) For self-administered questionnaires: size and diversity of the sample may be limited by people’s ability to read or return the completed questionnaire
|Surveys|
Advantages:
(a) Many available formats: mail, phone, electronic, handout (paper or cards), face-to-face for complex questions and/or vocabulary
(b) Designed to be simple and easy to complete
(c) Immediate response in phone and face-to-face surveys
Disadvantages:
(a) Low response rates for mail and electronic formats
(b) Accuracy is hard to verify (e.g., overly positive reviews intended to build a positive reputation for an event or service)
(c) Items may not have the same meaning to all respondents
(d) Reading problems may result in errors
(e) The sample for mail, phone, or internet surveys is limited to those who have addresses, phones, and access to computers
|Self-checklists, self-report inventories, and tallies|
Checklists include items that indicate the presence of a skill or problem, count behaviors, or identify symptoms. Examples: Beck Depression Inventory, Child Behavior Checklist, PTSD Checklist (PCL)
Advantages:
(a) Same as questionnaires
(b) Simple and easy to complete
(c) Preferred for longitudinal and pre-post studies
(d) Preferred for large group studies
Disadvantages:
(a) Same as questionnaires
(b) Self-checklists are subject to lack of truthfulness or memory lapses; accuracy is hard to verify
(c) Items may not have the same meaning to all respondents
|Screening tools|
Different types are completed by the researcher or practitioner, by participants/clients, or by practitioners and clients together to evaluate for the presence of a problem.
Advantages:
(a) Allow early detection, treatment, or prevention
(b) Based on a set of reliable and accepted criteria (e.g., DSM-5)
(c) Norms are based on a known population
Disadvantages:
(a) May miss important factors not included in the material on which the screening tool is based
(b) May not have included subcultural groups in the norms, making them less relevant to those populations
|Standardized tests|
Tests designed to measure ability, memory, problem-solving, perceptions, symptoms, or learning; usually administered by the researcher to guarantee the integrity of the test, but increasingly done online.
Advantages:
(a) Objective: compares the participant either to a norm for the population (norm-referenced) or to an accepted standard (criterion-referenced)
(b) Subjected to stringent reliability and validity standards
Disadvantages:
(a) Meanings of questions may be different for different ethnic or age groups
(b) Many older tests include cultural bias because certain groups with different values and beliefs were not included in the norming sample that formed the basis of scoring
|Physiological health status tests|
From The Step-by-Step Guide to Evaluation (W.K. Kellogg Foundation, 2017), p. 150
Advantages:
(a) Objective; compensates for faulty memory
(b) Can identify health needs
(c) Can be used to identify the effectiveness of program methods and outcomes
Disadvantages:
(a) Results rely on consistent participant attendance at appointments
(b) Participants may avoid treatment or medical care if they are afraid that the results may not be positive
Yank, J. R. (2021). Common types of data collection methods with advantages and disadvantages. Adapted from Jacobson, M. (2010, February). Information collection tools: Advantages and disadvantages [Conference presentation]. 2010 ServeMontana Symposium, Helena, MT, United States, and from the W.K. Kellogg Foundation (2017), The step-by-step guide to evaluation: How to become savvy evaluation consumers.
Note: The data collection table has been adapted for SOCW 6311 with permission.