Evaluation Surveys

A blog about creating, conducting and reporting on Evaluation Surveys

Final Checks - Is your evaluation survey asking what you need it to?

11/1/2021

Your questionnaire’s been drafted, discussed, revised, discussed, tweaked, programmed, pre-tested and reprogrammed. Or at least some of the above.
 
Wondering if it’s still asking what you need it to?
 
Here are three ways to final-check your program evaluation questionnaire (a short worked sketch follows the three checks):
 
1) To your logic model – code each survey question back to the boxes on your logic model or the linkages between them. For example, maybe Q7 asks about an activity and Q12 about a short-term outcome. Code Q7 to activities and Q12 to outcomes.
 
Then ask yourself – which elements of the program logic will you be able to evaluate based on the survey responses? Are all the activities covered, or at least the main ones? All the key short-term outcomes? Which key linkages can you test with this survey? Any repeats or gaps?
 
2) To your evaluation issues – code each survey question back to an evaluation issue. For example, maybe Q7 asks about implementation and Q12 about effectiveness. Code Q7 to implementation and Q12 to effectiveness.
 
Then ask yourself – which evaluation issues will you be able to (partly) address using these survey responses? Are all the evaluation issues you planned to (partly) answer with the survey covered off? Any repeats or gaps?
 
3) To your report outline – create an evaluation report outline and enter each of your survey questions into it, e.g., as bullets. Better yet, enter each question in a form that refers to the finding you’ll be reporting. For example, for Q7, the % who were satisfied with an activity, and for Q12, the % who reported a change in behaviour (as a result of the program).
 
Then ask yourself – how well does your report flow? Will you be able to speak to the key findings you plan to? Any repeats or gaps?
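
To make the repeat-and-gap check concrete, here is a minimal sketch of the coding idea in Python. The element names, question numbers and the check_coverage helper are all hypothetical, invented for illustration; the same pattern works whether you code questions to logic model elements, evaluation issues or report sections.

from collections import defaultdict

# Hypothetical logic model elements the survey is meant to cover.
LOGIC_MODEL = ["activities", "outputs", "short-term outcomes", "medium-term outcomes"]

# Hypothetical coding of survey questions to logic model elements.
QUESTION_CODES = {
    "Q7": "activities",
    "Q9": "activities",            # a possible repeat worth a look
    "Q12": "short-term outcomes",
}

def check_coverage(elements, codes):
    """Flag elements with no question (gaps) or several (repeats)."""
    coverage = defaultdict(list)
    for question, element in codes.items():
        coverage[element].append(question)
    for element in elements:
        questions = coverage[element]
        if not questions:
            print(f"GAP: nothing codes to '{element}'")
        elif len(questions) > 1:
            print(f"REPEAT: {questions} all code to '{element}'")

check_coverage(LOGIC_MODEL, QUESTION_CODES)

Running it flags "activities" as a possible repeat and "outputs" and "medium-term outcomes" as gaps – the same judgment you would make by eye on a short survey, but handy once a questionnaire runs to dozens of questions.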
 
Not all questions need to be directly connected to a topic – some are filter or flow questions needed so you can ask others.
 
Sometimes repeated questions are intentional. But if they’re not, they’re a good place to do a final prune. If there are question gaps – and they’re important – do you need to make a last-minute addition to your questionnaire?
 
Changes made to the questionnaire can also feed back as improvements to the logic model or evaluation issues developed earlier. As a questionnaire is revised, it better reflects reality. For example, it is more likely to use the words that clients, staff or stakeholders use to describe the program, especially after a pre-test. As well, questions may have been refined when the questionnaire moved from a paper version to an online one, as the programming forced more clarity. Once you’ve done your final check of the questionnaire, it may be a good time to go back and tweak your logic model or evaluation matrix.
