An important insight from theory-based evaluations is that policy interventions are (often) believed to address and trigger certain social and behavioral responses among people and organizations, while in reality this may not necessarily be the case. Theories linking interventions to outcomes should therefore be carefully articulated: what are the causal pathways linking intervention outputs to processes of change and impact?

The intervention theory provides an overall framework for making sense of the potential processes of change induced by an intervention. Several pieces of evidence can be used for articulating the intervention theory.

Central in all three approaches is the search for mechanisms that are believed to be ‘at work’ when a policy is implemented.

==== '''Testing intervention theories on impact''' ====

After articulating the assumptions on how an intervention is expected to affect outcomes and impacts, the question arises to what extent these assumptions are '''valid'''. In practice, evaluators have at their disposal a wide range of methods and techniques to test the intervention theory. We can distinguish between '''two broad approaches'''.

*In short, theory-based methodological designs can be situated anywhere '''between ‘telling the causal story’ and ‘formally testing causal assumptions’'''.

The systematic development and corroboration of the causal story can be achieved through '''''causal contribution analysis''''' (Mayne, 2001), which aims to demonstrate whether or not the evaluated intervention is one of the causes of observed change. Contribution analysis relies upon '''chains of logical arguments''' that are verified through careful analysis. Rigor in causal contribution analysis involves systematically identifying and investigating '''alternative explanations for observed impacts'''. This includes being able to rule out implementation failure as an explanation for a lack of results, and developing testable hypotheses and predictions to identify the conditions under which interventions contribute to specific impacts.

The '''causal story''' is inferred from the following evidence:

*There is a reasoned theory of change for the intervention: it makes sense, it is plausible, and it is agreed upon by key players.
*The activities of the intervention were implemented.
*The theory of change, or key elements thereof, is verified by evidence: the chain of expected results occurred.
*Other influencing factors have been assessed and either shown not to have made a significant contribution, or their relative role in contributing to the desired result has been recognized.

One of the key limitations of the foregoing analysis is pinpointing the exact causal effect from intervention to impact. Despite the potential strength of the causal argumentation on the links between the intervention and impact, and despite the possible availability of data on indicators as well as on contributing factors, there remains uncertainty about the ''magnitude'' of the impact as well as the ''extent'' to which the changes in impact variables are really due to the intervention or due to other influential variables. This is called the '''attribution problem'''.
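
The attribution problem can be stated more formally in the potential-outcomes notation that is common in the impact evaluation literature. The following formalization is an illustrative sketch added here, not part of the source text; the symbols <math>Y_1</math> and <math>Y_0</math> are generic placeholders:

<math>\text{impact} = Y_1 - Y_0</math>

Here <math>Y_1</math> denotes the value of the impact variable with the intervention and <math>Y_0</math> the counterfactual value without it. Only <math>Y_1</math> is observed for the treated population: the observed change over time, <math>Y_1 - Y_{\text{baseline}}</math>, mixes the intervention's effect with the influence of other variables. The attribution problem is precisely that the counterfactual <math>Y_0</math> must be estimated rather than observed.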

'''''Source:'''''

''Leeuw, F. & Vaessen, J. (2009): Impact Evaluations and Development. Nonie Guidance on Impact Evaluation. Draft Version for Discussion at the Cairo Conference, March-April 2009. Nonie – Network on Impact Evaluation, pp. 20-25.''

''Mayne, J. (2001): Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly. Canadian Journal of Program Evaluation, 16(1), 1-24.''

'''''Recommended Readings:'''''

''Davidson, E. J. (2003): The “Baggaging” of Theory-Based Evaluation. Journal of MultiDisciplinary Evaluation, (4), iii-xiii.''

''Davidson, E. J. (2004): Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Thousand Oaks: Sage.''

''Kellogg Foundation (2001): Logic Model Development Guide: Using Logic Models to Bring Together Planning, Evaluation, & Action. URL: http://www.wkkf.org/default.aspx?tabid=101&CID=281&CatID=281&ItemID=2813669&NID=20&LanguageID=0''

''Leeuw, F. (2003): Reconstructing Program Theories: Methods Available and Problems to Be Solved. American Journal of Evaluation, 24(1).''

''Pawson, R. (2003): Nothing as Practical as a Good Theory. Evaluation, 9(4).''

''van der Knaap, P. (2004): Theory-based Evaluation and Learning: Possibilities and Challenges. Evaluation, 10(1), 16-34.''