
600 bricolage

700 retraceable reconstructable

  • Yes! We hear a lot about how one could or should mix and match our methods, but there isn't much on HOW to do that. That's why Tom and Marina's Bricolage paper really hits the spot.

  • Yes, I really agree that there is no compelling reason not to include quantitative methods under the bricolage umbrella. We would be likely to get much better uptake if we didn't present this as something confined to the qualitative box.

  • I think the underlying problem is this:

  • I think this problem is hard because answering it requires a general theory of what makes an evaluation workflow valid. That's a tall order. If part of the answer is "most evaluations need to include most of a special list of specific evaluation functions", we need to argue why this should be so, and where this list of functions comes from. In the absence of a general theory (Scriven's general logic of evaluation?), we can perhaps just appeal to our obligation as evaluators to use "evaluative reasoning" in every step of the recipe:

  • Here we would have to start talking about how some general theory of evaluation would have to bravely grasp the thistle and address the need for a concept like "validity" in qualitative methods. I don't think it's enough to replace the concept with "rigour" or to hope it can be approached with a mix of other related criteria like transparency or participation. "Rigour" begs the question, because you can rigorously apply an obviously inappropriate (and invalid) method. What our client in the end wants to ask is: OK, your workflow was inclusive and transparent and rigorously applied, but does it actually provide a valid answer to my original question? You said [for example] that my intervention didn't significantly affect the outcome, but how can you be reasonably sure that if it had affected it, you would have detected and reported that, and that you'd only report it as ineffective when it really was ineffective? (The short simulation sketch at the end of this note makes that detection question concrete.) But this discussion would be a long one. We would have to address the fact that in many cases, beyond simple numerical measurement applications, we can't define validity as something like "corresponding to the facts", because the facts in a sense don't exist until we construct them (intersubjectively, collectively) through our workflow; and then we would have to show how anyone else could follow our workflow and reconstruct broadly similar answers.

  • Often we have to start by describing the extent to which there is no agreement about what the question means and/or how to arrive at an answer to it, and then propose or construct an approach that can be accepted, so that stakeholders say "well, now we agree enough about what these questions mean and how to answer them". For example, we have broken the task down into smaller pieces, established sufficiently unambiguous rubrics for success, etc.

  • Anyway …

  • Bricolage should also address the fact that we often don't arrive at our final workflow until the evaluation is nearly finished, that we often need time to pause and reflect and talk to people and tweak and reassemble, and that the workflow we arrive at is something we might not have been able to imagine when we started: and yet we can now publish our workflow in a way that others could in principle reconstruct.

  • One more question to address: Is Bricolage specifically for impact evaluation? I don't really see why it should be.

  • Marina
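
To make the client's detection question above a bit more concrete, here is a minimal, purely illustrative simulation in Python. It is not from the Bricolage paper, and the effect size, sample size and significance threshold are all hypothetical assumptions. It simply shows, for a simulated two-group comparison, how often a workflow built around a significance test would detect a real effect, and how often it would report one that isn't there.

```python
# Illustrative sketch only: hypothetical numbers, not any particular evaluation.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_per_group = 30        # hypothetical sample size per group
true_effect = 0.5       # hypothetical true effect, in standard-deviation units
alpha = 0.05            # conventional significance threshold
n_simulations = 5_000

def rejection_rate(effect: float) -> float:
    """Share of simulated evaluations that report a 'significant' effect."""
    rejections = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect, 1.0, n_per_group)
        _, p_value = ttest_ind(treated, control)
        if p_value < alpha:
            rejections += 1
    return rejections / n_simulations

power = rejection_rate(true_effect)    # chance of detecting a real effect
false_positive = rejection_rate(0.0)   # chance of reporting a non-existent one

print(f"Detected a real effect in {power:.0%} of simulated evaluations")
print(f"Reported a spurious effect in {false_positive:.0%} of simulated evaluations")
```

Of course this only covers the narrow, numerical end of the validity question; the point above is that most bricolage workflows need an analogous answer even where there is no significance test to lean on.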
