Nuance
It's all very well saying "ah, but quant doesn't capture these rich, nuanced stories". How much time do you have?
— start-multi-column: ID_2583
Number of Columns: 2
Largest Column: standard
- Agreeing with most of the above, and special applause for Florencia and Tom's piece. I like the way they address the process rather than the type of data or analysis.
- Coming here from a modest acquaintance with psychotherapy effectiveness research, where I've observed similar debates between quantitative and qualitative schools. I think it's unfair to claim this problem with nuance is solely the fault of quantitative researchers being insufficiently nuanced. For decades (at least 30 or 40 years) it has been widely accepted that the key question isn't simply "what works?" but rather "what works, for whom, in what circumstances, and why?" And I'm assuming that any colleagues worth their salt in the quant evaluation of development effectiveness think the same.
- So in therapy research, they acknowledge the complex interplay of variables: the pathology, the context of the treatment, the person delivering it, and other factors. No one cares (any more) about the main effect alone, such as whether CBT is the best overall or psychodynamic therapy is bullshit.
- To be fair, the basic concept of presenting scientific results as a main effect with nuanced caveats (interaction effects, subgroup analyses) is a brilliant move. Maybe it's just how our brains work. It allows scientific findings to be communicated to the public in a digestible way. For instance, people might remember that CBT is effective for anxiety but then qualify that with considerations like comorbidities, age group, or cognitive functioning. You can't remember or work with hundreds of interaction effects, and in this kind of research there really are very many of them.
- This method simplifies communication: starting with a general takeaway before diving into specific contexts and probabilities of success. While this is an oversimplification built on numerous assumptions, it is still an effective way to derive meaning from data. Of course, in some datasets you might in fact find no main effect but significant interactions: a therapy that doesn't work overall but is effective for specific groups and possibly harmful to others (see the sketch at the end of this column). This complexity is harder to remember and to base policies on, and even experts can struggle with higher-order interactions.
- This challenge isn't unique to quantitative results. Qualitative research faces similar barriers: how do you communicate nuanced findings, wherever they come from, to a policymaker who demands simple bullet points? If the response is "it depends", the message often fails to resonate. So claiming that qualitative research inherently occupies the "nuance corner", or just communicates better, is misleading.
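To make the "no main effect, significant interactions" case concrete, here is a minimal sketch with made-up numbers (the subgroups, effect sizes, and sample sizes are all hypothetical): a treatment that helps one subgroup by roughly the same amount it harms another, so the pooled comparison shows nothing.

```python
# Hypothetical illustration: a therapy whose effect reverses sign between
# two subgroups, so the pooled "main effect" washes out to roughly zero.
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # patients per subgroup per arm (made-up sample size)

# Hypothetical true effects: +1 SD for subgroup A, -1 SD for subgroup B.
effect = {"A": 1.0, "B": -1.0}

outcomes = {}
for group, delta in effect.items():
    control = rng.normal(0, 1, n)
    treated = rng.normal(delta, 1, n)
    outcomes[group] = (control, treated)

# Pooled comparison: treated vs control across both subgroups combined.
pooled_treated = np.concatenate([outcomes[g][1] for g in effect])
pooled_control = np.concatenate([outcomes[g][0] for g in effect])
print(f"Pooled main effect: {pooled_treated.mean() - pooled_control.mean():+.2f}")

# Subgroup view: the effect is strong in both groups, but opposite in sign.
for group in effect:
    control, treated = outcomes[group]
    print(f"Effect in subgroup {group}: {treated.mean() - control.mean():+.2f}")
```

The pooled estimate comes out near zero while each subgroup shows a strong effect of opposite sign, which is exactly the pattern that a headline "does it work?" summary cannot carry.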
— column-break —
- I'm guessing that plenty of quant people must have uttered a wry grunt when Pawson and Tilley came up with "what works for whom, in what circumstances". They really did not invent this way of thinking.
- The real issue, as Tom and others have already said, surely isn’t the type of data or analysis but how much we organise our world in terms of high-stakes questions and binary, all-or-nothing propositions.
- For example, the Ofsted scandal in the UK highlighted the dangers of reducing school performance to oversimplified ratings like pass or fail. It took tragic events, like teachers' suicides, to prompt reforms. Yet history repeats itself: months later, the incoming Labour government was boasting about publishing league tables for hospitals, another flawed reduction of complex evaluative judgements into a single metric, whether the league table comes from something "soft and fuzzy" like QCA or hierarchical card sorting, or from some other favourite qualitative approach.
- It is the reductionism, whether from qualitative or quantitative approaches, that is problematic. The problem is how much we expect information to be aggregated as it moves up the chain of decision-making. By the time it reaches the highest levels, it's reduced to a couple of traffic-light icons for a single minister. That oversimplified output then gets propagated back down as blunt directives, such as, in the extreme case, "Close this school" or "Put this hospital into special measures". I'm caricaturing here, and this isn't something I know a lot about, but common sense would surely favour more distributed, localised, and adaptive management: being more trusting of teachers, headteachers, nurses, doctors, and local education and health authorities, and empowering them with the resources and support to make informed decisions within their contexts. And, as so often, the optimum lies somewhere between the big picture and the local context.
— end-multi-column
nuance and high-stakes decisions