I’ve talked a few times about the value of pointier questions, their effect on the distribution of idea quality, and a little about how they work. Let’s focus on a few key distinguishing features of a pointier question.
Assuming the data behind them are[1] sound, pointier questions always improve the average quality (effectiveness) of ideas.
Pointier questions can reliably be arrived at through colour-by-numbers strategy – that is, there’s no luck involved.
The shape of the quality distribution curve for pointier ideas is roughly the same as for undirected ideas, just with a higher average.
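To make that third point concrete, here’s a toy simulation – numbers invented by me, nothing measured – that treats idea quality as a bell curve. The pointier question shifts the average up and leaves the shape of the curve alone:

```python
import random

# Toy illustration (invented numbers, not campaign data): model idea
# quality as a bell curve. A pointier question shifts the mean up;
# the spread (the shape of the curve) stays the same.
random.seed(42)

def idea_quality(mean, spread=1.0, n=10_000):
    return [random.gauss(mean, spread) for _ in range(n)]

undirected = idea_quality(mean=5.0)  # ideas against a broad question
pointier = idea_quality(mean=6.5)    # same spread, higher average

def average(xs):
    return sum(xs) / len(xs)

print(f"undirected average: {average(undirected):.2f}")
print(f"pointier average:   {average(pointier):.2f}")
# The curves overlap: a great idea against the broad question can still
# beat a mediocre idea against the pointier one. Only the average moves.
```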
When I talk about colour-by-numbers strategy, I’m talking about very rational, logical thinking. If you work your way through the situation and the available data, you will arrive at the pointier question. If someone else had the same situation and the same data and worked with the same mental model, as long as neither of you fucked up on the thinking, you would both arrive at the same pointier question.
That alone is a pretty clear indication that this kind of strategy is not a creative process. I call it colour-by-numbers for a reason. The lines are given to you by the situation, the data, and the mental model within which you understand the challenge. It takes time and effort to colour between the lines, and maybe some people are faster at colouring than others, but anyone can do it with the right basic training and the finished picture is basically the same for everyone.
No creativity. And in a lot of cases, that is not a bad thing. If you’re reducing costs in your business and you analyse all of the different cost centres and arrive at one or two that are driving up costs the most, you focus on them and your ideas cut more costs on average. Great. The ideas to cut costs might be creative (different people might have different ideas of varying effectiveness), but the process of narrowing down to those issues was not.
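If you want to see just how mechanical that narrowing-down is, here’s a minimal sketch – the cost centres and figures are invented for illustration. Anyone who runs the same procedure on the same data gets the same focus:

```python
# Minimal sketch of the colour-by-numbers step (cost centres and figures
# are invented): same data in, same answer out. No creativity required.
costs = {
    "logistics": 420_000,
    "travel": 310_000,
    "office leases": 180_000,
    "marketing": 150_000,
    "software licences": 95_000,
}

total = sum(costs.values())
ranked = sorted(costs.items(), key=lambda kv: kv[1], reverse=True)

# The one or two biggest drivers fall out of the arithmetic, not the
# imagination. The creativity only starts when you ideate ways to cut them.
for centre, cost in ranked[:2]:
    print(f"{centre}: {cost:,} ({cost / total:.0%} of total)")
```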
As I said above, the same pointier question is arrived at when working with the same data and the same mental model. However, new data and different mental models will produce different questions from the same level of logical rigour. And finding relevant new data and framing the challenge in new ways both introduce some uncertainty (luck) and creativity (newness) to the story. As we will see, this has serious practical implications for when to approach a challenge in different ways.
Let’s start today with new data. New data is information that was not previously considered within your mental model for analysing the situation. Here are some broad categories of new data:
Information from sources that weren’t previously on hand – e.g., new primary research, new reports, new tracking.
Unquestioned assumptions which, upon examination, are actually questionable or not true.
Overlooked data which, upon examination, is actually more relevant.
New trends in data which relevantly change how things will be in the future.
For this new data to be relevant, it[2] has to change the outcome of logically thinking through the situation within the mental model being used. That means that not every new factoid, new report, newly questioned assumption, or new trend is actually going to be relevant. This may seem obvious, but I’ve seen many situations where people think that simply being new and interesting is enough to be an “insight” which absolutely must be used in solving a challenge.
Last week’s case study of NZ Blood was an example of new data in the same mental model. That might not be immediately obvious. The basic model for approaching the challenge (encourage more regular blood donations) was to identify the most influential and influenceable motivations and barriers to the behaviour of donating blood, and then either appeal to that strongest motivator or reduce that strongest barrier – whichever looked more effective.
That’s a pretty standard approach to behavioural change, and because marketing (or at least advertising) is about behavioural change, it’s pretty standard for marketing too.
When we struck upon the notion that potential donors believed that enough other people were donating blood, all that really did was introduce a new barrier to the equation. The hypothesis was that this belief was more influential and/or influenceable than any of the other motives and barriers we had previously considered addressing.
That didn’t alter the underlying logic of the approach. We were looking for the most influential and/or influenceable factors. Among the ones we had, nothing stood out. When we discovered this one, it stood out as potentially both influential and influenceable. So the same basic equation shifted to have a different output: this is the issue to come up with ideas to address.
I said at the end of that case study, and I said above, that some luck is involved in using new data as a way to find outlier, more effective ideas. That’s because there is no guaranteed formula for discovering relevant new data. It’s possible, but not inevitable. With more time spent looking, the odds improve. By looking in the right ways, the odds improve. But it’s not a sure thing.
Hopefully that sounds a bit familiar. I’ve been saying the same thing about having great ideas. Great ideas are possible, but not inevitable. With more time spent ideating, the odds improve. With the right strategic and creative processes, the odds improve. But it’s not a sure thing.
In other words, there is a probabilistic nature to finding relevant new data.
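One way to put that in back-of-envelope terms (this is my framing, not anything from the case studies): treat each hour of looking as an independent, small chance of a find.

```python
# Back-of-envelope model (an assumption for illustration): each hour of
# looking has an independent chance p of surfacing a relevant new datum.
# The chance of at least one find after n hours is 1 - (1 - p) ** n:
# it climbs with time but never reaches certainty.
def chance_of_insight(p_per_hour: float, hours: int) -> float:
    return 1 - (1 - p_per_hour) ** hours

for hours in (5, 20, 80):
    print(f"{hours:>2} hours of looking: {chance_of_insight(0.03, hours):.0%}")
# Roughly 14%, 46%, and 91% -- better odds with more time and better
# methods, but never a sure thing.
```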
How might they[3] be found? Well, consider the list I gave above. Find new sources of information. Identify and question unquestioned assumptions. Examine under-examined data for overlooked relevance. Look through trends.
Doing all of that will improve the chances of discovering relevant new data which shift the balance of the logic you’re using to find a pointier question. It’s the same logic, the same mental model, but the new information changes the result of that thinking.
Another great example of this comes from the same agency again, YoungShand, with their anti-smoking campaign several years ago. I wasn’t involved, but while reading through various articles and studies, they discovered a little nugget: the pets of cigarette smokers were more likely to die of cancer. So they produced a heart-wrenching video from a pet dog’s point of view of the beloved owner smoking and the dog eventually slowing down and dying from cancer. I can’t watch the bastard video any more – it’s too sad – but you can find it here.
In both these cases, YoungShand got lucky. That doesn’t mean no skill or talent or process was involved. The skill and talent and processes made it possible to find these insights. But they did not make it inevitable that an insight would be found. For NZ Blood, heads and hours were set aside to go through the data, talk about the data, knock about “what if?” possibilities. For the anti-smoking campaign, heads and hours were set aside to read lots of studies and articles in the hopes of finding a nugget.
In both cases, they did, but they might not have. And while it’s technically possible that the odds of finding a useful insight could approach inevitability by setting aside a near-infinite amount of time to look, strategy operates with the same constraints as creativity: there’s only so much time to do it.
We can only do the best we can in the time we have. And while colour-by-numbers strategy is reliable in its outcomes (the pointiest question given the available data and the most readily available mental model) and roughly predictable in the time required, finding new data which might reveal more effective possibilities is not.
All we can do is…
Understand the value of exploration in making these insights possible.
If possible, set aside time to find them, in situations where outlier success is worth seeking.
Use methods to make finding them in the time available more likely.
Recognise that finding them is not inevitable – a colour-by-numbers approach to the pointiest question will have to do (and that’s still plenty good).
Finally, you’ll note that at some point I transitioned into referring to these[4] new data as “insights”. In my view, they are one kind of insight. There are other kinds. But this is a common one, and it’s useful to understand its relationship to colour-by-numbers strategy and pointier questions – that is, they reveal potentially more effective pointier questions.
[1] I’m trying out using correct conjugation for the word “data”. The word “data” is the plural of a single “datum”, so it’s more correct to say “data are” rather than “data is”. But as much as I enjoy being a pedant when it comes to the English language, I really do feel like a wanker this time. My instinct is to call data an “it”, not a “they”.
[2] See?
[3] Clearly I’ve thrown grammatical consistency out the window at this point.
[4] Come on, Ryan, pick a lane.