The difference between uncertainty and probability is that uncertainty is about knowledge (in our heads) while probability is about how things are (in the world). So far I’ve talked a bit about probability in strategy and ideation, but have glossed over uncertainty.
For example, I’ve suggested that a group of ideas will have varying quality and that quality has a (roughly) bell-shaped curve – mostly similar quality, a few terrible, a few great. Theoretically, if a workshop room produces 100 ideas, you can be confident that two or three of them will be great.
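Here’s a rough sanity check on that claim – a minimal simulation sketch, with assumptions that are mine rather than anything rigorous: idea quality is drawn from a standard bell curve, and “great” is arbitrarily defined as roughly the top 2.5%.

```python
import random
import statistics

# Minimal sketch, not rigorous: assume idea quality follows a standard
# bell curve and (arbitrarily) call anything in roughly the top 2.5% "great".
GREAT_CUTOFF = 1.96      # z-score for roughly the top 2.5% (my assumption)
IDEAS_PER_WORKSHOP = 100
TRIALS = 10_000

great_counts = []
for _ in range(TRIALS):
    qualities = [random.gauss(0, 1) for _ in range(IDEAS_PER_WORKSHOP)]
    great_counts.append(sum(q > GREAT_CUTOFF for q in qualities))

print("average great ideas per 100:", statistics.mean(great_counts))               # ~2.5
print("workshops with at least one:", sum(c >= 1 for c in great_counts) / TRIALS)  # ~0.92
```

On those made-up settings you average two or three “great” ideas per hundred, though a small minority of workshops would produce none at all.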
But which two or three?
The difficulty of evaluating ideas varies with context: some kinds of ideas, proposals, and plans are more easily evaluated than others.
For some kinds of ideas, the effects can be projected or predicted on paper. If you’re coming up with ideas for cutting costs in a business, you can probably predict the savings that will result from different ideas and compare them.
Other kinds of ideas leave a lot of room for a big gap between how you imagine them playing out in the real world and how they actually play out. New products and product features are a classic example of this.
For creative comms ideas, quality often depends at least partially on getting attention, provoking an emotional response, being surprising, and/or being novel. Most of these can be evaluated to some degree by anyone, simply by being human and gauging your own reaction, or by relying on the expert eye of a creative director. Though there’s some danger in assuming that your audience is just like you.
Obviously, it is useful to gain more knowledge about, and confidence in, the efficacy of an idea. This involves some mix of these two elements:
Identifying what needs to be true for the idea to work or what you need to know to predict how well it will work
Modelling (testing) the real-world execution with greater or lesser accuracy (fidelity)
In terms of identifying what needs to be true: in an idealised world, you would already have had all the relevant information when you did your colour-by-numbers strategy and narrowed down to a particular way of approaching the challenge. But in real life, we seldom have 100% knowledge of 100% of the relevant factors when we’re doing strategy. Usually it would take an impractical amount of time to get there, and most of that time would be wasted.
So we often have ideas that are tentatively great, but zero or only partial information about the things on which the idea’s quality depends. We might think that our customers prefer safety over convenience, and we might have a great idea that will work if they do – but suddenly we’re a lot more interested than we were before in whether that’s actually true.
This is as simple as asking, “What needs to be true for this idea to work?” And similarly, “What would we need to know to be able to predict this idea’s success?” Often these are questions that can be answered with specific research – desk research or your own primary research (interviews, focus groups, surveys), depending on how confident you want to be.
And then there’s modelling the execution in some way. As a rule of thumb, the closer you get to imitating reality, the more confident you can be, but the more expensive and time-consuming such testing is. You can think of this kind of confidence/information gathering on a spectrum:
Gut feels and personal expertise
Team gut feels and group expertise
Subject-matter expert opinion or consensus
Focus groups and interviews
Various levels of prototyping
Controlled real-world releases
This is not to suggest that full prototypes and real-world releases are “best”. It does depend on the kind of challenge. It also depends on the stakes involved and the relative cost of confidence. For something as trivial as the copy for Wednesday morning’s Facebook post, subject-matter expertise is enough, in the form of a creative director’s sign-off. For a new product innovation that will require refitting $30 million worth of plant equipment, you’d likely be happy to spend months and hundreds of thousands of dollars to put a working prototype in the hands of multiple customers.
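One rough way to think about “the relative cost of confidence” is as simple expected-value arithmetic. The sketch below is illustrative only – the failure probabilities and prototype cost are numbers I’ve made up; only the $30 million refit figure comes from the example above.

```python
# Back-of-envelope sketch of stakes vs. the cost of confidence.
# All probabilities and the prototype cost are made-up assumptions;
# only the $30M refit figure comes from the example in the text.
refit_cost = 30_000_000        # downside if the product flops after the refit
p_flop_untested = 0.30         # assumed chance of failure going in blind
p_flop_tested = 0.10           # assumed chance of failure after prototype testing
prototype_cost = 400_000       # assumed cost of months of prototyping

expected_loss_untested = p_flop_untested * refit_cost
expected_loss_tested = p_flop_tested * refit_cost + prototype_cost

print(f"expected loss, no testing:   ${expected_loss_untested:,.0f}")  # $9,000,000
print(f"expected loss, with testing: ${expected_loss_tested:,.0f}")    # $3,400,000
```

On those (made-up) numbers the prototype pays for itself many times over; run the same arithmetic on Wednesday morning’s Facebook post and a creative director’s sign-off is plenty.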
There are often multiple rounds of evaluation, too. After all, how do you select which ideas you want to evaluate further? Typically we start with a quick, low-cost shortlisting exercise – for example, team consensus on which ideas are the most promising. Then the more time-consuming and costly investigations can be focused on that short list.
There’s some degree of risk and uncertainty inherent to the process here, too. For example, there is always a chance that the actual best idea gets passed over in that first shortlisting exercise and therefore never gets investigated further to reveal its great potential. That’s sad, isn’t it? All we can do is minimise the chance of that happening – we can’t eliminate it.
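To put a rough number on that risk, here’s another minimal simulation sketch (the distributions and the amount of noise in the quick scoring are, again, assumptions of mine): 100 ideas, a noisy shortlisting pass that keeps the top ten, and a check on how often the genuinely best idea even survives to the second round.

```python
import random

# Two-stage funnel sketch (assumptions mine): 100 ideas, a cheap noisy
# shortlisting score keeps 10 for deeper evaluation. How often does the
# genuinely best idea even make the shortlist?
TRIALS = 10_000
N_IDEAS, SHORTLIST_SIZE = 100, 10
NOISE = 1.0   # assumed noisiness of the quick team-consensus scoring

best_survives = 0
for _ in range(TRIALS):
    true_quality = [random.gauss(0, 1) for _ in range(N_IDEAS)]
    quick_score = [q + random.gauss(0, NOISE) for q in true_quality]
    best_idea = max(range(N_IDEAS), key=lambda i: true_quality[i])
    shortlist = sorted(range(N_IDEAS), key=lambda i: quick_score[i], reverse=True)[:SHORTLIST_SIZE]
    best_survives += best_idea in shortlist

# Prints a figure comfortably below 1.0 – the best idea regularly misses the cut.
print("best idea makes the shortlist:", best_survives / TRIALS)
```

With settings like these, the genuinely best idea misses the shortlist a meaningful fraction of the time – which is exactly the risk described above.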
You can never really eliminate uncertainty in general. The world is too unpredictable and complex, mainly because people are too unpredictable and complex. For example, I mentioned focus groups above. I’ll talk about them in detail some other time, but I’ve used focus groups to “test” creative campaign ideas many times, and while I’ve walked out confident that an idea is actually bad, I can’t say that I’ve ever walked out of one confident that an idea is great. They’ve been very useful for other reasons, but evaluation – confidence beyond “no red flags” – has not really been one of them.
For a fun warning about the limitations of focus groups, here is a great video about Apple’s famous 1984 TV ad being focus-grouped. (Hat-tip: Craig from WeirdWorks about ten years ago.)
Research, testing, prototyping, iterative development… These are all much broader topics than can really be covered in a simple post, and I don’t want to go further into them in this article series.
Mainly here I wanted to draw attention to the necessity and challenges of evaluating idea quality in the strategic/creative process.
With all of the above said, people who’ve had a really great idea have often known it immediately. Other people know it immediately when they hear the idea. They say, “Holy shit. That’s… That’s fucking awesome. That’s a great idea.” They often don’t require a lot of wordy convincing, because the rationale behind a great idea is usually obvious in retrospect, even if it wasn’t obvious before the idea was had. Non-obvious connections become obvious in retrospect.
Ironically, it’s the merely good ideas that more often require investigation and evaluation. The great ideas speak for themselves, because they’re usually not dependent on obvious connections between unknowns, but rather on non-obvious connections between knowns.
So, to sum up…
We can’t be completely certain how good an idea is, but we need to be certain enough to pick one from the many.
Identify what needs to be true for an idea to work or what you need to know to predict outcomes – then learn those things.
And/or test key assumptions with real audiences through research or prototyping at various levels of fidelity.
How confident we need to be (or how uncertain we’re willing to be) will depend on the stakes involved and the relative cost of confidence in time and resources.
We often use multiple rounds of evaluation: shortlisting with expertise and consensus, then focusing further investigation on the short list.
So much for evaluating the quality of ideas, plans, proposals, etc., which we get from asking and answering strategic questions.
Here’s what interests me more: how do we know the quality of the questions we’re asking in the first place?