When is Discovery done?
It’s uncomfortable to be in the Discovery phase of a large project without an end in sight. One unknown leads to another, like a set of diabolical Russian dolls, and you become circumspect, even though you know your job is to keep digging. Still, I’ve had managers ask me the legitimate question: “How will you know when you are done?”
A thorough Discovery will reveal:
- who all your users are
- their context, goals and pain points
- how frequently they interact with the current system
- their level of expertise
- their workarounds
- the other systems they use and how those systems interact (or don’t)
- how the system is designed to work
The Russian doll part here often involves figuring out how the system is designed to work, and why that is not happening. Users can become so accustomed to dysfunctional or nonfunctional systems that they cease questioning them, like a tree forming a protective gall over an irritant.
For example, users might face frequent character-count errors if a form cannot display a running character total. They only discover the overrun after clicking “Submit.”
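The fix the users never got can be sketched in a few lines. This is a hypothetical illustration (the function names and the 280-character limit are assumptions, not anything from the actual system): compute the running total as the user types, so the overrun is visible before “Submit” is ever clicked.

```typescript
// Hypothetical sketch of a running character counter.
const LIMIT = 280; // assumed limit, for illustration only

// Positive result: characters remaining; negative: characters over the limit.
function remainingChars(text: string, limit: number = LIMIT): number {
  return limit - text.length;
}

// Label a form could show live, instead of failing on submit.
function counterLabel(text: string, limit: number = LIMIT): string {
  const left = remainingChars(text, limit);
  return left >= 0 ? `${left} characters left` : `${-left} over the limit`;
}
```

In a real form this would run on every input event; the point of the Discovery finding is not the ten lines of code, but the chain of reasons why those ten lines were never written.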
Why? – “Because this part of the interface was deprioritized due to higher-visibility fixes.”
Why not fix it anyway? – “Because the code base is antiquated.”
Why are you not updating it? – “Because it has to integrate with a different legacy system.”
Why not replace that system? – “I don’t know, they were thinking about that a few years ago.”
Meanwhile your boss is yelling: “Hey you!! Is that Discovery done yet!?”
You could shorten your Discovery time by simply noting “form fields with character counts are subject to frequent errors.” But then you are just a scribe, jotting down symptoms. If you want to provide value, you’ve got to discover the mechanism that causes the user’s pain. Only then can you intelligently suggest solutions.
There are other approaches to “done-ness.” One is the concept of Continuous Discovery: you are never done. Teresa Torres talks about doing Discovery with a focus on opportunities rather than problems, and has created an Opportunity Solution Tree model to do this. This adds a dimension of complexity, because driving toward an opportunity’s outcome requires a theory: a theory of how things could be. To prove a theory you need to run experiments that answer questions like:
- Which elements of the existing system will be useful?
- Which broken ones do you fix?
- When do you add in new ones?
- Will an improvement degrade another area of functionality?
One can see how this goes beyond traditional Discovery, which is essentially mapping the territory of “what is”. As such, it requires continuous adjustment and effort.
Jeff Gothelf, in his article “When is Amazon Done?” suggests that the way to stay focused when moving this many levers is to keep the customer in mind at all times. Eliminate opportunities that don’t directly help the end-customer.
Amazon was built on this. Although their discovery method (not “methodology” – pet peeve) may be thought of as continuous, the way they think about adding value is discrete. It’s not through gradual improvement that Amazon created Prime next-day shipping. They didn’t say, “Let’s get it to around 46 hours per shipment and see how people react.” It’s more likely they asked themselves, “What’s the one biggest thing we can provide that will create customer loyalty, and what do we have to do to make that happen?” It’s risky to bet so much on one opportunity, but less so when customer value is the North Star.
On a much simpler level, I’ve found that Discovery starts to “feel” done when your team can start answering their own questions. As part of a shared discovery team, hearing the tech lead tell a UX’er: “Almost correct, Roy. Actually a regional manager can re-assign an account, but only a district supervisor can delete one…” is a sweet sound, and means you are close to reaching the shared understanding we all seek in the Discovery phase.