Design is changing rapidly, and we need our tools to keep up. The pure streams of many creative fields—design research, industrial design, and graphic design, for example—have become largely commoditized and internalized within organizations. Shopping around for the greatest bang for the buck has become commonplace. Most of the organizations we work with have their own research facilities, and many ask us to use findings they've already generated. In their eyes, consultancy-driven design research is redundant with what they're already doing. From the perspective of the design researcher, who naturally has a stake in being included in the knowledge-gathering process and takes deep skill and pride in their chosen craft, leveraging internal research often doesn't seem like the best solution. They know best how to gather and process design-specific insights; their methodologies are tried, true, and commonly understood.
The traditional arc of a project puts research up front: defining the problem, touching on all relevant areas, and surfacing opportunities that track back to that defined problem. From those insights, designers start designing, and eventually something gets made. Ideally, months later, the end result still correlates to the research that started the project. Oftentimes, however, a game of telephone has occurred, and the researchers' opportunity areas have transformed into something very different.
This waterfall approach goes back to the industrial revolution; it's battle-tested but increasingly untenable. Clients want to see progress: things happening, concrete examples that give them confidence in the partner they've chosen. Unwilling to sacrifice time and money for something they think they already have, they chop research from the plan, and the end result rests on the designers' intuition and little else.
My sense is that we need to reexamine HOW and WHEN we use research. At frog, we've been testing the waters with highly iterative spin-cycle programs: we create a hypothesis design, test it with users, and repeat until we're happy with the results. We don't do focus groups; they're a great way to generate groupthink and pack mentality, but the end result often misses the nuance we're looking for. Instead, we ask users to look at our designs and suggest ways to make them better. Frequently, in one-on-one interactions, we encourage them to design the ideal product for themselves. We ask the same people to come back again and again; they act as our conscience and, hopefully, eventually become advocates for the end product. At every level, the insights we gather are directly tied to the thing we're trying to design.
This kind of user research isn't totally open-ended. We have a hypothesis already, but it arrives early enough in the process that we can radically alter the end result before it drifts too far from relevance. We cycle through that process a number of times, getting closer and closer to the final objective: designing the best possible solution for any given problem set.
Rigor and process are still core elements of the design practice, but they are now more flexible. Intuition and preexisting knowledge are always going to be part of the design process; only now they are leading the charge. Insights are gathered along the way, and we no longer have to hope that the things we were examining at the start of a program are still relevant at the end. As the digital realm demonstrates so clearly, we've moved from a world of waterfall processes to one that exists in near-perpetual beta. Why not adopt that thinking for design research as well?