This anecdote, analog as it is, illustrates a point that I believe becomes ever more relevant as our technology advances: the more we talk about features and functions, the further we are apt to stray from what we're trying to accomplish in the first place. We distract ourselves with concerns about process, scale and cost, and end up building things for companies and departments, not for the people who work in them. We forget our priority—as business leaders, builders, creators and inhabitants of this world—which is to place humans at the center of our problem-solving and to let technology enable that process, not define it.
Flint, fire, the wheel, printing presses, telephones, even bulletin boards were all intended to serve us and our distinct needs. Yet somewhere in our post-industrial society we became accustomed to catering to the limits of technological advancement instead of pursuing what is optimal, or even additive, for our needs. For a very long time, we were forced to comply with the limits of technology, and so we got used to letting it dictate the way we worked, went about our days and interacted with others. We collectively shrugged our shoulders at processes and experiences that weren't always natural or intuitive. Technical constraints became our problem to ignore or work around.
Today, that's changing. We've reached a point where the limits of technology feel almost nonexistent, and where design and an understanding of human behavior and needs are eclipsing the constraints of what's possible. The challenge is shifting to how we best use all this power to meet human needs. I believe this is truly the new frontier, one that can unleash new ambitions and possibilities in every industry, business and society.
As we move into a future where more and more decisions are made by algorithms, and the technology itself is becoming increasingly transparent, the design behind all of this becomes even more important. As processes become more automated and insights more predictive, we—the founders, the designers and the technologists—need to be critically aware of the assumptions we're making and the logic we're encoding, especially when translating the often irrational quality of human behavior into the highly rational language of machine processes.
After all, sometimes a simple, seemingly logical conclusion can have big consequences. Consider the unintended impact on public discourse of letting people read and see mostly what they already like or agree with.
As a designer, and as a human being, I want to know that decisions like these are considered deeply by a diverse set of thinkers—engineers, sociologists, designers and other people who might see the decision at hand from a different perspective, people who might be able to distinguish between logical conclusions and unintended consequences.
I believe we can design the answers to these challenges, but only if we're willing to recognize the value of elevating the professionals who can give context to, and better shape the application of, our technology. We need those people to make sure we're solving human problems, not creating them, and that the world we are creating is the world we all want—or at least an experience we thought deeply about creating.
As we move into 2018, and continue the shift from being driven by what technology should, could and might make possible back to what we as humans want and need, we must assert a diverse point of view and a diverse expectation of what's next—broadening the idea of what it means to be a good technologist while asking a better set of questions: What business are we really in? What world are we creating? For whom?
This, in itself, is a newfound perspective. All the algorithms, computational intelligence, emerging interfaces and realities, and shiny devices are only as valuable as our ability to use them deliberately and appropriately to enhance relationships, create more natural solutions, and advance the human condition.