Is technological innovation good or bad? Seems like a silly question on the surface. But we have questions: Can self-driving cars ever be safe? How dangerous is Alexa? Will artificial intelligence take my job? Do cryptocurrencies empower terrorists? Can a cardiac pacemaker be hacked?
We have concerns: Fake news, fake ads, fake accounts, bots, foreign governments interfering with our elections …
The courts have historically held that technology is neither intrinsically good nor bad, but that people must be held responsible and accountable for how it is used. The problem is that technology is almost always ahead of strategy, tactics, and the law.
It's illegal to text and drive. Should it be illegal to wear an AR headset when driving? Should a provision for Level 5 driving automation, at which the system never needs intervention, be carved out in law?
Fear, uncertainty and doubt in Las Vegas
Once a beacon of optimism, the tech industry has come under pressure as concerns mount about potential negative impacts of innovation. Opportunistic politicians are preying on these concerns by sensationalizing or simply mischaracterizing potential outcomes to encourage support for new government regulations.
My colleagues at PwC and I agree that the time has come to seriously consider a responsible approach to innovation. We believe the circumstances require something new and different: a collective, self-regulatory approach from the key players in the industry.
At CES in Las Vegas next week, we'll present a discussion that explores the three basic approaches to the problem of regulating technological innovation:
1. Government regulation — The most traditional approach is to have policy makers and regulators step in and implement a regulatory regime that addresses policy concerns such as privacy, public safety, and national security. We have seen that recently, in legislation proposed in Congress to regulate highly automated vehicles and in several state and local laws regarding drone operations. The problem with the traditional regulatory approach is that it moves too slowly to keep pace with technological innovation. (See: net neutrality, royalties for actors in digital productions, cryptocurrencies, data privacy, and on and on.) Policy makers and their staffs just do not have sufficient technological understanding or efficient procedures to implement effective regulatory regimes in a timely manner.
2. Self-regulation — A second approach would be to ask individual companies to consider the societal impact of their technologies before those technologies are introduced to the public. This often falls under the category of "corporate social responsibility," and it can cover anything from environmental protection to addressing sexual harassment in the workplace. We have seen new policies, procedures, and controls adopted to address concerns about bias and "fake news" on social media sites. Although the efforts are commendable, they came only after problems occurred and only after several missteps raised the stakes and brought unwanted attention to the companies. Also, there is an inherent commercial conflict when individual companies are asked to self-regulate. Regardless of corporate responsibility officers' altruistic goals, companies are in the business of making money for their shareholders. The problem is compounded by the pressure to rush new technologies to market, where speed is a differentiating factor critical to financial success.
3. A self-regulatory organization — If the regulatory approach is too slow and the "do it ourselves" approach is too rife with conflict, what is the answer? We believe the answer is a collective, self-regulatory organization. An SRO for emerging technologies would be made up of the leading organizations in the industry, which would come together to define responsible innovation principles that all members of the SRO would agree to abide by. The SRO would also oversee and regulate compliance with those principles, levy fines, and refer any violations to a federal regulatory agency such as the Federal Trade Commission. This approach has worked in other industries, such as financial services, where the Financial Industry Regulatory Authority (FINRA) regulates and oversees broker-dealers and exchanges.
Our thinking
If the industry is serious about responsible innovation, taking such a self-regulatory approach would put its proverbial money where its mouth is. The collective industry, more than any think tank, regulatory agency, or policy-making body, has the insight, technical knowledge, and incentive to create workable principles for responsible innovation.
Policy makers may initially balk at this usurpation of their authority, but they will quickly realize that this is a far more thoughtful, efficient, and effective approach to ensuring responsible technological innovation. If this approach includes some level of governmental oversight, policy makers will be able to influence the policy direction of the new innovation SRO and provide any necessary enforcement.
Industry participants will still have plenty of room to differentiate their technologies, but they will know that their competitors are subject to the same principles of responsible innovation. An innovation SRO can also transcend borders, something that will become increasingly important as technologies continue to evolve, and provide a globally level playing field.
Join us: I'll be discussing this subject with David Sapin, principal U.S. advisory risk and regulatory leader at PwC, and Rob Mesirow, principal at PwC Connected Solutions—both of whom contributed to this article—on Wednesday, Jan. 10, at 8 a.m. at CES 2018. Request your invitation here. This is not a sponsored post. We are the authors of this article and it expresses our own opinions. We are not, nor are our respective companies, receiving compensation for it.