Technology is advancing at its fastest rate ever, with no evident sign of slowing down. As a result, there is growing concern about how to effectively regulate the development and use of new technologies in order to protect the public and ensure fair markets without excessively hindering businesses. Regulation through legislation, however, is slow to implement, and regulatory law has therefore lagged behind technological advances. Businesses looking to maximize profits may take advantage of this lag and exploit consumers who have little recourse. To prevent this, the existing approach to regulating new technologies must give way to a more collaborative, ethically grounded, and unbiased initiative.
For many users of new technology, data privacy is among the largest concerns. Advances in data analytics allow businesses and other organizations to pursue their goals more efficiently, yet users may not know how their data is being used. For instance, after the Facebook–Cambridge Analytica data leak, many people, from government officials to ordinary citizens, worried about the status of their personal information. Data is the driving force behind many of the latest technological advances, yet much of it is held without rules on privacy and liability. The digital landscape is essentially dominated by five companies: Amazon, Apple, Facebook, Google, and Microsoft, sometimes referred to as the “Frightful Five.” Although some of these companies provide free-to-use services, the hidden cost of being a customer is personal data privacy. With their unbeatable leads in retail, cloud services, social media, advertising, search, and device platforms, much of the everyday person’s data is controlled by unelected CEOs. These businesses are unlikely to slow their development of new technology, and no meaningful regulation exists to check them.
The traditional system of designing legislation in response to new technologies is outdated. Bakul Patel, associate director of Digital Health at the US FDA, says, “…if the volume and pace of digital transformation continue to remain the way it is, the existing regulatory approach won’t work.” Legislation should instead be designed with potential future advances in mind. Regulation is currently too fragmented across different countries and legislative systems. Traditional regulation also leaves too many loopholes that companies can exploit without facing consequences such as limits on their use of certain data. Relying on voluntary ethical behavior by corporations to ensure fairness is unreasonable, so the threat of strict enforcement needs to grow. The complicated environment of technological regulation requires a restructuring of the fundamentals and ideals that corporations build from.
Technological regulation is not a problem that a single country or organization can solve. Ideally, an independent, multinational council of technology authorities, such as former “Frightful Five” executives, would collaborate regularly to create and then update legislation, reducing the regulatory lag while preparing for future advances. That legislation, however, would have to rest on an ethical commitment to human dignity and the common good. Many fear the potentially catastrophic consequences of letting technology “control” mankind, given the access to and influence it has over daily life. While conspiracy-like concerns about a “robot takeover” seem outlandish, the development of certain artificial intelligence technologies could be dangerous if left unchecked. Government leaders, business leaders, industry experts, and shareholders all need to be able to voice their concerns on this issue, though in a way that does not disrupt business models and progress. Businesses need to be given incentives to operate ethically, which would foster a better relationship and higher levels of trust between technology users and developers.
While this may seem too idealistic to accomplish, one organization is laying a foundation that the global community can hopefully build on. AI Global is a non-profit organization focused on designing, developing, and using artificial intelligence (AI) technology responsibly. Last year, it launched the world’s first open AI marketplace, the AI Global Marketplace. The marketplace is a hub where AI assets, experts, and leaders come together to share ideas and develop ethical, responsible practices and applications for AI software and technologies. AI Global Chairman Tom Meredith claims that “The future of AI is critically dependent on stakeholders working together to create best practices, share findings and insights, and drive hands-on innovation”. In addition to AI Global’s efforts, Google, Amazon, and other large technology companies are beginning to open source more of their programs with the non-profit artificial intelligence organization OpenAI. Co-founded by Elon Musk and Sam Altman, OpenAI aims to ensure technology does not advance at the expense of others and to establish a check and balance on technology in today’s world. Open-sourcing, or allowing the public to inspect and build on existing software, fosters cooperation and user collaboration while also countering the “black-box” problem, the practice of keeping a technology’s inner workings secret to maintain a competitive advantage.
These programs are just the beginning of what could be a radical revamp of today’s regulatory structure in response to advancing technologies. Continued joint effort in this area may yet close the regulatory lag and foster an ethical attitude toward the use of technology in a world now driven by digital innovation.