OpenAI: The Future and Ethics of Artificial Intelligence 

Photo courtesy of Solen Feyissa on Unsplash

AI is rapidly becoming a strong presence in our lives, and OpenAI, a company based in San Francisco, CA, is among its most influential, and most controversial, players. OpenAI is best known for ChatGPT, an AI chatbot with around 200 million weekly users worldwide. Despite its popularity among students and professionals, its widespread use raises serious concerns. As AI continues to grow, we should ask: does OpenAI actually adhere to the humanity-benefiting moral standards it claims to uphold? The company's improper data usage, poor labour conditions, and other risks suggest otherwise.

OpenAI's mission centres on building AI systems that are smarter than humans yet benefit humanity. The company was founded in 2015 by prominent technology figures, including Elon Musk, Sam Altman, Peter Thiel, and Reid Hoffman. It established itself as a nonprofit so that its work would be driven by the public good, not profit, and centred on safe and beneficial AI development. However, this original model could not raise enough funding to meet the company's research and development ambitions, which, according to OpenAI, would have meant the failure of its founding mission. As a result, the company restructured to add a capped-profit component. While its original structure may suggest that OpenAI is committed to ethical, non-profit-driven practices, this no longer appears to be the case.

The benefits of AI are significant. It makes education more accessible to students, supports medical diagnosis, and adds financial security by preventing fraud and assisting in investment decisions. Nevertheless, there are severe dangers and ethical drawbacks. OpenAI's platforms can produce inappropriate content, deceptive deepfakes, and even aid in launching cyberattacks. AI also has detrimental environmental impacts: it generates hazardous electronic waste, and the data centres that power it consume significant amounts of water for cooling and large quantities of electricity. Depending on the energy source, that electricity use can produce harmful greenhouse gases.

One of the dangers of ChatGPT is its ability to produce inappropriate, explicit, and graphic content. OpenAI has put safeguards in place to prevent the generation of such material and states in its Usage Policies that its platform must not be used to produce content that may harm minors. Still, AI makes mistakes and occasionally fails to block harmful content. Moreover, building those safeguards has taken a heavy toll on the workers hired to review such material. To develop protections against inappropriate content in earlier versions of ChatGPT, OpenAI outsourced work to a firm in Kenya whose employees labelled explicit content as dangerous. The work required them to read AI-generated text of a graphic, violent, and sexual nature, and to view AI-generated images of child sexual abuse, bestiality, and other disturbing material, ultimately leading the Kenyan firm to cancel the contracts early. Despite the early end of the work, the exposure left lasting harm to the employees' mental health and well-being. These positions are also severely undercompensated: workers at the Kenyan firm made less than $2 USD an hour for their labour. So while OpenAI appears to be working to protect its platforms against improper content, its methods for doing so have involved labour exploitation and poor working conditions.

Another major issue with AI is its ability to produce deepfakes: synthetic images, video, or audio created using deep learning that appear to be accurate, legitimate media. A central danger of deepfake content is its capacity to spread misinformation; it can be used, for instance, to influence elections by manipulating the information available to voters. OpenAI is well aware of this issue, and it released a deepfake detector to fact-checkers before the November 2024 US election. While this is a step in the right direction, providing tools to debunk potentially dangerous AI-generated content, the detector remains unavailable, and therefore inaccessible, to the general public.

As OpenAI's technology advances, the company should give the public the tools to verify AI-generated content, reducing the harm its products can cause.

Standing at the forefront of AI, OpenAI holds great responsibility for balancing the field's tremendous potential against its ethical concerns. While products such as ChatGPT are revolutionizing various industries, the downsides, including harmful labour practices, environmental impacts, and the production of deepfakes, are just as prominent. So while OpenAI's stated mission to benefit humanity through safe technological development appears virtuous, significant efforts are clearly required to reduce these negative impacts. Those efforts should include better regulation of working conditions, greater corporate accountability, and improved public access to deepfake detection tools.

As an industry leader, OpenAI must uphold the morals outlined in its mission statement, remain uninfluenced by monetary factors, and stay alert to the possible negative impacts of new, biased AI technology on society. As the company continues to make significant advancements that shape the future of artificial intelligence, its actions will inevitably affect humanity and the world for better or for worse, depending on how it conducts its business.
