The Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the United States Federal Trade Commission (FTC) in an attempt to prevent the release of powerful AI systems to consumers.
The complaint focuses on OpenAI’s recently released large language model, GPT-4, which CAIDP described in its March 30 filing as “biased, deceptive, and a risk to privacy and public safety.”
CAIDP, an independent non-profit research organization, argued that the commercial release of GPT-4 violates Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”
To support its case, the AI ethics organization points to the contents of the GPT-4 System Card, which states:
“We found that these models have the potential to reinforce and reproduce certain biases and worldviews, including stereotyped associations and harmful demeaning of marginalized groups.”
The same document warns that AI systems “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”
CAIDP added that OpenAI released GPT-4 to the public for commercial use with full knowledge of the risks and that no independent assessment of GPT-4 was conducted prior to release.
As a result, CAIDP wants the FTC to investigate OpenAI and other operators of powerful AI systems:
“It is time for the FTC to act […] CAIDP urges the FTC to open an investigation into OpenAI, halt further commercial releases of GPT-4, and ensure the establishment of necessary safeguards to protect consumers, businesses, and commercial markets.”
When ChatGPT launched in November, it ran on GPT-3.5; GPT-4, released on March 14, is considered roughly ten times more advanced. Shortly after its release, testing showed that GPT-4 could pass some of the most rigorous U.S. high school and law exams, scoring in the 90th percentile on a simulated bar exam.
It can also detect vulnerabilities in Ethereum smart contracts, among other capabilities.
This morning I hacked the new ChatGPT API and found something super interesting: there are over 80 secret plugins that can be revealed by removing certain parameters from the API call.
Secret plugins include “DAN plugin”, “Crypto Prices Plugin”, etc. pic.twitter.com/Q6JO1VLz5x
— (@rez0__) March 24, 2023
The complaint came as Elon Musk, Apple’s Steve Wozniak, and many AI experts signed a petition to “pause” the development of AI systems more powerful than GPT-4.
Having a bit of AI existential angst right now
— Elon Musk (@elonmusk) February 26, 2023
CAIDP President Marc Rotenberg is among the more than 2,600 signatories of the petition, which the Future of Life Institute introduced on March 22.
Related: Here’s how ChatGPT-4 spent $100 on crypto trading
The letter’s authors argue that “Advanced AI could represent a profound change in the history of life on Earth,” for better or worse.
The United Nations Educational, Scientific, and Cultural Organization (UNESCO) has also called on countries to implement the UN’s “AI Ethical Recommendations” framework.
As 1,000+ tech workers urge a pause on training the most powerful #AI systems, @UNESCO calls on countries to immediately implement the Recommendation on the Ethics of AI — the first global framework of its kind & adopted by 193 Member States https://t.co/BbA00ecihO pic.twitter.com/GowBq0jKbi

— Eliot Minchenberg (@E_Minchenberg) March 30, 2023
In other news, a former Google AI researcher recently claimed that Google’s AI chatbot, Bard, was trained using data from ChatGPT responses. The researcher reportedly resigned over the issue, while Google executives have denied the allegations.
Magazine: How to prevent AI from ‘destroying humans’ using blockchain