Computer scientist Eliezer Yudkowsky, who has predicted that “we’ll all die” if super-intelligent artificial intelligence is created under anything like current conditions, says he understands the challenges lawmakers face in regulating the technology.
“An analogy I sometimes use is that AI is like nuclear weapons, if nuclear weapons spat out gold, and more gold the bigger you made them, up until a threshold that nobody can calculate in advance, at which point they explode and destroy the world,” Yudkowsky, co-founder of the Machine Intelligence Research Institute, wrote in an email to CBC News.
“It’s a very difficult situation to manage.”
This is the regulatory conundrum that some U.S. politicians confronted at a hearing of the U.S. Senate Judiciary Committee on Tuesday: AI can bring many benefits, but left unchecked it can harm society and may even pose a significant threat to humanity.
‘Causing significant harm’
The hearing included testimony from Sam Altman, head of OpenAI, the artificial intelligence company that created ChatGPT, who advocated for regulation to address the risks of increasingly powerful AI systems, which he admitted could “cause significant harm to the world.”
But Congress has yet to regulate Big Tech companies such as Meta and Google, and the issues Altman and others raised at the hearing reflect the broader challenges of regulating the industry, some experts say.
“I think one of the biggest problems with AI regulation is [defining] what AI is,” said Matthew O’Shaughnessy, visiting fellow in technology and international affairs at the Carnegie Endowment for International Peace.
O’Shaughnessy said several panellists and senators at the hearing, including IBM chief privacy and trust officer Christina Montgomery and AI expert Gary Marcus, professor emeritus at New York University, spoke about AI as a very broad concept, while others discussed it narrowly.
“The kind of core problem with AI is that it’s a ‘know it when you see it’ concept, one that is constantly evolving. It’s very difficult [to put into] a legal definition.”
To open a hearing on artificial intelligence, Democratic Senator Richard Blumenthal played a recorded statement created entirely by OpenAI’s ChatGPT and AI voice cloning software trained on his own speech to imitate his voice. OpenAI CEO Sam Altman also testified at the hearing, urging the government to regulate artificial intelligence.
In Canada, the government has proposed the Artificial Intelligence and Data Act to “protect Canadians” and “ensure responsible AI development,” but there is currently no comparable federal proposal in the U.S.
Concerns have been raised about ChatGPT, a chatbot that answers questions with human-like responses, and about the ability of the latest generative AI tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.
Meanwhile, others, like Geoffrey Hinton, known as the “godfather of AI,” along with Yudkowsky, have expressed fears that unchecked AI could wipe out humans.
Yudkowsky fears that superhuman intelligent AI will, as he wrote in Time in March, “not do what we want, and not care about us or life in general.”
“The likely result of humanity facing down an opposed superhuman intelligence is a total loss.”
Yudkowsky said that, at the hearing, Altman and IBM’s Montgomery danced around the worst-case AI scenarios raised by himself and Hinton.
“The actual danger is that everyone on Earth dies,” he said. “Sam Altman knows this; he seems to have decided that Congress can’t be trusted with that information spoken out loud.”
‘Daunting’ task ahead
Computer scientist Mark Nitzberg, executive director of the Center for Human-Compatible Artificial Intelligence, said policymakers certainly have a “daunting” task ahead of them when it comes to regulation.
“Here we have this system where no one knows how it works, everyone agrees it’s really powerful, and there’s really no regulation controlling anything,” he said.
The main problem is that while AI systems are capable in many ways, they can also be random and creative, and no one knows the principles behind them, Nitzberg said.
“That’s not the case for other engineering systems that we have rules for.”
AI is also unlike gene editing or climate science, he said, where the science was worked out first and the politics, in the form of regulations and public agreements, could follow.

“You are forced to do politics before the science is delivered.”
Most testing and analysis of engineered systems, for example in car and airplane safety, relies on the system being predictable: performing the same way when put under the same conditions, Nitzberg said.
But large language models can give different responses to the same prompt asked twice in a row, so different testing and monitoring methodologies will have to be created, he said.
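Nitzberg's point about non-determinism can be illustrated with a toy sketch (a hypothetical three-word "model", not any real system's API): greedy decoding always returns the same answer, while the temperature-style sampling that chatbots typically use can return different answers to an identical prompt.

```python
import random

# Toy "language model": a fixed next-token distribution for one prompt.
# (Purely illustrative; real models condition on the prompt and vocabulary.)
VOCAB = ["safe", "risky", "unknown"]
PROBS = [0.5, 0.3, 0.2]

def greedy(prompt: str) -> str:
    # Deterministic decoding: always pick the most likely token.
    return VOCAB[PROBS.index(max(PROBS))]

def sample(prompt: str, rng: random.Random) -> str:
    # Stochastic decoding: draw from the distribution, as chatbots
    # typically do when running with a nonzero "temperature".
    return rng.choices(VOCAB, weights=PROBS, k=1)[0]

prompt = "Is this system predictable?"
print(greedy(prompt), greedy(prompt))   # identical every run
print(sample(prompt, random.Random(1)),
      sample(prompt, random.Random(7))) # can differ between runs
```

Safety testing that assumes "same input, same output" holds for the first function but not the second, which is why Nitzberg argues new methodologies are needed.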
Bart Selman, professor of computer science at Cornell University and director of the Intelligent Information Systems Institute, said regulations can take years to develop, and even with input from stakeholders, they often fail to address some of the real problems.
Some critics have suggested that Altman’s call for regulation may be self-serving. In an interview with ABC’s Start Here podcast, Gizmodo technology reporter Thomas Germain pointed out that it is not unusual for the tech industry to ask to be regulated.
Canadian-British artificial intelligence pioneer Geoffrey Hinton says he left Google because new discoveries about AI made him realize it was dangerous to humans. CBC chief correspondent Adrienne Arsenault spoke to the ‘godfather of AI’ about the risks involved and whether there are ways to prevent them.
“Some of the biggest supporters of privacy laws are Microsoft and Google and Meta, actually, because it gives tech companies a huge advantage if there’s a law they can comply with,” he said. “That way, if something goes wrong, they can just say, ‘Well, we followed the rules. It’s the government’s fault for not implementing better regulations.'”
At the hearing, Altman proposed the formation of a US or global body that would license the most powerful AI systems and have the authority to “remove the license” and ensure compliance with safety standards.
Nitzberg noted that in some chat forums, some suggested there would be only “some who have the resources and the connections to get a license,” and that [Altman’s] proposal would therefore ensure regulatory capture.
However, Nitzberg said he doesn’t think Altman is a cynic who only says AI is dangerous in order to “get on the good side of people so they can build a bigger empire.”
“He talked about the dangers of AI in 2016.”
Stifling innovation
Meanwhile, there is another concern about AI regulation: that too much government intervention could stifle innovation.
“There is no reason why private sector actors cannot develop principles of safe AI practice or create their own AI governing bodies,” James Broughel, Senior Fellow at the Competitive Enterprise Institute, wrote in Forbes last month.
“The problem with creating new federal agencies or adding new regulatory programs and staff is that they create new constituencies, including bureaucrats, academics and corporations who use government power to change public policy to their own interests.”

At the hearing, IBM’s Montgomery called on the Senate committee to adopt a “precision regulation” approach to AI, one that regulates the deployment of AI in specific use cases rather than the technology itself.
This, she said, “strikes the appropriate balance between protecting Americans from potential harm and maintaining an environment where innovation can thrive.”
O’Shaughnessy agreed that overregulation is a real concern and that policymakers should be careful to regulate AI intelligently.
“But at the same time, these AI systems are very powerful. They have a real and direct negative impact on people and society today,” he said. “And it’s important that we implement meaningful and smart regulations.”
O’Shaughnessy said it was important that the hearing revealed some bipartisan support for some type of regulation.
“But it’s one thing for them to support the idea at a high level. It’s quite another for them to support an actual policy once it’s clearer what the tradeoffs are and what it looks like. So it’s too early to say whether there is true momentum for regulation.”