How Congress fell for Sam Altman’s AI magic tricks

OpenAI CEO Sam Altman made a bold request before the Senate Judiciary Committee on Tuesday: regulate the very industry his company stands at the forefront of.

“I think if something goes wrong with this technology, it could go absolutely wrong and we want to be upfront about that,” Altman said at the hearing. “We want to work with the government to prevent that from happening.”

Altman’s testimony included his thoughts on the dangers of artificial intelligence, and warnings that tools like ChatGPT — developed by OpenAI — have the potential to destroy jobs, even as he expressed hope that they could create new ones. Altman went so far as to recommend that lawmakers create a separate agency to regulate artificial intelligence.

Compared with past congressional tech hearings, from Facebook to TikTok, Tuesday’s hearing was surprisingly cordial — likely buoyed by the fact that Altman had shared a dinner with about 60 lawmakers the night before, where he reportedly gave them a demonstration of ChatGPT’s capabilities.

Many of those present reacted as if they had just watched a magic show, not a technology show.

“I thought it was great,” Rep. Ted Lieu (D-Calif.) told CNBC. “It’s not easy to keep members of Congress engaged for nearly two hours, and Sam Altman was very informative and gave a lot of information.”

“He gave amazing real-time demonstrations,” Rep. Mike Johnson (R-LA) told the broadcaster. “I think it startled a lot of the members.”

But despite the heavy spotlight on Altman, his wasn’t the only AI hearing that day. In fact, just three floors up in the same building, the Senate Committee on Homeland Security and Governmental Affairs concurrently held another AI hearing, featuring speakers such as US Government Accountability Office chief data scientist Taka Ariga, Stanford law professor Daniel Ho, University of Tennessee artificial intelligence researcher Lynne Parker, and journalist Jacob Siegel, and it arguably covered more important ground.

“It got one-hundredth of the attention,” Suresh Venkatasubramanian, director of the Center for Technological Responsibility at Brown University, told The Daily Beast. “It was looking at all the ways AI is being used across the board to actually impact our lives, ways that have been in place for years, and people still don’t talk about it enough.”

All of this sounds vitally important, yet the public remains fixated on the Sam Altmans and OpenAIs of the world. The world’s attention seems focused almost entirely on generative AI, or machine learning systems that can generate content on demand (for example, chatbots like ChatGPT and Bard, or image generators like Midjourney and DALL-E). In doing so, we may miss the actual risks of AI, and open ourselves up to a lot of harm in the process.

Derailing the AI hype train

Since OpenAI released ChatGPT in November 2022, generative AI has dominated the headlines and discourse surrounding all things machine learning. It has created a whirlwind of hype and lofty promises: incredibly attractive to company executives and investors, but ultimately harmful to everyday people.

We’ve already seen cases of this happening. Media companies like Insider and BuzzFeed announced that they would use large language models (LLMs) to help write content, even as they laid off large segments of their workforces. The Writers Guild of America went on strike in April, in part over disagreements with the Alliance of Motion Picture and Television Producers about the use of artificial intelligence in the writing process. Dozens of companies have already started using LLMs and image generators to replace copywriters and graphic designers.

In reality, though, generative AI is very limited in what it can accomplish, despite what Altman and other executives might say. “My concern is that because these systems are so good at completing words or pictures for us, we will feel as if we can replace human ingenuity and creativity,” Venkatasubramanian said.

We don’t ask the arsonists to be in charge of the fire department.

Suresh Venkatasubramanian, Brown University

Venkatasubramanian’s concern is that employers will be seduced by the temptation to cut costs with AI — and even those who aren’t may feel pressure from shareholders and competitors. “Then you get this rush to try to make cost savings,” he explained. “I’m sure there will be this rush, and then a reaction where people realize it’s not working well and they’ve made a mistake.”

Emily M. Bender, a professor of linguistics at the University of Washington, largely agrees. “The thing about hype is that it generates a sense of fear of missing out,” she told The Daily Beast. “If everyone is on board with this magical thinking machine, then I should know how to use it, right?”

This is what made Altman’s time on the Hill particularly eyebrow-raising to Bender, Venkatasubramanian, and many other AI experts. Not only did Altman get an incredible amount of media coverage from his testimony, but he also spent the evening before wining and dining with the same lawmakers he was about to appear in front of. And he has made startling, intimidating statements about the power of artificial intelligence and, more specifically, his company’s technology.

Meanwhile, lawmakers welcomed his recommendations with open arms and ears. During the hearing, Sen. John Kennedy (R-LA) implored Altman: “This is your chance, folks, to tell us how to get this right. Please use it. Speak in plain English and tell us what rules to implement.”

Bender didn’t mince words when describing her reaction to the hearing. “It’s marketing,” she said. “When you have people like Sam Altman saying, ‘Oh no! What we’ve built is so powerful, it had better be regulated,’ or the people who signed the AI pause letter saying, ‘This stuff is so powerful, we should pause for a while,’ that’s also marketing.”

While Altman said he welcomes regulation to rein in generative AI, the company’s refusal to be more transparent about the dataset used to train ChatGPT, and its tight control over which third-party apps may access its API, suggest that OpenAI isn’t as warm to regulation as it claims.

“It was his company that put this out there,” Venkatasubramanian said. “He’s not a neutral voice on the dangers of AI. Let’s not pretend he’s the one we should listen to about how to regulate it.”

He added, “We don’t ask the arsonists to be in charge of the fire department.”

Meanwhile, Venkatasubramanian notes that Congress and the rest of the world have paid far less attention to other forms of artificial intelligence that have already been harming people for years. “These are the tools used to determine whether someone has committed fraud when applying for benefits, whether someone should be put in jail before a hearing, whether your résumé should let you move on to the next stage of an interview, and whether you should get a certain kind of medical treatment. That’s all using AI right now.”

Now you see it, now you don’t

Altman may not be serious about his call for more regulation of AI, but he’s not wrong; there are indeed things lawmakers can do. Venkatasubramanian co-authored the Blueprint for an AI Bill of Rights, which provides a set of guidelines for safely deploying machine learning algorithms in order to better protect the data and privacy of ordinary people. Venkatasubramanian said that while the framework has yet to gain traction in Congress, states like California are already passing bills inspired by it.

And while Altman’s proposal to create a separate agency to regulate AI isn’t necessarily a bad idea, Bender pointed out that existing governing bodies already have the power to regulate the companies behind these AI technologies.

In fact, the Federal Trade Commission, Department of Justice, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission issued a joint statement last month stating that there is no exemption for AI, and that the FTC “will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

Resist the urge to be dazzled. This stuff is impressive, especially at first glance. But it’s important to keep a critical distance, because the people who build it are trying to sell you something.

Emily M. Bender, University of Washington

Bender said creating a separate governing body for the technology would be “a move toward misdirection.” “It sounded like [Altman] was advocating for a separate regulatory agency that would take care of ‘artificial intelligence’ separately from what the technology is actually being used to do in the world. That seems like a move toward denying existing laws and agencies jurisdiction.”

It remains to be seen whether we’ll get any meaningful policy. While Congress’ history with emerging technologies suggests it will respond at a similarly glacial pace, Tuesday’s hearing indicates lawmakers are at least cautiously open to the idea of AI regulation.

Yet, during the hearing and at the dinner the night before, what lawmakers and the world saw was effectively a magic show, Bender said: drawing people’s attention with one hand while the other moved to do something else. In her view, the whole testimony confirmed a lesson she had been hammering home well before ChatGPT first burst onto the tech scene.

“Resist the urge to be dazzled,” Bender warned. “This stuff is impressive, especially at first glance. But it’s important to keep a critical distance, because the people who build it are trying to sell you something.”
