In the Battle Over AI, Meta Decides to Give Away Its Crown Jewels

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: it decided to give away its AI crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, has created AI technology, called LLaMA, that can power online chatbots. But rather than keep the technology to itself, Meta released the system’s core computer code into the wild. Academics, government researchers and others who provided their email addresses to Meta could download the code once the company had vetted them.

Essentially, Meta has been giving away its AI technology as open source software—computer code that can be freely copied, modified, and reused—giving outsiders everything they need to quickly build their own chatbots.

“The platform that wins will be open,” Yann LeCun, chief AI scientist at Meta, said in an interview.

With the race to lead AI across Silicon Valley heating up, Meta stands out from its competitors by taking a different approach to the technology. Led by its founder and CEO, Mark Zuckerberg, Meta believes the smartest thing to do is share its underlying AI engines as a way to spread its influence and move faster toward the future.

Its actions contrast with those of Google and OpenAI, the two companies leading the new AI arms race. Fearing that AI tools like chatbots will be used to spread misinformation, hate speech and other toxic content, those companies are becoming more secretive about the methods and software that underpin their AI products.

Google, OpenAI, and others have criticized Meta, saying that an unfettered open source approach is dangerous. The rapid rise of artificial intelligence in recent months has raised alarm bells about the risks of the technology, including how it can upend the job market if it is not properly deployed. And within days of LLaMA’s release, the system leaked to 4chan, the online message board known for posting false and misleading information.

“We want to think hard about giving away the details or the open source code” of AI technology, said Zubin Ghahramani, a vice president of research at Google who helps oversee AI work. “Where could this lead to abuse?”

But Meta said it saw no reason to keep its code to itself. Dr. LeCun said the increased secrecy at Google and OpenAI is “a huge mistake” and “a really bad view of what’s going on.” He argues that consumers and governments will refuse to embrace AI unless it is outside the control of companies like Google and Meta.

“Do you want every AI system to be controlled by two powerful American corporations?” he asked.

OpenAI declined to comment.

Meta’s open source approach to AI is not new. The history of technology is full of battles between open source systems and proprietary, or closed, ones. Some companies hoard the most important tools used to build tomorrow’s computing platforms, while others give those tools away. Most recently, Google open-sourced the Android mobile operating system to take on Apple’s dominance in smartphones.

Many companies have publicly shared their AI technologies in the past, at the insistence of researchers. But their tactics are changing due to the race over AI. This shift began last year when OpenAI released ChatGPT. The chatbot’s massive success stunned consumers and led to increased competition in the field of AI, as Google moved quickly to incorporate more AI into its products and Microsoft invested $13 billion in OpenAI.

While Google, Microsoft, and OpenAI have since received most of the attention in the field of AI, Meta has also invested in the technology for nearly a decade. The company has spent billions of dollars building the software and hardware needed to power chatbots and other “generative AI” systems, which produce text, images, and other media on their own.

In recent months, Meta has been working hard behind the scenes to weave its years of AI research and development into new products. Mr. Zuckerberg is focused on making the company a leader in artificial intelligence, holding weekly meetings on the topic with his executive team and product leaders.

Meta’s biggest AI move in recent months has been the launch of LLaMA, which is what is known as a large language model, or LLM. (LLaMA stands for “Large Language Model Meta AI.”) LLMs are systems that learn skills by analyzing massive amounts of text, including books, Wikipedia articles, and chat logs. ChatGPT and Google’s Bard chatbot are also built on top of such systems.

LLMs identify patterns in the text they analyze and learn how to create text of their own, including term papers, blog posts, poetry and computer code. They can even have complex conversations.
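As a toy illustration of that pattern-learning idea (not how LLaMA itself works, which uses a large neural network trained on vastly more data), a tiny bigram model can “learn” which word tends to follow which in a corpus and then generate new text from those counts:

```python
import random
from collections import defaultdict

# Toy corpus standing in for the books, articles and chat logs an LLM analyzes.
corpus = "the cat sat on the mat the cat saw the dog the dog sat on the rug"

# "Training": record which word follows which in the text.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

The principle is the same at both scales: predict the next token from patterns observed in the training text. An LLM replaces the word-pair counts with billions of learned numerical parameters.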

In February, Meta publicly launched LLaMA, allowing academics, government researchers, and others who provided their email addresses to download and use the code to build their own chatbot.

But the company went further than many other open source AI projects: it allowed people to download a copy of LLaMA after it had been trained on massive amounts of digital text pulled from the Internet. Researchers call this “releasing the weights,” referring to the particular mathematical values the system learns while analyzing the data.

This was important because analyzing all that data usually requires hundreds of specialized computer chips and tens of millions of dollars, resources that most companies don’t have. Those with the weights can deploy software quickly, easily, and inexpensively, spending a fraction of what it would cost to create such powerful software.
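The “weights” are simply the numerical parameters a trained model has learned, so sharing the file that holds them amounts to sharing the output of the expensive training run. A minimal PyTorch sketch (using a tiny toy model and a hypothetical filename, not LLaMA’s actual format):

```python
import torch
import torch.nn as nn

# A tiny model standing in for a large language model.
model = nn.Linear(4, 2)

# The "weights" are the learned numerical values; saving them to a file
# is what a weight release amounts to.
torch.save(model.state_dict(), "weights.pth")

# Anyone with the file can reconstruct the trained model directly,
# without repeating the costly training on hundreds of chips.
clone = nn.Linear(4, 2)
clone.load_state_dict(torch.load("weights.pth"))
```

Once loaded, `clone` behaves identically to the trained `model`, which is why receiving the weights skips the most expensive step.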

As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days, someone leaked the LLaMA weights onto 4chan.

At Stanford University, researchers used the new Meta technology to build their own AI system, which was made available on the Internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system gave instructions for disposing of a dead body without being caught. It also produced racist material, including comments that endorsed the views of Adolf Hitler.

In a private conversation among the researchers, seen by The Times, Mr. Doumbouya said distributing the technology to the public would be like “a grenade available to everyone in a grocery store.” He did not respond to a request for comment.

Stanford promptly removed the AI system from the Internet. The project was designed to provide researchers with technology that “captures the behaviors of cutting-edge AI models,” said Tatsunori Hashimoto, the Stanford professor who led the project. “We took the demo down because we became increasingly concerned about the potential for misuse beyond research.”

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of individuals could already produce and spread misinformation and hate speech, and added that toxic material could be tightly restricted by social networks such as Facebook.

“You can’t stop people from creating nonsense or dangerous information or whatever,” he said. “But you can stop it from spreading.”

For Meta, more people using its open source software could level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world built software with Meta’s tools, it could help anchor the company in the next wave of innovation and stave off potential irrelevance.

Dr. LeCun also referred to recent history to explain why Meta is committed to open source AI technology. He said the evolution of the consumer Internet was the result of open community standards that helped build the fastest and most pervasive knowledge-sharing network the world had ever seen.

“Progress is faster when it’s open,” he said. “You have a more vibrant ecosystem where everyone can contribute.”
