Meta and Google news add fuel to the open source AI fire

The open-source AI debate is getting hotter in Big Tech, thanks to recent headlines from Google and Meta.

On Tuesday evening, CNBC reported that Google’s latest large language model (LLM), PaLM 2, “uses nearly five times more text data for training than its predecessor,” even though when it announced the model last week, Google said PaLM 2 was smaller than its predecessor, PaLM, but uses a more efficient “technique.” “The company has been unwilling to publish the size or other details of its training data,” the article noted.

While a Google spokesperson declined to comment on CNBC’s reporting, Google engineers were, to put it mildly, angry about the leak and eager to share their thoughts. In a now-deleted tweet, Dmitry (Dima) Lepikhin, senior staff software engineer at Google DeepMind, tweeted: “Whoever leaked PaLM2 details to cnbc, fuck you honestly!”

Alex Polozov, a senior staff research scientist at Google, added his approval of what he called the “rant,” noting that leaks like this set a precedent for more siloed research.

Lucas Beyer, an AI researcher at Google in Zurich, agreed on Twitter: “It’s not the leaked token count (which I don’t even know is accurate) that bothers me, it’s the complete erosion of trust and respect. Leaks like this lead to more siloing and less openness over time, and a worse work/research environment in general. And for what? FFS.”

Not in response to the Google leak, but serendipitously timed, Meta chief AI scientist Yann LeCun gave an interview focused on Meta’s open-source AI efforts to The New York Times, published this morning.

The article describes Meta’s February release of its LLaMA large language model as “giving away its AI crown jewels”: Meta released the model’s source code to “academics, government researchers and others,” who provided their email addresses to Meta and could then download the code once the company had vetted them.

“The platform that wins will be open,” LeCun said in the interview, later adding that the increased secrecy at Google and OpenAI is a “big mistake” and “a really bad view of what’s going on.”

In a thread on Twitter, VentureBeat journalist Sean Michael Kerner pointed out that Meta has “already given away one of the most important AI/machine learning tools ever created: PyTorch. The foundational stuff needs to be open. And it is. After all, where would OpenAI be without PyTorch?”

But even Meta and LeCun will only go so far in terms of openness. For example, Meta made the LLaMA model weights available to academics and researchers on a case-by-case basis, including for Stanford’s Alpaca project, but those weights were subsequently leaked on 4chan. That leak, not Meta’s gated release, is what gave developers around the world full access to a GPT-level LLM for the first time; Meta’s release did not make LLaMA available for commercial use.

VentureBeat spoke with Meta last month about the nuances of its approach to the open- versus closed-source debate. In our interview, Joelle Pineau, VP of AI research at Meta, said that accountability and transparency in AI models are essential.

“More than ever, we need to invite people to see the technology with more transparency and lean into transparency,” she said, explaining that the key is to balance the level of access, which can vary depending on the potential harm of the model.

“My hope, which is reflected in our data-access strategy, is to figure out how to allow transparency for verifiability audits of these models,” she said.

On the other hand, she said, some levels of openness go too far. “That’s why the LLaMA model had a gated release,” she explained. “A lot of people would be very happy with full openness. I don’t think that’s the responsible thing to do today.”

LeCun remains outspoken about overblown AI fears

Still, LeCun remains a vocal advocate of open-source AI, and in the New York Times interview he argued that the spread of misinformation on social media is more dangerous than the latest LLM technology.

“You can’t stop people from creating nonsense or dangerous information or anything else,” he said. “But you can stop it from spreading.”

And while Google and OpenAI may become more closed with their AI research, LeCun insisted that they — and Meta — remain committed to open source, saying “progress is faster when it’s open.”

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

