Google’s latest AI model uses nearly five times more textual data for training than its predecessor

  • Google’s PaLM 2 large language model uses nearly five times as much textual data for training as its predecessor, CNBC has learned.
  • In announcing PaLM 2 last week, Google said the model is smaller than the previous PaLM but uses a more efficient “technique.”
  • The lack of transparency about training data in AI models has become an increasingly hot topic among researchers.

Sundar Pichai, CEO of Alphabet Inc., during the Google I/O Developers Conference in Mountain View, Calif., on Wednesday, May 10, 2023.

David Paul Morris | Bloomberg | Getty Images

CNBC has learned that Google’s new large language model, which the company announced last week, uses nearly five times as much training data as its 2022 predecessor, allowing it to perform more advanced coding, math and creative writing tasks.

PaLM 2, the company’s new public-use large language model (LLM) unveiled at Google I/O, has been trained on 3.6 trillion tokens, according to internal documents seen by CNBC. Tokens, which are strings of words, are an important building block for training LLMs, because they teach the model to predict the next word that will appear in a sequence.
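As a rough sketch of that idea (this is not Google’s tokenizer or model, just a toy bigram counter for illustration), next-word prediction can be reduced to counting which token most often follows each token in the training text:

```python
from collections import Counter, defaultdict

# Toy illustration: treat each word as one token and learn
# next-token frequencies from a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows each token (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# Predict the most likely next token after "the".
prediction = following["the"].most_common(1)[0][0]
print(prediction)  # → "cat" (follows "the" twice, vs. "mat" once)
```

Real LLMs replace these raw counts with billions of learned parameters, but the training objective is the same: given the tokens so far, predict the next one.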

Google’s previous version of PaLM, which stands for Pathways Language Model, was released in 2022 and trained on 780 billion tokens.

While Google was eager to show off the power of its AI technology and how it could be integrated into search, email, word processing and spreadsheets, the company was unwilling to publish the volume or other details of its training data. OpenAI, the Microsoft-backed creator of ChatGPT, has also kept the details of its latest LLM, GPT-4, secret.

The companies say the reason for the lack of disclosure is the competitive nature of the business. Google and OpenAI are rushing to attract users who might want to search for information using chatbots instead of traditional search engines.

But as the AI arms race rages on, the research community is calling for more transparency.

Since announcing PaLM 2, Google has said the new model is smaller than prior LLMs, which is significant because it means the company’s technology is becoming more efficient while accomplishing more sophisticated tasks. PaLM 2, according to internal documents, was trained on 340 billion parameters, an indication of the complexity of the model. The initial PaLM was trained on 540 billion parameters.

Google did not immediately provide comment for this story.

Google said in a blog post about PaLM 2 that the model uses a “new technique” called compute-optimal scaling. This makes the LLM “more efficient with overall better performance, including faster inference, fewer parameters to serve, and a lower serving cost.”

In announcing PaLM 2, Google confirmed previous CNBC reporting that the model is trained on 100 languages and performs a broad range of tasks. It’s already being used to power 25 features and products, including the company’s experimental chatbot Bard. It’s available in four sizes, from smallest to largest: Gecko, Otter, Bison and Unicorn.

PaLM 2 is larger than most existing models, based on public disclosures. Facebook’s LLM called LLaMA, which it announced in February, was trained on 1.4 trillion tokens. The last time OpenAI shared ChatGPT’s training size was with GPT-3, when the company said it had been trained on 300 billion tokens at the time. OpenAI released GPT-4 in March, and said it exhibits “human-level performance” on various professional tests.

LaMDA, a conversational LLM that Google introduced two years ago and touted in February alongside Bard, was trained on 1.5 trillion tokens, according to the latest documents seen by CNBC.

As new AI applications go mainstream rapidly, so does the debate surrounding the underlying technology.

El Mahdi El Mhamdi, a senior Google Research scientist, resigned in February, citing the company’s lack of transparency. On Tuesday, OpenAI CEO Sam Altman testified at a hearing of the Senate Judiciary subcommittee on privacy and technology, where he agreed with lawmakers that a new system for dealing with AI is needed.

“For a technology that is so new, we need a new framework,” Altman said. “Certainly companies like ours have a lot of responsibility for the tools we put out into the world.”

— CNBC’s Jordan Novet contributed to this report.

WATCH: OpenAI CEO Sam Altman calls for AI oversight
