Meta introduces LLaMA, a large language model it positions as a more effective research tool than OpenAI’s GPT-3

Chatbots are flooding in. Following the wave started by OpenAI’s ChatGPT, Google released Bard, and a number of other companies followed suit. Now Meta Platforms is positioning itself to stand out from its competitors: the California-headquartered internet giant has released a new research model intended to underpin future AI chatbots.

Mark Zuckerberg, the co-founder of Facebook, announced that Meta Platforms would release LLaMA (Large Language Model Meta AI), a new large language model, to researchers. The social networking giant intends to give researchers access to the technology and eventually incorporate it into its own platform.

According to the official press release, LLaMA is a next-generation foundational language model created to aid researchers working in the AI domain. Interestingly, this is Meta’s third public LLM, following Galactica and BlenderBot 3, both of which were pulled back after drawing criticism for producing inaccurate output.

What is LLaMA?

LLaMA is not itself a chatbot; Meta describes it as a research tool meant to help study and address known problems with AI language models, such as bias, toxicity, and the tendency to generate misinformation.

LLaMA is a family of language models ranging from 7B to 65B parameters, trained, according to the company, on trillions of tokens. Meta asserts that state-of-the-art models can be trained on publicly available data sets rather than on proprietary, inaccessible ones. Although LLaMA is not currently used in any of Meta’s products, the company describes it as its most advanced system and intends to make it available to researchers.
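For researchers who are granted access, working with a model like this looks much the same as with any other causal language model. The sketch below shows one plausible way to load a 7B checkpoint and generate text using the Hugging Face transformers library; the local path is a placeholder, and the example assumes the weights have already been converted to the transformers format, which is not how Meta distributes them directly.

```python
# Minimal sketch: generating text from a LLaMA-style checkpoint with Hugging Face transformers.
# Assumes the weights have been obtained from Meta and converted to the transformers format;
# "path/to/llama-7b" is a hypothetical local directory, not an official identifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b"  # placeholder path to converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision keeps the 7B model within a single GPU's memory
    device_map="auto",          # spread layers across available devices (requires accelerate)
)

prompt = "Large language models are useful for research because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a short continuation from the base (non-chat) model.
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```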

The adaptability and problem-solving abilities of LLaMA may offer a preview of the enormous benefits AI could bring to billions of people at scale.