Meta’s AI language model LLaMA, which leaked online in February, has drawn criticism from two US senators who fear it could be exploited for criminal purposes.
Meta, formerly known as Facebook, released LLaMA (Large Language Model Meta AI) in late February as a research tool for the AI community.
LLaMA is a state-of-the-art foundational large language model that can generate text for various tasks, such as writing creative stories, solving mathematical problems, and answering questions.
Meta claims that LLaMA is more compact and less resource-intensive than other large language models, such as OpenAI’s GPT-4 and Google’s Bard.
However, LLaMA was not intended for public use and was available only to researchers who requested access.
Shortly after its release, a 4chan user leaked the full model via BitTorrent, making it available to anyone with no monitoring or oversight.
The leak prompted two US senators, Richard Blumenthal and Josh Hawley, to write a letter to Meta’s CEO, Mark Zuckerberg, on June 6.
The senators raised concerns about the ethical implications of LLaMA and its potential for generating harmful content. They argued that Meta did not conduct sufficient risk assessments or provide adequate safeguards before releasing the model.
The senators cited examples of how LLaMA could easily be abused by spammers, cybercriminals, and others to commit fraud or distribute objectionable content.
They compared LLaMA with ChatGPT and Bard, which have built-in safeguards that prevent them from generating certain types of responses.
For instance, when asked to generate a note pretending to be someone’s son asking for money, ChatGPT would deny the request, whereas LLaMA would produce the requested letter, as well as responses involving self-harm, crime, and antisemitism.
The senators also noted that users had found ways to “jailbreak” ChatGPT and make it generate responses it would normally refuse.
The senators posed several questions to Zuckerberg regarding the development, release, and impact of LLaMA.
They also inquired about how Meta uses user data for AI research and what steps the company has taken to prevent or mitigate harm since the leak.
Meta has yet to respond publicly to the senators’ letter.
However, in its blog post announcing the release, the company stated its commitment to responsible AI practices and shared a model card describing how LLaMA was built.
Meta hoped that sharing LLaMA would help the research community address AI challenges such as bias, toxicity, and misinformation.
Meta is not the only tech giant developing large language models. OpenAI, Google, and DeepMind have also released models of their own, such as GPT-4, Bard, and Chinchilla.
These models have shown impressive capabilities but also raised ethical and social concerns.
Open-sourcing the code for these models allows for customization and collaboration among developers, but it also requires responsible and thoughtful approaches to prevent misuse and harm.