Meta, the parent company of Facebook and Instagram, introduced AudioCraft, a suite of generative AI models for music and audio creation, on August 2. The tools allow users to generate musical and audio compositions from text inputs.
The suite includes MusicGen and AudioGen, which generate new audio from text-based inputs, and EnCodec, a neural audio codec that “allows for higher quality music generation with fewer artifacts.”
Meta stated in its announcement that its MusicGen model was trained with music it owns or has “specifically licensed.”
The training of AI models on copyrighted works has already drawn significant controversy across artistic disciplines, including a copyright-infringement lawsuit against Meta over its AI training practices.
Meta is offering MusicGen and AudioGen to the “research community” and developers in several forms. As it develops more sophisticated controls, the company anticipates that the models will be useful to both amateurs and professionals in the music industry: “With even more controls, we think MusicGen can turn into a new type of instrument — just like synthesizers when they first appeared.”
In a recent interview, Recording Academy president and CEO Harvey Mason Jr. made a similar comparison, likening the emergence of AI-generated music to the introduction of synthesizers.
Meta’s release of its AI music tools comes shortly after Google debuted MusicLM, its own text-to-music tool.
In May, Google announced that its AI Test Kitchen platform was admitting “early testers” of its products.
Meta and many other tech giants, such as Google and Microsoft, have been aggressively releasing new AI tools in a race to develop and deploy the most potent models.
On August 1, Meta announced the launch of new AI chatbots with distinct personalities, which are intended to serve users of its platforms as search assistants and as a “fun product to play with.”