Meta Releases Latest AI Generator, Able to Make Music from Text

Kabari99 – Meta and Microsoft have joined forces to introduce Llama 2, the next-generation large language model for AI.

Alongside the new language model, Mark Zuckerberg's Meta is working on various generative AI tools for Instagram, including one that helps identify AI-generated content.

Such tools may be needed more than one might think, because Meta has now introduced its latest project.




In a blog post, Meta introduced its newest AI tool, AudioCraft, which generates high-quality, realistic audio and music from text.

The company says the tool helps small business owners easily add a soundtrack to their latest video ad on Instagram, so there is no more browsing through different songs for hours before uploading a Reel.









Users only need to write down what kind of music they need, and the AI tool generates it, as quoted from Phone Arena.

AudioCraft has yet to launch on any of the social media platforms that Meta owns, but it may only be a matter of time before such AI tools become another feature that users can use every day. For now, Meta is releasing AudioCraft as open-source code.








The company says its goal is for researchers and practitioners to train models with their own datasets and help advance the field of AI-generated audio and music.

AudioCraft is a collection of three models: MusicGen, AudioGen, and an enhanced version of EnCodec.

MusicGen is an audio-generation model designed for making music. It was trained on a large dataset of approximately 400,000 music recordings, including text descriptions and metadata, totaling 20,000 hours of music owned by Meta or licensed for this specific purpose.

AudioGen creates audio from written prompts, simulating sounds such as barking dogs or footsteps, and was trained on public sound effects. An improved version of Meta's EnCodec decoder lets users create sounds with fewer artifacts, the distortions that appear when audio is manipulated too much.









The company let the media listen to some sample audio made with AudioCraft.

The generated whistling, sirens, and humming sounded fairly natural, and while the guitar strings on the songs felt real, the music itself still felt, well, artificial.








Meta is just the latest to tackle combining music and AI.

Google came up with MusicLM, a large language model that generates minutes of sound from text prompts and is accessible only to researchers.

Then, an “AI-generated” song featuring a voice likeness of Drake and The Weeknd went viral before it was taken down.









More recently, some musicians, like Grimes, have encouraged people to use their voices in AI-made songs.

AudioCraft sounds like something that could produce elevator music or a stock tune fitted to a certain scene, rather than the next big pop hit.









Meta believes its new model can usher in a new wave of songs, much as synthesizers changed music once they became popular.

“We think MusicGen can turn into a new kind of instrument just like synthesizers when they first appeared,”

the company said in a blog post. Meta acknowledges the difficulty of creating an AI model capable of composing music, because audio often contains millions of data points at which the model makes predictions, compared with written-text models like Llama 2, which handle only thousands.

The company said AudioCraft needed to be open source in order to diversify the data used to train it.








“We realized that the dataset used to train our model was not very diverse. In particular, the music dataset used contains a larger portion of Western-style music and only audio-text pairs with text and metadata written in English,” said Meta.








“By sharing the code for AudioCraft, we hope that other researchers can more easily test new approaches to limit or eliminate the potential for bias and misuse of generative models.”

Record labels and artists have been sounding the alarm about the dangers of AI, as many fear AI models ingesting copyrighted material for training, and historically they are a legally aware group.

Sure, we all remember what happened to Napster, but more recently, Spotify has faced a multi-billion-dollar lawsuit under a law dating back to the days of player pianos, and just this year a court had to decide whether Ed Sheeran copied Marvin Gaye for “Thinking Out Loud.”

But before Meta’s “synthesizer” goes on tour, someone has to figure out the prompt that appeals to fans who want more machine-made songs and not just muzak.

