Google MusicLM: An Experimental AI Tool That Generates Music from Text Descriptions


Google has officially released MusicLM, an experimental AI tool that interprets text descriptions and generates music from them.

When Google previewed MusicLM in an academic paper in January, the company said it had no immediate plans to release it. The tool is now accessible through AI Test Kitchen in a desktop browser and on Android and iOS. When a user enters a description of the song they want to create, MusicLM generates two versions of the track; the user can pick the one they prefer and download it for later use.

Users can also add words such as “electronic” or “classical” to steer the instrumentation, atmosphere, mood, and emotion, bringing the generated piece closer to what they have in mind. However, MusicLM is still in the early stages of development: while the output songs can be good, many of them sound like “something resembling music” that an amateur might produce when first experimenting with a DAW.

Music generated by MusicLM does not include the voice of any specific artist from its training data, and Google is restricting prompts to prevent the generation of tracks featuring the vocals of a particular musician, which could be treated as a deepfake. While technology that lets AI learn an artist’s vocals and performance style and reproduce them on different songs is fascinating, it raises copyright and other issues that are still being debated.

According to TechCrunch, Google has been working with musicians and holding workshops to explore how MusicLM can support their creative processes. The tool has room to improve, and as it does, it could help musicians turn text descriptions into music more quickly, making the creative process more efficient and effective.

While MusicLM is a promising technology, many issues remain to be solved. Deepfake music is a new phenomenon, and there are currently no clear laws governing the use of AI-generated music. In the case of rapper Jay-Z, the YouTube channel Vocal Synthesis used an AI model of his voice to cover Billy Joel’s “We Didn’t Start the Fire” without permission, prompting copyright infringement claims. The legal basis for deepfake music remains ambiguous, and many problems need to be resolved before the technology can be adopted on a larger scale.

Vishak
Vishak is Editor-in-chief at Code and Hack with a passion for AI and coding. He has a deep understanding of the latest trends and advancements in AI and software development, and he creates engaging, informative content on topics including machine learning, natural language processing, and coding. He stays up to date with the latest news and breakthroughs in these areas and delivers insightful articles and blog posts that help his readers stay informed and engaged.
