
2024 Grammys Will Feature Generative AI

For what is billed as “Music’s Biggest Night,” the Recording Academy and IBM have announced that they will use generative AI to produce material for the Grammy Awards’ social media channels and let music fans create AI-generated content of their own.

Before and throughout the awards, the new “AI Stories with IBM watsonx” tool will draw on real-time news and a range of other sources to create text, images, animations, and videos. Alongside the Recording Academy’s editorial team, fans will be able to generate their own AI content through a widget on the Grammys website. The awards ceremony and an additional day of programming will stream live online on February 4.

More than a hundred musicians who have been nominated for or won Grammys will have their stories told through AI Stories. The training data was compiled from publicly available content, including articles about music and the Grammys, historical statistics, artist pages, and Wikipedia profiles.

“The reality is, we’ve got millions of news articles in our content system,” said Ray Starck, VP of digital strategy at the Recording Academy. He points out that it’s one thing to run a search and get results; it’s another to sift through those results and seize an opportunity based on something currently trending in the business.

Starck told Digiday that the objective is to create more real-time content and experiment with AI for the 2024 Grammys. Even before speaking with IBM about employing Watson, the Academy had been developing product ideas to harness AI for content production while protecting its intellectual property. People from the Recording Academy and IBM will also be involved to ensure everything is accurate and to update content during the ceremony as new information comes in.

According to Starck, generative AI can enhance the editorial team’s topic research and content creation. Rather than letting users type any prompt they like, the Academy has also created pre-generated prompts to limit the risk of problematic outputs or IP issues.
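
The article doesn’t describe how that prompt allowlist works under the hood, but in spirit it is a lookup of vetted templates rather than free-form input. A minimal sketch, with invented prompt IDs and templates (the Academy’s actual system is not public):

```python
# Hypothetical allowlist of pre-generated prompts. Fans pick an ID;
# free-form text never reaches the model. All names are illustrative.
APPROVED_PROMPTS = {
    "career_highlights": "Summarize {artist}'s Grammy nominations and wins.",
    "first_grammy": "Describe the night {artist} won their first Grammy.",
    "fun_fact": "Share a notable fact about {artist}'s Grammy history.",
}

def build_prompt(prompt_id: str, artist: str) -> str:
    """Return a vetted prompt, rejecting anything outside the allowlist."""
    template = APPROVED_PROMPTS.get(prompt_id)
    if template is None:
        raise ValueError(f"Unknown prompt id: {prompt_id!r}")
    return template.format(artist=artist)

print(build_prompt("career_highlights", "Taylor Swift"))
```

Keeping the templates server-side means a user only ever selects an ID, so prompt-injection attempts and off-limits requests never reach the model in the first place.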

Starck explained that one of their primary content strategies was to mine the high-quality material stored in their CMS, looking back through the Academy’s records, accomplishments, and history.

The music industry’s experiments with generative AI come amid uncertainty about its possible effects on record companies and musicians. Universal Music and two other music publishers sued Anthropic last year, alleging that it broke copyright rules by using copyrighted lyrics to train its AI models and then using those models to answer questions via its Claude chatbot.

The Recording Academy unveiled new rules for AI-generated music less than a year ago, and now generative AI is being used for Grammy content. Last summer, Harvey Mason Jr., CEO of the Recording Academy, said that artists who use AI for elements such as voice or instrumentation could be eligible for nominations, so long as they can demonstrate that a person still “contributed creatively” in the relevant categories.


These AI initiatives continue a seven-year collaboration between the Recording Academy and IBM, which will showcase watsonx and its other products at the event. The partnership also marks the Academy’s first foray into AI-generated content built on a large language model. Money is reportedly “being made both ways” in the arrangement, according to Noah Syken, IBM’s vice president for sports and entertainment, though the tech company declined to share the details of the deal.

“It’s really about how do we understand the engagement we’re trying to create with a 50-year-old like me, or an 18-year-old,” Syken said. The team kept questions like these in mind: In what voice do you reach them? How do we teach the model to recognize the context in which the data is being delivered?

AI Stories was trained to draw on data from music-focused sources through a technique known as retrieval-augmented generation (RAG), built with Meta’s open-source Llama 2 large language model and IBM’s watsonx platform. IBM additionally employed few-shot learning, a method that helps train an AI model with a limited quantity of data. And to ensure AI Stories refers to every artist with the correct pronouns, IBM also trained the model on accurate identity information.
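
Neither IBM nor the Academy has published the implementation, but the retrieve-then-generate pattern the article describes can be sketched in a few lines. The keyword-overlap retriever, the single few-shot example, and the stubbed generate() call below are all illustrative assumptions, not watsonx or Llama 2 APIs:

```python
# Minimal RAG sketch: retrieve music-focused passages, prepend a worked
# example (few-shot), and hand the assembled prompt to a language model.
FEW_SHOT_EXAMPLES = [
    ("Who won Best New Artist in 2020?",
     "Billie Eilish won Best New Artist at the 2020 Grammys."),
]

def generate(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., a hosted Llama 2 endpoint)."""
    return f"[completion for a {len(prompt)}-character prompt]"

def answer_with_rag(question: str, corpus: list[str], top_k: int = 3) -> str:
    # Retrieval: rank passages by naive keyword overlap with the question.
    terms = set(question.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: len(terms & set(doc.lower().split())),
                    reverse=True)
    context = "\n".join(ranked[:top_k])
    # Augmentation: ground the model in retrieved facts plus examples.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in FEW_SHOT_EXAMPLES)
    prompt = (f"Answer using only these facts:\n{context}\n\n"
              f"{shots}\nQ: {question}\nA:")
    # Generation: the model completes the prompt.
    return generate(prompt)
```

A production system would swap the keyword ranker for an embedding-based search and the stub for a real model endpoint, but the three-step shape (retrieve, augment, generate) is the same.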


The biggest obstacle in building a tool for the Grammys was making a feature that was accurate yet imaginative and free-form, combining the music-specific data with Llama 2’s knowledge base. Aaron Baughman, an IBM engineer and inventor, used a liquid analogy to explain how the RAG technique prioritizes data sources based on the desired output of an AI model.

According to Baughman, imagine a series of buckets being filled with water. The team would gather factual information first, then pour in additional material from sources such as Wikipedia if there is room, and keep adding water as long as tokens remain in the context window.
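
Stripped of the analogy, this amounts to filling a fixed token budget in priority order: trusted facts first, supplementary sources only while room remains. A rough sketch, using a crude whitespace token count and invented data:

```python
# Baughman's "buckets" idea as code: pour high-priority sources into the
# context window first, lower-priority ones only while budget remains.
def fill_context(buckets: list[list[str]], max_tokens: int) -> list[str]:
    """buckets[0] holds the most trusted facts; later buckets supplement."""
    context, used = [], 0
    for bucket in buckets:                 # factual sources come first
        for passage in bucket:
            cost = len(passage.split())    # crude token estimate
            if used + cost > max_tokens:   # budget exhausted: stop pouring
                return context
            context.append(passage)
            used += cost
    return context

facts = ["Artist X has won 4 Grammys, most recently in 2023."]
wikipedia = ["Artist X was born in 1990 and began performing at age 12."]
print(fill_context([facts, wikipedia], max_tokens=40))
```

A real pipeline would count tokens with the model’s own tokenizer and cap the budget at its actual context limit, but the prioritization logic is the point of the analogy.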

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.