The future of music

KAIST team discovers music-responsive neurons in the brain using AI model

January 23, 2024
KAIST
Source: The Korea Times

Key takeaways

  • On January 16, a KAIST research team led by Professor Hawoong Jung of the Department of Physics announced that it had used an artificial neural network model to identify the principle by which musical instincts emerge in the human brain without special learning.
  • The model, trained only to recognize everyday sounds, spontaneously developed music-selective neurons, suggesting that cognitive functions for music arise naturally from processing auditory information taken in from the outside world.

On January 16, a research team at KAIST, led by Professor Hawoong Jung of the Department of Physics, announced that it had used an artificial neural network model to identify the principle by which musical instincts emerge in the human brain without special learning.

Scholars have long tried to pinpoint the parallels and discrepancies between the music of diverse cultures and to investigate the origins of its universality. According to a 2019 Science article, ethnographically distinct cultures around the world make music using similar beats and melodies. Neuroscientists have also previously found that the auditory cortex, a particular region of the human brain, is responsible for processing musical information.

Using an artificial neural network model, Professor Jung’s group demonstrated that cognitive functions for music arise naturally, without any musical training, as a byproduct of processing auditory information taken in from the outside world. The researchers trained the network to recognize different sounds using AudioSet, a large-scale collection of sound data released by Google.
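
The article does not reproduce the team’s actual pipeline, but the recipe it describes, training a network to classify everyday sounds from AudioSet-style clips, can be sketched roughly as follows. This PyTorch sketch is illustrative only: the architecture, spectrogram shape, and label set are assumptions, not the authors’ model.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 527          # size of the AudioSet ontology; the study's label set may differ
N_MELS, N_FRAMES = 64, 96  # log-mel spectrogram shape, assumed for illustration

class AudioCNN(nn.Module):
    """Small convolutional classifier over log-mel spectrograms (stand-in model)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # one pooled activation per channel ("unit")
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                   # x: (batch, 1, N_MELS, N_FRAMES)
        h = self.features(x).flatten(1)     # pooled feature vector per clip
        return self.classifier(h)           # logits over sound categories

model = AudioCNN()
criterion = nn.BCEWithLogitsLoss()          # AudioSet clips carry multiple labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data:
x = torch.randn(8, 1, N_MELS, N_FRAMES)
y = torch.zeros(8, NUM_CLASSES)
y[torch.arange(8), torch.randint(0, NUM_CLASSES, (8,))] = 1.0
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```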

Interestingly, the research team found that certain neurons in the network model responded selectively to music. In other words, neurons emerged spontaneously that responded strongly to vocal and instrumental music but not to other sounds, such as those made by machines, animals, or the environment.
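
The article does not say how “responding strongly to music” was quantified. A common approach is a per-unit contrast index comparing mean activations on music versus non-music clips; the sketch below uses made-up activations and an assumed cutoff purely for illustration.

```python
import numpy as np

def music_selectivity(acts_music, acts_other, eps=1e-9):
    """Per-unit selectivity index: near +1 means music-only, near -1 means never for music.

    acts_music, acts_other: arrays of shape (n_clips, n_units) holding each
    unit's (non-negative) activation for music and non-music clips.
    """
    mu_m = acts_music.mean(axis=0)       # mean response to music clips
    mu_o = acts_other.mean(axis=0)       # mean response to all other sounds
    return (mu_m - mu_o) / (mu_m + mu_o + eps)

rng = np.random.default_rng(0)
acts_music = rng.random((200, 128))      # stand-in activations from a trained network
acts_other = rng.random((300, 128))
index = music_selectivity(acts_music, acts_other)
music_units = np.where(index > 0.3)[0]   # 0.3 is an assumed cutoff, not the paper's
print(f"{music_units.size} of 128 units flagged as music-selective")
```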

The response patterns of these neurons in the artificial neural network model were comparable to those observed in the auditory cortex of a real brain. For instance, the artificial neurons responded less to music that had been cut into short segments and rearranged, suggesting that the spontaneously generated music-selective neurons encode the temporal structure of music. This behavior held across 25 musical genres, including pop, rock, jazz, electronic, and classical, and was not exclusive to any one of them.
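
One simple way to run this kind of temporal-structure probe is to cut a clip into short segments, shuffle their order, and compare the network’s responses to the original and scrambled versions. The segment length and the comparison below are assumptions for illustration, not the study’s exact protocol.

```python
import numpy as np

def scramble(waveform, sr, segment_ms=250, seed=0):
    """Cut a 1-D audio array into fixed-length segments and shuffle their order."""
    seg = int(sr * segment_ms / 1000)
    n = len(waveform) // seg
    pieces = [waveform[i * seg:(i + 1) * seg] for i in range(n)]
    np.random.default_rng(seed).shuffle(pieces)
    return np.concatenate(pieces + [waveform[n * seg:]])  # keep any leftover tail

sr = 16_000
t = np.arange(sr * 2) / sr
music = np.sin(2 * np.pi * 440 * t)      # stand-in 2-second clip
scrambled = scramble(music, sr)
# Feeding both versions through the trained network and comparing the mean
# activation of the music-selective units would reveal whether those units
# depend on temporal structure rather than on short-term spectral content.
```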

Furthermore, suppressing the activity of the music-selective neurons significantly impaired the network’s accuracy in recognizing other natural sounds. In other words, the neural function that processes musical information also helps in processing other sounds, and ‘musical ability’ may be an instinct formed through evolutionary adaptation to better process sounds from nature.
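
In network terms, a suppression experiment like this can be approximated by zeroing the selected units during the forward pass and re-measuring accuracy. The PyTorch hook below is a minimal sketch on a stand-in model; the unit indices and architecture are hypothetical, not the authors’ procedure.

```python
import torch
import torch.nn as nn

# Stand-in network: a hidden layer whose units play the role of the
# music-selective neurons.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

def make_ablation_hook(unit_indices):
    idx = torch.as_tensor(unit_indices, dtype=torch.long)
    def hook(module, inputs, output):
        output = output.clone()
        output[:, idx] = 0.0        # silence the chosen hidden units
        return output               # returned value replaces the layer's output
    return hook

music_units = [3, 17, 42]           # indices a selectivity analysis might return
handle = model[1].register_forward_hook(make_ablation_hook(music_units))

x = torch.randn(4, 64)              # stand-in input batch
ablated_logits = model(x)           # forward pass with the units silenced
handle.remove()                     # unhook to restore the intact network
intact_logits = model(x)
# Comparing classification accuracy from ablated_logits vs. intact_logits
# over a labeled natural-sound test set quantifies the cost of the ablation.
```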

“The results of our study imply that evolutionary pressure has contributed to forming the universal basis for processing musical information in various cultures,” said Professor Hawoong Jung, who advised the research. 

On the significance of the study, he stated, “We look forward to this artificially built model with human-like musicality becoming an original model for various applications, including AI music generation, musical therapy, and research in musical cognition.” He also addressed its limitations, saying, “This research does not take into consideration the developmental process that follows the learning of music, and it must be noted that this is a study on the foundation of processing musical information in early development.”

Exciting times ahead for AI in music

MidderMusic reported on TikTok’s new AI feature a few days ago. Per the report, seven months after launching Ripple, its AI music-creation tool, TikTok has begun testing a feature called “AI Song,” which lets users generate music on the platform from text prompts. Redditors found that the feature is available only to a small subset of TikTok users in the video-upload section, suggesting a targeted beta release.

The “AI Song” feature is powered by Bloom, a massive open-source language model with 176 billion parameters that supports 46 natural languages. That places Bloom behind GPT-4, the most powerful language model to date, and roughly on par with OpenAI’s GPT-3.5. Many content creators have already begun experimenting with the new feature. AI has become an essential layer of the music and entertainment industry: over the past couple of months, major companies have launched AI tools, with Spotify, for instance, introducing an AI DJ that makes listening notably more convenient.

With this discovery by Professor Jung’s research group, AI looks set to take center stage in the production and consumption of music and, more broadly, entertainment. Findings like these support the notion that AI is not as dangerous as many believe, casting it instead as a vital next step toward a better world for everyone.

Tobi Opeyemi Amure

Tobi started as a crypto writer in 2017 and became a key crypto news editor and SEO specialist at Watcher Guru in 2021, significantly boosting their traffic. He enjoys Afrobeats, R&B, and Soul music in his spare time.
