AI Recreates Pink Floyd Song from Brain Activity Patterns


In a groundbreaking study, researchers at the University of California, Berkeley have used artificial intelligence (AI) to recreate a Pink Floyd song by analyzing brain activity. By studying brain recordings from individuals with surgically implanted electrodes, the team identified specific signals linked to pitch, melody, harmony, and rhythm. They then trained an AI to generate a prediction of a previously unheard snippet of the song based on these brain signals. Astonishingly, the AI-produced clip was found to be 43 percent similar to the original. This research not only deepens our understanding of how the brain perceives music but also holds the potential to improve devices for individuals with speech difficulties, allowing them to communicate in a more human-like manner.


Artificial Intelligence Generates Passable Cover of Pink Floyd Song by Analyzing Brain Activity

An artificial intelligence (AI) has successfully created a cover of a Pink Floyd song by analyzing brain activity recorded while people listened to the original. This groundbreaking research, conducted by Robert Knight and his colleagues at the University of California, Berkeley, not only furthers our understanding of how we perceive sound but also holds potential for improving devices for individuals with speech difficulties.

The study involved recording brain activity from electrodes implanted on the surface of 29 people’s brains as part of epilepsy treatment. The participants listened to Pink Floyd’s “Another Brick in the Wall, Part 1,” while the researchers compared their brain signals to the song. They identified a subset of electrodes strongly linked to the song’s pitch, melody, harmony, and rhythm.
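The paper's exact electrode-selection procedure isn't detailed here, but the idea of finding channels "strongly linked" to musical features can be sketched as ranking electrodes by how well each one's activity correlates with a feature of the song. Everything below (the synthetic recordings, the amplitude-envelope feature, and the `rank_electrodes` helper) is an illustrative assumption, not the study's actual pipeline.

```python
import numpy as np

def rank_electrodes(recordings, feature):
    """Rank electrode channels by |Pearson correlation| with a song feature.

    recordings: array of shape (n_electrodes, n_samples)
    feature:    array of shape (n_samples,), e.g. the song's loudness envelope
    Returns electrode indices sorted from most to least feature-correlated.
    """
    corrs = np.array([abs(np.corrcoef(ch, feature)[0, 1]) for ch in recordings])
    return np.argsort(corrs)[::-1]

# Synthetic demo: electrode 0 tracks the feature, electrode 1 is pure noise.
rng = np.random.default_rng(1)
feature = np.sin(np.linspace(0, 20, 1000))
recordings = np.vstack([
    feature + 0.1 * rng.standard_normal(1000),  # strongly feature-locked
    rng.standard_normal(1000),                  # unrelated activity
])
print(rank_electrodes(recordings, feature))
```

In this toy setup the feature-locked channel reliably ranks first; the real study would apply a comparable analysis across many electrodes and several musical features (pitch, melody, harmony, rhythm) rather than a single envelope.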

Using this data, the researchers trained an AI to learn the connections between brain activity and the musical components. The AI was then able to generate a prediction of a 15-second segment of the song that it had not been trained on. The AI-generated clip had a spectrogram, a visual representation of how the audio's frequency content changes over time, that was 43 percent similar to the real song clip.
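The article doesn't specify how the 43 percent similarity was measured. As a rough sketch under assumptions of my own, one common way to compare two audio clips is to compute their magnitude spectrograms and correlate them; the `spectrogram_similarity` helper and the synthetic signals below are illustrative, not the study's method.

```python
import numpy as np
from scipy import signal

def spectrogram_similarity(audio_a, audio_b, fs=16000):
    """Pearson correlation between the magnitude spectrograms of two
    equal-length clips, as a crude similarity score in [-1, 1]."""
    _, _, spec_a = signal.spectrogram(audio_a, fs=fs)
    _, _, spec_b = signal.spectrogram(audio_b, fs=fs)
    return float(np.corrcoef(spec_a.ravel(), spec_b.ravel())[0, 1])

# Synthetic example: a one-second 440 Hz tone vs. a noisy copy of itself.
fs = 16000
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
reconstruction = original + 0.5 * rng.standard_normal(fs)

print(spectrogram_similarity(original, reconstruction, fs))
```

Because the noisy copy keeps the tone's dominant frequency peak, its spectrogram still correlates strongly with the original, which is the intuition behind judging a reconstruction by its spectrogram rather than by the raw waveform.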

The researchers also discovered that an area of the brain called the superior temporal gyrus processed the rhythm of the guitar in the song. Additionally, they found that signals from the right hemisphere of the brain were more crucial for processing music than those from the left hemisphere, confirming previous studies’ results.

Knight believes that this research could eventually lead to the development of devices that can communicate in a more human-like manner for individuals with speech difficulties such as amyotrophic lateral sclerosis or aphasia. Understanding how the brain represents the musical elements of speech, including tone and emotion, could make these devices sound less robotic.

While the invasive nature of brain implants makes it unlikely that this procedure will be used for non-clinical applications, other researchers have recently used AI to generate song clips from brain signals recorded non-invasively using functional magnetic resonance imaging (fMRI) scans.

Looking ahead, Ludovic Bellier, a member of the study team, suggests that if AIs can reconstruct music that people imagine, not just listen to, this approach could even be used to compose music. However, as AI-based recreations of songs using brain activity become more advanced, questions around copyright infringement may arise, depending on the level of similarity between the reconstruction and the original music.

Overall, this study highlights the potential of AI and brain activity analysis in the field of music perception and production. By unraveling the intricacies of how our brains process music, we can pave the way for advancements in assistive technology for individuals with speech difficulties and potentially even open new doors in music composition.

Short takeaway: Artificial intelligence has successfully generated a cover of a Pink Floyd song by analyzing brain activity. This research improves our understanding of how we perceive sound and has the potential to enhance devices for individuals with speech difficulties.

Crive - News that matters