Scientists Use Brain Waves to Reconstruct Pink Floyd Song
In a study published in the journal PLOS Biology, researchers at the University of California, Berkeley have recreated a Pink Floyd song from brain activity alone. The team recorded the brain waves of 29 patients undergoing epilepsy treatment as they listened to "Another Brick in the Wall, Part 1" from the band's famous album "The Wall," then used machine learning to decode the neural signals and reconstruct the audio. It is the first time a song has been recreated solely from brain activity, and the work aims to deepen our understanding of how the brain processes sound and music while pointing toward new medical applications.
The team, led by neuroscientist Robert Knight, hopes the technology could eventually help patients with speech impediments communicate more effectively. Music, with its emotional and melodic elements, adds a layer of expressiveness to speech, making it a promising avenue for assisting people who have lost the ability to communicate. Decoding how the brain processes music also opens up possibilities for composing music through thought, and with further advances this research could lead to a "keyboard for the mind" or a machine that decodes the words patients want to say.
Listening to music may seem simple, but it involves a complex process in the brain. When sound waves enter the inner ear, they are converted into electrical signals that travel to the brain, where neurons in different regions interpret the lyrics, melody, and rhythm. By connecting electrodes to the brains of epilepsy patients and recording their activity while the patients listened to the Pink Floyd song, the researchers gained insight into this process. They likened the electrodes to "piano keys" that let them see how different musical elements affected the signal at each recording site.
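As a concrete illustration of that "piano key" picture, the sketch below correlates each electrode's activity with each frequency band of the music to see which bands a given electrode tracks. The array shapes and the use of a plain correlation map are assumptions for illustration, not the study's actual analysis, and the data are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_electrodes, n_bands = 19000, 64, 32          # stand-in sizes (~190 s at 100 Hz)
neural = rng.standard_normal((n_frames, n_electrodes))   # stand-in electrode activity
spectrogram = rng.standard_normal((n_frames, n_bands))   # stand-in song spectrogram

# Z-score both signals over time, then correlate every electrode with every band.
neural_z = (neural - neural.mean(0)) / neural.std(0)
spec_z = (spectrogram - spectrogram.mean(0)) / spectrogram.std(0)
tuning = neural_z.T @ spec_z / n_frames                   # (n_electrodes, n_bands) correlation map

# The frequency band each "piano key" responds to most strongly.
best_band = tuning.argmax(axis=1)
print(best_band[:10])
```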
The researchers chose "Another Brick in the Wall, Part 1" because all of the patients liked Pink Floyd and because the song has rich vocals and harmonics. Over six years, recordings of the patients' changing brain activity were assembled into a large dataset, but the project stalled until Ludovic Bellier, a musician and computational research scientist, joined the team and offered to decode the data. Bellier found that certain parts of the patients' temporal lobes became active on the 16th notes of the song's rhythm guitar, a response that had not been observed before.
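As a rough illustration of how such a rhythm-locked response could be checked, the sketch below detects note onsets in an audio clip and averages one electrode's activity just after each onset. The stand-in arrays, sampling rates, and window length are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np
import librosa

rng = np.random.default_rng(0)
sr = 22050                                        # audio sample rate
audio = rng.standard_normal(30 * sr)              # stand-in for 30 s of the song
neural_rate = 100                                 # neural frames per second
neural = rng.standard_normal(30 * neural_rate)    # stand-in for one electrode's activity

# Note onsets in the music, used here as a rough proxy for the rhythm-guitar hits.
onset_times = librosa.onset.onset_detect(y=audio, sr=sr, units="time")

# Average the electrode's activity in the 100 ms following each onset.
window = int(0.1 * neural_rate)
responses = []
for t in onset_times:
    start = int(t * neural_rate)
    if start + window <= len(neural):
        responses.append(neural[start:start + window].mean())

# A value well above the electrode's overall mean would suggest the site tracks the rhythm.
print(np.mean(responses), neural.mean())
```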
Using artificial intelligence (AI), the researchers then translated the decoded data back into music. The model learned how the brain responded to different sound frequencies and used those responses to generate a spectrogram, a representation of how a sound's frequencies change over time, which was then converted into audio. The resulting sound file closely resembled the original Pink Floyd song. The researchers believe the technology could lead to new treatments for patients who have lost the ability to communicate, since current speech-generating devices often produce robotic-sounding voices; the emotional and rhythmic elements of music could add expressiveness to these devices and improve the quality of communication.
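For readers curious what that decoding step can look like in practice, here is a minimal sketch, assuming time-aligned neural features and a target spectrogram, that fits a simple linear (ridge) regression from brain activity to spectrogram frames and then inverts the predicted spectrogram into a waveform. The data are random stand-ins, and the model and libraries are illustrative choices rather than the ones used in the study.

```python
import numpy as np
import librosa
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_electrodes, n_freqs = 5000, 128, 129          # stand-in sizes (n_freqs implies n_fft = 256)
neural = rng.standard_normal((n_frames, n_electrodes))    # e.g. per-electrode activity per frame
spec = np.abs(rng.standard_normal((n_frames, n_freqs)))   # target magnitude spectrogram

# Hold out the last quarter of the song for evaluation (no shuffling across time).
X_train, X_test, y_train, y_test = train_test_split(neural, spec, test_size=0.25, shuffle=False)

# One ridge regression predicting every spectrogram frequency bin from the electrodes.
model = Ridge(alpha=1.0).fit(X_train, y_train)
spec_pred = np.clip(model.predict(X_test), 0.0, None)     # magnitudes cannot be negative

# Estimate the missing phase and turn the predicted spectrogram into audio.
audio = librosa.griffinlim(spec_pred.T, n_iter=32, hop_length=64)
print(audio.shape)
```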
Overall, this groundbreaking study provides new insights into how the brain processes music and sound. By reconstructing a Pink Floyd song from neural signals, the researchers have demonstrated the potential for using brain waves to help patients with speech impediments and improve communication devices. The study also opens up possibilities for composing music through thought and has sparked interest in the medical community for further research in this field.