Unraveling the Mystery of Shazam's Song Identification Technology


Understanding the Basics: Frequency, Pitch, and Waves

To grasp how Shazam functions, it's essential to understand a few basic audio concepts. Frequency, measured in hertz (Hz), is the number of sound-wave cycles per second. Pitch is our perception of frequency: higher frequencies are heard as higher pitches. A waveform is a visual representation of how a sound's amplitude varies over time.
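
To make these ideas concrete, here is a minimal sketch in Python with NumPy (the article itself shows no code; the 440 Hz tone and 44.1 kHz sample rate are illustrative choices, not anything specific to Shazam):

```python
import numpy as np

fs = 44100                     # sample rate: samples per second (CD quality)
t = np.arange(0, 1.0, 1 / fs)  # one second of time points

# A 440 Hz sine wave completes 440 cycles per second and is heard
# as the pitch A4 (concert A).
a4 = np.sin(2 * np.pi * 440 * t)

# Doubling the frequency raises the perceived pitch by one octave.
a5 = np.sin(2 * np.pi * 880 * t)
```

Plotting a few milliseconds of either array shows the waveform, the sound's visual pattern over time.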

Graphs and Axes in Audio Analysis

Graphs are crucial tools for visualizing audio data. Axes represent the dimensions along which data is plotted: in audio analysis, the horizontal axis typically represents time, while the vertical axis represents amplitude or, in a frequency spectrum, the strength of each frequency.

Shazam: A Revolutionary App for Song Identification

Shazam, an app that identifies songs from short audio samples, boasts over 200 million global users monthly. Acquired by Apple in 2018, Shazam records a snippet of music and matches it against a vast database to find the song's name and artist. Originally, Shazam functioned through a phone service where users dialed a number and held their phone close to the music source.

The Challenge of Song Recognition

Identifying a song from a short audio sample is complex because real-world recordings are degraded by background noise, frequency distortion, and amplitude changes. These effects can significantly alter the audio's waveform, making traditional methods such as direct pattern matching ineffective for this task.
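
As a small illustration of the problem, consider a hypothetical "recording" that is just the original signal made quieter with noise added; even this mild degradation defeats direct sample-by-sample comparison (a sketch, not Shazam's code):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
original = np.sin(2 * np.pi * 440 * t)

# Simulate a phone capture of the same audio: reduced amplitude
# plus background noise.
recording = 0.3 * original + 0.1 * rng.standard_normal(len(t))

# Sample-by-sample comparison reports a mismatch even though a
# listener would recognize both signals as the same tone.
print(np.allclose(original, recording))  # False
```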


The First Step: Calculating a Spectrogram

Both the registration and recognition processes in Shazam begin with generating a spectrogram of the audio. Understanding spectrograms requires knowledge of Fourier transforms.
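
As a concrete illustration of what "calculating a spectrogram" means, here is a sketch using SciPy's off-the-shelf spectrogram routine on a synthetic sweep tone; this shows the general technique, not Shazam's proprietary pipeline:

```python
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
# A tone that sweeps from 100 Hz up to 2000 Hz, so the spectrogram
# shows frequency content changing over time.
audio = signal.chirp(t, f0=100, t1=2.0, f1=2000)

# Sxx[i, j] is the strength of frequency f[i] during the short
# time window centered at tt[j].
f, tt, Sxx = signal.spectrogram(audio, fs=fs, nperseg=1024)
print(Sxx.shape)  # (frequency bins, time slices)
```

Each column of Sxx is, in effect, a frequency spectrum for one short slice of the audio, which is where the Fourier transform described next comes in.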

The Fourier Transform: Decoding Frequencies

Fourier transforms analyze audio to determine which frequencies are present in it. For instance, applying a Fourier transform to a 20Hz sine wave would show a single peak at 20Hz. The result, known as a frequency spectrum, shifts the signal from the time domain to the frequency domain. The strength of each frequency component, which influences how audible it is, is represented on the Y-axis of the frequency spectrum.
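
To see this concretely, here is a minimal sketch using NumPy's FFT, a standard fast implementation of the discrete Fourier transform (the one-second duration and 1000 Hz sample rate are arbitrary illustrative choices):

```python
import numpy as np

fs = 1000                           # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)       # one second of samples
wave = np.sin(2 * np.pi * 20 * t)   # a pure 20 Hz sine wave

spectrum = np.abs(np.fft.rfft(wave))        # strength of each frequency
freqs = np.fft.rfftfreq(len(wave), 1 / fs)  # matching frequency axis

print(freqs[np.argmax(spectrum)])  # 20.0 -- the peak sits at 20 Hz
```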

Visualizing Combined Frequencies

When combining different frequencies, the Fourier transform reveals distinct spikes for each frequency. For example, adding a 50Hz sine wave at half the amplitude to a 20Hz sine wave would result in spikes at both 20Hz and 50Hz on the frequency spectrum.
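
Extending the previous sketch, summing the two sine waves and transforming the result produces exactly those two spikes, with the 50 Hz spike at half the height of the 20 Hz one:

```python
import numpy as np

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
combined = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# Scale so each spike's height matches its sine wave's amplitude.
spectrum = np.abs(np.fft.rfft(combined)) * 2 / len(combined)

# With 1000 samples at 1000 Hz, bin i corresponds to i Hz.
print(round(spectrum[20], 2))  # 1.0
print(round(spectrum[50], 2))  # 0.5
```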

Conclusion

Shazam's algorithm remains a testament to innovation in audio processing. While the technology has evolved since its inception, the fundamental principles of audio identification that Shazam laid down continue to be relevant. This look at Shazam's algorithm offers a glimpse into the sophisticated world of digital audio analysis, showcasing the intricate processes that let us identify songs with just a tap on our smartphones.