The data behind the discoveries
6,598 tracks. 44 features each. 65 years of Billboard history, enriched with brain activation maps.
The Resonance dataset is built by combining three layers of data for every track that appeared in the Billboard Hot 100 Year-End charts from 1960 to 2025.
Chart metadata: track title, artist, year, chart position. Scraped from Billboard's official year-end Hot 100 archives.
Audio features: valence, energy, danceability, loudness, tempo, mode, speechiness, acousticness, instrumentalness, liveness, duration. Extracted via the Spotify and Deezer APIs.
Brain activation: predicted cortical activation from TRIBE v2, an fMRI-trained neural network. 20,484 vertices grouped into 6 functional brain regions: Auditory, Motor, Language, Emotion, Prefrontal, Visual.
Lyrics were retrieved via the LRClib API, matching by track title and artist. Because analysis is limited to 30-second audio previews, lyrics are trimmed to that window; the average word count is roughly 50 words per track, a snapshot rather than the full lyric.
Billboard Hot 100 Year-End charts from 1960 to 2025, parsed into structured records (title, artist, rank, year).
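The parsing step can be sketched as follows. This is a minimal illustration, not the project's actual scraper: it assumes the rank/title/artist strings have already been pulled from a chart page, and the example row is hypothetical input.

```python
from dataclasses import dataclass

@dataclass
class ChartEntry:
    """One structured record per year-end chart row."""
    title: str
    artist: str
    rank: int
    year: int

def parse_rows(rows, year):
    # rows: (rank_str, title, artist) tuples already extracted from the page
    return [ChartEntry(title=t.strip(), artist=a.strip(), rank=int(r), year=year)
            for r, t, a in rows]

entries = parse_rows([(" 1", "Theme from A Summer Place", "Percy Faith")], 1960)
```

A dataclass keeps each record typed and comparable, which makes deduplication across chart years straightforward.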
Each track was matched to a Spotify ID via the Search API; Deezer served as a fallback for unmatched tracks. Audio features were then extracted via the respective APIs.
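The fallback logic amounts to trying providers in order until one returns an ID. A small sketch, with stub lookup functions standing in for the real Spotify and Deezer Search API calls (the function names and stub IDs are assumptions for illustration):

```python
def match_track(title, artist, lookup_spotify, lookup_deezer):
    """Return (source, track_id); Spotify is tried first, Deezer as fallback.

    Each lookup takes a query string and returns a track ID or None.
    """
    query = f"{title} {artist}"
    for source, lookup in (("spotify", lookup_spotify), ("deezer", lookup_deezer)):
        track_id = lookup(query)
        if track_id is not None:
            return source, track_id
    return None, None

# Stub lookups standing in for real API calls:
spotify = {"respect aretha franklin": "sp123"}.get
deezer = {"respect aretha franklin": "dz999", "obscure b-side nobody": "dz001"}.get

hit = match_track("respect", "aretha franklin", spotify, deezer)
fallback = match_track("obscure b-side", "nobody", spotify, deezer)
```

Keeping the providers in an ordered tuple makes it trivial to add a third source later without changing the matching logic.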
30-second audio previews were processed through TRIBE v2, generating predicted cortical activation maps (20,484 vertices × 30 timesteps). Vertices were grouped into 6 functional regions based on a neuroanatomical atlas.
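Grouping vertices into regions is a label-wise aggregation over the activation matrix. A sketch using NumPy, with random stand-ins for both the TRIBE v2 output and the atlas labels (the real atlas assigns each vertex to one of the 6 named regions):

```python
import numpy as np

N_VERTICES, N_STEPS, N_REGIONS = 20484, 30, 6
rng = np.random.default_rng(0)

# Stand-ins: predicted activation (vertices x timesteps) and atlas labels.
activation = rng.standard_normal((N_VERTICES, N_STEPS))
region_of = rng.integers(0, N_REGIONS, size=N_VERTICES)

# Mean activation per region per timestep -> shape (6, 30)
region_means = np.stack([activation[region_of == r].mean(axis=0)
                         for r in range(N_REGIONS)])
```

Reducing 20,484 vertices to 6 region time courses is what makes per-track brain signatures small enough to store and compare at scale.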
Lyrics were matched via the LRClib API by title and artist: 5,490 of 6,598 tracks matched (83%), averaging ~50 words per track from the 30-second preview window.
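Matching by title and artist across services usually requires normalizing both strings first, since casing, accents, and suffixes like "(Remastered)" differ between catalogs. A sketch of one plausible normalization key (an assumption, not the project's documented method):

```python
import re
import unicodedata

def match_key(title, artist):
    """Build a normalized (title, artist) key for joining across catalogs."""
    def norm(s):
        # Strip accents: NFKD decomposition, then drop non-ASCII marks.
        s = unicodedata.normalize("NFKD", s).encode("ascii", "ignore").decode()
        # Drop parenthetical suffixes like "(Remastered 2011)", then punctuation.
        s = re.sub(r"\(.*?\)", "", s.lower())
        s = re.sub(r"[^a-z0-9 ]", "", s)
        return " ".join(s.split())
    return (norm(title), norm(artist))
```

For example, `match_key("Hello (Remastered)", "Adele")` and `match_key("hello", "ADELE")` collapse to the same key.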
All data stored in PostgreSQL, with pre-computed JSON exports for each exploration. No external dependencies at runtime.
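A pre-computed export is just the result of a database query serialized once at build time, so the front end only fetches static files. A minimal sketch of the round trip; the field names and example values are illustrative, not the project's actual schema:

```python
import json

# Built once from PostgreSQL at export time, then served as a static file.
export = {
    "exploration": "valence-by-decade",
    "tracks": [
        {"title": "Respect", "artist": "Aretha Franklin", "year": 1967},
    ],
}
payload = json.dumps(export, separators=(",", ":"))  # compact static JSON

# At runtime the front end only parses the file; no database round-trip.
restored = json.loads(payload)
```

This is what "no external dependencies at runtime" means in practice: the database is a build-time tool, not a serving dependency.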
| Layer | Tracks | % |
|---|---|---|
| Chart metadata | 6,598 | 100% |
| Audio features | 6,098 | 92% |
| Brain activation | 6,458 | 98% |
| Audio + Brain | 6,023 | 91% |
| Lyrics | 5,490 | 83% |