When we form memories, not everything we perceive is noticed, and not everything we notice is remembered. Humans are excellent at filtering their experience and retaining only its most important parts. What if audio compression systems had the same ability?
Our goal is to understand what makes sound memorable. With this work, we hope to gain insight into the cognitive processes that drive auditory perception and to predict the memorability of everyday sounds more accurately than existing approaches. Ultimately, such models could enable us to generate and manipulate the sounds around us to be more or less memorable.
We envision this research introducing new paradigms in audio compression, attention-driven user interaction, and auditory AR, among other areas.
An overview of our dataset, experiments, and methodology is available here: https://resenv.media.mit.edu/memory-dataset/demo.html