Approximating the acoustic impulse response function of rooms based on reverberant sound...

... is what we, Erik Larsson and Felix Viberg, are trying to accomplish in our Master's thesis project. We use deep convolutional neural networks (AI stuff) to extract the STFT spectrogram of the room impulse response function (a way of representing how sound behaves in a given room) from reverberant sounds recorded in the room. We call this system rirnet. With rirnet we can then create new, artificial sounds that blend well with the room's soundscape.
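
To give a rough picture of what that pipeline could look like, here is a minimal sketch in Python, assuming a scipy-based setup. The sample rate, the stand-in recording and the `rirnet_model` call are illustrative placeholders, not the actual rirnet code.

```python
import numpy as np
from scipy import signal

# Illustrative sketch only: names (fs, reverberant, rirnet_model) are
# placeholders, not the real rirnet implementation.
fs = 16000                              # assumed sample rate in Hz
reverberant = np.random.randn(2 * fs)   # stand-in for a 2 s reverberant recording

# Short-time Fourier transform of the reverberant recording
f, t, Z = signal.stft(reverberant, fs=fs, nperseg=512)
spectrogram = np.abs(Z)                 # magnitude spectrogram fed to the network

# A trained network would then map this spectrogram to the spectrogram
# of the room impulse response, e.g.:
# rir_spectrogram = rirnet_model(spectrogram)   # hypothetical call
```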

Digital cats

Imagine wearing an Augmented Reality (AR) headset with headphones on. A digital cat is placed in front of your eyes in the AR world. For the cat to sound as if it belongs in the physical room you are standing in, its meows must be altered to account for e.g. the echoes of the room. This is where rirnet comes in. When you make some noise, e.g. clap or yell "hello!", rirnet picks up the reverberant sound and extracts the needed information from it: an impulse response function. The extracted impulse response function is then applied to the meowing cat, making it sound as if it were actually in the room.
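
As a rough illustration of that last step, here is a minimal Python sketch, assuming the impulse response has already been estimated; the sample rate and the placeholder signals are made up for the example.

```python
import numpy as np
from scipy import signal

# Illustrative sketch only: dry_meow and estimated_rir are placeholders
# standing in for a recorded meow and an impulse response from rirnet.
fs = 16000
dry_meow = np.random.randn(fs)              # stand-in for a 1 s "dry" meow
estimated_rir = np.random.randn(fs // 2)    # stand-in for the extracted impulse response

# Convolving the dry sound with the impulse response "places" it in the room
wet_meow = signal.fftconvolve(dry_meow, estimated_rir, mode="full")

# Normalise to avoid clipping before playback
wet_meow /= np.max(np.abs(wet_meow))
```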

We need your help!

To measure the performance of rirnet we need to conduct a few tests. We would very much appreciate it if you could spend a few minutes of your precious time on this; it helps us understand the strengths and weaknesses of our system. The currently running tests can be found in the menu bar at the top of this page.