Physics-based Acoustics

Episode 95 Gavin Kearney & Helena Daffern (AudioLab, University of York)

This episode is sponsored by HHB Communications, the UK’s leader in pro audio technology. For years HHB has been delivering the latest and most innovative pro audio solutions to the world’s top recording studios, post facilities, and broadcasters. The team at HHB provide best-in-class consultation, installation, training, and technical support to customers who want to build or upgrade their studio environment for immersive audio workflows. To find out more, or to book a demo at their HQ facility, visit www.hhb.co.uk.

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by Professors Gavin Kearney and Helena Daffern from the AudioLab at the School of Physics, Engineering and Technology at the University of York, UK.

Gavin Kearney is a Professor of Audio Engineering at the School of Physics, Engineering and Technology at the University of York. He is an active researcher, technologist and sound designer for immersive technologies and has published over a hundred articles and patents relating to immersive audio. He graduated from Dublin Institute of Technology in 2002 with an honours degree in Electronic Engineering and has since obtained both MSc and PhD degrees in Audio Signal Processing from Trinity College Dublin. He joined the University of York as a Lecturer in Sound Design at the Department of Theatre, Film and Television in January 2011 and moved to the Department of Electronic Engineering in 2016. He leads a team of researchers at the York AudioLab which focuses on different facets of immersive and interactive audio, including spatial audio and surround sound, real-time audio signal processing, Ambisonics and spherical acoustics, game audio and audio for virtual and augmented reality, and recording and audio post-production technique development.

Helena Daffern is currently a Professor in Music Science and Technology at the School of Physics, Engineering and Technology at the University of York. Her research utilises interdisciplinary approaches to investigate voice science and acoustics, particularly singing performance, vocal pedagogy, choral singing and singing for health and well-being. Recent projects explore the potential of virtual reality to improve access to group singing activities and as a tool for singing performance research. She received a BA (Hons.) degree in music, an MA degree in music and a PhD in music technology, all from the University of York, UK, in 2004, 2005 and 2009 respectively. She went on to complete training as a classical singer at Trinity College of Music and worked in London as a singer and teacher before returning to York.

Helena and Gavin talk about the recently announced CoSTAR project, an initiative that focuses on leveraging novel R&D in virtual production technologies, including CGI, spatial audio, motion capture and extended reality, to create groundbreaking live performance experiences.

Listen to Podcast

Show Notes

Gavin Kearney LinkedIn – https://www.linkedin.com/in/gavin-p-kearney/?originalSubdomain=uk

Helena Daffern LinkedIn – https://www.linkedin.com/in/helena-daffern-32822439/?originalSubdomain=uk

AudioLab – https://audiolab.york.ac.uk/

University of York – https://www.york.ac.uk/

CoSTAR Project – https://audiolab.york.ac.uk/audiolab-at-the-forefront-of-pioneering-the-future-of-live-performance-with-a-new-rd-lab/

BBC Maida Vale Studios – https://www.bbc.co.uk/showsandtours/venue/bbc-maida-vale-studios 

AudioLab goes to BBC Maida Vale Recording Studios – https://audiolab.york.ac.uk/audiolab-goes-to-bbc-maida-vale-recording-studios/

Project SAFFIRE – https://audiolab.york.ac.uk/saffire/

Our Sponsors

Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk *Offer available until June 2024.*

HOLOPLOT is a Berlin-based pro-audio company and creator of the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is pioneering a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit holoplot.com.
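For readers new to the beamforming mentioned above, the sketch below shows the classic delay-and-sum principle for a linear array: each driver is fed a delayed copy of the same signal so that the wavefronts align in a chosen direction. This is a minimal, generic illustration only (it is not HOLOPLOT's algorithm), and every function name and parameter is invented for the example.

```python
import numpy as np

def steering_delays(num_drivers, spacing_m, angle_deg, c=343.0):
    # Per-driver delays (seconds) for delay-and-sum steering of a linear
    # array towards angle_deg off broadside (0 = straight ahead).
    positions = (np.arange(num_drivers) - (num_drivers - 1) / 2) * spacing_m
    delays = positions * np.sin(np.radians(angle_deg)) / c
    return delays - delays.min()  # shift so all delays are non-negative

def drive_signals(mono, fs, delays):
    # Build one feed per driver by delaying the mono input by whole samples.
    feeds = []
    for d in delays:
        n = int(round(d * fs))
        feeds.append(np.concatenate([np.zeros(n), mono]))
    length = max(len(f) for f in feeds)
    return np.stack([np.pad(f, (0, length - len(f))) for f in feeds])

# Example: steer a 16-driver array with 10 cm spacing 25 degrees off broadside.
fs = 48000
mono = np.random.randn(fs)  # 1 second of test noise
feeds = drive_signals(mono, fs, steering_delays(16, 0.10, 25.0))
```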

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 87 Lorenzo Picinali (Imperial College London)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by Lorenzo Picinali, an academic and researcher at Imperial College London, United Kingdom.

Lorenzo Picinali is a Reader at Imperial College London, leading the Audio Experience Design team. His research focuses on spatial acoustics and immersive audio, looking at perceptual and computational matters as well as real-life applications. In the past years, Lorenzo has worked on projects related to spatial hearing and rendering, hearing aid technologies, and acoustic virtual and augmented reality. He has also been active in the field of eco-acoustic monitoring, designing autonomous recorders and using audio to better understand humans’ impact on remote ecosystems.

Lorenzo talks about the breadth of research initiatives in spatial audio under his leadership of the Audio Experience Design group, and we discuss the recently published SONICOM HRTF Dataset, developed to improve personalised listening experiences.
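As background to the HRTF discussion, here is a minimal sketch of the basic binaural rendering step: convolving a mono source with the left and right head-related impulse responses (HRIRs) measured for one direction. The HRIRs below are dummy placeholders; a real renderer would load measured responses from a dataset such as SONICOM (typically distributed as SOFA files) and interpolate between directions.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    # Render a mono source at one direction by convolving it with the
    # HRIR pair for that direction, then peak-normalise the result.
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))

# Dummy HRIRs standing in for measured data: a crude interaural time and
# level difference only, purely for illustration.
fs = 48000
mono = np.random.randn(fs)                      # 1 second of test noise
hrir_left = np.zeros(256); hrir_left[0] = 1.0
hrir_right = np.zeros(256); hrir_right[20] = 0.8
binaural = binaural_render(mono, hrir_left, hrir_right)
```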

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer; check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

Lorenzo Picinali – https://www.imperial.ac.uk/people/l.picinali

Imperial College London – https://www.imperial.ac.uk/

Audio Experience Design – https://www.axdesign.co.uk/

SONICOM Website – http://www.sonicom.eu

The SONICOM HRTF Dataset – https://www.axdesign.co.uk/publications/the-sonicom-hrtf-dataset

The SONICOM HRTF Dataset AES Paper – https://www.aes.org/e-lib/browse.cfm?elib=22128

Immersive Audio Demonstration – https://www.youtube.com/watch?v=FWmKNNQpZJA&t=2s

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 73 Arjen Van Der Schoot (Audio Ease)

Summary

This episode is sponsored by Spatial, the immersive audio software that gives a new dimension to sound. Spatial gives creators the tools to create interactive soundscapes using their powerful 3D authoring tool, Spatial Studio. Their software modernises traditional channel-based audio by rethinking how we hear and feel immersive experiences, anywhere. To find out more, go to https://www.spatialinc.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by Arjen Van Der Schoot, co-founder of Audio Ease, from Utrecht, the Netherlands.

Arjen studied music technology at the Utrecht School of Arts, after which he and four of his fellow graduates started Audio Ease in 1995. During his studies he was freelancing as a classical music recording engineer, minoring in classical guitar and majoring in software development for audio and music. Together they made about ten products before their big break with Altiverb, which became an industry standard. The 360pan suite came out of the fierce hope that an old affection, Ambisonics, would now finally make it, because somebody thought of it in the context of head-tracked VR: an application that requires headphones, which in Arjen’s opinion is the only medium Ambisonics is perfect for.

Arjen tells a fascinating story of the creation of Audio Ease and the famous Altiverb, and we explore the topic of convolution for spatial audio.
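For context on convolution for spatial audio, the sketch below convolves a mono dry signal with a multichannel room impulse response (here a placeholder first-order Ambisonic, four-channel IR) and mixes in the dry feed. It is a generic illustration of convolution reverb in a spatial format, not the implementation used by Altiverb or the 360pan suite.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatial_convolution_reverb(dry, ir_multichannel, wet=0.5):
    # Convolve a mono dry signal with each channel of a multichannel room
    # impulse response (e.g. a first-order Ambisonic IR) and mix the dry
    # signal into the omnidirectional (W) channel. Minimal sketch only.
    wet_channels = np.stack(
        [fftconvolve(dry, ir_multichannel[:, ch])
         for ch in range(ir_multichannel.shape[1])],
        axis=1,
    )
    out = wet * wet_channels
    out[:len(dry), 0] += (1.0 - wet) * dry
    return out / np.max(np.abs(out))

# Placeholder 4-channel IR with an exponential decay; a real IR would be
# measured in an actual space.
fs = 48000
ir = np.random.randn(fs // 2, 4) * np.exp(-np.linspace(0.0, 8.0, fs // 2))[:, None]
dry = np.random.randn(fs)
bformat_reverb = spatial_convolution_reverb(dry, ir, wet=0.4)
```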

Listen to Podcast

Show Notes

Arjen Van Der Schoot – https://www.linkedin.com/in/arjen-van-der-schoot-4036a932/

Audio Ease – https://www.audioease.com/

Audio Ease Official YouTube Channel – https://www.youtube.com/user/audioease

Altiverb – https://www.audioease.com/altiverb/

360 Pan Suite – https://www.audioease.com/360/

Speakerphone – https://www.audioease.com/speakerphone/

Sonic Wonderland: A Scientific Odyssey of Sound – https://www.amazon.co.uk/Sonic-Wonderland-Scientific-Odyssey-Sound/dp/1847922104

Watch The Sound with Mark Ronson – https://tv.apple.com/gb/show/watch-the-sound-with-mark-ronson/umc.cmc.56ka6i8ccv7tsatj6nd1uo808

Sennheiser VR Mic – https://en-uk.sennheiser.com/microphone-3d-audio-ambeo-vr-mic

Delay Lama – https://freevstplugins.net/delay-lama

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 62 Nikunj Raghuvanshi & Noel Cross (Microsoft – Physics Based Virtual Acoustics)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel, Monica Bolles and Bjørn Jacobsen are joined by Noel Cross, a Principal Dev Leader in the Mixed Reality division at Microsoft, and Nikunj Raghuvanshi, a Senior Principal Researcher at Microsoft Research, from Redmond, US.

Nikunj likes to invent techniques that create immersive sight and sound from computation. He is endlessly fascinated with simulating the laws of physics in real time and finds it thrilling to search for simple algorithms that unfold into complex physical behaviour. He has over a decade of research and development experience at the intersection of computational audio, graphics and physics, with over fifty papers and patents. His inventions have been successfully deployed in industry, particularly Project Acoustics, which is bringing immersive sound propagation to many major AAA game franchises today. Nikunj is currently a Senior Principal Researcher at Microsoft Research. Previously, he initiated interactive sound simulation research during his PhD studies at UNC Chapel Hill; that codebase was later acquired by Microsoft.

Noel grew up playing games on his Commodore 64 and Amiga computers. His love for multimedia computing helped him start at Microsoft as an intern in 1991 in the multimedia team. This was the age of SoundBlaster 16 ISA cards, when CD-ROMs were just being introduced into PCs. Out of the multimedia team, the DirectX team was born to accelerate the development of high-quality games for the PC. Noel worked on DirectSound and audio drivers for Windows, getting a taste of the game development community by attending several GDCs in the 90s. This was the first time he was introduced to 3D audio algorithms, though at the time the technology didn’t impress him much. Through the 2000s, he worked on every release of Windows with a focus on improving the audio subsystem. This led to the complete overhaul of the audio infrastructure in Windows Vista, which has remained largely intact since it was introduced in 2006. The most recent stop on his Microsoft journey is working on Mixed Reality devices. He worked on the speech and audio functionality exposed by HoloLens and Windows Mixed Reality devices, with a concentration on spatial audio. After spatial audio’s lacklustre impact in the 90s, he has been reinvigorated working on the technology with the introduction of high-quality HRTFs and head-tracking services to complete the experience. Spatial audio processing has also led Noel to better understand the impact of acoustics on virtual 3D worlds. His team is currently working on Project Acoustics, which allows developers of 3D titles to take advantage of wave-based simulations to model how audio propagates in the real world.

In this episode, Nikunj and Noel dive deep into the topic of physics-based virtual acoustics along with Project Triton and Project Acoustics covering fundamental theory, research, technology and case studies.
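For readers curious about what wave-based simulation means in practice, here is a deliberately tiny, illustrative sketch: a 2D finite-difference time-domain (FDTD) solver for the scalar wave equation, propagating an impulse from a point source across a grid. Project Acoustics/Triton precomputes far larger 3D simulations and encodes perceptual parameters from them; this toy only shows the underlying idea of stepping the wave equation on a grid, and every name and parameter is invented for the example.

```python
import numpy as np

def fdtd_2d(nx=200, ny=200, steps=400, c=343.0, dx=0.05):
    # Toy 2D FDTD solver for the scalar wave equation on an nx-by-ny grid
    # with grid spacing dx (metres) and speed of sound c (m/s).
    dt = dx / (c * np.sqrt(2.0))          # time step satisfying the CFL condition
    courant2 = (c * dt / dx) ** 2
    p_prev = np.zeros((nx, ny))           # pressure field at t - dt
    p = np.zeros((nx, ny))                # pressure field at t
    p[nx // 2, ny // 2] = 1.0             # impulsive point source at the centre
    for _ in range(steps):
        # Discrete Laplacian via shifted copies of the field.
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
        p_next = 2.0 * p - p_prev + courant2 * lap
        # Simple zero-pressure (sound-soft) boundary on all four edges.
        p_next[0, :] = p_next[-1, :] = 0.0
        p_next[:, 0] = p_next[:, -1] = 0.0
        p_prev, p = p, p_next
    return p

pressure_snapshot = fdtd_2d()  # final snapshot of the propagated field
```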

Listen to Podcast

Show Notes

Project Acoustics: Making Waves with Triton – https://youtu.be/pIzwo-MxCC8

Gears of War 4, Project Triton: Pre-Computed Environmental Wave Acoustics  – https://youtu.be/qCUEGvIgco8

NOTAM – https://notam.no/meetups/the-ambisonics-salon/ 

Nikunj Raghuvanshi’s LinkedIn: https://www.linkedin.com/in/nikunj-raghuvanshi-a499172b/

Noel Cross’s LinkedIn: https://www.linkedin.com/in/noel-cross-0b9a51167/

Microsoft Soundscape – https://www.microsoft.com/en-us/research/product/soundscape/

Microsoft HoloLens – https://www.microsoft.com/en-us/hololens

AVAR 2022: AES 4th International Conference on Audio for Virtual and Augmented Reality – https://aes2.org/contributions/avar-2022/

Senua’s Saga: Hellblade II – Gameplay reveal – https://youtu.be/fukYzbthEVU

https://twitter.com/dagadi/status/1470000580223504389

Directional Sources & Listeners in Interactive Sound Propagation using Reciprocal Wave Field Coding – https://www.youtube.com/watch?v=pvWlCQGZpz4

Project Acoustics | Game Developers Conference 2019 – https://youtu.be/uY4G-GUAQIE

Interactive sound simulation: Rendering immersive soundscapes in games and virtual reality – https://youtu.be/2sKPDGBsM0Q

Project Triton Research website – https://www.microsoft.com/en-us/research/project/project-triton/

Notes on Parametric Wave Field Coding for Precomputed Sound Propagation – https://www.microsoft.com/en-us/research/publication/parametric-wave-field-coding-precomputed-sound-propagation/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.