
Episode 108 David Ledoux (Société Des Arts Technologiques)

Immersive Audio Podcast Masterclass

Want to access the content from our Immersive Audio Podcast Masterclass series? Head over to our Patreon page at https://www.patreon.com/c/immersiveaudiopodcast. These sessions are designed to enhance your practical learning experience and are delivered by world-class experts. We go deeper by providing video demonstrations, spatial audio playback and an exclusive opportunity to interact with our expert guests. Our latest instalment features the Co-Founder and CEO of Atmoky – Dr Markus Zaunschirm, the Co-Founder and Lead Developer at Atmoky – Christian Schörkhuber, and the Audio Director at VRelax – Jelmer Althuis! In this session, we cover spatial and interactive sound design for games, XR and web applications using cutting-edge authoring tools by Atmoky (https://atmoky.com/).

Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter at https://immersiveaudiopodcast.com/.

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Research Integrator in immersive sound at the Society for Arts and Technology – David Ledoux from Montreal, Canada.

Holder of a Master’s degree in composition and sound creation, David has spent countless hours experimenting with sound spatialization in the speaker domes at the Faculty of Music of the University of Montreal. As a teaching assistant in acoustics, psychoacoustics, sound recording and mixing, as well as a research assistant for the Groupe de Recherche en Immersion Spatiale (GRIS), David acquired unique expertise while contributing to the development of SpatGRIS, an open-source software suite for sound spatialization. After serving as Immersive Audio Lead, overseeing the operational integration of the new 93.5-channel audio system for the Satosphere and providing technical support to artists in residence, David now works as a Research-Integrator in immersive sound at the Society for Arts and Technology (SAT), offering services and consulting in scenophony for various projects, both for internal needs and for external partners of the SAT. David is also a Collaborating Member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). David covers various immersive audio initiatives and projects under the SAT umbrella, featuring the 93.5-channel audio system for the Satosphere, and we dive into the concept of Scenophonic Spatial Audio.

Listen to Podcast

Show Notes

David Ledoux – LinkedIn

David Ledoux Instagram

David Ledoux Personal Website

SAT Official Website

SAT Projects

Scenophonic Spatial Audio Facebook Group

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 106 (DOK Exchange XR)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel co-hosts the DOK Exchange XR annual podcast with its regular host and curator – Weronika Lewandowska. They are joined by a spatial audio designer and engineer – Aili Niimura, and a spatial audio designer and co-founder at DELTA Soundworks – Ana Monte.

We discuss how to enhance human interactions through spatial audio and how to approach designing audio experiences that foster meaningful connections within immersive environments. Can spatial audio help create more collaborative or socially engaging XR experiences? How does the physical environment influence design choices? What challenges do Ana and Aili face when integrating spatial audio into immersive experiences that may involve both fully digital and speaker-based deployment? We cover all this and much more.

Weronika Lewandowska is the coordinator of DOK Exchange XR, a spoken word poet and a performer with a PhD in cultural studies. She experiments with interdisciplinary forms of poetry, and her work has been published internationally and exhibited at venues such as the Palais de Tokyo in Paris. She co-directed, wrote and produced the VR experience “Nightsss”, which premiered at Sundance. She has served as an expert for the Committee for Innovative Film Projects at the National Film Institute Poland (2021) and mentors the Climate Challenge XR project for the European Space Agency. She curates innovative XR showcases, such as the “Embodied Realms” festival focused on XR and performance art in Poland and the “XRossspace Showcase” at BFI London (2023). She writes and researches XR climate games for the Guest XR (AI RL Agent) research project.

Aili Niimura is a spatial audio designer and engineer dedicated to pushing the boundaries of spatial audio and AR/VR. She currently works on spatial audio prototyping as a Meta contractor, having formerly driven Metaverse voice experiences at Microsoft Mesh and spatial audio experiences and devices at Bang & Olufsen. Her focus revolves around end-to-end spatial design, aiming to bridge the gap between art and technology in areas including voices in the Metaverse, spatial audio prototyping, immersive sound design, tuning spatial audio algorithms, audio tooling for creators and spatial audio to enhance human interactions.

Ana Monte is a leading expert in spatial audio, co-founder, and sound designer at DELTA Soundworks. Ana specialises in creating immersive audio content that places listeners inside the story across formats like XR, Fulldome, and Themed Attractions. She draws inspiration for authentic soundscapes from her deep interest in different cultures and her experience as a sound recordist and sound designer for documentaries.

Listen to Podcast

Show Notes

DOK Exchange XR is DOK Leipzig’s networking and inspiration programme on interactive and immersive storytelling, with a focus on XR works and the sustainable development of the international community of XR creators, producers and distributors. This year’s programme focuses on Spatial Audio and will take place in a hybrid form on 31 October and 1 November 2024. For more information, visit https://www.dok-leipzig.de/en/dok-exchange-xr#-dok-exchange-xr-conference

Weronika Lewandowska LinkedIn

Aili Niimura LinkedIn

Ana Monte LinkedIn 

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 105 Sol Rezza (Dimensions of a Sphere)

Announcement

We’re excited to announce the launch of the Immersive Audio Podcast Masterclass series. The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. We go a level deeper by providing video demonstrations, spatial audio playback and an exclusive opportunity to interact with our expert guests. Details are available at https://www.eventbrite.com/e/immersive-audio-podcast-masterclass-with-dr-hyunkook-lee-tickets-977952280597

We proudly present our first expert guest, Dr. Hyunkook Lee, Professor of Audio and Psychoacoustic Engineering and Director of the Applied Psychoacoustics Laboratory (APL) at the University of Huddersfield, UK. In this masterclass, our audience will learn about the psychoacoustic and cognitive aspects of binaural perception, binauralisation technology and practical use cases of Virtuoso (https://apl-hud.com/product/virtuoso/).

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the composer and sound designer – Sol Rezza from Buenos Aires, Argentina.

Sol Rezza is a composer, sound designer, and audio engineer who has carved a path in the realms of immersive narratives and spatial audio. Residing in Mexico for 14 years enriched her understanding of soundscapes and aboriginal narratives concerning time and space. This exploration period led to a diverse skill set, including sound engineering, sound design, room acoustics, spatial audio, and immersive audio.

Rezza’s sonic creations invite audiences to reflect on how sound can redefine their relationship with the environment and the technological developments surrounding them. By employing cutting-edge methodologies such as adaptive spatial processing, real-time interactive sound design, and AI-driven audio synthesis, her goal is to establish new standards for experimental immersive sonic narratives. 

Believing in the power of collaboration, Sol actively participates in the independent creative community by organizing workshops and discussions that foster dialogue and learning in immersive audio experiences. These initiatives invite independent producers to explore their own narrative proposals, enhancing both technical skills and critical thinking about the evolving landscape of sound. 

Her portfolio includes notable works such as the album “SPIT” (2011), praised by the American press, and performances like “In the Darkness of the World” (2015), which was described as a captivating blend of sounds evoking otherworldly realms. In 2021, her audiovisual installation “Catastrophic Forgetting” debuted at the X-Church Cultural Center, utilizing a state-of-the-art multichannel sound system. 

Currently, Sol is researching the influence of advanced audio technologies on digital storytelling. Through her work, she aims to inspire the next generation of audio artists and professionals, expanding the vital role of sound in shaping our perceptions of time and space.

Sol talks about her projects whilst exploring concepts of time and space through the lens of new technologies such as spatial audio and artificial intelligence.

Listen to Podcast

Show Notes

Sol Rezza LinkedIn – https://www.linkedin.com/in/solrezza/

Sol Rezza Official Website – https://solrezza.net/en/

Sol Rezza Research – https://solrezza.net/en/immersive-audio-research/

Sol Rezza Instagram – https://www.instagram.com/sol.rezza/

Sol Rezza YouTube – https://www.youtube.com/solrezza

Sol Rezza Email – [email protected]

The Turbulence Sound Matrix – https://archive.aec.at/media/assets/38c52969b8c4fec6cd15e091fc08cc58.pdf

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 95 Gavin Kearney & Helena Daffern (AudioLab, University of York)

This episode is sponsored by HHB Communications, the UK’s leader in Pro Audio Technology. For years HHB has been delivering the latest and most innovative pro audio solutions to the world’s top recording studios, post facilities and broadcasters. The team at HHB provides best-in-class consultation, installation, training and technical support to customers who want to build or upgrade their studio environment for an immersive audio workflow. To find out more or to book a demo at their HQ facility, visit www.hhb.co.uk

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by Professors Gavin Kearney and Helena Daffern from the AudioLab at the School of Physics, Engineering and Technology at the University of York, UK.

Gavin Kearney is a Professor of Audio Engineering at the School of Physics, Engineering and Technology at the University of York. He is an active researcher, technologist and sound designer for immersive technologies and has published over a hundred articles and patents relating to immersive audio. He graduated from Dublin Institute of Technology in 2002 with an honours degree in Electronic Engineering and has since obtained both MSc and PhD degrees in Audio Signal Processing from Trinity College Dublin. He joined the University of York as a Lecturer in Sound Design at the Department of Theatre, Film and Television in January 2011 and moved to the Department of Electronic Engineering in 2016. He leads a team of researchers at the York AudioLab, which focuses on different facets of immersive and interactive audio, including spatial audio and surround sound, real-time audio signal processing, Ambisonics and spherical acoustics, game audio and audio for virtual and augmented reality, and the development of recording and audio post-production techniques.

Helena Daffern is a Professor in Music Science and Technology at the School of Physics, Engineering and Technology at the University of York. She received a BA (Hons.) degree in music, an MA degree in music and a PhD in music technology, all from the University of York, UK, in 2004, 2005 and 2009 respectively. She went on to complete training as a classical singer at Trinity College of Music and worked in London as a singer and teacher before returning to York. Her research utilises interdisciplinary approaches to investigate voice science and acoustics, particularly singing performance, vocal pedagogy, choral singing and singing for health and well-being. Recent projects explore the potential of virtual reality to improve access to group singing activities and as a tool for singing performance research.

Helena and Gavin talk about the recently announced CoSTAR project, an initiative focused on leveraging novel R&D in virtual production technologies, including CGI, spatial audio, motion capture and extended reality, to create groundbreaking live performance experiences.

Listen to Podcast

Show Notes

Gavin Kearney LinkedIn – https://www.linkedin.com/in/gavin-p-kearney/?originalSubdomain=uk

Helena Daffern LinkedIn – https://www.linkedin.com/in/helena-daffern-32822439/?originalSubdomain=uk

AudioLab – https://audiolab.york.ac.uk/

University of York – https://www.york.ac.uk/

CoSTAR Project – https://audiolab.york.ac.uk/audiolab-at-the-forefront-of-pioneering-the-future-of-live-performance-with-a-new-rd-lab/

BBC Maida Vale Studios – https://www.bbc.co.uk/showsandtours/venue/bbc-maida-vale-studios 

AudioLab goes to BBC Maida Vale Recording Studios – https://audiolab.york.ac.uk/audiolab-goes-to-bbc-maida-vale-recording-studios/

Project SAFFIRE – https://audiolab.york.ac.uk/saffire/

Our Sponsors

Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

HOLOPLOT is a Berlin-based pro-audio company, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is pioneering a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit holoplot.com.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 94 John Johnson (HHB – Multi Channel Sound Systems)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Chief Technology Officer at HHB Communications – John Johnson from London, UK.

JJ has nearly two decades of experience supplying and supporting audio-visual equipment for post-production facilities, broadcasters and music studios worldwide. Managing the Pre-Sales, Training & Technical Support teams at HHB Communications, his focus on technology includes assisting the widespread adoption of immersive audio formats such as Dolby Atmos and Apple Spatial Audio, and various implementations of Audio Visual over IP solutions – predominantly based on the Audinate Dante platform.

JJ covers in depth the entire process of installing a multi-channel sound system, from small boutique production spaces to large-scale broadcasters and post facilities, including speaker set-ups, consultations, assisted installation and calibration, along with the associated challenges and opportunities.

Listen to Podcast

Show Notes

JJ LinkedIn – www.linkedin.com/in/jajola

HHB Website – www.hhb.co.uk

Visit London’s premier pro audio demonstration facility. Visitors have the opportunity to audition an array of studio solutions, including Dolby Atmos. The HHB team of audio experts will be available to help customers navigate all the options available today. Book an appointment – www.hhb.co.uk/demo-request

The Home Entertainment Dolby Atmos Room Design Tool (DART) – https://professionalsupport.dolby.com/s/article/The-Home-Entertainment-Dolby-Atmos-Room-Design-Tool-v6-0-0-is-Now-Available?language=en_US

Our Sponsors

Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

HOLOPLOT is a Berlin-based pro-audio company, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is pioneering a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit holoplot.com.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 93 Juergen Herre (MPEG-I)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Chief Executive Scientist for Audio and Multimedia fields at Fraunhofer International Audio Laboratories – Juergen Herre from Erlangen, Germany.

Juergen Herre received a degree in Electrical Engineering from Friedrich-Alexander-Universität in 1989 and a Ph.D. degree for his work on error concealment of coded audio. He joined the Fraunhofer Institute for Integrated Circuits (IIS) in Erlangen, Germany, in 1989, where he has been involved in the development of perceptual coding algorithms for high-quality audio, including the well-known ISO/MPEG-Audio Layer III coder (aka “MP3”). In 1995, he joined Bell Laboratories for a postdoc term working on the development of MPEG-2 Advanced Audio Coding (AAC). At the end of 1996, he returned to Fraunhofer IIS to work on the development of more advanced multimedia technology, including MPEG-4, MPEG-7, MPEG-D, MPEG-H and MPEG-I, currently serving as the Chief Executive Scientist for the Audio/Multimedia activities at Fraunhofer IIS, Erlangen. In September 2010, Prof. Dr. Herre was appointed full professor at the University of Erlangen and the International Audio Laboratories Erlangen. He is an expert in low-bit-rate audio coding/perceptual audio coding, spatial audio coding, parametric audio object coding, perceptual signal processing and semantic audio processing. Prof. Dr.-Ing. Herre is a Fellow of the Audio Engineering Society (AES), chair of the AES Technical Committee on Coding of Audio Signals and vice chair of the AES Technical Council. He is a senior member of the IEEE, a member of the IEEE Technical Committee on Audio and Acoustic Signal Processing, served as an associate editor of the IEEE Transactions on Speech and Audio Processing and was an active member of the MPEG audio subgroup for almost three decades.

Juergen explains the science and key technology concepts behind the globally adopted family of MPEG codecs, and we discuss the latest addition to the family: the reference model for the virtual and augmented reality audio standard, MPEG-I Immersive Audio.

Listen to Podcast

Show Notes

Juergen Herre – https://www.audiolabs-erlangen.de/fau/professor/herre

International Audio Laboratories, Erlangen – https://www.audiolabs-erlangen.de

Fraunhofer Institute for Integrated Circuits (IIS) – https://www.iis.fraunhofer.de/en/ff/amm/for/audiolabs.html

Friedrich-Alexander-Universität Erlangen-Nürnberg – https://www.fau.eu/

AES Paper MPEG-I Immersive Audio — Reference Model For The Virtual/Augmented Reality Audio Standard – https://www.aes.org/e-lib/browse.cfm?elib=22127

Perceptual Audio Codecs Tutorial “What to Listen For” – https://aes2.org/resources/audio-topics/audio_coding/perceptual-audio-codecs/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 91 SXSW 2024 Panel Acceptance Announcement

Summary

We have a big announcement to make – the Immersive Audio Podcast is going to SXSW 2024!

We are hosting the panel “State of Play of Immersive Audio: Past, Present & Future”.

It’s been almost six years since we started the Immersive Audio Podcast, and as we approach our 100th episode we want to mark this milestone with a special edition at SXSW 2024. Over the course of almost 100 episodes, we’ve met a lot of companies and experts covering a broad spectrum of topics fundamental to our industry. This panel will highlight the key developments that have defined the immersive audio industry over the past decade, reflect on current trends and look forward to the future.

Our expert guests will feature Audioscenic Limited covering binaural audio over speakers on consumer devices and HOLOPLOT discussing spatial audio for large-scale immersive events. We’ll moderate the panel as well as discuss our own work in the space of interactive live performance in immersive spaces and spatial and interactive sound design in immersive media production.

Audioscenic – https://www.audioscenic.com/

HOLOPLOT – https://holoplot.com/

If you’d like to learn more about our panellists, go to your favourite podcast app or our website and check out Episodes 75/76 featuring Audioscenic (Binaural Audio Over Speakers) and Episodes 77/85 featuring HOLOPLOT (3D Audio-Beamforming and Wave Field Synthesis).

Our audience will receive a comprehensive overview of what defines immersive audio as a whole in the modern multifaceted world of digital media. We’ll provide an overview of core technologies, formats and distribution platforms along with associated challenges and opportunities. Attentive audience members will get a range of unique perspectives from panel experts that can be helpful for education and business development.

This panel is aimed at Sound Engineers, Sound Designers, Musicians, Mixers, Gamers, Writers, Audio Tech Consumers, Academics, Immersive Content Makers and general Tech Enthusiasts…

If you’re planning to attend and would like to arrange to meet us, please get in touch via [email protected]

The panel will be recorded and released on our podcast channel as Episode 100.

We’d like to thank all our listeners who voted for us, and thank you for your continued support!

Listen to Podcast

Show Notes

Panel Details, State of Play of Immersive Audio: Past, Present & Future – https://schedule.sxsw.com/2024/events/PP132288

Episode 75 Audioscenic Binaural Audio Over Speakers (Part 1) – https://immersiveaudiopodcast.com/episode-75-audioscenic-binaural-audio-over-speakers-part-1/

Episode 76 Audioscenic Binaural Audio Over Speakers (Part 2) – https://immersiveaudiopodcast.com/episode-76-audioscenic-binaural-audio-over-speakers-part-2/

Episode 77 HOLOPLOT (3D Audio-Beamforming and Wave Field Synthesis) – https://immersiveaudiopodcast.com/episode-77-holoplot-3d-audio-beamforming-and-wave-field-synthesis/

Episode 85 Roman Sick (HOLOPLOT) – https://immersiveaudiopodcast.com/episode-85-roman-sick-holoplot/

Episode 89 John-Henry Dale & Merijn Royaards (Sonic Sphere)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Monica Bolles is joined by musicians and audio engineers John Henry Dale and Merijn Royaards from Miami, US.

John Henry Dale is an immersive media artist, musician and entrepreneur focused on live spatial audio and video performance, based between Miami and New York. He holds an MSc in Digital Composition and Performance from the University of Edinburgh and composes, performs and produces music across a range of genres, from electronica, jazz, funk, Latin, global bass and ambient to avant-garde and serialist composition projects. He has also worked extensively at the confluence of IT, web, AV, live streaming and immersive media technology at the Regional Arts and Culture Council, New World Symphony, Hive Streaming and LinkedIn. Most recently, in July 2023, he worked with Merijn Royaards and the Sonic Sphere project to help create custom spatial audio mixes in SPAT, Reaper and Ableton Live of selected works for the Sonic Sphere residency at The Shed, and also created a personalised spatial audio mix and listening session for Mike Bloomberg and Marina Abramovic. John Henry performed live music from his “In Viridi Lux” spatial audio performance project inside the Sonic Sphere as part of a 2023 Miami Individual Artist grant funded by the National Endowment for the Arts and the Miami-Dade Cultural Affairs Department.

Merijn Royaards is a sound architect, researcher and performer guided by convoluted movements through music, art and spatial studies. The interaction between space and sound in cities with a history or present of conflict has been a recurring theme in his multimedia works to date. His doctoral thesis, awarded in 2020, explores the state-altering effects of sound, space and movement from the Russian avant-garde to today’s clubs and raves. He is one part of a critical essay film practice with artist-researcher Henrietta Williams and teaches sound design for film and installation art at the Bartlett School of Architecture.

JH and Merijn talk about the evolution of the Sonic Sphere as a concept, playback system and performance space. They discuss the practical aspects of crafting and experiencing different spatial audio content within the spherical structures.

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer; check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

John Henry Dale LinkedIn – https://www.linkedin.com/in/johnhenrydale/

John Henry Dale Website – https://johnhenrydale.com/

Merijn Royaards – https://www.linkedin.com/in/merijn-royaards-82867a273/

Sonic Sphere – https://www.sonic-sphere.com

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 88 Dave Marston & Matt Firth (BBC R&D)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the members of the BBC R&D Audio team Dave Marston and Matt Firth from the United Kingdom.

Dave is part of the audio team in BBC R&D, having joined the corporation in 2000. His role in the audio team mainly involves the development of the Audio Definition Model (ADM), Next Generation Audio (NGA) and standardisation. His main area of standardisation work for many years has been at the ITU, representing both the BBC and the UK. Over this period, he has been involved in the standardisation of the ADM, Serial ADM and the BW64 file format, as well as other related standards and reports. Dave has worked closely with the EBU over many years and currently chairs the AS-PSE group on personalised sound experiences; one area the group is currently working on is a production profile for the ADM. Past EBU work includes audio codec subjective testing, the BWAV file format and other ADM-related projects. Dave has also worked on many collaborative projects over the years, some of which were EU-funded (such as ICoSOLE and Orpheus) and some part of the BBC’s Audio Research Partnership working with universities. The most recent area of ADM-related work Dave has been involved in is looking at its use in live production scenarios. This work included a live trial of Serial ADM, the ADM-OSC protocol and NGA codecs at the 2023 Eurovision Song Contest.

Matt Firth is a Project R&D Engineer in the Audio Team at BBC Research and Development and leads a workstream on the production of audio experiences. Matt joined BBC R&D in 2015 and has been working on audio production tools and workflows for the past 8 years with a particular focus on Next Generation Audio (NGA) and spatial audio. His work with the BBC has included developing spatial audio tools for live binaural production at scale for the BBC Proms and developing the production tools used for the ORPHEUS project which demonstrated an end-to-end object-based media chain for audio content. Matt has also been involved in standardisation work around the Audio Definition Model (ADM) through the ITU since 2019. He is part of the development team behind the EAR Production Suite which facilitates NGA production using ADM. Recently, Matt was involved in running the live ADM production trials for the Eurovision Song Contest. He also developed some of the production tools and rendering software used during the trial.

We talk about Next Generation Audio for live event broadcasting, covering aspects such as immersion, interactivity, personalisation and workflows featuring cutting-edge codecs and metadata, including the Audio Definition Model (ADM), Serial ADM (S-ADM) and ADM-OSC.
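For readers curious what an ADM-OSC message actually looks like in practice, here is a minimal, hypothetical sketch in Python using the python-osc package. The /adm/obj/<id>/... address pattern follows the publicly documented ADM-OSC convention, but the host, port, object number and values shown are purely illustrative; consult the current ADM-OSC specification and your renderer's documentation before relying on any of these addresses.

```python
# Hypothetical sketch: sending ADM-OSC-style object position metadata.
# Assumes the third-party python-osc package (pip install python-osc).
# Addresses follow the /adm/obj/<id>/... pattern from the public ADM-OSC
# convention; verify exact addresses and value ranges against the spec.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # illustrative renderer host/port

object_id = 1        # ADM audio object number (illustrative)
azimuth = -30.0      # degrees
elevation = 10.0     # degrees
distance = 1.0       # normalised distance

# One polar position update for a single object; in a live workflow a
# console or DAW would emit these continuously as the panner moves.
client.send_message(f"/adm/obj/{object_id}/azim", azimuth)
client.send_message(f"/adm/obj/{object_id}/elev", elevation)
client.send_message(f"/adm/obj/{object_id}/dist", distance)
```

In a live setup of this kind, the sending device streams position and gain metadata over the network while the receiving renderer maps it onto whatever loudspeaker layout or binaural output is in use.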

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer; check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

Dave Marston LinkedIn – https://www.linkedin.com/in/dave-marston-5231961/

Matt Firth LinkedIn – https://www.linkedin.com/in/matt-firth-mf/

BBC R&D Website – https://www.bbc.co.uk/rd

BBC R&D Blog – https://www.bbc.co.uk/rd/blog

Live Next Generation Audio trial at Eurovision 2023 – https://www.bbc.co.uk/rd/blog/2023-06-eurovision-next-generation-audio

The EAR Production Suite (EPS) – https://ear-production-suite.ebu.io/

L-ISA – https://l-isa.l-acoustics.com/

New AirPods Pro Support ‘groundbreaking ultra-low latency audio protocol’ for Vision Pro – https://www.roadtovr.com/apple-vision-pro-low-latency-audio-protocol-airpods-pro/

Razer is Releasing Noise Cancelling Wireless Earbuds for Quest 3 – https://www.roadtovr.com/razer-quest-3-noise-cancelling-earbuds/

Audiomovers release landmark plugin Binaural Renderer for Apple Music – https://audiomediainternational.com/audiomovers-release-landmark-plugin-binaural-renderer-for-apple-music/

SPAT Revolution Now Supports Audio-Technica’s BP3600 Immersive Audio Microphone – https://audioxpress.com/news/spat-revolution-now-supports-audio-technica-s-bp3600-immersive-audio-microphone

Sphere Las Vegas – https://www.thespherevegas.com/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 87 Lorenzo Picinali (Imperial College London)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by the academic and researcher at Imperial College – Lorenzo Picinali from London, United Kingdom.

Lorenzo Picinali is a Reader at Imperial College London, leading the Audio Experience Design team. His research focuses on spatial acoustics and immersive audio, looking at perceptual and computational matters as well as real-life applications. In recent years, Lorenzo has worked on projects related to spatial hearing and rendering, hearing aid technologies, and acoustic virtual and augmented reality. He has also been active in the field of eco-acoustic monitoring, designing autonomous recorders and using audio to better understand humans’ impact on remote ecosystems.

Lorenzo talks about the breadth of research initiatives in spatial audio under his leadership of the Audio Experience Design group, and we discuss the recently published SONICOM HRTF Dataset, developed to improve personalised listening experiences.

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer; check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

Lorenzo Picinali – https://www.imperial.ac.uk/people/l.picinali

Imperial College London – https://www.imperial.ac.uk/

Audio Experience Design – https://www.axdesign.co.uk/

SONICOM Website – http://www.sonicom.eu

The SONICOM HRTF Dataset – https://www.axdesign.co.uk/publications/the-sonicom-hrtf-dataset

The SONICOM HRTF Dataset AES Paper – https://www.aes.org/e-lib/browse.cfm?elib=22128

Immersive Audio Demonstration – https://www.youtube.com/watch?v=FWmKNNQpZJA&t=2s

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.