Episode 126 (SXSW 2026 Panel – Audible Dimensions)

Summary

In this episode of the Immersive Audio Podcast, we travel to Austin, Texas, for the 2026 edition of the SXSW festival to present the panel – Audible Dimensions: Expanding Story Through Spatial Audio.

This panel explores the latest innovations in immersive sound technology and their practical applications from three perspectives: the software and hardware capabilities of hosting platforms, spatial audio formats, and the content production workflows that bring these experiences to life.

In this session, Dolby also shares how they’ve evolved Dolby Atmos to support new types of extended reality experiences with the new Dolby Atmos Immersive Media Production Suite. Learn more and request early access at professional.dolby.com/daimps.

As storytelling expands into extended reality media, such as immersive films, VR, MR, AI glasses and location-based installations, spatial audio is emerging as one of the key narrative forces. With today’s advanced head-mounted display technology, audiences expect not just 3D visuals but equally rich immersive sound that conveys presence, space and movement.

This episode’s recording is supported by our friends at Nomono.

This panel features: 

Monica Bolles – Creative Lead/Owner at Resonant Interactions – has been working with spatial audio since 2011, beginning with her local planetarium’s 15.1 surround system. As both technician and artist, she collaborates with composers and artists to realise immersive spatial works and also creates her own large-scale multimedia installations. Her work has featured at events like the Conference of World Affairs and Virginia Tech’s Cube Fest. She co-hosts the Immersive Audio Podcast and NOTAM’s Spatial Audio Meetups. Monica regularly curates, moderates, and presents on spatial audio at festivals and conferences, including New Visions Festival, SXSW, IRCAM Forums, Ableton Loop, IMERSA Summit, and NIME. She runs Resonant Interactions, a company focused on immersive experience design and music production.

Eric Cheng is an award-winning photographer, creative technologist, and Director of Immersive Media at Meta Reality Labs. Having produced 8 Emmy-nominated VR films and experiences, Eric is at the forefront of exploring emerging media formats. Before joining Meta, Eric was Director of Aerial Imaging at DJI and Director of Photography at Lytro. In a previous life, Eric was on the ocean as a professional underwater photographer, during which he published widely, led expeditions to remote locations on the planet, and gave talks about the intersection of technology and storytelling at events including TEDx, EG, TTI/Vanguard, DEFCON, CES, SXSW, and others. He is a passionate advocate for ocean conservation, having served on the Board of Directors of Shark Savers (now part of WildAid) and the Board of Advisors of Sea Shepherd Conservation Society. Eric holds degrees in computer science from Stanford University, where he also studied classical cello performance. He lives in the San Francisco Bay Area with his wife and two sons.

David Gould currently leads the Content Creation and Distribution Solutions team at Dolby, responsible for the professional ecosystems that enable the creation of spectacular entertainment content in Dolby Atmos and Dolby Vision. Prior to joining Dolby in 2012, David was a Senior Product Manager at Avid Technology, where he was responsible for Pro Tools software – the industry standard Digital Audio Workstation (DAW). David started his career in London as a recording engineer at Abbey Road Studios, specialising in orchestral film scoring; he joined Avid in 2005, where he held various positions in technical sales before moving into product management.

Victor Agulhon – Co-founder & CEO at TARGO, the multi-award-winning and four-time Emmy®-nominated immersive documentary studio. TARGO creates the future of immersive entertainment, producing industry-leading non-fiction experiences for virtual, mixed and augmented reality and spatial computers, including Vision Pro, Meta Quest, and GalaxyXR. As a member of the Television Academy’s Emerging Media Peer Group, he helps make immersive technologies more accessible to the film and television industry. He also coaches young entrepreneurs, helping them accelerate their growth and build companies more quickly.

Show Notes

Eric Cheng LinkedIn

David Gould LinkedIn 

Victor Agulhon  LinkedIn

Listen to Podcast

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon. The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A. Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 125 Jack Reynolds (BBC R&D)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by the musician, sound engineer, microphone designer and immersive media producer – Jack Reynolds from London, UK.

We discuss Jack’s extensive career in music and spatial audio, the subsequent move to BBC R&D, UCL, and his own company, Reynolds Microphones. We talk about the importance of ambisonics for location sound and immersive documentary production.

Jack Reynolds founded Reynolds Microphones in 2016, following a diverse and accomplished journey through the music and audio industry. He began as a performer, composer, mix engineer, producer, and studio owner. At just 20, he achieved major label success with the band Jocasta, signed to Sony Music in 1996. Their single Go was A-listed by BBC Radio 1, followed by appearances at major festivals worldwide.

Alongside his musical career, working with artists such as Kieran Hebden (Four Tet) and others, Jack studied music production at Islington Music Workshop. He also worked as a recording engineer, producer, guitarist, and composer at Wendyhouse Studios, collaborating with bands including Geezers of Nazareth, Lowfinger, Human, Dempsey, and many others. This led to roles as a recording engineer for EQ Studios and Fire Records, before he went on to design and build his own facility, Jack In The Box Studios in North London, in 2008.

In 2010, he launched Opal Microphones Ltd, his first venture into developing recording equipment, including the acclaimed OM7 multipattern valve microphone. His growing interest in the science of sound led him to pursue a degree in Electronic and Electrical Engineering at University College London, where he won the Advanced Entrepreneurathon, a milestone that led directly to the founding of Reynolds Microphones.

In parallel, Jack founded Soho VR Audio Ltd in 2016, delivering spatial audio productions for clients including Sky VR, Google, and Nokia, while also completing the BBC Research and Development Graduate Programme.

Today, Jack leads the development of innovative sound capture technologies at Reynolds Microphones, where each product is hand-built in Hertfordshire with a strong focus on precision and quality. Inspired by pioneers such as Georg Neumann, the company has developed a range of high-end microphones, including the R-Type valve series, RM1-V condenser models, and the latest ambisonic A-Type range. Designed for immersive sound capture in virtual reality, 360 video, post-production, and field recording, the A-Type microphones are pushing the boundaries of spatial audio and have become an essential tool for many professionals. Upcoming releases include the O-Type and C-Type miniature omni and cardioid condenser microphones, as well as the S-Type spaced arrays.

Alongside running the company, Jack is also a Senior Lecturer in Spatial Audio Production at University College London, where he contributes to research in immersive audio, live performance in the metaverse, and next-generation sound production, helping to inspire the next generation of audio practitioners. All of this is balanced alongside his role leading immersive media production as Development Producer for BBC R&D’s FWD Team, including recent projects such as The Portal, a series of live volumetric video music experiences for BBC Radio 1’s New Music Show.

Show Notes

Jack Reynolds LinkedIn

Reynolds Microphones Official Website

Listen to Podcast

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon.

The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A. Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 124 John Wills (POSITIVE AMBISONICS)

Announcement

We’re excited to share that we’ll be presenting “Audible Dimensions: Expanding Story Through Spatial Audio” at SXSW 2026 in Texas.

Our panel will be moderated by the creative lead at Resonant Interactions and Immersive Audio Podcast co-host, Monica Bolles, and will feature special guests: David Gould, Director of Content Creation & Distribution Solutions at Dolby Laboratories; Eric Cheng, Director of Immersive Media at Meta; and Oliver Kadel, Senior Sound Technologist for Immersive Media at 1.618 DIGITAL.

As storytelling expands into extended reality media, such as immersive films, VR, MR, AI glasses and location-based installations, spatial audio is emerging as one of the key narrative forces. With today’s advanced head-mounted display technology, audiences expect not just 3D visuals but equally rich immersive sound that conveys presence, space and movement. This panel explores the latest innovations in immersive sound technology and their practical applications from three perspectives: the software and hardware capabilities of hosting platforms, spatial audio formats, and the content production workflows that bring these experiences to life.

See you soon in Austin!

Summary

In this episode of the Immersive Audio Podcast, Monica Bolles is joined by the musician, sound engineer and educator – John Wills from Tayvallich, Scotland.

We discuss John’s initiative, Positive Ambisonics, a non-formal creative and educational residency programme focused on developing practical skills in recording and working with spatial audio.

John has been experimenting with sound recording from an early age, starting with a family reel-to-reel tape recorder before moving on to multitracking and looping cassettes on several machines. He spent the summer of 1978 at The Slade School of Art, helping a friend make soundscapes for his finals with a VCS 3 synth and an eight-track recorder, and was hooked. In the early ’80s, he discovered Eno’s ambient speaker system and recreated it with thirty car speakers for a very early spatial installation in London. He is also a musician and has toured internationally with two art noise bands on the Beggars Banquet label (Loop and The Hair & Skin Trading Co.), achieving top 40 albums.

In 2017, John went to Orkney to work with BBC sound recordist Chris Watson, creating a quadraphonic piece for the Orkney Festival at St Magnus Cathedral in Kirkwall. During this time, Chris introduced John to ambisonic recording, and he has since specialised in spatial sound production, working on spatial installations. During the COVID lockdown, he presented The Great John Cage Project Podcast, playing recordings of 4 minutes and 33 seconds that people sent in of their environments. With Pinkie Maclure, he co-created a live musical ambisonic performance for eight speakers involving audio capture and manipulation, performed four times as part of her three-month exhibition at the Centre for Contemporary Arts in Glasgow.

He has now created a small eight-speaker ambisonic studio called +VE on the fringe of the Scottish rainforest in Argyll, inviting artists of different disciplines to record in the studio and on location to experiment with spatial production techniques. Future projects are planned in Arctic caves and Irish bogs.

Show Notes

John Wills LinkedIn

Positive Ambisonics Official Website

Listen to Podcast

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon. The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A. Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 123 Dan Barry (Spatial Audio & AI)

Announcement

We’re excited to share that we’ll be presenting “Audible Dimensions: Expanding Story Through Spatial Audio” at SXSW 2026 in Texas.

Our panel will be moderated by the creative lead at Resonant Interactions and Immersive Audio Podcast co-host, Monica Bolles, and will feature special guests: David Gould, Director of Content Creation & Distribution Solutions at Dolby Laboratories; Eric Cheng, Director of Immersive Media at Meta; and Oliver Kadel, Senior Sound Technologist for Immersive Media at 1.618 DIGITAL. As storytelling expands into extended reality media, such as immersive films, VR, MR, AI glasses and location-based installations, spatial audio is emerging as one of the key narrative forces.

With today’s advanced head-mounted display technology, audiences expect not just 3D visuals but equally rich immersive sound that conveys presence, space and movement. This panel explores the latest innovations in immersive sound technology and their practical applications from three perspectives: the software and hardware capabilities of hosting platforms, spatial audio formats, and the content production workflows that bring these experiences to life.

See you soon in Austin!

Summary

In this episode of the Immersive Audio Podcast, Monica Bolles is joined by the software engineer and researcher – Dan Barry from Dublin, Ireland. We discuss Dan’s work and research in digital signal processing, data analytics and AI/machine learning for projects centred on spatial psychoacoustics and sound source separation for music.

Dan is a four-time startup founder with a PhD in audio signal processing. Throughout his career, he has focused on translating academic research into products, spinouts and licences. In 2006, he established the Audio Research Group at the Dublin Institute of Technology and served as its manager until 2011. The group grew to 12 researchers and produced a wide body of published work in music, speech, and audio signal processing, including several patents. In 2007, he licensed his audio source separation patent to Sony for use in the popular game SingStar on the PlayStation 3.

In 2011, he co-founded the music education startup Riffstation, where he served as CEO until its acquisition by Fender in 2015; he then served as Fender’s Vice President of Research and Development. In 2018, he left Fender and co-founded the spatial upmixing company VRX Audio.

He returned to academia in 2020 and took up a research position at QxLab within the Insight Centre for Data Analytics at UCD. During that appointment, he designed and delivered Go Listen, an online subjective listening test platform which has been widely adopted by academia and industry. In 2021, he co-founded another startup, GuitarApp, which now provides free online guitar lessons and tools to over 350,000 visitors monthly.

Show Notes

Dan Barry LinkedIn

Go Listen! 

Binamix: A Python Library for Generating Binaural Audio Datasets.

BINAQUAL: A Full-Reference Objective Localisation Similarity Metric for Binaural Audio.

Binaspect: A Python Library for Binaural Audio Analysis, Visualisation & Feature Generation.

Systematic Evaluation of Time-Frequency Features for Binaural Sound Source Localisation. 

EgoMusic: An Egocentric Augmented Reality Glasses Dataset for Music. 

Listen to Podcast

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon. The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A. Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 122 Justin Gray (IMMERSED)

Announcement

We’re excited to share that we’ll be presenting “Audible Dimensions: Expanding Story Through Spatial Audio” at SXSW 2026 in Texas.

Our panel will be moderated by the creative lead at Resonant Interactions and Immersive Audio Podcast co-host, Monica Bolles, and will feature special guests: David Gould, Director of Content Creation & Distribution Solutions at Dolby Laboratories; Eric Cheng, Director of Immersive Media at Meta; and Oliver Kadel, Senior Sound Technologist for Immersive Media at 1.618 DIGITAL.

As storytelling expands into extended reality media, such as immersive films, VR, MR, AI glasses and location-based installations, spatial audio is emerging as one of the key narrative forces. With today’s advanced head-mounted display technology, audiences expect not just 3D visuals but equally rich immersive sound that conveys presence, space and movement. This panel explores the latest innovations in immersive sound technology and their practical applications from three perspectives: the software and hardware capabilities of hosting platforms, spatial audio formats, and the content production workflows that bring these experiences to life.

See you soon in Austin!

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the award-winning bassist, composer, producer and engineer – Justin Gray from Toronto, Canada.  

We take a deep dive into Justin’s recently Grammy-nominated album IMMERSED and what it takes to produce an album in spatial audio from the ground up – from preproduction, writing, arranging, recording, editing, mixing and mastering to, of course, marketing and distribution.

Justin Gray is a leader in immersive audio who has mixed and mastered thousands of songs in Dolby Atmos, collaborating with artists, producers, and labels worldwide. His credits as an immersive engineer include Olivia Rodrigo, Snoop Dogg, The Tragically Hip, Brandy, Mother Mother, Blue Rodeo, Arkells, Jann Arden, Karan Aujla, Mae Martin, Marcin, Lola Brooke, and Josh Ross.

Justin’s latest release, Immersed, is a cinematic visual album composed, recorded, and produced specifically for immersive audio. Featuring 38 musicians from around the world, the project places the listener at the centre of a global orchestra, with instruments and voices surrounding them from all directions. Each piece was written to unfold around the listener, evolving across a three-dimensional soundscape. 

A spatial vision guided the recording process; every sound was performed and captured with its place in the immersive field in mind. The result is an experience in which the Dolby Atmos mix isn’t a translation of the music; it is the music. 

A Juno Award winner and lifelong student of Indian classical music, Justin’s work reflects a deep commitment to cross-cultural collaboration and a desire to reimagine how music can be created, shared, and experienced.

Show Notes

Justin Gray Sound Website 

IMMERSED Album Trailer

Justin Gray YouTube 

Justin Gray LinkedIn

Listen to Podcast

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon. The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A. Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 121 Ceri Thomas (IMMERSV)

Announcement

We’re excited to announce the fifth instalment of the Immersive Audio Podcast Masterclass series. As always, the sessions are designed to enhance your practical learning experience and are delivered by world-class experts.

We proudly present our guests for this session: the Professor of Audio Technology at Chalmers University of Technology – Jens Ahrens, the sound designer and co-founder of DELTA Soundworks – Ana Monte, and the technical lead and co-founder of DELTA Soundworks – Daniel Deboy.

This masterclass will focus on “Spatial Audio – Practical Master Guide”, the first Acoucou Courseware programme where technology meets art. This free course blends the technical intricacies of spatial audio with the creative skills needed to craft compelling content, with extensive cross-references to other engineering modules that give learners a broad understanding of how everything connects.

🔗 https://spatial-audio.acoucou.org/

We’ll cover the structure of the course, history of spatial audio and evolution of sound aesthetics, psychoacoustics of stereophony, and different applications of spatial audio and delivery formats. Please sign up with the link below.

We’re looking forward to seeing you there!

Event Sign Up Page

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the immersive audio workflow engineer Ceri Thomas from LA, US. Ceri shares the personal and professional journey that led him to work with Dolby and Apple at a pivotal moment, when spatial audio began to enter the mainstream. He also discusses his latest consultancy venture, IMMERSV. This week’s hot topic was the critical balance between quality and mass adoption of immersive audio in music.

Ceri Thomas is a leader in immersive audio and workflow innovation whose career spans film, VR, music, and product development. Beginning in London during a period of rapid change in post-production, he enabled creative teams to work without technical barriers as the industry transitioned from tape to disk, RADAR to Pro Tools, and analogue to audio-over-IP. 

At Soundelux, Danetracks, and Todd-AO, he was deeply involved in the early rollout of theatrical Dolby Atmos, exploring its potential and limitations. He later joined Wylie Stateman’s Twenty Four Seven Sound, creating collaborative technical infrastructure and building an “Immersive Lab” for early-stage Atmos editorial, which led to Dolby recruiting him for its VR audio initiative. 

At Dolby, he worked to make VR audio tools more accessible and then spearheaded the global adoption of Dolby Atmos for music, establishing delivery standards, room design guidelines, and integrating Atmos into top studios worldwide. His collaborations include projects with Manny Marroquin, Josh Gudwin, John McBride, Ann Mincieli, and George Massenburg. 

In 2022, Ceri joined Apple Music as Spatial Audio Technology Lead, where he built the Spatial Audio QC process, redefined listening room and studio configurations, produced documentation for creation and broadcast workflows, and advised on immersive audio specifications internationally. In 2025, he founded IMMERSV, an independent consultancy working with consumer electronics manufacturers, DSPs, and broadcasters to deliver studio design, product development input, and workflow solutions for immersive audio.

Show Notes

Ceri Thomas LinkedIn

IMMERSV

Listen to Podcast

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon. The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A. Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 120 Angela McArthur (UCL)

Announcement

We’ve just submitted our panel proposal for SXSW 2026: “Audible Dimensions: Expanding Story Through Spatial Audio”. We’ll explore how spatial audio is not just a technical enhancement but a powerful storytelling tool. From shaping emotion and perspective to guiding audience engagement, we’ll discuss how designing for immersive media means writing for space as well as time. As platforms like Meta, Apple Vision Pro, and Android XR raise audience expectations, spatial audio is becoming indispensable to immersive storytelling.

Please cast your vote and support our session here.

Thank you!

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the spatial sound artist and academic Angela McArthur from London, UK. We dive into the IKO, a 3D inside-out playback loudspeaker system developed by Sonible and IEM, which offers unique characteristics, making the architecture of the space part of the sound presentation.

Dr. Angela McArthur leads an interdisciplinary (art/science) MA in spatial sound in the Department of Anthropology at University College London. Her work centres on underrepresented onto-epistemologies, ocean environments, and intersecting sites of artistic practice. She works with the IKO loudspeaker and other spatial systems, and champions diversity in access and representation in sound and beyond.

Show Notes

Angela McArthur LinkedIn

Angela McArthur Academic Profile

Computer Music Multidisciplinary Research (CMMR) Symposium 2025 

IKO

UCL

Listen to Podcast

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon.

The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A.

Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 119 Tim Archer (Audio for Giant Screens & Themed Attractions)

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon.

The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A.

Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the production sound mixer, sound designer and mixing engineer – Tim Archer from Victoria, Canada. We discuss Tim’s approach to creating immersive experiences for giant screens and themed attractions, from location sound to sound design, final mix and speaker system design at the installation venues.

Tim Archer is a Canada-based audio production and post-production designer, specialising in sound design and installation for Giant Screen (IMAX) films, fulldomes, science centres, FlyOvers and themed attractions. With over 40 years’ experience, Tim is versed in all aspects of audio production and post-production, from multi-channel location recording to sound design to “in-theatre” mixing, as well as hardware design and installation for special venues and themed attractions.

Tim’s approach to hardware design comes from a creative rather than a technical direction. With over four decades of experience creating and mixing multi-channel projects in venues of all sizes, shapes and configurations, he strives to achieve the best-sounding experience within the physical space. Recent projects include the Britannia Mine Museum, the Great Bear Rainforest IMAX film, the Pier D Vancouver International Airport multimedia installation, Flyover Chicago, the Cape Breton Miner’s Museum and the newly constructed exhibits at the Lowell Observatory.

At present, Tim is in the early stages of launching a self-funded personal endeavour, “The Just Listen Project”, creating multi-channel and binaural sound stories around the world to help introduce people to soundscape ecology and to encourage the lost art of listening.

Show Notes

Tim Archer LinkedIn

Official Website

Listen to Podcast

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 118 Julian Treasure (The Forgotten Skills of Speaking and Listening)

Immersive Audio Podcast Masterclass

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon.

The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A.

Keep up to date with our upcoming events, announcements and industry news by subscribing to our newsletter.

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the international speaker, author and communication expert – Julian Treasure, from Orkney, Scotland.

Julian Treasure is a top-rated international speaker on sound and the critical communication skills of listening and speaking. Collectively, Julian’s five TED Talks have been viewed over 150 million times, and How to Speak So That People Want to Listen is the sixth most-viewed TED Talk of all time. In live or virtual keynotes, or in bite-sized appearances via ThinkersOne, Julian delivers engaging, entertaining and transformational content. His talks enhance business effectiveness, particularly for those in sales or leadership roles, as well as enriching relationships at work and at home. His presenting skills and innovative use of sound make his talks visceral and potent experiences that are consistently highly rated by delegates. In addition to keynote speeches, Julian can also arrange structured training or workshops for your company.

In this episode, we dive deep into the fundamentals of human communication, discussing the profound importance of listening and powerful speaking and its universal impact on our lives.

Show Notes

Julian Treasure LinkedIn 

Official Website

Listening Society 

Sound Affects: How Sound Shapes Our Lives, Our Wellbeing and Our Planet

 

Listen to Podcast

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 117 Jeremy Dalton (Reality Check)

Announcement

We’re excited to announce the fourth instalment of the Immersive Audio Podcast Masterclass series. As always, the sessions are designed to enhance your practical learning experience and are delivered by world-class experts. We’ll be providing video demonstrations, spatial audio playback and an exclusive opportunity to interact with our expert guests.

We proudly present our expert guest from Nomono, the lead audio engineer and head of content – Ruben Åeng. Nomono is a portable spatial audio recording system paired with powerful cloud-based processing for content creators with different needs. In this session, we’ll cover recording location sound and dialogue in 3D, plus post-production and authoring using the Nomono spatial sound recorder and software: https://nomono.co/spatial-audio-with-nomono

Event Sign Up Page

To access the content from our Immersive Audio Podcast Masterclass series, head over to our Patreon page: www.patreon.com/c/immersiveaudiopodcast. The sessions are designed to enhance your practical learning experience and are delivered by world-class experts. The livestream contains video demonstrations and spatial audio playback with live Q&A.

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the author of Reality Check, advisor to FOV Ventures, and a global speaker on the subject of innovation and business – Jeremy Dalton, from London, UK.

Jeremy Dalton has delivered emerging technology solutions and insights for Fortune 500 companies, startups, NGOs, and government clients worldwide. He established PwC’s global immersive technologies team and served as an expert on the topic for the World Economic Forum. His work has featured in the Financial Times, the Economist and the BBC, and has been cited by industry leaders including Google and Microsoft. He is currently focused on the cutting-edge integration of XR with AI and other emerging technologies, implementing convergent solutions to help businesses deliver even greater impact.

In this episode, we discuss emerging technology and the convergence between XR and AI, along with its impact on the creative industries.

Show Notes

Jeremy Dalton LinkedIn

Official Website

Get a copy of the new edition of Reality Check (40% off with code SALE40 until 31 May)

VR vs video conferencing study

In My Shoes 

FOV Ventures 

Listen to Podcast

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.