Episode 88 Dave Marston & Matt Firth (BBC R&D)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow using the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by BBC R&D Audio Team members Dave Marston and Matt Firth from the United Kingdom.

Dave is part of the audio team in BBC R&D, having joined the corporation in 2000. His role in the audio team mainly involves the development of the Audio Definition Model (ADM), Next Generation Audio (NGA), and standardisation. His main area of standardisation work for many years has been at the ITU, representing both the BBC and the UK. Over this period, he has been involved in the standardisation of the ADM, Serial ADM, and the BW64 file format, as well as other related standards and reports. Dave has worked closely with the EBU for many years and currently chairs the AS-PSE group on personalised sound experiences; one area the group is currently working on is a production profile for the ADM. Past EBU work includes audio codec subjective testing, the BWAV file format, and other ADM-related projects. Dave has also worked on many collaborative projects over the years, some of which were EU-funded (such as ICoSOLE and Orpheus) and some of which were part of the BBC’s Audio Research Partnership with universities. His most recent area of ADM-related work has been its use in live production scenarios, including a live trial of Serial ADM, the ADM-OSC protocol, and NGA codecs at the 2023 Eurovision Song Contest.

Matt Firth is a Project R&D Engineer in the Audio Team at BBC Research and Development and leads a workstream on the production of audio experiences. Matt joined BBC R&D in 2015 and has been working on audio production tools and workflows for the past 8 years with a particular focus on Next Generation Audio (NGA) and spatial audio. His work with the BBC has included developing spatial audio tools for live binaural production at scale for the BBC Proms and developing the production tools used for the ORPHEUS project which demonstrated an end-to-end object-based media chain for audio content. Matt has also been involved in standardisation work around the Audio Definition Model (ADM) through the ITU since 2019. He is part of the development team behind the EAR Production Suite which facilitates NGA production using ADM. Recently, Matt was involved in running the live ADM production trials for the Eurovision Song Contest. He also developed some of the production tools and rendering software used during the trial.

We talk about Next Generation Audio for live event broadcasting, covering aspects such as immersion, interactivity, personalisation and workflows, featuring cutting-edge codecs and metadata including the Audio Definition Model (ADM), Serial ADM (S-ADM), and ADM-OSC.

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer, check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

Dave Marston LinkedIn – https://www.linkedin.com/in/dave-marston-5231961/

Matt Firth LinkedIn – https://www.linkedin.com/in/matt-firth-mf/

BBC R&D Website – https://www.bbc.co.uk/rd

BBC R&D Blog – https://www.bbc.co.uk/rd/blog

Live Next Generation Audio trial at Eurovision 2023 – https://www.bbc.co.uk/rd/blog/2023-06-eurovision-next-generation-audio

The EAR Production Suite (EPS) – https://ear-production-suite.ebu.io/

L-ISA – https://l-isa.l-acoustics.com/

New AirPods Pro Support ‘groundbreaking ultra-low latency audio protocol’ for Vision Pro – https://www.roadtovr.com/apple-vision-pro-low-latency-audio-protocol-airpods-pro/

Razer is Releasing Noise Cancelling Wireless Earbuds for Quest 3 – https://www.roadtovr.com/razer-quest-3-noise-cancelling-earbuds/

Audiomovers release landmark plugin Binaural Renderer for Apple Music – https://audiomediainternational.com/audiomovers-release-landmark-plugin-binaural-renderer-for-apple-music/

SPAT Revolution Now Supports Audio-Technica’s BP3600 Immersive Audio Microphone – https://audioxpress.com/news/spat-revolution-now-supports-audio-technica-s-bp3600-immersive-audio-microphone

Sphere Las Vegas – https://www.thespherevegas.com/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 87 Lorenzo Picinali (Imperial College London)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow using the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk. *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by academic and researcher Lorenzo Picinali of Imperial College London, United Kingdom.

Lorenzo Picinali is a Reader at Imperial College London, leading the Audio Experience Design team. His research focuses on spatial acoustics and immersive audio, looking at perceptual and computational matters, as well as real-life applications. In recent years, Lorenzo has worked on projects related to spatial hearing and rendering, hearing aid technologies, and acoustic virtual and augmented reality. He has also been active in the field of eco-acoustic monitoring, designing autonomous recorders and using audio to better understand humans’ impact on remote ecosystems.

Lorenzo talks about the breadth of research initiatives in spatial audio under his leadership of the Audio Experience Design group, and we discuss the recently published SONICOM HRTF Dataset, developed to improve personalised listening experiences.

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer, check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

Lorenzo Picinali – https://www.imperial.ac.uk/people/l.picinali

Imperial College London – https://www.imperial.ac.uk/

Audio Experience Design – https://www.axdesign.co.uk/

SONICOM Website – http://www.sonicom.eu

The SONICOM HRTF Dataset – https://www.axdesign.co.uk/publications/the-sonicom-hrtf-dataset

The SONICOM HRTF Dataset AES Paper – https://www.aes.org/e-lib/browse.cfm?elib=22128

Immersive Audio Demonstration – https://www.youtube.com/watch?v=FWmKNNQpZJA&t=2s

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 84 Alejandro Cabrera (Audio Brewers)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is pioneering a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by Alejandro Cabrera, Audio Software Developer at Audio Brewers, from Athens, Greece.

Alejandro Cabrera is an Audio Software Developer and the Founder of Audio Brewers. Originally from Colombia, he is currently based in Athens, Greece. He studied Modern Music with an Emphasis in Jazz (Taller de Musics, Barcelona, Spain), a BA (Hons) in Music Production and Sound Engineering (University of Wales), and an MSc in Sound Design (Edinburgh Napier University). Alejandro has been developing audio tools for over 10 years, starting with his first sample library, ‘kFootsteps’. While working at 8Dio Productions as a producer and later creative director, Alejandro was involved in the development of over 100 sample libraries. Additionally, he participated in the development of Dave Smith’s Sequential Prophet X/XL, which won the TEC Award for Best New Musical Instrument in 2019. Alejandro founded Audio Brewers in 2020, the first company to develop virtual instruments recorded, mixed, and delivered in Ambisonics for dedicated immersive audio productions.

We talk about Audio Brewers’ unique set of spatial audio tools and features, designed for a fast workflow across different creative applications, and Alejandro explains his concept of impressionism in immersive audio.

Listen to Podcast

Show Notes

Alejandro Cabrera LinkedIn – https://www.linkedin.com/in/alejocazu/

Audio Brewers Website – https://www.audiobrewers.com/

Audio Brewers Youtube Channel – https://www.youtube.com/audiobrewers

ab Encoder – https://www.audiobrewers.com/plugins/p/ab-encoder

ab PitchShifter – https://www.audiobrewers.com/plugins/p/ab-pitchshifter

ab Stutter – https://www.audiobrewers.com/plugins/p/ab-stutter

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 83 SXSW 2024 Panel Picker Announcement

Summary

SXSW 2024 Panel Picker Announcement 📢 “State of Play of Immersive Audio: Past, Present & Future”

It’s been almost six years since we started the Immersive Audio Podcast, and as we approach our 100th episode we wanted to mark this milestone with a special edition at SXSW 2024.

With the hindsight of releasing almost 100 episodes, we’ve met a lot of companies and experts covering a broad spectrum of topics fundamental to our industry. This panel will highlight the key developments that have defined the immersive audio industry over the past decade, reflect on current trends and look forward to the future. Our four expert guests and moderators, from Audioscenic, HOLOPLOT and 1.618 DIGITAL, together with Monica Bolles, will cover the key sectors: large-scale immersive events, interactive live performance, spatial audio for consumer devices, virtual training for VR and immersive media production.

Listen to Podcast

Show Notes

Please support our idea and give us your vote!

Voting link -> https://panelpicker.sxsw.com/vote/132288

Voting deadline: midnight, 20th of August 2023.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 79 Jelmer Althuis (Spatial Audio for Wellness)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is pioneering a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by sound designer Jelmer Althuis from Groningen, Netherlands.

Jelmer Althuis is the founder of The Sphere of Sound, audio director at VRelax and sound designer at Aku.World. With a passion for audio and technology, Jelmer has dedicated his career to creating immersive audio experiences for a variety of next-generation media. He is an experienced audio professional with a strong background in sound design and audio production. In 2017 he fully immersed himself in spatial sound design, for VR and art applications in particular. In 2018 he joined the VRelax team as audio director, where he mainly focused on the psychological effect of spatial sound and its influence on well-being. Having created many spatial audio designs that help people feel relaxed, combined audio with biofeedback techniques, and taken part in several research activities, he has learned his craft on the job. He has a strong passion for the spatial audio format because he strongly believes in the added value of spatial audio experiences for eHealth and well-being, as well as for the digital arts and web3 applications.

Jelmer speaks about his work dedicated to wellness and relaxation through sound and we look at the recent proliferation of spatial audio in the E-Health product market.

Listen to Podcast

Show Notes

Jelmer Althuis LinkedIn – https://www.linkedin.com/in/jelmeralthuis/

The Sphere of Sound Website – www.sphereofsound.com

VRelax Website – https://vrelax.com/en/

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm  

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 78 Sean Hudock & Joseph Discher (Knock At The Gate)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is pioneering a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Knock At The Gate duo, Sean Hudock and Joseph Discher, from the East Coast of the US. Sean and Joe speak about the creation of their company, which re-imagines classic storytelling, led by the power of voice and sound and designed to be experienced in the dark.

Sean Hudock is a theatre, audio and film maker based in New York. His producing work includes the world premiere of “Swept Away,” a new Broadway-bound musical from Grammy-nominated folk rock band The Avett Brothers, Tony Award winner John Logan and Tony Award winner Michael Mayer, as well as the creation and development of original award-winning plays which have premiered at Arena Stage, Primary Stages, Ars Nova and beyond. He co-created the new play “Hans & Sophie,” which premiered at Amphibian Stage in early 2020 and received five Dallas Fort Worth Theater Awards including Outstanding New Play. As an actor, he starred on film in Private Romeo, The Chaperone opposite Elizabeth McGovern and Haley Lu Richardson, and Comedy Central’s Alternatino with Arturo Castro, and onstage in leading roles off-Broadway and at Cleveland Play House, Soho Playhouse, Shakespeare Theatre of New Jersey and Alabama Shakespeare Festival. In 2020 he co-founded the non-profit Knock at the Gate, which builds transportive 3D audio experiences around works of Shakespeare and the science of sound, designed for the dark and a pair of headphones. Knock at the Gate’s unique approach to storytelling has since been featured in The New York Times, The Wall Street Journal and American Theatre Magazine, and on NPR and Good Day NY. Sean is drawn to challenging, inventive storytelling, which drives his passion for Shakespeare and immersive audio.

Joseph Discher is a professional stage director with twenty-five years of experience. He has worked in regional theatres across the country and has directed several off-Broadway world premieres. The New York Times has called his work “devastatingly effective,” “enchanting,” and “beautiful.” He is the artistic director and co-founder of Knock at the Gate, which creates immersive audio experiences of Shakespeare’s work “designed for the dark and a pair of headphones.” He has directed Caesar: A Surround Sound Experiment and Macbeth: A Surround Sound Experiment, which was featured on Good Day, New York and NPR and in American Theatre Magazine, The New York Times, The Wall Street Journal, and Playbill. He is currently at work on THE TEMPEST: A Surround Sound Odyssey for Knock at the Gate. Mr. Discher was the associate artistic director and casting director of the Shakespeare Theatre of New Jersey, where he was also a resident director. He has been coaching actors privately for twenty years. Most recently, he directed Irish Rep’s audio drama of Bikeman in honour of the 20th anniversary of 9/11, starring Broadway veteran Robert Cuccioli. Off-Broadway credits: Butler (59E59), The Violin (59E59) and Vilna (St. Clement’s Theatre). Selected credits at Shakespeare Theatre of NJ: The Diary of Anne Frank, To Kill a Mockingbird, Our Town, Henry IV: Part One, A Child’s Christmas in Wales, The Tempest, Amadeus, Of Mice and Men, The Grapes of Wrath, Twelfth Night, Much Ado About Nothing, Romeo and Juliet, Charley’s Aunt, and Wittenberg. Other regional directing credits include Antony and Cleopatra, starring Michael Dorn (Orlando Shakespeare Theatre), Julius Caesar (Shakespeare Festival St. Louis), A Moon for the Misbegotten and My Name is Asher Lev (Playhouse on Park), and As You Like It, Red, and The Weir (Theatreworks). Mr. Discher is also a professional singer and an audiobook narrator for Audible.

Listen to Podcast

Show Notes

Knock At The Gate Official Website – https://www.knockatthegate.com/

Caesar: A Surround Sound Experiment (clip): https://soundcloud.com/knockatthegate/caesar

Macbeth: A Surround Sound Experiment (clip): https://soundcloud.com/knockatthegate/macbeth

Explore immersive sound design – https://developer.apple.com/videos/play/wwdc2023/10271/?mibextid=Zxz2cZ

Build spatial experiences with RealityKit – https://developer.apple.com/videos/play/wwdc2023/10080/?time=1255

Pro Tools 2023.6 Update – https://www.avid.com/de/resource-center/whats-new-in-pro-tools-20236

Paper on MPEG-I Immersive Audio – Reference Model For The Virtual/Augmented Reality Audio Standard – https://www.aes.org/journal/online/jaes.cfm?file=JAES_V71_5/JAES_V71_5_PG229.pdf&elibID=22127

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 76 Audioscenic Binaural Audio Over Speakers (Part 2)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel travels to Southampton, UK, to visit the HQ of Audioscenic, whose mission is to revolutionise binaural audio over speakers and bring it to the masses. Their team has developed technology that uses real-time head tracking and sound-field control to create virtual headphones, rendering personal 3D audio at the listener’s position. Their first commercial success came in the form of a partnership with Razer and the subsequent release of the Leviathan V2 Pro soundbar, announced in early 2023 at CES. Since then, their technology has received a number of tech awards and, above all, the support of the user community. We sat down with the core team members and early adopters to find out where it all started and where it is heading.

Listen to Podcast

Marcos Simón (Co-Founder, CTO)

Marcos Simón graduated in 2010 from the Technical University of Madrid with a B.Sc. in telecommunications. In 2011, he joined the Institute of Sound and Vibration Research, where he worked with loudspeaker arrays for sound field control and 3D audio rendering, and also on the modelling of cochlear mechanics. He obtained his PhD in 2014, and between 2014 and 2019 he was part of the S3A Research Programme “Future Spatial Audio for an Immersive Listening Experience at Home”. In 2019 he co-founded Audioscenic to commercialise innovative listener-adaptive audio technologies, where he currently works as Chief Technical Officer. Since the company’s creation, Marcos has led the vision of Audioscenic and established himself as a nexus between the commercial and technical worlds for the start-up, ensuring that the technology is continually evolving and that customers understand exactly what the technology makes possible.

Professor Filippo Fazi (Co-Founder/Chief Scientist)

Prof Fazi is the co-founder and Chief Scientist of Audioscenic Ltd, where he leads the scientific development of new audio technologies and contributes to the company’s strategic decisions. He is also a Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research (ISVR) of the University of Southampton, where he is the Head of the Acoustics Group and the Virtual Acoustics and Audio Engineering teams. He also served as Director of Research at the Institute and sits on the Intellectual Property panel of the Faculty of Engineering and Physical Sciences. He is an internationally recognised expert in audio technologies, electroacoustics and digital signal processing, with a special focus on 3D audio, acoustical inverse problems, multi-channel systems, and acoustic arrays. He is the author of more than 160 scientific publications and co-inventor of Audioscenic’s patented or patent-pending technologies. Prof Fazi graduated in Mechanical Engineering from the University of Brescia (Italy) in 2005 with a master’s thesis on room acoustics. He obtained his PhD in acoustics from the Institute of Sound and Vibration Research in 2010, with a thesis on sound field reproduction. Prof Fazi was awarded a research fellowship by the Royal Academy of Engineering in 2010 and the Tyndall Medal by the Institute of Acoustics in 2018. He is a fellow of the Audio Engineering Society and a member of the Institute of Acoustics.

David Monteith (CEO)

David is the Chief Executive Officer of Audioscenic Ltd, responsible for the strategic direction of the business. David holds a Master’s degree in Physics, specialising in Opto-Electronics, and an MBA. David began his career developing optical fibre components before joining EMI Central Research Laboratories, where he led the creation of spin-out Sensaura Ltd. The company’s 3D Audio technology shipped with the Microsoft Xbox and on over 500 million PCs. The Sensaura business was sold to Creative Labs. In 2001 David was part of the Sensaura team that received the Royal Academy of Engineering MacRobert Award for innovation in engineering. In 2003 David founded Sonaptic Ltd. In his role as CEO, David led the company to licence its audio technology to mobile phone vendors and portable games platforms such as the Sony PSP. Sonaptic was sold to Wolfson Semiconductors in 2007. David then held the VP of Business Development position at Wolfson, bringing to market the first Wolfson ANC chip featuring the Sonaptic technology. From 2010 to 2016 David was CEO/founder of Incus Laboratories. Incus developed and licensed its novel digital ANC technology to companies such as Yamaha before being acquired by AMS AG. In 2019 David joined Audioscenic, working with Marcos and Filippo to raise the initial Seed investment.

Daniel Wallace (R&D Lead)

Daniel studied acoustical engineering at the University of Southampton ISVR, graduating in 2016, then started a PhD at the Centre for Doctoral Training in Next-Generation Computational Modelling. His PhD project was on multi-zone sound field control, specifically producing private listening zones. Since joining Audioscenic as R&D Lead in 2021, he’s turned ideas into products. Daniel firmly believes that for the technology to be successfully deployed in products, the user experience must be flawless; this means testing lots of edge cases in code and in the lab to make sure that when users sit down in front of the soundbar, it just works and gives them an amazing impression.

Joe Guarini (Creative Director)

Joe is a sound designer who has specialised in 3D audio content creation for over ten years. He won the jury prize for best binaural sound design in the 2014 Mixage Fou international sound competition, in addition to having his work featured in video games, film trailers, television commercials, and demonstrations at CES. Joe has been working with the Audioscenic team since 2017 to provide listeners with sounds that highlight the immersive qualities of the audio hardware. His contributions include creating computer game experiences where players can walk through, and interact with, sounds in 3D space. Joe’s passion is helping people see the full capabilities of 3D audio technology, which is why he chose to join forces with Audioscenic.

Martin Rieger (VRTonung)

Martin Rieger is a sound engineer with years of experience in immersive audio. His studio VRTonung specialises in 360° sound recordings and 3D audio post-production, making him the auditory contact point for the complete realisation of XR projects, from creative storytelling at the start through to the technical implementation. He also runs the most dedicated blog on 3D audio, vrtonung.de/blog, setting guidelines for making spatial audio more accessible and building a foundation for the next generation of immersive content. Martin is a certified delegate of the German institute for vocational training, teaching teachers what immersive audio means in various media, with or without visuals, head tracking or interactive elements. This will set the background for the new job profile “designer for immersive media”.

Show Notes

Audioscenic Official Website – https://www.audioscenic.com/

University of Southampton – https://www.southampton.ac.uk/

Razer Official Website – https://www.razer.com/

Razer Leviathan V2 Pro – https://www.razer.com/gb-en/gaming-speakers/razer-leviathan-v2-pro

Marcos Simón LinkedIn – https://www.linkedin.com/in/drmfsg/

Filippo Fazi LinkedIn – https://www.linkedin.com/in/filippo-fazi-4a822443/

David Monteith LinkedIn – https://www.linkedin.com/in/david-monteith-8a66221/

Daniel Wallace LinkedIn – https://www.linkedin.com/in/danielwallace42/

Joe Guarini LinkedIn – https://www.linkedin.com/in/joseph-guarini-695b8053/

Martin Rieger – https://www.linkedin.com/in/martin-rieger/

VRTonung – https://www.vrtonung.de/en/blog/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 75 Audioscenic Binaural Audio Over Speakers (Part 1)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel travels to Southampton, UK, to visit the HQ of Audioscenic, whose mission is to revolutionise binaural audio over speakers and bring it to the masses. Their team has developed technology that uses real-time head tracking and sound-field control to create virtual headphones, rendering personal 3D audio at the listener’s position. Their first commercial success came in the form of a partnership with Razer and the subsequent release of the Leviathan V2 Pro soundbar, announced in early 2023 at CES. Since then, their technology has received a number of tech awards and, above all, the support of the user community. We sat down with the core team members and early adopters to find out where it all started and where it is heading.

Listen to Podcast

Marcos Simón (Co-Founder, CTO)

Marcos Simón graduated in 2010 from the Technical University of Madrid with a B.Sc. in telecommunications. In 2011, he joined the Institute of Sound and Vibration Research, where he worked with loudspeaker arrays for sound field control and 3D audio rendering, and also on the modelling of cochlear mechanics. He obtained his PhD in 2014, and between 2014 and 2019 he was part of the S3A Research Programme “Future Spatial Audio for an Immersive Listening Experience at Home”. In 2019 he co-founded Audioscenic to commercialise innovative listener-adaptive audio technologies, where he currently works as Chief Technical Officer. Since the company’s creation, Marcos has led the vision of Audioscenic and established himself as a nexus between the commercial and technical worlds for the start-up, ensuring that the technology is continually evolving and that customers understand exactly what the technology makes possible.

Professor Filippo Fazi (Co-Founder/Chief Scientist)

Prof Fazi is the co-founder and Chief Scientist of Audioscenic Ltd, where he leads the scientific development of new audio technologies and contributes to the company’s strategic decisions. He is also a Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research (ISVR) of the University of Southampton, where he is the Head of the Acoustics Group and the Virtual Acoustics and Audio Engineering teams. He also served as Director of Research at the Institute and sits on the Intellectual Property panel of the Faculty of Engineering and Physical Sciences. He is an internationally recognised expert in audio technologies, electroacoustics and digital signal processing, with a special focus on 3D audio, acoustical inverse problems, multi-channel systems, and acoustic arrays. He is the author of more than 160 scientific publications and co-inventor of Audioscenic’s patented or patent-pending technologies. Prof Fazi graduated in Mechanical Engineering from the University of Brescia (Italy) in 2005 with a master’s thesis on room acoustics. He obtained his PhD in acoustics from the Institute of Sound and Vibration Research in 2010, with a thesis on sound field reproduction. Prof Fazi was awarded a research fellowship by the Royal Academy of Engineering in 2010 and the Tyndall Medal by the Institute of Acoustics in 2018. He is a fellow of the Audio Engineering Society and a member of the Institute of Acoustics.

David Monteith (CEO)

David is the Chief Executive Officer of Audioscenic Ltd, responsible for the strategic direction of the business. David holds a Master’s degree in Physics and Opto-Electronics, and an MBA. He began his career developing optical fibre components before joining EMI Central Research Laboratories, where he led the creation of the spin-out Sensaura Ltd. The company’s 3D audio technology shipped with the Microsoft Xbox and on over 500 million PCs. The Sensaura business was sold to Creative Labs. In 2001 David was part of the Sensaura team that received the Royal Academy of Engineering MacRobert Award for innovation in engineering. In 2003 David founded Sonaptic Ltd. As CEO, David led the company to license its audio technology to mobile phone vendors and portable games platforms such as the Sony PSP. Sonaptic was sold to Wolfson Microelectronics in 2007. David then held the position of VP of Business Development at Wolfson, bringing to market the first Wolfson ANC chip featuring the Sonaptic technology. From 2010 to 2016 David was CEO and founder of Incus Laboratories, which developed and licensed its novel digital ANC technology to companies such as Yamaha before being acquired by AMS AG. In 2019 David joined Audioscenic, working with Marcos and Filippo to raise the initial seed investment.

Daniel Wallace (R&D Lead)

Daniel studied acoustical engineering at the University of Southampton’s ISVR, graduating in 2016, then began a PhD at the Centre for Doctoral Training in Next-Generation Computational Modelling. His PhD project was on multi-zone sound field control, specifically the production of private listening zones. Since joining Audioscenic as R&D Lead in 2021, he has been turning ideas into products. Daniel firmly believes that for the technology to be successfully deployed into products, the user experience must be flawless; this means testing many edge cases in code and in the lab to make sure that when users sit down in front of the soundbar, it just works and makes an amazing impression.

Joe Guarini (Creative Director)

Joe is a sound designer who has specialised in 3D audio content creation for over ten years.  He won the jury prize for best binaural sound design in the 2014 Mixage Fou international sound competition, in addition to having his works featured in video games, film trailers, television commercials, and demonstrations at CES.  Joe has been working with the Audioscenic team since 2017 to provide listeners with sounds that highlight the immersive qualities of the audio hardware.  His contributions include creating computer game experiences where players can walk through, and interact with, sounds in 3D space.  Joe’s passion is helping people see the full capabilities of 3D audio technology, which is why he chose to join forces with Audioscenic.

Martin Rieger (VRTonung)

Martin Rieger is a sound engineer with years of experience in immersive audio. His studio VRTonung specialises in 360° sound recordings and 3D audio post-production, making him a single point of contact for the complete audio realisation of XR projects, from creative storytelling at the start through to technical implementation. He also runs a dedicated blog on 3D audio at vrtonung.de/blog, setting guidelines for making spatial audio more accessible and building a foundation for the next generation of immersive content. Martin is a certified delegate of the German Institute for Vocational Training, teaching educators what immersive audio means across various media, with or without visuals, head tracking or interactive elements. This work lays the groundwork for the new job profile “designer for immersive media”.

Show Notes

Audioscenic Official Website – https://www.audioscenic.com/

University of Southampton – https://www.southampton.ac.uk/

Razer Official Website – https://www.razer.com/

Razer Leviathan V2 Pro – https://www.razer.com/gb-en/gaming-speakers/razer-leviathan-v2-pro

Marcos Simón LinkedIn – https://www.linkedin.com/in/drmfsg/

Filippo Fazi LinkedIn – https://www.linkedin.com/in/filippo-fazi-4a822443/

David Monteith LinkedIn – https://www.linkedin.com/in/david-monteith-8a66221/

Daniel Wallace LinkedIn – https://www.linkedin.com/in/danielwallace42/

Joe Guarini LinkedIn – https://www.linkedin.com/in/joseph-guarini-695b8053/

Martin Rieger LinkedIn – https://www.linkedin.com/in/martin-rieger/

VRTonung – https://www.vrtonung.de/en/blog/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 67 Adam Ganz & Rachel Donnelly (StoryFutures Academy & IWM)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Head of Screenwriting at Royal Holloway, University of London and Head of the Writers Room at StoryFutures Academy – Professor Adam Ganz and the Project Manager for the Second World War and Holocaust Partnership Programme at Imperial War Museums Rachel Donnelly, from London, UK.

Professor Adam Ganz is Head of Screenwriting at Royal Holloway, University of London and Head of the Writers Room at StoryFutures Academy, the UK’s National Centre for Immersive Storytelling. In addition to leading StoryFutures Academy on the ‘One Story, Many Voices’ project with Imperial War Museums, he designed and ran a project on writing for immersive audio with Inua Ellams, Jayde Adams, Georgina Campbell, Fryars and Rae Morris. He was also nominated for best single drama for his BBC play The Gestapo Minutes.

Rachel Donnelly is Project Manager for the Second World War and Holocaust Partnership Programme (SWWHPP) at Imperial War Museums. She began working on SWWHPP in early 2020, having previously been the Learning and Audience Advocate for IWM’s new Holocaust Galleries and Holocaust Learning Manager for schools. SWWHPP is a three-year project led by IWM and funded by the National Lottery Heritage Fund to support cultural organisations across the UK to engage with local communities and share lesser-known stories related to the Second World War and Holocaust. As part of the programme, the cultural organisations, local communities and IWM worked with StoryFutures Academy to create an immersive touring sound installation with stories written by a group of celebrated UK-based writers.

In this episode, Adam and Rachel explain how binaural audio was used to enhance their traditional and immersive storytelling techniques and discuss the ‘One Story, Many Voices’ museum installation case study.

Listen to Podcast

Show Notes

Adam Ganz – https://pure.royalholloway.ac.uk/portal/en/persons/adam-ganz(55937d7c-9684-41f8-9a7f-680f94bd13b1).html

‘One Story, Many Voices’, A StoryFutures Academy Immersive Audio Project with Imperial War Museums – https://www.storyfutures.com/news/one-story-many-voices-a-storyfutures-academy-immersive-audio-project-with-the-imperial-war-museums  

To listen to all of the stories in full from the ‘One story, many voices’ project – https://www.storyfutures.com/resources/imperial-war-museum-one-story-many-voices  

For more information about StoryFutures Academy, the UK’s National Centre for Immersive Storytelling and resources for immersive audio, virtual production, AR, VR visit – www.storyfutures.com/academy

To find out more about the Second World War and Holocaust Partnership Programme at Imperial War Museums – https://www.iwm.org.uk

CreativeXR 2020 StoryFutures Academy masterclass Spatial storytelling as creative practice – https://www.youtube.com/watch?v=60rJHsaLvFo 

Instagram and Twitter – @storyfuturesa / @I_W_M 

Dome Fest West –  https://www.domefestwest.com

IMERSA – https://summit.imersa.org

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.