
Episode 84 Alejandro Cabrera (Audio Brewers)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by the Audio Software Developer at Audio Brewers – Alejandro Cabrera from Athens, Greece.

Alejandro Cabrera is an Audio Software Developer and the Founder of Audio Brewers. Originally from Colombia, he is currently based in Athens, Greece. He studied Modern Music with an Emphasis in Jazz at Taller de Musics (Barcelona, Spain), and holds a BA (Hons) in Music Production and Sound Engineering (University of Wales) and an MSc in Sound Design (Edinburgh Napier University). Alejandro has been developing audio tools for over 10 years, including his first sample library, ‘kFootsteps’. While working at 8Dio Productions as a producer and later a creative director, Alejandro was involved in the development of over 100 sample libraries. Additionally, he participated in the development of Dave Smith’s Sequential Prophet X/XL, which won the TEC Award for Best New Musical Instrument in 2019. In 2020, Alejandro founded Audio Brewers, the first company to develop virtual instruments recorded, mixed and delivered in Ambisonics for dedicated immersive audio productions.

We talk about Audio Brewers’ unique set of spatial audio tools and features, designed for a fast workflow across different creative applications, and Alejandro explains his concept of impressionism in immersive audio.
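For readers unfamiliar with the format, here is a minimal sketch of what “delivered in Ambisonics” means in practice: a mono source is encoded into a first-order B-format bed whose channel gains depend only on the source direction, so the mix can later be rotated and decoded to any speaker layout or to binaural. This is a generic illustration of the AmbiX (ACN/SN3D) convention, not Audio Brewers’ ab Encoder; the function and parameter names are our own.

```python
import numpy as np

def encode_foa_ambix(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order Ambisonics (AmbiX: ACN order, SN3D).

    signal        : mono samples, shape (N,)
    azimuth_deg   : source azimuth, counter-clockwise from straight ahead
    elevation_deg : source elevation, upwards from the horizontal plane

    Returns an (N, 4) array with channels in ACN order: W, Y, Z, X.
    """
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    gains = np.array([
        1.0,                      # W: omnidirectional component
        np.sin(az) * np.cos(el),  # Y: left-right figure-of-eight
        np.sin(el),               # Z: up-down figure-of-eight
        np.cos(az) * np.cos(el),  # X: front-back figure-of-eight
    ])
    return np.outer(signal, gains)

# Usage: place a one-second test tone 30 degrees to the left, slightly raised.
sr = 48_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
bformat = encode_foa_ambix(tone, azimuth_deg=30, elevation_deg=10)  # shape (48000, 4)
```

Higher-order encoders used in production tools work the same way, just with more spherical-harmonic channels per source.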

Listen to Podcast

Show Notes

Alejandro Cabrera LinkedIn – https://www.linkedin.com/in/alejocazu/

Audio Brewers Website – https://www.audiobrewers.com/

Audio Brewers Youtube Channel – https://www.youtube.com/audiobrewers

ab Encoder – https://www.audiobrewers.com/plugins/p/ab-encoder

ab PitchShifter – https://www.audiobrewers.com/plugins/p/ab-pitchshifter

ab Stutter – https://www.audiobrewers.com/plugins/p/ab-stutter

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 83 SXSW 2024 Panel Picker Announcement

Summary

SXSW 2024 Panel Picker Announcement 📢 “State of Play of Immersive Audio: Past, Present & Future”

It’s been almost six years since we started the Immersive Audio Podcast, and as we come up to our 100th release we wanted to mark this milestone with a special edition at SXSW 2024.

Having released almost 100 episodes, we’ve met a lot of companies and experts covering a broad spectrum of topics fundamental to our industry. This panel will highlight the key developments that have defined the immersive audio industry over the past decade, reflect on current trends and look forward to the future. Our four expert guests and moderators (Audioscenic, HOLOPLOT, 1.618 DIGITAL and Monica Bolles) will cover the key sectors: large-scale immersive events, interactive live performance, spatial audio for consumer devices, virtual training for VR and immersive media production.

Listen to Podcast

Show Notes

Please support our idea and give us your vote!

Voting link -> https://panelpicker.sxsw.com/vote/132288

Voting deadline: midnight, 20th of August 2023.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 82 Les Stuck (Meow Wolf)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Monica Bolles is joined by the musician and Senior Sound Technologist at Meow Wolf – Les Stuck from New Mexico, US.

Les began working in spatial audio while working for the Ensemble Modern and the Frankfurt Ballet in Frankfurt, Germany. He designed the touring six-channel sound system for Frank Zappa’s Yellow Shark Tour, which included a 6-channel ring microphone. He then worked at IRCAM in Paris, where he built several spatializers in Max/FTS – a 6-channel version for Pierre Boulez’s …explosante-fixe… premiere, an unusual 8-channel version specifically adapted to classical opera houses for Philippe Manoury’s opera 60e Parallèle, and a signal-controlled panner that allowed extremely fast movement. He designed a 7-channel sound system at Mills College that featured an overhead speaker and built a variety of spatializers for students and guest composers. To celebrate the 50th anniversary of John Chowning’s seminal work on the digital simulation of sound spatialization, Les realized a version of his algorithm for release with Max/MSP in 2021, including panned reverb and the Doppler effect, all controlled at signal rate. Currently Les works at Meow Wolf, where he designs interactive sound installations and acoustical treatments. He has developed several spatial plugins for Ableton Live, which typically include a binaural output to preview the results in headphones before going on-site. He led a collaboration with Spatial, Inc for Meow Wolf’s installation at South by Southwest, and did extensive testing of Holoplot speakers for a future Meow Wolf project.
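As a point of reference for the Chowning-style spatialisation mentioned above, the sketch below illustrates two of its core ingredients: a direct signal whose level falls off roughly as 1/distance while the reverb send falls off more slowly, and a Doppler shift produced by a time-varying propagation delay, with the distance trajectory supplied at signal rate. This is a generic, simplified illustration in Python under those assumptions, not Les’s Max/MSP release, and the function names are our own.

```python
import numpy as np

SR = 48_000   # sample rate (Hz)
C = 343.0     # speed of sound (m/s)

def spatialize_mono(x, distance, sr=SR, c=C):
    """Rough Chowning-style distance processing of a mono signal.

    x        : mono input, shape (N,)
    distance : source distance in metres for every sample, shape (N,)
               (a signal-rate trajectory)

    Returns (direct, reverb_send): the Doppler-shifted direct signal with
    1/d level scaling, and a reverb send that falls off as 1/sqrt(d).
    """
    n = np.arange(len(x))
    delay = distance / c * sr                    # propagation delay in samples
    read_pos = n - delay                         # time-varying read position
    i0 = np.clip(np.floor(read_pos).astype(int), 0, len(x) - 1)
    i1 = np.clip(i0 + 1, 0, len(x) - 1)
    frac = read_pos - np.floor(read_pos)
    delayed = (1 - frac) * x[i0] + frac * x[i1]  # fractional-delay read
    delayed[read_pos < 0] = 0.0                  # sound has not arrived yet

    d = np.maximum(distance, 0.1)                # avoid division by zero
    direct = delayed / d                         # direct level ~ 1/d
    reverb_send = delayed / np.sqrt(d)           # reverb level ~ 1/sqrt(d)
    return direct, reverb_send

# Usage: a 440 Hz tone receding from 1 m to 20 m over two seconds.
t = np.linspace(0, 2, 2 * SR)
sig = np.sin(2 * np.pi * 440 * t)
dist = np.linspace(1.0, 20.0, len(t))
direct, rev = spatialize_mono(sig, dist)
```

In a full implementation, the direct signal and the reverb return would then be panned around the loudspeaker ring, which is where the panned reverb mentioned above comes in.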

Les talks about his extensive career working with spatial audio since the 1980s, including projects with Frank Zappa, IRCAM and Cycling74, and we dive into the topic of interactive spatial audio for physical installations.

Listen to Podcast

Show Notes

Les Stuck Website –  https://www.lesstuck.com/

LinkedIn – https://www.linkedin.com/in/lesstuckartandtechnology/

Meow Wolf Website – https://meowwolf.com/

QSYS – https://www.qsys.com/products-solutions/q-sys/software/q-sys-designer-software/

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 81 Felix Deufel & Paul Hauptmeier (ZiMMT)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the sound artists and ZiMMT co-founders Felix Deufel and Paul Hauptmeier from Leipzig, Germany.

Felix Deufel is a sound artist focusing on sound and space, spatial hearing, and the significance of soundscapes for both humans and the environment. His artistic works encompass room installations, compositions, field trips, and research. Deufel is the founder of Not a Number Studio, which is currently developing 3D audio software. In 2020, he established the Center for Immersive Media Art, Music and Technology (ZiMMT) in Leipzig, providing a platform for interdisciplinary collaboration and innovation in the realms of immersive media art and technology.

Paul Hauptmeier, born in 1993 in Jakarta, is a composer and sound artist based in Leipzig. He studied electroacoustic composition with Robin Minard and Maximilian Marcoll at the University of Music FRANZ LISZT Weimar and at the University of California San Diego with Katharina Rosenberger, Natacha Diels and Miller Puckette. Since 2009, he has worked with Martin Recker as an artist duo in the field of composition, sound and multimedia art. In addition to works for theatre and opera, live electronics, radio and electro-acoustic music, their focus lies on sound installations and multichannel audio productions. Since October 2022, they have been working and teaching in the field of sound art at the art university Burg Giebichenstein in Halle, Germany. Additionally, they teach spatial composition at the University for Music and Theatre Leipzig. He is a founding member of ZiMMT (Center for Immersive Media Art, Music and Technology) in Leipzig, where he conducts research in the field of spatial audio and organises workshops, panels, concerts and exhibitions on the subject. Besides multichannel-based 3D audio, he has a strong interest in augmented reality in the field of sound and multimedia art. His latest work in this realm, a large-scale multimedia installation for lasers, position tracking and binaural audio, was shown at the Biennale Musica 2022 in Venice.

Felix and Paul talk about the founding of the Center for Immersive Media Art, Music and Technology (ZiMMT) as an independent organisation and about the Spatial Audio Network Germany (SANG) initiative, both aimed at educating and nurturing artists who want to learn about and incorporate spatial audio in their artwork.

Listen to Podcast

Show Notes

Felix Deufel – https://www.linkedin.com/in/felix-deufel-4494ba10a/

Paul Hauptmeier – www.hauptmeier-recker.de

Not a number Studio – https://notanumber.space

ZiMMT – https://zimmt.net/en/

Spatial Audio Network Germany – https://spatialaudionetwork.de/en/about-sang/

Hybrid Space Lab Berlin – https://hybridspacelab.net/

Spatial Connect for Wwise – https://shorturl.at/buFT2

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 80 Mélia Roger & Grégoire Chauvot (3D Audio Field Recording for SFX Libraries)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the field recordists Mélia Roger and Grégoire Chauvot, from Paris, France.

Mélia Roger is a sound designer for film and art installations. She has a classical music background and a Master’s degree in sound engineering (ENS Louis-Lumière, Paris). She spent the last year of her Master’s in the Transdisciplinary Studies Program at the Zurich University of the Arts, Switzerland, where she developed an artistic approach to sound, working with voice and field recordings. She now lives between Paris and Zurich, working in post-production for film and on her personal art projects.

Grégoire Chauvot is a sound designer who graduated from the prestigious La Femis in Paris. Working mainly in cinema, he brings field recording to the foreground in his approach to film sound and never hesitates to leave the studio in search of new material. His interest in collecting unique and immersive sounds led him to develop, in collaboration with Mélia Roger and HAL, a recording rig designed specifically for Dolby Atmos capture.

Mélia and Grégoire talk about their collaboration with HAL Audio, which resulted in the development of a 7.0.2 recording rig designed to capture Dolby Atmos natively, and the recent launch of the Urban Atmos SFX library for multichannel post-production workflows.

Listen to Podcast

Show Notes

Mélia Roger LinkedIn – https://www.linkedin.com/in/m%C3%A9lia-roger-65474a150/

Grégoire Chauvot LinkedIn – https://www.linkedin.com/in/gr%C3%A9goire-chauvot-663864a6/

HAL Audio – https://www.hal-audio.com

Urban Atmos Library –  https://www.hal-audio.com/product/urban-atmos

Urban Atmos SFX Library Discount Code – immersive30 (expires 22nd July 2023)

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3 

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm    

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 79 Jelmer Althuis (Spatial Audio for Wellness)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the sound designer – Jelmer Althuis from Groningen, Netherlands.

Jelmer Althuis is the founder of The Sphere of Sound, audio director at VRelax and sound designer at Aku.World. With a passion for audio and technology, Jelmer has dedicated his career to creating immersive audio experiences for a variety of next-generation media. He is an experienced audio professional with a strong background in sound design and audio production. In 2017, he fully immersed himself in spatial sound design, for VR and art applications in particular. In 2018, he joined the VRelax team as audio director, where he mainly focused on the psychological effect and the influence of spatial sound on well-being. Having created many spatial audio designs that help people feel relaxed, combining audio and biofeedback techniques alongside several research activities, he has learned his craft on the job. He has a strong passion for the spatial audio format because he firmly believes in the added value of spatial audio experiences for eHealth and wellbeing as well as for digital arts and web3 applications.

Jelmer speaks about his work dedicated to wellness and relaxation through sound, and we look at the recent proliferation of spatial audio in the eHealth product market.

Listen to Podcast

Show Notes

LinkedIn – https://www.linkedin.com/in/jelmeralthuis/

The Sphere of Sound Website – www.sphereofsound.com

VRelax Website – https://vrelax.com/en/

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm  

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 77 HOLOPLOT (3D Audio-Beamforming and Wave Field Synthesis)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the HOLOPLOT team – Reese Kirsh, Segment Manager for Performing Arts and Live, and Natalia Szczepanczyk, Segment Manager for Immersive and Experiential Applications – and the award-winning sound designer Gareth Fry. We hold a detailed discussion on HOLOPLOT’s technical hardware and software capabilities and talk about the recent David Hockney exhibition at Lightroom, where Gareth shares his experience of creating content and working with this paradigm-shifting technology.
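For context on the 3D Audio-Beamforming mentioned in the sponsor message above, the sketch below shows the textbook delay-and-sum idea behind steering sound from a loudspeaker array: each element is fed a delayed copy of the programme signal so that the radiated wavefronts add coherently in a chosen direction. This is a generic illustration only, not HOLOPLOT’s algorithm (which combines beamforming with Wave Field Synthesis); the function and parameter names are our own.

```python
import numpy as np

SR = 48_000             # sample rate (Hz)
SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum_feeds(x, element_x, steer_deg, sr=SR, c=SPEED_OF_SOUND):
    """Per-loudspeaker feeds for a simple delay-and-sum beam.

    x         : mono programme signal, shape (N,)
    element_x : loudspeaker positions along a line array (metres)
    steer_deg : steering angle off broadside (degrees)

    Each element is delayed so that its wavefront adds coherently in the
    steering direction; the level is split evenly across the elements.
    """
    theta = np.deg2rad(steer_deg)
    delays = element_x * np.sin(theta) / c    # relative delays (seconds)
    delays -= delays.min()                    # keep every delay non-negative
    n_elements = len(element_x)
    max_pad = int(round(delays.max() * sr))
    feeds = np.zeros((n_elements, len(x) + max_pad))
    for m, d in enumerate(delays):
        offset = int(round(d * sr))
        feeds[m, offset:offset + len(x)] = x / n_elements
    return feeds

# Usage: steer a 16-element array with 10 cm spacing 25 degrees off axis.
elements = np.arange(16) * 0.10
signal = np.random.randn(SR)                  # one second of test signal
feeds = delay_and_sum_feeds(signal, elements, steer_deg=25.0)  # shape (16, padded length)
```

Wave Field Synthesis goes further by computing driving functions that reconstruct the wavefront of a virtual source placed anywhere relative to the array, which is what makes the “virtual loudspeaker” positioning described above possible.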

Reese Kirsh has been working within the performing arts sector for over a decade in various roles, including Head of Sound for some of the largest West End and Broadway productions, before joining HOLOPLOT as Performing Arts Segment Manager. He’s very aware of the narrative around immersive and what it means to deliver the right tech to empower creative content rather than distract from it.

Natalia Szczepanczyk is the Segment Manager for Immersive and Experiential Applications at HOLOPLOT. She has a design and consultancy background and previously worked with loudspeaker manufacturer Genelec and consultancies Mouchel and Buro Happold. Natalia specialises in audio system design and acoustics for experiential audience experiences within the themed entertainment sectors.

Gareth Fry is a sound designer best known for his cutting-edge work in theatre and his collaborations with many leading UK theatre directors and companies. His work includes over 20 productions at the National Theatre, over 20 at the Royal Court and countless more at venues such as the Bridge Theatre, Old Vic and Young Vic, in the West End and beyond. He has also designed events and exhibitions, from the V&A’s landmark David Bowie Is exhibition to the sound effects for the Opening Ceremony of the 2012 Olympic Games, at the invitation of Danny Boyle, and he has received a number of awards for his work.

Listen to Podcast

Show Notes

HOLOPLOT Official Website – https://holoplot.com/

Reese Kirsh – https://www.linkedin.com/in/reesekirsh/

Natalia Szczepanczyk – https://www.linkedin.com/in/nszcz/

Gareth Fry – https://www.linkedin.com/in/gareth-fry-32b8217/

HOLOPLOT Plan Software – https://holoplot.com/?/software/

Lightroom – https://holoplot.com/lp_lightroom/

Lightroom (David Hockney: Bigger & Closer (not smaller & further away)) – https://lightroom.uk/?gad=1&gclid=Cj0KCQjwsIejBhDOARIsANYqkD269P44zmkGRBKcwg-hRQEfn8FckxGBcBRzJBxTcwxGjmWQ7Rdhl8AaAncTEALw_wcB

The soundscapes of Illuminarium – https://holoplot.com/applications/

HOLOPLOT Official Rental Provider – https://www.ct-group.com/uk/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 76 Audioscenic Binaural Audio Over Speakers (Part 2)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel travels to Southampton, UK, to visit the HQ of Audioscenic, whose mission is to revolutionise binaural audio over speakers and bring it to the masses. Their team developed technology that uses real-time head tracking and sound-field control to create virtual headphones that render personal 3D Audio to the listener’s position. Their first commercial success came in the form of a partnership with Razer and the subsequent release of the Leviathan V2 Pro soundbar, which was announced in early 2023 at CES. Since then, their technology has received a number of tech awards and, above all, the support of the user community. We sat down with the core team members and early adopters to find out where it all started and where it is heading.
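To make the “virtual headphones” idea above a little more concrete, here is a minimal sketch of the classic crosstalk-cancellation step behind binaural audio over loudspeakers: a regularised 2x2 inverse of the speaker-to-ear transfer matrix, which a listener-adaptive system would recompute as the tracked head moves. This is a generic textbook illustration under those assumptions, not Audioscenic’s proprietary processing; the function and parameter names are our own.

```python
import numpy as np

def crosstalk_cancellation_filters(C, beta=1e-3):
    """Regularised inverse of the speaker-to-ear transfer matrix.

    C    : complex array, shape (n_bins, 2, 2); C[k, ear, speaker] is the
           acoustic transfer function from each loudspeaker to each ear at
           frequency bin k (in practice derived from HRTFs at the tracked
           head position).
    beta : Tikhonov regularisation term that limits filter gain.

    Returns H, shape (n_bins, 2, 2), such that C @ H is approximately the
    identity: the left binaural channel reaches only the left ear and the
    right channel only the right ear.
    """
    H = np.zeros_like(C)
    eye = np.eye(2)
    for k in range(C.shape[0]):
        Ck = C[k]
        # H = C^H (C C^H + beta I)^-1  (regularised pseudo-inverse)
        H[k] = Ck.conj().T @ np.linalg.inv(Ck @ Ck.conj().T + beta * eye)
    return H

def render_over_speakers(binaural_spec, H):
    """Apply the 2x2 filter matrix per frequency bin.

    binaural_spec : (n_bins, 2) complex spectrum of the binaural mix
    Returns the (n_bins, 2) loudspeaker-feed spectrum.
    """
    return np.einsum('kij,kj->ki', H, binaural_spec)
```

In a real-time, listener-adaptive system the matrix C would be updated continuously from the head-tracker data and the filters applied in short overlapping blocks, so the sweet spot follows the listener.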

Listen to Podcast

Marcos Simón (Co-Founder, CTO)

Marcos Simón graduated in 2010 from the Technical University of Madrid with a B.Sc. in telecommunications. In 2011, he joined the Institute of Sound and Vibration Research, where he worked with loudspeaker arrays for sound field control and 3D audio rendering, as well as on the modelling of cochlear mechanics. He obtained his PhD in 2014, and between 2014 and 2019 he was part of the S3A Research Programme “Future Spatial Audio for an Immersive Listening Experience at Home”. In 2019, he co-founded Audioscenic to commercialise innovative listener-adaptive audio technologies, where he currently works as Chief Technical Officer. Since the company’s creation, Marcos has been leading the vision of Audioscenic and has established himself as a nexus between the commercial and technical worlds for the start-up, ensuring that the technology is continually evolving and that customers understand exactly what it makes possible.

Professor Filippo Fazi (Co-Founder/Chief Scientist)

Prof Fazi is the co-founder and Chief Scientist of Audioscenic Ltd, where he leads the scientific development of new audio technologies and contributes to the company’s strategic decisions. He is also a Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research (ISVR) of the University of Southampton, where he is the Head of the Acoustics Group and the Virtual Acoustics and Audio Engineering teams. He also served as Director of Research at the Institute and sits on the Intellectual Property panel of the Faculty of Engineering and Physical Sciences. He is an internationally recognised expert in audio technologies, electroacoustics and digital signal processing, with a special focus on 3D audio, acoustical inverse problems, multi-channel systems, and acoustic arrays. He is the author of more than 160 scientific publications and co-inventor of Audioscenic’s patented or patent-pending technologies. Prof Fazi graduated in Mechanical Engineering from the University of Brescia (Italy) in 2005 with a master’s thesis on room acoustics. He obtained his PhD in acoustics from the Institute of Sound and Vibration Research in 2010, with a thesis on sound field reproduction. Prof Fazi was awarded a research fellowship by the Royal Academy of Engineering in 2010 and the Tyndall Medal by the Institute of Acoustics in 2018. He is a fellow of the Audio Engineering Society and a member of the Institute of Acoustics.

David Monteith (CEO)

David is the Chief Executive Officer of Audioscenic Ltd, responsible for the strategic direction of the business. David holds a Master’s degree in Physics, in Opto-Electronics, and an MBA. He began his career developing optical fibre components before joining EMI Central Research Laboratories, where he led the creation of spin-out Sensaura Ltd. The company’s 3D Audio technology shipped with the Microsoft Xbox and on over 500 million PCs. The Sensaura business was sold to Creative Labs. In 2001, David was part of the Sensaura team that received the Royal Academy of Engineering MacRobert Award for innovation in engineering. In 2003, David founded Sonaptic Ltd. In his role as CEO, David led the company to licence its audio technology to mobile phone vendors and portable games platforms such as the Sony PSP. Sonaptic was sold to Wolfson Semiconductors in 2007. David then held the VP of Business Development position at Wolfson, bringing to market the first Wolfson ANC chip featuring the Sonaptic technology. From 2010 to 2016, David was CEO and founder of Incus Laboratories. Incus developed and licensed its novel digital ANC technology to companies such as Yamaha before being acquired by AMS AG. In 2019, David joined Audioscenic, working with Marcos and Filippo to raise the initial seed investment.

Daniel Wallace (R&D Lead)

Daniel studied acoustical engineering at the University of Southampton ISVR, graduating in 2016, then started a PhD at the Centre for Doctoral Training in Next-Generation Computational Modelling. His PhD project was on multi-zone sound field control, specifically for producing private listening zones. Since joining Audioscenic as R&D Lead in 2021, he’s turned ideas into products. Daniel firmly believes that for their technology to be successfully deployed into products, the user experience must be flawless; this means testing lots of edge cases in code and in the lab to make sure that when users sit down in front of our soundbar, it just works and gives them an amazing impression.

Joe Guarini (Creative Director)

Joe is a sound designer who has specialised in 3D audio content creation for over ten years. He won the jury prize for best binaural sound design in the 2014 Mixage Fou international sound competition, in addition to having his works featured in video games, film trailers, television commercials, and demonstrations at CES. Joe has been working with the Audioscenic team since 2017 to provide listeners with sounds that highlight the immersive qualities of the audio hardware. His contributions include creating computer game experiences where players can walk through, and interact with, sounds in 3D space. Joe’s passion is helping people see the full capabilities of 3D audio technology, which is why he chose to join forces with Audioscenic.

Martin Rieger (VRTonung)

Martin Rieger is a sound engineer with years of experience in immersive audio. His studio VRTonung specialises in 360° sound recordings and 3D audio post-production, making him the auditory contact point for the complete realisation of XR projects, from creative storytelling at the beginning through to the technical implementation. He also runs the most dedicated blog on 3D audio, vrtonung.de/blog, setting guidelines for making spatial audio more accessible and building a foundation for the next generation of immersive content. Martin is a certified delegate of the German Institute for Vocational Training, teaching teachers what immersive audio means in various media, with or without visuals, head tracking or interactive elements. This work sets the background for the new job profile of “designer for immersive media”.

Show Notes

Audioscenic Official Website – https://www.audioscenic.com/

University of Southampton – https://www.southampton.ac.uk/

Razer Official Website – https://www.razer.com/

Razer Leviathan V2 Pro – https://www.razer.com/gb-en/gaming-speakers/razer-leviathan-v2-pro

Marcos Simón LinkedIn – https://www.linkedin.com/in/drmfsg/

Filippo Fazi LinkedIn – https://www.linkedin.com/in/filippo-fazi-4a822443/

David Monteith LinkedIn – https://www.linkedin.com/in/david-monteith-8a66221/

Daniel Wallace LinkedIn – https://www.linkedin.com/in/danielwallace42/

Joe Guarini – https://www.linkedin.com/in/joseph-guarini-695b8053/

Martin Rieger – https://www.linkedin.com/in/martin-rieger/

VRTonung – https://www.vrtonung.de/en/blog/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 75 Audioscenic Binaural Audio Over Speakers (Part 1)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel travels to Southampton, UK, to visit the HQ of Audioscenic, whose mission is to revolutionise binaural audio over speakers and bring it to the masses. Their team developed technology that uses real-time head tracking and sound-field control to create virtual headphones that render personal 3D Audio to the listener’s position. Their first commercial success came in the form of a partnership with Razer and the subsequent release of the Leviathan V2 Pro soundbar, which was announced in early 2023 at CES. Since then, their technology has received a number of tech awards and, above all, the support of the user community. We sat down with the core team members and early adopters to find out where it all started and where it is heading.

Listen to Podcast

Marcos Simón (Co-Founder, CTO)

Marcos Simón graduated in 2010 from the Technical University of Madrid with a B.Sc. in telecommunications. In 2011, he joined the Institute of Sound and Vibration Research, where he worked with loudspeaker arrays for sound field control and 3D audio rendering, as well as on the modelling of cochlear mechanics. He obtained his PhD in 2014, and between 2014 and 2019 he was part of the S3A Research Programme “Future Spatial Audio for an Immersive Listening Experience at Home”. In 2019, he co-founded Audioscenic to commercialise innovative listener-adaptive audio technologies, where he currently works as Chief Technical Officer. Since the company’s creation, Marcos has been leading the vision of Audioscenic and has established himself as a nexus between the commercial and technical worlds for the start-up, ensuring that the technology is continually evolving and that customers understand exactly what it makes possible.

Professor Filippo Fazi (Co-Founder/Chief Scientist)

Prof Fazi is the co-founder and Chief Scientist of Audioscenic Ltd, where he leads the scientific development of new audio technologies and contributes to the company’s strategic decisions. He is also a Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research (ISVR) of the University of Southampton, where he is the Head of the Acoustics Group and the Virtual Acoustics and Audio Engineering teams. He also served as Director of Research at the Institute and sits on the Intellectual Property panel of the Faculty of Engineering and Physical Sciences. He is an internationally recognised expert in audio technologies, electroacoustics and digital signal processing, with a special focus on 3D audio, acoustical inverse problems, multi-channel systems, and acoustic arrays. He is the author of more than 160 scientific publications and co-inventor of Audioscenic’s patented or patent-pending technologies. Prof Fazi graduated in Mechanical Engineering from the University of Brescia (Italy) in 2005 with a master’s thesis on room acoustics. He obtained his PhD in acoustics from the Institute of Sound and Vibration Research in 2010, with a thesis on sound field reproduction. Prof Fazi was awarded a research fellowship by the Royal Academy of Engineering in 2010 and the Tyndall Medal by the Institute of Acoustics in 2018. He is a fellow of the Audio Engineering Society and a member of the Institute of Acoustics.

David Monteith (CEO)

David is the Chief Executive Officer of Audioscenic Ltd, responsible for the strategic direction of the business. David holds a Master’s degree in Physics, in Opto-Electronics, and an MBA. He began his career developing optical fibre components before joining EMI Central Research Laboratories, where he led the creation of spin-out Sensaura Ltd. The company’s 3D Audio technology shipped with the Microsoft Xbox and on over 500 million PCs. The Sensaura business was sold to Creative Labs. In 2001, David was part of the Sensaura team that received the Royal Academy of Engineering MacRobert Award for innovation in engineering. In 2003, David founded Sonaptic Ltd. In his role as CEO, David led the company to licence its audio technology to mobile phone vendors and portable games platforms such as the Sony PSP. Sonaptic was sold to Wolfson Semiconductors in 2007. David then held the VP of Business Development position at Wolfson, bringing to market the first Wolfson ANC chip featuring the Sonaptic technology. From 2010 to 2016, David was CEO and founder of Incus Laboratories. Incus developed and licensed its novel digital ANC technology to companies such as Yamaha before being acquired by AMS AG. In 2019, David joined Audioscenic, working with Marcos and Filippo to raise the initial seed investment.

Daniel Wallace (R&D Lead)

Daniel studied acoustical engineering at the University of Southampton ISVR, graduating in 2016, then started a PhD at the Centre for Doctoral Training in Next-Generation Computational Modelling. His PhD project was on multi-zone sound field control, specifically for producing private listening zones. Since joining Audioscenic as R&D Lead in 2021, he’s turned ideas into products. Daniel firmly believes that for their technology to be successfully deployed into products, the user experience must be flawless; this means testing lots of edge cases in code and in the lab to make sure that when users sit down in front of our soundbar, it just works and gives them an amazing impression.

Joe Guarini (Creative Director)

Joe is a sound designer who has specialised in 3D audio content creation for over ten years.  He won the jury prize for best binaural sound design in the 2014 Mixage Fou international sound competition, in addition to having his works featured in video games, film trailers, television commercials, and demonstrations at CES.  Joe has been working with the Audioscenic team since 2017 to provide listeners with sounds that highlight the immersive qualities of the audio hardware.  His contributions include creating computer game experiences where players can walk through, and interact with, sounds in 3D space.  Joe’s passion is helping people see the full capabilities of 3D audio technology, which is why he chose to join forces with Audioscenic.

Martin Rieger (VRTonung)

Martin Rieger is a sound engineer with years of experience in immersive audio. His studio VRTonung specialises in 360° sound recordings and 3D audio post-production, making him the auditory contact point for the complete realisation of XR projects, from creative storytelling at the beginning through to the technical implementation. He also runs the most dedicated blog on 3D audio, vrtonung.de/blog, setting guidelines for making spatial audio more accessible and building a foundation for the next generation of immersive content. Martin is a certified delegate of the German Institute for Vocational Training, teaching teachers what immersive audio means in various media, with or without visuals, head tracking or interactive elements. This work sets the background for the new job profile of “designer for immersive media”.

Show Notes

Audioscenic Official Website – https://www.audioscenic.com/

University of Southampton – https://www.southampton.ac.uk/

Razer Official Website – https://www.razer.com/

Razer Leviathan V2 Pro – https://www.razer.com/gb-en/gaming-speakers/razer-leviathan-v2-pro

Marcos Simón LinkedIn – https://www.linkedin.com/in/drmfsg/

Filippo Fazi LinkedIn – https://www.linkedin.com/in/filippo-fazi-4a822443/

David Monteith LinkedIn – https://www.linkedin.com/in/david-monteith-8a66221/

Daniel Wallace LinkedIn – https://www.linkedin.com/in/danielwallace42/

Joe Guarini – https://www.linkedin.com/in/joseph-guarini-695b8053/

Martin Rieger – https://www.linkedin.com/in/martin-rieger/

VRTonung – https://www.vrtonung.de/en/blog/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 74 Agnieszka Roginska (NYU)

Summary

This episode is sponsored by Spatial, the immersive audio software that gives a new dimension to sound. Spatial gives creators the tools to create interactive soundscapes using its powerful 3D authoring tool, Spatial Studio. Their software modernises traditional channel-based audio by rethinking how we hear and feel immersive experiences anywhere. To find out more, go to https://www.spatialinc.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Professor of Music Technology at NYU – Agnieszka Roginska, from New York, US.

Agnieszka Roginska is a Professor of Music Technology at New York University. She conducts research in the simulation and applications of immersive and 3D audio, including the capture, analysis and synthesis of auditory environments. Applications of her work include AR/VR/XR, gaming, mission-critical scenarios and augmented acoustic sensing. She is the author of numerous publications on the acoustics and psychoacoustics of immersive audio. Agnieszka is a Fellow of the Audio Engineering Society (AES) and a Past President of the AES. She is the faculty sponsor of the Society for Women in TeCHnology (SWiTCH) at NYU.

Agnieszka speaks about the importance of the Audio Engineering Society and its education initiatives for underrepresented communities, as well as her involvement in a wide spectrum of research and publishing activities on spatial audio.

Listen to Podcast

https://open.spotify.com/episode/4rMdK2RQdzm2W8QeK5buzE?si=7f70288c9aff4e23

Show Notes

Agnieszka Roginska LinkedIn – https://www.linkedin.com/in/agnieszka-roginska-784a07/

NYU Official Website – https://www.nyu.edu/

NYU Music Technology Program – https://steinhardt.nyu.edu/programs/music-technology

AES Official Website – https://aes2.org/

Designing Effective Playful Collaborative Science Learning in VR – https://link.springer.com/chapter/10.1007/978-3-031-15325-9_3

Insight into postural control in unilateral sensorineural hearing loss and vestibular hypofunction – https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0276251

Multilayered Affect-Audio Research System for Virtual Reality Learning Environments – https://nyuscholars.nyu.edu/en/publications/multilayered-affect-audio-research-system-for-virtual-reality-lea

Methodology for perceptual evaluation of plausibility with self-translation of the listener – https://www.aes.org/e-lib/browse.cfm?elib=21874

Sound design and reproduction techniques for co-located narrative VR experiences – https://www.aes.org/e-lib/browse.cfm?elib=20660

Evaluation of Binaural Renderers: Multidimensional Sound Quality Assessment – https://www.aes.org/e-lib/browse.cfm?elib=19694

Immersive Sound: The Art and Science of Binaural and Multi-Channel Audio – Audio Engineering Society Presents (Paperback) – https://www.waterstones.com/book/immersive-sound/agnieszka-roginska/paul-geluso/9781138900004

2023 AES International Conference on Spatial and Immersive Audio – https://aes2.org/events-calendar/2023-aes-international-conference-on-spatial-and-immersive-audio/  

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.