Interactive Audio

Episode 95 Gavin Kearney & Helena Daffern (AudioLab, University of York)

This episode is sponsored by HHB Communications, the UK’s leader in pro audio technology. For years HHB has been delivering the latest and most innovative pro audio solutions to the world’s top recording studios, post facilities and broadcasters. The team at HHB provide best-in-class consultation, installation, training and technical support to customers who want to build or upgrade their studio environment for immersive audio workflows. To find out more or to book a demo at their HQ facility, visit www.hhb.co.uk.

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by Professors Gavin Kearney and Helena Daffern from the AudioLab at the School of Physics, Engineering and Technology at the University of York, UK.

Gavin Kearney is a Professor of Audio Engineering at the School of Physics, Engineering and Technology at the University of York. He is an active researcher, technologist and sound designer for immersive technologies and has published over a hundred articles and patents relating to immersive audio. He graduated from Dublin Institute of Technology in 2002 with an honours degree in Electronic Engineering and has since obtained MSc and PhD degrees in Audio Signal Processing from Trinity College Dublin. He joined the University of York as a Lecturer in Sound Design at the Department of Theatre, Film and Television in January 2011 and moved to the Department of Electronic Engineering in 2016. He leads a team of researchers at the York AudioLab, which focuses on different facets of immersive and interactive audio, including spatial audio and surround sound, real-time audio signal processing, Ambisonics and spherical acoustics, game audio and audio for virtual and augmented reality, and the development of recording and audio post-production techniques.

Helena Daffern is a Professor in Music Science and Technology at the School of Physics, Engineering and Technology at the University of York. Her research takes interdisciplinary approaches to voice science and acoustics, particularly singing performance, vocal pedagogy, choral singing and singing for health and well-being. Recent projects explore the potential of virtual reality to improve access to group singing activities and as a tool for singing performance research. She received a BA (Hons.) degree in music, an MA degree in music and a PhD in music technology, all from the University of York, UK, in 2004, 2005 and 2009 respectively. She went on to complete training as a classical singer at Trinity College of Music and worked in London as a singer and teacher before returning to York.

Helena and Gavin talk about the recently announced CoSTAR project, an initiative that leverages novel R&D in virtual production technologies, including CGI, spatial audio, motion capture and extended reality, to create groundbreaking live performance experiences.

Listen to Podcast

Show Notes

Gavin Kearney LinkedIn – https://www.linkedin.com/in/gavin-p-kearney/?originalSubdomain=uk

Helena Daffern LinkedIn – https://www.linkedin.com/in/helena-daffern-32822439/?originalSubdomain=uk

AudioLab – https://audiolab.york.ac.uk/

University of York – https://www.york.ac.uk/

CoSTAR Project – https://audiolab.york.ac.uk/audiolab-at-the-forefront-of-pioneering-the-future-of-live-performance-with-a-new-rd-lab/

BBC Maida Vale Studios – https://www.bbc.co.uk/showsandtours/venue/bbc-maida-vale-studios 

AudioLab goes to BBC Maida Vale Recording Studios – https://audiolab.york.ac.uk/audiolab-goes-to-bbc-maida-vale-recording-studios/

Project SAFFIRE – https://audiolab.york.ac.uk/saffire/

Our Sponsors

Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk *Offer available until June 2024.*

HOLOPLOT is a Berlin-based pro-audio company, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit holoplot.com.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 93 Juergen Herre (MPEG-I)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Chief Executive Scientist for Audio and Multimedia fields at Fraunhofer International Audio Laboratories – Juergen Herre from Erlangen, Germany.

Juergen Herre received a degree in Electrical Engineering from Friedrich-Alexander-Universität in 1989 and a Ph.D. degree for his work on error concealment of coded audio. He joined the Fraunhofer Institute for Integrated Circuits (IIS) in Erlangen, Germany, in 1989. There he has been involved in the development of perceptual coding algorithms for high-quality audio, including the well-known ISO/MPEG-Audio Layer III coder (aka “MP3”). In 1995, he joined Bell Laboratories for a PostDoc term working on the development of MPEG-2 Advanced Audio Coding (AAC). By the end of 1996, he went back to Fraunhofer IIS to work on the development of more advanced multimedia technology including MPEG-4, MPEG-7, MPEG-D, MPEG-H and MPEG-I, currently as the Chief Executive Scientist for the Audio/Multimedia activities at Fraunhofer IIS, Erlangen. In September 2010, Prof. Dr. Herre was appointed full professor at the University of Erlangen and the International Audio Laboratories Erlangen. He is an expert in low-bit-rate audio coding/perceptual audio coding, spatial audio coding, parametric audio object coding, perceptual signal processing and semantic audio processing. Prof. Dr.-Ing. Herre is a fellow member of the Audio Engineering Society (AES), chair of the AES Technical Committee on Coding of Audio Signals and vice chair of the AES Technical Council. Prof. Dr.-Ing. Juergen Herre is a senior member of the IEEE, a member of the IEEE Technical Committee on Audio and Acoustic Signal Processing, served as an associate editor of the IEEE Transactions on Speech and Audio Processing and was an active member of the MPEG audio subgroup for almost three decades.

Juergen explains the key technology concepts behind the globally adopted family of MPEG codecs, and we discuss the recently added reference model for the virtual and augmented reality audio standard, MPEG-I Immersive Audio.

Listen to Podcast

Show Notes

Juergen Herre – https://www.audiolabs-erlangen.de/fau/professor/herre

International Audio Laboratories, Erlangen – https://www.audiolabs-erlangen.de

Fraunhofer Institute for Integrated Circuits (IIS) – https://www.iis.fraunhofer.de/en/ff/amm/for/audiolabs.html

Friedrich-Alexander-Universität Erlangen-Nürnberg –  https://www.fau.eu/

AES Paper MPEG-I Immersive Audio — Reference Model For The Virtual/Augmented Reality Audio Standard – https://www.aes.org/e-lib/browse.cfm?elib=22127

Perceptual Audio Codecs Tutorial “What to Listen For” – https://aes2.org/resources/audio-topics/audio_coding/perceptual-audio-codecs/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 91 SXSW 2024 Panel Acceptance Announcement

Summary

We have a big announcement to make – Immersive Audio Podcast are going to SXSW 2024!

We are hosting the panel “State of Play of Immersive Audio: Past, Present & Future”.

It’s been almost six years since we started the Immersive Audio Podcast, and as we approach our 100th release we wanted to mark this milestone with a special edition at SXSW 2024. Over the course of almost 100 episodes, we’ve met a lot of companies and experts covering a broad spectrum of topics fundamental to our industry. This panel will highlight the key developments that have defined the immersive audio industry over the past decade, reflect on current trends and look forward to the future.

Our expert guests will include Audioscenic Limited, covering binaural audio over speakers on consumer devices, and HOLOPLOT, discussing spatial audio for large-scale immersive events. We’ll moderate the panel as well as discuss our own work in interactive live performance in immersive spaces and in spatial and interactive sound design for immersive media production.

Audioscenic – https://www.audioscenic.com/

HOLOPLOT – https://holoplot.com/

If you’d like to learn more about our panelists, go to your favourite podcast app or our website and check out Episodes 75/76 featuring Audioscenic (Binaural Audio Over Speakers) and Episodes 77/85 featuring HOLOPLOT (3D Audio-Beamforming and Wave Field Synthesis).

Our audience will receive a comprehensive overview of what defines immersive audio as a whole in the modern multifaceted world of digital media. We’ll provide an overview of core technologies, formats and distribution platforms along with associated challenges and opportunities. Attentive audience members will get a range of unique perspectives from panel experts that can be helpful for education and business development.

This panel is aimed at Sound Engineers, Sound Designers, Musicians, Mixers, Gamers, Writers, Audio Tech Consumers, Academics, Immersive Content Makers and general Tech Enthusiasts…

If you’re planning to attend and would like to arrange to meet us, please get in touch via [email protected]

The panel will be recorded and released on our podcast channel as Episode 100.

We’d like to thank all our listeners who voted for us, and we appreciate your continued support!

Listen to Podcast

Show Notes

Panel Details, State of Play of Immersive Audio: Past, Present & Future – https://schedule.sxsw.com/2024/events/PP132288

Episode 75 Audioscenic Binaural Audio Over Speakers (Part 1) – https://immersiveaudiopodcast.com/episode-75-audioscenic-binaural-audio-over-speakers-part-1/

Episode 76 Audioscenic Binaural Audio Over Speakers (Part 2) – https://immersiveaudiopodcast.com/episode-76-audioscenic-binaural-audio-over-speakers-part-2/

Episode 77 HOLOPLOT (3D Audio-Beamforming and Wave Field Synthesis) – https://immersiveaudiopodcast.com/episode-77-holoplot-3d-audio-beamforming-and-wave-field-synthesis/

Episode 85 Roman Sick (HOLOPLOT) – https://immersiveaudiopodcast.com/episode-85-roman-sick-holoplot/

Episode 87 Lorenzo Picinali (Imperial College London)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by academic and researcher Lorenzo Picinali of Imperial College London, United Kingdom.

Lorenzo Picinali is a Reader at Imperial College London, leading the Audio Experience Design team. His research focuses on spatial acoustics and immersive audio, looking at perceptual and computational matters as well as real-life applications. In recent years, Lorenzo has worked on projects related to spatial hearing and rendering, hearing aid technologies, and acoustic virtual and augmented reality. He has also been active in the field of eco-acoustic monitoring, designing autonomous recorders and using audio to better understand humans’ impact on remote ecosystems.

Lorenzo talks about the breadth of spatial audio research under his leadership of the Audio Experience Design group, and we discuss the recently published SONICOM HRTF Dataset, developed to improve personalised listening experiences.
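An HRTF dataset such as SONICOM packages pairs of head-related impulse responses (HRIRs) measured at many directions around a listener’s head; binaural rendering then amounts to convolving a mono source with the left- and right-ear impulse responses for the desired direction. A minimal sketch, using synthetic impulse responses as stand-ins for measured data (illustrative only, not the SONICOM tooling):

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono source with a left/right HRIR pair to get binaural stereo."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=1)

# Toy HRIRs for a source off to the listener's left: the right ear hears
# the sound slightly later (interaural time difference) and quieter
# (interaural level difference). Measured HRIRs also encode the spectral
# cues from the pinna, head and torso that these stand-ins lack.
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[30] = 0.5   # ~0.6 ms later at 48 kHz

click = np.zeros(256); click[0] = 1.0
stereo = binaural_render(click, hrir_l, hrir_r)   # shape (319, 2)
```

Personalisation, the focus of the SONICOM work, comes from choosing (or predicting) the HRIR pair that best matches the individual listener rather than a generic dummy head.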

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer, check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

Lorenzo Picinali – https://www.imperial.ac.uk/people/l.picinali

Imperial College London – https://www.imperial.ac.uk/

Audio Experience Design – https://www.axdesign.co.uk/

SONICOM Website – http://www.sonicom.eu

The SONICOM HRTF Dataset – https://www.axdesign.co.uk/publications/the-sonicom-hrtf-dataset

The SONICOM HRTF Dataset AES Paper – https://www.aes.org/e-lib/browse.cfm?elib=22128

Immersive Audio Demonstration – https://www.youtube.com/watch?v=FWmKNNQpZJA&t=2s

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 86 Daniel Higgott & Luke Swaffield (Innovate Audio)

Summary

This episode is sponsored by Innovate Audio. Innovate Audio offers a range of software-based spatial audio processing tools. Their latest product, panLab Console, is a macOS application that adds 3D spatial audio rendering capabilities to live audio mixing consoles, including popular models from Yamaha, Midas and Behringer. This means you can achieve an object-based audio workflow, utilising the hardware you already own. Immersive Audio Podcast listeners can get an exclusive 20% discount on all panLab licences; use code Immersive20 at checkout. Find out more at innovateaudio.co.uk *Offer available until June 2024.*

In this episode of the Immersive Audio Podcast, Oliver Kadel is joined by Luke Swaffield, theatre sound designer and engineer at Autograph Sound, and Daniel Higgott, co-founder and lead developer at Innovate Audio, both from London, United Kingdom.

Daniel Higgott is a software developer and sound engineer. He has worked in live audio for over 15 years and has a career spanning theatre, music and live events. He has worked as a sound operator, mixing major West End musicals and touring productions. Daniel has also worked as a sound designer, programmer and associate sound designer for a variety of companies. Daniel is an experienced macOS and iOS software developer. Through his work with Innovate Audio, his software is installed on over 10,000 devices worldwide. Daniel specialises in software for the live events industries and enjoys being able to use his own software tools alongside his work in live audio. Daniel has twin toddlers, and when he’s not working he loves being able to spend time with them and watch them grow up. Daniel Higgott trained at the Liverpool Institute for Performing Arts.

Recent creative credits include the following. Sound Design: Frameless Immersive Art Gallery (Marble Arch), When the Rain Stops Falling (RADA), Jack and the Beanstalk (Hackney Empire 2021), When Darkness Falls (Park Theatre and UK Tour), In the Mood (Mountview), Candide (Mountview), Dick Whittington (Hackney Empire 2019), Aladdin (Hackney Empire 2018), Growl (UK Tour), A Spoonful of Sherman (UK and Ireland Tour). Associate Sound Design: Moulin Rouge! The Musical (Piccadilly Theatre, London), Epochal Banquet (Dubai Expo 2020), Secret Cinema Presents: Casino Royale, Moll Flanders (Mercury Theatre, Colchester), Secret Cinema X: Tell No One – The Handmaiden, Our Country’s Good (Royal Alexandra Theatre, Toronto), Good People (Noel Coward, London). Audio Programmer: Secret Cinema Presents: Romeo & Juliet, Secret Cinema Presents: Moulin Rouge, Secret Cinema Presents: 28 Days Later, Secret Cinema: Tell No One – Dr. Strangelove, All My Sons (Apollo Theatre, London).

Luke Swaffield has worked on a variety of theatre productions and events as a Sound Designer for the past 15 years. Luke has extensive audio and show control programming experience and, alongside the team at Autograph Sound, has developed systems to provide audio and control data across large sites for complex productions. Luke specialises in large-scale immersive and non-linear performances, and some of his past work as a Sound Designer includes ‘Peaky Blinders: The Rise’ (London & Riyadh), ‘Saw: The Escape Experience’, ‘Dr Who Time Fracture’ (London) and ‘Monopoly Life-Sized’ (London & Riyadh). Since 2016 Luke has also worked extensively as a Sound Designer and Show Control Programmer for Secret Cinema’s events in bespoke locations around London and abroad, both indoors and outdoors. Luke’s previous credits for Secret Cinema include ‘Guardians of the Galaxy’ (London), ‘Bridgerton’ (London), ‘Stranger Things’ (London & LA), ‘Casino Royale’ (London & Shanghai), ‘Romeo & Juliet’, ‘Blade Runner’, ‘Moulin Rouge’, ‘Project Spring’, ‘Dirty Dancing’, ‘28 Days Later’ and ‘Dr Strangelove’. Luke has also supported other companies with immersive events, including as Sound Designer on Les Enfants Terribles’ ‘Stella Artois Time Portal’, as Audio Producer on DotDotDot’s immersive production of ‘Jeff Wayne’s War of The Worlds’, and as Sound Designer with creative agency Collider on ‘League of Legends European 10th Anniversary’ at the Excel Centre, London.
In the theatre world, Luke’s past Sound Design credits include: ‘The Curious Case of Benjamin Button The Musical’ (Southwark Playhouse Elephant, London), ‘TRAPLORD’ (180 Strand, London), ‘Pride & Prejudice* (*Sort Of)’ (Criterion Theatre, London), ‘Anything Goes’ (The Other Palace, London), ‘Parade’ (The Other Palace, London), ‘Shakespeare’s Rose Theatre’ (Blenheim Palace), ‘The Limit’ (Embassy Theatre, London), ‘Catch Me’ (Embassy Theatre, London), ‘Forgotten’ (Arcola, London), ‘The Full Monty’ (UK Tour), ‘The Legend of Sleepy Hollow’ (The Other Palace, London), ‘Big Fish The Musical’ (The Other Palace, London), ‘Billy The Kid’ (Leicester Curve), ‘Stay Awake, Jake’ (The Vaults Waterloo, London) and ‘The Wasp’ (Trafalgar Studios, London). Luke is a full-time member of the Sound Design department at Autograph Sound. Autograph Sound is Europe’s leading sound design and supply company with an unparalleled 45-year history. They are currently designing and/or supplying, amongst others, ‘Moulin Rouge’, ‘Frozen’, ‘Hamilton’, ‘Guys & Dolls’, ‘Cabaret’, ‘Les Misérables’ and ‘Harry Potter and the Cursed Child’. For a full list of current and past productions please visit www.autograph.co.uk.

Dan and Luke talk about the world of theatre sound and immersive live events, and the rapid adoption of spatial audio, which offers creative opportunities and calls for innovative solutions.

Listen to Podcast

Newsboard

Our friends at the Sennheiser Ambeo Mobility team are on the lookout for a Senior Audio Engineer, check out the link for more details – https://www.linkedin.com/jobs/view/3725223871/

Show Notes

Daniel Higgott LinkedIn – https://www.linkedin.com/in/daniel-higgott-62b0341b7

Luke Swaffield – https://www.linkedin.com/in/luke-swaffield-0a278482/

Innovate Audio Website – https://innovateaudio.co.uk/

Autograph Website – https://www.autograph.co.uk/

panLab Console – https://innovateaudio.co.uk/panlab-console/

panLab 3 – https://innovateaudio.co.uk/panlab/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 83 SXSW 2024 Panel Picker Announcement

Summary

SXSW 2024 Panel Picker Announcement 📢 “State of Play of Immersive Audio: Past, Present & Future”

It’s been almost six years since we started the Immersive Audio Podcast, and as we approach our 100th release we wanted to mark this milestone with a special edition at SXSW 2024.

Over the course of almost 100 episodes, we’ve met a lot of companies and experts covering a broad spectrum of topics fundamental to our industry. This panel will highlight the key developments that have defined the immersive audio industry over the past decade, reflect on current trends and look forward to the future. Our four expert guests and moderators (Audioscenic, HOLOPLOT, 1.618 DIGITAL and Monica Bolles) will cover the key sectors: large-scale immersive events, interactive live performance, spatial audio for consumer devices, virtual training for VR and immersive media production.

Listen to Podcast

Show Notes

Please support our idea and give us your vote!

Voting link -> https://panelpicker.sxsw.com/vote/132288

Voting deadline: midnight, 20th of August 2023.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 82 Les Stuck (Meow Wolf)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Monica Bolles is joined by Les Stuck, musician and Senior Sound Technologist at Meow Wolf, from New Mexico, US.

Les began working in spatial audio while working for the Ensemble Modern and the Frankfurt Ballet in Frankfurt, Germany. He designed the touring six-channel sound system for Frank Zappa’s Yellow Shark tour, which included a 6-channel ring microphone. He then worked at IRCAM in Paris, where he built several spatializers in Max/FTS – a 6-channel version for the premiere of Pierre Boulez’s …explosante-fixe…, an unusual 8-channel version specifically adapted to classical opera houses for Philippe Manoury’s opera 60e Parallèle, and a signal-controlled panner that allowed extremely fast movement. He designed a 7-channel sound system at Mills College that featured an overhead speaker and built a variety of spatializers for students and guest composers. To celebrate the 50th anniversary of John Chowning’s seminal work on the digital simulation of sound spatialization, Les realized a version of his algorithm for release with Max/MSP in 2021, including panned reverb and the Doppler effect, all controlled at signal rate. Currently, Les works at Meow Wolf, where he designs interactive sound installations and acoustical treatments. He has developed several spatial plugins for Ableton Live, which typically include a binaural output to preview the results in headphones before going on-site. He led a collaboration with Spatial, Inc. for Meow Wolf’s installation at South by Southwest and did extensive testing of HOLOPLOT speakers for a future Meow Wolf project.

Les talks about his extensive career working with spatial audio since the 1980s, including projects with Frank Zappa, IRCAM and Cycling ’74, and we dive into the topic of interactive spatial audio for physical installations.
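The Chowning-style spatialization mentioned above combines a handful of simple cues, all drivable at signal rate: amplitude panning for direction, inverse-distance gain, and a distance-dependent propagation delay that produces Doppler shift for a moving source. A minimal sketch under simplifying assumptions (stereo equal-power panning and a bare-bones delay line; this is a generic illustration, not Les’s Max/MSP implementation, and it omits the panned reverb his version includes):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second, at room temperature

def spatialize(mono, sr, azimuth, distance):
    """Pan a mono signal along a moving trajectory, Chowning-style.

    `azimuth` (radians, -pi/2 = hard left, +pi/2 = hard right) and
    `distance` (metres) are per-sample arrays, so the motion is
    controlled at signal rate. Returns an (n, 2) stereo array.
    """
    n = len(mono)
    t = np.arange(n)
    # Variable propagation delay in samples: as the source approaches,
    # the delay shrinks and the resampling raises the pitch (Doppler).
    delay = distance / SPEED_OF_SOUND * sr
    delayed = np.interp(t - delay, t, mono, left=0.0, right=0.0)
    # Inverse-distance amplitude cue, clamped to avoid blow-up near zero.
    gain = 1.0 / np.maximum(distance, 1.0)
    # Equal-power stereo panning derived from azimuth.
    pan = (azimuth + np.pi / 2) / np.pi          # map to 0..1
    left = delayed * gain * np.cos(pan * np.pi / 2)
    right = delayed * gain * np.sin(pan * np.pi / 2)
    return np.stack([left, right], axis=1)
```

A source swept from hard left to hard right while approaching the listener pans across the image, grows louder and rises slightly in pitch; a fuller treatment would also vary the direct-to-reverberant ratio with distance, as Chowning’s design does.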

Listen to Podcast

Show Notes

Les Stuck Website –  https://www.lesstuck.com/

LinkedIn – https://www.linkedin.com/in/lesstuckartandtechnology/

Meow Wolf Website – https://meowwolf.com/

QSYS – https://www.qsys.com/products-solutions/q-sys/software/q-sys-designer-software/

Audio for extended realities: A case study informed exposition – https://shorturl.at/gjxD3

Sound Experience Survey: Fulldome and Planetariums – https://www.ips-planetarium.org/news/632118/IPS-Sound-Survey.htm

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 77 HOLOPLOT (3D Audio-Beamforming and Wave Field Synthesis)

Summary

This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit https://holoplot.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the HOLOPLOT team: Reese Kirsh, Segment Manager for Performing Arts and Live, and Natalia Szczepanczyk, Segment Manager for Immersive and Experiential Applications, along with the award-winning sound designer Gareth Fry. We hold a detailed discussion on HOLOPLOT’s hardware and software capabilities and talk about the recent David Hockney exhibition at Lightroom, where Gareth shares his experience of creating content and working with this paradigm-shifting technology.

Reese Kirsh has been working within the performing arts sector for over a decade in various roles, including Head of Sound for some of the largest West End and Broadway productions, before joining HOLOPLOT as Performing Arts Segment Manager. He’s very aware of the narrative around immersive and what it means to deliver the right tech to empower creative content rather than distract from it.

Natalia Szczepanczyk is the Segment Manager for Immersive and Experiential Applications at HOLOPLOT. She has a design and consultancy background and previously worked with loudspeaker manufacturer Genelec and consultancies Mouchel and Buro Happold. Natalia specialises in audio system design and acoustics for experiential audience experiences within the themed entertainment sectors.

Gareth Fry is a sound designer best known for his cutting-edge work in theatre and his collaborations with many leading UK theatre directors and companies. His work includes over 20 productions at the National Theatre, over 20 at the Royal Court and countless more at venues such as the Bridge Theatre, Old Vic and Young Vic, in the West End and beyond. He has also designed events and exhibitions, from the V&A’s landmark David Bowie Is exhibition to the sound effects for the Opening Ceremony of the 2012 Olympic Games, which Danny Boyle asked him to design, and he has received a number of awards for his work.

Listen to Podcast

Show Notes

HOLOPLOT Official Website – https://holoplot.com/

Reese Kirsh – https://www.linkedin.com/in/reesekirsh/

Natalia Szczepanczyk – https://www.linkedin.com/in/nszcz/

Gareth Fry – https://www.linkedin.com/in/gareth-fry-32b8217/

HOLOPLOT Plan Software – https://holoplot.com/?/software/

Lightroom – https://holoplot.com/lp_lightroom/

Lightroom (David Hockney: Bigger & Closer (not smaller & further away)) – https://lightroom.uk/?gad=1&gclid=Cj0KCQjwsIejBhDOARIsANYqkD269P44zmkGRBKcwg-hRQEfn8FckxGBcBRzJBxTcwxGjmWQ7Rdhl8AaAncTEALw_wcB

The soundscapes of Illuminarium – https://holoplot.com/applications/

HOLOPLOT Official Rental Provider – https://www.ct-group.com/uk/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 76 Audioscenic Binaural Audio Over Speakers (Part 2)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel travels to Southampton, UK, to visit the HQ of Audioscenic, whose mission is to revolutionise binaural audio over speakers and bring it to the masses. Their team has developed technology that uses real-time head tracking and sound-field control to create virtual headphones that render personal 3D audio to the listener’s position. Their first commercial success came in the form of a partnership with Razer and the subsequent release of the Leviathan V2 Pro soundbar, announced in early 2023 at CES. Since then, their technology has received a number of tech awards and, above all, the support of the user community. We sat down with the core team members and early adopters to find out where it all started and where it is heading.

Listen to Podcast

Marcos Simón (Co-Founder, CTO) Marcos Simón graduated in 2010 from the Technical University of Madrid with a B.Sc. in telecommunications. In 2011, he joined the Institute of Sound and Vibration Research, where he worked with loudspeaker arrays for sound field control and 3D audio rendering, and on the modelling of cochlear mechanics. He obtained his PhD in 2014, and between 2014 and 2019 he was part of the S3A Research Programme “Future Spatial Audio for an Immersive Listening Experience at Home”. In 2019 he co-founded Audioscenic to commercialise innovative listener-adaptive audio technologies, where he currently works as Chief Technical Officer. Since the company’s creation, Marcos has led the vision of Audioscenic and established himself as a nexus between the commercial and technical worlds for the start-up, ensuring that the technology continually evolves and that customers understand exactly what it makes possible.

Professor Filippo Fazi (Co-Founder/Chief Scientist) Prof Fazi is the co-founder and Chief Scientist of Audioscenic Ltd, where he leads the scientific development of new audio technologies and contributes to the company’s strategic decisions. He is also a Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research (ISVR) of the University of Southampton, where he heads the Acoustics Group and the Virtual Acoustics and Audio Engineering teams. He has also served as Director of Research at the Institute and sits on the Intellectual Property panel of the Faculty of Engineering and Physical Sciences. He is an internationally recognised expert in audio technologies, electroacoustics and digital signal processing, with a special focus on 3D audio, acoustical inverse problems, multi-channel systems and acoustic arrays. He is the author of more than 160 scientific publications and co-inventor of Audioscenic’s patented or patent-pending technologies. Prof Fazi graduated in Mechanical Engineering from the University of Brescia (Italy) in 2005 with a master’s thesis on room acoustics, and obtained his PhD in Acoustics from the ISVR in 2010 with a thesis on sound field reproduction. He was awarded a research fellowship by the Royal Academy of Engineering in 2010 and the Tyndall Medal by the Institute of Acoustics in 2018. He is a fellow of the Audio Engineering Society and a member of the Institute of Acoustics.

David Monteith (CEO) David is the Chief Executive Officer of Audioscenic Ltd, responsible for the strategic direction of the business. He holds a Master’s degree in Physics, specialising in Opto-Electronics, and an MBA. David began his career developing optical fibre components before joining EMI Central Research Laboratories, where he led the creation of the spin-out Sensaura Ltd. The company’s 3D Audio technology shipped with the Microsoft Xbox and on over 500 million PCs, and the Sensaura business was later sold to Creative Labs. In 2001 David was part of the Sensaura team that received the Royal Academy of Engineering MacRobert Award for innovation in engineering. In 2003 David founded Sonaptic Ltd. As its CEO, he led the company to license its audio technology to mobile phone vendors and portable games platforms such as the Sony PSP. Sonaptic was sold to Wolfson Semiconductors in 2007, after which David held the position of VP of Business Development at Wolfson, bringing to market the first Wolfson ANC chip featuring the Sonaptic technology. From 2010 to 2016 David was founder and CEO of Incus Laboratories, which developed and licensed its novel digital ANC technology to companies such as Yamaha before being acquired by AMS AG. In 2019 David joined Audioscenic, working with Marcos and Filippo to raise the initial seed investment.

Daniel Wallace (R&D Lead) Daniel studied acoustical engineering at the University of Southampton ISVR, graduating in 2016, then started a PhD at the Centre for Doctoral Training in Next-Generation Computational Modelling. His PhD project was on multi-zone sound field control, specifically for producing private listening zones. Since joining Audioscenic as R&D Lead in 2021, he has turned ideas into products. Daniel firmly believes that for the technology to be successfully deployed into products, the user experience must be flawless; this means testing lots of edge cases in code and in the lab to make sure that when users sit down in front of the soundbar, it just works and gives them an amazing impression.

Joe Guarini (Creative Director) Joe is a sound designer who has specialised in 3D audio content creation for over ten years. He won the jury prize for best binaural sound design in the 2014 Mixage Fou international sound competition, in addition to having his work featured in video games, film trailers, television commercials and demonstrations at CES. Joe has been working with the Audioscenic team since 2017 to provide listeners with sounds that highlight the immersive qualities of the audio hardware. His contributions include creating computer game experiences where players can walk through, and interact with, sounds in 3D space. Joe’s passion is helping people see the full capabilities of 3D audio technology, which is why he chose to join forces with Audioscenic.

Martin Rieger (VRTonung) Martin Rieger is a sound engineer with years of experience in immersive audio. His studio VRTonung specialises in 360° sound recordings and 3D audio post-production, making him the auditory contact point for the complete realisation of XR projects, from creative storytelling at the beginning through to the technical implementation. He also runs one of the most dedicated blogs on 3D audio at vrtonung.de/blog, setting guidelines for making spatial audio more accessible and building a foundation for the next generation of immersive content. Martin is a certified delegate of the German Institute for Vocational Training, teaching teachers what immersive audio means in various media, with or without visuals, head tracking or interactive elements. This work sets the background for the new job profile “designer for immersive media”.

Show Notes

Audioscenic Official Website – https://www.audioscenic.com/

University of Southampton – https://www.southampton.ac.uk/

Razer Official Website – https://www.razer.com/

Razer Leviathan V2 Pro – https://www.razer.com/gb-en/gaming-speakers/razer-leviathan-v2-pro

Marcos Simón LinkedIn – https://www.linkedin.com/in/drmfsg/

Filippo Fazi LinkedIn – https://www.linkedin.com/in/filippo-fazi-4a822443/

David Monteith LinkedIn – https://www.linkedin.com/in/david-monteith-8a66221/

Daniel Wallace LinkedIn – https://www.linkedin.com/in/danielwallace42/

Joe Guarini LinkedIn – https://www.linkedin.com/in/joseph-guarini-695b8053/

Martin Rieger LinkedIn – https://www.linkedin.com/in/martin-rieger/

VRTonung – https://www.vrtonung.de/en/blog/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 75 Audioscenic Binaural Audio Over Speakers (Part 1)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel travels to Southampton, UK, to visit the HQ of Audioscenic, whose mission is to bring binaural audio over speakers to the masses. Their team developed technology that uses real-time head tracking and sound-field control to create virtual headphones that render personal 3D Audio at the listener’s position. Their first commercial success came in the form of a partnership with Razer and the subsequent release of the Leviathan V2 Pro Soundbar, which was announced in early 2023 at CES. Since then, their technology has received a number of tech awards and, most importantly, the support of the user community. We sat down with the core team members and early adopters to find out where it all started and where it is heading.

Listen to Podcast

Marcos Simón (Co-Founder, CTO)

Marcos Simón graduated in 2010 from the Technical University of Madrid with a B.Sc. in Telecommunications. In 2011, he joined the Institute of Sound and Vibration Research, where he worked with loudspeaker arrays for sound field control and 3D audio rendering, as well as on the modelling of cochlear mechanics. He obtained his PhD in 2014, and between 2014 and 2019 he was part of the S3A Research Programme “Future Spatial Audio for an Immersive Listening Experience at Home”. In 2019 he co-founded Audioscenic to commercialise innovative listener-adaptive audio technologies, where he currently works as Chief Technical Officer. Since the company’s creation, Marcos has led the vision of Audioscenic and established himself as a nexus between the commercial and technical worlds for the start-up, ensuring that the technology is continually evolving and that customers understand exactly what it makes possible.

Professor Filippo Fazi (Co-Founder/Chief Scientist)

Prof Fazi is the co-founder and Chief Scientist of Audioscenic Ltd, where he leads the scientific development of new audio technologies and contributes to the company’s strategic decisions. He is also a Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research (ISVR) of the University of Southampton, where he heads the Acoustics Group and the Virtual Acoustics and Audio Engineering teams. He has also served as Director of Research at the Institute and sits on the Intellectual Property panel of the Faculty of Engineering and Physical Sciences. He is an internationally recognised expert in audio technologies, electroacoustics and digital signal processing, with a special focus on 3D audio, acoustical inverse problems, multi-channel systems and acoustic arrays. He is the author of more than 160 scientific publications and co-inventor of Audioscenic’s patented or patent-pending technologies. Prof Fazi graduated in Mechanical Engineering from the University of Brescia (Italy) in 2005 with a master’s thesis on room acoustics, and obtained his PhD in Acoustics from the ISVR in 2010 with a thesis on sound field reproduction. He was awarded a research fellowship by the Royal Academy of Engineering in 2010 and the Tyndall Medal by the Institute of Acoustics in 2018. He is a fellow of the Audio Engineering Society and a member of the Institute of Acoustics.

David Monteith (CEO)

David is the Chief Executive Officer of Audioscenic Ltd, responsible for the strategic direction of the business. He holds a Master’s degree in Physics, specialising in Opto-Electronics, and an MBA. David began his career developing optical fibre components before joining EMI Central Research Laboratories, where he led the creation of the spin-out Sensaura Ltd. The company’s 3D Audio technology shipped with the Microsoft Xbox and on over 500 million PCs, and the Sensaura business was later sold to Creative Labs. In 2001 David was part of the Sensaura team that received the Royal Academy of Engineering MacRobert Award for innovation in engineering. In 2003 David founded Sonaptic Ltd. As its CEO, he led the company to license its audio technology to mobile phone vendors and portable games platforms such as the Sony PSP. Sonaptic was sold to Wolfson Semiconductors in 2007, after which David held the position of VP of Business Development at Wolfson, bringing to market the first Wolfson ANC chip featuring the Sonaptic technology. From 2010 to 2016 David was founder and CEO of Incus Laboratories, which developed and licensed its novel digital ANC technology to companies such as Yamaha before being acquired by AMS AG. In 2019 David joined Audioscenic, working with Marcos and Filippo to raise the initial seed investment.

Daniel Wallace (R&D Lead)

Daniel studied acoustical engineering at the University of Southampton ISVR, graduating in 2016, then started a PhD at the Centre for Doctoral Training in Next-Generation Computational Modelling. His PhD project was on multi-zone sound field control, specifically for producing private listening zones. Since joining Audioscenic as R&D Lead in 2021, he has turned ideas into products. Daniel firmly believes that for the technology to be successfully deployed into products, the user experience must be flawless; this means testing lots of edge cases in code and in the lab to make sure that when users sit down in front of the soundbar, it just works and gives them an amazing impression.

Joe Guarini (Creative Director)

Joe is a sound designer who has specialised in 3D audio content creation for over ten years. He won the jury prize for best binaural sound design in the 2014 Mixage Fou international sound competition, in addition to having his work featured in video games, film trailers, television commercials and demonstrations at CES. Joe has been working with the Audioscenic team since 2017 to provide listeners with sounds that highlight the immersive qualities of the audio hardware. His contributions include creating computer game experiences where players can walk through, and interact with, sounds in 3D space. Joe’s passion is helping people see the full capabilities of 3D audio technology, which is why he chose to join forces with Audioscenic.

Martin Rieger (VRTonung)

Martin Rieger is a sound engineer with years of experience in immersive audio. His studio VRTonung specialises in 360° sound recordings and 3D audio post-production, making him the auditory contact point for the complete realisation of XR projects, from creative storytelling at the beginning through to the technical implementation. He also runs one of the most dedicated blogs on 3D audio at vrtonung.de/blog, setting guidelines for making spatial audio more accessible and building a foundation for the next generation of immersive content. Martin is a certified delegate of the German Institute for Vocational Training, teaching teachers what immersive audio means in various media, with or without visuals, head tracking or interactive elements. This work sets the background for the new job profile “designer for immersive media”.

Show Notes

Audioscenic Official Website – https://www.audioscenic.com/

University of Southampton – https://www.southampton.ac.uk/

Razer Official Website – https://www.razer.com/

Razer Leviathan V2 Pro – https://www.razer.com/gb-en/gaming-speakers/razer-leviathan-v2-pro

Marcos Simón LinkedIn – https://www.linkedin.com/in/drmfsg/

Filippo Fazi LinkedIn – https://www.linkedin.com/in/filippo-fazi-4a822443/

David Monteith LinkedIn – https://www.linkedin.com/in/david-monteith-8a66221/

Daniel Wallace LinkedIn – https://www.linkedin.com/in/danielwallace42/

Joe Guarini LinkedIn – https://www.linkedin.com/in/joseph-guarini-695b8053/

Martin Rieger LinkedIn – https://www.linkedin.com/in/martin-rieger/

VRTonung – https://www.vrtonung.de/en/blog/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.