Episode 75 Audioscenic Binaural Audio Over Speakers (Part 1)


Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel travels to Southampton, UK, to visit the HQ of Audioscenic, whose mission is to bring binaural audio over speakers to the masses. Their team developed technology that uses real-time head tracking and sound-field control to create virtual headphones that render personal 3D audio at the listener’s position. Their first commercial success came in the form of a partnership with Razer and the subsequent release of the Leviathan V2 Pro soundbar, announced at CES in early 2023. Since then, their technology has received a number of tech awards and, above all, the support of the user community. We sat down with the core team members and early adopters to find out where it all started and where it is heading.
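Audioscenic’s actual processing is proprietary, but the general principle behind “virtual headphones” over speakers is crosstalk cancellation: each ear hears both speakers, so a network of inverse filters is designed so that the left programme arrives (mostly) only at the left ear, and likewise for the right. Purely as an illustration of that idea, the sketch below designs per-frequency-bin inverse filters for a hypothetical two-speaker, two-ear plant; all the numbers here (path delays, crosstalk gain, regularisation) are invented for the example.

```python
import numpy as np

def xtc_filters(fs=48_000, n_fft=512, d_ipsi=0.0029, d_contra=0.0032,
                g_contra=0.7, beta=1e-3):
    """Toy crosstalk-cancellation design for a hypothetical 2x2 acoustic plant.

    At each frequency bin the plant H maps the two speaker signals to the two
    ear signals; the cancellation network C is a Tikhonov-regularised inverse,
    so H @ C is close to the identity (left programme -> left ear only).
    """
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    C = np.zeros((len(freqs), 2, 2), dtype=complex)
    for k, f in enumerate(freqs):
        w = 2j * np.pi * f
        ipsi = np.exp(-w * d_ipsi)                  # direct (same-side) path
        contra = g_contra * np.exp(-w * d_contra)   # crosstalk (opposite-ear) path
        H = np.array([[ipsi, contra],
                      [contra, ipsi]])
        # Regularised least-squares inverse: (H^H H + beta I)^-1 H^H.
        # beta keeps the filters bounded where H is ill-conditioned.
        C[k] = np.linalg.solve(H.conj().T @ H + beta * np.eye(2), H.conj().T)
    return freqs, C
```

In a real system the plant matrix comes from measured or modelled HRTFs, and the head tracker continually updates the delays and gains as the listener moves, which is what keeps the cancellation locked to the listener’s position.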

Listen to Podcast

Marcos Simón (Co-Founder, CTO)

Marcos Simón graduated in 2010 from the Technical University of Madrid with a B.Sc. in telecommunications. In 2011, he joined the Institute of Sound and Vibration Research, where he worked with loudspeaker arrays for sound field control and 3D audio rendering, as well as on the modelling of cochlear mechanics. He obtained his PhD in 2014, and between 2014 and 2019 he was part of the S3A Research Programme “Future Spatial Audio for an Immersive Listening Experience at Home”. In 2019 he co-founded Audioscenic to commercialise innovative listener-adaptive audio technologies, where he currently works as Chief Technical Officer. Since the company’s creation, Marcos has led the vision of Audioscenic and established himself as a nexus between the commercial and technical worlds for the start-up, ensuring that the technology continually evolves and that customers understand exactly what it makes possible.

Professor Filippo Fazi (Co-Founder/Chief Scientist)

Prof Fazi is the co-founder and Chief Scientist of Audioscenic Ltd, where he leads the scientific development of new audio technologies and contributes to the company’s strategic decisions. He is also a Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research (ISVR) of the University of Southampton, where he is Head of the Acoustics Group and the Virtual Acoustics and Audio Engineering teams. He also served as Director of Research at the Institute and sits on the Intellectual Property panel of the Faculty of Engineering and Physical Sciences. He is an internationally recognised expert in audio technologies, electroacoustics and digital signal processing, with a special focus on 3D audio, acoustical inverse problems, multi-channel systems, and acoustic arrays. He is the author of more than 160 scientific publications and co-inventor of Audioscenic’s patented or patent-pending technologies. Prof Fazi graduated in Mechanical Engineering from the University of Brescia (Italy) in 2005 with a master’s thesis on room acoustics. He obtained his PhD in acoustics from the Institute of Sound and Vibration Research in 2010, with a thesis on sound field reproduction. Prof Fazi was awarded a research fellowship by the Royal Academy of Engineering in 2010 and the Tyndall Medal by the Institute of Acoustics in 2018. He is a fellow of the Audio Engineering Society and a member of the Institute of Acoustics.

David Monteith (CEO)

David is the Chief Executive Officer of Audioscenic Ltd, responsible for the strategic direction of the business. David holds a Master’s degree in Physics (Opto-Electronics) and an MBA. He began his career developing optical fibre components before joining EMI Central Research Laboratories, where he led the creation of the spin-out Sensaura Ltd. The company’s 3D audio technology shipped with the Microsoft Xbox and on over 500 million PCs. The Sensaura business was sold to Creative Labs. In 2001 David was part of the Sensaura team that received the Royal Academy of Engineering MacRobert Award for innovation in engineering. In 2003 David founded Sonaptic Ltd. In his role as CEO, David led the company to licence its audio technology to mobile phone vendors and portable games platforms such as the Sony PSP. Sonaptic was sold to Wolfson Semiconductors in 2007. David then held the VP of Business Development position at Wolfson, bringing to market the first Wolfson ANC chip featuring the Sonaptic technology. From 2010 to 2016 David was CEO and founder of Incus Laboratories. Incus developed and licensed its novel digital ANC technology to companies such as Yamaha before being acquired by AMS AG. In 2019 David joined Audioscenic, working with Marcos and Filippo to raise the initial seed investment.

Daniel Wallace (R&D Lead)

Daniel studied acoustical engineering at the University of Southampton ISVR, graduating in 2016, then started a PhD at the Centre for Doctoral Training in Next-Generation Computational Modelling. His PhD project was on multi-zone sound field control, specifically for producing private listening zones. Since joining Audioscenic as R&D Lead in 2021, he’s turned ideas into products. Daniel firmly believes that for their technology to be successfully deployed into products, the user experience must be flawless; this means testing lots of edge cases in code and in the lab to make sure that when users sit down in front of the soundbar, it just works and gives them an amazing impression.

Joe Guarini (Creative Director)

Joe is a sound designer who has specialised in 3D audio content creation for over ten years.  He won the jury prize for best binaural sound design in the 2014 Mixage Fou international sound competition, in addition to having his works featured in video games, film trailers, television commercials, and demonstrations at CES.  Joe has been working with the Audioscenic team since 2017 to provide listeners with sounds that highlight the immersive qualities of the audio hardware.  His contributions include creating computer game experiences where players can walk through, and interact with, sounds in 3D space.  Joe’s passion is helping people see the full capabilities of 3D audio technology, which is why he chose to join forces with Audioscenic.

Martin Rieger (VRTonung)

Martin Rieger is a sound engineer with years of experience in immersive audio. His studio VRTonung specialises in 360° sound recordings and 3D audio post-production, making him the auditory point of contact for the complete realisation of XR projects, from creative storytelling at the start through to the technical implementation. He also runs the most dedicated blog on 3D audio, vrtonung.de/blog, setting guidelines for making spatial audio more accessible and building a foundation for the next generation of immersive content. Martin is a certified delegate of the German Institute for Vocational Training, teaching teachers what immersive audio means across various media, with or without visuals, head tracking or interactive elements. This work will set the background for the new job profile “designer for immersive media”.

Show Notes

Audioscenic Official Website – https://www.audioscenic.com/

University of Southampton – https://www.southampton.ac.uk/

Razer Official Website – https://www.razer.com/

Razer Leviathan V2 Pro – https://www.razer.com/gb-en/gaming-speakers/razer-leviathan-v2-pro

Marcos Simón LinkedIn – https://www.linkedin.com/in/drmfsg/

Filippo Fazi LinkedIn – https://www.linkedin.com/in/filippo-fazi-4a822443/

David Monteith LinkedIn – https://www.linkedin.com/in/david-monteith-8a66221/

Daniel Wallace LinkedIn – https://www.linkedin.com/in/danielwallace42/

Joe Guarini LinkedIn – https://www.linkedin.com/in/joseph-guarini-695b8053/

Martin Rieger – https://www.linkedin.com/in/martin-rieger/

VRTonung – https://www.vrtonung.de/en/blog/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 74 Agnieszka Roginska (NYU)

Summary

This episode is sponsored by Spatial, the immersive audio software that gives a new dimension to sound. Spatial gives creators the tools to create interactive soundscapes using its powerful 3D authoring tool, Spatial Studio. Their software modernises traditional channel-based audio by rethinking how we hear and feel immersive experiences anywhere. To find out more, go to https://www.spatialinc.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Professor of Music Technology at NYU – Agnieszka Roginska, from New York, US.

Agnieszka Roginska is a Professor of Music Technology at New York University. She conducts research in the simulation and applications of immersive and 3D audio, including the capture, analysis and synthesis of auditory environments. Applications of her work include AR/VR/XR, gaming, mission-critical, and augmented acoustic sensing. She is the author of numerous publications on the topics of acoustics and psychoacoustics of immersive audio. Agnieszka is a Fellow of the Audio Engineering Society (AES) and a Past-President of the AES. She is the faculty sponsor of the Society for Women in TeCHnology (SWiTCH) at NYU.

Agnieszka speaks about the importance of the Audio Engineering Society and its education initiatives for underrepresented communities, as well as her involvement in a wide spectrum of research and publishing activities on spatial audio.

Listen to Podcast

https://open.spotify.com/episode/4rMdK2RQdzm2W8QeK5buzE?si=7f70288c9aff4e23

Show Notes

Agnieszka Roginska LinkedIn – https://www.linkedin.com/in/agnieszka-roginska-784a07/

NYU Official Website – https://www.nyu.edu/

NYU Music Technology Program – https://steinhardt.nyu.edu/programs/music-technology

AES Official Website – https://aes2.org/

Designing Effective Playful Collaborative Science Learning in VR – https://link.springer.com/chapter/10.1007/978-3-031-15325-9_3

Insight into postural control in unilateral sensorineural hearing loss and vestibular hypofunction – https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0276251

Multilayered Affect-Audio Research System for Virtual Reality Learning Environments – https://nyuscholars.nyu.edu/en/publications/multilayered-affect-audio-research-system-for-virtual-reality-lea

Methodology for perceptual evaluation of plausibility with self-translation of the listener – https://www.aes.org/e-lib/browse.cfm?elib=21874

Sound design and reproduction techniques for co-located narrative VR experiences – https://www.aes.org/e-lib/browse.cfm?elib=20660

Evaluation of Binaural Renderers: Multidimensional Sound Quality Assessment – https://www.aes.org/e-lib/browse.cfm?elib=19694

Immersive Sound: The Art and Science of Binaural and Multi-Channel Audio – Audio Engineering Society Presents (Paperback) – https://www.waterstones.com/book/immersive-sound/agnieszka-roginska/paul-geluso/9781138900004

2023 AES International Conference on Spatial and Immersive Audio – https://aes2.org/events-calendar/2023-aes-international-conference-on-spatial-and-immersive-audio/  

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 70 Michael Plitkins (SPATIAL)

Summary

This episode is sponsored by Spatial, the immersive audio software that gives a new dimension to sound. Spatial gives creators the tools to create interactive soundscapes using its powerful 3D authoring tool, Spatial Studio. Their software modernises traditional channel-based audio by rethinking how we hear and feel immersive experiences, anywhere. To find out more, go to https://www.spatialinc.com.

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Co-Founder and Coder of SPATIAL – Michael Plitkins, from Los Angeles, California. Before founding Spatial, Michael was a founding engineer at Nest, which was ultimately acquired by Google. Michael helped develop a set of groundbreaking smart consumer products for the home, like the Nest Thermostat, and helped define the category we know as Home IoT today. Prior to Nest, he was a founding engineer at Tellme Networks, which was acquired by Microsoft. He also has experience in developing tools and technologies for 3D modelling, animation, VR and graphics. Michael has over 35 patents in UI design, streaming audio, smart home optimization and more.

Michael co-founded Spatial on the idea that sound should not always be linear or channel-based, and he patented the Spatial Reality technology: the Spatial audio rendering platform that allows sound to move, using object-based audio and acoustic physics. Michael also developed key integrations within Spatial’s platform that allow new tools to take advantage of Spatial IP.

Michael shares the story of the creation of SPATIAL as a company which subsequently developed into a multifaceted platform for sound designers. We look at the key elements of software architecture, discuss the most recent case studies featuring Spatial technology and a newly launched educational 101 course for new users.

Listen to Podcast

Show Notes

Michael Plitkins – https://www.linkedin.com/in/michael-plitkins-720341145/

SPATIAL LinkedIn – https://www.linkedin.com/company/spatialinc/

SPATIAL Official Website – https://www.spatialinc.com

SPATIAL Creators Space – https://www.spatialinc.com/creators

SPATIAL 101 (FREE COURSE) – https://guide.spatialinc.com/hc/en-us/categories/8192004472723-Spatial-Studio-101

Immersive Audio Podcast Episode 60 – Ken Felton (SPATIAL) – https://immersiveaudiopodcast.com/episode-60-ken-felton-spatial/

In this episode of the Immersive Audio Podcast, Oliver Kadel, Monica Bolles and Bjørn Jacobsen are joined by Ken Felton, Sound Designer at Spatial, from the San Francisco Bay Area, US. Ken is an Audio Director and Sound Designer with decades of experience in professional audio and a passion for storytelling and immersive soundscapes. Ken started in pro audio by touring North America and running sound reinforcement systems. In 1994 he moved to Northern California and started working with interactive audio at Electronic Arts. Most recently, in 2021, Ken joined Spatial, where he works as a sound designer and brand ambassador. In this episode, we discuss Spatial technology and its entire ecosystem of tools, exploring the breadth of implementation options and some recent case studies.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 68 Markus Zaunschirm (Atmoky)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the CEO of Atmoky – Markus Zaunschirm from Graz, Austria.   

Markus Zaunschirm MSc, PhD is a renowned spatial audio expert and co-founder of Atmoky. During his research, he combined advanced signal processing with the models of human perception and invented a unique method for spatial audio playback for headphones. Building on that technological foundation, Markus co-founded Atmoky, a spatial audio software company with the mission to shape the future of audio in virtual worlds and the metaverse.  

Markus discusses Atmoky’s creation and their philosophy on spatial audio, and we take part in a live interactive demo of their recently released Web SDK, which you can listen to binaurally in this episode.

Listen to Podcast

Show Notes

Markus Zaunschirm – https://www.linkedin.com/in/markus-zaunschirm

Atmoky Webpage – https://atmoky.com

Atmoky Interactive Web Demo – https://demo.atmoky.com

Atmoky Image Video – https://www.youtube.com/watch?v=yqndq1tTfqU

Portal App – https://portal.app

Europe’s Sixth Student 3D Audio Production Competition and virtual finals, held in October 2022 – https://ambisonics.iem.at/s3dapc/2022

One Square Inch – https://onesquareinch.org

Logitech Chorus, off-ear integrated audio for Meta Quest 2 – https://www.logitech.com/en-us/products/vr/chorus-for-meta-quest-2.982-000153.html?fbclid=IwAR2qyFUCwURylLAlAXXBY1GfwASnBzS_pRqqutZLh5AfGp9MDVWpVI8yNxw

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 67 Adam Ganz & Rachel Donnelly (StoryFutures Academy & IWM)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the Head of Screenwriting at Royal Holloway, University of London and Head of the Writers Room at StoryFutures Academy – Professor Adam Ganz – and the Project Manager for the Second World War and Holocaust Partnership Programme at Imperial War Museums – Rachel Donnelly, from London, UK.

Professor Adam Ganz is Head of Screenwriting at Royal Holloway, University of London and Head of the Writers Room at StoryFutures Academy, the UK’s National Centre for Immersive Storytelling. In addition to leading StoryFutures Academy on the ‘One Story, Many Voices’ Project with Imperial War Museum, he designed and ran a project on writing for Immersive Audio with Inua Ellams, Jayde Adams, Georgina Campbell, Fryars and Rae Morris. He was also nominated for best single drama for the BBC for his play The Gestapo Minutes.

Rachel Donnelly is Project Manager for the Second World War and Holocaust Partnership Programme (SWWHPP) at Imperial War Museums. She began working on SWWHPP in early 2020, having previously been the Learning and Audience Advocate for IWM’s new Holocaust Galleries and Holocaust Learning Manager for schools. SWWHPP is a three-year project led by IWM and funded by the National Lottery Heritage Fund to support cultural organisations across the UK in engaging with local communities to share lesser-known stories related to the Second World War and Holocaust. As part of the programme, the cultural organisations, local communities and IWM worked with StoryFutures Academy to create an immersive touring sound installation with stories written by a group of celebrated UK-based writers.

In this episode, Adam and Rachel explain how binaural audio was used to enhance their traditional and immersive storytelling techniques and discuss the ‘One Story, Many Voices’ museum installation case study.

Listen to Podcast

Show Notes

Adam Ganz – https://pure.royalholloway.ac.uk/portal/en/persons/adam-ganz(55937d7c-9684-41f8-9a7f-680f94bd13b1).html

‘One Story, Many Voices’, A StoryFutures Academy Immersive Audio Project with Imperial War Museums – https://www.storyfutures.com/news/one-story-many-voices-a-storyfutures-academy-immersive-audio-project-with-the-imperial-war-museums  

To listen to all of the stories in full from the ‘One story, many voices’ project – https://www.storyfutures.com/resources/imperial-war-museum-one-story-many-voices  

For more information about StoryFutures Academy, the UK’s National Centre for Immersive Storytelling and resources for immersive audio, virtual production, AR, VR visit – www.storyfutures.com/academy

To find out more about the Second World War and Holocaust Partnership Programme at Imperial War Museums – https://www.iwm.org.uk

CreativeXR 2020 StoryFutures Academy masterclass Spatial storytelling as creative practice – https://www.youtube.com/watch?v=60rJHsaLvFo 

Instagram and Twitter – @storyfuturesa / @I_W_M 

Dome Fest West –  https://www.domefestwest.com

IMERSA – https://summit.imersa.org

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 66 Sennheiser AMBEO Mobility (Part 3)

Summary

In this three-part series of the Immersive Audio Podcast, Oliver Kadel travels to Zurich, Switzerland, for an exclusive opportunity to speak to the entire AMBEO Mobility team. The use of spatial audio is becoming ever more ubiquitous; it seems it’s just a matter of time before we get to enjoy immersive content from the comfort of our cars on a regular basis. Join us on this journey to explore and learn about the cutting-edge applications of spatial audio in the automotive industry.

Veronique Larcher (Director)

Dr Veronique Larcher is the Director of AMBEO Immersive Audio, Sennheiser’s brand for immersive audio products and experiences. She is also a founding advisor at Earkick, a startup focused on monitoring anxiety using audio biomarkers, among others. Veronique has over 20 years of experience in the areas of audio and venture building – before launching AMBEO in 2016, she held multiple roles spearheading innovation at Sennheiser, most notably founding and managing their Strategic Innovation office in San Francisco. Veronique holds a PhD in 3D Audio for Virtual Reality from Ircam (France) and a Bachelor’s degree in Economics and Finance from the Paris Institute of Political Studies.

Luca Brambilla (Project Manager)

Luca is the project manager of the Sennheiser AMBEO Mobility initiative, part of the founding team of the project and a real car enthusiast. As a car audiophile, he is motivated by bringing innovative audio technology into the most prestigious cars in the industry and is excited by the opportunities the future of mobility holds. His background is in Industrial Engineering, and with a Master’s degree in Management, Technology and Economics from ETH Zurich, he is skilled at combining technical knowledge and management expertise. In the team, he covers project coordination and planning and is in charge of customer and supplier relations.

Sofia Brazzola (Brand & Marketing Manager)

Sofia leads the Brand & Marketing efforts aimed at growing the corporate venture AMBEO Mobility, launching Sennheiser into the automotive industry. She has been in the team for several years in different capacities, leveraging her Design Strategy, User Experience and Marketing skills to de-risk and launch product concepts in new opportunity markets for Sennheiser internationally, including VR/AR and immersive music streaming. In her current role, she coordinates digital and event marketing activities, designs brand strategies, and is responsible for Sennheiser AMBEO’s brand positioning in the automotive world. She recently graduated with an MBA from Imperial College London and holds a Master’s degree in Design from the Zurich University of the Arts.

Rui Wang (Customer Developer)

Rui is a digital marketer, cross-cultural communicator, and partnership developer. She holds a master’s degree in European and Asian Business Management from the University of Zurich. With four years of experience working with top technology companies, she has a keen understanding of the needs of B2C & B2B clients in the field of innovation. As a bridge between the West and the East within the Sennheiser AMBEO team, she takes pride in providing market insights and data-driven business strategies. In addition to her primary job functions, she has been certified as a Data Analyst with Python for her commitment to data analysis.

Lorenz Bucher (Software Architect)

Lorenz Bucher is part of Sennheiser’s interdisciplinary AMBEO Mobility team. As a software architect, he’s eagerly striving to shape the future of automotive sound experiences and leading Sennheiser towards software excellence. He has a background in electrical engineering and spent many years of his career developing embedded audio DSP systems and implementing audio algorithms for high-end professional mixing consoles and consumer electronics. Being an avid musician in his spare time, Lorenz has found the sweet spot where he can combine his love for music with his passion for tweaking the technology to create the ‘wow’.

Sofia Checa (Software Engineer)

Sofia is a software engineer specializing in spatial audio. She has a degree in Electrical Engineering and Computer Science from Yale University and is also a classically trained cellist. She joined the team in 2019 to help create the first AMBEO Mobility Demo Car and has been fascinated with the project ever since. Sofia loves her job because it allows her to tap into both her technical side and her artistic side each and every day. Her responsibilities range from writing code to critical listening to developing demo experiences.

Henrik Oppermann (Head of Sound)

Henrik Oppermann (M.Mus.) is a leading 3D sound specialist, bringing with him over 15 years of experience in recording studio-quality audio on location for film, advertising, music industry clients and 3D sound installations. Henrik has worked on over 150 VR and AR projects, capturing 3D audio in a number of challenging environments, including low-flying military aircraft, formula one race cars, refugee camps, mountain peaks and concert halls around the world. An expert in his field, Henrik has developed hardware and software audio applications and workflows for VR collaboration with leading sound partners, to deliver the best possible Immersive Sound. Henrik joined Sennheiser AMBEO Immersive Audio as the Head of Sound in February 2022.

Hans-Martin Buff

Hans-Martin Buff has been a recording engineer and music producer since 1993, when he graduated from Music Tech vocational college in Minneapolis, USA. He began his career at Pachyderm Studios, where he made excellent coffee and then proceeded to hone his studio chops by assisting on projects such as Live’s million-seller “Throwing Copper”. Hans-Martin worked his way through various Minnesota rock’n’roll factories before he found a more permanent home at Prince’s Paisley Park Studios, whose personal engineer he became and remained for four funky years. During this period he was responsible not only for His Purple Majesty’s recordings but also for the studio affairs of world-renowned artists, including No Doubt, Chaka Khan and Larry Graham. A native of Germany, Hans-Martin relocated to his home country in 2001, where he continued his career as an independent recording engineer and producer, and where he has since mixed and recorded a host of national and international talent, such as Mousse T., Joss Stone, Zucchero, Eric Burdon, Roachford, Maxi Priest and the Scorpions. Since 2018, Hans-Martin has explored the subject of 3D audio for headphones, and he is now considered an industry expert. He has written a book on the subject and holds an MA in Immersive Music Production.

Listen to Podcast

Show Notes

Sennheiser – https://en-uk.sennheiser.com

Immersive Audio – https://en-uk.sennheiser.com/ambeo

AMBEO Mobility Official Page – https://en-uk.sennheiser.com/ambeo-mobility

AMBEO Mobility LinkedIn – https://www.linkedin.com/showcase/ambeo-mobility

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 63 Kevin Bolen (Skywalker Sound)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by the director of interactive audio at Skywalker Sound – Kevin Bolen, from California, US.

Kevin Bolen supervises Skywalker Sound’s interactive audio department, which combines decades of cinematic audio experience with bleeding-edge technologies to create unforgettable immersive audio experiences such as the Academy Award-winning Carne y Arena, the Peabody Award-winning Queerskins: A Love Story, and the Emmy-nominated Star Wars: Vader Immortal. Kevin’s team collaborates with partners including Disney, Lucasfilm, Marvel, Legendary Entertainment, and ILMxLAB to extend the power of cinematic storytelling into location-based experiences and home entertainment alike.

In this episode, Kevin shares his perspective on how they approach the spatial and interactive audio philosophy at Skywalker and we explore the Star Wars: Tales from the Galaxy’s Edge project case study.

Listen to Podcast

Show Notes

Kevin Bolen – https://www.linkedin.com/in/kbolen/

Skywalker Sound – https://www.skysound.com/

ILMxLAB – https://www.ilmxlab.com/

Facebook Spatial Workstation – https://facebook360.fb.com/spatial-workstation/

Star Wars: Tales from the Galaxy’s Edge – https://www.oculus.com/experiences/quest/3484270071659289/?locale=en_GB

Bandcamp acquired by Epic Games – https://www.epicgames.com/site/en-US/news/bandcamp-joining-epic-games-to-support-fair-open-platforms-for-artists-and-fans

The audio heard throughout the episode consists of binaural excerpts from the Star Wars: Tales from the Galaxy’s Edge VR experience.

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 62 Nikunj Raghuvanshi & Noel Cross (Microsoft – Physics Based Virtual Acoustics)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel, Monica Bolles and Bjørn Jacobsen are joined by Noel Cross, a Principal Dev Lead in the Mixed Reality division at Microsoft, and Nikunj Raghuvanshi, a Senior Principal Researcher at Microsoft Research, from Redmond, US.

Nikunj likes to invent techniques that create immersive sight and sound from computation. He is endlessly fascinated with simulating the laws of physics in real-time and finds it thrilling to search for simple algorithms that unfold into complex physical behaviour. He has over a decade of research and development experience at the intersection of computational audio, graphics, and physics, with over fifty papers and patents. His inventions have been successfully deployed in the industry, particularly Project Acoustics, which is bringing immersive sound propagation to many major AAA game franchises today. Nikunj is currently a Senior Principal Researcher at Microsoft Research. Previously, he initiated interactive sound simulation research at UNC-Chapel Hill during his PhD studies; that codebase was later acquired by Microsoft.

Noel grew up playing games on his Commodore 64 and Amiga computers. His love for multimedia computing helped him start working at Microsoft as an intern in 1991 in the multimedia team. This was the age of SoundBlaster 16 ISA cards, and CD-ROMs were just being introduced into PCs. Out of the multimedia team, the DirectX team was born to accelerate the development of high-quality games for the PC. Noel worked on DirectSound and audio drivers for Windows, getting a taste of the game development community by attending several GDCs in the 90s. This was the first time he was introduced to 3D audio algorithms, and at the time the technology didn’t impress much. Through the 2000s, he worked on every release of Windows with a focus on improving the audio subsystem. This led to the complete overhaul of the audio infrastructure on the Windows Vista platform, which has remained largely intact since its introduction in 2006. The most current stop on his Microsoft journey is working on Mixed Reality devices. He worked on the speech and audio functionality exposed by HoloLens and Windows Mixed Reality devices, with a concentration on spatial audio. After spatial audio’s lacklustre impact in the 90s, he has been reinvigorated working on this technology with the introduction of high-quality HRTFs and head-tracking services to complete the experience. Spatial audio processing has also led Noel to better understand the impact of acoustics on virtual 3D worlds. His team is currently working on Project Acoustics, which allows developers of 3D titles to take advantage of wave-based simulations to handle how audio propagates in the real world.

In this episode, Nikunj and Noel dive deep into the topic of physics-based virtual acoustics and Project Triton and Project Acoustics, covering fundamental theory, research, technology and case studies.

Listen to Podcast

Show Notes

Project Acoustics: Making Waves with Triton – https://youtu.be/pIzwo-MxCC8

Gears of War 4, Project Triton: Pre-Computed Environmental Wave Acoustics  – https://youtu.be/qCUEGvIgco8

NOTAM – https://notam.no/meetups/the-ambisonics-salon/

Nikunj Raghuvanshi’s LinkedIn: https://www.linkedin.com/in/nikunj-raghuvanshi-a499172b/

Noel Cross’s LinkedIn: https://www.linkedin.com/in/noel-cross-0b9a51167/

Microsoft Soundscape – https://www.microsoft.com/en-us/research/product/soundscape/

Microsoft HoloLens – https://www.microsoft.com/en-us/hololens

AVAR 2022: AES 4th International Conference on Audio for Virtual and Augmented Reality – https://aes2.org/contributions/avar-2022/

Senua’s Saga: Hellblade II – Gameplay reveal – https://youtu.be/fukYzbthEVU

https://twitter.com/dagadi/status/1470000580223504389

Directional Sources & Listeners in Interactive Sound Propagation using Reciprocal Wave Field Coding – https://www.youtube.com/watch?v=pvWlCQGZpz4

Project Acoustics | Game Developers Conference 2019 – https://youtu.be/uY4G-GUAQIE

Interactive sound simulation: Rendering immersive soundscapes in games and virtual reality – https://youtu.be/2sKPDGBsM0Q

Project Triton Research website – https://www.microsoft.com/en-us/research/project/project-triton/

Notes on Parametric Wave Field Coding for Precomputed Sound Propagation – https://www.microsoft.com/en-us/research/publication/parametric-wave-field-coding-precomputed-sound-propagation/

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.

Episode 61 – Tom Ffiske (Immersive Wire)

Summary

In this episode of the Immersive Audio Podcast, Oliver Kadel and Monica Bolles are joined by tech journalist and founder of the Immersive Wire, Tom Ffiske, from London, UK.

Tom is Chief Editor of Immersive Wire, the newsletter dedicated to immersive technologies and the metaverse, analysing the sector and charting its steady growth over time.

In this episode, Tom shares his view on the current state of play of the Metaverse vision and talks about his upcoming book The Metaverse: A Professional Guide.

Listen to Podcast

Show Notes

Tom Ffiske – https://uk.linkedin.com/in/tom-ffiske-56119174

Immersive Wire – https://www.immersivewire.com/

The Metaverse: A Professional Guide – https://www.immersivewire.com/the-metaverse-a-professional-guide/

Our Patreon

If you enjoy the podcast and would like to show your support, please consider becoming a patron. Not only are you supporting us, but you will also get special access to bonus content and much more.

Find out more on our official Patreon page – https://www.patreon.com/immersiveaudiopodcast

We thank you kindly in advance!

Survey

We want to hear from you! We really value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!

Credits

This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott.