Enabling the Metaverse: Immersive Media, AI, and Wireless Communication

The concept of the Metaverse has spurred advancements in haptic, tactile internet, and multimedia applications, including VR/AR/XR services. This progress positions fully immersive sensing as a cornerstone of the next-generation wireless networks required to realize the Metaverse’s vision. This presentation tackled the various types of media exchanged within the Metaverse, highlighting their advantages and addressing key challenges related to 3D data processing, coding, transmission, and rendering. It also emphasized the crucial role that artificial intelligence, encompassing large language models (LLMs) and computer vision (CV), together with future wireless networks, will play in achieving the desired level of immersion by reliably streaming multimedia between a digital twin and its physical counterpart. Furthermore, the presentation explored emerging communication paradigms, such as semantic, holographic, and goal-oriented communication, which promise energy- and spectrum-efficient Metaverse experiences with ultra-low latency. Finally, the latest TII-developed demonstrations, focusing on 2D and volumetric video streaming within a virtual reality environment, were showcased. These demos are accessible on Quest Pro headsets and on smartphones via a web-based player over a 5G network. The results underscore the vast potential of VR/XR technology in tandem with 5G networks to pave the way for a fully immersive Metaverse platform.

Quick details about the event:

Date: 08 November 2023

Time: 17:00 – 18:00

Venue: Online Event

Speakers’ bio:

Wassim Hamidouche holds the position of Principal Researcher at the Technology Innovation Institute (TII) in Abu Dhabi, UAE. He has also served as an Associate Professor at INSA Rennes and was affiliated with the Institute of Electronics and Telecommunications of Rennes (IETR), UMR CNRS 6164. He earned his Ph.D. in signal and image processing from the University of Poitiers, France, in 2010. He gained industry experience as a Research Engineer at the Canon Research Centre in Rennes, France, from 2011 to 2012, and subsequently worked as a researcher at the IRT b<>com research institute in Rennes from 2017 to 2022. His contributions to the field extend to over 180 published papers covering various facets of image processing and computer vision. Notably, he has played a pivotal role in the development of two major open-source software video decoders, OpenHEVC and OpenVVC. His research interests encompass a wide array of areas, including video coding, the design of software and hardware circuits and systems for video coding standards, image quality assessment, and multimedia security.