Myungin Lee

Signal Processing & Machine Learning
for XR and HCI
A researcher designing multimodal XR experiences
based on HCI, scientific theory, composition,
signal processing & machine learning.


Bio

Myungin is a researcher who designs multimodal XR experiences based on HCI, signal processing, and machine learning. He holds a Ph.D. in Media Arts and Technology from the University of California, Santa Barbara, and an M.S. and B.S. in Electronics and Computer Engineering from Hanyang University, Seoul, Korea. He was a Ph.D. research intern at the Experiments in Art & Technology (E.A.T.) center at Nokia Bell Labs, where he developed a spatial-acoustic parameter estimation algorithm using machine learning. During his Ph.D., Myungin was affiliated with the AlloSphere, designing large-scale interactive 3D immersive experiences, and he later joined the Immersive Media Design faculty at the University of Maryland, College Park. His research has been featured at venues including Ars Electronica, Getty's PST ART: Art & Science Collide, IEEE, CHI, New Interfaces for Musical Expression (NIME), the International Computer Music Conference (ICMC), and ACM SIGGRAPH. Myungin holds a patent on a machine learning-based room acoustics estimation algorithm. At UMD, his current research includes brain-computer interaction (BCI), generative AI, environmental ocean data science, and scientific quantum simulation in XR.

News

(December 3, 2025)


Sensorium Arc: AI Agent System for Oceanic Data Exploration and Interactive Eco-Art is premiering as an installation and paper at NeurIPS 2025, San Diego!
