
Myungin Lee

Signal Processing & Machine Learning
for XR and HCI
A researcher designing multimodal XR experiences based on HCI, scientific theory, composition, signal processing & machine learning.

Bio

Myungin is a researcher who designs multimodal XR experiences based on HCI, signal processing, and machine learning. He holds a Ph.D. in Media Arts and Technology from the University of California, Santa Barbara, and an M.S. and a B.S. in Electronics and Computer Engineering from Hanyang University, Seoul, Korea. As a Ph.D. research intern in the Experiments in Art & Technology (E.A.T.) center at Nokia Bell Labs, he developed a spatial-acoustic parameter estimation algorithm using machine learning. During his Ph.D., Myungin was affiliated with the AlloSphere, designing large-scale interactive 3D immersive experiences, and he later joined the Immersive Media Design faculty at the University of Maryland, College Park. His research has been featured at venues including Ars Electronica, Getty's PST ART: Art & Science Collide, IEEE, CHI, New Interfaces for Musical Expression (NIME), the International Computer Music Conference (ICMC), and ACM SIGGRAPH. Myungin holds a patent on a machine learning-based room acoustics estimation algorithm. At UMD, his current research includes brain-computer interaction (BCI), generative AI, environmental ocean data science, and scientific quantum simulation in XR.

News

(June 12, 2025) I gave a talk, "Multimodal Fusion in XR: Leveraging LLM for Natural Voice-based Interaction," at Hanyang University and discussed research trends with the graduate students.

(July 11, 2025) My daughter is giving a big bow to the visitors of my website.

(May 1, 2025) Our research team presented a paper and a demonstration at CHI 2025 in Japan.
