Session: #A5 : AX and Future Services and Security
Chair:
Presentation Title: Physical Commonsense Reasoning
Presenter: Youngjae Yu
Email:
Affiliation: Seoul National University
Department: School of Transdisciplinary Innovations / Department of Computer Science and Engineering
Position: Assistant Professor
Date/Time: January 30, 2026 (Fri), 10:40–12:30

Presenter Bio:
• 2025 – present: Assistant Professor, Department of Computer Science and Engineering / School of Transdisciplinary Innovations, Seoul National University
• 2023 – 2025: Assistant Professor, Department of Computer Science, Yonsei University
• 2021 – 2023: Young Investigator (Postdoctoral Researcher), Mosaic Team, Allen Institute for AI (AI2)
• 2018: Research Intern, Microsoft, Redmond

Talk Abstract:
Physical commonsense reasoning aims to equip AI agents with the ability to understand, predict, and interact with the physical world in human-like ways. While multimodal foundation models excel at language and perception in the digital world, they still lack grounded experience and an intuitive physical understanding. In this talk, I introduce recent progress toward bridging this gap through video-driven learning, world models, and multimodal reasoning frameworks. I illustrate how these advances support embodied agents that can disambiguate human commands, navigate real environments, assess safety, and generalize from simulated or desktop data to the real world. This direction outlines a path toward Physical AI that can learn, adapt, and collaborate with humans in everyday environments.