The 36th HSN 2026 AI for Everyone: From Infrastructure to Agents
Detailed Program
Panel Discussion :
Chair :
Presentation Title : Strategy for Becoming a Top-3 AI Power and the Role of 6G in the AI Era: AI-RAN for 6G: When, Where, How to Integrate Intelligence
Presenter : Baeseong Park
Email :
Affiliation : NAVER Cloud
Department : AI Computing Solution
Position : Team Lead
Date & Time : Jan. 28 (Wed), 2026, 18:30-20:00
Presenter Bio :
[Profile]
• Current) NAVER Cloud, AI Computing Solution, Team Lead
• Former) Samsung Electronics, Samsung Research
• B.S., School of Electronics Engineering, Chungbuk National University
[Expertise]
• LLM Inference Optimization
• AI/LLM Compression
• On-device AI Optimization
[Selected Publications]
• DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation (NeurIPS 2024)
• LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models (ICLR 2024)
• Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression (ICLR 2022)
• BiQGEMM: Optimal Binary Quantization Matrix Multiplication for Quantized Neural Networks (SC20)
Lecture Summary :