Ruobing Han

hanruobing@gatech.edu

I am a CS PhD student at Georgia Tech, advised by Prof. Hyesoon Kim. My research areas include compilers, architecture, and ML systems. I received my B.S. degree from the College of EECS at Peking University in 2018.


Education

Georgia Institute of Technology

Ph.D. student, Computer Science
May 2021 - Present

Peking University

Bachelor of Science, Computer Science and Technology
Sep 2014 - Aug 2018

Research Experience

HPArch, Georgia Institute of Technology

Graduate Research Assistant

Advisor: Prof. Hyesoon Kim

May 2021 - Present

HPC-AI, National University of Singapore

Research Assistant

Advisor: Prof. Yang You

May 2020 - Apr 2021

Work Experience

Google, Sunnyvale, USA

Software Development Engineer Intern, Core ML team
May 2023 - Jul 2023

Google, Sunnyvale, USA

Software Development Engineer Intern, LLVM core team
May 2022 - Aug 2022

Publications

  • Exponentially Expanding the Phase-Ordering Search Space via Dormant Information
    Ruobing Han, Hyesoon Kim
    International Conference on Compiler Construction (CC), 2024
    paper
  • Enabling Fine-Grained Incremental Builds By Making Compiler Stateful
    Ruobing Han, Jisheng Zhao, Hyesoon Kim
    International Symposium on Code Generation and Optimization (CGO), 2024
    paper
  • COX: Exposing CUDA Warp-Level Functions to CPUs
    Ruobing Han, Jaewon Lee, Jaewoong Sim, Hyesoon Kim
    ACM Transactions on Architecture and Code Optimization (TACO), 2022
    paper
  • Supporting CUDA for an extended RISC-V GPU architecture
    Ruobing Han, Blaise Tine, Jaewon Lee, Jaewoong Sim, Hyesoon Kim
    Fifth Workshop on RISC-V for Computer Architecture Research, 2021
    paper
  • Dynamic Scaling for Low-Precision Learning
    Ruobing Han, Min Si, James Demmel, Yang You
    26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), 2021
    paper
  • Auto-Precision Scaling for Distributed Deep Learning
    Ruobing Han, James Demmel, Yang You
    International Conference on High Performance Computing, 2021
    paper
  • Optimizing Network Performance for Distributed DNN Training on GPU Clusters: ImageNet/AlexNet Training in 1.5 Minutes
    Peng Sun, Wansen Feng, Ruobing Han, Shengen Yan, Yonggang Wen
    IEEE Transactions on Big Data, 2020
    paper

Presentations

  • CuPBoP: CUDA for Parallelized and Broad-range Processors
    San Jose, California, USA
    The LLVM Developers' Meeting, 2022
    link