Self-Supervised IMU Foundation Model

Currently open to new students?

Yes
No

Description

Inertial Measurement Units (IMUs) are critical for robotics due to their high-frequency sensing, cost efficiency, and environmental robustness: they are unaffected by visual degradation or interference from dynamic objects. However, current IMU algorithms are fragmented: they require task-specific tuning (e.g., odometry vs. pose recognition vs. IMU denoising), sensor-specific tuning (consumer vs. industrial IMUs), and dynamics-specific adaptation (e.g., pedestrian vs. quadrotor dynamics), which severely limits scalability. Our project aims to overcome these limitations by developing a self-supervised IMU foundation model that learns generalizable IMU representations, enabling cross-task and cross-platform generalization.

This work builds on our prior published research: AirIMU (https://airimu.github.io/) and AirIO (https://air-io.github.io/).
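As a rough illustration of what self-supervised pretraining on raw IMU streams could look like, below is a minimal PyTorch sketch of a masked-reconstruction objective over 6-channel (accelerometer + gyroscope) windows. The encoder architecture, window length, and masking ratio are illustrative assumptions only, not the project's actual design.

```python
# Illustrative sketch only: masked-reconstruction pretraining on raw IMU windows.
# Architecture, window length, and masking ratio are assumptions for illustration.
import torch
import torch.nn as nn

class IMUEncoder(nn.Module):
    """Small Transformer encoder over 6-channel IMU windows (acc + gyro)."""
    def __init__(self, dim=64, depth=2, heads=4, window=200):
        super().__init__()
        self.embed = nn.Linear(6, dim)                         # per-timestep projection
        self.pos = nn.Parameter(torch.zeros(1, window, dim))   # learned positional embedding
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decode = nn.Linear(dim, 6)                        # reconstruct raw samples

    def forward(self, x):                                      # x: (B, T, 6)
        h = self.embed(x) + self.pos[:, : x.size(1)]
        return self.decode(self.encoder(h))                    # (B, T, 6)

def masked_reconstruction_loss(model, x, mask_ratio=0.5):
    """Zero out random timesteps and score reconstruction on the masked ones."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio   # (B, T) boolean mask
    x_masked = x.masked_fill(mask.unsqueeze(-1), 0.0)
    recon = model(x_masked)
    return ((recon - x) ** 2)[mask].mean()

if __name__ == "__main__":
    model = IMUEncoder()
    batch = torch.randn(8, 200, 6)      # stand-in for real IMU windows
    loss = masked_reconstruction_loss(model, batch)
    loss.backward()
    print(f"masked reconstruction loss: {loss.item():.4f}")
```

The idea of such an objective is that an encoder pretrained to fill in masked IMU samples across diverse platforms can then be fine-tuned for downstream tasks (odometry, pose recognition, denoising) without task-specific retraining from scratch; the specific pretext task used in the project may differ.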

Student Learning Objectives

  • Publish a paper at a top conference (IROS, ICRA, CoRL, etc.)

Skills Required

Experience in at least one of:
  • PyTorch
  • State estimation / SLAM / IMU kinematics
  • Simulation environments such as Isaac Sim or Gazebo

Classes Accepted into Project

Senior
Graduate Student

Compensation

9 units
12 units
Pay

Contact