Inertial Measurement Units (IMUs) are critical for robotics due to their high-frequency sensing, low cost, and environmental robustness: unlike cameras, they are unaffected by visual degradation (e.g., low light, motion blur) or interference from dynamic objects. However, current IMU algorithms are fragmented: they require task-specific tuning (e.g., odometry vs. pose recognition vs. IMU denoising), sensor-specific tuning (consumer-grade vs. industrial-grade IMUs), and dynamics-specific adaptation (e.g., pedestrian vs. quadrotor dynamics), which severely limits scalability. Our project aims to overcome these limitations by developing a self-supervised IMU foundation model that learns generalizable IMU representations, enabling cross-task and cross-platform generalization.
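To make the idea concrete, below is a minimal sketch of one common self-supervised pretraining recipe for sequence data, applied to IMU windows: random time steps of a 6-axis window (3-axis accelerometer + 3-axis gyroscope) are masked, and a small Transformer encoder is trained to reconstruct them. This is an illustrative assumption, not the project's actual architecture; all class names, hyperparameters, and the masking ratio are hypothetical.

```python
# Illustrative sketch: masked-reconstruction pretraining for IMU windows.
# All names and hyperparameters are assumptions, not the project's design.
import torch
import torch.nn as nn


class MaskedIMUEncoder(nn.Module):
    def __init__(self, imu_dim: int = 6, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(imu_dim, d_model)               # per-step IMU embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))   # learned [MASK] embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, imu_dim)                # reconstruct raw IMU values

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 6) raw IMU window; mask: (batch, time) bool, True = hidden step
        h = self.embed(x)
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        return self.head(self.encoder(h))                      # (batch, time, 6)


def pretrain_step(model: nn.Module, x: torch.Tensor, mask_ratio: float = 0.5):
    # Self-supervised objective: MSE on the masked time steps only.
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
    recon = model(x, mask)
    return ((recon - x) ** 2)[mask].mean()


if __name__ == "__main__":
    model = MaskedIMUEncoder()
    x = torch.randn(8, 200, 6)   # 8 windows of 200 IMU samples (e.g., 1 s at 200 Hz)
    loss = pretrain_step(model, x)
    loss.backward()              # ready for an optimizer step
    print(f"masked reconstruction loss: {loss.item():.4f}")
```

After pretraining on large unlabeled IMU corpora, the frozen encoder's representations could be fine-tuned or probed for downstream tasks such as odometry or denoising, which is what "generalizable IMU representations" refers to above.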
- Publish a paper at a top robotics conference (IROS, ICRA, CoRL, etc.)
Experience in at least one of:
- PyTorch
- State estimation/SLAM/IMU kinematics
- Simulation environments such as Isaac Sim, Gazebo, etc.