Position Details
About this role
PhD-level AI/ML research internship focused on developing foundation models for multimodal wearable sensor data in digital health. The role emphasizes self-supervised and multimodal representation learning across health tasks.
Key Responsibilities
- Design and implement self-supervised learning frameworks for wearable time-series data
- Train foundation models on large-scale unlabeled multimodal sensor datasets
- Develop architectures using transformers, contrastive learning, masked modelling, and cross-modal attention (a minimal sketch follows this list)
- Integrate heterogeneous sensors using multimodal fusion strategies
- Evaluate learned representations on downstream health tasks (human activity recognition, sleep, stress, gait, and health outcomes)
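For illustration only, the sketch below shows one way a masked-modelling objective for wearable time-series might look in PyTorch; the module and parameter names (MaskedSensorEncoder, mask_ratio, the tri-axial window shape) are assumptions for this example, not details of the role.

```python
# Illustrative sketch only: a masked-reconstruction (masked modelling) objective
# for wearable time-series, assuming windows shaped (batch, time, channels).
import torch
import torch.nn as nn


class MaskedSensorEncoder(nn.Module):
    def __init__(self, in_channels=3, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(in_channels, d_model)          # per-timestep projection
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned mask embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, in_channels)           # reconstruct the raw signal

    def forward(self, x, mask):
        # x: (B, T, C) sensor window; mask: (B, T) boolean, True where the input is hidden
        h = self.embed(x)
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        return self.head(self.encoder(h))


def masked_reconstruction_loss(model, x, mask_ratio=0.15):
    # Hide a random subset of timesteps and score reconstruction only on those positions.
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
    recon = model(x, mask)
    return nn.functional.mse_loss(recon[mask], x[mask])
```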
Technical Overview
The scope covers building AI/ML models for time-series wearable data using transformers and self-supervised learning, with an emphasis on multimodal fusion across accelerometer, PPG, and ECG signals; the work aims to produce reproducible, publishable research and well-documented code.
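As one hedged example of a multimodal fusion strategy, the sketch below lets accelerometer tokens cross-attend to PPG and ECG tokens before pooling; the per-modality channel counts and layer choices are assumptions for illustration, not the team's actual architecture.

```python
# Illustrative sketch only: per-modality encoders plus a cross-modal attention layer,
# where accelerometer tokens attend to PPG and ECG tokens before pooling.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        # Channel counts below are assumptions (tri-axial accelerometer, 1-channel PPG/ECG).
        self.enc_acc = nn.Linear(3, d_model)
        self.enc_ppg = nn.Linear(1, d_model)
        self.enc_ecg = nn.Linear(1, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, acc, ppg, ecg):
        # acc: (B, T, 3), ppg: (B, T, 1), ecg: (B, T, 1); aligned time grids assumed.
        q = self.enc_acc(acc)
        kv = torch.cat([self.enc_ppg(ppg), self.enc_ecg(ecg)], dim=1)
        fused, _ = self.cross_attn(q, kv, kv)  # accelerometer queries attend to PPG/ECG
        return fused.mean(dim=1)               # pooled embedding for downstream health tasks
```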
Ideal Candidate
The ideal candidate is a PhD candidate in ML/CS with strong Python and deep learning experience (PyTorch or TensorFlow) who has worked on self-supervised and multimodal learning with wearable sensor data for digital health applications.
Deal Breakers
- Not a PhD candidate in ML/CS or related field
- Lacks Python or DL experience
- No experience with self-supervised learning
- No wearable sensor data or digital health exposure