About The Role
As a Research Engineer in our Video Pre-Training team, you will help build the next generation of production‑grade foundation models for human‑centric video generation.
You will join a highly focused team working at the intersection of large‑scale generative modeling, distributed systems, and production engineering. Our mission is to develop and optimize video base models that power realistic, controllable, and emotionally expressive synthetic humans at scale. This is applied research with direct product impact.
You will work on advancing training recipes, scaling distributed systems, improving evaluation frameworks, and optimizing inference to ensure our models are high quality, stable, and efficient enough for real‑world deployment. Your work will directly influence models used by tens of thousands of businesses worldwide.
What You’ll Do
You will own and execute end‑to‑end research and engineering projects, from hypothesis to production impact. This includes:
- Developing and scaling latent video diffusion models tailored for human‑centric video generation
- Designing conditioning mechanisms to improve control (pose, emotion, script, camera) without sacrificing fidelity
- Advancing distributed training strategies (DDP, FSDP, DeepSpeed, sequence parallelism) under real compute constraints
- Improving training stability at multi‑node scale
- Designing rigorous evaluation frameworks combining automated metrics and structured human evaluation
- Optimizing inference for low latency, high resolution, and cost efficiency
- Running controlled ablations and experiments to drive high‑signal modeling decisions
- Contributing to high engineering standards: reproducibility, experiment tracking, CI/CD, monitoring
You will be expected to move fast, run multiple hypotheses in parallel, identify signal early, and focus on outcomes rather than exploration for its own sake.
What We’re Looking For
Must‑have
- Strong experience training deep learning models at scale
- Strong Python and PyTorch skills
- Hands‑on experience with diffusion models (image domain required; video preferred)
- Experience with large‑scale multi‑GPU / multi‑node training
- Good understanding of distributed training (DDP, FSDP, DeepSpeed, or similar)
- Ability to design controlled experiments and interpret noisy results
Nice‑to‑have
- Experience with video diffusion models
- Experience in avatar or human‑centric generation
- Familiarity with world / interactive models
- Experience with GANs or VAEs
Our Stack
- Python, PyTorch, CUDA
- DeepSpeed, distributed training & inference
- Sequence parallelism
- AWS, SLURM, Docker
- GitHub, CI/CD pipelines
Who You Are
- You are research‑driven but outcome‑focused
- You care about shipping, not just publishing
- You can explore multiple ideas quickly and drop low‑signal directions early
- You communicate clearly and present results scientifically
- You operate independently but collaborate actively across teams
Benefits
- Competitive compensation (salary + stock options + bonus)
- Fully remote within Europe, or hybrid with an office in London, Amsterdam, Zurich, or Munich
- 25 days of annual leave + public holidays
- Great company culture, with the option to join regular planning sessions and socials at our hubs
- Additional benefits depending on your location