Foundation models have transformed text and images, but structured data - the largest and most consequential data modality in the world - has remained untouched. Tables power every clinical trial, every financial model, every scientific experiment, every business decision. No one has built a foundation model that truly understands them.
Until now. What LLMs did for language, we're doing for tables.
Momentum: We pioneered tabular foundation models and are now the world-leading organization in structured-data ML. Our TabPFN v2 model was published in Nature and set a new state of the art for tabular machine learning. Since its release, we've scaled model capabilities more than 20x, reached 3M+ downloads and 6,000+ GitHub stars, and are seeing accelerating adoption across research and industry - from detecting lung disease with Oxford Cancer Analytics, to preventing train failures with Hitachi, to improving clinical trial decisions with BostonGene.
The hardest work is in front of us. We're scaling tabular foundation models to handle millions of rows, thousands of features, real-time inference, and entirely new data modalities - while building the infrastructure to deploy them in production across some of the most demanding industries on earth. These are open problems no one else is working on at this level.
Our team: We're a small, highly selective team of 20+ engineers and researchers, drawn from over 5,000 applicants, with backgrounds spanning Google, Apple, Amazon, Microsoft, G-Research, Jane Street, Goldman Sachs, and CERN. We're led by Frank Hutter, Noah Hollmann, and Sauraj Gambhir, and advised by world-leading AI researchers such as Bernhard Schölkopf and Turing Award winner Yann LeCun. We ship fast, create top-tier research, and hold each other to an extremely high bar.
What's Next: In 2025, we raised a €9m pre-seed led by Balderton Capital, backed by leaders from Hugging Face, DeepMind, and Black Forest Labs. The next modality shift in AI is happening - and we're hiring the team that will make it happen.
Most companies treat open source as a side job for researchers who'd rather be doing something else. We think that's wrong. Prior Labs is rooted in open source — TabPFN started as a research project the community adopted, and that's how we became a company.
Language models and image models have had years to build out their ecosystem interfaces and integrations. For tabular foundation models, none of that exists yet. You're not plugging into existing patterns — you're creating them. The engineering is genuinely hard: TabPFN does in-context learning, not traditional fit/predict, so wrapping it behind a clean sklearn interface means solving problems no other library has solved. You're designing APIs for a model whose architecture evolves faster than users can upgrade, and making inference robust to the full chaos of real-world tabular data. You understand the model deeply enough to push back when something will break downstream, and you care enough about the details to write great docs and error messages on top of great code.
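To make the fit/predict mismatch concrete, here is a minimal sketch of what wrapping an in-context learner behind a scikit-learn interface looks like. This is an illustration, not TabPFN's actual implementation: the class name `InContextClassifier` is hypothetical, and the transformer forward pass is stood in for by a 1-nearest-neighbour lookup so the example stays self-contained and runnable.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin


class InContextClassifier(BaseEstimator, ClassifierMixin):
    """Hypothetical sketch: an in-context learner behind sklearn's API.

    Unlike a traditionally trained model, fit() performs no gradient
    updates - it only caches the labelled context set. predict() then
    conditions the model on that context in a single inference call.
    """

    def fit(self, X, y):
        # In-context learning: "fitting" is just storing the context.
        self.X_context_ = np.asarray(X, dtype=float)
        self.y_context_ = np.asarray(y)
        self.classes_ = np.unique(self.y_context_)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Placeholder for one forward pass over (context, query) pairs;
        # here each query simply takes the label of its nearest context row.
        dists = np.linalg.norm(
            X[:, None, :] - self.X_context_[None, :, :], axis=-1
        )
        return self.y_context_[dists.argmin(axis=1)]


clf = InContextClassifier().fit([[0.0], [1.0]], [0, 1])
print(clf.predict([[0.1], [0.9]]).tolist())  # → [0, 1]
```

The design tension the paragraph describes lives in `fit()`: users expect it to train, but for an in-context model it only stages data, so all the cost and all the failure modes (dirty columns, unseen categories, memory limits) surface at `predict()` time instead.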
What you'll work on:
You may be a good fit if you have:
Bonus:
Publication date: 29 Mar 2026
Location: Berlin
Type: Full-time
Work model: On-site
Category: Development & IT
Experience: 2+ years
Employment type: Salaried