Founding AI/ML Research Engineer at BJAK
Transform Language Models Into Real-world, High-impact Product Experiences.
A1 is a self‑funded AI group operating in full stealth. We’re building a new global consumer AI application focused on an important but underexplored use case.
You will shape the core technical direction of A1 – model selection, training strategy, infrastructure, and long‑term architecture. This is a founding technical role: your decisions will define our model stack, our data strategy, and our product capabilities for years ahead.
You won’t just fine‑tune models – you’ll design systems: training pipelines, evaluation frameworks, inference stacks, and scalable deployment architectures. You will have full autonomy to experiment with frontier models (LLaMA, Mistral, Qwen, Claude‑compatible architectures) and build new approaches where existing ones fall short.
Why This Role Matters
- You are creating the intelligence layer of A1’s first product, defining how it understands, reasons, and interacts with users.
- Your decisions shape our entire technical foundation – model architectures, training pipelines, inference systems, and long‑term scalability.
- You will push beyond typical chatbot use cases, working on a problem space that requires original thinking, experimentation, and contrarian insight.
- You influence not just how the product works, but what it becomes, helping steer the direction of our earliest use cases.
- You are joining as a founding builder, setting engineering standards, contributing to culture, and helping create one of the most meaningful AI applications of this wave.
What You’ll Do
- Build end‑to‑end training pipelines: data → training → eval → inference
- Design new model architectures or adapt open‑source frontier models
- Fine‑tune models using state‑of‑the‑art methods (LoRA/QLoRA, SFT, DPO, distillation)
- Architect scalable inference systems using vLLM / TensorRT‑LLM / DeepSpeed
- Build data systems for high‑quality synthetic and real‑world training data
- Develop alignment, safety, and guardrail strategies
- Design evaluation frameworks across performance, robustness, safety, and bias
- Own deployment: GPU optimization, latency reduction, scaling policies
- Shape early product direction, experiment with new use cases, and build AI‑powered experiences from zero
- Explore frontier techniques: retrieval‑augmented training, mixture‑of‑experts, distillation, multi‑agent orchestration, multimodal models
What It’s Like To Work Here
- You take ownership – you solve problems end‑to‑end rather than wait for perfect instructions
- You learn through action – prototype → test → iterate → ship
- You’re calm in ambiguity – zero‑to‑one building energises you
- You bias toward speed with discipline – V1 now > perfect later
- You see failures and feedback as essential to growth
- You work with humility, curiosity, and a founder’s mindset
- You lift the bar for yourself and your teammates every day
Requirements
- Strong background in deep learning and transformer architectures
- Hands‑on experience training or fine‑tuning large models (LLMs or vision models)
- Proficiency with PyTorch, JAX, or TensorFlow
- Experience with distributed training frameworks (DeepSpeed, FSDP, Megatron, ZeRO, Ray)
- Strong software engineering skills – writing robust, production‑grade systems
- Experience with GPU optimization: memory efficiency, quantization, mixed precision
- Comfortable owning ambiguous, zero‑to‑one technical problems end‑to‑end
Nice to Have
- Experience with LLM inference frameworks (vLLM, TensorRT‑LLM, FasterTransformer)
- Contributions to open‑source ML libraries
- Background in scientific computing, compilers, or GPU kernels
- Experience with RLHF pipelines (PPO, DPO, ORPO)
- Experience training or deploying multimodal or diffusion models