Senior Data Streaming Platform Engineer at On
We are seeking a highly skilled and motivated Streaming Platform Engineer to join the Data Streaming Platform team. This is a hybrid role that combines the disciplines of platform, software, and data engineering to build, scale, and maintain our high‑performance, real‑time data streaming platform. The ideal candidate has a passion for architecting robust, scalable systems that enable data‑driven products and services at massive scale.
Your mission
- Design, build, and maintain the core infrastructure for our real‑time data streaming platform, ensuring high availability, reliability, and low latency.
- Implement and optimize data pipelines and stream processing applications using technologies like Apache Kafka, Apache Flink, and Spark Streaming.
- Collaborate with software and data engineering teams to define event schemas, ensure data quality, and support the integration of new services into the streaming ecosystem.
- Develop and maintain automation and tooling for platform provisioning, configuration management, and CI/CD pipelines.
- Champion the development of self‑service tools and workflows that empower engineers to manage their own streaming data needs, reducing friction and accelerating development.
- Monitor platform performance, troubleshoot issues, and implement observability solutions (metrics, logging, tracing) to ensure the platform’s health and stability.
- Stay up‑to‑date with the latest advancements in streaming and distributed systems technologies and propose innovative solutions to technical challenges.
Your story
- Strong production experience with Apache Kafka and its ecosystem (Confluent Cloud, Kafka Streams, Kafka Connect). Solid understanding of distributed systems and event‑driven architectures.
- Experience building and optimizing real‑time data pipelines for ML, analytics, and reporting, leveraging Apache Flink or Spark Structured Streaming and integrating with low‑latency OLAP systems such as Apache Pinot.
- Hands‑on experience with major Cloud Platforms (AWS, GCP, or Azure), Kubernetes and Docker, and proficiency in Infrastructure as Code (Terraform). Experience integrating and managing CI/CD pipelines (GitHub Actions) and implementing observability solutions (New Relic, Prometheus, Grafana).
- Proficiency in at least one of Python, TypeScript, Java, Scala, or Go.
- Familiarity with data platform concepts, including data lakes and data warehouses.
What we offer
On is a place centered around growth and progress. We offer an environment designed to give people the tools to develop holistically – to stay active, to learn, explore, and innovate. Our distinctive approach combines a supportive, team‑oriented atmosphere with access to self‑care resources for both physical and mental well‑being.
On is an Equal Opportunity Employer. We are committed to creating a work environment that is fair and inclusive, where all decisions related to recruitment, advancement, and retention are free of discrimination.