Unfortunately, the job posting Senior Data Engineer (Search Platform) (m/f/x) in Berlin is no longer available.
But don't worry – we have an excellent alternative for you here!

Senior Software Engineer - Data Platform (d/f/m, Berlin)

Job Description:

We build data product sharing software which fuels AI.

Monda believes that any company should be able to share and access the data they need to fuel AI. Therefore we create a borderless data sharing ecosystem to fuel the AI revolution and accelerate human progress. We encourage and empower any company in the world to share and monetize their data safely.

Our Engineering Team:

We are a passionate, multicultural engineering team dedicated to turning complex data challenges into seamless software — always deciding, acting, and delivering with the customer experience at the core.

Our Data Tech Stack:

Snowflake, Prefect, Python, Django, AWS, Terraform, Cloudflare, Docker, GitHub, Heroku

Our Tech Challenges:

Simplify cross-cloud data product creation: Enable easy onboarding of data sources from multiple cloud environments and ensure reliable data delivery for true cross-cloud sharing, supporting seamless data marketplace integrations across AWS, GCP, Azure, Snowflake, and Databricks.

Fuel the AI revolution: Streamline data customization and multi-asset data product management, empowering integrations with leading data marketplaces such as Datarade, Snowflake Marketplace, Databricks, Google Cloud Analytics Hub, and SAP Datasphere. Drive innovation in data platforms: Tackle the challenges of scalability, reliability, and performance in a rapidly evolving, multi-cloud ecosystem while enabling business-ready, high-quality data products.

Tasks

We're looking for a Senior Software Engineer (d/f/m) to join our software engineering team in Berlin. As an individual contributor (IC), you'll work closely with our Head of Engineering. You'll build a data platform as a product, not just pipelines. You'll design and implement reusable, generic systems that allow data teams to move faster with less custom engineering.

The start date is ASAP, the work location is Berlin (hybrid), and the base salary starts at €75K gross per annum (depending on experience).

What you'll do:

  • Design and build generic, configurable data pipelines that work across customers and use cases, reducing bespoke implementations and operational overhead
  • Develop core backend components that enable customers to connect diverse data sources and publish marketplace-ready data products for secure data sharing and monetization
  • Own data flow reliability end-to-end: observability, performance, error handling, and scalability of Python-based pipelines orchestrated with Prefect and running on AWS ECS
  • Build cross-cloud data exchange capabilities, enabling secure, high-performance data sharing between Snowflake and other cloud environments
  • Evolve the platform architecture, balancing flexibility and standardization while keeping long-term maintainability in mind
  • Collaborate closely with product, engineering, and customers to translate real-world data workflows into robust platform abstractions
  • Continuously improve developer and customer experience, reducing operational friction for both internal teams and data consumers
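The "generic, configurable data pipelines" idea above can be sketched as configuration-driven steps rather than hardcoded per-customer logic. A minimal illustration in plain Python (all names here are hypothetical; per the stack listed above, real steps would presumably be Prefect tasks running on AWS ECS):

```python
# Hypothetical sketch: one pipeline definition, many customers via config.
# Plain Python only, to show the configuration-over-customization idea.

def extract(source: dict) -> list[dict]:
    # Placeholder extract step; a real one would read from S3, Snowflake, etc.
    return [{"id": 1, "value": 10}, {"id": 2, "value": None}]

def clean(rows: list[dict], drop_nulls: bool) -> list[dict]:
    # Keep rows; optionally filter out null values, controlled by config.
    return [r for r in rows if not (drop_nulls and r["value"] is None)]

def run_pipeline(config: dict) -> list[dict]:
    """One generic pipeline; per-customer behavior comes only from config."""
    rows = extract(config["source"])
    return clean(rows, drop_nulls=config.get("drop_nulls", True))

# Two customers, zero custom code: only their config objects differ.
customer_a = {"source": {"type": "s3", "bucket": "a-data"}, "drop_nulls": True}
customer_b = {"source": {"type": "snowflake", "db": "b_db"}, "drop_nulls": False}

print(len(run_pipeline(customer_a)))  # 1: the null row is dropped
print(len(run_pipeline(customer_b)))  # 2: nulls are kept
```

The point of the sketch is the shape, not the steps: new customers are onboarded by adding configuration, not by forking pipeline code.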

Requirements

Minimum qualifications:

  • 5+ years of professional experience as a Software Engineer, Data Engineer, or in a hybrid role, building and operating production-grade data platforms or backend systems
  • Strong Python expertise, with a focus on writing clean, testable, and maintainable code beyond one-off pipelines
  • Solid experience designing scalable data systems on Snowflake, including performance optimization, security, and data sharing concepts
  • Experience building and operating generic, reusable data pipelines rather than highly customized, client-specific workflows
  • Practical knowledge of workflow orchestration frameworks (e.g. Prefect, Airflow, Dagster); ability to reason about orchestration patterns independent of specific tools
  • Good understanding of cloud-native architectures, including containerization (Docker) and running workloads on AWS (ECS or similar)
  • Comfortable working in cross-functional, product-oriented teams, collaborating closely with engineering, product, and customer-facing roles

Bonus qualifications:

  • Degree in Computer Science, Information Systems, Application Programming, or a related technical field
  • Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform
  • Background in international B2B software applications, ideally within the e-commerce industry
  • In-depth knowledge of multiple cloud service providers (e.g., AWS, GCP, Azure) and experience working in cross-cloud environments
  • Genuine passion for Data Engineering, with additional experience in web application development or adjacent software domains

We’d love to hear from you! Apply now and expect a fast, transparent hiring process: a quick intro call, a focused code challenge, and conversations with the team and founders.

NOTE:
Please mention Fuchsjobs as the source of your application.

Job Information

  • Publication date: 02 Mar 2026
  • Location: Berlin
  • Type: Full-time
  • Work model: On-site
  • Category: Development & IT
  • Experience: 2+ years
  • Employment type: Permanent
