Databricks Engineer
Job description:

    🌍 Hello World!

    We are The Codest - an international tech software company with tech hubs in Poland, delivering global IT solutions and projects. Our core values lie in a “Customers and People First” approach that prioritises the needs of our customers and fosters a collaborative environment for our employees, enabling us to deliver exceptional products and services.

    Our expertise centers on web development, cloud engineering, DevOps, and quality assurance. After many years of developing our own product, Yieldbird, honored as a laureate of the prestigious Top25 Deloitte awards, we arrived at our mission: to help tech companies build impactful products and scale their IT teams by boosting IT delivery performance. Our extensive experience with product development challenges has made us experts in building digital products and scaling IT teams.

    But our journey does not end here: we want to keep growing. If you’re goal-driven and looking for new opportunities, join our team! What awaits you is an enriching, collaborative environment that fosters your growth at every step.

    We are currently looking for:

    DATABRICKS ENGINEER

    Here you will have the opportunity to contribute to a customer-facing banking app for one of Japan’s leading financial groups. The platform includes banking modules and data management features. We are seeking an experienced Databricks Engineer to design, build, and manage scalable data solutions and pipelines using Databricks. You’ll work closely with cross-functional teams to keep data reliable, accessible, and efficient, powering analytics and business intelligence initiatives.


    📈 Your Responsibilities:

    • Design medallion-architecture (Bronze, Silver, Gold) lakehouses with optimized performance patterns (see the sketch after this list)

    • Build strong data quality frameworks with automated testing and monitoring

    • Implement advanced Delta Lake features such as time travel, vacuum operations, and Z-ordering

    • Develop and maintain complex ETL/ELT pipelines processing large-scale datasets daily

    • Design and implement CI/CD workflows for data pipelines using Databricks Asset Bundles or equivalent tools

    • Create real-time and batch data processing solutions with Structured Streaming and Delta Live Tables

    • Optimize Spark jobs for cost efficiency and performance, leveraging cluster auto-scaling and resource management

    • Develop custom integrations with Databricks APIs and external systems

    • Design scalable data architectures using Unity Catalog, Delta Lake, and Apache Spark

    • Establish data mesh architectures with governance and lineage tracking
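
    As an illustrative flavor of this work, here is a minimal PySpark sketch of a Bronze-to-Silver Delta Lake step, the pattern referenced in the first responsibility above. It assumes a Databricks runtime where Spark and Delta Lake are available; the table and column names (bronze.events, silver.events, event_id) are hypothetical.

    # Illustrative sketch only; assumes a Databricks runtime with Delta Lake.
    # Table and column names (bronze.events, silver.events, event_id) are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks clusters

    # Bronze: raw ingested events, stored as-is.
    bronze = spark.read.table("bronze.events")

    # Silver: validated, deduplicated records with an audit column.
    silver = (
        bronze
        .filter(F.col("event_id").isNotNull())
        .dropDuplicates(["event_id"])
        .withColumn("processed_at", F.current_timestamp())
    )

    silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")

    On a real project, a step like this would typically run incrementally, for example via Structured Streaming or Delta Live Tables, rather than as a full overwrite.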

Job information
  • Type: Full-time
  • Work model: Remote
  • Category: Development & IT
  • Experience: Experienced
  • Employment type: Employee
  • Publication date: 19 Aug 2025
  • Location:
