Job Description:
Data Engineering Consultant
About the Company
A global technology consultancy is expanding its data practice and looking for skilled Data Engineering Consultants to help clients build scalable, high-quality data platforms. The company partners with major brands across multiple industries, combining technical expertise with industry insight to deliver impactful digital and data solutions. You'll join a collaborative team focused on solving complex data challenges, shaping modern architectures, and delivering measurable business value through technology.
Role Overview
As a Data Engineering Consultant, you'll design, implement, and optimise data pipelines and architectures that enable analytics, automation, and data-driven decision-making. The role sits at the intersection of engineering and consultancy: translating technical expertise into practical outcomes for clients.
Key Responsibilities

Data Pipeline Development
- Build and maintain real-time and batch data pipelines for large-scale data processing.
- Use modern frameworks such as Apache Spark, Databricks, Snowflake, and Airflow to automate ingestion and transformation.
Data Integration & Transformation
- Collaborate with analysts and scientists to define mappings, quality checks, and validation processes.
- Develop ETL/ELT workflows that ensure reliability and scalability.
Automation & Optimisation
- Automate data reconciliation, error handling, and metadata processes.
- Continuously refine data pipeline performance and cost efficiency.

Collaboration & Leadership
- Work closely with multidisciplinary teams to align technical solutions with business goals.
- Provide technical mentorship and contribute to best practices across data projects.
- Support client engagements and contribute to solution design discussions.
Governance & Compliance
- Apply strong security and access control measures, ensuring compliance with relevant standards such as GDPR.
- Maintain documentation of data lineage and ownership.
Skills & Experience
- Strong programming skills in Python, SQL, Scala, or Java.
- Experience with distributed data technologies such as Apache Spark, Hadoop, Databricks, and Snowflake.
- Familiarity with cloud environments (AWS, Azure, or GCP).
- Understanding of data modelling and of relational and NoSQL databases (PostgreSQL, SQL Server, MongoDB, Cassandra).
- Experience with infrastructure and deployment tools (Docker, Kubernetes, Terraform, CI/CD).
- Strong architectural awareness of scalable, secure, and high-performance data systems.
What's on Offer
- Competitive compensation and performance incentives
- Opportunities for technical and career development
- Access to training, certifications, and mentorship
- Flexible working arrangements
- Health and wellbeing support
- A diverse, collaborative, and inclusive environment
NOTE: Please mention Fuchsjobs as the source of your application.