Job description:
Job Title: MLOps Engineer (Databricks)
Rate: Depending on experience
Location: Remote
Contract Length: 12-24 months
A European consultancy is seeking a Databricks-focused MLOps Engineer to join the team on a long-term 12-24 month contract.
This role will support the full end-to-end model lifecycle in production environments built on Azure and Databricks, not only internally but also in close collaboration with business units and customer teams across the international business.
Databricks expertise is a must.
Core Responsibilities:
- Build and manage ML/MLOps pipelines using Databricks: Design, optimise and operate robust end-to-end machine learning pipelines within the Databricks environment on Azure.
- Support internal project teams: Act as a technical point of contact for internal stakeholders, assisting with onboarding to Databricks, model deployment and pipeline design.
- Leverage key Databricks features: Utilise capabilities such as MLflow, Workflows, Unity Catalog, Model Serving and Monitoring to enable scalable and manageable solutions.
- Implement governance and observability: Integrate compliance, monitoring and audit features across the full machine learning lifecycle.
- Operationalise ML/AI models: Lead efforts to move models into production, ensuring they are stable, secure and scalable.
- Hands-on with model operations: Work directly on model hosting, monitoring, drift detection and retraining processes.
- Collaborate with internal teams: Participate in customer-facing meetings, workshops and solution design sessions across departments.
- Contribute to platform and knowledge improvement: Support the continuous development of Databricks platform services and promote knowledge sharing across teams.
Essential Skills and Experience:
- End-to-end ML/AI lifecycle expertise: Strong hands-on experience across the full machine learning lifecycle, from data preparation and model development to deployment, monitoring, and retraining.
- Proficiency with Azure Databricks: Practical experience using key components such as:
  - MLflow for experiment tracking and model management
  - Delta Lake for data versioning and reliability
  - Unity Catalog for access control and data governance
  - Workflows for pipeline orchestration
  - Model Serving and automation of the model lifecycle
- Machine learning frameworks: Working knowledge of at least one widely used ML library, such as PyTorch, TensorFlow, or Scikit-learn.
- DevOps and automation tooling: Experience with CI/CD pipelines, infrastructure-as-code (e.g., Terraform), and container technologies like Docker.
- Cloud platform familiarity: Experience working on Azure is preferred; however, a background in AWS or other providers with a willingness to transition is also suitable.
- Production-grade pipeline design: Proven ability to design, deploy, and maintain machine learning pipelines in production environments.
- Stakeholder-focused communication: Ability to explain complex technical concepts in a clear and business-relevant way, especially when working with internal customers and cross-functional teams.
- Governance and compliance awareness: Exposure to model monitoring, data governance, and regulatory considerations such as explainability and security controls.
- Agile working practices: Comfortable contributing within agile teams and using tools like Jira or equivalent project management platforms.

Desirable Experience:
- Experience working with large language models (LLMs), generative AI or multimodal orchestration tools
- Familiarity with explainability libraries such as SHAP or LIME
- Previous use of Azure services such as Azure Data Factory, Synapse Analytics or Azure DevOps
- Background in regulated industries such as insurance, financial services or healthcare
If this sounds like an exciting opportunity, please apply with your CV.
NOTE: Please mention Fuchsjobs as the source of your application.