Master information
Data Scientist
Position: Not specified
Start: 21 Apr 2025
End: 31 Dec 2025
Location: Essen, Germany
Method of collaboration: Project only
Hourly rate: £59
Latest update: 9 Apr 2025
Task description and requirements
Data Scientist
My client, a large consultancy, is in need of a Data Scientist for an 8-month rolling contract based in Essen, with remote work on offer.
The ideal candidate will have strong experience in applying machine learning models such as Prophet, ARIMA, SARIMA, XGBoost, ElasticNet, Ridge, Lasso, Random Forest, and Linear Regression to time-series data. Specifically, they will have:
- Proficiency with Python ML packages such as scikit-learn, sktime, and darts.
- Strong expertise in key time-series feature-engineering techniques, including lag features, rolling-window statistics, Fourier transforms, and handling seasonality (see the sketch after this list).
- A proven ability to tune the performance of existing deployed forecasting models.
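For illustration, a minimal sketch of the feature-engineering techniques named above, using pandas and scikit-learn. The target column name, lag choices, and the weekly Fourier period are illustrative assumptions, not taken from the role description:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

def make_features(df: pd.DataFrame, target: str = "y") -> pd.DataFrame:
    out = df.copy()
    # Lag features: the target shifted back in time.
    for lag in (1, 7, 14):
        out[f"lag_{lag}"] = out[target].shift(lag)
    # Rolling-window statistics over a trailing 7-step window
    # (shifted by one step to avoid leaking the current value).
    out["roll_mean_7"] = out[target].shift(1).rolling(7).mean()
    out["roll_std_7"] = out[target].shift(1).rolling(7).std()
    # Fourier terms encoding an assumed weekly seasonal cycle.
    t = np.arange(len(out))
    for k in (1, 2):
        out[f"sin_{k}"] = np.sin(2 * np.pi * k * t / 7)
        out[f"cos_{k}"] = np.cos(2 * np.pi * k * t / 7)
    return out.dropna()

# Example: fit a regularised linear model on the engineered features.
df = pd.DataFrame({"y": np.random.default_rng(0).normal(size=200).cumsum()})
feats = make_features(df)
X, y = feats.drop(columns="y"), feats["y"]
model = Ridge(alpha=1.0).fit(X, y)
```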
The ideal candidate will have strong experience with the Azure Machine Learning Python SDK (v1/v2), using it to:
- Manage data, models, and environments.
- Build and debug AML pipelines that stitch together multiple tasks (feature engineering, training, model registration, etc.) and production workflows using Azure ML pipelines (see the sketch after this list).
- Schedule Azure ML jobs.
- Deploy registered models to create endpoints.
Experience with K-Means clustering is a plus.
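For illustration, a minimal sketch of submitting a training step with the Azure ML Python SDK v2 (the azure-ai-ml package); command jobs like this are the building blocks that AML pipelines stitch together. The workspace details, source folder, data asset, environment, and compute names are placeholder assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command, Input

# Connect to the workspace (all identifiers are placeholders).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Define a command job that runs a training script on a compute cluster.
train_job = command(
    code="./src",  # assumed local folder containing train.py
    command="python train.py --data ${{inputs.data}}",
    inputs={"data": Input(type="uri_folder", path="azureml:demand-data:1")},
    environment="azureml:sklearn-env:1",  # assumed registered environment
    compute="cpu-cluster",                # assumed compute target
)

# Submit the job; the returned object tracks its status in the workspace.
returned_job = ml_client.jobs.create_or_update(train_job)
```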
The ideal candidate will have strong experience with Azure services such as Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure Key Vault to architect and maintain scalable data solutions. They will:
- Design, develop, and deploy new Azure Data Factory (ADF) pipelines for data ingestion, transformation, and logging, ensuring robustness and reliability.
- Transform and manipulate data proficiently with PySpark and Python to derive actionable insights from complex datasets (see the sketch after this list).
- Collaborate with cross-functional teams to understand data requirements and translate them into effective technical solutions.
- Lead the implementation and optimization of CI/CD pipelines using Azure DevOps, ensuring a seamless build-and-release flow for data infrastructure and applications.
- Drive best practices in data engineering, including data governance, security, and performance optimization.
- Stay abreast of industry trends and emerging technologies, contributing to the continuous improvement of the client's data engineering capabilities.
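For illustration, a minimal PySpark sketch of the kind of transformation work described above: reading raw data from a Data Lake path (e.g. landed by an ADF copy activity), aggregating it, and writing a curated output. The storage account, container names, and column names are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-sales").getOrCreate()

# Read raw ingested data from the data lake (paths are placeholders).
raw = spark.read.parquet(
    "abfss://raw@<storage-account>.dfs.core.windows.net/sales/"
)

# Transform: derive a date column and aggregate to a daily grain.
curated = (
    raw
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "product_id")
    .agg(
        F.sum("quantity").alias("units_sold"),
        F.sum("net_amount").alias("revenue"),
    )
)
print(f"curated rows: {curated.count()}")  # simple pipeline logging

# Write the curated dataset back to the lake for downstream consumers.
curated.write.mode("overwrite").parquet(
    "abfss://curated@<storage-account>.dfs.core.windows.net/daily_sales/"
)
```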