Senior Data Engineer
Mason Alexander
- Dublin
- Contract
- Full-time
Responsibilities
- Develop Data Pipelines: Create and maintain ETL pipelines to move data into centralized storage, ensuring data quality and integrity.
- Integrate Data: Combine data from various sources, including databases, APIs, and external providers, to build a unified data foundation.
- Data Transformation: Clean, normalize, and aggregate data for analysis, reporting, or machine learning.
- Best Practices & Frameworks: Develop and implement frameworks for data pipeline development, deployment, and automation.
- Data Governance: Enforce data governance standards.
- Collaborate with Teams: Work with analytics, product, and infrastructure teams to advance data and analytics platforms.
- Monitor Systems: Ensure the reliability of data systems through monitoring, issue detection, and automated error handling.
Requirements
- Experience designing data solutions and data models.
- Strong skills in developing data processing jobs using PySpark or SQL.
- Proficiency with data pipeline orchestration tools such as Azure Data Factory (ADF) or Airflow.
- Experience with real-time and batch data processing.
- Familiarity with building data pipelines on Azure.
- Proficiency in SQL, including advanced features such as window functions.
- Understanding of DevOps, CI/CD pipelines, and Git workflows.
- Capable of working with both business and technical stakeholders.
- Bachelor’s degree in a relevant field.
- 3-5 years of data engineering experience.
- Experience with agile/scrum methodologies.
- Knowledge of Azure Data Factory, Databricks, Snowflake, and shell scripting.
- Proficiency in Python, especially for data-related tasks.
- Familiarity with Apache Spark, Kafka, Kubernetes, Docker, and cloud infrastructure management (e.g., Terraform).
- Experience in ML/AI model development and deployment is a plus.
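As an illustration of the window-function skill listed above, here is a minimal, hedged sketch of a running-total query of the kind such roles involve. It uses Python's built-in sqlite3 module rather than PySpark or a warehouse engine, and the `orders` table and its columns are hypothetical:

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
  ('a', '2024-01-01', 10.0),
  ('a', '2024-01-05', 20.0),
  ('b', '2024-01-02', 5.0);
""")

# Window function: running total of order amounts per customer.
rows = conn.execute("""
SELECT customer, order_date, amount,
       SUM(amount) OVER (
           PARTITION BY customer ORDER BY order_date
       ) AS running_total
FROM orders
ORDER BY customer, order_date
""").fetchall()

for row in rows:
    print(row)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over to Spark SQL, Snowflake, and Databricks with only minor dialect differences.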