✨ About The Role
- This role involves designing, building, and testing end-to-end data pipelines on Azure cloud infrastructure, including Cosmos DB.
- The Data Engineer will be responsible for developing ETL processes that load data into data lakes.
- The position requires extensive work with Databricks and data warehousing concepts, including the creation of custom frameworks and libraries.
- The candidate will be responsible for all aspects of the software development lifecycle, including design, coding, integration testing, deployment, and documentation.
- The role emphasizes following best practices and coding standards while working collaboratively within an agile project team.
⚡ Requirements
- The ideal candidate will have more than 7 years of experience with Big Data technologies and a strong background in data engineering.
- A successful applicant will possess hands-on development experience with Databricks and a solid understanding of cloud computing, particularly Azure.
- The candidate should be proficient in data warehousing concepts and have experience with ETL processes for data ingestion and transformation.
- Strong problem-solving skills and the ability to troubleshoot data pipelines are essential for this role.
- The individual should be a team player with excellent communication skills and a positive attitude, able to collaborate effectively with globally distributed team members.