$90K - $130K a year
Design, maintain, and optimize Python-based ETL pipelines and cloud data services on Azure Synapse while ensuring high availability and compliance.
Bachelor’s degree, 5+ years data engineering, 2+ years Python and Azure experience, strong SQL skills, knowledge of Big Data and data governance frameworks.
Who we are

We are seeking a skilled Data Engineer to join our team. The successful candidate will be responsible for the development, maintenance, and optimization of complex data ingestion and ETL pipelines within Allianz Technology's diverse data ecosystem. You will play a key role in ensuring the efficient operation of cloud-based service components, leveraging your expertise in Azure Synapse and other relevant technologies.

What you'll be doing

- Design and implement new functionality in Python-based ETL pipelines, ensuring scalability and performance optimization.
- Maintenance and optimization: proactively manage existing pipelines, optimizing them for efficiency and reliability.
- Plan and manage the sizing of service components, ensuring proactive cloud component management.
- Initiate the planning and setup of new service components, coordinating with architects and modelers.
- Oversee the operations and maintenance of related software, complex server infrastructure, and databases, including Synapse, Power BI, and local development environments.
- Maintain service-related documentation in accordance with Allianz Technology standards, ensuring clarity and accessibility.
- Implement monitoring solutions and manage Service Level Agreements to ensure high availability and performance.
- Document, maintain, and optimize processes and configurations related to the environments, applying best practices.
- Collaborate with development, architecture, rollout, and operations teams, as well as customers, to manage releases, upgrades, and changes.
- Manage and support service-related risk assessments and audits, ensuring compliance with Allianz's governance frameworks.

What you'll bring along

- Bachelor's degree in Computer Science, Information Technology, or a related field
- Minimum 5 years of experience in data engineering
- At least 2 years' experience with data-oriented Python (Polars, Pandas, PySpark)
- At least 2 years' experience with Microsoft Azure
- Strong experience in building pipelines using Azure Synapse and/or similar technologies
- Experience with SQL and familiarity with database technologies
- Knowledge of Big Data technologies such as Apache Spark
- Experience with the software-as-a-service model
- Excellent communication skills with a high customer focus
- Experience with Delta Lake and/or Power BI
- Familiarity with DevOps processes and Data Vault 2.0
- Experience with data governance frameworks, such as Informatica, is a plus
- Excellent command of both spoken and written English
This job posting was last updated on 8/13/2025