via JazzHR
Salary: Not specified
The Senior Data Engineer will develop and manage end-to-end ETL pipelines and build optimized data workflows on Azure Databricks. They will also ensure data quality and collaborate with cross-functional teams to support analytics initiatives.
Candidates must have strong hands-on experience with Azure Databricks and PySpark, along with expertise in SQL for data transformation. Proven experience in ETL development and practical knowledge of Apache Airflow are also required.
Senior Data Engineer

Location: Remote
Visa: USC/GC/EAD

We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and analytics solutions. The ideal candidate will work closely with cross-functional teams to ensure efficient data availability, accuracy, and performance across the organization.

Key Responsibilities:
- Develop and manage end-to-end ETL pipelines using PySpark and SQL.
- Build and optimize data workflows on Azure Databricks for large-scale data processing.
- Automate workflow orchestration and scheduling using Apache Airflow.
- Ensure data quality, reliability, and integrity across multiple data sources.
- Collaborate with Data Scientists, Analysts, and Architects to support business intelligence and analytics initiatives.

Required Skills & Experience:
- Strong hands-on experience with Azure Databricks and PySpark.
- Expertise in SQL for data transformation, performance tuning, and querying.
- Proven experience in ETL development and data pipeline optimization.
- Practical knowledge of Apache Airflow for scheduling and orchestration.
- Understanding of cloud data architectures, preferably Microsoft Azure.
This job posting was last updated on 12/10/2025