$90K–$130K a year
Build and optimize ETL pipelines and data models on AWS, focusing on Redshift performance tuning and orchestration with Airflow and Step Functions.
3–5 years of data engineering experience, with strong skills in Spark, AWS Glue, Redshift, Python, PySpark, SQL, data modeling, ETL pipeline optimization, and AWS cloud services.
• 3–5 years of experience in data engineering.
• Strong experience with distributed data processing (Spark, AWS Glue, EMR, or equivalent).
• Hands-on expertise with data modeling, ETL pipelines, and performance optimization.
• Strong hands-on expertise in building and optimizing ETL pipelines into Amazon Redshift.
• Proficiency in Python, PySpark, and SQL; familiarity with Iceberg tables preferred.
• Solid background in data analysis and data warehousing concepts (star/snowflake schema design, dimensional modeling, and reporting enablement).
• Orchestration experience with Airflow, Step Functions, and Lambda.
• Experience with Redshift performance tuning, schema design, and workload management.
• Cloud experience (AWS ecosystem preferred).

About the Company: TechDigital
Company Size: 100 to 499 employees
Industry: Other/Not Classified
This job posting was last updated on 10/14/2025