$120K - 160K a year
Design and develop data applications and pipelines on AWS using big data technologies to ingest, process, and analyze large datasets, build REST APIs, and implement scalable data architectures.
Minimum of 10 years of data engineering experience with AWS, Spark, Glue, Python, SQL, and cloud big data technologies, plus the ability to collaborate with teams and design scalable data solutions.
AWS Data Engineer (Spark, AWS, Glue)
Location: Fort Mill, SC (Hybrid)

Responsibilities:
- Work with development teams and other project leaders/stakeholders to provide technical solutions that enable business capabilities
- Design and develop data applications using big data technologies (AWS, Spark) to ingest, process, and analyze large, disparate datasets
- Build robust data pipelines on the cloud using AWS Glue, Aurora Postgres, EKS, Redshift, PySpark, Lambda, and Snowflake
- Build REST-based data APIs using Python and Lambda
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and AWS "big data" technologies
- Work with data and analytics experts to strive for greater functionality in our data systems
- Implement architectures to handle large-scale data and its organization
- Execute strategies that inform data design and architecture, partnering with enterprise standards
- Work across teams to deliver meaningful reference architectures that outline architecture principles and best practices for technology advancement

Must have a minimum of 10 years of experience in data engineering.
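One of the responsibilities above is building REST-based data APIs with Python and Lambda. As a minimal sketch of what that can look like, here is a Lambda handler following the API Gateway proxy-integration event shape; the `record` lookup is a hypothetical stand-in for a real query against a store such as Aurora or Redshift.

```python
import json

def lambda_handler(event, context):
    """Handle a GET-style request routed through API Gateway."""
    # API Gateway proxy events carry query parameters here (may be None).
    params = event.get("queryStringParameters") or {}
    record_id = params.get("id")
    if record_id is None:
        # Return a 400 when the required query parameter is missing.
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "missing 'id' parameter"}),
        }
    # Hypothetical lookup; a production handler would query the backing
    # data store and serialize the result.
    record = {"id": record_id, "status": "ok"}
    return {"statusCode": 200, "body": json.dumps(record)}
```

This is an illustrative sketch under the assumptions noted above, not a prescribed implementation for the role.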
This job posting was last updated on 9/5/2025