via DailyRemote
$126K - 164K a year
Design, build, and maintain scalable data pipelines and systems, ensuring data security and compliance, primarily in a healthcare context.
Requires at least 10 years of data engineering experience, strong SQL and Python skills, experience with modern data platforms like Snowflake or Redshift, and familiarity with cloud services like AWS.
We are looking for a lead data engineer who is highly collaborative and comfortable with ambiguity and change in a fast-paced environment, to partner closely with various teams to design, build, and maintain data systems and applications. Must be a self-starter and highly organized, with strong problem-solving and learning skills, to work with various teams to analyze and design data storage structures and build performant, automated data pipelines that are reliable and scalable in a fast-growing data ecosystem.

Responsibilities and Skills:
• Leverage a strong understanding of data modeling principles and modern data platforms to properly design and implement data pipeline solutions.
• Experience using analytic SQL and working with traditional relational databases and/or distributed systems such as AWS S3, Hadoop/Hive, and Redshift.
• Provide production support and adhere to the defined SLA(s).
• Strong technical skills and a demonstrated ability to be detail-oriented.
• Good understanding of data security, compliance, and policies in healthcare services.
• Demonstrated ability to test and validate according to industry best practices and to communicate effectively with peers on technology teams.
• Strong interpersonal, written, and oral communication skills, with the ability to work with all levels of the organization.
• Prior experience in the healthcare industry is a plus.

Estimated salary range for this position is $126,148.63 - $163,993.22 per year, depending on experience.

Degrees:
• Bachelor's

Additional Qualifications:
• Bachelor's degree in computer science or a related field; Master's degree preferred.
• At least 10 years (with a Bachelor's) or at least 8 years (with a Master's) of recent experience in data engineering and end-to-end automation of data pipelines.
• Technical skills: data warehousing, strong SQL, and Python.
• Strong understanding of data science and business intelligence workflows.
• Programming experience, ideally in Python and SQL.
• Experience with large-scale data warehousing and analytics projects, including AWS and GCP technologies.
• Proven track record of successful written communication; technically deep and business savvy.
• Snowflake, dbt Cloud, GitLab, Terraform, and Serverless EMR strongly preferred.
• Experience in production support and troubleshooting.
• Hands-on experience with modern data platforms (Snowflake, Redshift, etc.) required.
• 10 years of hands-on data warehouse design/architecture with AWS and Data Vault 2.
• Helpful to have: dimensional modeling; orchestration tools such as Airflow or Control-M; containerization of applications (Docker and Kubernetes); CloudFormation and Terraform; AWS services (S3, AWS Glue, Athena, Lake Formation, DynamoDB, DMS, RDS, etc.); Agile methodology and DevOps processes; and modern data integration tools (dbt, Informatica PowerCenter, AWS Glue, or similar).
• 3 years of data modeling and familiarity with AI/ML and BI tools, a plus.

Minimum Required Experience: 10 Years
This job posting was last updated on 12/12/2025