$120K - $160K a year
Design and maintain scalable data pipelines and infrastructure to support analytics, reporting, and machine learning initiatives.
Requires 3+ years in data engineering; proficiency in SQL and Python; and experience with ETL tools, cloud platforms, relational and non-relational databases, and data warehousing.
Job Title: Data Engineer
Location: Remote
Job Type: W2
Experience Level: Senior-Level
Experience in healthcare is mandatory.

Job Summary:
We are seeking a skilled and detail-oriented Data Engineer to join our data engineering team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, reporting, and machine learning initiatives. This role involves working closely with data scientists, analysts, and business stakeholders to ensure high data quality and availability.

Key Responsibilities:
• Design, develop, and manage scalable and reliable data pipelines using ETL/ELT processes.
• Develop data models, perform data transformations, and ensure data integrity across systems.
• Work with cloud platforms (e.g., AWS, Azure, or GCP) to deploy and manage data infrastructure.
• Optimize and troubleshoot complex SQL queries and database performance issues.
• Collaborate with cross-functional teams to gather requirements and translate them into data engineering solutions.
• Monitor and manage data workflows, logging, and error handling.
• Ensure data governance, privacy, and security standards are followed.
• Automate data ingestion and processing tasks where possible.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 3+ years of hands-on experience in data engineering or similar roles.
• Proficiency in SQL and at least one programming language (e.g., Python, Scala).
• Experience with ETL/ELT tools such as Apache Airflow, Talend, or Informatica.
• Familiarity with cloud platforms (AWS, Azure, or GCP) and cloud-native services.
• Knowledge of relational and non-relational databases (e.g., PostgreSQL, MongoDB, Snowflake).
• Understanding of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Synapse).

Preferred Qualifications:
• Experience with big data technologies like Spark, Hadoop, or Databricks.
• Knowledge of containerization tools like Docker and orchestration using Kubernetes.
• Familiarity with CI/CD pipelines for data engineering projects.
• Experience working in Agile environments.

Kindly share your updated CV with venkatesh.kulkarni@centstone.com.
This job posting was last updated on 10/11/2025