As a Data Engineer, you will design, build, and maintain robust ETL pipelines to support large-scale data processing. You will collaborate with cross-functional teams to deliver solutions that align with business needs.
A Bachelor’s degree in a related field is required, along with strong experience in Python or Java for data engineering tasks. Familiarity with cloud data platforms and with frameworks such as Apache Spark and Airflow is essential.
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer in California (USA).

As a Data Engineer, you will play a key role in building and optimizing data pipelines, enabling the seamless flow of information across modern cloud platforms and analytics systems. This role involves working with large-scale, complex datasets to design ETL processes, integrate data sources, and ensure the reliability and scalability of infrastructure. You will collaborate with data scientists, analysts, and business teams to create solutions that unlock insights and support decision-making. This is an excellent opportunity to apply your technical expertise while contributing to data-driven innovation in a fast-paced environment.

Accountabilities
- Design, build, and maintain robust ETL pipelines to support large-scale data processing.
- Work with cloud data platforms (AWS, Azure, GCP) to architect scalable data solutions.
- Develop and optimize data models, ensuring high performance and accessibility.
- Integrate structured and unstructured data from multiple sources into analytics-ready systems.
- Implement data quality checks, monitoring, and performance tuning for reliability.
- Collaborate with cross-functional teams to deliver solutions aligned with business needs.
- Support real-time data streaming and automation initiatives using modern tools and frameworks.

Requirements
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- Strong experience with Python or Java for data engineering tasks.
- Hands-on expertise with frameworks such as Apache Spark, Airflow, and Databricks.
- Proficiency in ETL development and cloud-based data platforms (AWS S3, Redshift, Snowflake, or BigQuery).
- Solid knowledge of relational and non-relational databases, data modeling, and query optimization.
- Familiarity with infrastructure-as-code and DevOps practices for data systems.
- Experience collaborating with data scientists and analysts to support machine learning and analytics use cases.
- Strong problem-solving skills and the ability to work in fast-paced, agile environments.
- Certifications in AWS, Microsoft, or Google Cloud are a plus.

Benefits
- Competitive salary package tailored to experience and skills.
- Full healthcare coverage and wellness benefits.
- Flexible remote work options with opportunities for collaboration and mentorship.
- Access to ongoing certification support and professional development.
- Career growth opportunities in cutting-edge data engineering and cloud technologies.

Jobgether is a Talent Matching Platform that partners with companies worldwide to connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process, designed to identify top candidates efficiently and fairly.

🔍 Our AI thoroughly evaluates your CV and LinkedIn profile, analyzing your skills, experience, and achievements.
📊 It compares your profile against the role’s requirements and past success factors to generate a match score.
🎯 The top 3 candidates with the strongest match are automatically shortlisted.
🧠 If needed, our human team conducts a manual review to ensure no strong candidate is overlooked.

This process is transparent, skills-based, and free of bias, focusing solely on your fit for the role. Once the shortlist is finalized, it is shared directly with the hiring company. The final decision and any next steps, such as interviews or assessments, are handled by their internal hiring team.
Thank you for your interest! #LI-CL1
This job posting was last updated on 9/25/2025