Inficare Technologies

via Monster


Data Engineer (Remote)

Anywhere
Contract
Posted 10/7/2025
Key Skills:
Databricks
Apache Spark
Python
SQL
PySpark
AWS
Azure
Delta Lake
dbt
Terraform
CloudFormation

Compensation

Salary Range

$120K - $160K a year

Responsibilities

Design, develop, and optimize scalable data pipelines and workflows on Databricks, collaborate with AI/ML teams, and ensure data quality and compliance in cloud environments.

Requirements

5+ years software/data engineering experience including 2+ years with Databricks and Spark, strong Python, SQL, cloud expertise, infrastructure-as-code knowledge, and leadership skills.

Full Description

Title: Sr. Data Engineer (Databricks)
Location: USA / Remote
Job Type: 6-12 Months Contract

About the Role

We're looking for a hands-on Data Engineer to build reliable, scalable data pipelines on Databricks. You'll turn requirements into production-grade ELT/ETL jobs, Delta Lake tables, and reusable components that speed up our teams. In this role, you'll implement and improve reference patterns, optimize Spark for performance and cost, apply best practices with Unity Catalog and workflow orchestration, and ship high-quality code others can build on. If you love solving data problems at scale and empowering teammates through clean, well-documented solutions, you'll thrive here.

Core Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field
5+ years of experience in software/data engineering, including at least 2 years working with Databricks and Apache Spark
Strong proficiency in Python, SQL, and PySpark
Deep understanding of AWS and Azure cloud services
Experience with the Databricks Lakehouse, Databricks Workflows, Databricks SQL, and dbt
Solid grasp of data lakehouse and warehousing architecture
Prior experience supporting AI/ML workflows, including training data pipelines and model deployment support
Familiarity with infrastructure-as-code tools like Terraform or CloudFormation
Strong analytical and troubleshooting skills in a fast-paced, agile environment
Excellent collaboration skills for interfacing with both technical and non-technical customer stakeholders
Clear communicator with strong documentation habits
Comfortable leading discussions, offering strategic input, and mentoring others

Key Responsibilities:
The ideal candidate will have a strong background in building scalable data pipelines, optimizing big data workflows, and integrating Databricks with cloud services. This role will play a pivotal part in enabling the customer's data engineering and analytics initiatives, especially those tied to AI-driven solutions and projects, by implementing cloud-native architectures that fuel innovation and sustainability.
Partner directly with the customer's data engineering team to design and deliver scalable, cloud-based data solutions
Execute complex ad-hoc queries using Databricks SQL to explore large lakehouse datasets and uncover actionable insights
Leverage Databricks notebooks to develop robust data transformation workflows using PySpark and SQL
Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks
Build ETL/ELT workflows with AWS and Azure services
Optimize Spark jobs for both performance and cost within the customer's cloud infrastructure
Collaborate with data scientists, ML engineers, and business analysts to support AI and machine learning use cases, including data preparation, feature engineering, and model operationalization
Contribute to the development of AI-powered solutions that improve operational efficiency, route optimization, and predictive maintenance in the waste management domain
Implement CI/CD pipelines for Databricks jobs using GitHub Actions, Azure DevOps, or Jenkins
Ensure data quality, lineage, and compliance through tools like Unity Catalog, Delta Lake, and AWS Lake Formation
Troubleshoot and maintain production data pipelines
Provide mentorship and share best practices with both internal and customer teams
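To give candidates a concrete feel for the transform-and-validate pipeline work described above, here is a minimal sketch in plain Python. It uses dicts as stand-ins for the PySpark DataFrames an actual Databricks job would process, and the field names (`route_id`, `weight_kg`) are hypothetical, not from this posting.

```python
# Minimal sketch of a data-quality gate plus derived-column step, the kind of
# logic a PySpark transformation on Databricks would apply over Delta tables.
# All record fields here are hypothetical examples.

def clean_records(records):
    """Drop rows that fail basic quality checks and derive weight in tons."""
    cleaned = []
    for row in records:
        # Quality gate: require a route id and a non-negative weight.
        if not row.get("route_id") or row.get("weight_kg", -1) < 0:
            continue
        cleaned.append({**row, "weight_tons": row["weight_kg"] / 1000})
    return cleaned

raw = [
    {"route_id": "R1", "weight_kg": 2500},
    {"route_id": None, "weight_kg": 100},  # dropped: missing route id
    {"route_id": "R2", "weight_kg": -5},   # dropped: negative weight
]
print(clean_records(raw))
```

In a real pipeline the same filter-and-derive logic would be expressed with PySpark DataFrame operations so it scales across a cluster rather than a single process.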

This job posting was last updated on 10/11/2025
