AutoZone

via Oraclecloud

Sr. Data Architect

Elk Grove, California
Full-time
Posted 12/4/2025
Direct Apply
Key Skills:
Databricks
Apache Spark
Delta Lake
PySpark
Scala
SQL
GCP Cloud Storage
Data Governance
Unity Catalog
CI/CD
GitLab
Jenkins
Terraform
Vector Databases
Data Pipeline Architecture
Performance Tuning
Data Security
Agile Methodology

Compensation

Salary Range

$119K – $208K a year

Responsibilities

Lead and manage a data engineering team to design, implement, and optimize the Databricks Lakehouse platform and data pipelines, ensuring security, scalability, and performance.

Requirements

10+ years in data engineering or architecture with deep expertise in Databricks, Apache Spark, GCP, data governance, performance tuning, team leadership, and CI/CD integration.

Full Description

Position Summary

The Senior Data Architect is a senior technical leader responsible for building and optimizing a robust data platform in the automotive industry. In this full-time role, you will lead a team of data engineers and own the end-to-end architecture and implementation of the Databricks Lakehouse platform. You will collaborate closely with function leaders, domain analysts, and other stakeholders to design scalable data solutions that drive business insights. This position demands deep expertise in Databricks on GCP and the ability to build end-to-end data pipelines that handle large volumes of structured, semi-structured, and unstructured data. You will demonstrate strong leadership to ensure best practices in data engineering, performance tuning, and governance. You will be expected to communicate complex technical concepts and data strategies to technical and non-technical audiences, including executive leadership.

Position Responsibilities (other duties may be assigned)

Lead, mentor, and manage a team of data engineers, providing technical guidance and code reviews and fostering a high-performing team.
Own the Databricks platform architecture and implementation, ensuring the environment is secure, scalable, and optimized for the organization's data processing needs.
Design and oversee the Lakehouse architecture leveraging Delta Lake and Apache Spark.
Implement and manage Databricks Unity Catalog for unified data governance. Ensure fine-grained access controls and data lineage tracking are in place to secure sensitive data.
Collaborate with analytics teams to develop and optimize Databricks SQL queries and dashboards. Tune SQL workloads and caching strategies for faster performance and ensure efficient use of the query engine.
Lead performance tuning initiatives. Profile data processing code to identify bottlenecks and refactor for improved throughput and lower latency. Implement best practices for incremental data processing with Delta Lake, and ensure compute cost efficiency (e.g., by optimizing cluster utilization and job scheduling).
Work closely with domain analysts, data scientists, and product owners to understand requirements and translate them into robust data pipelines and solutions. Ensure that data architectures support analytics, reporting, and machine learning use cases effectively.
Integrate Databricks workflows into the CI/CD pipeline using DevOps principles and Git. Develop automated deployment processes for notebooks and jobs to promote consistent releases. Manage source control for Databricks code (using GitLab) and collaborate with DevOps engineers to implement continuous integration and delivery for data projects.
Collaborate with security and compliance teams to uphold data governance standards. Implement data masking, encryption, and audit logging as needed, leveraging Unity Catalog and GCP security features to protect sensitive data.
Stay up to date with the latest Databricks features and industry best practices. Proactively recommend and implement improvements (such as new performance optimization techniques or cost-saving configurations) to continuously enhance the platform's reliability and efficiency.

Position Requirements

10+ years of experience in data engineering, data architecture, or related roles, with a track record of designing and deploying data pipelines and platforms at scale.
Significant hands-on experience with Databricks (preferably on GCP) and the Apache Spark ecosystem.
Proficient in building data pipelines using PySpark/Scala and managing data in Delta Lake format.
Strong experience working with cloud data platforms (GCP preferred, or AWS/Azure). Familiarity with GCP Storage principles.
Strong skills in vector databases and embedding models to support scalable RAG systems. Proficient in optimizing retrieval and indexing for LLM integration.
Strong experience managing structured, semi-structured, and unstructured data in Databricks. Ability to inspect existing data pipelines, discern their purpose and functionality, and re-implement them efficiently in Databricks.
Advanced SQL skills with the ability to write and optimize complex queries. Solid understanding of data warehousing concepts and performance tuning for SQL engines.
Proven ability to optimize ETL jobs for performance and cost efficiency. Experience tuning cluster configurations, parallelism, and caching to improve job runtimes and resource utilization.
Demonstrated experience implementing data security and governance measures. Comfortable configuring Unity Catalog or similar data catalog tools to manage schemas, tables, and fine-grained access controls. Able to ensure compliance with data security standards and manage user/group access to data assets.
Experience leading and mentoring engineering teams. Excellent project leadership abilities to coordinate multiple projects and priorities. Strong communication skills to effectively collaborate with cross-functional teams and present architectural plans or results to stakeholders.
Experience working in an Agile environment.

Tools & Technologies

Databricks Lakehouse Platform: Databricks Workspace, Apache Spark, Delta Lake, Databricks SQL, MLflow (for model tracking), Postgres database.
Data Governance: Databricks Unity Catalog for data cataloging and access control.
Programming & Data Processing: PySpark and Python for building data pipelines and Spark jobs; SQL for querying.
Cloud Services: GCP Cloud Storage, GCP Pub/Sub, and vector databases.
DevOps & CI/CD: Git for version control (GitLab) and Jenkins; experience with Terraform for infrastructure-as-code is a plus.
Other Tools: Project and workflow management tools (Jira and Confluence); Looker Studio and Power BI.

Preferred

Databricks Certified Data Engineer Professional or Databricks Certified Data Engineer Associate.
Exposure to related big data and streaming tools such as Apache Kafka, GCP Pub/Sub, and Apache Airflow, and to BI/analytics tools (e.g., Power BI, Looker Studio), is advantageous.

The salary range for this position is $119,000 – $208,000. When extending an offer of employment, ALLDATA considers factors such as (but not limited to) the scope and responsibilities of the position, the candidate's work experience, education/training, key skills, internal peer equity, federal, state, and local laws, company financials, as well as external market and organizational considerations. ALLDATA values and is committed to diversity, equity, and inclusion.

This job posting was last updated on 12/4/2025
