Bespoke Technologies, Inc

via ZipRecruiter


Innovation and Automation Specialist

Chantilly, VA
Full-time
Posted 1/29/2026
Key Skills:
Python
SQL
Data pipeline development

Compensation

Salary Range

$120K - $200K a year

Responsibilities

Design, develop, and implement automated data engineering frameworks and pipelines, and mentor other engineers.

Requirements

Requires 5+ years of data engineering experience, expert-level Python and SQL skills, cloud platform experience, and IaC knowledge.

Full Description

BT-162 - Innovation and Automation Specialist
Skill Level: Mid
Location: Chantilly/Herndon

MUST HAVE AN ACTIVE TS OR TS/SCI CLEARANCE TO APPLY. Those without an active security clearance will not be considered.

Role Description:

As a Data Engineer Specialist on the Innovation and Automation team, you will serve as a subject matter expert, blending deep data engineering expertise with a passion for automation. You will not build individual data pipelines for business users; instead, you will build the factory that produces them. Your mission is to design, develop, and implement the reusable frameworks, automated patterns, and core tooling that our data engineering teams will use to build their own pipelines faster, more reliably, and more consistently. This is a highly technical, hands-on role for a problem-solver who wants to act as a force multiplier for the entire data organization.

Responsibilities:

• Act as a technical expert on the design and implementation of automated data engineering solutions.
• Develop and maintain a library of standardized, reusable ETL/ELT pipeline templates using Python, SQL, and frameworks like Databricks or Snowflake.
• Engineer and implement robust, automated data quality and testing frameworks (e.g., using tools like Great Expectations) that are embedded within the core pipeline templates.
• Contribute to the development of Infrastructure-as-Code (IaC) modules (Terraform) for the automated provisioning of data infrastructure.
• Enhance and optimize the CI/CD for Data (DataOps) pipelines, ensuring seamless and reliable deployment of data workflows.
• Serve as an escalation point for the most complex data engineering and automation challenges, providing expert-level troubleshooting and guidance to other engineers.
• Mentor other data engineers on automation best practices, code standards, and the use of the frameworks you build.
• Research and prototype cutting-edge data engineering and automation technologies to drive continuous improvement.

Required Qualifications:

• 5+ years of hands-on experience in data engineering.
• Expert-level programming skills in Python and advanced SQL.
• Proven, in-depth experience building and optimizing data pipelines in a cloud environment (AWS, Azure) on platforms like Databricks or Snowflake.
• Strong, hands-on experience with Infrastructure-as-Code (IaC) using Terraform.
• Demonstrable experience with CI/CD principles and tools (e.g., GitLab CI, Jenkins, GitHub Actions) applied to data workflows.
• Deep understanding of modern data architecture, data modeling, and software engineering best practices.

Preferred Qualifications:

• Experience in a DevOps or Site Reliability Engineering (SRE) role.
• Direct experience developing and operationalizing a "pipeline factory" or similar framework.
• Familiarity with data orchestration tools (e.g., Airflow) and containerization (Docker, Kubernetes).
• Proven ability to diagnose and resolve complex performance, data quality, and system-level issues.
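As an illustration of the "pipeline factory" pattern this role describes — a reusable pipeline template with an embedded data quality gate — a minimal, hypothetical Python sketch might look like the following. All names here (PipelineTemplate, no_null_ids) are illustrative, not part of any actual Bespoke Technologies framework.

```python
# Hypothetical sketch of a reusable pipeline template with an embedded
# data quality check; structure and names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Iterable

Row = dict
Check = Callable[[Iterable[Row]], bool]

@dataclass
class PipelineTemplate:
    """A reusable extract -> validate -> transform -> load skeleton."""
    extract: Callable[[], list]
    transform: Callable[[list], list]
    load: Callable[[list], None]
    checks: list = field(default_factory=list)

    def run(self) -> list:
        rows = self.extract()
        # Embedded quality gate: every registered check must pass
        # before the transform/load stages run.
        for check in self.checks:
            if not check(rows):
                raise ValueError(f"data quality check failed: {check.__name__}")
        rows = self.transform(rows)
        self.load(rows)
        return rows

def no_null_ids(rows):
    return all(r.get("id") is not None for r in rows)

sink = []
pipeline = PipelineTemplate(
    extract=lambda: [{"id": 1}, {"id": 2}],
    transform=lambda rows: [dict(r, doubled=r["id"] * 2) for r in rows],
    load=sink.extend,
    checks=[no_null_ids],
)
pipeline.run()
print(sink)  # [{'id': 1, 'doubled': 2}, {'id': 2, 'doubled': 4}]
```

In a production framework, the quality gate would typically delegate to a tool like Great Expectations rather than hand-written predicates.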

This job posting was last updated on 2/3/2026
