Beazer Homes

via iCIMS


Data Engineer Sr

Atlanta, Georgia
Full-time
Posted 12/5/2025
Direct Apply
Key Skills:
Data pipelines
ETL/ELT processes
SQL and Python programming
Cloud data platforms (AWS, Azure, GCP)
Big Data technologies (Spark, Hadoop, Kafka)

Compensation

Salary Range

Not specified

Responsibilities

Design, build, and optimize scalable data pipelines and architectures to support business analytics and operations.

Requirements

Requires 8+ years of experience in data engineering, with expertise in SQL, Python, ETL tools, cloud platforms, and big data technologies.

Full Description

Overview

Built on a solid, family foundation, we've been building homes across the United States for more than 25 years, but our history started well before that, in the 1600s, with an English builder named George Beazer. Nine generations later, the Beazer family and name continue to stand for quality homebuilding, craftsmanship, and innovation. Our focus is on individual communities. We strategically build each community to be near places that our customers care about, so that a home is more than a house.

The Sr. Data Engineer is responsible for designing, building, and maintaining data pipelines, ensuring data is collected, stored, and processed efficiently to support business operations, analytics, and decision-making. This role involves working with databases, cloud technologies, and ETL pipelines to support business intelligence, analytics, and reporting needs.

Primary Duties & Responsibilities

Build, maintain, and optimize ETL/ELT pipelines for ingesting, transforming, and processing data.
Ensure data pipelines are scalable, reliable, and efficient.
Design and implement data models, warehouses, and lakehouse architectures.
Optimize data storage solutions using relational (SQL) and NoSQL databases.
Work with big data technologies such as Spark, Hadoop, and Kafka for large-scale data processing.
Deploy and manage data solutions on cloud platforms (AWS, Azure, GCP).
Improve query performance and storage efficiency.
Implement indexing, partitioning, and caching strategies for faster data retrieval.
Perform other duties as assigned.

Education & Experience

Typically requires a Bachelor's degree with 8+ years of experience, a Master's degree with 6+ years of experience, a PhD with 3+ years of experience, or an equivalent combination of education and experience.

Skills & Abilities

Strong T-SQL and Python skills.
Expertise with ETL/ELT tools such as SSIS and Azure Data Factory.
Knowledge of both on-premises and cloud database management systems.
Excellent application of data modeling principles and best practices.
Knowledge of Big Data concepts such as Massively Parallel Processing and modern cloud data warehouse architecture.
Excellent communication skills, both verbal and written.
Exceptional analytical and problem-solving skills with keen attention to detail.

Physical Requirements

This position is primarily office-based, operating in a professional, climate-controlled environment. The majority of work is performed on a computer, requiring prolonged periods of sitting, typing, and viewing a screen. The work environment is generally quiet, with minimal exposure to noise, hazards, or extreme temperatures. This position requires the ability to maintain focus and productivity in a desk-based setting, with occasional movement throughout the office for meetings or collaborative tasks.

This job posting was last updated on 12/12/2025
