
BlastPoint

1 open position available

1 location
1 employment type
Actively hiring
Full-time

Latest Positions

Showing 1 most recent job

Senior Data Engineer

BlastPoint · Anywhere · Full-time
Compensation: $140K – $160K a year

Design, develop, and maintain data pipelines and automation tools for client data processing and platform support. Requires a bachelor's degree in a related field with 3+ years of experience; proficiency in Python, PySpark, SQL, and AWS services; and experience with production ETL/ELT pipelines.

Senior Data Engineer
Salary Range: $140K – $160K
Location: Remote

About BlastPoint

BlastPoint is a B2B data analytics startup located in the East Liberty neighborhood of Pittsburgh. We give companies the power to engage with customers more effectively by discovering the humans in their data and understanding customer journeys. Serving diverse industries including energy, finance, retail, and transportation, BlastPoint’s Customer Intelligence Platform makes data accessible to business users so they can plan solutions to customer-facing challenges, from encouraging green behavior to managing customers’ financial stress. Founded in 2016 by Carnegie Mellon alumni, we are a tight-knit, forward-thinking team.

Why You Should Work for Us

Solve Challenging Problems: BlastPoint’s platform incorporates cutting-edge approaches to geospatial data, psychographic clustering, data enrichment, and a dynamic visualization environment, all at scale. We’re working to break new ground by pulling insights from high-dimensional data, and we’re pushing ourselves to try new and better ways to approach every step of our process.

Have An Impact: Small but mighty, BlastPoint’s growth is due to big companies increasingly trusting us to support key decisions using their most sensitive data. What we do positively impacts the lives of millions of Americans (and beyond).

Make Positive Change in the World: Our solutions reduce paper consumption, help struggling families pay their bills, and promote clean energy. We also offer our platform for free to nonprofits and civic-oriented organizations.

Employee-Focused Culture: We support the individual needs of our team, offering schedule and work-from-home flexibility, health insurance, a 401(k), and three weeks of PTO. We also tailor growth opportunities, from skills training to industry conferences.

Equal Opportunity Employer: BlastPoint is committed to creating an inclusive and diverse workplace, ensuring equal employment opportunities for individuals regardless of race, color, religion, sex, national origin, age, disability, or genetics.

Our Values

Everybody matters
We beat expectations
Innovation built on a foundation
Cards on the table, always

“The smartest systems from the most comprehensive data, built by the best people”

About the Role

We are seeking a talented Senior Data Engineer to own and evolve our data processing pipeline. You'll work across a production-scale medallion architecture that ingests, transforms, and delivers customer data through a multi-stage pipeline serving clients in the utility and financial services industries. This role sits at the center of our data infrastructure, building the pipelines and tooling that power everything from daily data refreshes to ML feature engineering to platform delivery.
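As a hedged illustration of how a scheduled daily refresh could be kicked off in a stack like the one described here (AWS Batch, Lambda, Step Functions, and EventBridge are named in the responsibilities below), the following is a minimal sketch of a Lambda handler submitting a Batch job. The job queue, job definition, and other identifiers are hypothetical, not BlastPoint's actual configuration.

    # Hypothetical sketch: an EventBridge-scheduled Lambda that launches a
    # refresh job on AWS Batch. All identifiers below are illustrative only.
    import boto3

    batch = boto3.client("batch")

    def handler(event, context):
        """Submit the nightly refresh for one client when the schedule fires."""
        client_id = event.get("client_id", "example-client")
        response = batch.submit_job(
            jobName=f"refresh-{client_id}",
            jobQueue="example-refresh-queue",        # hypothetical queue
            jobDefinition="example-refresh-job:1",   # hypothetical job definition
            parameters={"client_id": client_id},
        )
        return {"jobId": response["jobId"]}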
Primary Responsibilities

Pipeline Development & Operations (Primary)
- Design, develop, and maintain our core Python ETL framework by writing reusable, well-tested modules that power data transformations across client pipelines.
- Develop and optimize our automated refresh pipeline, orchestrated through AWS Batch, Lambda, Step Functions, and EventBridge.
- Build Python integrations with external systems (SFTP, third-party APIs, client platforms) that are robust, testable, and reusable.
- Identify and eliminate manual bottlenecks in data onboarding and analysis through well-designed automation.
- Build and extend internal web applications (FastAPI, SQLAlchemy, PostgreSQL) that support pipeline orchestration, client configuration, and data platform operations.
- Ensure data integrity and security throughout project lifecycles.

Client Data Support (Secondary)
- Write efficient server-side Python code, leveraging the Pandas and PySpark DataFrame APIs for scalable data transformations and aggregations (see the illustrative sketch after this responsibilities list).
- Optimize Spark jobs for cost and performance at scale.
- Debug complex data quality issues across client pipelines.
- Mentor junior engineers on data transformation patterns, aggregation frameworks, and best practices.

Internal Tooling (Tertiary)
- Contribute to our internal metadata management application (FastAPI backend, React/TypeScript frontend); an illustrative FastAPI sketch appears at the end of this posting.
- Build API endpoints, write database migrations, and occasionally develop frontend features.
- Maintain the metadata layer that drives pipeline configuration and data governance.
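The Client Data Support work above centers on DataFrame-API transformations and aggregations. The following is a minimal, hypothetical PySpark sketch of that kind of work in a medallion-style layout; the bucket, table, and column names are invented for illustration and are not BlastPoint's actual schema.

    # Hypothetical example only; paths and columns are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("monthly-usage-rollup").getOrCreate()

    # Read a cleaned ("silver") table from the data lake as Parquet.
    usage = spark.read.parquet("s3://example-bucket/silver/customer_usage/")

    # DataFrame-API transformation and aggregation: roll daily meter
    # readings up to one row per customer per month.
    monthly = (
        usage
        .withColumn("month", F.date_trunc("month", F.col("usage_ts")))
        .groupBy("customer_id", "month")
        .agg(
            F.sum("kwh").alias("total_kwh"),
            F.countDistinct("meter_id").alias("active_meters"),
        )
    )

    # Write the aggregated ("gold") table back, partitioned for efficient downstream reads.
    monthly.write.mode("overwrite").partitionBy("month").parquet(
        "s3://example-bucket/gold/monthly_usage/"
    )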
What We're Looking For

Required
- Bachelor's degree in a related field (such as Data Engineering, Computer Science, Data Science, Math, or Statistics) with 3+ years of experience, or 5+ years of relevant experience.
- Experience designing and maintaining production ETL/ELT pipelines with proper error handling, idempotency, and monitoring.
- Advanced proficiency in Python, with deep experience in Pandas and PySpark (DataFrame API, SQL, performance tuning, distributed joins).
- Strong SQL skills with PostgreSQL, including query optimization, indexing strategies, and schema design.
- Hands-on experience with AWS services including, but not limited to, S3, Lambda, Batch, SageMaker, and Step Functions.
- Experience with PyArrow, columnar data formats (Parquet), and data lake patterns.
- Strong problem-solving skills with the ability to work autonomously, make architectural decisions, and manage multiple concurrent projects.
- Excellent communication skills with the ability to drive cross-functional collaboration, proactively engaging stakeholders to align on requirements and solutions.
- Experience using Git for version control and repository management.
- Authorized to work in the United States.

Preferred
- Experience with Infrastructure as Code (Terraform).
- Experience implementing observability solutions (monitoring, logging, alerting) for production data pipelines.
- Experience developing REST APIs with FastAPI, SQLAlchemy, and Alembic (or equivalent web frameworks and ORMs).
- Understanding of MLOps.
- Experience building and deploying LLM-powered agents.
- Experience with Apache Iceberg or similar data lakehouse technologies.
- Experience with geospatial data processing (geocoding, spatial joins).
- Familiarity with React/TypeScript for contributing to internal tooling.
- Understanding of CI/CD (GitHub Actions).
- Experience mentoring junior engineers.
- Willingness to travel domestically for company events (roughly 2–4 times per year).

Why Join Us
- Work on a platform that directly impacts how major utilities and financial institutions understand and serve their customers.
- Own critical infrastructure end-to-end, from data ingestion to platform delivery.
- Collaborate with data scientists on ML-powered customer intelligence products.
- Influence architecture decisions on a growing, production-scale data platform.
- Mentor and grow junior data engineers.
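For the internal tooling side of the role (and the preferred FastAPI/SQLAlchemy/Alembic experience), here is a minimal, hypothetical sketch of a metadata-registration endpoint. It uses an in-memory dictionary in place of the PostgreSQL/SQLAlchemy layer, and all model and route names are illustrative rather than BlastPoint's actual API.

    # Hypothetical sketch of an internal metadata endpoint; names are illustrative.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="pipeline-metadata-api")  # hypothetical service name

    class DatasetMeta(BaseModel):
        name: str
        layer: str          # e.g. "bronze", "silver", or "gold" in a medallion layout
        refresh_cron: str   # schedule expression consumed by the refresh pipeline

    # In-memory registry standing in for a PostgreSQL table accessed via SQLAlchemy.
    _registry: dict[str, DatasetMeta] = {}

    @app.post("/datasets", response_model=DatasetMeta)
    def register_dataset(meta: DatasetMeta) -> DatasetMeta:
        """Register or update the metadata that drives pipeline configuration."""
        _registry[meta.name] = meta
        return meta

    @app.get("/datasets/{name}", response_model=DatasetMeta)
    def get_dataset(name: str) -> DatasetMeta:
        """Look up a dataset's metadata by name."""
        if name not in _registry:
            raise HTTPException(status_code=404, detail="dataset not found")
        return _registry[name]

In practice an app like this would be served by an ASGI server such as uvicorn and backed by SQLAlchemy models with Alembic migrations, in line with the preferred qualifications above.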

Python
PySpark
SQL
Direct Apply
Posted 8 days ago
