
DigiCert

via Greenhouse


Sr. Data Engineer

Anywhere
Full-time
Posted 11/26/2025
Direct Apply
Key Skills:
Data engineering
Stream processing (Kafka, Flink, Spark)
Python/Go/Scala programming
ETL/ELT design
Cloud data services (AWS Athena, Glue, S3)
SQL optimization
Data warehouse technologies (Snowflake, BigQuery, Redshift)
Data pipeline development
Data security and compliance

Compensation

Salary Range

$120K - $200K a year

Responsibilities

Design and optimize large-scale real-time data ingestion, transformation, and analytics pipelines for DNS data in cloud environments.

Requirements

7+ years in data engineering with strong programming, stream processing, cloud data infrastructure, and data warehouse expertise.

Full Description

Who we are

We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things, and even the little things like surgically embedded pacemakers. We help companies put trust, an abstract idea, to work. That's digital trust for the real world.

Job summary

We're revitalizing our engineering culture and embracing modern software design and delivery practices. The UltraDNS Data Services team is seeking a Senior Data Engineer to design and scale the next generation of data infrastructure. You'll help us build and scale real-time analytics and cloud-native data processing systems that handle hundreds of billions of DNS transactions daily. Your work will directly enhance DigiCert's ability to deliver insights, reliability, and performance to customers across the globe.

What you will do

Design, build, and optimize large-scale data ingestion, transformation, and streaming pipelines for DNS exhaust data collected from our global edge infrastructure.
Implement real-time and near real-time analytics pipelines using technologies such as Kafka, Flink, Spark Streaming, or Kinesis.
Design, develop, and maintain robust data models and warehouse structures (e.g., Snowflake, BigQuery, Redshift, ClickHouse) to support high-throughput analytical workloads.
Build tools and frameworks for data quality, validation, and observability across distributed, cloud-based systems.
Optimize data storage, retention, and partitioning strategies to balance cost and query performance.
Work with data visualization and analytics tools such as Grafana, Tableau, or similar platforms to surface operational metrics and data insights.
Collaborate with software and platform engineering teams to integrate real-time data into their services and customer-facing analytics.
Ensure data security and compliance in multi-tenant, high-volume cloud environments.

What you will have

Four-year degree in IT, Computer Science, or a related field, or equivalent professional experience.
7+ years of experience in data engineering or large-scale data infrastructure development.
Strong engineering background with experience building and operating distributed data systems capable of processing millions of transactions per second.
Proven experience with stream processing frameworks such as Kafka Streams, Flink, Spark Structured Streaming, or equivalent technologies.
Strong proficiency in languages supporting data pipeline development, such as Python, Go, or Scala.
Deep understanding of ETL/ELT design patterns and data warehouse technologies (e.g., Snowflake, BigQuery, Redshift, ClickHouse, Databricks).
Advanced SQL skills, including query optimization, schema design, and partitioning strategies for large-scale analytical datasets.
Extensive hands-on experience with cloud-based data services such as Athena, Glue, S3, Kinesis, and Lambda, as well as data lakehouse technologies like Apache Iceberg, Parquet, and Delta Lake.

Nice to have

Understanding of DNS data concepts and familiarity with DNS traffic, telemetry, or network-level observability data.
Familiarity with CI/CD best practices and infrastructure-as-code tools such as Terraform and CloudFormation.
Hands-on experience with containerization and orchestration (e.g., Docker, Kubernetes).
Familiarity with machine learning concepts and the integration of data pipelines that support future ML or inference workflows.

Benefits

Generous time off policies
Top-shelf benefits
Education, wellness, and lifestyle support

This job posting was last updated on 12/3/2025
