
Hex Technologies

via Greenhouse


Infra Engineer, Datadog Whisperer

Anywhere
full-time
Posted 10/13/2025
Direct Apply
Key Skills:
Infrastructure Engineering
DevOps
Site Reliability Engineering
Datadog
AWS
Kubernetes
Programming
Automation
Cost Reduction
Monitoring
Observability
OpenTelemetry
Cloud Cost Management
Documentation
Workshop Facilitation
Distributed Tracing

Compensation

Salary Range

Not specified

Responsibilities

You will rein in high-cardinality custom metrics and audit log ingestion for every service. Additionally, you will analyze APM usage and enforce cost-saving policies across the engineering organization.
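Much of the log-side work described here happens at the agent. As a hedged illustration, here is what a Datadog Agent log collection config with exclusion filters can look like; the field names follow Datadog's documented `log_processing_rules` syntax, but the service, paths, and patterns are invented for this example:

```yaml
# conf.d/myapp.d/conf.yaml (illustrative; service/paths/patterns are made up)
logs:
  - type: file
    path: /var/log/myapp/app.log
    service: myapp
    source: nodejs
    log_processing_rules:
      # Drop noisy health-check lines before they are ever shipped
      - type: exclude_at_match
        name: drop_health_checks
        pattern: "GET /healthz"
      # Keep debug logging out of paid ingestion entirely
      - type: exclude_at_match
        name: drop_debug_lines
        pattern: "level=debug"
```

Filtering at the source like this means excluded lines never count toward ingestion, which is cheaper than filtering after the fact.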

Requirements

The role requires 3+ years of experience in Infrastructure, DevOps, or Site Reliability Engineering, with expert knowledge of Datadog's pricing model. Proficiency in AWS and Kubernetes, along with strong programming skills for infrastructure automation, is essential.
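The "expert knowledge of Datadog's pricing model" requirement largely comes down to cardinality: each unique combination of a metric name and its tag values is billed as a distinct custom metric. A toy Python sketch of why that matters (the per-metric price is a placeholder, not Datadog's actual rate):

```python
# Toy model of Datadog custom-metric billing: every unique combination of
# tag values on a metric counts as a separate billable custom metric.
ASSUMED_PRICE_PER_METRIC = 0.05  # USD/month, placeholder rate for illustration

def distinct_series(tag_cardinality: dict[str, int]) -> int:
    """Number of billable series for one metric, given values per tag key."""
    series = 1
    for n in tag_cardinality.values():
        series *= n
    return series

def estimated_monthly_cost(tag_cardinality: dict[str, int]) -> float:
    return distinct_series(tag_cardinality) * ASSUMED_PRICE_PER_METRIC

# Tagging one metric by endpoint (50 values) and user_id (10,000 values)
# yields 500,000 billable series: exactly the kind of metric this role hunts.
```

Dropping the `user_id` tag collapses those 500,000 series back to 50, which is what the posting means by replacing metrics with "sane, aggregated alternatives."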

Full Description

About the role

We build products people genuinely love. Our features are impactful, our business is growing, and … it’s pretty great! We pride ourselves on many things, but let’s be honest: we have been operating on “ship it first, check the Datadog bill later.” And later has arrived. Our Datadog bill is an ever-growing line item that threatens to consume what remains of our cloud spend budget. This is a critical juncture, where our monitoring costs are starting to overshadow the systems they're meant to monitor.

We need a hero. A detective. A mercenary with a deep-seated love for logs, metrics, and most importantly, savings. You are not just an Infra Engineer; you are an economic covert ops specialist. Your sole, glorious mission is to make our Datadog spend dramatically and sustainably go down. We're talking down down. The bill should look like it's been body-slammed by a professional wrestler.

You will be embedded within the Infrastructure team, with the autonomy to look across every team and service to streamline and purge that which needs streamlining and purging. As you rack up wins, you'll increasingly become the person we introduce at company meetings as "the reason we could spend $$ on that nice company offsite.”

What you will do

Mitigation of myriad metrics: Hunt down and decommission all high-cardinality custom metrics that no one actually uses, replacing them with sane, aggregated alternatives, or build a system that insulates us from this risk area entirely.
Liberation from legions of logs: Audit the log ingestion for every service. You'll work with engineering teams to tune logging levels, apply intelligent sampling and exclusion filters at the source (i.e., the agent), and implement better categorization and archiving strategies.
Analysis of Performance Monitoring (APM): Analyze our APM and trace ingestion and ensure it’s smartly used. You'll champion distributed tracing strategies that are both informative and economical.
Standardization: Use automation to enforce cost-saving policies across our entire fleet, ensuring developers can't accidentally check in a new, expensive monitoring configuration.
Evangelization: Be the champion for cost-aware engineering. Create internal documentation, run "Datadog Dojo" workshops, and embed the mindset of "monitor what matters" across the entire engineering organization.

About you

3+ years as an Infrastructure, DevOps, or Site Reliability Engineer.
Expert-level, obsessive knowledge of Datadog's pricing model and platform architecture. You know how to read the usage report better than you know your own credit card statement.
Deep proficiency with AWS and Kubernetes.
Strong programming skills for infrastructure automation.
The courage to tell a founder or principal engineer that their favorite metric is financially irresponsible.
Bonus: Experience with other monitoring/observability tools (Prometheus, Grafana, Honeycomb, Splunk) and a view on whether we should be using any of them to displace some Datadog functionality. Experience implementing OpenTelemetry standards and agents for cost-effective vendor neutrality. A proven track record of actually reducing cloud costs, not just talking about it.

Our stack

Our product is a web-based notebook and app authoring platform. Our frontend is built with TypeScript and React, using a combination of Apollo GraphQL and Redux for managing application state and data. On the backend, we also use TypeScript to power an Express/Apollo GraphQL server that interacts with Postgres, Redis, and Kubernetes to manage our database and Python kernels. Our backend is tightly integrated with our infrastructure and CI/CD, where we use a combination of Terraform, Helm, and AWS to deploy and maintain our stack.

In addition to our unique culture, Hex proudly offers a competitive total rewards package, including but not limited to market-benched salary & equity, comprehensive health benefits, and flexible paid time off.
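The "Standardization" work, blocking expensive monitoring configs before they merge, could start as small as a lint step in CI. A minimal sketch, assuming a simple JSON-like config shape; the denylist and config format are invented for illustration and are not Hex's actual tooling:

```python
# Hypothetical CI lint step: flag any metric config that introduces an
# unbounded-cardinality tag. The tag denylist and the config shape below
# are assumptions for this sketch, not a real internal standard.
HIGH_CARDINALITY_TAGS = {"user_id", "request_id", "session_id", "trace_id"}

def audit_metric_tags(metric_configs: list[dict]) -> list[str]:
    """Return one violation message per banned tag found on any metric."""
    violations = []
    for cfg in metric_configs:
        banned = HIGH_CARDINALITY_TAGS & set(cfg.get("tags", []))
        for tag in sorted(banned):
            violations.append(f"{cfg['name']}: high-cardinality tag '{tag}'")
    return violations
```

Run against configs in a pull request, a non-empty result would fail the build, so an expensive monitoring change gets caught in review rather than on next month's bill.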
The salary range for this role is: Variable, depends on how much $$ you save.

The salary range shown may reflect additional factors such as geographical location and the skill ranges/levels we’re open to. Placement in the salary range will be decided upon completion of the interview process, taking into account factors like leaving room for growth, internal fairness & parity, your demonstrated skills, and the depth of your experience. Our Recruiting team will be able to provide more details during the interview process.

By submitting an application, the candidate consents to the use of their personal information in accordance with the Hex Privacy Policy: https://learn.hex.tech/docs/trust/privacy-policy.

This job posting was last updated on 10/14/2025
