National University

via Teamtailor


DevOps Engineer, Data Platform

Anywhere
Full-time
Posted 12/5/2025
Direct Apply
Key Skills:
Microsoft Azure
CI/CD pipelines
GitHub Actions
Infrastructure as Code (Bicep, ARM Templates, Terraform)
Python scripting
PowerShell
Bash
Azure Synapse Analytics
Microsoft Fabric
SQL
Data warehousing
Containerization (Docker, Kubernetes)

Compensation

Salary Range

$90K - $120K a year

Responsibilities

Build and manage CI/CD pipelines, automate infrastructure provisioning on Azure, optimize data platforms, and collaborate with data teams to support analytics and ML model deployment.

Requirements

3-5 years of DevOps or related experience, with strong Azure cloud skills, CI/CD pipeline expertise, IaC proficiency, scripting ability, and knowledge of data platforms and security best practices.

Full Description

The DevOps Engineer is a critical hands-on role responsible for building, automating, and managing the university's modern data platform. They will partner directly with data scientists and engineers to create a scalable, reliable, and secure environment, enabling the rapid development and deployment of analytics and machine learning models. The DevOps Engineer will own our CI/CD processes, primarily using GitHub Actions, and will be responsible for establishing and scaling our Infrastructure as Code (IaC) practices for Microsoft Fabric and the wider Azure ecosystem. They will ensure our platform is robust, cost-efficient, and optimized to support the university's data-driven mission.

Essential Functions:

Designs, builds, and maintains our CI/CD pipelines using GitHub Actions for data engineering workloads, analytics, and machine learning model deployment.
Develops, deploys, and manages our Infrastructure as Code (IaC) to automate the provisioning and configuration of Azure and Microsoft Fabric resources (e.g., using Bicep, ARM Templates, or Terraform).
Administers, monitors, and optimizes our core data platforms, including Microsoft Fabric and Azure Synapse Analytics, ensuring high availability and performance.
Implements and manages comprehensive monitoring, logging, and alerting solutions to ensure platform health, security, and cost-efficiency.
Collaborates closely with data scientists and engineers to troubleshoot, optimize data pipelines, and support the deployment of ML models (using tools like MLflow).
Implements and enforces data governance and security best practices for identity, access, and data protection within the cloud environment.
Assists in managing and integrating with secondary cloud infrastructure on Google Cloud Platform (GCP) as needed.
Creates and maintains high-quality documentation for our platform architecture, automation, and operational procedures.
Performs other duties as assigned.
Supervisory Responsibilities: N/A

Requirements:

Education & Experience:

Bachelor's degree in Computer Science, Information Technology, Engineering, or an equivalent combination of education and experience; Master's degree preferred.
Three (3) to five (5) years of hands-on experience in a DevOps, SRE, or Data Engineering role with a strong focus on automation.
All skills, abilities, and education will be considered for minimum qualifications.

Competencies/Technical/Functional Skills:

Strong, demonstrable experience with the Microsoft Azure cloud platform.
Proven experience building and managing CI/CD pipelines, preferably with GitHub Actions.
Hands-on experience with Infrastructure as Code (IaC) tools (e.g., Bicep, ARM Templates, Terraform).
Proficiency in scripting languages (e.g., Python, PowerShell, Bash).
Experience with data platforms like Azure Synapse, Microsoft Fabric, or Databricks is highly desirable.
Working knowledge of SQL and data warehousing/data lake concepts.
Familiarity with Google Cloud Platform (GCP) is a plus.
Experience with containerization (Docker, Kubernetes) is a plus.
A strong "automation-first" and "infrastructure-as-code" mindset.
Excellent problem-solving and troubleshooting skills.
Strong written, oral communication, and interpersonal skills.
Strong project management, organizational, and prioritization skills.
Ability to work independently, manage competing priorities, and collaborate effectively with technical and non-technical stakeholders.
Embraces diverse people, thinking, and styles.

Location: Remote, USA
Travel: Up to 10% travel
#LI-Remote

This job posting was last updated on 12/8/2025
