Tessera Labs


via Ashby


Data Engineer (Enterprise AI & ERP Modernization)

Anywhere
Full-time
Posted 12/15/2025
Direct Apply
Key Skills:
SQL
Python
ETL pipelines
Relational data modeling
Enterprise system integration

Compensation

Salary Range

$120K - $200K a year

Responsibilities

Design and implement data pipelines for enterprise systems to enable AI-driven ERP modernization.

Requirements

Strong SQL, Python, experience with enterprise systems like SAP or Salesforce, and building scalable ETL pipelines.

Full Description

About Tessera Labs

Tessera Labs is redefining how enterprises adopt and operationalize Artificial Intelligence. Backed by Foundation Capital and led by a world-class founding team, we build multi-agent AI systems that automate complex business workflows across platforms like SAP, Salesforce, Workday, Snowflake, MuleSoft, and more. Our mission: bring real AI automation to the enterprise, with speed, precision, and measurable impact. We move fast, operate with extreme ownership, and build at the frontier of applied AI.

Why This Role Matters

Enable Forward Deployment Engineers (FDEs) to deliver AI-driven ERP modernization rapidly and safely. Directly impact migration acceleration, operational continuity, and data-driven decision-making. Shape the foundation for enterprise-scale AI and analytics solutions across complex landscapes. Work at the cutting edge of enterprise AI, ERP transformation, and multi-agent automation, where your data engineering expertise accelerates business outcomes.

Role Summary

As a Data Engineer, you will work closely with FDEs to enable rapid ERP modernization and AI-driven transformation for enterprise clients. The focus of this role is data harmonization, cross-system integration, and pipeline development, ensuring that AI solutions and enterprise workflows are powered by clean, reliable, and well-structured data.

The role emphasizes ETL, relational schema modeling and mapping, joins, data cleaning, and pipeline logic for structured/tabular data. It includes a lightweight upstream MLOps component limited to structured datasets, which may involve distributed processing using PySpark or ML data engineering techniques. There are no downstream responsibilities related to model training, serving, or deployment.

This position requires deeper ERP-centric data understanding than a typical ML data engineering role, while still demanding strong generalist engineering skills to build scalable, production-grade pipelines. Candidates with both SAP data expertise and modern data engineering or ML-enablement experience are ideal; strength in one area with the ability to learn the other is acceptable.

Key Responsibilities

Data Harmonization: Integrate, reconcile, and standardize structured data across ERP, CRM, finance, and analytics systems.
Cross-System Pipeline Architecture: Design and implement ETL/ELT pipelines that unify data across enterprise systems for AI-driven use cases.
Data Transformation & Validation: Build logic to clean, transform, validate, and prepare structured/tabular datasets for operational and analytical workflows.
Schema Interpretation: Analyze complex enterprise schemas, including poorly documented or evolving structures, and document entity relationships across systems.
Pipeline Reliability: Monitor, troubleshoot, and optimize data pipelines to ensure consistent, high-quality delivery at scale.
AI Enablement: Prepare structured datasets for multi-agent AI platforms, orchestration engines, and decisioning systems, applying lightweight upstream MLOps practices where appropriate.
Cross-Functional Collaboration: Work directly with FDEs, architects, and client teams to solve complex enterprise modernization challenges.
Problem Solving Under Ambiguity: Decompose unclear requirements and rapidly evolving constraints into clear, actionable technical solutions.

Required Skills & Experience

Strong SQL skills, including complex joins and queries across multi-schema relational environments.
Proficiency in Python or a comparable language for data processing, automation, and pipeline logic.
Solid foundations in relational data modeling, schema mapping, and normalized/denormalized design.
Experience working with enterprise systems such as SAP S/4HANA, Salesforce, finance systems, or cloud data warehouses.
Hands-on experience building and maintaining ETL pipelines for structured/tabular data.
Familiarity with distributed data processing (e.g., PySpark) and upstream MLOps concepts applied to structured datasets is a plus.
Ability to operate effectively in fast-moving, ambiguous environments.
Experience supporting analytics, ML pipelines, or AI workflows is preferred but not required.
Demonstrated ability to navigate messy, fragmented enterprise data landscapes with inconsistent schemas and cross-system duplication.

Behavioral & Problem-Solving Expectations

Comfortable working in a startup environment with high ownership and rapid iteration.
Able to think like an engineer while navigating organizational and stakeholder dynamics.
Communicates clearly and concisely, adjusting depth and detail to the audience.
Operates effectively with incomplete information and adapts quickly to change.
Uses AI-assisted tools thoughtfully to accelerate engineering productivity and solution delivery.
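For candidates wondering what the data harmonization work above looks like in practice, here is a minimal, illustrative sketch in Python with pandas. It is not part of the posting, and all table and column names (an SAP-style customer master joined to a CRM account extract) are hypothetical:

```python
import pandas as pd

# Hypothetical extracts: an ERP customer master and a CRM account list.
# Table and column names are illustrative, not taken from the posting.
erp = pd.DataFrame({
    "KUNNR": ["0000001001", "0000001002"],  # SAP-style zero-padded customer number
    "NAME1": ["Acme Corp ", "globex inc"],
    "LAND1": ["US", "DE"],
})
crm = pd.DataFrame({
    "AccountId": ["1001", "1002"],
    "AccountName": ["Acme Corp", "Globex Inc"],
    "AnnualRevenue": [1_200_000, 850_000],
})

# Standardize keys: strip the ERP system's leading zeros so both
# systems share one join key.
erp["customer_id"] = erp["KUNNR"].str.lstrip("0")
crm["customer_id"] = crm["AccountId"]

# Clean: trim whitespace and normalize casing before joining.
erp["name_clean"] = erp["NAME1"].str.strip().str.title()

# Harmonize: one joined customer view, with pandas checking key uniqueness.
unified = erp.merge(crm, on="customer_id", how="inner", validate="1:1")

# Validate: every ERP record should have found a CRM match.
assert len(unified) == len(erp), "unmatched ERP customers"
print(unified[["customer_id", "name_clean", "LAND1", "AnnualRevenue"]])
```

The same pattern (key normalization, cleaning, validated joins) scales to PySpark when the datasets outgrow a single machine.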

This job posting was last updated on 12/17/2025
