
NTT DATA Romania SA

via Successfactors


Data Engineer (AWS)

Anywhere
Full-time
Posted 11/26/2025
Direct Apply
Key Skills:
AWS Glue
AWS Lambda
AWS Step Functions
Amazon S3
Apache Iceberg
Parquet
PySpark
Terraform
CDK for Terraform
CI/CD pipelines
Containerization
AWS API Gateway
Data Governance
Agile Scrum

Compensation

Salary Range

$120K – $160K a year

Responsibilities

Design, develop, and manage cloud-based data engineering solutions and ETL pipelines using AWS technologies to support analytics and business intelligence.

Requirements

3-5 years of data engineering experience with strong AWS skills, Python and PySpark proficiency, infrastructure as code experience, and knowledge of data lakes, ETL, and distributed processing.

Full Description

Who We Are

At the heart of our outsourcing organization, the Data & Intelligence Competence Center serves as a dedicated hub for advanced data-driven solutions. We specialize in data engineering, analytics, and AI-powered insights, helping businesses turn raw information into actionable intelligence. By combining deep technical expertise with industry best practices, we enable smarter decision-making, optimize processes, and foster innovation across diverse sectors.

Building on this foundation, we collaborate with a world-leading reinsurance and risk management company. Our client delivers comprehensive solutions in insurance, underwriting, and data-driven risk assessment. With a strong commitment to innovation and long-term stability, they empower organizations to navigate complex risks and create sustainable value in an ever-changing global landscape.

To support this mission, we are seeking a highly skilled AWS Data Engineer to strengthen our data and analytics ecosystem. In this role, you will design, develop, and manage cloud-based data solutions, leveraging big data frameworks to build efficient pipelines, optimize storage, and implement robust processing workflows, ensuring high-quality data availability for analytics and business intelligence.

What you'll be doing

Build and maintain large-scale ETL pipelines using AWS Glue, Lambda, and Step Functions
Design and manage data lakes on Amazon S3, implementing robust schema management and lifecycle policies
Work with Apache Iceberg and Parquet formats to support efficient and scalable data storage
Develop distributed data processing workflows using PySpark
Implement secure, governed data environments using AWS Lake Formation
Build and maintain integrations using AWS API Gateway and data exchange APIs
Automate infrastructure provisioning using Terraform or CDK for Terraform (CDKTF)
Develop CI/CD pipelines and containerized solutions within modern DevOps practices
Implement logging, observability, and monitoring solutions to maintain reliable data workflows
Perform root cause analysis and optimize data processing for improved performance and quality
Collaborate with business intelligence teams and analysts to support reporting and analytics needs
Work in cross-functional, Agile teams and actively participate in sprint ceremonies, backlog refinement, and planning
Provide data-driven insights and recommendations that support business decision-making

What you'll bring along

Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience)
Minimum 3–5 years of experience in a Data Engineering role
Strong knowledge of AWS services: Glue, Lambda, S3, Athena, Lake Formation, Step Functions, DynamoDB
Proficiency in Python and PySpark for data processing, optimization, and automation
Hands-on experience with Terraform or CDKTF for Infrastructure as Code
Solid understanding of ETL development, data lakes, schema evolution, and distributed processing
Experience working with Apache Iceberg and Parquet formats (highly valued)
Experience with CI/CD pipelines, automation, and containerization
Familiarity with API Gateway and modern integration patterns
Strong analytical and problem-solving skills
Experience working in Agile Scrum environments
Good understanding of data governance, security, and access control principles
Experience with visualization/BI tools such as Power BI or AWS QuickSight is a plus
Excellent command of both spoken and written English
Nice to Have

Experience designing data products, implementing tag-based access control, or applying federated governance using AWS Lake Formation
Familiarity with Amazon SageMaker for AI/ML workflows
Hands-on experience with AWS QuickSight for building analytics dashboards
Exposure to data mesh architectures
Experience with container orchestration (e.g., Kubernetes, ECS, EKS)
Knowledge of modern data architecture patterns (e.g., CDC, event-driven pipelines, near-real-time ingestion)

This job posting was last updated on 12/3/2025
