Jobs via Dice

via Remote Jobs USA


BigData Engineer - W2 (H1B/OPT) Accepted

Anywhere
full-time
Posted 9/23/2025
Verified Source
Key Skills:
Hadoop
HDFS
YARN
MapReduce
Hive
Pig
HBase
Java
Scala
SQL
Linux
Cloud platforms (AWS, Azure, GCP)

Compensation

Salary Range: $120K - $160K a year

Responsibilities

Design and implement scalable Hadoop big data architectures, develop MapReduce jobs, manage Hadoop clusters, and collaborate with data teams.

Requirements

6+ years of experience with the Hadoop ecosystem, proficiency in Java or Scala, knowledge of Hadoop components, scripting with Pig, SQL querying, Linux familiarity, and cloud deployment experience.

Full Description

Dice is the leading career destination for tech experts at every stage of their careers. Our client, Xcelo Group Inc, is seeking the following. Apply via Dice today!

Job Description

BigData Engineer - W2 (H1B/OPT) Accepted
Location: Austin, TX (Remote)
Duration: Long Term

We are seeking a highly experienced Big Data Architect (Hadoop) to join our growing big data team. You will play a pivotal role in architecting, designing, and implementing robust and scalable big data solutions leveraging the Apache Hadoop ecosystem.

Responsibilities:
• Lead the design and implementation of enterprise-grade Hadoop architectures to meet evolving big data storage, processing, and analytics needs.
• Possess in-depth knowledge of core Hadoop components (HDFS, YARN, MapReduce, Hive, Pig, HBase) and their functionalities.
• Design and optimize data pipelines for efficient data ingestion, processing, and analysis.
• Develop and implement MapReduce jobs and Pig scripts for parallel data processing on large datasets.
• Manage and configure Hadoop clusters for optimal performance, scalability, and high availability.
• Integrate Hadoop with other big data tools and technologies (Spark, Kafka, etc.) to create a comprehensive data ecosystem (Preferred).
• Implement security best practices to ensure data privacy and access control within the Hadoop environment.
• Monitor and maintain the health of the Hadoop ecosystem, proactively identifying and resolving issues.
• Stay up-to-date with the latest advancements in big data technologies and best practices.
• Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and translate them into technical solutions.
• Document technical designs, architecture decisions, and deployment procedures for future reference.
• Mentor and guide junior Hadoop developers within the team (Preferred).

Qualifications:
• 6+ years of demonstrably successful experience as a Hadoop Developer, Big Data Architect, or a related role.
• Proven track record of designing and implementing scalable big data architectures using the Apache Hadoop ecosystem.
• In-depth knowledge of Hadoop components (HDFS, YARN, MapReduce, Hive, Pig, HBase) and their configurations.
• Proficiency in programming languages like Java or Scala for developing Hadoop applications.
• Experience with scripting languages like Pig for high-level data manipulation (Preferred).
• Experience with SQL for data querying and analysis (Preferred).
• Familiarity with Linux operating systems for working with Hadoop clusters.
• Experience with cloud platforms (AWS, Azure, Google Cloud Platform) for deploying Hadoop clusters (a plus).
• Excellent problem-solving, analytical, and critical thinking skills.
• Strong communication, collaboration, and leadership skills.

We look forward to hearing from you!

Regards,
Xcelo Group
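The MapReduce work the posting describes can be illustrated with a minimal local sketch of the map/shuffle/reduce pattern in plain Java. This is an assumption-laden illustration, not part of the role: the word-count task, class name, and data are hypothetical, and no Hadoop dependencies are used.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MapReduceSketch {
    // Map phase: split each input line into (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle + reduce phase: group the pairs by key and sum the values.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        // Hypothetical input; on a real cluster Hadoop distributes both phases,
        // while here everything runs in-process.
        List<String> input = List.of("big data big clusters", "data pipelines");
        Map<String, Integer> counts = reduce(input.stream()
                .flatMap(l -> map(l).stream())
                .collect(Collectors.toList()));
        System.out.println(counts.get("big")); // prints 2 ("big" appears twice)
    }
}
```

A real Hadoop job would instead implement `Mapper` and `Reducer` classes and submit them through a `Job` driver; the in-process version only shows the data flow between the phases.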

This job posting was last updated on 9/24/2025
