Venon Solutions

via Himalayas.app

#925 - Data Engineer (Python)

Anywhere
Full-time
Posted 1/6/2026
Verified Source
Key Skills:
Python
SQL
Data Pipelines
Data Warehousing
ETL/ELT

Compensation

Salary Range

$100K - $150K a year

Responsibilities

Design, build, and maintain scalable data pipelines and support ML workflows.

Requirements

5+ years of experience in data engineering; proficiency in Python, SQL, data warehouses, and cloud platforms; and a basic understanding of ML concepts.

Full Description

Job opportunity available for professionals in LATAM, preferably Brazil. We're looking for a Data Engineer (Python) with hands-on experience building and maintaining scalable data pipelines, and exposure to Machine Learning / AI workflows. In this role, you'll work closely with data scientists, ML engineers, and product teams to ensure reliable, high-quality data powers analytics and AI-driven products. This position is ideal for a strong data engineer who enjoys working with large datasets and wants to deepen their involvement in ML and AI systems.

Requirements:
• Advanced English level for fluent communication.
• 5+ years of experience in data engineering, with strong proficiency in Python.
• Experience building data pipelines using tools such as Airflow, Prefect, Luigi, or similar.
• Solid understanding of SQL and relational databases.
• Experience working with data warehouses (e.g., BigQuery, Snowflake, Redshift).
• Familiarity with cloud platforms (AWS, GCP, or Azure).
• Experience handling large datasets and optimizing data workflows.
• Basic understanding of machine learning concepts (training, inference, features, evaluation).
• Ability to work collaboratively in cross-functional teams.

Nice to Have:
• Experience supporting ML pipelines or MLOps workflows.
• Familiarity with libraries such as Pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow.
• Experience with feature stores or model data versioning.
• Knowledge of streaming technologies (Kafka, Pub/Sub, Kinesis).
• Exposure to LLMs, NLP, or AI-driven applications.
• Experience with containerization and orchestration (Docker, Kubernetes).

Responsibilities:
• Design, build, and maintain scalable data pipelines using Python.
• Develop and optimize ETL/ELT processes for structured and unstructured data.
• Manage data ingestion from APIs, databases, and streaming sources.
• Collaborate with data scientists to support machine learning model training, evaluation, and deployment.
• Ensure data quality, reliability, and performance across data platforms.
• Implement monitoring, logging, and data validation for pipelines.
• Work with cloud-based data infrastructure and storage solutions.
• Document data flows, schemas, and pipeline logic.

What do we offer?
• 100% remote work.
• Competitive salary in USD.
• Type of contract: Independent Contractor with Venon Solutions LLC.
• Contract duration: long-term.
• 2 weeks of PTO (paid time off).
• Holidays: from the Client's calendar (USA).
• Working hours: full-time, EST timezone, fully committed.

This job posting was last updated on 1/8/2026
