
CoreAi Consulting

via Snagajob


AI & Cloud Engineer - Remote

Tucson, AZ
Full-time, Part-time
Posted 12/5/2025
Verified Source
Key Skills:
Python (FastAPI, Flask, Async frameworks)
AWS services (Lambda, ECS/EKS, S3, DynamoDB, Bedrock)
Containerization (Docker, CI/CD, CloudFormation, CDK)
RAG pipelines, vector databases, embedding workflows
LLMs, prompt engineering, generative AI
Agentic orchestration (LangChain, LlamaIndex, AWS Agents)

Compensation

Salary Range

$200K - $250K a year

Responsibilities

Design, develop, and deploy scalable AI/ML applications and microservices in cloud environments, focusing on data pipelines, model deployment, and system orchestration.

Requirements

Over 8 years of experience in software/AI engineering, expert Python skills, extensive AWS and containerization experience, and specialized knowledge in generative AI and agentic systems.

Full Description

We are seeking a highly skilled Mid–Senior Level Engineer with strong expertise in Python, AWS cloud services, containerization, and modern AI/ML technologies. The ideal candidate has hands-on experience designing scalable data ingestion pipelines, deploying GenAI/LLM solutions, and building retrieval-augmented and agentic systems for enterprise use cases. This role involves end-to-end solution design (data, infrastructure, orchestration, and model integration) across cloud-native environments.

Key Responsibilities

• Design, develop, and deploy scalable applications and microservices using Python and AWS services (Lambda, ECS/EKS, S3, DynamoDB, API Gateway, Bedrock, CloudFormation, etc.).
• Build and maintain containerized workloads using Docker, GitHub workflows, and CI/CD automation.
• Develop robust data ingestion and processing pipelines integrating structured/unstructured data sources.
• Implement GenAI solutions using LLMs, embeddings, vector databases (Pinecone, FAISS, Redis, etc.), and RAG architectures.
• Build and manage knowledge bases, embedding pipelines, and context-retrieval systems optimized for real-world performance.
• Design and orchestrate agentic workflows using modern agentic frameworks and multi-agent patterns for automation and decision-making.
• Work with AWS Bedrock to integrate foundation models, manage guardrails, tune prompts, and optimize model performance.
• Implement secure, scalable infrastructure using CloudFormation, IAM, VPC, and AWS networking best practices.
• Collaborate with cross-functional teams (data, product, engineering) to translate requirements into technical designs.
• Monitor, troubleshoot, and optimize production AI/ML workloads, including inference performance, latency, cost, and reliability.
• Maintain strong code quality standards through GitHub version control, documentation, and automated testing.

Required Skills & Experience

• 8+ years of professional experience in software engineering, cloud engineering, or ML/AI development.
• Expert-level programming skills in Python (FastAPI, Flask, async frameworks preferred).
• Deep experience with AWS services, including serverless and container architectures.
• Hands-on experience with Docker, CI/CD, and IaC tools like CloudFormation or CDK.
• Proven experience building RAG pipelines, vector store integrations, and embedding workflows.
• Strong understanding of LLMs, prompt engineering, model evaluation, and generative AI development.
• Experience with agentic orchestration (LangChain, LlamaIndex, custom agent frameworks, or AWS Agents).
• Experience integrating with AWS Bedrock or similar foundation model platforms.
• Solid understanding of distributed systems, API development, security, and cloud-native patterns.
• Strong problem-solving abilities and the ability to work independently in fast-paced environments.
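
A rough sketch of the kind of RAG retrieval step this role describes, assuming FAISS as the vector store. The embed() helper here is a placeholder that hashes text into a deterministic pseudo-vector instead of calling a real embedding model, so the example runs on its own but the ranking is arbitrary; in practice it would be replaced by an embedding model call (for example, a Bedrock embedding model).

import hashlib
import numpy as np
import faiss  # pip install faiss-cpu

DIM = 64

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic pseudo-vector derived from the text,
    # standing in for a real embedding model call.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(DIM, dtype=np.float32)

documents = [
    "Lambda functions scale automatically with request volume.",
    "DynamoDB is a managed key-value and document database.",
    "Bedrock exposes foundation models behind a single API.",
]

# Build an in-memory vector index over the document embeddings.
index = faiss.IndexFlatL2(DIM)
index.add(np.stack([embed(d) for d in documents]))

# Embed the query and retrieve the two closest documents as context.
query = embed("Which service hosts foundation models?")
_, ids = index.search(query.reshape(1, -1), 2)
context = [documents[i] for i in ids[0]]
print(context)  # passages that would be stuffed into the LLM prompt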
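
Integrating a Bedrock-hosted foundation model on top of that retrieval step could then look roughly like the following, using boto3's Converse API. The region, model ID, and prompt wording are illustrative assumptions, not details taken from this posting.

import boto3

# Example region and model ID; both would come from project configuration.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

context = "Bedrock exposes foundation models behind a single API."
question = "Which AWS service hosts foundation models?"

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": f"Use this context to answer.\n\nContext: {context}\n\nQuestion: {question}"}],
    }],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# The Converse API returns the assistant message under output.message.content.
print(response["output"]["message"]["content"][0]["text"])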

This job posting was last updated on 12/11/2025
