Designing and developing backend systems and APIs for AI-powered applications, including LLM workflows and cloud integrations.
Proficiency in Python and deep learning frameworks, experience with LLMs, API design, cloud and containerization skills, and security best practices.
The Role:
We are looking for a Senior Python AI Engineer to join our fast-growing Network and design and develop backend systems and APIs for AI-powered applications. You will play a key role in building scalable backend systems and APIs, collaborating closely with cross-functional teams to shape the future of data-driven products across various platforms.

What we are looking for:
• Strong proficiency in Python (5+ years), including modern frameworks (FastAPI, Flask, or Django).
• Deep learning frameworks (PyTorch, TensorFlow) for custom modeling beyond LLM APIs.
• Experience with large language models (LLMs) such as GPT, Gemini, LLaMA, or similar.
• Experience with prototyping tools (Streamlit, Gradio).
• Solid experience designing RESTful APIs and microservice architectures.
• Strong backend development expertise, including databases (SQL/NoSQL).
• Experience with version control (Git) and CI/CD workflows.
• Hands-on experience with containerization (Docker, ideally Kubernetes).
• Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.
• Understanding of security best practices for handling sensitive data.
• Strong problem-solving skills to address complex challenges and performance bottlenecks.
• Excellent technical communication skills to collaborate effectively across teams and explain technical concepts to non-technical stakeholders.
• Ability to work independently while aligning with broader team goals.
• Intermediate to advanced English level.
• Time zone: CET (+/- 3 hours). We are unable to consider applications from candidates in other time zones.

AI/ML & LLM Ecosystem:
• LLM orchestration frameworks: LangChain, LangGraph, LlamaIndex.
• Retrieval-Augmented Generation (RAG) pipeline design.
• Experience with vector databases (Pinecone, Weaviate, Milvus, Chroma, FAISS).
• Hands-on with LLMs & APIs: OpenAI (GPT-5/5-mini), Anthropic Claude, Google Gemini, Meta Llama, Mistral.
• Familiarity with AWS Bedrock for accessing and deploying foundation models.
• Prompt engineering and structured output design (JSON mode, function calling).
• Model fine-tuning (LoRA, QLoRA) and evaluation frameworks (DeepEval, Ragas).

Responsibilities:
• Design and develop backend systems and APIs for AI-powered applications.
• Build and optimize LLM-based workflows, including chatbots, copilots, and automation tools.
• Implement RAG architectures using vector databases and document pipelines.
• Integrate and orchestrate cloud-hosted foundation models (AWS Bedrock, OpenAI, Anthropic, Google Gemini, Meta Llama, Mistral).
• Collaborate cross-functionally with data scientists, product managers, and frontend developers to deliver end-to-end AI products.
• Ensure performance, scalability, and cost optimization of AI solutions in production environments.
• Monitor, evaluate, and continuously improve deployed AI systems.

What we offer:
• Get paid, not played: No more unreliable clients. Enjoy on-time monthly payments with flexible withdrawal options.
• Predictable project hours: Enjoy a harmonious work-life balance with consistent 8-hour working days with clients.
• Flex days, so you can recharge: Enjoy up to 24 flex days off per year without losing pay, for full-time positions found through Proxify.
• Career-accelerating positions at cutting-edge companies: Discover exclusive long-term remote positions at the world's most exciting companies.
• Hand-picked opportunities, just for you: Skip the typical recruitment roadblocks and biases with personally matched positions.
• One seamless process, multiple opportunities: A one-time contracting process for endless opportunities, with no extra assessments.
• Compensation: Enjoy the same pay, every month with positions landed through Proxify.
This job posting was last updated on 12/17/2025