WhyHireWrong?

4 open positions available

1 location
1 employment type
Actively hiring
Full-time

Latest Positions

Showing 4 most recent jobs

Product Data Scientist: Agentic AI

WhyHireWrong? · Anywhere · Full-time
Compensation: $70K - $120K a year

Develop and scale an internal Agentic AI framework, collaborating with product teams and global stakeholders.

Requirements summary: Master's degree, or Bachelor's with strong data science experience; 2+ years of production-grade data science including GenAI; solid Python skills.

The Role
This is a product-focused data science position sitting within a dedicated Agentic AI team. The core responsibility is co-owning the development and direction of an internal Agentic AI framework: ensuring it scales to a growing list of use cases and delivers a strong developer experience for the data scientists building on top of it. This is not a pure research role. It combines hands-on engineering, product thinking, and close collaboration with AI Engineers to build something that other data scientists rely on daily.

What the Work Looks Like Day to Day
- Partner with product teams and business leaders to understand and define Agentic AI use cases
- Collaborate with data scientists across global teams to gather feedback on the agent-building experience and translate it into framework improvements
- Shape and drive the evolution roadmap for the Agentic AI framework
- Apply GenAI and Agentic AI techniques to solve real business problems
- Build and maintain resilient, production-grade algorithmic and agentic pipelines
- Write clean, well-structured code following engineering best practices
- Deepen applied knowledge across machine learning, optimization, statistical modeling, and GenAI

Technical Stack
- Cloud: Microsoft Azure, Google Cloud Platform, Kubernetes
- Languages: Python, Spark (preferred); SQL for analytical work
- Big data ecosystem: Databricks, BigQuery, Spark
- Dev tools: GitHub, Jira, Confluence (Agile DevOps environment)
- BI tools: PowerBI or Tableau (basic familiarity useful)

What Is Required
- Master's degree in a quantitative field (Statistics, Operations Research, Computer Science, Applied Mathematics, Systems Engineering, Economics), OR a Bachelor's or Engineering degree with strong, consecutive data science experience
- At least 2 years of delivering production-grade data science or algorithmically enabled applications, with at least some of that experience involving GenAI-based solutions
- Solid Python skills; Spark experience is a plus
- Experience with, or genuine interest in, building tools and frameworks used by other data scientists
- Strong analytical thinking across optimization, simulation, predictive modeling, and experimentation
- Comfortable taking ownership, navigating ambiguity, and working across distributed global teams

What Strengthens an Application
Prior experience building developer tooling, internal platforms, or frameworks for data science teams is a genuine differentiator here. This role sits at the intersection of engineering and product, and candidates who have thought about developer experience, not just model performance, will stand out.

Working Model and Location
This role is based in Warsaw, Poland, on a hybrid working arrangement. Regular on-site presence in Warsaw is required. Full remote is not available for this position.
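As a hedged illustration of the "strong developer experience" goal described above, the sketch below shows one possible shape for a framework's developer-facing API: a data scientist registers a plain Python function as a tool, and the agent routes requests to it. The names (Agent, tool, forecast_demand) and the stubbed model call are assumptions for illustration, not the company's actual framework.

```python
# Hypothetical sketch of a developer-facing API for an internal agentic framework.
# All names and the stubbed LLM call are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Agent:
    """Registers plain Python functions as tools and routes requests to them."""
    llm: Callable[[str], str]                      # pluggable model call (stubbed below)
    tools: Dict[str, Callable[..., str]] = field(default_factory=dict)

    def tool(self, fn: Callable[..., str]) -> Callable[..., str]:
        """Decorator: expose a data scientist's function to the agent."""
        self.tools[fn.__name__] = fn
        return fn

    def run(self, request: str) -> str:
        """Single-step loop: ask the model which tool to call, then call it."""
        tool_name = self.llm(f"Pick one tool from {list(self.tools)} for: {request}")
        handler = self.tools.get(tool_name, lambda q: f"no tool found for {q!r}")
        return handler(request)


# Stub model so the sketch runs without any external service.
agent = Agent(llm=lambda prompt: "forecast_demand")


@agent.tool
def forecast_demand(query: str) -> str:
    return f"stub forecast for: {query}"


print(agent.run("How many units will store 42 sell next week?"))
```

The design point such a framework aims at is that the framework, not each user, owns the routing loop, logging, and guardrails, so data scientists only supply domain functions.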

Python
SQL
Machine Learning
Federated Learning
Distributed Systems
Direct Apply
Posted about 18 hours ago

Data Scientist: Machine Learning and GenAI

WhyHireWrong? · Anywhere · Full-time
Compensation: $55K - $120K a year

Develop and integrate machine learning models to solve business problems using large datasets.

Requirements summary: Master's degree, or Bachelor's with experience; strong Python and SQL skills; 2+ years of production data science experience.

The Role
This is an ownership-driven data science position within a scaled, globally distributed hub focused on bringing algorithms to production. The work spans traditional machine learning, deep learning, GenAI, optimization, and statistical modeling. Methods are chosen based on the problem, not the trend. The scope covers high-impact business domains including retail, media, digital commerce, supply chain, R&D, and productivity. This is not a research-only role. The expectation is to understand the business problem deeply, build the right model, and see it through to reliable production deployment.

What the Work Looks Like Day to Day
- Take ownership of a defined business domain and its algorithmic needs, from problem framing through to deployed solution
- Partner with product, business, and AI engineering teams to automate and integrate models into live applications
- Analyze large-scale datasets (think: processing billions of behavioral signals daily) and translate findings into actionable recommendations
- Define and evolve the algorithmic roadmap for your area of ownership
- Apply machine learning, statistical, optimization, and GenAI techniques to real business problems
- Write production-grade code following engineering best practices
- Build resilient, maintainable algorithmic pipelines that hold up over time

Technical Stack
- Cloud: Microsoft Azure, Google Cloud Platform, Kubernetes
- Languages: Python, Spark (preferred); SQL for analytical work
- Big data ecosystem: Databricks, BigQuery, Spark
- Dev tools: GitHub, Jira, Confluence (Agile DevOps environment)
- BI tools: PowerBI or Tableau (basic familiarity useful)

What Is Required
- Master's degree in a quantitative field (Statistics, Operations Research, Computer Science, Applied Mathematics, Systems Engineering, Economics), OR a Bachelor's or Engineering degree with solid, consecutive data science experience
- At least 2 years of experience delivering production-grade data science or algorithmically enabled applications
- Strong Python skills with hands-on experience in machine learning, statistical modeling, and optimization
- Solid SQL and analytical skills
- Demonstrated ability to lead problem solving and prioritize across competing demands
- Comfortable working across cross-functional teams in a fast-moving environment

What Strengthens an Application
Experience with the full lifecycle of an algorithmic product: not just model building, but deployment, monitoring, and iteration. Familiarity with big data tooling (Databricks, BigQuery, Spark) and exposure to GenAI or optimization methods are genuine advantages, not box-ticking requirements.

Working Model and Location
This role is based in Warsaw, Poland, on a hybrid working arrangement. Regular on-site presence in Warsaw is expected; full remote is not available for this position.
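The "billions of behavioral signals" framing above is the kind of workload the listed stack (Spark, Databricks, BigQuery) targets. Below is a minimal, hedged PySpark sketch of turning raw behavioral events into per-user features; the schema (user_id, event_type) and the tiny in-memory data are invented for illustration, and a real job would read from Databricks or BigQuery tables instead.

```python
# Minimal sketch, assuming PySpark is available locally; table and column names
# are illustrative, not the company's actual schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("behavioral-signals-sketch").getOrCreate()

# In production this would read from Databricks/BigQuery; here, a tiny in-memory frame.
events = spark.createDataFrame(
    [("u1", "view"), ("u1", "purchase"), ("u2", "view")],
    ["user_id", "event_type"],
)

# Aggregate raw behavioral signals into per-user features for downstream modeling.
features = (
    events.groupBy("user_id")
    .agg(
        F.count("*").alias("n_events"),
        F.sum(F.when(F.col("event_type") == "purchase", 1).otherwise(0)).alias("n_purchases"),
    )
)
features.show()
```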

Machine Learning
Python
SQL
Direct Apply
Posted about 18 hours ago

Senior Data Engineer - all genders - Google Cloud Platform

WhyHireWrong? · Anywhere · Full-time
Compensation: $90K - $140K a year

Design and implement scalable data pipelines, and collaborate with architects and managers to improve processes.

Requirements summary: Bachelor's degree with proven data engineering experience, strong Python skills, and familiarity with GCP big data services.

The Role
This is a hands-on data engineering position embedded in a product-focused environment. The work spans the full data lifecycle: gathering requirements from stakeholders, designing technical solutions, and shipping reliable, scalable pipelines. Expect close collaboration with data architects, asset managers, and product managers. This role sits at the intersection of engineering and business outcomes.

What the Work Looks Like Day to Day
- Translate product and business requirements into technical data solutions
- Design and implement data pipelines and capabilities across product offerings
- Work with data architects to ensure solutions are aligned with the broader technical strategy
- Identify gaps in internal processes and lead improvements
- Write clean, reusable code that adheres to established engineering standards
- Communicate technical decisions clearly to both technical and non-technical audiences

Technical Stack
The primary environment is Google Cloud Platform. Day-to-day tooling includes:
- BigQuery for data analysis and processing
- Cloud Composer / Airflow for workflow orchestration
- Dataproc / PySpark for large-scale data processing
- Vertex AI for machine-learning-adjacent workloads
- Cloud Spanner and Cloud Run for additional platform needs
- Python as the primary programming language, with GitHub Copilot integrated into the workflow

What's Required
- Degree in Computer Science, Engineering, or a related field, or equivalent demonstrated experience
- Proven background in data engineering and architecture, including ownership of strategic technical initiatives
- Strong Python skills with hands-on experience across the GCP services listed above
- Familiarity with PySpark and big data processing patterns
- Ability to explain complex technical concepts to varied audiences
- Comfortable working in fast-moving environments where priorities shift

What Sets a Strong Candidate Apart
A track record of not just building pipelines, but improving how a team builds them: process thinking alongside technical depth. No need to be an expert in every tool listed. Intellectual curiosity and a structured approach to learning matter more than a perfect checklist match.
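To make the Cloud Composer / BigQuery portion of the stack concrete, here is a minimal, hedged Airflow DAG sketch with a single BigQuery step. It assumes a recent Airflow 2.x with the apache-airflow-providers-google package installed, and the dataset, table, and DAG names are invented for illustration.

```python
# Hedged Cloud Composer / Airflow sketch: one daily BigQuery aggregation task.
# Dataset and table names (analytics.daily_orders, raw.orders) are assumptions.
import pendulum
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_aggregation",
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    # Rebuild a derived table from the previous day's raw events.
    aggregate_orders = BigQueryInsertJobOperator(
        task_id="aggregate_orders",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_orders AS
                    SELECT order_date, COUNT(*) AS n_orders
                    FROM raw.orders
                    WHERE order_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```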

Python
Data Engineering
Machine Learning
Direct Apply
Posted about 19 hours ago

MLOps Engineer: ML Risk Platform

WhyHireWrong? · Anywhere · Full-time
Compensation: $100K - $140K a year

Design and operate automated ML model deployment infrastructure, ensuring reliability and compliance across jurisdictions.

Requirements summary: Experience building production ML pipelines, managing containerized ML workloads on Kubernetes, and working in regulated environments with compliance knowledge.

Every day, the models make financial decisions that affect real people. Credit approvals, fraud blocks, transaction risk scores. If a model drifts silently in production, customers get wrongly declined. If a pipeline breaks at 2am, no one catches it until the damage is done. This role exists to make sure that does not happen. You will own the infrastructure that takes ML models from a data scientist's notebook into production systems processing millions of events daily, and keeps them running reliably across multiple regulatory jurisdictions. Not maintaining someone else's setup. Building and owning it.

What You Will Work On
- Model pipelines: Design and operate automated training, validation, deployment, and rollback workflows across our credit scoring, fraud detection, and transaction risk models
- Production monitoring: Build observability for ML-specific failure modes, including data drift, prediction drift, and feature skew, not just system uptime
- Compliance instrumentation: Maintain the full audit trails and model cards required for internal model risk reviews and regulatory examination under the EU AI Act and GDPR
- Infrastructure ownership: Run Kubernetes-based ML serving on AWS or Azure; manage CI/CD pipelines that version code, data, and models simultaneously
- Reliability and incident response: Define SLAs for latency-sensitive scoring models and own the full response when something breaks in production
- Cost management: Optimise cloud spend for GPU training jobs and batch inference workloads; compute budgets in fintech are scrutinised closely

5 Non-Negotiable Requirements
1. Production ML pipelines you built yourself. You have designed and operated automated training, validation, and deployment pipelines serving real users in a live environment. Not internal tooling. Not a prototype. If the pipeline broke, you were the one who fixed it.
2. Kubernetes in production. You have deployed and managed containerised ML workloads on Kubernetes, including autoscaling, resource limits, and failure recovery. EKS, AKS, or GKE.
3. ML lifecycle ownership. Hands-on model versioning, experiment tracking, and registry management using MLflow, Weights and Biases, or equivalent. You managed promotion gates and rollback procedures, not just tracked experiments.
4. Monitoring for ML-specific failures. You have built observability for data drift, prediction drift, and feature skew, not just CPU and memory. Evidently AI, Whylogs, Prometheus, or equivalent. You defined what an alert means and what to do when it fires.
5. Regulated environment experience. You have worked in fintech, banking, or insurance, where model decisions required audit trails, explainability artefacts, or sign-off from a risk or compliance function. You know what SR 11-7, the EU AI Act, or GDPR means for an ML pipeline in practice.

Full Technical Stack
- Core: Python, Docker, Kubernetes, GitHub Actions or GitLab CI
- ML platform: MLflow, Apache Airflow or Prefect
- Cloud: AWS SageMaker with EKS, or Azure ML with AKS
- Monitoring: Prometheus, Grafana, Evidently AI
- Data: Spark, PostgreSQL, S3 or Azure Blob
- Useful but not required on day one: Terraform, feature stores such as Feast or Tecton, LangChain for LLM pipeline integration, SHAP or LIME for explainability

What This Role Is Not
- Not a data science role. You will not be building models.
- Not a generic DevOps role. Kubernetes experience without ML context is not sufficient.
- Not a research or platform architecture role. All work is production-focused, with hard reliability and compliance constraints.

How to Apply
This role is open to EU-based candidates only. We are not considering applications from outside the European Union at this time, regardless of remote working arrangements or timezone compatibility. Submit your CV and record a short video answer to one question: describe a machine learning pipeline you built and owned in production. What broke, how did you detect it, and what did you change? The video format is uncomfortable. We know that. If you still do it, that already tells us something.

How Applications Are Assessed
I want to be upfront about how this works before you invest your time. Every CV is scored against the 5 non-negotiable requirements only. One point per requirement. 5 out of 5 to proceed. Not 4. If a requirement is listed as a tool or skill without context describing what you built and what it served, it scores 0. I compare all applications before advancing anyone. If the pool of 5-out-of-5 scores is larger than 15, I rank by depth of regulated environment experience and scale of systems owned. The top 15 go forward. If fewer than 15 score 5 out of 5, all of them go forward. The video is reviewed by me and the team together. We are not assessing your camera confidence. We are assessing whether your answer is specific, whether you owned what you are describing, and whether your response to a real production failure was sound. I do not follow up to ask for clarification on an ambiguous CV. What is written is what is scored. You will hear back from us regardless of outcome. That is a promise, not a pleasantry.

Hubert Warszta
Tech Recruiter | WhyHireWrong?
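Requirement 4 above centres on detecting ML-specific failures such as data drift. As a hedged illustration of what a minimal drift check involves, the sketch below computes a Population Stability Index (PSI) between a training-time reference sample and a production window. A real setup would more likely use Evidently AI or Whylogs as the posting lists; the 0.25 alert threshold, the synthetic feature values, and the alert action are illustrative assumptions.

```python
# Hedged sketch of a data-drift check via the Population Stability Index (PSI).
# Thresholds, data, and alert policy are illustrative, not a prescribed setup.
import numpy as np


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training) sample and a current (production) sample."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor empty bins to avoid log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


rng = np.random.default_rng(0)
training_scores = rng.normal(loc=0.0, scale=1.0, size=10_000)  # reference window
live_scores = rng.normal(loc=0.8, scale=1.0, size=10_000)      # shifted production window

value = psi(training_scores, live_scores)
# 0.25 is a commonly used "significant drift" threshold; what the alert triggers
# (page the on-call, block promotion, retrain) is a team policy decision.
print(f"PSI={value:.3f}", "ALERT: drift detected" if value > 0.25 else "ok")
```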

Python
Docker
Kubernetes
MLflow
Apache Airflow
Direct Apply
Posted 13 days ago
