via LinkedIn
$120K–$180K a year
Architect, train, and deploy lightweight, high-performance on-device AI models for sports analytics using time-series sensor data.
Strong expertise in deep learning architectures for time series, model compression and deployment on mobile devices, signal processing of IMU sensor data, production-quality Python and ML frameworks, and cross-team technical collaboration.
About Softeq: Established in 1997, Softeq was built from the ground up to specialize in new product development and R&D, tackling the most difficult problems in the tech sphere. We have since expanded to offer early-stage innovation and ideation as well as digital transformation business consulting. Our superpower is delivering all of this under one roof, on a global scale.

We are looking for a hands-on Senior Machine Learning Engineer to spearhead the development of an on-device AI solution for sports analytics. You will architect, train, and deploy lightweight, high-performance models that process dual-leg sensor (IMU) data to recognize complex movement patterns in real time. This is a pure engineering role requiring deep expertise in time-series analysis and edge optimization.

We expect to grow our team and begin new projects in the next 1-2 months. As such, we are starting to accept resumes and process selected candidates. Feel free to apply!

Locations:
• Vilnius, Lithuania (employment contract or B2B contract, hybrid)
• Warsaw, Poland (B2B contract, fully remote)

KEY SKILLS AND REQUIREMENTS

1. ML Architectures & Time Series
• Deep Learning for Sequences: deep understanding of modern architectures for time-series processing, specifically:
  - TCN (Temporal Convolutional Networks): dilated 1D convolutions, residual blocks, causal padding (illustrated in the sketch after section 4).
  - RNN variants: bidirectional LSTM / GRU, layer stacking.
  - Hybrid / attention models: 1D-CNN + attention mechanisms (Transformer-lite), projection heads.
• Classical ML Baselines: experience with Random Forest and XGBoost built on strong feature engineering (windowed statistics, spectral energy).
• Metric Design: ability to design robust evaluation metrics (Macro-F1, confusion-matrix analysis) and to handle severe class imbalance in real-world datasets.

2. Model Optimization & Edge Deployment
• Optimization Techniques: hands-on experience compressing models for mobile:
  - Quantization: post-training quantization (PTQ) to INT8.
  - Pruning: structured pruning of convolutional and recurrent layers.
  - Knowledge Distillation: training lightweight "student" models from heavy "teacher" models.
• Deployment Stack:
  - Interoperability: expert-level knowledge of the ONNX ecosystem (export, validation, versioning, opset compatibility).
  - Mobile Runtimes: experience preparing models for Core ML (iOS), TFLite / NNAPI (Android), and ONNX Runtime.
  - Constraint Management: proven ability to optimize models for strict hardware constraints: inference under 50–80 ms, model size under 5–10 MB.

3. Signal Processing & Data Handling
• Sensor Data (IMU): extensive experience working with raw accelerometer and gyroscope data (6-axis / 9-axis) and an understanding of motion physics.
• DSP Techniques:
  - Sensor calibration and gravity removal.
  - Resampling and synchronization (NTP time-sync alignment).
  - Normalization (Min-Max, per-session Z-score).
  - Feature extraction: RMS energy, jerk, spectral centroid.
• Data Augmentation (time-domain): implementation of time-warping, jittering (Gaussian noise), random window shifts, and channel dropout.

4. Engineering & MLOps
• Core Stack: production-quality Python; expert proficiency in PyTorch or TensorFlow.
• Infrastructure: experience managing cloud training environments (AWS/GCP), GPU resources, and Docker for reproducible training.
• Validation Strategy: implementation of strict subject-exclusive validation schemes (preventing a given user's data from leaking into the test set).
• Data Pipelines: building pipelines for multimodal data synchronization (video + sensor timestamps) and automated window slicing.
• Tooling: proficiency with experiment tracking tools (e.g., MLflow, Weights & Biases) to benchmark multiple architecture iterations (a minimal sketch appears after section 5).
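As a rough illustration of the TCN building blocks named in section 1 (dilated 1D convolutions, causal padding, residual connections), here is a minimal PyTorch sketch; the class name, channel counts, kernel size, and dilation are illustrative assumptions, not details of the actual project.

```python
# Minimal sketch of a TCN residual block: dilated causal 1D convolutions with a
# residual connection. Channel counts, kernel size, and dilation are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TCNBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Amount of left padding that keeps the convolution causal:
        # the output at time t only depends on inputs at times <= t.
        self.causal_pad = (kernel_size - 1) * dilation
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation)
        # 1x1 convolution to match channel counts on the residual path if needed.
        self.residual = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time), e.g. 6 IMU channels per window.
        y = F.relu(self.conv1(F.pad(x, (self.causal_pad, 0))))
        y = F.relu(self.conv2(F.pad(y, (self.causal_pad, 0))))
        return F.relu(y + self.residual(x))


# Example: one 200-sample window of 6-axis (accelerometer + gyroscope) data.
block = TCNBlock(in_ch=6, out_ch=32, kernel_size=3, dilation=2)
print(block(torch.randn(1, 6, 200)).shape)  # torch.Size([1, 32, 200])
```

Stacking such blocks with increasing dilations grows the receptive field without recurrence, which is one reason TCNs are often weighed against LSTMs for on-device workloads.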
5. Soft / Lead Skills (Technical Context)
• Decision Making: ability to justify architectural choices (e.g., LSTM vs. TCN) through the lens of the "accuracy vs. latency" trade-off.
• Cross-Team Integration: ability to bridge the gap between Data Science and Mobile Engineering, ensuring that Python preprocessing logic is correctly replicated in Swift/Kotlin/C++ on the device.
• Documentation: skill in writing technical specifications (recording protocols, model cards, API contracts).
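As a minimal sketch of the experiment tracking and accuracy-vs-latency benchmarking referenced in the Tooling and Decision Making items, the snippet below logs a few candidate models with MLflow; the toy models, metric names, and latency measurement are assumptions for illustration, not the project's actual benchmarking setup.

```python
# Minimal sketch: comparing candidate architectures with MLflow tracking.
# The toy models, metric names, and latency measurement are illustrative only.
import time

import mlflow
import torch
import torch.nn as nn


def latency_ms(model: nn.Module, example: torch.Tensor, runs: int = 50) -> float:
    """Rough desktop-CPU latency per inference; on-device numbers will differ."""
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(runs):
            model(example)
    return (time.perf_counter() - start) / runs * 1000.0


candidates = {
    # Toy stand-ins for real TCN / BiLSTM candidates.
    "conv1d_small": nn.Sequential(
        nn.Conv1d(6, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 5),
    ),
    "conv1d_wide": nn.Sequential(
        nn.Conv1d(6, 64, 3, padding=1), nn.ReLU(),
        nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 5),
    ),
}

example = torch.randn(1, 6, 200)  # one window of 6-axis IMU data
for name, model in candidates.items():
    with mlflow.start_run(run_name=name):
        mlflow.log_param("architecture", name)
        mlflow.log_param("num_params", sum(p.numel() for p in model.parameters()))
        mlflow.log_metric("latency_ms", latency_ms(model, example))
        # A real run would also log quality metrics from evaluation, e.g.:
        # mlflow.log_metric("macro_f1", macro_f1)
```

Logging parameter counts and latency alongside quality metrics is what makes the accuracy-vs-latency trade-off explicit when choosing between, say, an LSTM and a TCN.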
This job posting was last updated on 11/28/2025