$120K - 200K a year
Design and maintain semantic models, optimize performance, manage data connectivity, implement security, and develop dashboards in Power BI and Splunk.
6+ years of experience with ETL pipelines, Power BI modeling, Microsoft Fabric, SPL, KQL, ADX, and CI/CD practices, along with soft skills like collaboration and continuous learning.
Calling All Upstarters! SENIOR BI ENGINEER WANTED!

We are Upstart 13. We are humble, hungry, and competent people who are radically changing the expectations and experience of outsourcing for all participants by challenging barriers that create inequality and by bringing down borders in technology for people everywhere. We’re all about delivering value and doing big things. We have become a game-changer for teams around the world who look to Upstart 13’s services as a differentiator.

Job Description:

We are seeking a Senior BI Engineer located in Latin America to own the development and evolution of our Microsoft Fabric semantic model layer, and to drive the creation of high-quality, performant Power BI solutions for our Starbase project. You will work hands-on across Lakehouse/Warehouse, the SQL analytics endpoint, and the Power BI Service, applying best practices for dimensional modeling, SQL, DAX, and CI/CD in Fabric. You’ll also support telemetry and security analytics scenarios through Splunk and Azure Data Explorer (ADX). The ideal candidate is fluent in semantic model design, Direct Lake, and performance tuning. Power BI report-building experience is a plus.

Responsibilities:

• Design and maintain semantic models using star-schema principles (facts, dimensions, hierarchies, calculation groups, KPIs, reusable DAX logic).
• Optimize performance through aggregations, partitions, column sizing, caching, and monitoring tools (Performance Analyzer, Capacity Metrics).
• Manage data connectivity and ingestion from Lakehouse/Warehouse, the SQL analytics endpoint, Dataflows Gen2, and external sources, and configure gateways for on-prem data.
• Implement security models (RLS, OLS) and manage permissions, endorsements, and sensitivity labels in alignment with governance.
• Develop and maintain dashboards, alerts, and scheduled reports in Power BI and Splunk.
• Author advanced queries in DAX, SPL, and KQL for analytics and time-series scenarios, drawing on a strong Splunk background.
• Configure and maintain Splunk data onboarding (Forwarders, inputs.conf, props.conf, transforms.conf) and extract data via the REST API or SDK (see the sketch after this list).
• Build ADX ingestion pipelines from Event Hubs, Blob Storage, or custom connectors; manage retention policies and optimize performance.
• Apply CI/CD best practices using PBIP, Git integration, deployment pipelines, and TMDL serialization; leverage external tools (Tabular Editor, DAX Studio).
• Document definitions, lineage, and refresh behaviors, and promote reuse of semantic layers across workspaces.
• Collaborate with admins to monitor capacity, ensure healthy refreshes, and maintain secure, scalable environments.
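To make the Splunk extraction responsibility concrete, here is a minimal sketch using the official splunk-sdk Python package. The host, service account, index, sourcetype, and SPL query are illustrative placeholders, not project specifics.

```python
# Minimal sketch: extract Splunk search results via the SDK for downstream
# analytics. All connection details and the SPL query are placeholders.
import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (8089 by default).
service = client.connect(
    host="splunk.example.com",   # placeholder host
    port=8089,
    username="svc_bi_reader",    # placeholder service account
    password="********",
)

# Run a blocking one-shot search; the SPL here is illustrative only.
rr = service.jobs.oneshot(
    "search index=telemetry sourcetype=app_logs earliest=-24h "
    "| stats count by host",
    output_mode="json",
)

# Stream results; JSONResultsReader yields dicts for events and
# Message objects for server notices, so filter on type.
for event in results.JSONResultsReader(rr):
    if isinstance(event, dict):
        print(event)
```

The same extraction can also be done with raw calls to the Splunk REST API or with SPL export commands; the SDK route shown here is simply the least boilerplate for scheduled pulls.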
Qualifications:

Experience:

• 6+ years of experience building production ETL/ELT pipelines.
• Strong Power BI modeling experience with dimensional/star-schema design, and strong DAX and Power Query (M).
• Hands-on experience with Microsoft Fabric (Lakehouse/Warehouse, SQL analytics endpoint, OneLake concepts).
• Proficient in SPL, including advanced commands, subsearches, macros, and eval functions.
• Experience with data onboarding using Universal/Heavy Forwarders and configuring inputs.conf, props.conf, and transforms.conf.
• Ability to extract data from Splunk using the REST API, SDK, or export commands for downstream analytics.
• Proven use of Direct Lake alongside Import/DirectQuery, with a clear understanding of when and why to use each.
• RLS/OLS design and implementation experience.
• PBIP + Git and deployment pipelines for CI/CD; TMDL familiarity (preview features acceptable).
• Performance tuning (VertiPaq fundamentals, partitions, aggregations; Performance Analyzer).
• Comfortable with SQL.
• Strong KQL skills, including joins, summarize, make-series, and time-series analysis functions (see the sketch after the Bonus skills list).
• Experience with ADX ingestion pipelines from Event Hubs, Blob Storage, or custom connectors.
• Familiarity with ADX retention policies, caching, and performance optimization.
• Familiarity with TMDL and external tools (e.g., Tabular Editor, DAX Studio).

Soft skills:

• Execution-First Mindset: delivers working functionality quickly, then iterates.
• Curious Integrator: enjoys untangling messy, niche data feeds and brings order.
• Quality Advocate: insists on tests, logging, and clear hand-offs.
• Collaborative Communicator: explains decisions to peers and stakeholders.
• Continuous Learner: keeps abreast of Azure and Fabric feature releases.

Bonus skills:

• Experience with the XMLA endpoint and model lifecycle management with external tools (Tabular Editor, DAX Studio).
• SemPy / Semantic Link to collaborate with notebooks and data science, or to validate models programmatically.
• Experience with the Fabric Capacity Metrics app and workspace monitoring.
• Knowledge of Analyze in Excel patterns and enterprise Excel connectivity to semantic models.
• Experience deploying models in hybrid (cloud/on-prem) environments.
• Building Power BI reports.
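As a concrete illustration of the KQL time-series requirement above, here is a minimal sketch that runs a make-series anomaly query against ADX using the azure-kusto-data Python package. The cluster URL, database, table, and column names are assumed placeholders.

```python
# Minimal sketch: a make-series time-series query against Azure Data
# Explorer via azure-kusto-data. Cluster, database, table, and column
# names are placeholders, not project specifics.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://mycluster.kusto.windows.net"  # placeholder cluster URL
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Hourly request counts over the last 7 days, with anomaly flags from
# KQL's built-in seasonal decomposition.
query = """
Telemetry
| make-series Requests = count() default = 0
    on Timestamp from ago(7d) to now() step 1h
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(Requests)
"""

response = client.execute("analytics_db", query)  # placeholder database
for row in response.primary_results[0]:
    print(row["Anomalies"])
```

A query like this is also where the ADX retention and caching familiarity listed above shows up in practice: whether the 7-day window is covered by the hot cache largely determines how fast the scan runs.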
Why Upstart 13?

• We put people first at Upstart 13! We believe the world is filled with amazing people, and we are willing to go to great lengths to seek out others who share our values to join our cause of bringing down borders in technology for people everywhere.
• We develop leaders at Upstart 13. We focus on what matters to do meaningful work, we own our shit, we stay curious, and we understand responsibility leads to giving. We do big things together!

Perks:

• Job type: long-term, full-time.
• Fully remote.
• Competitive USD salary.
• 20+ paid time off days.

Are you ready to join our cause? Be sure to ask, “Why 13?”

This job posting was last updated on 1/8/2026.