Come and join us to shape the future of the insurance industry!
SCOR Digital Solutions is a global insurance consultancy helping insurers worldwide to grow sustainably. A critical part of the SCOR Group, we specialise in developing industry-leading digital solutions for every part of the consumer journey, from underwriting to engagement to claims. By combining SCOR’s comprehensive data and analytical expertise with the award-winning capabilities of our in-house product and technical teams, we help insurers transform the experience of their consumers worldwide.
Roles & Responsibilities
We’re building a product analytics capability embedded in our Underwriting/Claims platform to deliver critical insights for our customers. These insights are essential to improving our product offering, driving better underwriting and claims decisions, and creating tangible value for clients. You will be responsible for data collection across our Underwriting & Claims platform (underwriting engine, workbench, AI data extraction capabilities, underwriting guidelines), ensuring complete and accurate capture of operational and product data.
This is a role for a hands-on principal data engineer who will lead the Product & Business Analytics team from day one. You’ll combine deep technical expertise with people-management responsibilities, ensuring the team delivers robust data pipelines into Databricks and actionable insights through Tableau dashboards and APIs. This role also ensures that our analytics layer connects seamlessly to policy data managed by the CDAO, enabling a unified view across operational and enterprise domains.
This job has a broad remit encompassing – but not limited to – the work areas below:
- Lead and manage the team: set priorities, coach engineers, and build a high-performing culture.
- Own data collection across the Underwriting & Claims platform (underwriting engine, workbench, AI extraction, guidelines).
- Design event schemas & data contracts for the underwriting engine, workbench interactions, AI extraction outputs, and guideline lookups; ensure click‑through traceability to sources for explainability.
- Design and optimize Databricks pipelines (Unity Catalog, Delta, medallion architecture) with strong governance and lineage.
- Own the medallion architecture (bronze/silver/gold) with Unity Catalog for RBAC, lineage, and audit; standardize DLT/Jobs pipelines and SQL Warehouses for serving.
- Define SLOs (freshness, availability, latency) and build data tests (expectations, schema checks, reconciliation) into CI/CD.
- Apply Unity Catalog lineage, classification, and least‑privilege access; enforce retention and masking where needed.
- Shape gold‑layer datasets and semantic SQL for Tableau (live or extract) with performant joins, row‑level security, and caching strategies; keep heavy transforms in Databricks, not in the BI tool.
- Follow Data Management Standard, ICT governance, and Security Management guidelines (GDPR/PII handling, audit readiness).
- Embed in agile squads, collaborating with app engineers and product owners to integrate analytics into workflows.
- Align with Group CDAO on data contracts, ADM integration, and CMDS compliance.
- Drive innovation and continuous improvement in data engineering practices.
Core competencies
The successful candidate will be enthusiastic about the responsibilities above and will bring a skillset that complements the role well, including:
- Solid understanding of data governance and security: RBAC, lineage, auditability, GDPR/PII handling.
- Hands-on experience with CI/CD for data pipelines, automated testing, schema evolution, and observability (data quality checks, monitoring, alerting).
- Proficiency in Python and SQL for data transformations, orchestration, and API integration.
- Familiarity with streaming and batch ingestion patterns, including event-driven architectures and data contracts.
- Ability to design and optimize APIs for analytics consumption (Databricks SQL endpoints, REST/GraphQL patterns).
- Strong collaboration skills: comfortable working in multi-skill agile squads, influencing architecture decisions, and aligning with enterprise governance.
- A pragmatic, innovation-driven mindset: balancing lean delivery with compliance and quality.
Nice to have
- Knowledge of insurance or reinsurance domains, especially underwriting engines, claims automation, and AI-assisted data extraction.
- Experience integrating analytics with policy data and enterprise master data systems (e.g., CMDS).
- Exposure to FinOps practices for cost optimization in cloud data platforms.
Required skills & experience
- 8+ years of experience in data engineering or analytics engineering, with 3+ years in technical leadership or team management roles.
- Proven expertise in Databricks (Unity Catalog, Delta, DLT/Jobs, SQL Warehouses) and data modeling for analytics (event-driven, SCD, snapshotting).
- Strong experience with Tableau (semantic SQL design, row-level security, performance tuning, content governance).
What we offer
- Be part of an international culture alongside tech specialists.
- Medical Allowance and pension plans.
- Remuneration Policy.
- Green Policy.
- Evolve in a stimulating and challenging environment.
- Share and learn with a passionate international community.
- Work with a start-up mentality.
The company working language is English. All employees should speak, read, and write English to a level sufficient to communicate and operate effectively in the organization.
The recruitment process
You can expect the following stages:
1. Screening interview with HR (online)
2. Interview with the hiring team & manager (online or in-person)
3. Written test or case study
Apply now
If you feel you have something unique to bring, make your case by getting in touch. We’d love to hear from you.