

Customer Story

Control AI Risk under the EU AI Act

Under the EU AI Act, high-risk AI systems must move beyond intent to proof, requiring clear, measurable evidence of performance and risk control. PastaHR anticipated this shift early, proactively aligning its AI solution with these requirements. Then, they partnered with LatticeFlow AI to independently validate how the system performs in practice.

In collaboration with LatticeFlow AI, they translated EU AI Act obligations into concrete technical evaluations, generating measurable evidence on accuracy, robustness, and security. This enabled PastaHR to build trust with customers through transparent, evidence-based validation, establish audit-ready documentation for EU AI Act compliance, and uncover system-level insights that ensure its AI solution performs to the highest standards.

Access the full EU AI Act assessment

See how PastaHR evaluated JobFit against EU AI Act requirements, translating high-risk obligations into measurable system evaluation and technical evidence.

Challenge

PastaHR provides AI-powered screening through JobFit, a solution enabling recruiting teams to evaluate candidates efficiently at scale. Alignment with the EU AI Act was a priority from day one.

The regulation defines clear expectations for high-risk systems across accuracy, robustness, and cybersecurity, including technical evaluations and documentation. The challenge was translating these requirements into concrete, system-level validation.

This meant assessing how consistently JobFit evaluates candidates across different profiles, how it behaves under variations in language and input data, and how resilient it is to manipulation attempts and unintended feedback effects. These are questions that require direct evaluation of system behavior under real conditions.

PastaHR needed to establish a technical blueprint to perform concrete, repeatable, and scalable assessments, measuring AI system performance and compliance under the EU AI Act, and generating technical evidence and reports to share with customers.

Solution

PastaHR selected LatticeFlow AI to turn EU AI Act requirements for high-risk AI systems into measurable evaluation of system behavior, establishing a concrete foundation to control AI risk.

Focusing on Article 15 of the EU AI Act, LatticeFlow AI translated requirements on accuracy, robustness, and cybersecurity into system-level assessments. These evaluations reflect how JobFit performs in the real world, across different candidate profiles, inputs, and conditions.

Thanks to the assessment, PastaHR gained external validation of their system’s performance across key dimensions such as decision quality, robustness to language and input variability, and resilience to adversarial inputs. By making system behavior measurable, the assessment established a structured and repeatable blueprint to evaluate AI systems used in hiring, generating technical evidence and a detailed report to support compliance and control AI risk in production.

Results

Through the assessment, PastaHR was able to externally validate that its AI-powered candidate screening system, JobFit, meets EU AI Act requirements for high-risk AI systems, based on measurable, system-level evidence.

This provides customers with a clear and verifiable understanding of how the system performs across accuracy, robustness, and cybersecurity in real-world conditions, increasing transparency into how candidates are evaluated.

Beyond validation, the blueprint developed by LatticeFlow AI delivers broader impact. It enables PastaHR to build trust with customers through evidence-based evaluations, establish audit-ready documentation to support EU AI Act compliance, and gain deeper technical insight into system behavior, revealing failure modes and enabling targeted improvements.

As a result, PastaHR can support its customers not only with AI capabilities, but with the clarity and evidence required to confidently adopt, assess, and oversee AI systems in hiring.

What our Customer Says


Under the EU AI Act, we need technical proof of how our systems perform and how risk is controlled. LatticeFlow AI helped us to achieve that, through measurable evaluation and clear evidence.


Patrick Schnyder

Co-founder & MD, PastaHR

Frequently Asked Questions

What does the EU AI Act require from high-risk AI systems?
Under the EU AI Act, high-risk AI systems must meet requirements on accuracy, robustness, and cybersecurity. Providers are expected to demonstrate compliance through technical documentation and evidence showing how their systems perform in practice.

How can providers prepare for EU AI Act compliance?
Preparing for EU AI Act compliance requires a clear understanding of how AI systems behave under real conditions. Providers need to evaluate system performance and generate measurable evidence that supports compliance claims. LatticeFlow AI enables this by translating regulatory requirements into technical evaluations and structured reports.

What is Article 15 of the EU AI Act, and why is it critical?
Article 15 focuses on the technical performance of high-risk AI systems, specifically accuracy, robustness, and cybersecurity. It is critical because it requires providers to demonstrate that their systems perform reliably, handle variability, and are resilient to errors and adversarial inputs.

How does LatticeFlow AI help control AI risk?
LatticeFlow AI helps control AI risk by making system behavior measurable. It translates EU AI Act requirements into concrete technical evaluations, enabling organizations to assess how their systems perform in practice and generate evidence to support risk and compliance decisions.

What is a technical blueprint for AI assessment?
A technical blueprint is a structured and repeatable approach to assessing AI systems against regulatory requirements. LatticeFlow AI provides such a blueprint by defining evaluation scenarios, measuring system performance across key dimensions, and generating evidence and reports that can be reused over time.

How does LatticeFlow AI generate compliance evidence?
LatticeFlow AI generates evidence by mapping regulatory requirements to system-level evaluations. These assessments measure performance across dimensions such as accuracy, robustness, and cybersecurity, and the results are documented in reports that support compliance and audit-readiness.

Why do high-risk AI systems need technical evaluations?
High-risk AI systems require clear validation of how they behave in practice. Technical evaluations provide measurable insight into system performance, including consistency, robustness to input variability, and resilience under challenging conditions, enabling more reliable and transparent use.

What insights do technical evaluations provide?
Technical evaluations reveal how systems behave under different conditions, including potential failure modes and inconsistencies. This enables deeper analysis of system behavior and supports targeted improvements to performance, reliability, and transparency.

How do evidence-based evaluations build customer trust?
Evidence-based evaluations provide transparency into how AI systems perform in real-world conditions. By sharing measurable results and technical reports, organizations can demonstrate reliability and accountability, helping customers better understand and trust AI systems.

Why is audit-ready documentation important?
The EU AI Act requires providers to maintain technical documentation for high-risk AI systems. Audit-ready documentation, supported by measurable evidence, enables organizations to demonstrate compliance and respond effectively to regulatory and customer requirements.

What does this customer story demonstrate?
This customer story shows how EU AI Act requirements for high-risk AI systems can be translated into concrete technical evaluations. It demonstrates how providers can validate compliance, generate evidence, improve audit-readiness, and gain technical insight to strengthen their AI systems.