

Customer Story

Turning AI Governance Principles into Evidence for Agentic AI in Banking

This customer story presents the first FINMA-aligned technical blueprint, developed by LatticeFlow AI and Unique AI. It shows how evidence-based AI governance translates FINMA's regulatory principles into verifiable, decision-ready evidence for governing, deploying, and continuously overseeing agentic AI systems in production.

Access the Technical Blueprint for Governing Agentic AI in Banking

Quote

The blueprint we developed with LatticeFlow AI reflects our commitment to building AI that meets the expectations of Switzerland’s highly regulated financial sector, and can be deployed with confidence in practice.

- Dr. Sina Wulfmeyer

Chief Data Officer at Unique AI

Challenge

As agentic AI systems move into production, AI decision-makers in banks and insurers face a growing accountability gap: governance frameworks exist, yet teams responsible for risk, compliance, and AI often lack technical evidence of how AI systems actually behave, especially once deployed.

This challenge is particularly visible in regulated environments. FINMA, Switzerland’s financial market supervisory authority, sets clear expectations around accountability, reliability, and human oversight. However, it does not prescribe how AI systems should be technically assessed against those principles. As a result, companies are left to interpret governance requirements without a concrete technical reference.

The same gap affects AI vendors serving financial services. Unique AI is a fast-growing fintech company supporting more than 30,000 financial professionals across 40+ institutional clients, with solutions used by leading Swiss and international institutions including Pictet, Julius Baer, BNP Paribas, SIX Group, and Swiss Life.

As adoption of agentic AI accelerated, Unique AI faced a critical question shared by many AI providers: How do you provide customers with the verifiable evidence they need to prove that AI systems are governed responsibly?

To address this, Unique AI partnered with LatticeFlow AI to develop the first FINMA-aligned technical blueprint, translating regulatory expectations into measurable, decision-ready technical assessments.

Solution

This FINMA-aligned technical blueprint demonstrates how evidence-based AI governance can be operationalized for an agentic AI system already in production.

Developed by LatticeFlow AI in collaboration with Unique AI, the blueprint translates regulatory principles into deep technical assessments that reflect how agentic AI systems behave in real-world banking environments.

Rather than relying on static policies or checklists, the assessment evaluates:

➜ System reliability and robustness.

➜ Explainability of outputs.

➜ Human oversight and intervention mechanisms.

➜ Ongoing risk monitoring over time.

By mapping FINMA principles to concrete technical controls, the blueprint produces verifiable, decision-ready evidence that institutions can use across the AI lifecycle, from approval to continuous oversight.
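The mapping described above, from governance principles to technical controls that yield evidence, can be sketched in a few lines of code. This is purely an illustration of the general pattern, not LatticeFlow AI's actual tooling: every name here (`EvidenceRecord`, `run_assessment`, the sample check) is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Illustrative sketch only: each governance principle maps to one or more
# automated checks, and each check result becomes a timestamped evidence
# record that can be reviewed during approval or ongoing oversight.

@dataclass
class EvidenceRecord:
    principle: str      # e.g. "reliability", "human_oversight"
    check: str          # name of the technical check that ran
    passed: bool
    detail: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def run_assessment(
    checks: dict[str, list[Callable[[], tuple[bool, str]]]]
) -> list[EvidenceRecord]:
    """Run every check for every principle and collect the evidence."""
    evidence = []
    for principle, fns in checks.items():
        for fn in fns:
            passed, detail = fn()
            evidence.append(
                EvidenceRecord(principle, fn.__name__, passed, detail)
            )
    return evidence

# A trivial stand-in check; a real reliability check would compare the
# agent's output against the source documents it retrieved.
def answer_is_grounded() -> tuple[bool, str]:
    return True, "response cites retrieved policy document"

report = run_assessment({"reliability": [answer_is_grounded]})
for rec in report:
    print(rec.principle, rec.check, rec.passed)
```

The point of the pattern is that evidence is produced by executing checks against the live system, rather than asserted in a policy document, so the same assessment can be re-run as part of continuous oversight.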

Results

This blueprint on evidence-based AI governance for agentic AI in financial services delivers practical outcomes for both financial institutions and AI vendors.

For banks and insurers, it provides a concrete basis to evaluate, deploy, and continuously oversee agentic AI systems in line with regulatory expectations, grounded in technical evidence, not assumptions.

For AI vendors serving financial services, it offers a way to build customer trust at scale, backed by verifiable evidence that supports AI governance and compliance requirements.

Finally, this is the first FINMA-aligned technical assessment applied to an agentic AI system in production.

Frequently Asked Questions

FINMA-Aligned AI Governance for Agentic AI:
Key Questions Answered

What is FINMA, and why does it matter for AI governance?
FINMA is Switzerland’s financial market supervisory authority. It defines expectations for accountability, reliability, and human oversight of AI systems, making it a critical reference for AI governance in financial services.

How is this blueprint different from existing governance frameworks?
This blueprint translates regulatory principles into technical assessments applied to agentic AI systems in production, delivering verifiable, decision-ready evidence, not policies or documentation alone.

Who is the blueprint for?
It is designed for AI decision-makers in banks and insurers, as well as AI vendors building solutions for financial services who need to demonstrate responsible AI governance and compliance.

Is the blueprint only relevant for Swiss institutions?
No. While aligned with FINMA, the blueprint provides a practical reference for evidence-based AI governance that is relevant to regulated financial institutions globally.

What does the assessment evaluate?
The assessment examines real agentic AI system behavior, including reliability, robustness, explainability, human oversight mechanisms, and ongoing risk monitoring.

Is this a theoretical framework?
No. This is a real technical assessment applied to an agentic AI system already in production, not a conceptual or academic model.

What do I get when I access the blueprint?
You receive access to the first FINMA-aligned technical blueprint and assessment, showing how regulatory expectations can be operationalized through deep technical assessments.

Can the blueprint be applied to other AI systems?
Yes. The blueprint is designed as a reusable reference that can be applied to other agentic AI systems and financial services use cases.