LatticeFlow AI at the 2025 Gartner® Market Guide for AI Governance Platforms. Read more.

logo
logo

Use Cases

Resources

Company

NEWS ARTICLE

AI Governance

Share

Is Swiss LLM Ready for Enterprise Adoption?

First independent evaluation finds Swiss LLM EU AI Act–ready and 99% secure with guardrails in place.

Cover Image

Released today, the Swiss LLM sets a milestone as the first fully open large language model, developed by ETH Zurich, EPFL, and the Swiss National Supercomputing Centre (CSCS). With multilingual support across more than 1,000 languages and datasets curated to respect copyright and data-protection rules, it is designed for transparency and reproducibility.

For enterprises, however, the key question is whether the model is secure and compliant with the EU AI Act. This post provides the first independent evaluation of the Swiss LLM for enterprise adoption.

Swiss LLM: 99% Secure with Risk Mitigation Strategies

 

Swiss LLM shows modest baseline security, similar to other open models. But with the right guardrails in place, it reaches enterprise-grade performance: 99% attack rejection with 98.4% quality of service (QoS).

  • Pre-Mitigation (no guardrails): Swiss LLM rejects up to 21.2% of prompt injection and jailbreak attacks, typical for open-source models such as Mistral, Qwen, Llama, or DeepSeek.
  • Post-Mitigation (with guardrails): Swiss LLM security score jumps to up to 99%, blocking nearly all attacks. This level matches or exceeds proprietary models, showing that open models can be hardened to enterprise standards.
  • Quality of Service (QoS): Remains stable at 98.4%, confirming that guardrails preserve usability and do not block normal user queries.

 

Security Comparison: Swiss LLM vs Other Popular Models

The table below compares the security scores of Swiss LLM (Apertus) with other leading open and proprietary models, showing baseline security before guardrails (base model) and after guardrails are applied:

It is important to note that the scores above are affected by design choices made by the Swiss LLM team. In particular, rather than following the common approach employed by the commercial model providers that perform explicit safety and security alignment, Swiss LLM instead opted for a data curation approach that focuses on filtering dangerous data up front. In other words, the goal is to prevent the model from learning dangerous things in the first place, rather than filtering it in post-processing steps or via alignment.

EU AI Act Compliance: What Sets the Swiss LLM Apart 

The Swiss LLM is the first large model built to meet the EU AI Act obligations. Unlike other open models such as Meta’s LLaMA or DeepSeek, which release weights but not the required training-data summary or copyright policy, ETH Zurich and EPFL deliver both, setting a new European benchmark for responsible AI. 

The Swiss LLM is trained below the systemic-risk threshold (10^25 FLOPs) and therefore falls under the baseline obligations in Article 53 of the EU AI Act, which requires GPAI providers to:

  • Publish a training-data summary using the European Commission’s template.
  • Adopt a copyright-compliance policy aligned with EU copyright and text/data-mining rules.
  • For closed models:  Maintain technical documentation and provide information to downstream providers. Open-source GPAI models like Swiss LLM are exempt from these duties.

 

ETH Zurich & EPFL explicitly address the obligations of this article by:

  • Publishing a detailed training-data summary as well as reproducible pipelines
  • Adopting a copyright-compliance policy aligned with Swiss and EU laws.

 

In addition, and despite not being required, the Swiss LLM team has conducted thorough performance, safety, security, and robustness evaluations and made the results publicly available to ensure transparency and trust.

¹Training code classification:

  • Open: Full training pipeline is released, reproducible end-to-end.
  • Partially disclosed: Research papers and/or partial code or tools are available, but the full training pipeline cannot be reproduced.
  • Closed: Little or no information about the training process is provided.

²Training data transparency classification:

  • High: The dataset is fully available, with accompanying code or instructions for loading and reproducing it.
  • Mixed: The data curation methods are described, but the underlying data sources are unclear or not disclosed in full.
  • Low: Neither the data curation methods nor the data sources are explained.

Next Steps

Do you plan to adopt Swiss LLM or other open models in your enterprise? Curious about how to deploy them securely, responsibly, and in line with the EU AI Act?

Contact us to learn more!

Methodology: How Was the Security Evaluation Conducted

The evaluation was performed using LatticeFlow AI Insights methodology for evaluating the security of general-purpose AI models and guardrails. The methodology is aligned with leading industry AI security standards: OWASP LLM Top 10 and MITRE ATLAS, which are two of the most widely recognized for identifying and classifying emerging adversarial threats for LLM applications.

In short, the methodology evaluates three key metrics that matter most for enterprise AI deployment with the right guardrails:

  • System Security (Pre-Mitigation): Baseline security of the model against prompt injection and jailbreak attacks, before any guardrails are applied, measured by the percentage of malicious attacks correctly rejected.
  • System Security (Post-Mitigation): Security of the model after deploying guardrails, measured by the percentage of malicious attacks correctly rejected.
  • Quality of Service (Post-Mitigation): Impact of guardrails on normal user queries – ensuring security improvements don’t come at the cost of blocking benign user prompts.

 

In short, the first metric captures baseline security without deployed guardrails, while the latter two measure secure deployment after guardrails are in place.

Guardrails Tested 

The LatticeFlow AI team evaluated a broad set of open-weight and proprietary API-based guardrails to measure their effectiveness in securing Swiss LLM without degrading quality of service.

  • Open-Weights Guardrails: Guardrails.ai, Jailbreak / JailbreakLarge, LastLayer, Protect AI, Llama-Prompt-Guard (86M & 22M), Arch Gateway, WalledGuard Community,
  • Commercial Guardrails: AWS Bedrock Guardrail, Anthropic Claude Haiku 3 / 3.5, IBM Granite Guardian, Lakera Guard, Microsoft Prompt Shield.

 

This comprehensive coverage provides the basis for identifying which guardrails deliver the best balance between security and usability for enterprise deployment.