
LatticeFlow AI Joins the U.S. AI Safety Institute Consortium


Zurich, Switzerland – April 23, 2024 – LatticeFlow AI, the leading platform empowering Artificial Intelligence (AI) teams to build performant, safe, and trustworthy AI solutions, proudly announces that it has joined the U.S. AI Safety Institute Consortium (AISIC). LatticeFlow AI researchers will support Working Group #3, focusing on capability evaluations. Together with the National Institute of Standards & Technology (NIST) and other consortium members, LatticeFlow AI will support the development of methods, benchmarks, and testing environments that help organizations operationalize the practices outlined in NIST’s AI Risk Management Framework (RMF).

Dave Henry, SVP of Business Development at LatticeFlow AI, said: “AI safety programs are interdisciplinary in nature, requiring a broad range of management and technical skills to execute. The Consortium brings diverse experts together to create durable and innovative practices that promote trustworthy AI. We look forward to contributing our knowledge of AI model safety and collaborating on new approaches for scalable evaluations.”

Executive Demand for Trustworthy AI

Despite the impressive accuracy of AI models demonstrated in pilots and proof-of-concept projects, building AI solutions that perform reliably on real-world data remains an immense challenge. This affects both the technical teams building and delivering AI solutions and the management teams that need to quantify risks and approve AI solutions for use in business-critical operations. As a result, 85% of models never make it into production, and of those that do, 91% degrade over time.

“Unfortunately, high-value AI deployments are being delayed due to developer delays, insufficient data governance, and the inability to quantify the risks associated with the use of AI models,” stated Randolph Kahn Esq., President of Kahn Consulting Inc. “Consortia such as AISIC can produce detailed guidelines and best practices, and when coupled with LatticeFlow AI’s technology, this can lead to a significant reduction in the time and cost required to complete thorough risk assessments and unlock the value of AI systems.”

The U.S. AI Safety Institute Consortium

AISIC was established by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) to contribute to the priority actions outlined in U.S. President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. These include developing red-teaming guidelines, technical assessment standards, and risk management practices, among other key elements to the development of trustworthy AI and its responsible use.

LatticeFlow AI's Contributions to Making AI Safe and Trustworthy

LatticeFlow AI’s membership in AISIC follows a series of contributions by the LatticeFlow AI team toward the goals outlined by AISIC and President Biden’s Executive Order. Since 2020, LatticeFlow AI has been an invited technical contributor to key AI safety initiatives at the International Organization for Standardization, where it has actively contributed to the working group focused on the development of standards for trustworthy AI. Earlier this year, LatticeFlow AI hosted renowned AI leaders at the World Economic Forum’s AI House, gathering key figures in AI, including Gary Marcus (NYU), Apostol Vassilev (NIST), Kai Zenner (European Parliament), Matthias Bossardt (KPMG), and Thomas Stauner (BMW), among other industry experts, to delve into the latest AI developments and discuss responsible AI adoption.

From Standards to Practice: First AI Assessment for a Leading Swiss Bank

Moving beyond the development of AI guidelines and standards, this year LatticeFlow AI announced an industry-first technical AI Assessment for Migros Bank, a leading Swiss bank, demonstrating how these AI standards are used in practice to mitigate risk and ensure regulatory compliance for business-critical AI solutions. The results of this assessment, alongside a concrete blueprint for implementing AI governance and technical AI assessments at enterprises and governments, were presented at a dedicated event hosted at the ETH AI Center.

LatticeFlow AI’s Commitment to the U.S. and NIST’s AI Safety Institute Consortium

With its contributions to AISIC, LatticeFlow AI will continue its commitment to helping U.S. government agencies such as the U.S. Army ensure the safety and trustworthiness of mission-critical AI systems. Last year, White House officials announced LatticeFlow AI as the first-place winner in the “red teaming” AI Privacy Prize Challenge category. Subsequently, the company announced a strategic expansion into the U.S. and a three-year strategic engagement with the U.S. Army to build next-generation resilient AI solutions for mission-critical defense use cases.

“Joining AISIC will allow us to accelerate and broaden our impact by aligning our efforts towards ensuring AI trust and safety with global AI leaders, enterprises, and governments,” said Petar Tsankov, Co-founder and CEO of LatticeFlow AI.

Interested in learning more?

If you are interested in conducting an AI assessment, book a meeting with a LatticeFlow AI expert.
