LatticeFlow Welcomes Renowned AI Leaders at Davos’ World Economic Forum AI House

LatticeFlow Co-founder and ETH Zurich Professor Andreas Krause with Arvind Krishna (CEO of IBM), Alex Ilic (Executive Director of ETH AI Center), Daniel Naeff (ETH AI Center), and Mateja Kramer (ETH AI Center); Source: Alex Mundt / AI House Davos


This year’s World Economic Forum in Davos positioned AI safety as a central theme, with LatticeFlow leading a pivotal discussion at AI House Davos. The AI House, an initiative by the ETH AI Center and Merantix, served as the gathering ground for key figures in AI, including Gary Marcus (NYU), Yann LeCun (VP & Chief AI Scientist at Meta), Stuart Russell (UC Berkeley), and Arvind Krishna (Chairman and CEO at IBM), to delve into the latest AI developments and discuss responsible AI adoption.

LatticeFlow’s dedicated session on AI safety, “The Next AI Frontier: AI Safety Audits and Standards,” convened a notable panel, showcasing a broad spectrum of expertise from academia, industry, and governance.

The session’s objective was to address the pressing safety needs of companies adopting the latest AI advances. The roundtable featured Gary Marcus (NYU), Apostol Vassilev (National Institute of Standards and Technology), Thomas Stauner (BMW), Matt Fredrikson (CMU), Iwan Gloor (Gowago), Matthias Bossardt (KPMG), Kai Zenner (European Parliament), Christopher Nguyen (AI Alliance), Andreas Krause (ETH Zurich & LatticeFlow), and Pavol Bielik (LatticeFlow).

From Left to Right: Thomas Stauner (BMW), Matt Fredrikson (CMU), Iwan Gloor (Gowago), Gary Marcus (NYU), Matthias Bossardt (KPMG), Andreas Krause (ETH / LatticeFlow), Kai Zenner (European Parliament), Christopher Nguyen (AI Alliance), Apostol Vassilev (NIST), Pavol Bielik (LatticeFlow); Source: LatticeFlow

AI House Davos: The Next Frontier

Each panelist brought unique perspectives to the forefront, emphasizing the importance of trust and transparency in AI applications, particularly in high-stakes fields such as finance.

Iwan Gloor, CTO at Gowago, a company providing a car leasing platform that relies on an AI model to predict the residual value of cars, emphasized the importance of third-party AI assessments to build trust in their AI products: “We don’t really care about standards and regulations. We need third parties to assess our systems to build trust. Because, we lose money if we’re wrong. For us, it is absolutely critical that our business partners trust the models themselves to increase adoption.”

Pavol Bielik highlighted the necessity of embedding safety considerations from the inception of AI development, stating, “It is important that AI safety topics are incorporated from the start, not as an afterthought.”

Matthias Bossardt, Partner at KPMG, highlighted that AI standards and audits will help ensure a baseline level of safety assurance and instill trust in the industry: “We did a survey on AI trust, showing that 61% of executives do not really trust AI.”

Gary Marcus engages in a lively discussion at the AI safety roundtable, with attentive listeners Thomas Stauner (BMW), Matt Fredrikson (CMU), Matthias Bossardt (KPMG), Andreas Krause (ETH Zurich / LatticeFlow), and Christopher Nguyen (AI Alliance); Source: LatticeFlow

Looking ahead, the panelists aligned on the view that current AI systems based on deep learning methods have inherent limitations for which adequate mitigations do not yet exist.

Apostol Vassilev from NIST addressed the theoretical challenges surrounding AI safety, pointing out, “There are known theoretical results that show that no matter what kind of guardrails we employ, they are incomplete and can always be circumvented.” This remark served as a call to researchers and academics to explore approaches beyond current deep learning methods.

To further disseminate the insights gathered at the event, LatticeFlow presented a White Paper summarizing the key viewpoints of the panelists, aimed at extending the conversation on AI safety and its practical implementations.

Read the Viewpoints White Paper here.

If you missed the live event, you can watch the replay here: