This year’s World Economic Forum in Davos positioned AI safety as a central theme, with LatticeFlow leading a pivotal discussion at the AI House Davos. The AI House, an initiative by the ETH AI Center and Merantix, served as the gathering ground for key figures in AI, including Gary Marcus (NYU), Yann LeCun (VP & Chief AI Scientist at Meta), Stuart Russell (UC Berkeley), and Arvind Krishna (Chairman and CEO at IBM), who came together to delve into the latest AI developments and discuss responsible AI adoption.
LatticeFlow’s dedicated session on AI safety, “The Next AI Frontier: AI Safety Audits and Standards,” convened a notable panel, showcasing a broad spectrum of expertise from academia, industry, and governance. The session’s objective was to address the pressing safety needs of companies adopting the latest AI advances. The roundtable featured Gary Marcus (NYU), Apostol Vassilev (National Institute of Standards and Technology), Thomas Stauner (BMW), Matt Fredrikson (CMU), Iwan Gloor (Gowago), Matthias Bossardt (KPMG), Kai Zenner (European Parliament), Christopher Nguyen (AI Alliance), Andreas Krause (ETH Zurich & LatticeFlow), and Pavol Bielik (LatticeFlow).
Each panelist brought a unique perspective to the discussion, emphasizing the importance of trust and transparency in AI applications, particularly in high-stakes fields such as finance. Iwan Gloor, CTO at Gowago, a company whose car leasing platform relies on an AI model to predict the residual value of cars, emphasized the importance of third-party AI assessments in building trust in AI products: “We don’t really care about standards and regulations. We need third parties to assess our systems to build trust. Because, we lose money if we’re wrong. For us, it is absolutely critical that our business partners trust the models themselves to increase adoption.”
Pavol Bielik highlighted the necessity of embedding safety considerations from the inception of AI development, stating, “It is important that AI safety topics are incorporated from the start, not as an afterthought.”
Matthias Bossardt, Partner at KPMG, highlighted that AI standards and audits will help ensure a baseline level of safety assurance and instill trust in the industry: “We did a survey on AI trust, showing that 61% of executives do not really trust AI.”
Looking ahead, the panelists aligned on the view that current AI systems based on deep learning methods have inherent limitations for which adequate mitigations do not yet exist. Apostol Vassilev from NIST addressed the theoretical challenges surrounding AI safety, pointing out, “There are known theoretical results that show that no matter what kind of guardrails we employ, they are incomplete and can always be circumvented.” This remark served as a call to researchers and academics to explore approaches beyond current deep learning methods.
To further disseminate the insights gathered at the event, LatticeFlow presented a white paper summarizing the panelists’ key viewpoints, aimed at extending the conversation on AI safety and its practical implementation.
Watch the replay of the event: