www.industryemea.com
March 2024
Fraunhofer News
Solutions for efficient and trustworthy artificial intelligence
At the Hannover Messe 2024, the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS and the institutes of the Fraunhofer Big Data and Artificial Intelligence Alliance will be presenting two exhibits and several use cases centering on trustworthy AI solutions.
AI offers a wealth of potential for industrial manufacturing, spanning fields such as automation, quality checks, and process optimization. “We are currently receiving a lot of inquiries from companies that have developed AI prototypes and want to put them into series production. To make the scale-up a success, these AI solutions have to be tested systematically so it is also possible to identify vulnerabilities that did not become apparent in the prototype,” explains Dr. Maximilian Poretschkin, Head of AI Assurance and Certification at Fraunhofer IAIS.
Industry attendees at the Hannover Messe 2024 (April 22–26, 2024) can visit the Fraunhofer booth (Booth B24, Hall 2) and explore various exhibits and concrete use cases to learn more about applications and solutions for trustworthy AI and how to incorporate them securely and reliably into their business processes.
Exhibit 1: AI Reliability for Production: Assessment tools for systematic testing of AI models
One such application is a testing tool for AI models used in production or in mechanical and plant engineering. The tool can be used to systematically pinpoint vulnerabilities in AI systems as a way to ensure that they are reliable and robust. “Our methodology is based on specifying the AI system’s scope of application in detail. Specifically, we parametrize the space of possible inputs that the AI system processes and give it a semantic structure. The AI testing tools that we have developed in the KI.NRW flagship project “ZERTIFIZIERTE KI”, among others, can then be used to detect weaknesses in AI systems,” Poretschkin explains.
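The idea of parametrizing the input space and scanning it for weaknesses can be illustrated with a minimal sketch. The model, the parameter names, and the grid values below are all hypothetical stand-ins (the article does not describe the actual tool's interface): a semantic parameter grid spans the operating domain, and each grid point is checked for whether a small perturbation flips the model's prediction, which would mark an unstable region.

```python
# Hypothetical sketch of grid-based robustness testing. The "model" here is a
# stand-in for a trained AI component; real assessment tools are far more
# sophisticated, but the structure of the test is the same.
from itertools import product

def model(temperature: float, vibration: float) -> str:
    """Stand-in classifier for a machine's state based on sensor readings."""
    score = 0.04 * temperature + 0.5 * vibration
    return "fault" if score > 4.0 else "ok"

def find_weak_points(model, param_grid, epsilon=0.1):
    """Scan a semantic parameter grid; flag points where a small
    perturbation of all inputs changes the model's prediction."""
    names = list(param_grid)
    weak_points = []
    for values in product(*param_grid.values()):
        point = dict(zip(names, values))
        baseline = model(**point)
        perturbed = {k: v + epsilon for k, v in point.items()}
        if model(**perturbed) != baseline:
            weak_points.append(point)  # unstable region of the input space
    return weak_points

# Parametrized input space: each dimension has a semantic meaning.
grid = {
    "temperature": [20.0, 60.0, 98.5],  # degrees Celsius
    "vibration": [0.1, 0.5, 0.9],       # normalized amplitude
}
print(find_weak_points(model, grid))
```

In this toy setup, the scan flags the grid point near the decision boundary (high temperature, low vibration), where a tiny input change flips the prediction; such points are exactly the "weaknesses that did not become apparent in the prototype" that systematic testing is meant to surface.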
Fraunhofer and the other partners involved in the research project are working to develop methods for assessing the quality of AI systems. The AI assessment catalog provides companies with a practice-tested guide that enables them to make their AI systems efficient and trustworthy. A recent white paper also addresses the question of how AI applications built on generative AI and foundation models can be assessed and made secure.
Exhibit 2: “Who’s deciding here?!”: What can artificial intelligence decide — and what decisions can it not yet make?
The collective exhibit titled “Who’s deciding here?!”, which the Fraunhofer Big Data and Artificial Intelligence Alliance will be presenting at the Hannover Messe 2024, also centers on trustworthy AI. The exhibit ties in with freedom, the theme set by the German Federal Ministry of Education and Research (BMBF) for its Science Year 2024. How do technologies like artificial intelligence influence our freedom to make decisions? How trustworthy are the AI systems that will likely be used increasingly in applications involving sensitive data, such as credit checks?
The exhibit is designed as an interactive game inviting attendees to reflect on everyday uses of AI. Which decisions can we — and do we want to — leave to AI algorithms, and which decisions would be better made ourselves? Major topics here include object recognition in autonomous driving, the risk of discrimination by AI, and identifying fake news. Sebastian Schmidt, a data scientist working on trustworthy AI at Fraunhofer IAIS, explains: “The game lets everyone experience where AI can be a helpful assistant and where it’s better for humans to participate in the decision. The goal is always to arrive at the right decision without taking decision-making authority and independence away from humans.” This year, the interactive exhibit will also tour aboard the exhibition ship MS Wissenschaft.
www.fraunhofer.com