
Peeking inside AI brains: Machines learn like us

New research reveals a surprising geometric link between human and machine learning. A mathematical property called convexity may help explain how brains and algorithms form concepts and make sense of the world.

A new connection between human and machine learning has been discovered: while conceptual regions in human cognition have long been modelled as convex regions, Tetková et al. present new evidence that convexity plays a similar role in AI. Pretraining by self-supervision leads to convex conceptual regions, and the more convex the regions are, the better the model learns a given specialist task during supervised fine-tuning.
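To make the idea concrete, here is a minimal sketch of one way to estimate how convex a concept's region is in a model's embedding space: sample pairs of embeddings that share a concept label, walk along the straight segment between them, and count how often the interpolated points still lie closest to that concept. The function, array names, and scoring heuristic below are illustrative assumptions, not the procedure used by Tetková et al.

```python
import numpy as np

def convexity_score(embeddings, labels, concept, n_pairs=1000, n_steps=5, rng=None):
    """Estimate how convex one concept's region is in an embedding space.

    For random pairs of same-concept embeddings, points are sampled on the
    straight segment between them; the score is the fraction of sampled points
    whose nearest labelled embedding also belongs to the concept.
    (Illustrative heuristic only.)
    """
    rng = np.random.default_rng(rng)
    idx = np.flatnonzero(labels == concept)
    inside, total = 0, 0
    for _ in range(n_pairs):
        # Pick two distinct embeddings of the same concept.
        a, b = embeddings[rng.choice(idx, size=2, replace=False)]
        for t in np.linspace(0.1, 0.9, n_steps):
            point = (1 - t) * a + t * b
            # The nearest labelled embedding decides which concept the point falls in.
            nearest = np.argmin(np.linalg.norm(embeddings - point, axis=1))
            inside += labels[nearest] == concept
            total += 1
    return inside / total

# Toy usage: two Gaussian "concepts" in a 2-D embedding space.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
lab = np.array([0] * 200 + [1] * 200)
print(convexity_score(emb, lab, concept=0, n_pairs=200, rng=0))
```

A score close to 1 indicates that straight lines between same-concept points rarely leave the concept's region, i.e. the region is close to convex in the Euclidean sense.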

About the project

The research was carried out within the research project “Cognitive Spaces – Next generation explainable AI”, funded by the Novo Nordisk Foundation.

The project’s aim is to open the machine learning black box and build tools that explain the inner workings of AI systems with concepts that can be understood by specific user groups.

Read more: Cognitive spaces - Next generation explainability - Data Science 

Contact

Lars Kai Hansen
Professor, Head of Section
Department of Applied Mathematics and Computer Science
Phone: +45 45253889

Lenka Tetková
Postdoc
Department of Applied Mathematics and Computer Science