Illustration of a person sitting at a computer
Photo: Carolina Svensson

Machine learning - a physics prize of its time


How does memory work? This year's Nobel Prize in Physics recognises the methods that have made it possible to recreate the brain's unrivalled capacity for memory in machines. This is known as machine learning, or artificial intelligence.
“Developments are moving at breakneck speed now, and the work awarded shows that it is possible to understand the technology,” says Bernhard Mehlig, professor at the University of Gothenburg.

This year's Nobel Laureates in Physics are two pioneers in modern machine learning. John Hopfield and Geoffrey Hinton took old theories about how human memory works and created the first machine learning algorithms using artificial neural networks, which later evolved into today's powerful methods.

Inspired by the brain

Artificial neural networks for machine learning are inspired by how the brain works. The connections between neurons in the brain take on different strengths, and as the network is trained on different tasks, these connections come to store memories across the network. Since the 1940s, scientists have been trying to understand how this principle could be used to build machines that can learn.
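
As an illustration, here is a minimal Python sketch of a single artificial neuron of the kind studied since the 1940s: each input is scaled by a connection weight, and the neuron fires if the weighted sum crosses a threshold. The weights and inputs below are invented for the example.

    import numpy as np

    def neuron(inputs, weights, threshold=0.0):
        # Fire (1) if the weighted sum of the inputs exceeds the threshold.
        return 1 if np.dot(inputs, weights) > threshold else 0

    # Learning means adjusting the weights; these values are made up.
    print(neuron(np.array([1.0, 0.0, 1.0]), np.array([0.4, -0.2, 0.3])))   # prints 1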

When Nobel laureate John Hopfield conducted his experiments with neural networks in the 1980s, he created an artificial network that could store images and recall them. Hopfield showed that recall worked even when some of an image's pixels were wrong. To achieve this, he used methods from statistical physics developed for so-called spin glasses.
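
The idea can be sketched in a few lines of Python. The example below stores one pattern using a Hebbian-style rule and then recovers it from a copy with flipped pixels by repeatedly pushing each neuron towards agreement with its weighted input. The pattern size and the number of corrupted pixels are illustrative choices, not taken from Hopfield's experiments.

    import numpy as np

    def train(patterns):
        # Hebbian rule: each connection weight reflects how often two
        # neurons agree across the stored patterns; no self-connections.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0)
        return W

    def recall(W, state, steps=10):
        # Update every neuron towards the sign of its weighted input;
        # the state settles into the nearest stored pattern.
        for _ in range(steps):
            state = np.sign(W @ state)
        return state

    rng = np.random.default_rng(0)
    pattern = rng.choice([-1, 1], size=25)   # a 25-"pixel" image
    W = train(pattern[None, :])
    noisy = pattern.copy()
    noisy[:4] *= -1                          # corrupt four pixels
    print(np.array_equal(recall(W, noisy), pattern))   # True: memory restored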

Geoffrey Hinton then developed the technique further, adjusting the connections so that the network could recognise characteristic features in a data set. Hinton showed why hidden neurons were needed for this; today's deep networks contain many layers of them. He also devised a technique for adjusting the network's connections in order to train it. A famous example was teaching a network to distinguish between a T and a C.
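
The training technique in question is backpropagation, in which the network's error is sent backwards through the hidden neurons to adjust every connection. The Python sketch below illustrates it on the classic XOR task, which, like the T-versus-C example, cannot be solved without hidden neurons; the layer sizes, learning rate and iteration count are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])   # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # Forward pass through the hidden layer to the output.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: the error signal flows back through the
        # hidden layer, giving a gradient for every connection.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]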

Portrait of Bernhard Mehlig
Bernhard Mehlig, professor of complex systems at the University of Gothenburg.
Photo: Malin Arnesson

Large amounts of data were missing

“His scientific paper from 1986 is fantastic. Almost everything about machine learning was already in it: how to do it, what is required, and why it works,” says Bernhard Mehlig, professor of complex systems at the University of Gothenburg.

There are several reasons why it has taken 30 years for these neural networks to develop to the point where they have become part of our everyday lives.

“In the beginning, the large amounts of data required to make the technology work were lacking. With the advent of the internet, large, carefully indexed image banks emerged on which the networks could be trained to recognise different motifs. Computers also needed more powerful processors to handle these data sets,” says Mats Granath, senior lecturer in physics at the University of Gothenburg.

Portrait of Mats Granath
Mats Granath, senior lecturer in physics at the University of Gothenburg.
Photo: Private

A threat in the technology?

Today, developments in machine learning and artificial intelligence are moving very fast. The question is, can we keep up? This year's Nobel laureate Geoffrey Hinton has also expressed concern that neural network technology could be used for the wrong things.

“Hinton sees a threat in the technology. We need to know the strengths and weaknesses of machine learning and AI,” says Mats Granath.

Bernhard Mehlig started teaching machine learning at the University of Gothenburg in 2002, but at first it was not much fun.

“There were few applications and interest was low. But around 2015 there was a sudden rush to the course, and this year's Nobel Prize makes it especially fun,” he says, before hurrying off to the afternoon's lecture on the subject.

An interdisciplinary subject

Portrait of Asad Basheer Sayeed
Asad Basheer Sayeed, Senior Lecturer at the University of Gothenburg.
Photo: Monica Havström

This year's Nobel Prize in Physics is of interest to researchers in many disciplines at the University of Gothenburg. Here you can read some comments.

Several researchers at the Department of Philosophy, Linguistics and Theory of Science work with language models, so-called large language models, which are ultimately based on the Nobel Laureates' research. Among other things, the researchers at Humanisten, the university's humanities building, are investigating what these language models can and cannot do, how to build AI that can explain its judgements and decisions, and how to combine rule-based language models with neural ones.

“Geoffrey Hinton has a controversial view of artificial intelligence and its direction, but his work forms the basis of what we are now teaching in the Language Technology programme,” says Senior Lecturer Asad Basheer Sayeed.


Portrait of Stefano Sarao Mannelli
Stefano Sarao Mannelli, assistant professor at the University of Gothenburg.
Photo: Private

The Department of Computer Science and Engineering conducts research across the breadth of machine learning and AI, ranging from algorithms and large language models to the application of AI in healthcare and research.

Stefano Sarao Mannelli is an assistant professor in the research area of Data Science and AI:

“Hinton and Hopfield's research re-energised the field of neural networks after a bit of an uphill battle in the 1960s, paving the way for fundamental discoveries in machine learning, neuroscience and cognitive science. AI has now become part of the toolbox that scientists use to make discoveries, and this is affecting all fields, including physics, chemistry and biology. Among those working with AI there is a clear symbiotic relationship with science, and this prize is further proof of that. By highlighting the importance of physics in machine learning, I hope that more physicists will choose to use their knowledge to understand many of the unsolved problems we have in AI.”

Article by: Olof Lönnehed, Monika Havström and Natalija Sako