8 August 2022
How much of the human brain do we know? Why is forcing thousand-year-old concepts onto the functionality of the brain a dead end? What is the real function of our brain? What can a neuroscientist learn from artificial intelligence research? Why is finding dead ends a good thing? We interviewed one of the most renowned neuroscientists, György Buzsáki, who started his career in the Physiology Department of the University of Pécs, during his visit to Pécs.
by Miklós Stemler
Before we started the interview, I was watching your interactions with the lecturers and researchers of the Physiology Department of your alma mater, and your enthusiasm seems to be contagious. You have been doing this research for fifty years; one would think that even the enthusiasm of the most committed researchers would dwindle over time…
On the contrary, it increases. Curiosity is a serious curse: if one is curious, one cannot get rid of it, since the more curious we are, the more curious the happenings of the world make us. We discover more and more things, and we want to conduct more and more experiments. And if we cannot do them ourselves, we pass all of this on to the younger generation, and their enthusiasm rekindles ours. This is the most beautiful part of our profession.
So scientific curiosity gives birth to more curiosity, and it is undoubtedly a fact that we have much to be curious about when it comes to the brain. I interviewed the Nobel Prize-winning neuroscientist Thomas C. Südhof a few years ago, and he estimated that we might know a few percent of our brain, which is still a huge leap from the 0.1 percent of half a century prior. Do you agree with this assessment?
My friend Thomas, I believe, is much too optimistic: I would say a much lower percentage. While it feels as if brain research has never progressed this quickly before, this is partly a “statistical distortion” for outsiders. It would be more accurate to say that the list-making has never been this quick before.
It is a natural fact that we have made much progress with things that do not need a lot of creativity, only money and diligent work. How many types of cells there are in the brain; what kind of connections exist between them; what genes are expressed in the brain, and how this changes during sleep and wakefulness: these are easy to document, so we feel like we have progressed, and that much is true. But new concepts and viewpoints are much harder to substitute with technology.
It is relatively easy to write something for a significant scientific journal with the help of modern technology – even if the basic idea has already been published elsewhere. We do not add anything new in essence, we just repeat the results in a nicer, more embellished way. We can definitely conduct better experiments than Newton – but this does not make us Isaac Newton.
We can definitely see more of brain functions, things we could only have dreamt of 30 years ago: for example, we can follow the happenings within the brain in real time with the help of functional MRI technology. But we know a lot less about why things happen…
Moreover, we see what we already know. For example, the “home” of memory in the brain is the hippocampus. For emotions, it’s the amygdala. For decision-making and imagining the future we have the prefrontal cortex – and so on. I call this boxing: we have a lot of preconceptions about what we are looking for, therefore we can find them with the help of fMRI. But we do not know what we are really seeing, and what the connections are.
To give an example: a research group is studying the neural structures of memories and which part of the brain houses this function – here I am being a bit sarcastic, since nothing has a home in the brain.
They find multiple places. Later, another group examines the area structures of imagination, and a third deals with planning. All three groups find the same areas. So three conventionally separate thought processes show the same results in an fMRI. We are working with concepts and categories that have existed since before the beginnings of brain research, all the way back to the ancient philosophers – however, many of these concepts are just imaginary.
We cannot be sure that these functions are truly separated in the brain the way our preconceptions say they are. For me, this is a bit as if everybody knew that the Earth is flat and all of science were just trying to prove it. So what if these notions – memory, imagination, planning – are not even real as far as brain functions go? They are certainly useful in human communication, they allow us to discuss our opinions, but they are less useful when it comes to the nervous system.
So we are trying to force thousand-year-old concepts on things we see…
Yes, I think this is the dead end of our field of study. I wrote a book about this not too long ago, arguing that we should get to know the brain from the inside, not through categories forced on it from the outside.
And we have not even mentioned the so-called hard problem of consciousness: the idea that even if we write down the physical and biological functioning of the brain with the currently available scientific methods, we are still missing the step of how consciousness and thinking arise from it. What do you think, how “hard” or “easy” a problem is consciousness?
I fully agree with this question. I am personally highly interested in time and space, and I read as many articles and books about this topic as I can. Twenty years ago, almost all of them closed with God; now most of them end on consciousness. There is a mystical philosophy according to which consciousness is not a product of the brain, but the universe itself has its own consciousness – and interestingly, many physicists agree with this.
I believe that every level has its own appropriate explanations. We do not have to explain the patellar reflex on the level of quantum physics. Roger Penrose’s idea that we should explain consciousness by the resonance of microtubules in the projections of brain cells is complete nonsense in my view, even if it comes from a scientist renowned in his own field of study. You could say, of course, that then I should come up with a better theory, but any one of us could come up with something, and it would have the same value. Right now, there is no metric for which theory is better or worse than another.
There are several difficult problems surrounding consciousness; the most difficult is “feeling”, as my colleague at New York University, David Chalmers, put it: what does it mean that I feel that something is green, or that something hurts?
By the way, I prefer to chew on problems that have a chance of being solved within my lifetime, and I leave the more difficult problems for others to take care of. Maybe a seemingly unsolvable problem stops being one when seen from another viewpoint. Let me give you an example. The theory of vitalism was very popular in the 19th century. According to this theory, there is a fundamental difference between living and inanimate things, and some sort of “élan vital”, or spark of life, is necessary for inanimate matter to become alive.
It was said that scientists had to solve this difficult problem in order to progress in getting to know the world and ourselves. However, genetics replaced this: we answered the question from a different angle, using DNA and describing the basic molecular processes of life. So we looked at the problem from a different point of view, and the problem ceased to exist. We barely even mention vitalism now, and it was the biggest question of biology just one century ago.
Today, one of the biggest questions is that of consciousness. Maybe this will also disappear if we start looking at it from a different framework – for example, that the brain was not created by nature for getting to know the world, but for the use of its owner.
And this is the concept you elaborated on in your latest book: that the main function of the brain is not observing the outside world and processing the information coming from it, but creating constructs about it and learning about it through actions.
Yes, that is true. One often ends up in dead ends during research, and a way out has to be found. A young researcher always inherits the problems to be solved from their mentor and starts to deal with them; those are the problems they will find important. And if the funding agencies also find these areas important, then one can be occupied with them for decades. I was a young researcher like that too, because I dealt with problems that let me advance. But in the meantime, more and more questions pile up to be answered.
Let’s take sight. How do we see? Neuroscience has discovered cells that react to horizontal and vertical lines and to colour, so let’s follow this problem. Eliminate all eye movement in the laboratory animal so that we can control what it sees, and record all of this. But the question is: how many such experiments would be needed to fully describe sight? I think this is a dead end, because the essence of sight is eye movement; the onlooker has to decide whether the world around them is changing or they have created the change themselves.
According to the classical view, sight allows us to get to know the world, and we can differentiate between good and bad. However, differentiation requires decisions, and this is where the question arises: where is this decision made in the brain? By this logic, there is an as-yet-unknown area between the input and the output that we can call by multiple names: free will, central processor, black box. We take it as evident that decision-making exists, since we are constantly making decisions, but this is not so simple at the neural level.
We say that we did something after making a decision about it, but many experiments point out that this is not entirely correct. For example, we are driving a car, a deer jumps out in front of us and we step on the brake – hopefully saving the animal. We will tell this story as “I saw that the deer ran out in front of me, so I braked, and I didn’t hit it”, and this is not true: it can be proven that we stepped on the brake even before recognising the animal.
So we act before conscious perception?
Yes, the decision was made at the subconscious level. We know how long conscious perception takes (about half a second), and that would have been too slow to stop. Let’s look at another example. Many people have experienced typing in a document when the software slows down: even though they keep typing, the letters do not appear on the screen. This is very annoying, since we do not get the expected visual response to our action.
Let’s say that this becomes the permanent state, and the text we type always appears with a few seconds’ delay. You can get used to this, and after a few days you will not even notice it. But then someone repairs the software without your knowledge, and now the letters appear on time. Surprisingly, it will feel as if someone else were typing instead of us. It will seem as if the letters appear even before we press the keys. This experiment raises difficult questions. Who is the entity executing the action, what creates the self, and where does decision-making take place?
If I’m guessing correctly, this is related to the concept that the brain is not primarily reacting, but making predictions about the effects and consequences of our actions, since the outside world is – mostly – predictable.
I believe that actions are the most important: without them, there is no consciousness, no recognition, no perception, nothing. This is the essence of thinking as well, which in my understanding is an extended action. I am not currently undertaking an action, but in some time, what I think will become an action. Based on the traditional outside-in idea, information arrives in the brain, gets somewhere, and then becomes output. But if we study the brain, we see something different: the output sends a signal back to the sensory system so that we know that the change in the outside world is coming from us.
Even simpler nervous systems work like this: a fly can predict what is going to happen in its surroundings in the next 100–200 milliseconds. The problem arises when the well-functioning ecological environment changes unexpectedly. Every year around September 11, environmental activists protest against the site of the two towers being lit up at night. The intention behind the lights is nice, but they confuse birds, and many of them fly to their deaths because their brains give them wrong predictions in the changed environment.
The complexity of predictions can be increased by adding new layers, just like in the deep learning systems of artificial intelligence. So a more advanced nervous system like ours is capable of longer-term predictions in an environment containing many more variables. Everything has to work even if we shut out the outside world. In these cases, the motor system does not send feedback from our body; instead, the action system of the brain offers a short-circuited instruction to the rest of the brain. We can call this the internalisation of action – or in other words: planning, thinking.
You have mentioned artificial intelligence (AI), and I know that you also deal with this area. How do you see the current state of artificial intelligence research, and what connection is there between the two fields? How much can a neuroscientist profit from advances in AI, and how much can AI researchers gain from your results?
This is a difficult question to answer. On one hand, you need to know that many AI researchers do not deal with the big questions around consciousness; they are more preoccupied with practical problems. On the other hand, what we talked about earlier is also relevant here: they are forming their expectations and the things they hope to see based on pre-existing concepts. Imagine what could have happened if, instead of artificial intelligence, we had named the technology “robotic support software”, without the word “intelligence”.
In my opinion, there is not much difference between a steam engine and artificial intelligence: both are tools in the service of humanity that expand our horizons. The question of consciousness does not arise with a von Neumann computer, and the current deep learning AI methods work on the same principle; it is just that so many computing processes happen simultaneously in the latter that it seems unreal.
As I have already mentioned, few AI researchers deal with the achievements of brain research, but there are certainly some who pay attention to what they could use for developing machine intelligence – and we also keep an eye on their field. When we run a lot of data through a deep learning system again and again to teach it to recognise a human face, that learning is completely different from what happens in the human brain, where one meeting is enough to remember someone’s face. These AI systems are more similar to the motor processes of the cerebellum; episodic memory, generalisation and comparison (more related to the cerebrum) are weak points for AI. From the practical viewpoint of brain research, the classification systems created by deep learning technologies are very useful for understanding and simplifying data from the brain.
You estimated at the beginning of our talk that we only know how a small percentage of the brain works, which means there is an extraordinary amount of work to be done in brain research. Then you said that you like dealing with problems that have a chance of being solved in your lifetime. I sense a bit of a contradiction here.
There is no contradiction, because I do not want to get to know the whole brain, just some of its functions. For example: how sleep influences memory, why and how wakefulness affects our behaviour, and why sleep disturbances are a recurring symptom in many psychiatric illnesses. These questions never even appeared in the conventional outside-in viewpoint; it cannot do much with sleep at all, since sleep has no function in a system where the brain is an organ that observes and analyses the outside world – this is why the idea of sleep as a time to rest and recharge was introduced.
The main element of the inside-out approach is a self-creating process, separate from the outside world, whose changes can cause illnesses – and not the other way around. It is not that we sleep badly because we have depression; rather, we have depression because something is wrong with our sleep processes.
We published an article about a hippocampal pattern that has been known for a while: sharp-wave ripples. These allow us to consolidate, during sleep, the knowledge we gained during the day. There are three to four thousand of them every night; if we erase them, we will not remember anything that happened to us that day. We have been experimenting with them: a few years ago we prolonged their duration, which improved the quality of learning.
These wave patterns appeared much earlier in evolution and are characteristic not only of the most developed nervous systems but also of reptiles with less developed cerebral cortices. What function does the pattern have in their case? We discovered that these patterns affect the cells in the hypothalamus that innervate the pancreas, thereby decreasing blood sugar levels. So we found a connection between obesity, memory and sleep disturbances: disturbed sleep patterns lead to disturbances in blood sugar levels, which can result in obesity and type 2 diabetes. This is a nice example of how a function that originally supported the body can be used for a more complex task, like memory.
This is also a good example of research thought to be obscure producing very practical results.
Yes, sometimes this happens. This research had a sizeable influence, because there is a lot of money in this area, since obesity and diabetes are worldwide problems. Others have also suspected the existence of such connections, but we discovered a concrete example, and half a year later a research group in Chicago managed to find the same phenomenon in the human brain too.
So this was an experiment that went very well and served as a basis for further research, but if I’m guessing correctly, there are other kinds as well. In one of your earlier interviews, you talked about conducting experiments on rats as a young researcher in the Physiology Department in Pécs, where the experiment was foiled by the fact that rats do not see the colour red – which would have been crucial for the experiment. This is just a funny story now, but back then it was several months’ worth of work for nothing…
To be honest, these dead ends are the most common thing in research (laughs). In this case, the lesson was straightforward: we should have known the relevant literature. I didn’t, which was not characteristic of me – the host of my first New York presentation introduced me as the Hungarian neuroscientist who knows all the neuroscience articles, and it was almost true. Today, that would be impossible. By the way, this is a recurring problem: we discover something that has already been discovered, and you can only discover something once; this raises the question of what counts as a discovery, and when we are merely adding something to the topic.
There is a large chance that we publish our results in the same month as a different research group, and this race cannot be won. There is no point in starting research just because we read something and want to follow up on it, since others have surely had the same idea, and someone is probably already doing that research.
New roads must be found, but there is a price: it is harder to find money, harder to find enthusiastic university students, and the fantastic idea quite often turns out to be a dead end. But the entire point of research is that we are in areas nobody has explored before, where there aren’t many clues. This is the beauty of it all, but you need good instincts as well – and with our blood sugar research, it worked out.
According to the traditional approach, science is about testing hypotheses. We form a hypothesis, think about it, read the relevant literature, and conduct some experiments. Nowadays it has got to the point that articles published in high-level journals simply prove or disprove a hypothesis. However, scientific research rarely works like this: we start down a road, find something unexpected, and make a sharp turn because we see something that looks more interesting than our original starting point. Almost all research is like this: we go in a direction, realise it’s the wrong one, and start over.
(The interview was originally published on hvg.hu.)