Monday, July 17, 2017

Using evolution for particle physics

(I will start to illustrate the entries with some simple sketches. I am not very experienced with this, and thus, they will be quite basic. But as I make more of them, I should gain experience, and they should eventually become better.)

This entry will be on the recently started bachelor thesis of Raphael Wagner.

He is addressing the following problem. One of the mainstays of our research is computer simulations. But our computer simulations are not exact. They work by simulating a physical system many times with different starts. The final result is then an average over all the simulations. There is an (almost) infinite number of possible starts. Thus, we cannot include them all. As a consequence, our average is not the exact value we are looking for. Rather, it is an estimate. But we can also estimate the range around our estimate in which the real result should lie.
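The idea of averaging over many runs and attaching a range can be sketched in a toy example. This is not our actual simulation code; `run_simulation` is a made-up stand-in that just returns a "true" value plus statistical noise:

```python
import math
import random

random.seed(0)  # fixed seed so the toy example is reproducible

def run_simulation(true_value=1.0, noise=0.2):
    # Stand-in for one simulation with a random start:
    # the true value, blurred by statistical noise.
    return random.gauss(true_value, noise)

def estimate(n_runs=1000):
    results = [run_simulation() for _ in range(n_runs)]
    mean = sum(results) / n_runs
    # Standard error of the mean: the range around the
    # estimate in which the true value should lie.
    variance = sum((r - mean) ** 2 for r in results) / (n_runs - 1)
    error = math.sqrt(variance / n_runs)
    return mean, error

mean, error = estimate()
print(f"estimate: {mean:.3f} +/- {error:.3f}")
```

The more runs we include, the smaller the range becomes, but with finitely many runs it never shrinks to zero.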

This is sketched in the following picture

The black line is our estimate and the red lines give the range where the true value should be. From left to right some parameter runs. In the case of the thesis, the parameter is the time. The value is roughly the probability for a particle to survive this time. So we have an estimate for the survival probability.

Fortunately, we know a little more. From quite basic principles we know that this survival probability cannot depend on the time in an arbitrary way. Rather, it has a particular mathematical form. This function depends only on a very small set of numbers. The most important one is the mass of the particle.
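To give a flavor of what such a form looks like: the simplest example of this kind is an exponential decay, where the rate of the decay is set by the particle's mass (in suitable units). This is a hypothetical illustration of the general idea, not the exact function used in the thesis:

```python
import math

def survival_probability(t, mass, amplitude=1.0):
    # Simplest example of a fixed mathematical form:
    # an exponential decay whose rate is the mass.
    return amplitude * math.exp(-mass * t)

# The heavier the particle, the faster the decay:
print(survival_probability(1.0, mass=0.5))  # ~0.607
print(survival_probability(1.0, mass=2.0))  # ~0.135
```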

What we then do is start with some theory and simulate it. Then we extract the masses of the particles from such a survival probability. Yes, we do not know them beforehand. This is because the masses of particles are changed in a quantum theory by quantum effects. It is these quantum effects that we simulate to get the final values of the masses.

Up to now, we have determined the masses in a very simple-minded way: we just looked for the numbers that bring the mathematical function as close as possible to the data. That seems reasonable. Unfortunately, the function is not so simple. Thus, you can show mathematically that this does not necessarily give the best result. You can imagine it in the following way: suppose you want to find the deepest valley in an area. Surely, walking downhill will get you into a valley. But by only walking downhill, the valley you end up in will usually not be the deepest one:
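The downhill walk can be made concrete in a toy example. The landscape below is made up: it has a shallow valley near x = -1 and a deeper one near x = 2, and always stepping downhill from the wrong side gets you stuck in the shallow one:

```python
def landscape(x):
    # A made-up landscape: a shallow valley near x = -1
    # and a deeper valley near x = 2.
    return (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x

def walk_downhill(x, step=0.01, n_steps=10000):
    for _ in range(n_steps):
        here = landscape(x)
        left, right = landscape(x - step), landscape(x + step)
        if left < here and left <= right:
            x -= step          # step downhill to the left
        elif right < here:
            x += step          # step downhill to the right
        else:
            break              # no downhill direction: a valley floor
    return x

# Starting on the left, the walk ends in the shallow valley,
# even though the deeper valley lies near x = 2:
print(walk_downhill(-3.0))
print(walk_downhill(3.0))
```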

But this is the way we have determined the numbers so far. So there may be better options.

There is a different possibility. In the picture of the hills, you could instead deploy a number of ants, of which some prefer to walk uphill, some downhill, and some switch between the two. The ants live, die, and reproduce. Now, if you give the ants more to eat when they live in a deeper valley, evolution will at some point bring the population to live in the deepest valley:

And then you have what you want.

This is called a genetic algorithm. It is used in many areas of engineering. The processor of the computer or smartphone you use to read this has likely been optimized using such algorithms.
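A minimal genetic algorithm can be sketched in a few lines. This is a toy version on a made-up landscape with a shallow and a deep valley, not the thesis code: a population of candidate positions (the "ants") is repeatedly selected by fitness (deeper is better), recombined, and mutated:

```python
import random

def landscape(x):
    # Made-up landscape: shallow valley near x = -1,
    # deeper valley near x = 2.
    return (x + 1) ** 2 * (x - 2) ** 2 - 0.5 * x

def genetic_search(pop_size=50, generations=100):
    random.seed(1)  # fixed seed for reproducibility
    # Start with ants spread randomly over the area.
    population = [random.uniform(-4, 4) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: ants in deeper valleys get more to eat,
        # so only the better half survives.
        population.sort(key=landscape)
        survivors = population[: pop_size // 2]
        # Reproduction: a child mixes two parents,
        # plus a random mutation.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append((a + b) / 2 + random.gauss(0, 0.3))
        population = survivors + children
    return min(population, key=landscape)

best = genetic_search()
print(f"deepest valley found near x = {best:.2f}")
```

Unlike the pure downhill walk, the random mutations and the spread-out population let the search escape shallow valleys, so it reliably ends up near the deepest one.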

The aim of the bachelor thesis is now to apply the same idea to find better estimates for the masses of the particles in our simulations. This requires understanding what the equivalents of the depth of the valley and the food for the ants would be, and how long we should let evolution run its course. Then we only have to monitor the (virtual) ants to find our prize.
