Thursday, November 30, 2017

Reaching closure – completing a review

I have not published anything here in the last few months, as the review I am writing took up much more time than expected. A lot of interesting developments also happened in our projects during this time. I will write about them later as well, so that nobody misses out on the insights we gained and the fun we had with them.

But now I want to write about how the review is coming along. It has by now grown into a veritable document of almost 120 pages. Most of it is text and formulas, with only very few figures. This makes for a lot of content. Right now, it has reached the status of release candidate 2. This means I have distributed it to many of my colleagues to comment on. I also used the draft as lecture notes for a lecture on its contents at a winter school in Odense, Denmark (where I actually wrote this blog entry). Why? Because I wanted feedback. What can be understood, and what may I have misunderstood? After all, this review does not only look at my own research. Rather, it compiles the knowledge of more than a hundred scientists gathered over 45 years. In fact, some of the results I write about were obtained before I was born. In particular, I could have overlooked results. With by now dozens of new papers per day, this can easily happen. I have collected more than 330 relevant articles, which I refer to in the review.

And, of course, I could have misunderstood other people's results or made mistakes. In a review, this needs to be avoided as much as possible.

Indeed, I have had many discussions by now on various aspects of the research I review. I got comments and was challenged. In the end, there was always either a conclusion or the insight that some points, believed to be clear, are not as entirely clear as they seemed. There are always more loopholes, more subtleties, than one anticipates. Through this, the review became better and could collect more insights from many brilliant scientists. And likewise I myself learned a lot.

In the end, I learned two very important lessons about the physics I review.

The first is that many more things are connected than I expected. Some issues, which at the beginning looked to me like parenthetical remarks, first turned into remarks at more than one place and ultimately became issues of their own.

The second is that the standard model of particle physics is even more special and more balanced than I thought. I never really thought of the standard model as terribly special. Just one theory among many which happens to fit the experiments. But it really is an extremely finely adjusted machinery. Every cog in it is important, and even slight changes will make everything fall apart. All the elements are in constant connection with each other, and influence each other.

Does this mean anything? Good question. Perhaps it is a sign of an underlying ordering principle. But if it is, I cannot see it (yet?). Perhaps this is just an expression of how a law of nature must be - perfectly balanced. At any rate, it gave me a new perspective on what the standard model is.

So, as I anticipated, writing this review gave me a whole new perspective and a lot of insights. Partly this came from formulating questions and answers more precisely. But, probably more importantly, I had to explain it all to others, and then either successfully defend it, adapt it, or even correct it.

In addition, the two most important lessons I learned about understanding physics were the following:

One: Take your theory seriously. Do not take shortcuts or rely on prior experience. Understand literally what the theory says, and only then start to interpret.

Two: Pose your questions (and answers) clearly. Every statement should have a well-defined meaning. Never be vague when you want to make a scientific statement. Always be able to answer the question “what do you mean by this?” with a precise definition. This seems obvious, but it is something one tends to be cavalier about. Don’t.

So, writing a review not only helps in summarizing knowledge. It also helps in understanding this knowledge and realizing its implications. And, probably fortunately, it poses new questions. What they are, and what we will do about them, is something I will write about in the future.

So, how does it proceed now? In two weeks I have to deliver the review to the journal which mandated it. At the same time (watch my Twitter account) it will become available on the preprint server arxiv.org, the standard repository of elementary particle physics knowledge. Then you can see for yourself what I wrote, and what I wrote about.

Thursday, July 20, 2017

Getting better

One of the main tools in our research is numerical simulations. The research of the previous entry, for example, would have been impossible without them.

Numerical simulations require computers to run them. And even though computers continuously become more powerful, in the end they are limited. Not to mention that they cost money to buy and to use. Yes, using them is expensive as well. Think of the electricity bill, or even of the space needed to house them.

So, to reduce the costs, we need to use them efficiently. That is good for us, because we can do more research in the same time. And that means that we as a society can make scientific progress faster. But it also reduces financial costs, which in fundamental research almost always means the taxpayer's money. And it reduces the environmental strain which we create by having and running the computers. That is also something which should not be forgotten.

So what does 'efficiently' mean?

Well, we need to write our own computer programs. What we do, nobody has done before us. Most of what we do is right at the edge of what we understand. So nobody was here before us who could have provided us with computer programs. We write them ourselves.

For that to be efficient, we need three important ingredients.

The first seems quite obvious. The programs should be correct before we use them for a large-scale computation. It would be very wasteful to run on a hundred computers for several months, just to figure out that it was all for naught because there was an error. Of course, we need to test the programs somewhere, but this can be done with much less effort than a full production run. Such testing actually takes quite some time, and it is very annoying. But it needs to be done.
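To give a flavor of what such a cheap check can look like, here is a minimal sketch in Python. It is purely illustrative and not taken from any of our actual codes: a small Monte Carlo routine is checked against a case whose exact answer is known before it is trusted with anything large.

```python
import random

def mc_integral(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def test_against_exact_result():
    # The integral of x^2 over [0, 1] is exactly 1/3. A correct integrator
    # must reproduce it well within its statistical error, which is about
    # 0.001 for the 100000 samples used here.
    estimate = mc_integral(lambda x: x * x, 100_000)
    assert abs(estimate - 1.0 / 3.0) < 0.01

if __name__ == "__main__":
    test_against_exact_result()
    print("cheap test passed - safe(r) to start the expensive runs")
```

Such a test runs in a fraction of a second, while the production runs it protects may occupy a cluster for months.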

The next two issues seem to be the same, but are actually subtly different. We need to have fast and optimized algorithms. The important difference is: the quality of the algorithm decides how fast it can be in principle. The actual optimization decides to what extent it realizes this potential.

The latter point is something which requires a substantial amount of experience with programming. It is not something which can be learned theoretically. And it is more of a craftsmanship than anything else. Being good at optimization can make a program a thousand times faster. So this is one reason why we try to teach students programming early, so that they can acquire the necessary experience before they enter research in their thesis work. Though there is still research work today which can be done without computers, there has become markedly less of it over the decades. It will never completely vanish, though. But it may well become a comparatively small fraction.
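To illustrate the difference with a toy example in Python (which has nothing to do with our actual simulation codes): optimizing an implementation shrinks the constant factor, while a better algorithm changes the scaling itself.

```python
import time

def total_loop(n):
    # O(n) algorithm: add the numbers one by one in plain Python.
    s = 0
    for i in range(1, n + 1):
        s += i
    return s

def total_loop_optimized(n):
    # Same O(n) algorithm, but optimized: the built-in sum runs in C,
    # so the constant factor shrinks. The scaling does not.
    return sum(range(1, n + 1))

def total_formula(n):
    # A better algorithm: Gauss' closed formula is O(1). No amount of
    # tuning the loop versions can match this as n grows.
    return n * (n + 1) // 2

if __name__ == "__main__":
    n = 10_000_000
    for f in (total_loop, total_loop_optimized, total_formula):
        start = time.perf_counter()
        result = f(n)
        print(f"{f.__name__}: {result} in {time.perf_counter() - start:.4f} s")
```

The optimized loop beats the plain loop by some constant factor, but the closed formula wins by an ever larger margin the larger n becomes, no matter how well the loops are tuned.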

But whatever optimization can do, it can do only so much without good algorithms. And now we enter the main topic of this entry.

It is not only the code which we develop ourselves. It is also the algorithms. Because, again, they are new. Nobody has done this before. So it is also up to us to make them efficient. But really writing a good algorithm requires knowledge of its background, of the scientific context. This is called domain-specific knowledge. One more reason why you cannot get it off the shelf. Thus, if you want to calculate something new in research using computer simulations, that usually means sitting down and writing a new algorithm.

But even once an algorithm is written down, this does not mean that it is necessarily already the fastest possible one. Making it fast requires experience on the one hand, but even more so it is something new, and thus research as well. So algorithms can, and need to, be made better.

Right now I am supervising two bachelor theses in which exactly this is done. The algorithms are indeed directly those involved in the research mentioned at the beginning. While both students are working on the same algorithm, they do so with quite different emphases.

The aim of one project is to make the algorithm faster, without changing its results. It is a classical case of improving an algorithm. If successful, it will make it possible to push the boundaries of what projects can be done. Thus, it makes computer simulations more efficient, and thereby allows us to do more research. One goal reached. Unfortunately, the 'if' already tells you that, as always in research, there is never a guarantee that it is possible. But if this kind of research is to continue, it is necessary. The only alternative is waiting for a decade for the computers to become faster, and doing something different in the time in between. Not a very interesting option.

The other one is a little bit different. Here, the algorithm should be modified to serve a slightly different goal. It is not a fundamentally different goal, but a subtly different one. Thus, while this does not create a fundamentally new algorithm, it still creates something new. Something which will make a different kind of research possible. Without the modification, that other kind of research may not be possible for some time to come. But just as it is not possible to guarantee that an algorithm can be made more efficient, it is also not always possible to guarantee that an algorithm with any reasonable amount of potential can be created at all. So this is also true research.

Thus, it remains exciting to see what both theses will ultimately lead to.

So, as you see, behind the scenes research is full of the small things which make the big things possible. Both of these projects are probably closer to our everyday work than most of the things I have posted about before. The everyday work in research is quite often a grind. But, as always, this is what ultimately makes the big things possible. Without projects such as these two theses, our progress would slow to a snail's pace.

Wednesday, July 19, 2017

Tackling ambiguities

I have recently published a paper with a rather lengthy and abstract title. In this entry I want to shed a little light on what is going on in it.

The paper is on a problem which has occupied me for more than a decade by now: the problem of how to really define what we mean when we talk about gluons. The reason for this problem is a certain ambiguity. This ambiguity arises because it is often much more convenient to have additional auxiliary stuff around to make calculations simpler. But then you have to deal with this additional stuff. In a paper last year I noted that the amount of stuff is much larger than originally anticipated. So you have to deal with even more stuff.

The aim of the research leading to the paper was to make progress with that.

So what did I do? To understand this, it is first necessary to say a few words about how we describe gluons. We describe them by mathematical functions. The simplest such function makes, loosely speaking, a statement about how probable it is that a gluon moves from one point to another. Since a fancy word for moving is propagating, this function is called a propagator.
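For the technically inclined, and only schematically: in the conventions common in this field (Euclidean momentum space, Landau gauge), such a gluon propagator is usually written as

\[ D^{ab}_{\mu\nu}(p) = \delta^{ab}\left(\delta_{\mu\nu} - \frac{p_\mu p_\nu}{p^2}\right)\frac{Z(p^2)}{p^2}, \]

where all the non-trivial information sits in the so-called dressing function Z(p^2). It is this function which the ambiguity discussed below can, in principle, affect.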

So the first question I posed was whether the ambiguity in dealing with the stuff affects this function. You may ask whether this should happen at all. Is a gluon not a particle? Should it not be free of ambiguities? Well, yes and no. A particle which we actually detect should be free of ambiguities. But gluons are not detected. Gluons are, in fact, never seen directly. They are confined. This is a very peculiar feature of the strong force, and one which is not yet satisfactorily understood. But it is experimentally well established.

Since something therefore happens to gluons before we could observe them, there is a way out. If the gluon is ambiguous, then this ambiguity has to be canceled by whatever happens to it. Then whatever we detect is not ambiguous. But cancellations are fickle things. If you are not careful in your calculations, something is left uncanceled. And then your results become ambiguous. This has to be avoided. Of course, this is purely a problem for us theoreticians. The experimentalists never have this problem. A long time ago I, together with a few other people, actually already wrote a paper on this, showing how the cancellation may proceed.

So the natural first step is to figure out what you have to cancel, and therefore to map the ambiguity in its full extent. The possibilities, discussed now for decades, look roughly like this:

As you see, at short distances there is (essentially) no ambiguity. This is actually quite well understood. It is a feature very deeply embedded in the strong interaction. It has to do with the fact that, despite its name, the strong interaction makes itself less known the shorter the distance. And for weak effects we have very precise tools, so we understand this regime.
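This is the phenomenon of asymptotic freedom. In the textbook one-loop approximation, the strength of the strong interaction at a large momentum Q (i.e. a short distance) behaves as

\[ \alpha_s(Q^2) \approx \frac{12\pi}{(33 - 2 n_f)\,\ln(Q^2/\Lambda^2)}, \]

with n_f the number of quark flavors and \Lambda the characteristic scale of the strong interaction. The coupling becomes small at short distances, which is why our precise (perturbative) tools work there.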

At long distances, on the other hand - well, there, for a long time, we did not even know for sure qualitatively what is going on. But, finally, over the decades, we were able to constrain the behavior at least partly. Now I tested a large part of the remaining range of ambiguities. In the end, it indeed mattered little. There is almost no effect of the ambiguity left on the behavior of the gluon. So it seems we have this under control.

Or do we? One of the important things in research is that it is never sufficient to confirm your result by looking at just a single thing. Either your explanation fits everything we see and measure, or it cannot be the full story. Or it may even be wrong, and the agreement with part of the observations is just a lucky coincidence. Well, not actually lucky. Rather a terrible one, since it misguides you.

Of course, doing everything in one go is a horrendous amount of work, and so you work on a few things at a time. Preferably, you first work on those where the most problems are expected. It is just that ultimately you need to have covered everything. You cannot stop and claim victory before you have.

So I did, and looked in the paper at a handful of other quantities. And indeed, in some of them effects remain. In particular, if you look at how strong the strong interaction is, depending on the distance at which you measure it, something remains:

The effects of the ambiguity are thus not qualitative, so they do not change our qualitative understanding of how the strong force works. But some quantitative effect remains, which we need to take into account.

There is one more important side effect. When I calculated the effects of the ambiguity, I also learned to control how the ambiguity manifests itself. This does not alter the fact that there is an ambiguity, nor that it has consequences. But it allows others to reproduce how I controlled the ambiguity. This is important, because now two results from different sources can be put together, and when the same control is used they will fit together such that, for experimental observables, the ambiguity cancels. And thus we have achieved the goal.

To be fair, however, this is currently at the level of operative control. It is not yet a mathematically well-defined and proven procedure. As in so many cases, this still needs to be developed. But having operative control makes it easier to develop rigorous control than starting without it. So, progress has been made.

Monday, July 17, 2017

Using evolution for particle physics

(I will start to illustrate the entries with some simple sketches. I am not very experienced at this, and thus they will be quite basic. But as I make more of them I should gain experience, and they should eventually become better.)

This entry will be on the recently started bachelor thesis of Raphael Wagner.

He is addressing the following problem. One of the mainstays of our research is computer simulations. But our computer simulations are not exact. They work by simulating a physical system many times with different starts. The final result is then an average over all the simulations. There is an (almost) infinite number of possible starts. Thus, we cannot include them all. As a consequence, our average is not the exact value we are looking for. Rather, it is an estimate. We can also estimate the range around our result in which the true value should lie.
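In formulas, and ignoring subtleties such as correlations between the individual simulations, the estimate from N runs and its uncertainty are the familiar

\[ \bar O = \frac{1}{N}\sum_{i=1}^{N} O_i, \qquad \delta\bar O \approx \frac{\sigma_O}{\sqrt{N}}, \qquad \sigma_O^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(O_i - \bar O\right)^2. \]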

This is sketched in the following picture:

The black line is our estimate and the red lines give the range where the true value should be. From left to right some parameter runs. In the case of the thesis, the parameter is the time. The value is roughly the probability for a particle to survive for this time. So we have an estimate for the survival probability.

Fortunately, we know a little more. From quite basic principles we know that this survival probability cannot depend on the time in an arbitrary way. Rather, it has a particular mathematical form. This function depends only on a very small set of numbers. The most important one is the mass of the particle.
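Schematically, and glossing over technical details (such as the periodic boundaries of the simulated volume), this form is a sum of decaying exponentials, dominated at large times by the lightest particle:

\[ C(t) = \sum_n A_n\, e^{-m_n t} \;\approx\; A_0\, e^{-m_0 t} \quad \text{for large } t, \]

so the mass is essentially the slope of the survival probability on a logarithmic scale.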

What we then do is to start with some theory. We simulate it. And then we extract the masses of the particles from such a survival probability. Yes, we do not know them beforehand. This is because the masses of particles are changed in a quantum theory by quantum effects. These effects are what we simulate, to get the final value of the masses.

Up to now, we have determined the masses in a very simple-minded way: we just looked for the numbers in the mathematical function which bring it closest to the data. That seems reasonable. Unfortunately, the function is not so simple. Thus, you can show mathematically that this does not necessarily give the best result. You can imagine this in the following way: imagine you want to find the deepest valley in an area. Surely, walking downhill will get you into a valley. But by only walking downhill, this will usually not be the deepest one:

But this is the way we have determined the numbers so far. So there may be better options.

There is a different possibility. In the picture of the hills, you could instead deploy a number of ants, of which some prefer to walk up, some down, and some sometimes one way and sometimes the other. The ants live, die, and reproduce. Now, if you give the ants more to eat if they live in a deeper valley, at some point evolution will bring the population to live in the deepest valley:

And then you have what you want.

This is called a genetic algorithm. It is used in many areas of engineering. The processor of the computer or smartphone you use to read this has likely been optimized using such algorithms.

The bachelor thesis is now to apply the same idea to find better estimates for the masses of the particles in our simulations. This requires understanding what the equivalent of the depth of the valley and of the food for the ants would be. And how long we let evolution run its course. Then we only have to monitor the (virtual) ants to find our prize. A rough sketch of the idea is given below.
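To make the idea a little more concrete, here is a deliberately small sketch of such a genetic algorithm in Python. It runs on fake data rather than on real simulation output, the 'food' is simply the negative of the usual chi-squared distance between a candidate curve and the data, and all numbers and function names are purely illustrative; the actual thesis will certainly use a more refined setup.

```python
import math
import random

rng = random.Random(1)

# Fake "survival probability" data: a decaying exponential with noise.
# The true values below are purely illustrative.
TRUE_MASS, TRUE_AMP, SIGMA = 0.8, 2.0, 0.02
times = list(range(1, 16))
data = [TRUE_AMP * math.exp(-TRUE_MASS * t) + rng.gauss(0.0, SIGMA) for t in times]

def chi_squared(params):
    """How badly a candidate (mass, amplitude) pair describes the data."""
    mass, amp = params
    return sum((amp * math.exp(-mass * t) - d) ** 2 / SIGMA ** 2
               for t, d in zip(times, data))

def fitness(params):
    # Deeper valley = smaller chi-squared = more 'food' for the ant.
    return -chi_squared(params)

def random_individual():
    return (rng.uniform(0.0, 3.0), rng.uniform(0.0, 5.0))

def mutate(params, scale=0.05):
    # Small random changes, keeping the parameters non-negative.
    return tuple(max(0.0, p + rng.gauss(0.0, scale)) for p in params)

def crossover(a, b):
    # Each parameter is inherited from one of the two parents.
    return tuple(rng.choice(pair) for pair in zip(a, b))

def evolve(generations=200, pop_size=50, keep=10):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]  # survival of the fittest
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - keep)]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    mass, amp = evolve()
    print(f"estimated mass ~ {mass:.3f} (true value {TRUE_MASS})")
```

The depth of the valley is here the chi-squared, the ants are the candidate (mass, amplitude) pairs, and feeding the ants corresponds to letting the best candidates produce most of the next generation.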

Thursday, April 27, 2017

A shift in perspective - or: what makes an electron an electron?

We have recently published a new paper. It is based partially on the master thesis of my student Larissa Egger, but also involves another scientist from a different university. In this paper, we look at a quite fundamental question: How do we distinguish the matter particles? What makes an electron an electron and a muon a muon?

In a standard treatment, this identity is just an integral part of the particle. However, results from the late 1970s and early 1980s, as well as our own research, point in a somewhat different direction. I described the basic idea some time back. The basic idea back then was that what we perceive as an electron is not really just an electron. It itself consists of two particles: a Higgs and something I would call a constituent electron. Back then, we were just thinking about how to test this idea.

This took some time.

We thought this was an outrageous question, calling almost certain things into doubt.

Now we see: oh, this was just the beginning. And things got crazier with every step.

But as theoreticians, if we determine the consequences of a theory, we should not stop because something sounds crazy. Almost everything we take for granted today, like quantum physics, sounded crazy in the beginning. But if you have reason to believe that a theory is right, then you have to take it seriously. And then its consequences are what they are. Of course, we may just have made an error somewhere. But that remains to be checked, preferably by independent research groups. After all, at some point it is hard to see the forest for the trees. But so far we are convinced that we have made at most quantitative errors, but no qualitative ones. So the concept appears sound to us. And therefore I keep on writing about it here.

The older works were just the beginning. And we just followed their suggestion to take the standard model of particle physics not only seriously, but also literally.

I will start out with the leptons, i.e. the electrons, muons, and tauons as well as the three neutrinos. I will come back to the quarks later.

The first thing we established was that it is indeed possible to think of particles like the electron as a kind of bound state of other particles, without upsetting what we have measured in experiment. We also gave an estimate of what would be necessary to test this statement in an experiment. Though really exact numbers are, as always, complicated, we believe that the next generation of experiments which collide electrons and positrons could be able to detect the difference between the conventional picture and our results. In fact, the way they are currently designed makes them ideally suited to do so. However, they will not provide a measurement before, roughly, 2035 or so. We also understand quite well why we would need these machines to see the effect. So right now we will have to sit and wait for this. Keep your fingers crossed that they will be built, if you are interested in the answer.

Naturally, we therefore asked ourselves if there is no alternative. The unfortunate thing is that you need at least enough energy to copiously produce the Higgs to test this. The only existing machine able to do so is the LHC at CERN. However, to do so it collides protons. So we had to discuss whether the same effect also occurs for protons. Now, a proton is much more complicated than any lepton, because it is already built from quarks and gluons. Still, what we found is the following: if we take the standard model seriously as a theory, then a proton cannot be a theoretically well-defined entity if it is only made out of three quarks. Rather, it needs to have some kind of Higgs component. And this should be felt somehow. However, for the same reason as with the lepton, only the LHC could test it. And here comes the problem. Because the proton is made up of three quarks, it already has a very complicated structure. Furthermore, even at the LHC, the effect of the additional Higgs component will likely be tiny. In fact, probably the best chance to probe it will be if this Higgs component can be linked to the production of the heaviest known quark, the top quark. The reason is that the top quark is very sensitive to the Higgs. While the LHC indeed produces a lot of top quarks, producing a top quark linked to a Higgs is much harder. Even the strongest such effect has not yet been seen beyond doubt. And what we find will only be a (likely small) correction to it. There is still a chance, but this will need much more data. But the LHC will keep on running for a long time. So maybe it will be enough. We will see.

So, this is what we did. In fact, all of this will be part of the review I am writing, so more will be told about it there.

If you are still reading, I want to give you some more of the really weird stuff which came out.

The first is that life is actually even more complicated. Even without all of what I have written about above, there are actually two types of electrons in the standard model. One which is affected by the weak interaction, and one which is not. Other than that, they are the same. They have the same mass, and they are electromagnetically the same. The same is actually true for all leptons and quarks. The matter all around us is actually a mixture of both types. However, the subtle effects I have been talking about so far only affect those which are affected by the weak interaction. There is a technical reason for this (the weak interaction is a so-called gauge symmetry). However, it makes detecting everything harder, because it only works if we get the 'right' type of electron.

The second is that leptons and quarks come in three sets of four particles each, the so-called generations or families. The only difference between these copies is the mass. Other than that, there is no difference that we know of. We cannot exclude further differences, but no experiment says otherwise with sufficient confidence. This is one of the central mysteries. It occupies, and keeps occupying, many physicists. Now, we had the following idea: if we provide internal structure to the members of the family - could it be that the different generations are just different arrangements of the internal structure? That such things are in principle possible is known already from atoms. Here, the problem is even more involved because of the two types of each of the quarks and leptons. This was just a speculation. However, we found that this is, at least logically, possible. Unfortunately, it is still too complicated to provide a definite quantitative prediction of how this can be tested. But, at least, it seems not to be at odds with what we already know. If this were true, it would be a major step in understanding particle physics. But we are still far, far away from this. Still, we are motivated to continue along this road.

Monday, April 10, 2017

Making connections inside dead stars

Last time I wrote about our research on neutron stars. In that case we were concerned with the properties of neutron stars - their mass and size. But these are determined by the particles inside the star, the quarks and gluons, and by how they influence each other through the strong force.

However, a neutron star is much more than just quarks and gluons bound by gravity and the strong force.

Neutron stars are also affected by the weak force. This happens in a quite subtle way. The weak force can transform a neutron into a proton, an electron and an (anti)neutrino, and back. In a neutron star, this happens all the time. Still, the neutrons are neutrons most of the time, hence the name neutron star. Looking at this process more microscopically, the protons and neutrons consist of quarks: the proton of two up quarks and a down quark, and the neutron of one up quark and two down quarks. Thus, what really happens is that a down quark changes into an up quark, an electron and an (anti)neutrino, and back.
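Written as reactions, this is ordinary beta decay and its inverse, once at the level of the nucleons and once at the level of the quarks:

\[ n \;\leftrightarrow\; p + e^- + \bar\nu_e, \qquad d \;\leftrightarrow\; u + e^- + \bar\nu_e. \]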

As noted, this does not happen too often. But that is actually only true for a neutron star just hanging around. When neutron stars are created in a supernova, it happens very often. In particular, the star which becomes a supernova is mostly protons, which have to be converted to neutrons for the neutron star. Another case is when two neutron stars collide. Then this process becomes much more important, and more rapid. The latter is quite exciting, as the consequences may be observable in astronomy in the next few years.

So, how can the process be described? Usually, the weak force is weak, as the name says. Thus, it is usually possible to consider it a small effect. Such small effects are well described by perturbation theory. This is OK if the neutron star just hangs around. But for collisions, or during formation, the effect is no longer small. And then other methods are necessary. For the same reasons as in the case of inert neutron stars, we cannot use simulations to do so. But our third possibility, the so-called equations of motion, works.

Therefore Walid Mian, a PhD student of mine, and I used these equations to study how quarks behave if we offer them a background of electrons and (anti)neutrinos. We have published a paper about our results, and I would like to outline what we found.

Unfortunately, we still cannot do the calculations exactly. So, in a sense, we cannot independently vary the amount of electrons and (anti)neutrinos and the strength of their coupling to the quarks. Thus, we can only estimate what a more intense combination of both together means. Since this is qualitatively what we expect to happen during the collision of two neutron stars, this should be a reasonable approximation.

For a very small intensity we do not see anything but what we expect from perturbation theory. But the first surprise came already when we cranked up the intensity. New effects showed up much earlier than expected. In fact, they started to appear at intensities a factor of 10 to 1000 smaller than expected. Thus, the weak interaction could play a much larger role in such environments than usually assumed. That was the first insight.

The second was that the type of quark - whether it is an up or a down quark - is more relevant than expected. In particular, whether they have different masses, as in nature, or the same mass makes a big difference. If the masses are different, qualitatively new effects arise, which was not expected in this form.

The observed effects themselves are actually quite interesting: they make the quarks, depending on their type, either more sensitive or less sensitive to the weak force. This is important. When neutron stars are created or collide, they become very hot. The main way for them to cool is by dumping (anti)neutrinos into space. This becomes more efficient if the quarks react less to the weak force. Thus, our findings could have consequences for how quickly neutron stars cool.

We also saw that these effects only start to play a role if the quark can move inside the neutron star over a sufficiently large distance, where 'sufficiently large' means here about the size of a neutron. Thus, the environment of a neutron star shows itself when the quarks start to feel that they do not live in a single neutron, but rather in a neutron star, where the neutrons touch each other. All of the qualitatively new effects then start to appear.

Unfortunately, to estimate how important these new effects really are for the neutron star, we first have to understand what they mean for the neutrons. Essentially, we have to carry our results to a larger scale - what does this mean for the whole neutron? - before we can redo our investigation of the full neutron star with these effects included. Not to mention the impact on a collision, which is even more complicated.

Thus, our current next step is to understand what the weak interaction implies for hadrons, i.e. states of multiple quarks like the neutron. The first step is to understand how the hadron can decay and re-form through the weak force, as I described earlier. The decay itself can already be described quite well using perturbation theory. But decay and re-forming, or even an endless chain of these processes, cannot yet. Becoming able to do so is where we head next.

Thursday, March 30, 2017

Building a dead star

I have written previously about how we investigate QCD to learn about neutron stars. Neutron stars are the extremely dense and small objects left over after a medium-sized star has become a supernova.

For that, we decided to take a detour: we slightly modified the strong interaction. The reason for this modification was to be able to do numerical simulations. In the original version of the theory, this is as yet impossible, mainly because we have not yet been able to develop an algorithm fast enough to get a result within our lifetime. With the small changes we made to the theory, this changes. And therefore we now have a (rough) idea of how this theory behaves at densities relevant for neutron stars.

Now Ouraman Hajizadeh, a PhD student of mine, and I went all the way: we used these results to construct a neutron star from them. What we found is written up in a paper, and I will describe here what we learned.

The first insight was that we needed a baseline. Of course, we could compare to what we know about neutron stars from astrophysics. But we do not yet know too much about their internal structure. This may change with the newly established gravitational wave astronomy, but that will take a few years. Thus, we decided to use neutrons which do not interact with each other as the baseline. A neutron star of such particles is held together only by the gravitational pull and the so-called Pauli principle. This principle forbids certain types of particles, so-called fermions, from occupying the same spot. Neutrons are such fermions. Any difference from such a neutron star therefore has to be attributed to interactions.

The observed neutron stars show the existence of interactions. This is exemplified by their mass. A neutron star made out of non-interacting neutrons can only have masses somewhat below the mass of our sun. The heaviest neutron stars we have observed so far are more than twice the mass of our sun. The heaviest possible neutron stars could be a little bit heavier than three times our sun. Anything heavier would collapse further, either to a different object unknown to us, or to a black hole.

Now, the theory we investigated differs from the true strong interaction in several ways. One is that we had only one type of quark, rather than the real number. Also, our quark was heavier than the lightest quark in nature. Finally, we had more colors and also more gluons than in nature. Thus, our neutron has a somewhat different structure than the real one. But we used this modified version of the neutron to create our baseline, so that we can still see the effect of interactions.

Then we cranked the machinery. This machinery is a little bit of general relativity, plus thermodynamics. The former is not modified, but our theory determines the latter. What we got was a quite interesting result. First, our heaviest neutron star was much heavier than our baseline: roughly 20 to 50 percent heavier than our sun, depending on details and uncertainties. Also, a typical neutron star of this mass showed much less variation of its size than the baseline. For non-interacting neutrons, changing the maximum mass by ten percent changes the radius by a kilometer or so. In our case, this changed the radius almost not at all. Our heaviest neutron stars are thus much more reluctant to change. So interactions indeed change the structure of a neutron star considerably.
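For completeness: the 'little bit of general relativity' mentioned above is, in essence, the standard Tolman-Oppenheimer-Volkoff equations for a static, spherically symmetric star (written here in units where G = c = 1),

\[ \frac{dP}{dr} = -\frac{\left(\varepsilon + P\right)\left(m + 4\pi r^3 P\right)}{r\left(r - 2m\right)}, \qquad \frac{dm}{dr} = 4\pi r^2 \varepsilon, \]

while the thermodynamics of our theory enters through the equation of state, i.e. the relation between the pressure P and the energy density \varepsilon.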

Another long-standing question is what the internal structure of a neutron star is. Especially whether it is a more or less monolithic block, except for a very thin layer close to the surface, or whether it is composed of many different layers, like our earth. In our case, we indeed find a layered structure. There is an outer surface, a kilometer or so thick, and then a different state of matter down to the core. However, the change appears to be quite soft, and there is no hard distinction. Still, our results signal that there are light neutron stars which consist only of the 'surface' material, and that only heavier neutron stars have such a core of different stuff. Thus, there could be two classes of neutron stars, with different properties. However, the single-type class is lighter than those which have been observed so far. Such light neutron stars, while apparently stable, seem not, or only rarely, to be formed during the supernovae giving birth to neutron stars.

Of course, the question is to which extent such qualitative features can be translated to the real case. We can learn more about this by doing the same in other theories. If features turn out to be generic, this points to something which may also happen in the real case. But even our case, which in a certain sense is the simplest possibility, was not trivial. It may take some time to repeat it for other theories.

Wednesday, January 18, 2017

Can we tell when unification works? - Some answers.

This time, the following is a guest entry by one of my PhD students, Pascal Törek, writing about the most recent results of his research, especially our paper.

Some time ago the editor of this blog offered me the opportunity to write about my PhD research here. Since I have by now gained some insight and collected first results, I think this is the best time to do so.

In a previous blog entry, Axel explained what I am working on and which questions we try to answer. The most important one was: “Does the miracle repeat itself for a unified theory?”. Before I answer this question and explain what is meant by “miracle”, I want to recap some things.

The first thing I want to clarify is what a unified, or grand unified, theory is. The standard model of particle physics describes all the interactions (neglecting gravity) between elementary particles. These interactions or forces are called the strong, weak and electromagnetic forces. All these forces or sectors of the standard model describe different kinds of physics. But at very high energies it could be that these three forces are just different parts of one unified force. Of course, a theory of a unified force should also be consistent with what has already been measured. What usually comes along in such unified scenarios is that, next to the known particles of the standard model, additional particles are predicted. These new particles are typically very heavy, which makes them very hard to detect in experiments in the near future (if one of those unified theories really describes nature).

What physicists often use to make predictions in a unified theory is perturbation theory. But here comes the catch: what one does in this framework is something really arbitrary, namely fixing a so-called “gauge”. This rather technical term just means that we have to use a mathematical trick to make calculations easier. Or, to be more precise, we have to use that trick to even be able to perform a calculation in perturbation theory in these kinds of theories; it would be impossible otherwise.

Since nature does not care about this man-made choice, every quantity which can be measured in experiments must be independent of the gauge. But this is exactly the problem with how the elementary particles are treated in conventional perturbation theory: they depend on the gauge. An even more peculiar thing is that the particle spectrum (or the number of particles) predicted by these kinds of theories also depends on the gauge. This problem appears already in the standard model: what we call the Higgs, W, Z, electron, etc. depends on the gauge. This is pretty confusing, because those particles have been measured experimentally, but should not have been observed like that if you take the theory seriously.

This contradiction in the standard model is resolved by a certain mechanism (the so-called “FMS mechanism”) which maps quantities that are independent of the gauge to the gauge-dependent objects. Those gauge-independent quantities are so-called bound states. What you essentially do is to “glue” the gauge-dependent objects together in such a way that the result does not depend on the gauge. This is exactly the miracle I wrote about in the beginning: one interprets something gauge-dependent (e.g. the Higgs) as if it were observable, and one indeed finds this something in experiments. The correct theoretical description is then in terms of bound states, and there exists a one-to-one mapping to the gauge-dependent objects. This is the case in the standard model, and it seems like a miracle that everything fits so perfectly that it works out in the end. The claim is that you see those bound states in experiments, and not the gauge-dependent objects.
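For the Higgs itself the mechanism can be sketched in one line (the notation here is schematic). The gauge-invariant bound state is described by the composite operator \phi^\dagger\phi, built from the Higgs field \phi, and expanding its correlation function around the vacuum expectation value v of the chosen gauge gives

\[ \left\langle (\phi^\dagger\phi)(x)\,(\phi^\dagger\phi)(y) \right\rangle \;\simeq\; \text{const.} + v^2 \left\langle h(x)\, h(y) \right\rangle + \ldots, \]

so to leading order the gauge-invariant bound state has the same mass as the gauge-dependent elementary Higgs h. This is the one-to-one mapping mentioned above.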

However, it was not clear whether the FMS mechanism also works in a grand unified theory (“Does the miracle repeat itself?”). This is exactly what my research is about. Instead of taking a realistic grand unified theory, we decided to take a so-called “toy theory”. What is meant by that is that this theory is not one which can describe nature, but rather one which covers the most important features of such a kind of theory. The reason is simply that I use simulations to answer the question raised above, and due to time constraints and restricted resources a toy model is more feasible than a realistic model. By applying the FMS mechanism to the toy model I found that there is a discrepancy with perturbation theory, which was not the case in the standard model. In principle there were three possible outcomes: the mechanism works in this model and perturbation theory is wrong, the mechanism fails and perturbation theory gives the correct result, or both are wrong. So I performed simulations to see which statement is correct, and what I found is that only the FMS mechanism predicts the correct result, and perturbation theory fails. As a theoretician this result is very pleasing, since we like to have nature independent of an arbitrarily chosen gauge.

The question you might ask is: “What is it good for?” Since we know that the standard model is not the theory which can describe everything, we look for theories beyond the standard model, such as grand unified theories. There are many of these kinds of theories on the market, and there is as yet no way to check each of them experimentally. What one can do now is to use the FMS mechanism to rule some of them out. This is done by, roughly speaking, applying the mechanism to the theory you want to look at, counting the number of particles predicted by the mechanism, and comparing it to the number of particles of the standard model. If there are more, the theory is probably a good candidate to study, and if not, you can throw it away.

Right now Axel, a colleague from Jena University, and I are looking at more realistic grand unified theories and trying to find general features concerning the FMS mechanism. I am sure Axel, or maybe I, will keep you updated on this topic.

Monday, January 16, 2017

Writing a review

As I have mentioned recently on Twitter, I have been given the opportunity, and the mandate, to write a review on Higgs physics. In particular, I should describe how the connection is established from the formal basics to what we see in experiment. While I will be writing a lot in the coming time about the insights I gain and the connections I make during writing, this time I want to talk about something different: about what this means, and what the purpose of reviews is.

So what is a review good for? Physics is not static. Physics is about our understanding of the world around us. It is about making the things we experience calculable. This is done by phrasing so-called laws of nature as mathematical statements. Then making predictions (or explaining something that happens) is, essentially, just evaluating equations. At least in principle, because this may be technically extremely complicated and involved. There are cases in which our current abilities are not yet up to the task. But that is a matter of technology and, often, of resources in the form of computing time. Not a conceptual problem.

But there is also a conceptual problem. Our mathematical statements encode what we know. One of their most powerful features is that they tell us themselves that they are incomplete. That our mathematical formulation of nature only reaches this far. That there are things which we cannot describe, and of which we do not even yet know what they are. Physics is at the edge of knowledge. But we are not lazy. Every day, thousands of physicists all around the world work together to push this edge a little bit farther out. Thus, day by day, we know more. And, in a global world, this knowledge is shared almost instantaneously.

A consequence of this progress is that the textbooks at the edge become outdated. Because we get a better understanding. Or we figure out that something is different than we thought. Or because we find a way to solve a problem which withstood solution for decades. However, what we find today or tomorrow is not yet confirmed. Every insight we gain needs to be checked. It has to be investigated from all sides. And it has to be fitted into our existing knowledge. More often than not, some of these insights turn out to be false hopes. We thought we understood something, but there is still that one little hook, this one tiny loophole, which in the end lets our insight crumble. This can take a day or a month or a year, or even decades. Thus, insights should not directly become part of the textbooks which we use to teach the next generation of students.

To deal with this, a hierarchy of establishing knowledge has formed.

In the beginning, there are ideas and first results. These we tell our colleagues at conferences. We document the ideas and first results in write-ups of our talks. We visit other scientists, and discuss our ideas. In this way we already find many loopholes and inadequacies, and can drop the things which do not work.

Results which survive this stage then become research papers. If we write such a paper, it is usually about something which we personally believe to be well founded. Which we have analyzed from various angles, and bounced off the wisdom and experience of our colleagues. We are pretty sure that it is solid. By making these papers accessible to the rest of the world, we put this conviction to the test of a whole community, rather than just of the scientists who see our talks or whom we talk to in person.

Not all such results remain. In fact, many of them are later found to be only partly right, or to have overlooked a loophole, or are invalidated by other results. But at this stage already a considerable amount of insight survives.

Over years, and sometimes decades, insights in papers on a topic accumulate. With every paper which survives the scrutiny of the world, another piece of the puzzle falls into place. Thus, slowly, a knowledge base emerges on a topic, carried by many papers. And then, at some point, the accumulated knowledge provides a reasonably good understanding of the topic. This understanding is still frayed at the edges towards the unknown. There are still some holes to be filled here and there. But overall, the topic is in fairly good condition. That is the point where a review is written on the topic. It summarizes the findings of the various papers, often hundreds of them. And it draws the big picture, and fits all the pieces into it. Its duty is also to point out all remaining problems, and where the ends are still frayed. But at this point things are usually well established. They often will not change substantially in the future. Of course, no rule without exception.

Over time, multiple reviews will evolve the big picture, close all holes, and connect the frayed edges to neighboring topics. In this way, another patch in the tapestry of a field is formed. It becomes a stable part of the fabric of our understanding of physics. When this process is finished, it is time to write textbooks, to make even non-specialist students of physics aware of the topic, its big picture, and how it fits into our view of the world.

Those things which are of particular relevance, since they form the fabric of our most basic understanding of the world, will eventually filter further down. At some point, they may become part of the textbooks at school, rather than university. And ultimately, they will become part of common knowledge.

This has happened many times in physics. Mechanics, classical electrodynamics, thermodynamics, quantum and nuclear physics, solid state physics, particle physics, and many other fields have passed through these levels of the hierarchy. Of course, often only with hindsight can the transitions be seen which lead from the first inspiration to the final revelation of our understanding. But in this way our physics view of the world evolves.