Thursday, December 13, 2018

The size of the W

As discussed in an earlier entry, we set out to measure the size of a particle: the W boson. We have now finished this, and published a paper about our results. I would like to discuss these results in a bit more detail.

This project was motivated because we think that the W (and its sibling, the Z boson) are actually more complicated than usually assumed. We think that they may have a self-similar structure. The bits and pieces of this are quite technical. But the outline is the following: What we see and measure as a W at, say, the LHC or earlier experiments is actually not a point-like particle, even though this is currently the most common view. But science has always been about changing the common ideas and replacing them with something new and better. So, our idea is that the W has a substructure. This substructure is a bit weird, because it is not made from additional elementary particles. It rather looks like a bubbling mess of quantum effects. Thus, we do not expect that we can isolate anything which resembles a physical particle within the W. And if we try to isolate something, we should not expect it to behave like a particle.

Thus, this scenario gives two predictions. One: Substructure needs space somewhere, so the W should have a size. Two: Anything isolated from it should not behave like a particle. To test both ideas in the same way, we decided to look at the same quantity: the radius. Hence, we simulated a part of the standard model. Then we measured the size of the W in this simulation. We also tried to isolate the most particle-like object from the substructure, and measured its size too. Both of these measurements are very expensive in terms of computing time. Thus, our results are rather exploratory, and we cannot yet regard what we found as final. But at least it gives us some idea of what is going on.

The first thing is the size of the W. Indeed, we find that it has a size, and one which is not too small either. The number itself, however, is far less accurate. The reason for this is twofold. On the one hand, we have only a part of the standard model in our simulations. On the other hand, we see artifacts. They come from the fact that our simulations can only describe some finite part of the world. The larger this part is, the more expensive the calculation. With what we had available, this part seems to be still so small that the W is big enough to 'bounce off the walls' fairly often. Thus, our results still show a dependence on the size of this part of the world. Though we try to account for this, it still leaves a sizable uncertainty in the final result. Nonetheless, the qualitative feature that the W has a significant size remains.
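
To give an idea of what 'accounting for this' can look like in practice, here is a minimal sketch of an infinite-volume extrapolation. The 1/L ansatz, the function names, and the error treatment are my own illustrative assumptions; the actual analysis in the paper may differ in all of these details.

```python
import numpy as np
from scipy.optimize import curve_fit

def finite_volume_ansatz(L, r_inf, c):
    # Assumed leading finite-size correction: the measured radius approaches
    # its infinite-volume value r_inf with a term falling off like 1/L.
    return r_inf + c / L

def extrapolate_radius(box_sizes, radii, radii_errors):
    """Fit radii measured at several simulated world sizes L and return the
    extrapolated infinite-volume radius together with its fit uncertainty."""
    popt, pcov = curve_fit(finite_volume_ansatz, box_sizes, radii,
                           sigma=radii_errors, absolute_sigma=True)
    return popt[0], np.sqrt(pcov[0, 0])

# box_sizes, radii, radii_errors would come from the simulations at the
# different world sizes; no actual numbers from the paper are quoted here.
```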

The other thing is the would-be constituents. We can indeed identify some kind of lumps of quantum fluctuations inside. But they do not behave like a particle, not even remotely. In particular, when trying to measure their size, we find that the square of their radius is negative! Even though the final value is still uncertain, this is nothing a real particle should have. Taking the square root of such a negative quantity to get the actual radius yields an imaginary number. That is an abstract quantity which, while not identifiable with anything in everyday life, has a well-defined mathematical meaning. In the present case, it means this lump is nonphysical, as if you were trying to upend a hole. Thus, this mess is really not a particle at all, in any conventional sense of the word. Still, what we could get from this is that such lumps - even though they are not really lumps - 'live' only in regions of our W much smaller than the W's size. So, at least they are contained. And they let the W be the well-behaved particle it is.
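
To spell out the arithmetic behind this statement: what one actually extracts is the mean square radius, and the radius itself is its square root. Schematically (a generic relation, not the paper's precise definition),

```latex
r = \sqrt{\langle r^2 \rangle}\,, \qquad
\langle r^2 \rangle < 0 \;\Longrightarrow\;
r = i\,\sqrt{\bigl|\langle r^2 \rangle\bigr|}\,,
```

so a negative mean square radius forces the radius itself to be imaginary, which is what signals that the lump is not a physical particle.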

So, the bottom line is that our simulations agreed with our ideas. That is good. But it is not enough. After all, who can tell whether what we simulate is actually the thing happening in nature? So, we will need an experimental test of this result. This is surprisingly complicated. After all, you cannot really get a measuring stick to determine the size of a particle. Rather, what you do is throw other particles at it, and then see how much they are deflected. At least in principle.

Can this be done for the W? Yes, it can, but it is very indirect. Essentially, it could work as follows: Take the LHC, at which two protons are smashed into each other. In this smashing, it is possible that a Z boson is produced, which then scatters off a W. So, you 'just' need to look at the W before and after. In practice, this is more complicated. Since we cannot send a W in there to hit the Z, we use the fact that, mathematically, this process is related to another one. If we get one, we get the other for free. This other process is one in which the produced Z, carrying a lot of kinetic energy, decays into two W particles. These are then detected, and their directions measured.

As nice as this sounds, it is still horrendously complicated. The problem is that the Ws themselves decay into leptons and neutrinos before they reach the actual detector. And because neutrinos essentially always escape undetected, one can only indirectly infer what has been going on. In particular, the directions of the Ws cannot easily be reconstructed. Still, in principle it should be possible, and we discuss this in our paper. So we can actually measure this size, in principle. It will now be up to the experimental experts whether it can - and will - be done in practice.

Wednesday, October 24, 2018

Looking for something when no one knows how much is there

This time, I want to continue the discussion from some months ago. Back then, I was rather general about how we could test our most dramatic idea. This idea is connected to what we regard as elementary particles. So far, the usual picture is that those you have heard about, the electrons, the Higgs, and so on, are truly the basic building blocks of nature. However, we have found a lot of evidence indicating that what we see in experiment, and call by these names, is actually not the same as the elementary particles themselves. Rather, they are a kind of bound state of the elementary ones, which only at first sight look as if they themselves were the elementary ones. Sounds pretty weird, huh? And if it sounds weird, it means it needs to be tested. We did so with numerical simulations. They all agreed perfectly with the ideas. But, of course, it's physics, and thus we also need an experiment. The only question is which one.

We had some ideas already a while back. One of them will be ready soon, and I will talk about it again in due time. But it will be rather indirect, and somewhat qualitative. The other, however, requires a new experiment, which may need two more decades to be built. Thus, neither alone can be the full answer, and we need something more.

And this 'more' is what we are currently closing in on. Because one needs this kind of weird bound-state structure to make the standard model consistent, not only exotic particles are more complicated than usually assumed. Ordinary ones are too. And the most ordinary ones are protons, the nuclei of hydrogen atoms. More importantly, protons are what is smashed together at the LHC at CERN. So, we already have a machine which may be able to test it. But this is involved, as protons are very messy. Already in the conventional picture they are bound states of quarks and gluons. Our results just say there are more components. Thus, we somehow have to disentangle the old and new components. So, we have to be very careful in what we do.

Fortunately, there is a trick. All of this revolves around the Higgs. The Higgs has the property that it interacts more strongly with particles the heavier they are. The heaviest particles we know are the top quark, followed by the W and Z bosons. And the CMS experiment (and other experiments) at CERN has a measurement campaign to look at the production of these particles together! That is exactly where we expect something interesting can happen. However, our ideas are not the only ones leading to top quarks and Z bosons. There are many known processes which produce them as well. So we cannot just check whether they are there. Rather, we need to understand whether they are there as expected. E.g., whether they fly away from the interaction in the expected directions and with the expected speeds.

So what a master student and I do is the following. We use a program, called HERWIG, which simulates such events. One of the people who created this program helped us to modify it, so that we can test our ideas with it. What we now do is rather simple. An input to such simulations is what the structure of the proton looks like. Based on this, the program simulates how the top quarks and Z bosons produced in a collision are distributed. We now just add our conjectured additional contributions to the proton, essentially a little bit of Higgs. We then check how the distributions change. By comparing the changes to what we get in experiment, we can then deduce how large the Higgs contribution to the proton is. Moreover, we can even indirectly deduce its shape, i.e. how the Higgs is distributed inside the proton.

And this we now study. We iterate modifications of the proton structure with comparisons to experimental results and to predictions without this Higgs contribution. Thereby, we constrain the Higgs contribution to the proton bit by bit. At the current time, we know that the data are only sufficient to provide an upper bound on this amount inside the proton. Our first estimates already show that this bound is actually not that strong, and quite a lot of Higgs could be inside the proton. But on the other hand, this is good, because it means that the data expected from the experiments in the next couple of years will be able either to constrain the contribution further, or even to detect it, if it is large enough. At any rate, we now know that we have a sensitive lever with which to understand this new contribution.
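
To illustrate the 'bit by bit' constraining step, here is a minimal sketch of how an upper bound can be read off from a scan over an assumed Higgs fraction in the proton. The function names, the plain chi-square comparison, and the 95% criterion are illustrative assumptions of mine; they stand in for the actual statistical analysis and have nothing to do with HERWIG's interface.

```python
import numpy as np

def chi2(prediction, data, data_errors):
    # Simple chi-square between a simulated and a measured distribution.
    return np.sum(((prediction - data) / data_errors) ** 2)

def upper_bound(fractions, predictions, data, data_errors, delta=3.84):
    """Scan assumed Higgs fractions in the proton.  predictions[f] is the
    simulated top-quark/Z distribution obtained with fraction f in the
    proton structure.  The bound is the largest fraction whose chi-square
    lies within delta (3.84, roughly 95% for one parameter) of the best fit."""
    values = {f: chi2(predictions[f], data, data_errors) for f in fractions}
    best = min(values.values())
    return max(f for f, c in values.items() if c - best <= delta)
```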

Thursday, September 27, 2018

Unexpected connections

The history of physics is full of stuff developed for one purpose ending up being useful for an entirely different purpose. Quite often they also failed their original purpose miserably, but are paramount for the new one. Newer examples are the first attempts to describe the weak interactions, which ended up describing the strong one. Also, string theory was originally invented for the strong interactions, and failed for this purpose. Now, well, it is the popular science star, and a serious candidate for quantum gravity.

But failing is optional for having a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There, our research used a toy model. We did this because we wanted to understand a mechanism. And because doing the full story would have been much too complicated before we knew whether the mechanism works at all. But it turns out this toy theory may be an interesting theory on its own.

And it may be interesting for a very different topic: Dark matter. This is a hypothetical type of matter of which we see a lot of indirect evidence in the universe. But we are still mystified about what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws the moth. Hence, our group in Graz is starting to push in this direction as well, curious about what is going on. For now, we follow the most probable explanation, that there are additional particles making up dark matter. Then there are two questions: What are they? And do they interact with the rest of the world, and if so, how? Aside from gravity, of course.

Next week I will go to a workshop in which new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noticed that there is this connection. I will actually present this idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time one as plausible as many others.

And here is how it works. Theories of the grand-unified type were for a long time expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite a few massless particles, like the photon and the gluons. However, our results showed, with an improved treatment and a shift in paradigm, that this is not always true. At least some of these theories do not have massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles. Because otherwise the massive ones could decay into the massless ones. And then the mass is gone, and this does not work. That is the reason why such theories had been excluded. But with our new results, they become feasible. Even more, we have a lot of indirect evidence that dark matter is not just a single, massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we see consists of many particles, so why should dark matter not do so as well? And this is also realized in our model.

And this is how it works. The scenario I will describe (you can download my talk already now, if you want to look for yourself - though it is somewhat technical) contains two different types of stable dark matter. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this, to make sure that everything works with what astrophysics tells us. Moreover, this setup gives us two additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows us to test this model not only by astronomical observations, but also at CERN. This gives the basic idea. Now, we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned to see whether it actually makes sense. Or whether the model will have to wait for another opportunity.

Monday, August 13, 2018

Fostering an idea with experience

In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This actually sounds easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment or calculate when we play around with models. An observation is always the outcome when we set something up initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. On top of this comes the modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we so far need just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we do a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe. Not to mention all possible observations of all theories. And it is here where the problem starts. The older ideas still exist because they are not bad, but rather explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show the disagreement.

And now enters the third problem: We actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master thesis. And sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice. Because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration. Such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I did not specify yet. And just guessing would indeed lead to a lot of frustration.

The thing which helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly estimated incorrectly, this would require us to reevaluate our ideas. And it is our experience which helps us to get from insights to estimates.

This defines our process to test our ideas. And this process can actually be traced out quite well in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were indeed confirmed as well as was possible with the amount of computing time we had. This we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we already came up with some new first ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

Tuesday, June 12, 2018

How to test an idea

As you may have guessed from reading through the blog, our work is centered around a change of paradigm: That there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments is actually more complicated than what we usually assume. That these particles are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics. And upon the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying, more formal theoretical foundation seriously. The reason that there is no huge clash is that the standard model is very special. Because of this, both pictures give almost the same predictions for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the particle which we observe and call the Higgs is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one, plus a very tiny correction.

So far, this does not seem to be something it is necessary to worry about.

However, there are many good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model may work much less well in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This is what came out of our numerical simulations. Of course, these are not perfect. And, after all, we have unfortunately not yet discovered anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems a bit far-fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So, the ideas also make a statement that even within the standard model there should be a difference. The only question is, what really is the value of this 'little bit'? So far, experiments did not show any deviations from the usual picture. So the 'little bit' indeed needs to be rather small. But we have a calculational prescription for this 'little bit' in the standard model. So, at the very least, what we can do is calculate this 'little bit' in the standard model. We should then see whether the value of the 'little bit' may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, this would raise a lot of questions about the basic theory, but, well, experiment rules. And thus we would need to go back to the drawing board, and get a better understanding of the theory.

Or we get something which is in agreement with current experiments, because it is smaller than the current experimental precision. But then we can make a statement about how much better experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that it will not be possible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. It will also not yield perfect results, but hopefully good enough ones. Also, how simple the calculations are depends strongly on the type of experiment. We did a first few steps, though for a type of experiment not (yet) available, but hopefully available in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. For that, we still need a much better understanding of the underlying mathematics. That we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at the experiments to please check these predictions. So, stay tuned.

By the way: This is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, these may be very, very many. If your idea passes this test: Great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have an idea which works with everything we know, use it to make a prediction where you get a difference from our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: Great! You have just rewritten our understanding of nature. If not: Well, go back to fix it, or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But at this stage we are a little short of those. That may change again. If your theory has no predictions which can be tested experimentally in any foreseeable future, well, it is a good question how to deal with that, and there is not yet a consensus on how to proceed.

Thursday, March 29, 2018

Asking questions leads to a change of mind

In this entry, I would like to digress a bit from my usual discussion of our physics research subject. Rather, I would like to talk a bit about how I do this kind of research. There is a twofold motivation for me to do this.

One is that I am currently teaching, together with somebody from the philosophy department, a course on the philosophy of science in physics. It came as a surprise to me that one thing the students of philosophy are interested in is how I think. What the objects, or subjects, are, and how I connect them when doing research. Or even when I just think about a physics theory. The other is the review I have recently written. Both topics may seem unrelated at first. But there is a deep connection. It is less about what I have written in the review, and more about what led me up to this point. This requires some historical digression into my own research.

In the very beginning, I started out doing research on the strong interactions. One of the features of the strong interactions is that the supposed elementary particles, quarks and gluons, are never seen separately, but only in combinations, as hadrons. This is a phenomenon which is called confinement. It is always somehow presented as a mystery. And as such, it is interesting. Thus, one question in my early research was how to understand this phenomenon.

Doing that, I came across an interesting result from the 1970s. It appears that an effect which at first sight is completely unrelated is very intimately related to confinement. At least in some theories. This is the Brout-Englert-Higgs effect. However, we seem to observe the particles responsible for and affected by the Higgs effect. And indeed, at that time, I was still thinking that the particles affected by the Brout-Englert-Higgs effect, especially the Higgs and the W and Z bosons, are just ordinary, observable particles. When one reads my first paper of this time on the Higgs, this is quite obvious. But then there was the result from the 1970s. It stated that, on a very formal level, there should be no difference between confinement and the Brout-Englert-Higgs effect, in a very definite way.

Now, the implications of that seriously sparked my interest. But I thought this would help me to understand confinement, as it was still very ingrained in me that confinement is a particular feature of the strong interactions. The mathematical connection I just took as a curiosity. And so I started to do extensive numerical simulations of the situation.

But while trying to do so, things which did not add up started to accumulate. This is probably most evident in a conference proceeding where I tried to make sense of something which, with hindsight, could never be interpreted in the way I did there. I still tried to press the result into the scheme of thinking that the Higgs and the W/Z are physical particles which we observe in experiment, as this is the standard lore. But the data would not fit this picture, and the more and better data I gathered, the more conflicted the results became. At some point, it was clear that something was amiss.

At that point, I had two options. Either keep the concepts of confinement and the Brout-Englert-Higgs effect as they have been understood since the 1960s. Or take the data seriously, assuming that these conceptions were wrong. It probably signifies my difficulties that it took me more than a year to come to terms with the results. In the end, the decisive point was that, as a theoretician, I needed to take my theory seriously, no matter the results. There is no way around it. And if it gave a prediction which did not fit my view of the experiments, then necessarily either my view or the theory was incorrect. The latter seemed more improbable than the former, as the theory fits experiment very well. So, finally, I found an explanation which was consistent. And this explanation accepted the curious mathematical statement from the 1970s that confinement and the Brout-Englert-Higgs effect are qualitatively, though not quantitatively, the same. And thus the conclusion was that what we observe are not really the Higgs and the W/Z bosons, but rather some interesting composite objects, just like hadrons, which due to a quirk of the theory behave almost as if they were the elementary particles.

This was still a very challenging thought to me. After all, it was quite contradictory to the usual notions. Thus, it came as a very great relief to me that during a trip a couple of months later someone pointed me to a few papers from the early 1980s, almost forgotten by most, which gave, for a completely different reason, the same answer. Together with my own observation, this made things click, and everything started to fit together - the 1970s curiosity, the standard notions, my data. I published that in mid-2012, even though it still lacked some more systematic work. But it still required shifting my thinking from agreement to real understanding. That came in the years to follow.

The important click was to recognize that confinement and the Brout-Englert-Higgs effect are, just as pointed out mathematically in the 1970s, really just two faces of the same underlying phenomenon. On a very abstract level, essentially all particles which make up the standard model are really just a means to an end. What we observe are objects which are described by them, but which they are not themselves. They emerge, just like hadrons emerge in the strong interaction, but with very different technical details. This is actually very deeply connected with the concept of gauge symmetry, but that becomes technical quickly. Of course, since this is fundamentally different from the usual way of thinking, it required confirmation. So we went ahead, made predictions which could distinguish between the standard way of thinking and this way of thinking, and tested them. And it came out as we predicted. So, it seems we are on the right track. And all the details, all the ifs, hows, and whys, and all the technicalities and math, you can find in the review.

To now come full circle to the starting point: What happened in my mind during this decade was that the way I think about the physical theory I try to describe, the standard model, changed. In the beginning I was thinking in terms of particles and their interactions. Now, very much motivated by gauge symmetry, and, not incidentally, by its deeper conceptual challenges, I think differently. I no longer think of the elementary particles as entities in themselves, but rather as auxiliary building blocks of the actually experimentally accessible quantities. The standard 'small-ball' analogy went fully away, and there formed, well, hard to say, a new class of entities, which does not necessarily have any analogy. Perhaps the best analogy is that of, no, I really do not know how to phrase it. Perhaps at a later time I will come across something. Right now, it is more math than words.

This also transformed the way I think about the original problem, confinement. I am curious where this, and all the rest, will lead. For now, the next step will be to go beyond simulations, and see whether we can find some way to actually test this in experiment. We have some ideas, but in the end, it may be that present experiments will not be sensitive enough. Stay tuned.

Wednesday, February 7, 2018

How large is an elementary particle?

Recently, in the context of a master thesis, our group has begun to determine the size of the W boson. The natural questions about this project are: Why do you do that? Do we not know it already? And do elementary particles have a size at all?

It is best to answer these questions in reverse order.

So, do elementary particles have a size at all? Well, elementary particles are called elementary as they are the most basic constituents. In our theories today, they start out as pointlike. Only particles made from other particles, so-called bound states like a nucleus or a hadron, have a size. And now comes the but.

First of all, we do not yet know whether our elementary particles are really elementary. They may also be bound states of even more elementary particles. But in experiments we can only determine upper bounds to the size. Making better experiments will reduce this upper bound. Eventually, we may see that a particle previously thought of as point-like has a size. This has happened quite frequently over time. It always opened up a new level of elementary particle theories. Therefore measuring the size is important. But for us, as theoreticians, this type of question is only important if we have an idea about what could be the more elementary particles. And while some of our research is going into this direction, this project is not.

The other issue is that quantum effects give all elementary particles an 'apparent' size. This comes about through how we measure the size of a particle. We do this by shooting some other particle at it, and measuring how strongly it becomes deflected. A truly pointlike particle has a very characteristic deflection profile. But quantum effects allow for additional particles to be created and destroyed in the vicinity of any particle. Especially, they allow for the existence of another particle of the same type, at least briefly. We cannot distinguish whether we hit the original particle or one of these. Since they are not at the same place as the original particle, their average distance looks like a size. This gives even a pointlike particle an apparent size, which we can measure. In this sense even an elementary particle has a size.
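
For the curious: the standard textbook way to turn such a deflection pattern into a number is through a form factor F(Q^2), which encodes how the deflection depends on the momentum transfer Q. The mean square radius is then read off from the slope of the form factor at zero momentum transfer (the quantity we actually use in our project is a more indirect analogue of this, so take the formula only as the general idea):

```latex
\langle r^2 \rangle = -6 \left.\frac{\mathrm{d}F(Q^2)}{\mathrm{d}Q^2}\right|_{Q^2 = 0},
\qquad F(0) = 1 .
```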

So, how can we then distinguish this size from an actual size of a bound state? We can do this by calculations. We determine the apparent size due to the quantum fluctuations and compare it to the measurement. Deviations indicate an actual size. This is because for a real bound state we can scatter somewhere in its structure, and not only in its core. This difference looks pictorially like this:


So, do we know the size already? Well, as said, we can only determine upper limits. Searching for them is difficult, and often goes via detours. One such detour is so-called anomalous couplings. Measuring how they depend on energy provides indirect information on the size. There is an active program at CERN underway to do this experimentally. The results so far say that the size of the W is below 0.0000000000000001 meters. This seems tiny, but in the world of particle physics this is not that strong a limit.

And now the interesting question: Why do we do this? As written, we do not want to make the W a bound state of something new. But one of our main research topics is driven by an interesting theoretical structure. If the standard model is taken seriously, the particle which we observe in an experiment and call the W is actually not the W of the underlying theory. Rather, it is a bound state which is very, very similar to the elementary particle, but actually built from the elementary particles. The difference has been so small that identifying one with the other was a very good approximation up to today. But with better and better experiments this may change. Thus, we need to test this.

Because the thing we measure is then a bound state, it should have a, probably tiny, size. This would be a hallmark of this theoretical structure. And a sign that we understood it. If the size is such that it could actually be measured at CERN, then this would be an important test of our theoretical understanding of the standard model.

However, this is not a simple quantity to calculate. Bound states are intrinsically complicated. Thus, we use simulations for this purpose. In fact, we actually go over the same detour as the experiments, and will determine an anomalous coupling. From this we then infer the size indirectly. In addition, the need to perform efficient simulations forces us to simplify the problem substantially. Hence, we will not get the perfect number. But we may get the order of magnitude, or perhaps be within a factor of two, or so. And this is all we currently need to say whether a measurement is possible, or whether it will have to wait for the next generation of experiments. And thus whether we will know within a few years or within a few decades whether we understood the theory.

Monday, January 22, 2018

Finding - and curing - disagreements

The topic of grand-unified theories came up in the blog several times, most recently last year in January. To briefly recap, such theories, called GUTs for short, predict that all three forces between elementary particles emerge from a single master force. That would explain a lot of unconnected observations we have in particle physics. For example, why atoms are electrically neutral. The latter we can describe, but not yet explain.

However, if such a GUT exists, then it must not only explain the forces, but also somehow why we see the numbers and kinds of elementary particles we observe in nature. And now things become complicated. As discussed in the last entry on GUTs, there may be a serious issue in how we determine which particles are actually described by such a theory.

To understand how this issue comes about, I need to put together many different things my research partners and I have worked on during the last couple of years. All of these issues are actually put into expert language in the review I talked about in the previous entry. It is now finished, and if you're interested, you can get it for free from here. But it is very technical.

So, let me explain it less technically.

Particle physics is actually superinvolved. If we wanted to write down a theory which describes what we see, and only what we see, it would be terribly complicated. It is much simpler to introduce redundancies into the description, so-called gauge symmetries. This makes life much easier, though still not easy. However, the most prominent feature is that we add auxiliary particles to the game. Of course, they cannot really be seen, as they are just auxiliary. Some of them are very obviously unphysical, and are therefore called ghosts. They can be taken care of comparatively simply. For others, this is less simple.

Now, it turns out that the weak interaction is a very special beast. In this case, there is a unique one-to-one identification between a really observable particle and an auxiliary particle. Thus, it is almost correct to identify both. But this is due to the very special structure of this part of particle physics.

Thus, a natural question is whether, even if it is special, it is justified to do the same for other theories. Well, in some cases this seems to be so. But we suspected that this may not be the case in general. And especially not in GUTs.

Recently, we went about this much more systematically. You can again access the (very, very technical) result for free here. There, we looked at a very generic class of such GUTs. Well, we actually looked at the most relevant part of them, and still by far not all of them. We also ignored a lot of stuff, e.g. what would become quarks and leptons, and concentrated only on the generalization of the weak interaction and the Higgs.

We then checked, based on our earlier experiences and methods, whether a one-to-one identification of experimentally accessible and auxiliary particles works. And it essentially never does. Visually, this result looks like


On the left, it is seen that everything works nicely with a one-to-one identification in the standard model. On the right, if a one-to-one identification worked in a GUT, everything would still be nice. But our more precise calculation shows that the actual situation, which would be seen in an experiment, is different. No one-to-one identification is possible. And thus the prediction of the GUT differs from what we already see in experiments. Thus, a previously good GUT candidate is no longer good.

Though more checks are needed, as always, this is a baffling, and at the same time very discomforting, result.

Baffling, because we originally expected to have problems only under very special circumstances. It now appears that actually the standard model of particle physics is the very special case, and having problems is the standard.

It is discomforting because in the powerful method of perturbation theory the one-to-one identification is essentially always made. As this tool is widely used, this seems to question the validity of many predictions on GUTs. That could have far-reaching consequences. Is this the case? Do we need to forget everything about GUTs we learned so far?

Well, not really, for two reasons. One is that we also showed that methods almost as easy to handle as perturbation theory can be used to fix the problems. This is good, because more powerful methods, like the simulations we used before, are much more cumbersome. However, this leaves us with the problem of having made wrong predictions so far. Well, this we cannot change. But this is just normal scientific progress. You try, you check, you fail, you improve, and then you try again.

And, in fact, this does not mean that GUTs are wrong. Just that we need to consider somewhat different GUTs, and make the predictions more carefully next time. Which GUTs we need to look at we still have to figure out, and that will not be simple. But, fortunately, the improved methods mentioned beforehand can use much of what has been done so far, so most technical results are still unbelievably useful. This will help enormously in finding GUTs which are applicable, and yield a consistent picture, without the one-to-one identification. GUTs are not dead. They likely just need a bit of changing.

This is indeed a dramatic development. But it is one which fits logically and technically with the improved understanding of the theoretical structures underlying particle physics developed over the last decades. Thus, we are confident that this is just the next logical step in our understanding of how particle physics works.