Monday, March 26, 2012

Methods. United, they are strong.

In the last few postings, I have collected a number of methods: perturbation theory, simulations, and the abstract equations of motion. I have furthermore given you a bit of a taste of one of our most important strategies: divide and conquer. Or, more bluntly, if the original problem is too complicated, first try a simpler one which resembles it. This led us to a stack of models, which bit by bit include more and more details of the world.

This list is by no means complete. Over the years, decades, and centuries, physicists have developed many methods. I could probably fill a blog all of its own just by giving a brief introduction to each of them. I will not do this here. Since the main purpose of this blog is to write about my own research, I will content myself with this list of methods. These are, right now, the ones I use myself.

You may now ask: why do I use more than one method? What is the advantage in this? To answer this, let's have a look at my work-flow. Well, actually this is similar to what many people in theoretical particle physics do, just with some variations in the choice of methods and topics.

The ultimate goal of my work is to understand the physics encoded in the standard model of particle physics, and to get a glimpse of what else may be out there. Not an easy task at all. One that many people work on, many hundreds, probably even thousands nowadays. And not something to be done in an afternoon, not at all. We have known the standard model, more or less, for about forty years at the time of this writing. And we have been thinking about what else there might be in particle physics for essentially as long as it has existed.

Thus, the first thing I do is to make things more manageable. I do this by making a simpler model of particles. I will give some examples of these simpler models in the next few entries. For now, let's say I just keep a few of the particles, and one or two of their interactions, not more. This looks much more like something I can deal with. Ok, so now I have to treat this chunk of particles happily playing around with each other.

To get a first idea of what I am facing, I usually start off with perturbation theory, if no one else has done this before me. This gives me an idea of what is going on when the interactions are weak. This hides much of the interesting stuff, but it gives me a starting point. Also, very many insights of perturbation theory can be gained with a sheet of paper and a pencil (and many erasers), and probably a good table of mathematical formulas. Thus, I can be reasonably sure that what I do is right. Whatever I do next then has to reduce to what I just did when the interactions become weak.
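To give a flavor of what 'pencil-and-paper' perturbation theory means in practice, here is a minimal sketch on a toy equation of my own choosing, which has nothing to do with the standard model: the equation x = 1 + g x² is solved order by order in a small coupling g, just as perturbation theory expands physical quantities in powers of a weak interaction.

```python
# A toy illustration of the idea of perturbation theory (the equation is
# invented for illustration; it is not a particle-physics equation):
# solve x = 1 + g*x**2 as a power series in a small coupling g.
import sympy as sp

g = sp.symbols('g')
order = 5

x = sp.Integer(1)  # the "free" (g = 0) solution is the starting point
for _ in range(order):
    x = sp.expand(1 + g * x**2)
    # Keep only terms up to the desired order in the coupling.
    x = sum(x.coeff(g, n) * g**n for n in range(order + 1))

print(x)  # 1 + g + 2*g**2 + 5*g**3 + 14*g**4 + 42*g**5
```

Each pass through the loop fixes one more order of the expansion, and the low orders never change again; that stability is what makes a weak-coupling expansion a trustworthy starting point.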

Now I turn to the things which really interest me. What happens when the interactions are not weak? When they are strong? To get an idea of this, the next step is to perform some simulations of the theory. This gives me a rough idea of what is going on. How the theory behaves. What kinds of interesting phenomena occur. Armed with this knowledge, I have already gained quite a lot of understanding of the model. I usually know then the typical ways the particles arrange themselves. How their interaction changes when looking at it from different directions. What the fate of the symmetries is. And a lot more details.
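For readers who wonder what such a simulation looks like in practice, here is a hedged, minimal sketch. It uses the standard Metropolis Monte-Carlo update, but on a toy statistical model (a one-dimensional Ising chain) rather than on a particle-physics theory; the chain length, coupling, and number of sweeps are purely illustrative choices.

```python
# Metropolis Monte-Carlo simulation of a toy model: a 1d Ising chain with
# periodic boundary conditions and coupling J = 1. It estimates the average
# absolute magnetization per site at inverse temperature beta.
import math
import random

def metropolis_ising(n_sites=100, beta=0.8, n_sweeps=2000, seed=1):
    random.seed(seed)
    spins = [1] * n_sites          # start from a completely ordered configuration
    magnetizations = []
    for sweep in range(n_sweeps):
        for i in range(n_sites):
            left, right = spins[(i - 1) % n_sites], spins[(i + 1) % n_sites]
            delta_e = 2 * spins[i] * (left + right)   # energy change if spin i flips
            if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
                spins[i] = -spins[i]                  # accept the flip
        if sweep > n_sweeps // 2:  # discard the first half as thermalization
            magnetizations.append(abs(sum(spins)) / n_sites)
    return sum(magnetizations) / len(magnetizations)

print(metropolis_ising())
```

Real lattice simulations follow the same pattern, just with far more complicated variables, updates, and observables.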

With this, I stand at a crossroads. I can either go on and deepen my understanding by improving my simulations. Or I can make use of the equations of motion to understand the internal workings a bit better. What usually decides me for the latter is that many questions about how a theory works are best answered by going to extremes: going to very long or very short distances when poking the particles, or looking at very light or very heavy particles. Simulations cannot do this with an affordable amount of computing time. So I formulate my equations. Then I have to make approximations, as the equations are usually too complicated. For this, I use the knowledge gained from the simulations. And then I solve the equations, thereby learning more about how the model works.

When I am done to my satisfaction, I can either enlarge the model somewhat, by adding some more particles or interactions, or move on to a different model. Hopefully, in the end, I arrive at the standard model.

What sounds so very nice and straightforward up to here is not. The process I describe is an ideal. Even if it should work out like this, I am talking about several years of work. But usually it does not. I run across all kinds of difficulties. It could turn out that my approximations for the equations of motion have been too bold, and I get no sensible solution. Then I have to do more simulations to improve the approximations. Or the calculations with the equations of motion tell me that I was looking at the wrong thing in my simulations. That the thing I was looking at was deceiving me, and gave me a wrong idea about what is going on. Or it can turn out that the model cannot be simulated efficiently enough, and I would have to wait a couple of decades to get a result. Then I have to learn more about my model. Possibly, I even have to change it and start from a different model. This often requires quite a detour to get back to the original model, and may itself take many years of work. And then it may happen that the different methods give different results, and I have to figure out what is going on and what to improve.

You see, working on a problem means for me going over it many times, comparing the different results. Eventually, it is the requirement that the different methods have to agree in the end which guides my progress. Thus, a combination of different methods, each with their specific strengths and weaknesses, is what permits me to make progress. In the end, reliability is what counts. And for reliability, nothing beats a set of methods all pointing to the same answer.

Monday, March 19, 2012

Modelling reality

Ever wondered why it is called the standard model of particle physics? And what a physicist has in mind when she talks about models?

Models are the basic ingredient of what a theoretical physicist does. The problem is that we do not know the answer; we do not know the fundamental theory of everything. Thus, the best we can do is take what we know and make a guess. The result of such a guess is a model. Such a model should describe what we see. Thus, the standard model of particle physics is the one model encoding what we know about particle physics right now, as incomplete as it may be. It is called the standard one because it is our best effort to describe nature so far, to model nature in terms of mathematics. There are also other standard models. We have one for how the sun functions, the standard model of the sun, and one for how the universe evolved, the standard model of cosmology.

Now, when I say it is our best guess, this implies that it is not necessarily right. Well, actually it is, in a sense. It was made the standard model because it describes (or, if you read this in a couple of years, perhaps has described) our experiments as well as we could wish for. That means we have found no substantial evidence against this model within the domain accessible to experiment. This sentence has two important warning signs attached.

The first is about the domain. We do not know the final theory. But we do know our models. And any decent model will tell us what it can describe and what it cannot. This also applies to the standard model. It tells us: 'Sorry guys, I cannot tell what is happening at very large energies, and on the matter of gravitation, well, I stay away from this entirely.' This means that this standard model will only remain the standard model until we have figured out what is going on elsewhere: at higher energies, or with gravitation. However, this does not mean that the standard model will be completely useless once we have managed that. As with many standard models in the past, it will likely just become part of a larger picture, and remain a well-trusted companion, at least in some area of physics. This happened to Newton's law, which was superseded by special relativity, and later by general relativity. It happened to Maxwell's theory of electromagnetism, which was superseded by quantum electrodynamics, and later by the standard model. Of course, there is once more no guarantee, and it may happen that we have to replace the standard model entirely once we see the bigger picture. But right now this seems unlikely.

The other warning sign was about the experiment. Models are created to describe experiments (or observations, when we think about the universe). Their justification rests on describing experiments. We can have some experimental result, and cook up a model to explain it. Then we make a prediction, and do an experiment to test it. Either it works, and we go on. Or it does not, and then we discard the model. The development of the standard model was a long, painful process during which many models were developed, proposed, checked, and finally discarded. Only one winner remained, the model which we now call the standard model.

Ok, nice and cozy, and that is how science works. But I was talking about methods the last couple of times, so what has this to do with it? Well, this should just prepare you for an entirely different type of model, to avoid confusion. Hopefully. Now, the standard model is the model of particle physics. But, honestly, it is a monster. Just writing it down during a lecture requires something like fifteen minutes, two blackboards, and two months of preparation to explain all the symbols, abbreviations, and notions involved in writing it in such a brief form. I know, I have done it. If you want to solve it, things often go from bad to worse. That is where models come in once more.

Think of the following: you want to describe how electric current flows inside a block of, say, aluminum. In principle, this is explained by the standard model. The nuclei of aluminum come from the strong force, the electrons from the electromagnetic one, and both are decorated with some weak-interaction effects. If you really wanted to try describing this phenomenon using the standard model, you would be very brave indeed. No physicist has yet tried to undertake such an endeavor. The reason is that the description using the standard model is very, very complicated, and most of it turns out to be completely irrelevant for the electric current in aluminum. To manage complexity, therefore, physicists investigating aluminum do not use the standard model of particle physics in its full glory, but reduce it very, very much, and end up with a much simpler theory. This theory models aluminum, but has forgotten essentially everything about particle physics. It is then a model of aluminum. And it works nicely for aluminum. Applying it to, say, copper will not work, as aluminum nuclei have been put into it as elementary entities, to avoid the strong interactions. You would then need a different model for copper, or at least different parameters.

So, we threw away almost all of the power of the standard model. For what? Actually, for something worth the loss: the final model of aluminum is simple enough that we can solve it. Most of our understanding of materials, technology, chemistry, and biology (all described by the standard model of particle physics, in principle) rests on such simplified models. With only the standard model, we would not be able to accomplish anything useful for these topics, even knowing so much about particles. In fact, historically, the development even went the other way around. We started with simple models, describing few things, and generalized bit by bit.

Ok, you may say, you see the worth of simplified models for practical applications. But, you may ask, surely you do not simplify in particle physics? Well, unfortunately, we have to, yes. Even when only describing particles, the standard model is so complicated that we are not really able to solve it. So we very often make models describing only part of it. Most of what we know about the strong interactions has been learned by throwing away most of the weak interactions, to have a simpler model. When talking about nuclear physics, we reduce even further. Also, when we talk about physics beyond the standard model, we often first create very simple-minded models, and in fact neglect the standard-model part. Only when we start to do experiments do we begin to incorporate some parts of the standard model.

Again, we do this for the sake of manageability. Only by first solving simpler models do we understand how to deal with the big picture. In particle physics, the careful selection of simplified models is what has driven our insight for decades. And it will continue to do so. This strategy is called divide and conquer. It is a central concept in physics, but also in many other areas where you have to solve complicated problems.

Of course, there is always a risk. The risk is that we simplify the model too much. That we lose something important on the way. We try to avoid that, but it has happened, and will happen again. Therefore, one has to be careful with such simplifications, and double-check. Often, it turns out that a model makes very reliable predictions for some quantities, but fails utterly for others. Often, our intuition and experience tell us ahead of time what is a sensible question for a given model. But sometimes we are wrong. Then experiment is one of the things which puts us back on track. Or we may actually be able to calculate something in the full standard model, and find a discrepancy compared to the simple model.

In the past, such simplified models were created from very general intuition, and by including some of the symmetries of the original theory. Over time, we have also learned how to construct models more or less systematically. This systematic approach is referred to as effective field theory. The name comes about because the approach creates a (field) theory which is an effective (thus manageable) version of a more complicated field theory in a certain special case, e.g. at low energies.
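A standard textbook illustration of this (my choice of example, not one spelled out above) is the exchange of a very heavy particle of mass M at low momentum transfer q. Its propagator can be expanded as

```latex
\frac{1}{q^2 - M^2} \;=\; -\frac{1}{M^2}\left(1 + \frac{q^2}{M^2} + \frac{q^4}{M^4} + \dots\right),
```

so that far below the mass M the exchange looks like a point-like interaction plus small, systematically improvable corrections. That is precisely the sense in which an effective field theory is a manageable version of a more complicated theory at low energies.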

Thus, you see that models are in fact a versatile part of our tool kit. But they are only to some extent a method: we still have to specify how we perform calculations in them. And that will then lead us to the important concept of combining methods next time.

Thursday, March 15, 2012

The equations that describe the world

Ever since mathematics was introduced into the description of physics, we have striven to describe reality in terms of equations. Arguably one of the best-known equations is Newton's law that the acceleration of an object is given by the force acting upon it divided by its mass. Such equations should not be taken as everlasting truths. For this law of Newton, we know that it is not fully satisfied if we try to describe a quantum object, or if the speed of the object is close to that of light. Nonetheless, the equation remains useful, as there are many cases where neither applies. The best-known example is the movement of the planets around the sun. This equation is, however, not yet complete. It states everything there is to know about the particle, but nothing about the force. It is what is called a kinematic equation.
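In symbols, with the usual notation of acceleration a, force F, and mass m, the law reads

```latex
a \;=\; \frac{F}{m}\,, \qquad\text{or equivalently}\qquad F \;=\; m\,a\,.
```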

We need to supplement it with an equation for the force. In the case of the movement of a planet around the sun, this is Newton's law of gravitation: the force is given by a constant, which has to be measured, times the mass of the sun times the mass of the planet, divided by the square of the distance between the sun and the planet. With this, we know enough to solve the equation, and we find after some tedious calculations (to be done by every first-semester physics student) that the planet moves on an elliptical orbit around the sun. With the force given, the equation therefore describes the motion of the planet. It is thus called an equation of motion. Generically, if we can formulate the equations of motion for a theory, we have everything at our disposal to describe the solutions of the theory. However, in general we have to supplement the equations with the situation we actually want to describe with the theory. In the case of the planet, we have to add where the planet was and where it was moving to at a certain instant of time. Otherwise, the equation of motion would give us the solutions for all possible initial positions and velocities of the planet, and thus an infinite number of possible solutions of the theory. Such additional information is called boundary conditions. Boundary conditions select, out of all the possible behaviors described by a theory, the particular one compatible with the state the system is in.
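To make the roles of the equation of motion and the boundary conditions a bit more tangible, here is a small numerical sketch. It integrates Newton's law with the 1/r² gravitational force for a toy planet; the choice of units (the constant times the mass of the sun set to 1) and the initial values are purely illustrative assumptions of mine.

```python
# Integrate the equation of motion a = F/m for a planet in a 1/r^2
# gravitational field. Units are chosen so that G * M_sun = 1 and the
# planet's mass drops out (illustrative conventions only).
import math

def orbit(x, y, vx, vy, dt=1e-3, steps=20000):
    points = []
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3            # acceleration from the law of gravitation
        vx, vy = vx + ax * dt, vy + ay * dt  # semi-implicit Euler step
        x, y = x + vx * dt, y + vy * dt
        points.append((x, y))
    return points

# The initial position and velocity are the boundary conditions: they pick
# one particular ellipse out of the infinitely many possible orbits.
trajectory = orbit(x=1.0, y=0.0, vx=0.0, vy=0.8)
print(min(math.hypot(px, py) for px, py in trajectory))  # closest approach to the sun
```

Change the initial velocity and you get a different ellipse (or an escape trajectory); the equation of motion itself stays the same.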

This concept may at first sound like something very much tied to Newton's law. In fact, it is not. Already before 1900, we knew how to write down the equations of motion for all kinds of non-quantum physics happening at speeds much less than that of light. Unfortunately, knowing how to write down the equations is not the same as being able to solve them. For example, we know very well the equations of motion describing how a river flows. But as soon as it flows quickly over rough ground, such that it becomes turbulent, we are no longer able to solve the equations. In such cases we are often forced to resort to the simulation methods discussed previously.
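For the curious, these equations for an (incompressible) river are the Navier-Stokes equations, which in standard notation (velocity field v, pressure p, density ρ, viscosity μ, external force f) read

```latex
\rho\left(\frac{\partial \vec v}{\partial t} + (\vec v\cdot\nabla)\vec v\right)
  \;=\; -\nabla p + \mu\,\nabla^2\vec v + \vec f\,,
\qquad \nabla\cdot\vec v \;=\; 0\,.
```

Writing them down takes two lines; solving them for a turbulent flow is, as said above, beyond what we can do by hand.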

Now, what happens when things get fast or quantum? Well, when things get fast, not much changes. The equations just look a bit different, and are much nastier, but that is more or less all. When things go quantum, it becomes weirder. Since in a quantum world we have this problem of being either wave or particle, it is no longer really possible to talk about moving objects. Nonetheless, people have been able to formulate something which is in spirit close to the equations of motion, the so-called quantum equations of motion (sometimes called Dyson-Schwinger or Schwinger-Dyson equations, honoring the people who developed them). These equations describe, in a way, the average behavior of particles in a quantum theory. Supplemented once more by boundary conditions, they describe the contents of a theory completely. Thus, they are powerful indeed. But as with anything powerful, things get complicated. Only for very, very simple theories is it possible to solve these equations exactly. For theories like the standard model, one has to introduce severe approximations (often called truncations) to be able to solve them. If these approximations are made wisely and with insight, the questions we put to the theory can still be answered correctly. But it often takes a very long time to understand how to do the approximations right.
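To give a flavor of what 'solving a truncated equation' can look like, here is a deliberately oversimplified sketch: a single self-consistent ('gap'-type) equation, invented purely for illustration, solved by the kind of fixed-point iteration also used for real truncated Dyson-Schwinger systems, which couple many such equations for whole functions rather than single numbers.

```python
# A schematic stand-in for a truncated quantum equation of motion: solve the
# invented self-consistent equation M = m0 + g / (1 + M) by fixed-point
# iteration, feeding the current guess back into the right-hand side.
def solve_gap_equation(m0=0.1, g=1.0, tol=1e-10, max_iter=1000):
    m = m0                           # start from the "tree-level" value
    for _ in range(max_iter):
        m_new = m0 + g / (1.0 + m)   # one truncated "self-energy" feedback
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    raise RuntimeError("iteration did not converge")

print(solve_gap_equation())
```

The hard physics questions are hidden in what one keeps on the right-hand side, which is exactly the truncation issue described above.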

The way these quantum equations of motion (or the ones for non-quantum physics) look is by no means unique. We are free to reformulate them mathematically. Such reformulations always leave us with the same physics, of course, but the equations look rather different. This is often very helpful, as the different formulations have very different properties, and very different advantages and disadvantages when it comes to doing calculations. Thus, by exploiting the different reformulations in a wise way, one can go a long way in solving the equations.

In the case of the quantum equations of motion, a particularly useful reformulation is given by the so-called functional renormalization group equations. That sounds like an awfully big thing, but the idea behind it is rather straightforward. The idea of this reformulation is not to swallow the whole theory as one big thing, but to chop it into simpler bites and take them one after the other. Technically, this is realized by slicing up the energies which particles are allowed to have, and including only particles within a particular range of energies in each single step. The whole of reality is then built up by adding the particles with different energies one after the other. Though this, too, cannot be done exactly for most theories, it is a very useful complementary way of solving the equations, and it has had great successes.
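The 'one bite after the other' idea can be illustrated with a toy flow of a single coupling (a textbook one-loop-style running that I use here as a stand-in; the full functional equation deals with whole functionals, not one number). Each integration step adds the effect of one thin slice of energies, and repeating the steps builds up the answer from the highest scale down to the lowest.

```python
# A toy renormalization-group flow: d(lam)/d(log k) = b * lam**2, integrated
# in small steps of log(energy) from a high scale k_uv down to a low scale k_ir.
import math

def run_coupling(lam_uv=0.2, b=3.0, k_uv=1000.0, k_ir=1.0, n_steps=10000):
    lam = lam_uv
    # Negative step size: we flow downwards in energy, slice by slice.
    dt = (math.log(k_ir) - math.log(k_uv)) / n_steps
    for _ in range(n_steps):
        lam += b * lam * lam * dt    # contribution of one thin slice of energies
    return lam

print(run_coupling())  # for b > 0 the coupling shrinks towards low energies
```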

Both approaches together are often collected under the common name of functional methods. 'Functional' here stands for the fact that, on a mathematical level, both are strictly speaking not dealing with ordinary functions. Rather, they deal with functions of functions, so-called functionals. This sounds awful and is in fact as awful as it sounds. But it is the price one has to pay when one wants to venture into quantum physics mathematically. Nonetheless, this name is nowadays attached to a collection of different formulations of the quantum equations of motion. These are a great help in describing and understanding physics in every detail. In contrast to the lattice methods, it is easy to disassemble the equations and to understand what each and every part is doing. Though very complicated to solve, they are a vital part of the physicist's tool box in one way or another, and thus remain something I work with on a daily basis.