Equilibrium
It seems odd to begin the study of life -- of dynamic processes and energy flows -- with the study of equilibrium, where observables do not change over time. A system in equilibrium cannot be alive. So why should we study equilibrium?

Equilibrium is relevant to biochemistry because macroscopic systems evolve towards equilibrium. In other words, if we sit and watch any macroscopic system for long enough, it will tend towards an equilibrium state. This powerful fact tells us that even though a cell is not in equilibrium, its behavior is still governed by the approach to equilibrium. Hence, by studying equilibrium processes, we gain some insight into the factors that drive cellular processes.
The time evolution of macroscopic systems is explained by entropy, a subtle and easily misunderstood concept -- so we will need to be careful about how we define terms and state assumptions.
Microstates and Macrostates
Statistical mechanics studies the physics of macroscopic systems, which consist of very many particles, typically on the order of \(N \gtrapprox 10^{20}\). Since there are so many particles, we will need to distinguish between the microscopic details of individual particles and the overall observables of the macroscopic system.

The exact description of a system's state is called the microstate, and it contains every detail you could ever need to know. For instance, in classical mechanics, to fully describe a box of \(N\) particles, we would need to know where every particle is and how fast it is going. Mathematically, the microstate would be a collection of \(6N\) numbers -- the three components of position and three components of momentum for each of the \(N\) particles. In principle, once we know the microstate of a system, we can determine exactly how it evolves in time by following the microscopic physics of the individual particles. However, it is very unwieldy to keep track of all \(6 \times 10^{20}\) numbers of the microstate!
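To get a feel for the scale, here is a minimal Python sketch of what storing a single classical microstate entails. The particle count and the sampling distributions are arbitrary choices for illustration, many orders of magnitude below a real macroscopic \(N\):

```python
import numpy as np

# Hypothetical toy scale: a million particles, still ~14 orders of
# magnitude short of a real macroscopic system (N ~ 10^20).
N = 1_000_000

# A classical microstate specifies 3 position and 3 momentum components
# per particle: 6N real numbers in total.
positions = np.random.uniform(0.0, 1.0, size=(N, 3))  # positions in a unit box
momenta = np.random.normal(0.0, 1.0, size=(N, 3))     # momenta in arbitrary units

microstate = np.hstack([positions, momenta])  # shape (N, 6): the full description
print(f"{microstate.nbytes / 1e6:.0f} MB for a mere million particles")
# At N ~ 10^20, the same bookkeeping would take ~10^21 bytes (zettabytes).
```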
Thankfully, most of the microscopic details of a thermodynamic system are irrelevant for physics. For instance, if I wished to purchase a bottle of rubbing alcohol from the pharmacy, I would not need to specify the location of every single atom within the bottle -- it's sufficient to tell the pharmacist how much isopropanol I'd like, and maybe the temperature of the bottle (which, as we'll see, is related to the average energy per molecule). Moreover, since all the possible microstates corresponding to my description "look the same" when you zoom out, the exact microstate the bottle is in has very little physical consequence.
In statistical mechanics, we say that the overall macrostate of a thermodynamic system can be specified by just a few macroscopic variables, such as the total energy \(E\), the volume \(V\), and the number of particles \(N\). ...
The Fundamental Assumption
The fundamental assumption of statistical mechanics states that, over long enough times, all microstates corresponding to a given macrostate are equally likely. This suggests that "counting microstates" is a useful thing to do -- the toy example below shows what such a count looks like.
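A minimal illustration, using a deliberately tiny hypothetical system of four two-state "particles" (coins): each microstate specifies every coin face, while the macrostate records only the total number of heads.

```python
from itertools import product
from collections import Counter

N = 4  # a hypothetical toy "system" of four two-state particles (coins)

# A microstate is a full specification: the face of every single coin.
microstates = list(product("HT", repeat=N))  # 2^N = 16 microstates

# A macrostate records only a coarse observable: the total number of heads.
multiplicity = Counter(state.count("H") for state in microstates)

for heads, count in sorted(multiplicity.items()):
    print(f"{heads} heads: {count} microstate(s)")
# 0 heads: 1,  1 head: 4,  2 heads: 6,  3 heads: 4,  4 heads: 1
```

Under the fundamental assumption all 16 microstates are equally likely, so the "2 heads" macrostate is observed \(6/16\) of the time: the probability of a macrostate is simply proportional to its microstate count.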
Entropy
We can picture the set of microstates geometrically, as points in phase space. This picture can be justified either by drawing small boxes in the classical Hamiltonian picture, or simply by discretizing the states and drawing a graph whose nodes are microstates. Note that Liouville's theorem (or unitarity in quantum mechanics!) implies that the number of ways into any node equals the number of ways out -- trajectories cannot "bunch up" in phase space. A toy version of this statement is sketched below.
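Here is a minimal sketch of the "no bunching up" property, using a hypothetical stand-in for the real dynamics: a deterministic, reversible update rule on discrete states, which is necessarily a permutation.

```python
import random

random.seed(1)

# Toy model of microscopic dynamics on 10 discrete "microstates": a
# deterministic, reversible update rule is a permutation of the states
# (a hypothetical stand-in for Liouville's theorem / unitarity).
states = list(range(10))
step = dict(zip(states, random.sample(states, len(states))))

# Every node has exactly one incoming and one outgoing edge, so distinct
# trajectories can never merge ("bunch up") anywhere in phase space.
in_degree = {s: 0 for s in states}
for s in states:
    in_degree[step[s]] += 1

print(all(d == 1 for d in in_degree.values()))  # True: one way in, one way out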
We can then think of the microscopic time evolution as a sequence of transitions between these microstates. The trajectory just wanders through phase space, pretty much at random.
Now suppose we color different regions of phase space with different colors. Evidently, the fraction of time during which we see each color depends on the "hypervolume" of that phase space region -- exactly in line with the fundamental assumption. The simulation sketch below makes this concrete.
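A minimal simulation sketch, with invented numbers: 100 discrete "microstates" split into colored regions of arbitrary sizes, and a trajectory that hops between them at random.

```python
import random

random.seed(0)

# Toy phase space: 100 discrete microstates, colored by region. The
# region sizes (70/20/10) are arbitrary illustrative choices.
colors = ["red"] * 70 + ["green"] * 20 + ["blue"] * 10

steps = 100_000
visits = {"red": 0, "green": 0, "blue": 0}

# Stand-in for an ergodic trajectory: hop to a random microstate each step.
# (Real dynamics move locally, but wander just as indiscriminately over time.)
for _ in range(steps):
    state = random.randrange(len(colors))
    visits[colors[state]] += 1

for color, count in visits.items():
    frac_time = count / steps
    frac_volume = colors.count(color) / len(colors)
    print(f"{color}: seen {frac_time:.3f} of the time; "
          f"region is {frac_volume:.2f} of phase space")
```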
We can extend the argument by painting regions according to the values of macroscopic observables instead of arbitrary colors. It is then clear that the volume of a phase space region is what determines what we observe macroscopically.
Now, how is phase space volume (i.e., the exponential of the entropy!) distributed over the different values of a macroscopic observable? As a toy example, take the observable to be the number of particles in the left half of a box. The counterintuitive result is that the distribution is bunched very tightly around its maximum value, as the sketch below demonstrates.
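A sketch of this concentration effect; the particle number is an arbitrary choice, and exact binomial coefficients \(\binom{N}{n}\) are used as the multiplicities so that nothing overflows.

```python
from math import comb

N = 10_000  # particles in the box -- still tiny by macroscopic standards

# Multiplicity of the macrostate "n particles in the left half" is C(N, n),
# out of 2^N microstates in total (exact integers, so nothing overflows).
total = 2 ** N

for width in (50, 100, 200):
    window = sum(comb(N, n) for n in range(N // 2 - width, N // 2 + width + 1))
    print(f"|n - N/2| <= {width}: {window / total:.5f} of all microstates")
# Even a window of +/-2% around the 50/50 split captures essentially every
# microstate -- and at N ~ 10^20 the peak is unimaginably sharper still.
```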
The region of phase space where the trajectory ends up is called "equilibrium" -- it is simply the largest portion of phase space whose microstates are not macroscopically distinguishable.
To summarize this section: "entropy is maximized" sounds profound and confusing, but it is really a consequence of (1) sufficiently random time evolution (this is where ergodicity comes in) and (2) the statistics of very, very many things. A system will evolve towards states of maximum multiplicity because, frankly, why not? You would need a conspiracy to keep the trajectory away from a macrostate that is so probable -- probable in the sense that it occupies nearly all of the accessible phase space volume.