The Fundamental Laws of the Universe

Key Takeaways

  • Every fundamental law of physics has a defined boundary where it stops working or contradicts another law
  • Conservation laws aren’t arbitrary rules but necessary consequences of the symmetries built into physical reality
  • Evidence for whether physical laws are uniform across the entire cosmos remains incomplete, with some anomalies still unresolved

The Rules That Run Everything

The universe runs on rules. Not preferences, not tendencies, but rules precise enough that a physicist in Pasadena can calculate where a spacecraft launched in 1977 is today, down to a margin of a few kilometers, using equations written three centuries ago. These rules, the fundamental laws of physics, represent the deepest level at which science has been able to describe why things happen the way they do. They don’t explain everything. Some of them contradict each other when pushed to extremes. And whether they hold throughout the full expanse of the universe, beyond what telescopes can reach, remains an open question that the current generation of physicists is actively debating.

What makes these laws interesting isn’t their authority but their history. None of them were handed down complete. They were extracted from nature by people willing to measure carefully, calculate honestly, and abandon their prior beliefs when the numbers disagreed. Isaac Newton spent years working through the orbital mechanics of the Moon before publishing his gravitational law. James Clerk Maxwell assembled decades of experimental work by Faraday, Ampere, and Gauss into four equations and then discovered those equations predicted radio waves before anyone had transmitted one. Emmy Noether proved in 1915 that every conserved quantity in physics is a consequence of an underlying symmetry, a result so far-reaching that many physicists consider it one of the most important theorems ever written.

Understanding each law means understanding what it says, where it came from, what it successfully predicts, and where it runs out of answers. All of the laws described here work. They also all fail, in specific, well-understood ways, and those failure points are not embarrassments but signposts pointing toward the next level of understanding.

Newton’s Laws of Motion

In 1687, Newton published Philosophiæ Naturalis Principia Mathematica, usually called the Principia, and the book changed what it meant to do science. It wasn’t just a collection of observations. It was a mathematical framework from which the behavior of objects could be predicted with precision, given only initial conditions and forces. Three laws sit at its core, and they’re still the starting point for engineering, ballistics, orbital mechanics, and structural analysis.

The First Law

The first law of motion states that an object at rest remains at rest, and an object in motion continues moving at constant velocity, unless an external force acts on it. This is Newton’s formalization of inertia, a concept that Galileo had already been developing before Newton was born.

The radical thing about this law was what it displaced. Before Newton, the dominant framework going back to Aristotle held that rest was the natural state of matter and that motion required a continuous cause. Push a cart and it slows down, so rest must be where things naturally end up. Newton’s reframing was that the cart slows down because friction acts on it, and that without friction, it would keep moving indefinitely. The absence of force, not its presence, is what motion requires in the Newtonian picture.

The Voyager 1 spacecraft illustrates this beautifully. Launched in September 1977, it has been traveling through interstellar space for decades with its propulsion system essentially inactive, still moving at approximately 17 kilometers per second. The engines stopped long ago. The probe keeps moving because nothing significant is pushing back against it. That’s the first law operating at an interstellar scale.

The first law’s limitation emerges when the reference frame itself is accelerating. Newton’s first law only holds in what physicists call inertial reference frames, frames that aren’t themselves undergoing acceleration. Someone sitting in a spinning carousel will feel pushed outward as though a force is acting on them, even though no real force in the sense of the first law is involved. These are fictitious or pseudo-forces, artifacts of the observer’s accelerating reference frame. The rotating Earth is technically a non-inertial frame, and trajectories of long-range artillery shells and hurricanes must account for the Coriolis effect, a fictitious force arising from Earth’s rotation. Newton’s first law doesn’t handle these situations on its own; extended machinery involving the full formalism of classical mechanics is needed.

The Second Law

The second law gives the first law its quantitative spine: F = ma, where F is force, m is mass, and a is acceleration. Apply a force to an object and it accelerates. The more force, the more acceleration. The more massive the object, the less it accelerates for a given force.

The predictive power of this equation is extraordinary across an enormous range of scales. The International Space Station orbits at an altitude of roughly 400 kilometers and travels at about 7.66 kilometers per second. It’s in a constant free fall toward Earth, but its horizontal velocity is high enough that Earth’s surface curves away beneath it at the same rate it falls toward it. Every parameter of that orbit, its altitude, period, and velocity, can be calculated directly from Newton’s second law combined with his gravitational law. Mission planners use this to schedule crew rotations, cargo deliveries, and docking maneuvers.
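As a rough check of that claim, here is a minimal back-of-the-envelope sketch (assumed textbook values for Earth's mass and radius) that recovers the ISS's orbital speed and period from Newton's second law combined with the gravitational law, by setting the gravitational force equal to the centripetal force for a circular orbit.

```python
import math

# Assumed nominal values
G = 6.674e-11          # gravitational constant, N·m²/kg²
M_EARTH = 5.972e24     # mass of Earth, kg
R_EARTH = 6.371e6      # mean radius of Earth, m

altitude = 400e3       # ISS altitude above the surface, m
r = R_EARTH + altitude # orbital radius measured from Earth's center, m

# Circular orbit: gravity supplies the centripetal force,
# G·M·m / r² = m·v² / r, so v = sqrt(G·M / r)
v = math.sqrt(G * M_EARTH / r)
period = 2 * math.pi * r / v

print(f"orbital speed ≈ {v/1000:.2f} km/s")     # ≈ 7.67 km/s
print(f"orbital period ≈ {period/60:.1f} min")  # ≈ 92 minutes
```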

At relativistic speeds, the picture breaks down. As an object’s velocity approaches the speed of light, its resistance to further acceleration increases in a way that the classical F = ma doesn’t account for. The relativistic formulation replaces the simple mass with a velocity-dependent expression. At low speeds, the two formulations agree to many decimal places. Near the speed of light, the classical version fails badly. This isn’t a flaw in Newton’s reasoning; it’s simply that Newton’s second law was derived from observations of everyday objects moving at everyday speeds, and it describes that regime accurately. It was never intended to cover an entirely different physical regime that wasn’t known to exist in Newton’s time.
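To make that velocity dependence concrete, the sketch below compares classical momentum p = mv with the relativistic expression p = γmv (γ is the Lorentz factor) for an electron at several speeds; the electron mass and speed of light are assumed reference values.

```python
import math

C = 2.998e8      # speed of light, m/s
M_E = 9.109e-31  # electron mass, kg

def gamma(v: float) -> float:
    """Lorentz factor: how much effective inertia grows with speed."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for fraction in (0.01, 0.1, 0.9, 0.99):
    v = fraction * C
    p_classical = M_E * v
    p_relativistic = gamma(v) * M_E * v
    print(f"v = {fraction:4.2f} c: relativistic / classical momentum = "
          f"{p_relativistic / p_classical:.4f}")

# At 1% of c the two agree to a few parts in 100,000;
# at 99% of c the classical value is too small by a factor of about 7.
```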

Quantum mechanics presents a different kind of challenge. Quantum particles don’t have sharply defined positions and velocities simultaneously, a consequence of Heisenberg’s uncertainty principle. The concept of force applied to a particle with a precisely defined trajectory simply doesn’t apply. In quantum mechanics, the force-and-acceleration picture is replaced entirely by the wave function formalism of the Schrödinger equation and, at higher energies, by quantum field theory, where interactions are described as exchanges of force-carrying particles rather than as pushes and pulls along a trajectory.

The Third Law

The third law states that every action has an equal and opposite reaction. A rocket pushes exhaust gases backward at high velocity; the gases push the rocket forward with equal force. A swimmer pushes water backward with their hands; the water pushes them forward. This symmetry of forces is why every interaction is mutual: no object can exert a force on another without receiving an equal force in return.

The third law becomes complicated in electromagnetic systems. Consider two moving charged particles. The electromagnetic force each exerts on the other depends on both particles’ velocities, and under certain configurations, the forces don’t appear to be exactly equal and opposite at every instant. This seems to violate Newton’s third law directly. The resolution is that momentum is not carried only by the particles but also by the electromagnetic field itself. When the field’s momentum is included in the accounting, total momentum is conserved and the deeper principle behind Newton’s third law is preserved. What this reveals is that the third law, in its familiar form, is actually a special case of the more general law of momentum conservation. The deeper rule turns out to be a symmetry principle, not an independent empirical fact.

The Law of Universal Gravitation

Newton’s gravitational law appeared in the same 1687 publication that introduced the laws of motion. It states that every object with mass attracts every other object with mass, with a force proportional to the product of their masses and inversely proportional to the square of the distance between them: F = G(m₁ × m₂) / r², where G is the gravitational constant, measured to be approximately 6.674 × 10⁻¹¹ N·m²/kg².
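A quick worked example (assumed values for Earth's mass and radius): plugging Earth and a 1-kilogram object at the surface into the formula recovers the familiar 9.8 newtons of weight, which is why objects near the surface accelerate downward at about 9.8 m/s².

```python
G = 6.674e-11       # gravitational constant, N·m²/kg²
M_EARTH = 5.972e24  # mass of Earth, kg
R_EARTH = 6.371e6   # mean radius of Earth, m

m = 1.0  # a 1 kg test mass sitting at the surface

# F = G·m₁·m₂ / r²
force = G * M_EARTH * m / R_EARTH ** 2
print(f"force on 1 kg at the surface ≈ {force:.2f} N")  # ≈ 9.82 N, i.e. g ≈ 9.8 m/s²
```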

The reach of this equation was immediately recognized as staggering. Newton showed that the same inverse-square law governing the fall of objects near Earth also governed the orbit of the Moon and the orbits of the planets around the Sun. Gravity was universal in a way that nothing before had suggested; the same mathematical rule operated identically whether the objects were an apple and Earth or Jupiter and the Sun. This was a conceptual revolution as significant as the law itself.

The predictive successes were remarkable. Neptune was mathematically predicted before it was observed. In the 1840s, astronomers noticed that Uranus was drifting from its predicted orbit. John Couch Adams in England and Urbain Le Verrier in France independently calculated where an unseen body would have to be in order to produce the observed deviation. When Johann Gottfried Galle pointed his telescope at Le Verrier’s predicted position on September 23, 1846, Neptune was there. The law had worked in territory no human had ever seen.

Newton’s law of gravitation has one significant conceptual gap that Newton acknowledged himself: it offers no mechanism for how gravity works. The Sun and Earth are separated by roughly 150 million kilometers of vacuum. There’s no wire connecting them, no medium through which the force propagates. Newton called this action at a distance and found it philosophically unsatisfying. He wrote that he had no hypothesis to offer about the mechanism. His law described gravity with precision but explained nothing about what gravity physically is.

It also fails quantitatively in strong gravitational fields. The orbit of Mercury precesses, meaning its orbital ellipse slowly rotates, at a rate of about 574 arcseconds per century. Newtonian gravity, accounting for the gravitational influences of all other planets, predicts a precession of about 531 arcseconds per century. The discrepancy of 43 arcseconds per century had been known since 1859 and resisted every Newtonian explanation. When Albert Einstein published general relativity in 1915, it predicted the exact value of Mercury’s additional precession without any free parameters. This was one of general relativity’s first decisive confirmations.
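The general-relativistic correction can be estimated from the standard closed-form expression for perihelion advance per orbit, Δφ = 6πGM/(a(1−e²)c²). The sketch below, using assumed standard values for Mercury's orbital parameters and the Sun's mass, recovers roughly the 43 arcseconds per century quoted above.

```python
import math

G = 6.674e-11        # gravitational constant, N·m²/kg²
M_SUN = 1.989e30     # mass of the Sun, kg
C = 2.998e8          # speed of light, m/s

a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
period_days = 87.97  # Mercury's orbital period, days

# Perihelion advance per orbit predicted by general relativity, in radians
dphi = 6 * math.pi * G * M_SUN / (a * (1 - e**2) * C**2)

orbits_per_century = 100 * 365.25 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"GR perihelion advance ≈ {arcsec_per_century:.1f} arcsec/century")  # ≈ 43
```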

At cosmic scales, Newton’s law completely fails to explain the accelerating expansion of the universe. Observations of distant Type Ia supernovae in 1998, conducted by the Supernova Cosmology Project led by Saul Perlmutter at Lawrence Berkeley National Laboratory and by the High-z Supernova Search Team led by Brian Schmidt, with Adam Riess leading its key analysis, showed that distant supernovae were dimmer than a decelerating universe would predict. The universe isn’t slowing down under gravity’s pull; it’s speeding up. This discovery, which earned Perlmutter, Riess, and Schmidt the Nobel Prize in Physics in 2011, requires either a cosmological constant or some other form of dark energy, neither of which has any place in Newtonian gravity.

The Laws of Thermodynamics

The laws of thermodynamics were not the product of a single theoretical vision. They were assembled across several decades in the 18th and 19th centuries by scientists tackling the concrete engineering problem of extracting useful work from heat. Steam engines were reshaping industry across Europe, and understanding their limits turned out to require principles of such generality that they apply from the nanoscale to the scale of the observable universe.

There are four laws, numbered from zero to three.

The Zeroth Law

The zeroth law was formally recognized after the other three were already established, but it’s foundational in a logical sense: if system A is in thermal equilibrium with system C, and system B is also in thermal equilibrium with system C, then A and B are in thermal equilibrium with each other. This transitivity of thermal equilibrium is what makes temperature a meaningful, well-defined physical quantity and what gives thermometers their utility.

The zeroth law breaks down at the extremes of quantum physics. Systems far from equilibrium, or systems exhibiting quantum coherence, don’t always have a well-defined temperature. Some laser-cooled atomic systems can be described by a negative effective temperature, a state in which higher energy levels are more populated than lower ones. This isn’t “colder than absolute zero” but a formal inversion of the usual energy distribution, and it’s incompatible with the zeroth law’s classical framework. These aren’t just curiosities; they’re relevant to the physics of lasers and quantum computing.

The First Law

The first law states that the total energy of an isolated system is constant. Energy can change form, from kinetic to thermal, from chemical to electrical, from mass to radiation according to E = mc², but the total never changes. No process creates energy from nothing, and no process destroys it.

This principle has survived every test to which it’s been subjected, including the most energetic particle collisions ever performed. At CERN’s Large Hadron Collider, protons are accelerated to 99.9999991% of the speed of light and made to collide. Energy conservation is checked in every reconstructed collision event across millions of collisions per second, and it always holds. The hydrogen bomb converts only a small fraction of its mass to energy according to E = mc², but that fraction is enormous in absolute terms, and the energy accounting is always exact.

Where the first law encounters serious difficulty is at the cosmological scale. In an expanding universe described by general relativity, defining total energy is genuinely problematic. When the universe expands, photons lose energy as their wavelengths stretch through cosmological redshift. Where does that energy go? The mathematics of general relativity does not define a globally conserved total energy in the way that Newtonian mechanics does. Sean Carroll and other cosmologists have pointed out that global energy conservation is not a feature of general relativity in an expanding universe; it’s a concept that simply doesn’t generalize cleanly from the flat spacetime setting where the first law was developed. This is not a crisis, but it is a genuine limitation that doesn’t have a universally agreed resolution.

The Second Law

The second law of thermodynamics is the one that most people encounter first when thinking about irreversibility. It states that the total entropy of an isolated system never decreases over time. Entropy is formally defined as a measure of the number of possible microscopic configurations corresponding to a given macroscopic state; loosely, it corresponds to disorder or the unavailability of energy for useful work. Heat flows from hot to cold. Gases expand to fill their containers. Broken eggs don’t reassemble. All of these phenomena are consequences of the second law.

Rudolf Clausius formulated the law in the 1850s, and Ludwig Boltzmann gave it a statistical foundation in the 1870s. Boltzmann showed that entropy isn’t a mystical quantity but simply a count: the number of ways you can arrange atoms at the microscopic level while leaving the macroscopic appearance unchanged. A room full of mixed-together gas molecules corresponds to an enormous number of microscopic arrangements. A room with all the gas molecules in one corner corresponds to far fewer. The laws of mechanics themselves don’t prohibit the molecules from spontaneously collecting in the corner, but the probability of that happening by chance is so close to zero that it will never be observed in the age of the universe.
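Boltzmann's counting argument can be made concrete with a toy calculation: if each molecule independently has a 1-in-2 chance of being in the left half of a box, the probability that all of them end up there at once falls off as (1/2)^N. The sketch below uses toy molecule counts (it is not a simulation of real gas dynamics) to show how fast that probability collapses.

```python
import math

# Probability that every molecule of a gas happens to be in the left half
# of its container at one instant: (1/2)^N, i.e. 10^(N · log10(1/2)).
for n_molecules in (10, 100, 10**20):
    exponent = n_molecules * math.log10(0.5)
    print(f"N = {n_molecules:.0e} molecules: probability ≈ 10^({exponent:.3g})")

# N = 10 gives about 1 in 1000; N = 100 gives about 10^-30;
# N = 10^20 (a small bubble of gas) gives 10^(-3 × 10^19),
# which will never be observed in the age of the universe.
```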

The second law is the only fundamental law of physics that gives time a direction. Every other law in classical mechanics and quantum mechanics works identically whether run forward or backward in time. The second law uniquely identifies the past as the direction of lower entropy and the future as the direction of higher entropy. This is called the thermodynamic arrow of time and it’s the reason broken eggs stay broken and heat flows in only one direction.

Attacks on the second law’s absoluteness have been instructive. James Clerk Maxwell proposed a thought experiment in 1867, later called Maxwell’s Demon, involving a tiny agent that could sort fast-moving molecules from slow ones, creating a temperature difference without apparent work. The demon seemed to violate the second law until Rolf Landauer showed in 1961 that erasing the demon’s accumulated information necessarily generates entropy. The demon can sort molecules, but wiping its memory to make room for more measurements costs exactly as much entropy as the sorting saves.

At the nanoscale, the second law breaks down in precisely quantified ways. In 2002, a team including Denis Evans at the Australian National University observed direct violations of the second law in colloidal particle systems over short timescales. These events are real and measurable. They’re also consistent with the fluctuation theorem, which quantifies the probability of entropy-decreasing events and shows that such events are exponentially unlikely as the system grows. At macroscopic scales, the second law is effectively inviolable. At nanoscales and short timescales, it’s a statistical expectation, not an absolute rule.

The Third Law

The third law of thermodynamics states that as a system’s temperature approaches absolute zero (0 Kelvin, equivalent to approximately minus 273.15 degrees Celsius), its entropy approaches a constant minimum value, and that absolute zero itself cannot be reached in a finite number of steps. No matter how sophisticated the cooling technology, each step toward absolute zero becomes progressively harder, and the endpoint is always just out of reach.

The coldest temperatures ever produced in a laboratory are astonishing. In experiments using ultracold atomic gases, temperatures as low as around 38 picokelvin have been reported. That’s 38 trillionths of a Kelvin above absolute zero, far colder than the average temperature of outer space (roughly 2.7 Kelvin). But absolute zero itself has never been reached, and the third law says it never will be. The practical consequence is that the approach toward absolute zero is an asymptotic process, useful precisely because it gets researchers into regimes where quantum effects dominate and phenomena like superconductivity and Bose-Einstein condensation can be studied.

Maxwell’s Equations and the Laws of Electromagnetism

When James Clerk Maxwell published his complete electromagnetic theory in 1865, he did something remarkable: he took a body of disconnected experimental findings about electricity and magnetism accumulated over decades and showed that they were all consequences of four equations. Maxwell’s equations describe how electric charges create electric fields, how magnetic field lines always form closed loops with no isolated magnetic poles, how a changing magnetic field creates an electric field, and how a changing electric field creates a magnetic field.

The first consequence of these equations that Maxwell noticed was unexpected. Calculating the speed at which electromagnetic disturbances propagate, he arrived at a value that matched the measured speed of light to within the experimental precision of the day (the modern figure is 299,792 kilometers per second). The conclusion was inescapable: light is an electromagnetic wave. Optics and electromagnetism, which had seemed to be entirely separate branches of physics, were the same phenomenon.
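In modern notation, Maxwell's calculation amounts to evaluating c = 1/√(ε₀μ₀) from the electric and magnetic constants; the minimal sketch below reproduces it with present-day measured values (Maxwell worked from the cruder constants available in the 1860s and got a slightly different number).

```python
import math

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
MU_0 = 1.25663706212e-6       # vacuum permeability, N/A²

# Maxwell's equations predict electromagnetic waves travelling at 1/sqrt(ε₀·μ₀)
c = 1.0 / math.sqrt(EPSILON_0 * MU_0)
print(f"predicted wave speed ≈ {c/1000:,.0f} km/s")  # ≈ 299,792 km/s
```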

The second consequence was even more startling. Maxwell’s equations predicted that accelerating electric charges would radiate electromagnetic waves across a continuous spectrum, including frequencies much lower than visible light. In 1887, Heinrich Hertz produced and detected these waves in his laboratory in Karlsruhe, Germany, working with a spark gap transmitter and a loop antenna receiver. Hertz’s confirmation of Maxwell’s prediction made possible every technology that uses the radio spectrum, from AM radio to Wi-Fi to cellular networks to GPS. The technological world built on electromagnetic waves is the most consequential verification of a physical theory in human history.

Maxwell’s equations are intrinsically consistent with special relativity in a way that Newton’s mechanics were not. They don’t change form when viewed from different inertial reference frames, which is part of what made Einstein’s task easier when he developed special relativity in 1905. In a real sense, Maxwell’s equations were already relativistic before relativity existed as a concept.

The fundamental weakness of Maxwell’s electromagnetism is that it’s a classical, continuous theory. It treats the electromagnetic field as perfectly smooth, with no minimum packet of electromagnetic energy. But quantum mechanics revealed that light comes in discrete quanta called photons. Einstein demonstrated this in 1905 by explaining the photoelectric effect, showing that light below a certain frequency cannot dislodge electrons from metal surfaces regardless of its intensity, because individual photon energies are too low. This result, which earned him the Nobel Prize in Physics in 1921, showed that Maxwell’s classical description fails at the quantum level.

The quantum successor to Maxwell’s electromagnetism is quantum electrodynamics (QED), developed in the late 1940s by Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga. QED is the most precisely verified theory in physics. Its predictions of the electron’s magnetic moment agree with experiment to roughly ten decimal places, a precision that has no parallel in any other area of science. Feynman’s book QED remains one of the most readable accounts of how this theory describes light and matter for a general audience.

The classical Maxwell equations remain widely used in engineering because QED predictions and classical predictions agree to high precision in situations where photon numbers are large and quantum fluctuations average out. Antenna design, waveguide engineering, and classical optics all use Maxwell’s equations without needing QED corrections. But any situation where individual photons matter requires the full quantum treatment.

The Laws of Quantum Mechanics

Quantum mechanics is the theoretical framework describing the behavior of matter and energy at the smallest scales: atoms, electrons, photons, and the particles that make up atomic nuclei. It was assembled during the 1920s through contributions from a remarkable group of physicists, including Niels Bohr, Werner Heisenberg, Erwin Schrödinger, Max Born, Paul Dirac, and Wolfgang Pauli. The framework it replaced, classical mechanics, described a deterministic universe where every future state was, in principle, calculable from the present state. Quantum mechanics replaced that with probability, superposition, and the inherently uncertain nature of the quantum world.

The Schrödinger Equation

Erwin Schrödinger published his equation in 1926, and it occupies the same place in quantum mechanics that Newton’s second law occupies in classical mechanics. Where Newton’s equation determines a particle’s future trajectory from its current position and velocity, the Schrödinger equation determines how a particle’s wave function evolves over time. The wave function is a mathematical object that encodes the probabilities of finding the particle in different states when measured.

The equation’s track record is exceptional. Solving it exactly for the hydrogen atom produces energy levels matching spectroscopic measurements to extraordinary precision. The technology behind MRI machines depends on quantum mechanical treatment of nuclear spins in magnetic fields. Semiconductor physics, which underlies every transistor ever fabricated, can’t be understood without the Schrödinger equation’s description of electron behavior in crystalline solids. The transistors in a modern smartphone processor, manufactured at fabrication nodes of 3 nanometers or smaller, function because engineers understand and exploit quantum mechanical effects.
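For hydrogen, the bound-state energies that emerge from the Schrödinger equation follow the simple formula E_n = −13.6 eV / n². A quick sketch using that result (assumed standard constants) reproduces the wavelength of the Lyman-alpha line, one of the spectroscopic measurements the theory matches.

```python
# Hydrogen energy levels from the Schrödinger equation: E_n = -13.6 eV / n²
RYDBERG_EV = 13.606  # ionization energy of hydrogen, eV
HC_EV_NM = 1239.84   # Planck constant times speed of light, in eV·nm

def energy_level(n: int) -> float:
    return -RYDBERG_EV / n**2

# Photon emitted when the electron drops from n = 2 to n = 1 (Lyman-alpha)
delta_e = energy_level(2) - energy_level(1)   # ≈ 10.2 eV
wavelength_nm = HC_EV_NM / delta_e
print(f"Lyman-alpha wavelength ≈ {wavelength_nm:.1f} nm")  # ≈ 121.5 nm, as measured
```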

The Schrödinger equation’s limitations are significant. It’s non-relativistic, meaning it doesn’t account for the effects of special relativity. At high particle speeds or energies, a different equation, the Dirac equation published in 1928, is needed. Paul Dirac formulated it by combining quantum mechanics with special relativity, and in doing so he discovered that his equation implied the existence of particles with the same mass as electrons but opposite charge. This prediction of antimatter preceded its experimental discovery by four years; Carl Anderson detected the positron in 1932 using cosmic ray tracks in a cloud chamber.

Neither the Schrödinger equation nor the Dirac equation includes gravity. This isn’t a minor omission that will eventually be patched. It reflects a genuine incompatibility between the mathematical structures of quantum mechanics and general relativity that has resisted resolution for nearly a century.

The Heisenberg Uncertainty Principle

Werner Heisenberg formulated the uncertainty principle in 1927: the product of the uncertainties in a particle’s position and momentum is always at least h/4π, where h is Planck’s constant. This is not a statement about measurement instruments being imprecise. It’s a statement about the nature of quantum particles: they do not simultaneously possess sharply defined position and momentum. These quantities are complementary; knowing one more precisely means knowing the other less precisely, not as an experimental inconvenience but as a mathematical consequence of the wave nature of matter.
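A back-of-the-envelope application: confine an electron to a region the size of an atom, roughly 10⁻¹⁰ meters, and the uncertainty principle alone forces a minimum momentum spread corresponding to speeds of hundreds of kilometers per second. A minimal sketch with assumed standard constants:

```python
import math

H = 6.626e-34    # Planck's constant, J·s
M_E = 9.109e-31  # electron mass, kg

delta_x = 1e-10  # confinement to roughly one atomic diameter, m

# Heisenberg: Δx · Δp ≥ h / 4π
delta_p_min = H / (4 * math.pi * delta_x)
delta_v_min = delta_p_min / M_E

print(f"minimum momentum spread ≈ {delta_p_min:.2e} kg·m/s")
print(f"corresponding speed spread ≈ {delta_v_min/1000:.0f} km/s")  # ≈ 580 km/s
```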

The energy-time form of the uncertainty principle, ΔEΔt ≥ h/4π, has its own surprising consequences. Over very short timescales, energy can fluctuate substantially. This leads to the phenomenon of virtual particle pair production, in which particle-antiparticle pairs momentarily spring into existence from the quantum vacuum and annihilate before they can be directly detected. These quantum fluctuations are not hypothetical; they produce measurable physical effects. The Casimir effect, predicted by Hendrik Casimir in 1948, is an attractive force between two uncharged metal plates placed very close together in vacuum, caused by the way the plates restrict the quantum fluctuations between them compared to outside. Steve Lamoreaux at the University of Washington confirmed the Casimir effect experimentally in 1997, measuring the force to an accuracy of around 5%.
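For the idealized case of perfectly conducting parallel plates at zero temperature, the predicted Casimir pressure has the closed form P = π²ħc/(240 d⁴). The sketch below (idealized geometry, assumed constants; real experiments involve corrections for materials and geometry) shows how quickly the attraction grows as the gap d shrinks.

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J·s
C = 2.998e8        # speed of light, m/s

def casimir_pressure(d: float) -> float:
    """Attractive pressure between ideal parallel plates separated by d meters."""
    return math.pi**2 * HBAR * C / (240 * d**4)

for d in (1e-6, 100e-9, 10e-9):  # 1 µm, 100 nm, and 10 nm gaps
    print(f"gap {d*1e9:6.0f} nm: pressure ≈ {casimir_pressure(d):.3g} Pa")

# ≈ 1.3 mPa at 1 µm, ≈ 13 Pa at 100 nm, ≈ 130 kPa at 10 nm
```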

The uncertainty principle was deeply uncomfortable to many of the scientists who first encountered it. Einstein resisted its implications throughout his life, arguing in various thought experiments that quantum mechanics must be incomplete. The debate between Einstein and Bohr, conducted primarily through published papers and conference discussions from 1927 onward, ended without resolving Einstein’s objections in his favor. Experiments have consistently sided with Bohr’s position: the universe is genuinely probabilistic at the quantum level.

Wave-Particle Duality

Quantum mechanics says that particles like electrons and photons exhibit wave-like properties and particle-like properties depending on how they’re observed. The double-slit experiment, in which individual electrons sent one at a time through two narrow slits build up an interference pattern on a detector screen, is one of the most cited demonstrations of this. Each electron acts as if it passes through both slits simultaneously and interferes with itself, but always lands at a single, definite point on the detector. The wave nature determines the probability distribution; the particle nature determines where each individual detection event occurs.

This duality doesn’t mean electrons are sometimes waves and sometimes particles. It means “wave” and “particle” are both classical concepts that were invented to describe macroscopic phenomena, and quantum objects don’t fit cleanly into either category. The quantum mechanical description is more fundamental than either classical concept.

Quantum Entanglement

Quantum entanglement is the phenomenon by which two particles, once they’ve interacted or been created together in certain ways, share a quantum state that can’t be described independently for each particle. Measuring one particle’s quantum property immediately determines the corresponding property of its entangled partner, regardless of the distance separating them. In 1964, John Bell derived inequalities that any theory based on local hidden variables would have to satisfy. In 1982, Alain Aspect and colleagues at the Institut d’Optique in Paris performed experiments that violated these inequalities, confirming that quantum mechanics is correct and that no local hidden variable theory can reproduce all of quantum mechanics’ predictions.

Entanglement doesn’t allow faster-than-light signaling because the outcomes of individual measurements are still random. No message can be encoded in the correlations without a separate classical communication channel to reveal the results. But the correlations themselves are real, verified, and have no classical explanation. The 2022 Nobel Prize in Physics was awarded to Aspect, John Clauser, and Anton Zeilinger for establishing this experimentally beyond doubt.

Where quantum mechanics struggles as a framework is in interpretation. The mathematical predictions are clear and verified. What the formalism means physically, specifically what happens to a quantum system when it’s not being measured, is genuinely contested. The Copenhagen interpretation, the many-worlds interpretation, pilot wave theory, and relational quantum mechanics all produce identical experimental predictions while describing entirely different underlying realities. Sean Carroll’s Something Deeply Hidden makes a sustained argument for the many-worlds view, while acknowledging the fierce resistance it faces from much of the physics community.

Einstein’s Theory of Special Relativity

Special relativity, published by Einstein in 1905, rests on two postulates. The first is that the laws of physics are identical in all inertial reference frames. The second is that the speed of light in vacuum is the same for all observers, regardless of the motion of the observer or the source. From these two postulates, a cascade of consequences follows that seemed bizarre at the time and remain counterintuitive today despite being thoroughly verified.

Time dilation means that a clock moving relative to an observer ticks more slowly than an identical stationary clock, as measured by that observer. Length contraction means that an object moving relative to an observer appears shorter along its direction of motion than it would appear at rest. The relativity of simultaneity means that two events appearing simultaneous to one observer may appear sequential to another observer in motion relative to the first. None of these effects are perceptible at everyday speeds, but all become dramatic as speeds approach the speed of light.

The equivalence of mass and energy, E = mc², follows from special relativity. Mass is a form of energy, and energy has mass. The energy released in nuclear fission or fusion is the energy corresponding to the mass difference between the initial and final nuclei. The Sun converts approximately 4 million metric tons of mass into energy every second through the nuclear fusion reactions in its core, a process that has sustained it for roughly 4.6 billion years and will continue for roughly another 5 billion.
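That figure can be checked with E = mc² directly: divide the Sun's radiated power by c² to get the mass converted each second. A minimal sketch (the solar luminosity is an assumed value taken from standard references):

```python
C = 2.998e8                  # speed of light, m/s
SOLAR_LUMINOSITY = 3.828e26  # Sun's total radiated power, W

# E = m·c²  =>  m = E / c², applied to one second of solar output
mass_per_second_kg = SOLAR_LUMINOSITY / C**2
print(f"mass converted ≈ {mass_per_second_kg/1e3:.2e} metric tons per second")
# ≈ 4.3 million metric tons each second
```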

The practical verifications of special relativity are mundane in the best sense: they’re incorporated into technology that people use every day. GPS satellites orbit Earth at about 14,000 kilometers per hour relative to ground-based receivers. This velocity causes their clocks to run about 7 microseconds per day slower due to time dilation. Without correction for this effect and the gravitational time dilation of general relativity, GPS position errors would accumulate at roughly 10 kilometers per day. The corrections are applied continuously, and the GPS system works as a direct demonstration that special relativity is accurate.
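The special-relativistic part of that correction is easy to reproduce: at GPS orbital speed, the Lorentz factor differs from 1 by roughly 8 parts in 10¹¹, which accumulates to about 7 microseconds over a day. A minimal sketch using the speed quoted above, ignoring the larger, opposite-sign gravitational time dilation that general relativity adds:

```python
import math

C = 2.998e8            # speed of light, m/s
SECONDS_PER_DAY = 86400

v = 14_000 * 1000 / 3600   # ≈ 3.9 km/s, GPS orbital speed from the figure above

gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
lag_per_day = (1.0 - 1.0 / gamma) * SECONDS_PER_DAY
print(f"satellite clock lags ≈ {lag_per_day*1e6:.1f} microseconds per day")  # ≈ 7
```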

The Fermi Gamma-ray Space Telescope, launched in 2008, measured the arrival times of photons from gamma-ray bursts more than 7 billion light-years away. Photons of different energies, which would have traveled at different speeds if special relativity were violated at high energies, arrived within a fraction of a second of each other after traveling for billions of years. The upper limit on any violation of Lorentz invariance placed by this measurement corresponds to a speed difference of less than one part in 10¹⁷.

Special relativity’s limitation is that it only describes physics in the absence of gravity. Its two postulates apply to inertial frames, and gravity introduces acceleration. The extension to gravity required an entirely new framework, which Einstein spent the following ten years developing.

Einstein’s Theory of General Relativity

General relativity, completed in November 1915, reframes gravity not as a force but as the curvature of spacetime. Mass and energy warp the four-dimensional fabric of spacetime, and objects move along the straightest possible paths (called geodesics) through that warped geometry. From the outside, this motion looks like gravitational attraction, but from the perspective of the falling object, no force is acting on it; it’s simply following the geometry of its environment.

The Einstein field equations are ten coupled nonlinear partial differential equations connecting the curvature of spacetime to the distribution of mass, energy, momentum, and stress. They’re formidable to work with directly, and most exact solutions require specific symmetry assumptions that simplify the mathematics. The Schwarzschild solution, found by Karl Schwarzschild in 1916 while serving on the Eastern Front in World War I, describes the spacetime around a spherically symmetric, non-rotating mass. It contains a term that becomes singular at a specific radius, the Schwarzschild radius, which defines what are now called black holes.
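The Schwarzschild radius itself follows from a compact formula, r_s = 2GM/c². The sketch below evaluates it for the Sun and for Earth (assumed masses), showing how extreme the compression would have to be for either body to become a black hole.

```python
G = 6.674e-11  # gravitational constant, N·m²/kg²
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius within which a given mass must be compressed to form a black hole."""
    return 2 * G * mass_kg / C**2

print(f"Sun   (≈2.0e30 kg): r_s ≈ {schwarzschild_radius(1.989e30)/1000:.1f} km")  # ≈ 3 km
print(f"Earth (≈6.0e24 kg): r_s ≈ {schwarzschild_radius(5.972e24)*1000:.1f} mm")  # ≈ 9 mm
```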

The first experimental confirmation of general relativity came on May 29, 1919, when Arthur Eddington and Frank Dyson led expeditions to observe a solar eclipse from the island of Príncipe and from Sobral, Brazil. The eclipse made visible stars near the Sun’s limb that would otherwise be hidden by solar glare. The observed positions of those stars were shifted from their expected positions, consistent with starlight being bent by the Sun’s gravitational field. General relativity predicted a bending of 1.75 arcseconds; Newtonian gravity predicted 0.875 arcseconds. The measurement, roughly 1.98 ± 0.12 arcseconds from Sobral and 1.61 ± 0.30 arcseconds from Príncipe, was consistent with Einstein’s prediction and inconsistent with the Newtonian value.

The detection of gravitational waves by the LIGO observatory on September 14, 2015, designated GW150914, was the most spectacular recent confirmation. The signal, produced by the merger of two black holes of approximately 36 and 29 solar masses at a distance of roughly 1.3 billion light-years, matched general relativity’s predictions with extraordinary precision. The peak amplitude corresponded to a change in LIGO’s arm length of about one-thousandth the diameter of a proton. Rainer Weiss, Barry Barish, and Kip Thorne received the Nobel Prize in Physics in 2017 for this detection.

The Event Horizon Telescope collaboration released the first image of a black hole’s shadow on April 10, 2019, showing the supermassive black hole M87* in the galaxy Messier 87, approximately 53.5 million light-years away. The image’s structure, a bright ring of photons orbiting the black hole surrounding a central dark region, matched what general relativity predicts to within the image resolution. A second image, of the black hole Sagittarius A* at the center of the Milky Way, was released in May 2022.

General relativity’s failure is at singularities, the points of infinite density predicted to exist at the centers of black holes and at the initial moment of the Big Bang. At these points, the equations themselves break down, generating infinities that have no physical interpretation. Every infinity in a physical theory’s predictions is a sign that the theory has been applied beyond its range of validity. The singularity is not a physical prediction of infinite density; it’s a mathematical warning that general relativity needs to be replaced by something else when spacetime curvature becomes extreme enough for quantum effects to matter. The quest for quantum gravity is the quest for that replacement.

Conservation Laws and Noether’s Theorem

The conservation laws of physics (energy, momentum, angular momentum, and electric charge) are sometimes presented as empirical discoveries, things that scientists noticed are always true. Emmy Noether showed in 1915 that this picture is backwards. Conservation laws are not discovered facts about nature that might have been otherwise. They are mathematical necessities that follow directly from the symmetries of physical law.

Noether’s theorem states that every continuous symmetry of the action (the mathematical object from which the laws of motion can be derived) corresponds to a conserved quantity. Time translation symmetry (the laws of physics are the same now as they were yesterday and as they will be tomorrow) implies conservation of energy. Spatial translation symmetry (the laws are the same here as they are anywhere else) implies conservation of momentum. Rotational symmetry (the laws don’t depend on which direction is called “north”) implies conservation of angular momentum.
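The simplest case can be written out in two lines. For a particle with Lagrangian L(x, ẋ), the Euler-Lagrange equation combined with the assumption that L does not depend on x (spatial translation symmetry) immediately yields conservation of momentum:

```latex
% Spatial translation symmetry => momentum conservation (simplest case of Noether's theorem)
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\quad\text{together with}\quad
\frac{\partial L}{\partial x} = 0
\;\;\Longrightarrow\;\;
\frac{d}{dt}\underbrace{\frac{\partial L}{\partial \dot{x}}}_{\,p\,} = 0 .
```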

This reframing has deep implications. If the laws of physics changed over time, energy would not be conserved. If the laws of physics were different in different locations in space, momentum would not be conserved. The conservation laws are direct reflections of the symmetry structure of reality. Discovering a violation of conservation of energy would tell physicists not just that a rule had been broken, but that the laws of physics are not time-symmetric, which would require a complete rethinking of the foundations of the subject.

Conservation of Energy

The conservation of energy encompasses both the first law of thermodynamics and the broader principle that E = mc² expresses: mass and energy are interconvertible forms of the same conserved quantity. Every nuclear reaction, chemical reaction, and mechanical process has been checked against this principle. The upper limit on any violation has been placed at extraordinary precision through astrophysical observations and laboratory experiments.

The complications in an expanding universe, as discussed in the thermodynamics section, are real. General relativity’s framework doesn’t globally conserve energy in the same sense that Newtonian mechanics does. Whether this represents a violation of the principle or simply a failure of the concept to generalize across cosmological scales remains a technical question without a fully agreed answer.

Conservation of Momentum

Conservation of momentum is why rockets work. When combustion gases are expelled backward, the rocket accelerates forward by exactly the amount needed to keep total momentum constant. The ATLAS detector at CERN verifies momentum conservation in every collision event it records, selected from proton bunch crossings that occur roughly 40 million times per second during high-luminosity runs.
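A minimal momentum-balance sketch (toy numbers, not a model of a real rocket burn) shows the bookkeeping: expel a small mass backward and the rocket must pick up exactly the forward momentum needed for the total to stay at zero.

```python
# Toy momentum balance for a rocket initially at rest (total momentum = 0)
rocket_mass = 1000.0        # kg, rocket after the exhaust has left
exhaust_mass = 10.0         # kg of expelled gas
exhaust_velocity = -2500.0  # m/s, directed backward

# Conservation of momentum: m_rocket·v_rocket + m_exhaust·v_exhaust = 0
rocket_velocity = -exhaust_mass * exhaust_velocity / rocket_mass
total_momentum = rocket_mass * rocket_velocity + exhaust_mass * exhaust_velocity

print(f"rocket velocity ≈ {rocket_velocity:.1f} m/s forward")  # 25.0 m/s
print(f"total momentum = {total_momentum:.1f} kg·m/s")         # stays exactly 0
```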

At the relativistic level, momentum must be defined in the relativistic way (as the product of the Lorentz factor, mass, and velocity) for conservation to hold. The classical definition breaks down at high speeds, but the conserved quantity itself persists in the relativistic formulation.

Conservation of Electric Charge

Electric charge is conserved to extraordinary precision. No experiment in the history of physics has detected a net change in electric charge in an isolated system. Charged particles can be created and destroyed, but only in combinations whose charges sum to zero, such as matter-antimatter pairs. The law is connected, through Noether’s theorem, to the gauge symmetry of electromagnetism, a mathematical symmetry of the quantum field that describes the electromagnetic force.

Baryon and Lepton Number

Quantum mechanics introduced additional conservation laws without classical analogs. Baryon number (a count of protons and neutrons minus antiprotons and antineutrons) and lepton number (a count of electrons, muons, tau particles, and neutrinos minus their antiparticles) have been conserved in every observed interaction. However, some Grand Unified Theories predict that baryon number is not conserved at extremely high energies, which would lead to proton decay. The Super-Kamiokande detector in Japan, a 50,000-metric-ton tank of ultrapure water surrounded by photomultiplier tubes, has been watching for proton decay since 1996. As of 2025, no proton has ever been detected decaying, setting a lower bound on the proton lifetime of greater than 10³⁴ years. This is vastly larger than the age of the universe but doesn’t rule out proton decay at even longer timescales.

The Standard Model and Its Governing Principles

The Standard Model of particle physics is the current best theory of the elementary constituents of matter and the forces that govern their interactions. It classifies all known fundamental particles into categories: quarks (which combine in groups of two and three to form hadrons, including protons and neutrons), leptons (which include electrons, muons, tau particles, and three types of neutrinos), and bosons (the force-carrying particles). It describes three of the four fundamental forces: electromagnetism, the strong nuclear force, and the weak nuclear force.

The strong nuclear force, described by quantum chromodynamics (QCD), holds quarks together inside protons and neutrons, mediated by particles called gluons. The force has a property called confinement: quarks are never found in isolation in normal conditions, because the energy required to separate them is enough to spontaneously create new quark-antiquark pairs. This is why individual quarks aren’t observed in detectors even though they’re clearly real components of protons.

The weak nuclear force is responsible for processes like beta decay, in which a neutron transforms into a proton by emitting an electron and an antineutrino. It’s mediated by the W and Z bosons, which were predicted by Sheldon Glashow, Abdus Salam, and Steven Weinberg in the late 1960s and detected at CERN in 1983. The weak force has a remarkable property: it violates parity symmetry, meaning it treats left-handed particles and right-handed particles differently. This was discovered experimentally by Chien-Shiung Wu in 1956 and overturned an assumption that had been treated as a bedrock principle of physics.

The electromagnetic and weak forces are unified in the Standard Model into the electroweak force, which is why Glashow, Salam, and Weinberg shared the Nobel Prize in 1979. At the energies of everyday experience, the electromagnetic and weak forces appear very different, the electromagnetic force being long-range and the weak force being extremely short-range. But above energies of roughly 100 GeV, the distinction disappears and the two forces merge into a single description.

The Higgs boson, predicted in 1964 by Peter Higgs, François Englert, and others, is the quantum of the field that gives massive particles their mass through the Higgs mechanism. Without the Higgs field, the W and Z bosons would be massless, and the weak force would be just as long-range as electromagnetism, making atoms as we know them impossible. The Higgs boson was detected at CERN on July 4, 2012, nearly half a century after it was predicted. Higgs and Englert shared the Nobel Prize in Physics in 2013.

The Standard Model’s weaknesses are as well-documented as its successes. It doesn’t include gravity, which has resisted incorporation into the quantum field theory framework. It provides no explanation for dark matter, which makes up roughly 27% of the universe’s energy density. It doesn’t explain the observed preponderance of matter over antimatter, since the known CP-violation (the slight asymmetry between matter and antimatter in weak interactions) is far too small to account for the imbalance seen in the universe. And the Standard Model contains approximately 19 free parameters, including particle masses and coupling constants, that must be measured rather than derived. A more fundamental theory should predict these values from first principles. No such theory currently exists.

The Table of Fundamental Laws

| Law | Domain | Core Statement | Key Limitation |
| --- | --- | --- | --- |
| Newton’s First Law | Classical mechanics | Objects maintain their state of motion without external force | Only valid in inertial reference frames; fails at relativistic speeds |
| Newton’s Second Law | Classical mechanics | F = ma | Breaks down near light speed; inapplicable to quantum particles |
| Newton’s Third Law | Classical mechanics | Every action has an equal and opposite reaction | Requires modification when electromagnetic field momentum is included |
| Law of Universal Gravitation | Gravity | F = G(m₁ × m₂) / r² | Superseded by general relativity in strong gravitational fields |
| Zeroth Law of Thermodynamics | Thermal equilibrium | Temperature is a transitive, well-defined property | Breaks down in quantum systems far from equilibrium |
| First Law of Thermodynamics | Energy | Energy is conserved | Global energy conservation is problematic in an expanding universe |
| Second Law of Thermodynamics | Entropy | Entropy of a closed system never decreases | Violated briefly at nanoscales; fails to explain its own initial condition |
| Third Law of Thermodynamics | Absolute zero | Absolute zero cannot be reached in finite steps | Does not address quantum systems with effective negative temperatures |
| Maxwell’s Equations | Electromagnetism | Unify electric and magnetic fields; predict electromagnetic waves | Classical; replaced at quantum scales by QED |
| Schrödinger Equation | Quantum mechanics | Wave function evolves deterministically between measurements | Non-relativistic; has no description of gravity |
| Heisenberg Uncertainty Principle | Quantum mechanics | Position and momentum cannot both be precisely defined simultaneously | No known exceptions; limits what can be measured or known |
| Special Relativity | High-speed motion | Speed of light is constant for all inertial observers | Does not include gravity or acceleration |
| General Relativity | Gravity and spacetime | Gravity is curvature of spacetime caused by mass and energy | Breaks down at singularities; incompatible with quantum mechanics |

Where Each Law Breaks Down

Physics has an unusual relationship with its own theories. Working physicists know that no current theory is final, and the history of the discipline is partly the history of mapping the edges of each framework. Every law described in this article has a boundary beyond which it stops working, and those boundaries are as informative as the laws themselves.

Newton’s laws don’t break dramatically; they fade. At everyday speeds, F = ma produces results accurate to many decimal places. As speeds increase toward a significant fraction of the speed of light, the errors grow. At 10% of the speed of light, the classical kinetic energy formula underestimates the true kinetic energy by about 1%. At 90% of the speed of light, the error is enormous. The transition from Newtonian to relativistic mechanics isn’t a cliff but a slope, and the gradient steepens as velocities rise.
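The quoted error figures follow directly from comparing the classical kinetic energy ½mv² with the relativistic expression (γ − 1)mc². A minimal sketch, working in units of mc² so no particle mass needs to be assumed:

```python
import math

def kinetic_energy_shortfall(beta: float) -> float:
    """Fraction by which classical ½mv² underestimates relativistic (γ-1)mc²."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    classical = 0.5 * beta**2    # kinetic energy in units of m·c²
    relativistic = gamma - 1.0   # kinetic energy in units of m·c²
    return 1.0 - classical / relativistic

for beta in (0.001, 0.1, 0.5, 0.9):
    shortfall = kinetic_energy_shortfall(beta) * 100
    print(f"v = {beta:5.3f} c: classical value too small by {shortfall:6.2f}%")

# ≈ 0% at everyday speeds, ≈ 0.75% at 0.1c, ≈ 19% at 0.5c, ≈ 69% at 0.9c
```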

The law of universal gravitation breaks in a more specific way. It fails for Mercury’s orbital precession, where general relativity provides the correct answer. It fails for gravitational lensing, where Newtonian gravity predicts half the observed bending of light by massive objects. It fails at cosmological scales, where it can’t accommodate an accelerating expansion. In each of these cases, general relativity reduces to Newtonian gravity in the limit of weak fields and low velocities, making Newtonian gravity not wrong but incomplete.

General relativity’s own breakdown points are more severe. At singularities, the theory’s equations produce infinities. Inside a black hole, beyond the event horizon, the equations predict infinite spacetime curvature at the central singularity. At the Big Bang, they predict infinite density and temperature at time zero. These infinities are not physical predictions; they’re mathematical signals that the theory has been applied outside its regime of validity. The Planck scale, where quantum gravitational effects should become dominant, is characterized by the Planck length of approximately 1.6 × 10⁻³⁵ meters, the Planck time of approximately 5.4 × 10⁻⁴⁴ seconds, and the Planck energy of approximately 1.22 × 10¹⁹ GeV. Below the Planck length and Planck time, general relativity cannot be trusted.
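The Planck quantities quoted above are nothing more than combinations of ħ, G, and c; a minimal sketch with assumed standard constants reproduces them.

```python
import math

HBAR = 1.0546e-34        # reduced Planck constant, J·s
G = 6.674e-11            # gravitational constant, N·m²/kg²
C = 2.998e8              # speed of light, m/s
JOULES_PER_GEV = 1.602e-10

planck_length = math.sqrt(HBAR * G / C**3)
planck_time = planck_length / C
planck_energy = math.sqrt(HBAR * C**5 / G)

print(f"Planck length ≈ {planck_length:.2e} m")                    # ≈ 1.6e-35
print(f"Planck time   ≈ {planck_time:.2e} s")                      # ≈ 5.4e-44
print(f"Planck energy ≈ {planck_energy/JOULES_PER_GEV:.2e} GeV")   # ≈ 1.2e19
```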

Quantum mechanics doesn’t break in the same way because it’s been tested in every accessible domain and has not failed. But it breaks conceptually at the interface with gravity. Attempts to apply standard quantum field theory methods to general relativity generate non-renormalizable divergences, mathematical infinities that can’t be absorbed into redefinitions of the theory’s parameters using the same techniques that work for the strong and electroweak forces. The failure is not in any observable prediction but in the consistency of the mathematical framework when gravity is included. This mathematical failure at the level of theory construction is the signal that something new is needed.

The second law of thermodynamics has a deeper conceptual breakdown point: it can’t explain its own initial condition. The law says entropy increases over time, which requires that entropy was lower in the past. The early universe was a nearly uniform hot plasma, and once gravity is taken into account, that smoothness corresponds to an extremely low-entropy state: entropy grows as matter clumps into galaxies, stars, and black holes, producing the highly structured universe of today. Why did the universe start in such a low-entropy state? The cosmological standard model combined with inflationary theory provides a partial account of the subsequent evolution, but not a satisfying explanation of the initial condition itself. Physicist Roger Penrose has argued that the extremely low entropy of the Big Bang represents one of the most underappreciated puzzles in all of physics, pointing toward the need for a deeper theory.

The Standard Model’s breakdown is different again. It doesn’t predict its own failure; it simply becomes incomplete near the Planck scale. Below that energy, it should remain accurate, but it makes no predictions about phenomena above it. It also breaks down as a description at the cosmological level because it says nothing about dark matter and dark energy, which dominate the universe’s energy content.

How These Laws Shape and Constrain Human Thinking

The fundamental laws of physics don’t only describe reality. They constrain what can exist, what can be built, and, in a subtle sense, how humans are allowed to think about the universe.

Perpetual motion machines are the most straightforward example. A machine that runs forever without any energy input violates the first law of thermodynamics. A machine that produces more work than the heat it receives violates the second law. Both types have been proposed by inventors throughout history, and both are physically impossible regardless of mechanism, material, or engineering ingenuity. The United States Patent and Trademark Office has maintained a policy since 1911 of refusing to evaluate perpetual motion machine applications without a working model, because the thermodynamic laws make their operation physically impossible and no amount of documentation can substitute for that reality.

Faster-than-light communication and travel are ruled out by special relativity. As an object with mass accelerates toward the speed of light, the energy required to accelerate it further grows without bound. At exactly the speed of light, infinite energy would be required. This is not a technological limitation that will be overcome with better propulsion systems. It’s built into the structure of spacetime itself. The nearest star system, Alpha Centauri, is approximately 4.37 light-years away. Under any physics currently known, reaching it would take years at minimum, decades or centuries at practically achievable velocities. Many science fiction scenarios involving interstellar travel simply ignore this constraint, but any serious engineering analysis cannot.

The uncertainty principle rules out a deterministic, completely knowable universe at the quantum level. The Laplacian ideal, named for the mathematician Pierre-Simon Laplace, who proposed in 1814 that a sufficiently intelligent demon knowing all particle positions and momenta could calculate all future states of the universe, is not just practically unachievable but physically impossible. No amount of computational power or measurement precision can determine both position and momentum of a quantum particle beyond the limits set by Heisenberg’s principle. The universe is not, at its most fundamental level, a deterministic clockwork. This has implications far beyond physics, touching philosophy of free will, the foundations of probability theory, and what it means to know something.

The second law of thermodynamics gives time a direction that doesn’t appear in any other fundamental law, and this has philosophical consequences. Every process described by the other laws is reversible in principle; a movie of any classical mechanical or quantum mechanical process played backward is also a valid physical process. But a movie of entropy decreasing, gas spontaneously compressing, heat flowing from cold to hot, would be immediately recognizable as unphysical. The second law is what distinguishes memory from prediction, the past from the future, causes from effects. Without it, the distinction between before and after would be absent from physics.

Conservation laws constrain technological possibility in another way. Every energy conversion involves losses, because real processes are never perfectly reversible. The efficiency of a heat engine is limited by the Carnot efficiency, which depends only on the temperatures of the hot and cold reservoirs and can never reach 100%. This means there is always waste heat, always a minimum energy cost for any useful process, and always a limit on how efficiently any technology can operate. Engineers designing power plants, refrigerators, and data centers all work within these thermodynamic ceilings.
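The Carnot ceiling is a one-line formula, η = 1 − T_cold/T_hot, with temperatures in kelvin. A minimal sketch with illustrative (assumed) reservoir temperatures, chosen here only to show the scale of the limit:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible efficiency of any heat engine between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

# Illustrative numbers: steam at ~550 °C (823 K) rejecting heat at ~25 °C (298 K)
print(f"Carnot limit ≈ {carnot_efficiency(823.0, 298.0):.1%}")  # ≈ 63.8%
# Real plants achieve well below this, because no process is perfectly reversible.
```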

The conservation of charge and baryon number, as currently understood, rule out certain kinds of matter creation or destruction in accessible energy ranges. No process at the energies of current technology can create or destroy net electric charge. This is why charge is always transported between objects rather than created, and why batteries work by moving charge rather than manufacturing it.

The laws also constrain what questions can be asked scientifically. A question like “what happened before the Big Bang” runs into the fact that general relativity predicts the breakdown of spacetime itself at the initial singularity. “Before” may not be a meaningful concept in that context, because time itself may not have existed before the Big Bang in any physically meaningful sense. This isn’t evasion. It’s a reflection of the fact that the laws don’t provide conceptual tools for addressing certain questions, and extending them beyond their validated range produces answers that can’t be trusted.

Are the Laws of Physics the Same Everywhere in the Universe?

The question of whether physical laws are universal is foundational to all of science but rarely examined closely. Every telescope observation, every analysis of a distant galaxy’s spectrum, every inference about conditions near a black hole 50 million light-years away, depends on the assumption that physics works the same there as it does here. This assumption is called the cosmological principle in its broader form, and it’s treated as a working assumption rather than a proven fact.

The evidence supporting universality is extensive and powerful. The spectral lines of hydrogen observed in galaxies billions of light-years away, in quasars whose light left them when the universe was a fraction of its current age, match the spectral lines of hydrogen in Earth laboratories to extraordinary precision. These spectral lines are determined by the fine-structure constant α (alpha), which governs the strength of electromagnetic interactions and has a measured value of approximately 1/137.035999. If this constant were even slightly different, the spectral fingerprint of hydrogen would shift measurably. No such shift is observed in the vast majority of astronomical data.
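
For readers who want to see where the 1/137 figure comes from, the sketch below computes α from its defining combination of constants, e²/(4πε₀ℏc), using published CODATA values. The variable names are ours, chosen for readability:

```python
import math

# Fundamental constants (CODATA 2018 values, SI units)
e        = 1.602176634e-19     # elementary charge, C
epsilon0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar     = 1.054571817e-34     # reduced Planck constant, J*s
c        = 299792458.0         # speed of light, m/s

# Fine-structure constant: alpha = e^2 / (4 * pi * epsilon0 * hbar * c)
alpha = e**2 / (4 * math.pi * epsilon0 * hbar * c)

print(alpha)        # ~0.0072973...
print(1 / alpha)    # ~137.036 -- the dimensionless number quoted in the text
```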

The cosmic microwave background (CMB), the thermal afterglow of the Big Bang first detected by Arno Penzias and Robert Wilson in 1965 and measured in extraordinary detail by the WMAP and Planck satellite missions, shows a temperature uniformity of about 1 part in 100,000 across the full sky. This extreme isotropy is consistent with the same physical laws and the same physical history operating across the entire observable universe, which spans roughly 93 billion light-years in diameter.

Nuclear processes in distant stars produce the same elemental abundances as nuclear physics predicts from the known strong force coupling constant. The ratio of helium to hydrogen produced in the first three minutes of the universe, in the process called Big Bang nucleosynthesis, is determined by the values of the weak force coupling constants, the neutron-proton mass difference, and the expansion rate of the universe. Observations of primordial helium abundance in metal-poor galaxies match the predictions of standard physics to within a few percent. If the fundamental constants had been different in the early universe, the helium abundance would be measurably different, and no such difference is observed.

Against this backdrop of confirmation, there are genuinely intriguing anomalies worth taking seriously. In 1999, John Webb and collaborators at the University of New South Wales analyzed the absorption spectra of distant quasars observed with the Keck Observatory in Hawaii and found hints that the fine-structure constant might have been slightly smaller in the distant past, and might be slightly different in different directions of the sky. The reported variation was tiny, around five parts in a million, but it was statistically significant in the Keck data. If confirmed, it would mean either that the laws of physics have evolved over cosmic time, or that they have a spatial dependence, both of which would be among the most important discoveries in the history of science.

Subsequent investigations have produced mixed results. A team using the Very Large Telescope in Chile found variations of similar magnitude but with the opposite sign, suggesting a dipole pattern: α is slightly smaller in one direction of the sky and slightly larger in the opposite direction. Other analyses, including work using ultra-stable spectrographs designed specifically to minimize instrumental systematics, have found no significant variation. As of 2025, no scientific consensus has been reached on whether the fine-structure constant varies. The measurements are difficult, the systematic uncertainties are substantial, and different teams using different instruments have reached different conclusions.

This is one of those rare cases where the scientific community is genuinely uncertain and where the question is live enough to drive continued observational effort. The forthcoming Extremely Large Telescope (ELT) in Chile, scheduled to begin operations in the late 2020s, will be capable of spectroscopic measurements precise enough to either confirm or rule out variations at the level reported by Webb’s team. The question will be answered, but it hasn’t been yet.

Inflationary cosmology adds another layer of uncertainty. The inflation hypothesis, put forward by Alan Guth in 1980 and extended by Andrei Linde and others, proposes that the very early universe underwent exponential expansion driven by a quantum field called the inflaton. Inflation solves several problems with the standard Big Bang model, including the horizon problem (why the CMB looks the same in all directions) and the flatness problem (why the universe has such a close-to-critical density today). But inflation also makes a disturbing prediction: in most versions of the theory, inflation continues eternally in some regions of space, creating a vast “multiverse” in which different regions may have different values of the physical constants.

If eternal inflation is correct, then our observable universe is one bubble in an enormous structure where physics may be fundamentally different outside our region. The constants we observe would be the values that happened to be realized in our bubble when inflation ended, determined by quantum chance rather than by logical necessity. This would mean that the specific values of the electron mass, the speed of light, the gravitational constant, and every other fundamental parameter are, in the largest sense, accidents of cosmic geography rather than universal truths.

The anthropic principle connects to this question directly. Many of the constants of physics appear to be finely tuned for the existence of complex structures. If the strong nuclear force were slightly weaker, protons wouldn’t bind and there would be no atoms heavier than hydrogen. If the cosmological constant were much larger, the universe would have expanded too fast for gravity to assemble matter into galaxies and stars. If the fine-structure constant were significantly different, stellar nucleosynthesis couldn’t produce the carbon and oxygen necessary for organic chemistry. These constraints are real. Whether they imply a selection effect in a multiverse, a deeper principle that fixes the constants, or simply the fact that we exist in a universe compatible with our existence and can’t observe the others, is a question at the boundary of physics and philosophy.

What’s certain is that within the observable universe, all accessible evidence points to a high degree of uniformity in the laws of physics. Whether that uniformity extends to regions beyond what light has had time to reach since the Big Bang is, in the strict sense, unknowable with current technology. It may be unknowable in principle if those regions are permanently beyond our observational horizon.

The Problem of Unification

The history of physics can be read as a progressive unification of seemingly separate phenomena. Newton unified terrestrial and celestial gravity. Maxwell unified electricity and magnetism. Einstein’s special relativity unified space and time. The electroweak theory unified electromagnetism and the weak force. The Standard Model brought the strong force into the same mathematical framework. Each step revealed that what appeared to be distinct phenomena were actually manifestations of a single underlying principle.

The missing unification is gravity. All three of the forces described by the Standard Model have been successfully incorporated into quantum field theory. Gravity has not. The mathematical incompatibility between general relativity and quantum mechanics isn’t a matter of insufficient effort; the world’s best theoretical physicists have worked on this problem continuously since the 1930s.

String theory is the most mathematically developed candidate for quantum gravity. It proposes that fundamental particles are not point-like but are instead tiny, one-dimensional vibrating strings whose vibrational modes determine which type of particle they appear to be. In the 1970s and 1980s, physicists discovered that string theory automatically includes a massless spin-2 particle that behaves like a graviton, the hypothetical carrier of the gravitational force. This seemed like exactly the right result and drove enormous investment of theoretical effort.

But string theory has produced no confirmed experimental predictions that distinguish it from competing theories. Its equations work in 10 or 11 spacetime dimensions, not the 4 we observe, requiring 6 or 7 extra dimensions to be “compactified” at very small scales. The way they’re compactified is not uniquely determined by the theory; estimates suggest the number of possible compactification geometries is around 10⁵⁰⁰, each corresponding to a different set of physical constants. Critics like Lee Smolin have argued that this makes string theory effectively unfalsifiable. His book The Trouble with Physics presents this critique in detail and remains a significant contribution to debates about the direction of fundamental physics.

Loop quantum gravity takes a different approach, quantizing spacetime itself rather than introducing new fundamental objects. It predicts that spacetime has a granular structure at the Planck scale, with a minimum length of approximately 1.6 × 10⁻³⁵ meters. This prediction is, in principle, testable through its effect on high-energy photon propagation, since a granular spacetime would cause photons of different energies to travel at slightly different speeds. The precision needed to detect this effect is beyond current capabilities but may eventually be reached by future gamma-ray observatories.
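
The Planck length quoted above follows from combining three constants: it is √(ℏG/c³). A quick sketch using standard CODATA values reproduces the figure:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 299792458.0       # speed of light, m/s

# Planck length: the scale at which loop quantum gravity predicts spacetime granularity
planck_length = math.sqrt(hbar * G / c**3)
print(planck_length)     # ~1.6e-35 m, matching the figure quoted above
```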

After nearly a century of effort, quantum gravity remains unsolved. Whether the solution requires primarily better mathematics, better experiments, or a fundamental conceptual revolution comparable to what quantum mechanics itself represented is genuinely unclear. It may require all three. Brian Greene’s book The Elegant Universe provides a thorough and accessible account of how string theory approaches this problem for general readers.

Dark Matter, Dark Energy, and the Limits of Known Laws

The Standard Model accounts for approximately 5% of the total energy content of the universe. The remaining 95% consists of dark matter (roughly 27%) and dark energy (roughly 68%), according to the best current measurements from the Planck satellite and the Dark Energy Survey. Neither dark matter nor dark energy corresponds to anything in the current theoretical framework of particle physics. Both are inferred entirely from their gravitational effects.

Dark matter was first proposed by Fritz Zwicky in 1933 after observing that galaxies in the Coma Cluster were moving too fast to be held together by the visible mass. Vera Rubin and Kent Ford confirmed the problem definitively in the 1970s by measuring the rotation curves of spiral galaxies, finding that the orbital speeds of stars far from galactic centers were much higher than predicted by the mass of visible stars alone. The gravitational influence of unseen mass was needed to explain the observations. Whatever dark matter is, it has mass, it clusters gravitationally, and it doesn’t interact electromagnetically (otherwise it would emit or absorb light and be visible).
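
A rough sketch of the discrepancy Rubin and Ford measured: if the visible mass alone supplied the gravity, orbital speeds far from the center should fall off roughly as √(GM/r). The mass value below is an illustrative stand-in for a galaxy’s visible matter, not a measurement:

```python
import math

G = 6.674e-11            # gravitational constant, SI units
M_visible = 1e41         # kg, roughly 5e10 solar masses -- illustrative only

def keplerian_speed(r_m: float) -> float:
    """Orbital speed if essentially all the mass sits inside radius r."""
    return math.sqrt(G * M_visible / r_m)

kpc = 3.086e19  # one kiloparsec in metres
for r_kpc in (5, 10, 20, 40):
    v = keplerian_speed(r_kpc * kpc) / 1000  # km/s
    print(f"{r_kpc:>3} kpc: predicted {v:5.0f} km/s")

# The prediction falls as 1/sqrt(r); measured rotation curves instead stay
# nearly flat at large radii, which is the dark matter discrepancy.
```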

No particle in the Standard Model fits the profile. Neutrinos have mass and are electrically neutral but are too light and fast-moving to provide the gravitational scaffolding needed for galaxy formation. The leading candidates for decades were Weakly Interacting Massive Particles (WIMPs), hypothetical particles with masses in the range of 10 to 1000 times the proton mass. Direct detection experiments, including LUX-ZEPLIN at the Sanford Underground Research Facility in South Dakota, have placed increasingly tight constraints on WIMP interaction cross-sections without detecting a signal. Axions, sterile neutrinos, and primordial black holes are among the alternative candidates that remain viable.

Dark energy is the more mysterious and philosophically troubling component. Its effect is to cause the expansion of the universe to accelerate, a discovery made in 1998 by teams led by Saul Perlmutter, Adam Riess, and Brian Schmidt. Its observed energy density is approximately 6 × 10⁻¹⁰ joules per cubic meter of empty space. Quantum field theory predicts that the vacuum energy density, which behaves like dark energy, should be roughly 10¹²² times larger than the observed value. This discrepancy, often described as the worst quantitative prediction in the history of physics, represents a fundamental problem with the current theoretical framework.
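
The 10¹²² figure can be reproduced with a back-of-envelope estimate: if vacuum fluctuations are assumed to contribute energy all the way up to the Planck scale, the predicted density is of order c⁷/(ℏG²). The sketch below compares that naive estimate with the observed value quoted above; it is an order-of-magnitude illustration, not a rigorous field-theory calculation:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 299792458.0       # speed of light, m/s

# Naive vacuum energy density with fluctuations cut off at the Planck scale
planck_energy_density = c**7 / (hbar * G**2)   # ~5e113 J/m^3

observed_dark_energy_density = 6e-10           # J/m^3, the value quoted in the text

print(planck_energy_density / observed_dark_energy_density)
# ~8e122 -- roughly the 10^122 mismatch described above
```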

The existence of dark matter and dark energy doesn’t mean the known laws are wrong in their domains. General relativity accurately predicts that these components exert gravitational effects. The Standard Model accurately describes the particles that constitute ordinary matter. But the laws are silent on what dark matter is and what dark energy is, meaning that 95% of the universe’s energy content falls outside any current theoretical understanding at the level of fundamental constituents.

Summary

The laws of physics are working descriptions, each accurate within a defined domain, each pointing toward something deeper at its boundaries. Newton’s mechanics gave way to special relativity at high speeds and to general relativity in strong gravitational fields. Classical electromagnetism gave way to quantum electrodynamics at the photon scale. Quantum mechanics itself stands unresolved against gravity.

What’s missing from any tidy presentation of these laws is how powerfully they work within their respective domains. GPS satellites function. Particle collisions at the Large Hadron Collider come out the way the Standard Model predicts. MRI machines image soft tissue. Nuclear reactors generate electricity. Every piece of semiconductor technology in every device operates exactly as quantum mechanics predicts. The laws aren’t incomplete in a way that undermines their applications. Their incompleteness manifests only at the extremes: at Planck-scale energies, inside black hole singularities, in the quantum behavior of gravitational fields, and in the nature of dark matter and dark energy.

Whether the laws are universal remains the deepest open question. Within the observable universe, spanning roughly 93 billion light-years, the physical constants and laws appear to be highly uniform. Hints of variation in the fine-structure constant remain unconfirmed by the 2025 state of evidence, and the Extremely Large Telescope may soon settle the question. Beyond the observable universe, in regions that the expansion of space has moved beyond causal contact with Earth, the laws may or may not be the same. Current physics can’t say.

The most honest summary is that the fundamental laws of the universe represent the best current approximation to something real, structured, and deeper than what any single framework has yet captured. Each law’s breakdown is a research frontier. The singularities of general relativity, the divergences of quantum field theory applied to gravity, the unexplained values of the Standard Model’s parameters, the nature of dark energy, the initial low-entropy condition of the universe: these aren’t defects to be apologized for but open doors. The laws don’t mark the end of understanding. They mark how far understanding has come.

Stephen Hawking’s A Brief History of Time captured a generation’s imagination with the prospect of a complete theory of everything. The prospect remains compelling. It also remains unachieved. The difference between those two facts is where physics lives.

Appendix: Top 10 Questions Answered in This Article

What are the fundamental laws of the universe?

The fundamental laws of the universe are mathematical principles describing how matter and energy behave across all physical situations. They include Newton’s laws of motion, the laws of thermodynamics, Maxwell’s equations, the principles of quantum mechanics, and Einstein’s theories of special and general relativity. Each law applies within a defined domain and has specific, well-understood limitations at its boundaries.

Why does Newton’s second law, F = ma, fail at speeds close to the speed of light?

At relativistic speeds, the resistance of an object to further acceleration increases in a way that the classical F = ma equation doesn’t account for. The relativistic formulation replaces the classical mass term with a velocity-dependent quantity that grows without bound as speed approaches the speed of light. At everyday speeds the correction is negligible, but near light speed it makes the classical equation increasingly inaccurate and eventually shows that reaching light speed would require infinite energy.
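
A short numerical illustration of how the correction grows: the Lorentz factor 1/√(1 − v²/c²) multiplies the effective inertia, staying indistinguishable from 1 at everyday speeds and diverging as v approaches c. The speeds below are illustrative:

```python
import math

c = 299792458.0  # speed of light, m/s

def gamma(v: float) -> float:
    """Lorentz factor: how much the effective resistance to acceleration grows."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

print(gamma(300.0))        # airliner speed: ~1.0000000000005, so F = ma works fine
print(gamma(0.9 * c))      # ~2.29
print(gamma(0.999 * c))    # ~22.4 -- and it grows without bound as v approaches c
```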

What is entropy and why does it always increase?

Entropy is a measure of the number of microscopic configurations corresponding to a given macroscopic state of a system. It always increases in a closed system because disordered states vastly outnumber ordered ones. The molecules in a room full of gas can be arranged in far more ways when mixed throughout the room than when confined to one corner, so random thermal motion drives the gas toward its most probable state, which is the most disordered. There’s no force pushing toward disorder; it’s a consequence of overwhelming statistical probability.
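
A toy calculation makes the statistics concrete. With even a modest number of molecules that can each sit in either half of a room, the evenly spread arrangement vastly outnumbers the all-in-one-half arrangement; the molecule count below is kept deliberately tiny so the numbers stay printable:

```python
from math import comb

N = 100  # number of gas molecules -- tiny compared to a real room, illustrative only

# Number of ways to distribute the molecules between the two halves of the room
all_in_left_half = comb(N, 0)        # exactly 1 arrangement
evenly_spread    = comb(N, N // 2)   # ~1e29 arrangements

print(evenly_spread / all_in_left_half)   # the spread-out state is ~10^29 times likelier
# With a realistic N of ~10^25 molecules, the ratio is so enormous that spontaneous
# compression is never observed -- that statistical weight is what entropy measures.
```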

How did Maxwell’s equations predict radio waves before they were detected?

James Clerk Maxwell showed in 1865 that his four equations describing electric and magnetic fields implied self-sustaining electromagnetic waves propagating through vacuum at the speed of light. The mathematics predicted that oscillating charges would radiate waves across a continuous spectrum extending far below visible light frequencies. Heinrich Hertz generated and detected these waves experimentally in his Karlsruhe laboratory in 1887, more than two decades after Maxwell’s theoretical prediction.
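
The number that convinced Maxwell can be reproduced in a single line: the wave speed implied by his equations is 1/√(μ₀ε₀), built entirely from electrical and magnetic measurements. A minimal check with modern values:

```python
import math

mu0      = 1.25663706212e-6   # vacuum permeability, N/A^2
epsilon0 = 8.8541878128e-12   # vacuum permittivity, F/m

# Speed of electromagnetic waves predicted by Maxwell's equations
wave_speed = 1.0 / math.sqrt(mu0 * epsilon0)
print(wave_speed)             # ~2.998e8 m/s -- the measured speed of light
```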

Why are general relativity and quantum mechanics incompatible?

Quantum mechanics describes discrete, probabilistic interactions between particles in flat spacetime, while general relativity is a continuous, deterministic theory of curved spacetime geometry. When physicists attempt to apply quantum field theory methods to gravity, they generate uncontrolled infinities that can’t be removed using the renormalization techniques that work for the other fundamental forces. At the Planck scale, both theories must simultaneously apply, but their mathematical frameworks produce contradictions rather than a unified description.

What is Noether’s theorem and why does it matter for physics?

Emmy Noether proved in 1915 that every continuous symmetry of the laws of physics corresponds to a conserved quantity. Time symmetry (laws don’t change from moment to moment) implies energy conservation; spatial symmetry (laws are the same everywhere) implies momentum conservation; rotational symmetry implies angular momentum conservation. This theorem revealed that conservation laws are not arbitrary empirical regularities but necessary mathematical consequences of the symmetry structure built into physical law itself.

Is there any evidence that the laws of physics vary across the universe?

The most discussed evidence comes from John Webb’s 1999 analyses of quasar spectra, which found hints that the fine-structure constant governing electromagnetic interaction strength was slightly different in the distant past and may vary with direction in the sky. Subsequent studies have produced conflicting results, with some teams finding similar variations and others finding none. As of 2025, no variation has been confirmed to scientific consensus, and forthcoming instruments like the Extremely Large Telescope are expected to resolve the question.

What is the Heisenberg uncertainty principle, and is it just a measurement problem?

The uncertainty principle states that the product of the uncertainties in a particle’s position and momentum is always at least h/4π, where h is Planck’s constant. It is not a measurement problem caused by imprecise instruments. It reflects the fundamental wave nature of quantum particles: position and momentum are not simultaneously well-defined properties of a quantum object, not because of any practical limitation but because quantum mechanics doesn’t assign them precise simultaneous values. No future technology will overcome this constraint.
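
A worked example of the limit in practice: confine an electron to roughly the size of an atom and the principle sets a floor on how sharply its momentum, and hence its velocity, can be defined. The confinement length below is an illustrative choice:

```python
hbar       = 1.054571817e-34    # reduced Planck constant, J*s (hbar/2 equals h/4*pi)
m_electron = 9.1093837015e-31   # electron mass, kg

delta_x = 1e-10                 # confine an electron to ~one atomic diameter, m

# Minimum momentum uncertainty allowed by delta_x * delta_p >= hbar / 2
delta_p = hbar / (2 * delta_x)
delta_v = delta_p / m_electron

print(delta_p)   # ~5.3e-25 kg*m/s
print(delta_v)   # ~5.8e5 m/s -- an irreducible velocity spread, not an instrument error
```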

What are dark matter and dark energy, and why don’t current laws explain them?

Dark matter is inferred from gravitational effects on visible matter, including galaxy rotation curves and gravitational lensing, but emits no detectable electromagnetic radiation and corresponds to no particle in the Standard Model. Dark energy is the energy density of empty space driving the accelerating expansion of the universe, discovered in 1998 through supernova observations. Together they account for roughly 95% of the universe’s energy content, and no current theoretical framework successfully predicts either from first principles.

Why can’t absolute zero temperature ever be reached?

The third law of thermodynamics states that reaching absolute zero (0 Kelvin) would require an infinite number of steps, each requiring progressively more effort. Every cooling process reduces temperature by a finite fraction rather than by a fixed amount, so the approach to absolute zero is asymptotic. The coldest temperatures achieved in laboratories are around 38 picokelvin, far below the temperature of outer space, but absolute zero itself remains permanently out of reach. This is a fundamental thermodynamic constraint, not a technological limitation.
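
A simple numerical sketch of that asymptotic approach: if each idealized cooling stage removes a fixed fraction of the remaining temperature, the sequence shrinks geometrically and never lands on zero. The starting temperature and fraction are arbitrary illustrations:

```python
T = 300.0          # starting temperature, kelvin (room temperature)
fraction = 0.5     # each idealized cooling stage halves the remaining temperature

for step in range(1, 11):
    T *= fraction
    print(f"step {step:2d}: {T:.6f} K")

# After 10 steps T is ~0.3 K; after 100 steps ~2e-28 K; the value never reaches
# exactly zero, which is the sense in which absolute zero needs infinitely many steps.
```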
