
Why Scientists Still Cannot Agree on How Fast the Universe Is Expanding

Key Takeaways

  • The best local and early-universe measurements still disagree by about 5 to 6 km/s/Mpc.
  • New data from JWST and DESI sharpened the dispute instead of ending it.
  • The disagreement matters because it tests the limits of ΛCDM and modern cosmology.

The Number That Refuses to Settle Down

The argument is about the Hubble constant, usually written as H0. It expresses the present-day expansion rate of the universe in kilometers per second for every megaparsec of distance. In plain terms, it asks how much faster a faraway galaxy appears to recede when it is another 3.26 million light-years farther away.
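For concreteness, Hubble's law can be sketched in a few lines of code. The two H0 values below are illustrative round numbers from the two camps; which one is correct is the whole dispute.

```python
# Hubble's law: apparent recession velocity grows linearly with distance.
# The H0 values here are illustrative, not a claim about which is right.

def recession_velocity(distance_mpc: float, h0: float) -> float:
    """Apparent recession velocity in km/s for a distance in Mpc."""
    return h0 * distance_mpc

# A galaxy 100 Mpc away under the two contested values:
v_low = recession_velocity(100, 67.4)   # early-universe inference
v_high = recession_velocity(100, 73.0)  # local distance ladder
print(v_low, v_high)  # roughly 6740 vs 7300 km/s
```

The same 100 Mpc of distance yields velocities differing by more than 500 km/s, which is why precise distances matter so much.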

That sounds straightforward. It is not.

Two major routes to the answer keep producing different values. One route infers the expansion rate from conditions in the early universe, especially the cosmic microwave background measured by the Planck mission and, more recently, by the Atacama Cosmology Telescope. Under the standard ΛCDM cosmological model, the Planck result lands near 67.4 km/s/Mpc. Later analyses from ACT also remained in that lower range.

The other route measures the nearby universe more directly. The SH0ES program led by Adam Riess uses a distance ladder built from Cepheid variable stars and Type Ia supernova explosions. JWST observations strengthened the case that unresolved crowding in older Hubble Space Telescope Cepheid measurements is not large enough to erase the mismatch. The local value still lands in the low 70s.

That gap is the famous Hubble tension. It is no longer a minor statistical nuisance. It is the kind of disagreement that can mean something in the assumptions is wrong, something in the measurements is wrong, or both.

A Disagreement About Distance, Time, and History

The universe has no ruler stretched across it and no clock visible from the outside. Astronomers infer distances and expansion history by piecing together different indicators, each with its own physics and each with its own vulnerabilities.

The local route depends on what is often called the cosmic distance ladder. Nearby objects with geometric distances calibrate the intrinsic brightness of stars such as Cepheids. Those Cepheids calibrate supernovae in somewhat farther galaxies. Those supernovae then act as bright distance markers far across the nearby universe. Any small bias at a lower rung can echo through the whole chain.
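The way a small bias echoes through the chain can be illustrated with a toy calculation. The rung names and bias sizes below are purely illustrative, not estimates of any real survey's errors.

```python
# Toy error propagation through a three-rung distance ladder.
# A small multiplicative distance bias at each rung compounds through the chain.
# Rung names and bias sizes are illustrative only.

rung_biases = {
    "geometric anchors": 0.01,       # 1% distance bias (hypothetical)
    "Cepheid calibration": 0.02,     # 2% (hypothetical)
    "SN Ia standardization": 0.01,   # 1% (hypothetical)
}

total = 1.0
for rung, bias in rung_biases.items():
    total *= 1.0 + bias

# Since v = H0 * d, a fractional overestimate of distances translates into a
# comparable fractional underestimate of H0, and vice versa.
print(f"compounded distance bias: {(total - 1) * 100:.1f}%")
```

Three percent-level biases pointing the same way compound to roughly a 4 percent distance error, which is on the order of the gap in dispute.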

The early-universe route works differently. It treats the baby universe as a physical system whose ingredients and behavior can be modeled from the imprint left in the cosmic microwave background. In that approach, H0 is not measured directly in today’s universe. It is inferred from a model that links the universe at about 380,000 years old to its later history. That distinction matters. A lower Planck-style H0 is not simply read off a sky map. It emerges from fitting data to a model with parameters for ordinary matter, dark matter, dark energy, and the primordial density pattern.

This is why the argument persists. The two camps are not merely using different telescopes. They are asking the universe different kinds of questions.

The Lower Number Comes from the Early Universe

The Planck satellite observed tiny temperature fluctuations in the cosmic microwave background across the whole sky. Those ripples encode the density, geometry, and composition of the early universe. When analyzed under the standard ΛCDM framework, they produce a precise set of cosmological parameters, including H0 near 67.4 km/s/Mpc.

That result is not standing alone anymore. The Atacama Cosmology Telescope has produced measurements that broadly support the same lower expansion picture from high-redshift data. That matters because one early hope was that the Planck result might soften once an independent experiment re-measured the sky. That has not happened.

The strength of this approach is that it rests on well-developed early-universe physics and very precise observations. Its weakness is just as obvious. The Hubble constant in this framework is inferred through a cosmological model rather than measured directly in the nearby universe. If the model is incomplete, the inferred value can shift even when the observations themselves are excellent.

The Higher Number Comes from the Nearby Universe

The local-universe value rests on a long observational tradition. Astronomers know that Cepheid variable stars pulse in a way that reveals their true brightness. Compare true brightness with observed brightness, and distance follows. Type Ia supernovae then extend that calibration much farther.

The SH0ES team has pushed this method with increasingly careful work on Gaia parallaxes, NGC 4258 maser geometry, Large Magellanic Cloud distances, metallicity effects, and near-infrared observations. Critics often argued that Hubble Space Telescope images might blend nearby stars together, making Cepheids look brighter than they really were and distances too short. JWST was expected to test that directly because it performs especially well in the infrared.

That test did not make the problem disappear. It narrowed the list of easy explanations.

At this stage, the claim that the tension is simply a Hubble Space Telescope imaging mistake is hard to defend. The local distance ladder may still contain hidden systematics, but the most obvious suspected flaw has been badly weakened.

A Third Camp Has Entered the Debate

The story is not just “Planck versus SH0ES.” Other methods sit between them or scatter around them, and that scatter is one reason the subject remains active rather than settled.

The Chicago-Carnegie Hubble Program led by Wendy Freedman uses alternative stellar indicators, especially stars at the tip of the red-giant branch and asymptotic giant branch stars, to calibrate supernova distances. Results from those methods have often landed between the classic local and early-universe values. That has not ended the argument. It has changed its shape.

Instead of two neat peaks with nothing between them, the field now has a more tangled pattern. Some local methods lean high. Some lean lower. The spread suggests that either subtle systematics differ by method or the way nearby distances are stitched together is more delicate than earlier public summaries implied.

Strong gravitational lensing adds another angle. When a massive foreground galaxy bends light from a quasar behind it, different light paths can produce time delays between brightness variations in the images. From those delays, along with lens modeling, astronomers can infer H0. This method, often grouped with time-delay cosmography, is powerful because it does not depend on Cepheids or the cosmic microwave background.

Gravitational-wave standard siren measurements offer yet another path. Events detected by LIGO, Virgo, and KAGRA can provide absolute distance information from the waveform itself. This method is still less precise than the leading approaches, but it is becoming too useful to ignore because it bypasses much of the traditional distance ladder.

The Tension Is Not Just About One Decimal Place

A reader seeing numbers such as 67.4 and 72.6 might wonder why cosmologists treat the gap as such a major problem. Those values differ by only about 7 to 8 percent.

In ordinary engineering, 8 percent can be manageable. In precision cosmology, it is large.

The reason is not the size of the difference alone. It is the size of the difference relative to the quoted error bars. Modern cosmological measurements are so refined that a 5 km/s/Mpc gap corresponds to a many-sigma disagreement. The local and early-universe values are not brushing past each other. They remain separated after years of improved data, reanalysis, new instruments, and repeated challenges to the assumptions behind each method.
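The "many-sigma" arithmetic is simple enough to show directly. The error bars below are illustrative, roughly the scale quoted in recent papers, not official values from either team.

```python
import math

# Gaussian tension between two independent measurements, in sigma.
# Central values and error bars are illustrative, not official results.

def tension_sigma(v1: float, e1: float, v2: float, e2: float) -> float:
    """Separation of two independent measurements in combined standard deviations."""
    return abs(v1 - v2) / math.hypot(e1, e2)

planck = (67.4, 0.5)  # early-universe inference (illustrative error bar)
shoes = (73.0, 1.0)   # local distance ladder (illustrative error bar)
print(round(tension_sigma(*planck, *shoes), 1))  # about 5 sigma
```

A 5-sigma discrepancy between two Gaussian measurements has a chance probability well below one in a million, which is why cosmologists stopped treating it as a fluke.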

That persistence changes the scientific question. It is no longer enough to ask which number is right. The harder question is which hidden assumption links each method to that number.

Hidden Systematics Are Still Possible

Every serious discussion begins with systematic error. Not because scientists are careless, but because modern observational cosmology is so exacting that tiny effects matter.

In the local distance ladder, the list of possible trouble spots is long. Cepheid brightness can depend on metallicity. Dust extinction can redden and dim starlight. Crowding in dense galactic fields can bias photometry. Supernova host galaxy properties may correlate with luminosity after standard corrections. Nearby galaxy motions complicate recession measurements. Zero-point offsets in parallax catalogs matter. So do cross-calibrations between instruments.

That catalog of concerns is real, though many of the obvious candidates have now been tested hard. JWST data weakened the crowding argument. Gaia sharpened parallax calibration. Different anchors such as the Large Magellanic Cloud and NGC 4258 have been compared repeatedly. This does not prove the local ladder is flawless. It does mean that any remaining flaw has become less obvious and more stubborn than the early public debate suggested.

The early-universe side has its own dependencies. The Planck-style H0 is only as secure as the model used to infer it. If the universe contains an extra relativistic species, a brief episode of early dark energy, unexpected neutrino behavior, altered recombination physics, or a more complex dark sector, then the cosmic microwave background can still be measured with high precision while the inferred H0 shifts. A lower early-universe H0 is not model free.

Here a clear judgment is warranted. The dispute now looks less like a simple measurement blunder and more like a stress test of ΛCDM itself. That does not prove ΛCDM is wrong. It does make it harder to treat the model as beyond challenge.

Why New Physics Is So Tempting

When the simplest explanation of a disagreement is ruled out, theorists start asking whether the background model needs to change. The Hubble tension has generated an enormous literature on that possibility.

One family of ideas modifies the universe before recombination. Early dark energy models propose that a temporary energy component altered the expansion rate before the cosmic microwave background was released, reducing the sound horizon and allowing a higher H0 to be consistent with early-universe observations. These models became popular because they can raise the H0 inferred from the cosmic microwave background without rewriting the entire late-time universe.

Another family introduces extra radiation-like content, often described as increasing the effective number of relativistic species. Loosely speaking, the early universe would expand a bit faster if additional light particles were present. That also changes the sound horizon and can shift inferred cosmological parameters.

A third family changes the late universe. Instead of a constant dark energy density, the dark-energy equation of state might evolve with time. Modified gravity ideas take a different path, altering how spacetime behaves on cosmic scales. Void or inhomogeneity proposals ask whether the local cosmic environment could bias nearby measurements.

None of these ideas has won. That matters. After years of work, there is still no elegant, widely accepted extension that fixes the Hubble tension while also fitting the full range of data on baryon acoustic oscillations, supernovae, structure growth, and the cosmic microwave background.

That is the catch. A model can fix one tension and create two more.

DESI Changed the Conversation Again

The Dark Energy Spectroscopic Instrument was not built only to talk about H0, but it has become central to the argument. DESI maps millions of galaxies and quasars, measuring the large-scale distribution of matter through baryon acoustic oscillations. Those patterns act as a standard ruler across cosmic history.

DESI results have delivered some of the best BAO measurements yet. That has sharpened the debate over whether dark energy is truly constant or may evolve over time.

This is where the Hubble story becomes more interesting and more difficult. If dark energy is evolving, then the late-time expansion history is not exactly what ΛCDM assumes. That could help explain part of the mismatch. Yet hints of dynamical dark energy do not automatically solve the Hubble tension, and some recent analyses suggest they may tighten the constraints on popular fixes such as early dark energy rather than loosen them.

So DESI did something cosmology often does at its best. It reduced ignorance in one place and exposed more of it in another.

What the Sound Horizon Has to Do With Any of This

A technical phrase appears again and again in this debate: the sound horizon. It deserves translation because it sits near the center of the problem.

In the early universe, before atoms formed and light traveled freely, ordinary matter and radiation behaved as a tightly coupled plasma. Pressure waves moved through it. The maximum distance those waves could travel before recombination became imprinted both in the cosmic microwave background and in the later clustering of galaxies. That scale is the sound horizon.

Planck and DESI do not measure H0 directly in the same way a nearby distance ladder tries to. They measure or infer quantities tied to this standard ruler and then use a cosmological model to connect that ruler to the present universe. If the sound horizon was smaller than ΛCDM expects, then the inferred H0 can rise. That is why so many proposals focus on the pre-recombination era.
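The leverage of the sound horizon can be sketched with a toy scaling. This is a rough illustration of the leading-order behavior, not any group's actual analysis, which requires a full Boltzmann-code fit.

```python
# Rough scaling only: the CMB pins down the angular acoustic scale
# theta* = r_s / D_A extremely well. At fixed theta*, a smaller sound
# horizon r_s forces the inferred distances down, which to leading order
# pushes the inferred H0 up as roughly 1/r_s.
# This is a toy illustration, not a substitute for a real parameter fit.

def rescaled_h0(h0_fiducial: float, rs_ratio: float) -> float:
    """Inferred H0 if the true sound horizon is rs_ratio times the fiducial one."""
    return h0_fiducial / rs_ratio

# A roughly 7 percent smaller sound horizon would lift 67.4 into the low 70s:
print(round(rescaled_h0(67.4, 0.93), 1))  # about 72.5
```

This is why early dark energy and extra-radiation models all work by shaving a few percent off the sound horizon, and why they must do so without disturbing everything else the same ruler constrains.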

The attraction is obvious. Change the ruler, and the map changes without rewriting every local observation.

The danger is just as obvious. The sound horizon is woven into multiple datasets. Tuning it enough to fix H0 can easily spoil the excellent fit elsewhere. That is why the literature is crowded with models that look promising in one plot and fail in a joint analysis.

The Nearby Universe Might Not Be Perfectly Calm

Not every proposed explanation requires new particles or new fields. Some center on where the Milky Way happens to sit in the cosmic web.

If the local universe were underdense compared with the cosmic average, nearby galaxies could appear to recede a bit faster than a more representative sample would suggest. This is the so-called local void idea. It has intuitive appeal because local velocity flows do matter, and the real universe is not a perfectly smooth fluid.

The difficulty is scale. To account for the full Hubble tension, the void would have to be large and structured in a way that many cosmologists consider implausible given modern galaxy surveys and standard structure formation. Studies of isotropy and local Hubble-flow variance continue, yet the broad view remains that ordinary local structure alone is unlikely to produce the entire discrepancy.

This is one place where uncertainty still feels real. Not because the void idea currently looks strong, but because the boundary between measurable local flow effects and a fully cosmological parameter is easy to talk about too confidently. Nearby space is messy, and the question of exactly how messy is not frivolous.

The Supernova Step Is Both Powerful and Exposed

Type Ia supernovae are among the best distance indicators in astronomy because they reach enormous distances and can be standardized empirically. They helped reveal cosmic acceleration in the late 1990s, and they remain central to both local H0 work and broader dark-energy studies.

Yet the supernova step is also exposed to subtle population issues. Are all Type Ia supernovae truly drawn from one cleanly standardizable family? Do host galaxy age, metallicity, star-formation rate, or dust law shift the calibration in ways that survive current corrections? Could there be redshift evolution that masquerades as cosmology? These are not idle worries. Large supernova compilations such as Pantheon+ exist partly because better statistics and better control samples help answer them.

No single known supernova systematic has become the accepted culprit. Still, the supernova link is so central that any serious resolution of the Hubble tension has to pass through it. If a hidden bias exists there, it would reach across many branches of modern cosmology.

Why This Matters for the Age and Size of the Universe

The Hubble constant is not just a single number tucked inside specialist papers. It touches the estimated age of the universe, its scale, and the timing of major milestones in cosmic history.

A lower H0 generally points to an older universe, all else equal. A higher H0 points to a younger one. Under ΛCDM, Planck parameters imply an age close to 13.8 billion years. Shift the expansion rate substantially and related inferred quantities move with it. The age change is not gigantic in everyday terms, but in cosmology even a few hundred million years can matter when matching models to the formation of the earliest galaxies, stars, and black holes.
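The inverse relation between H0 and cosmic age can be seen in the Hubble time, 1/H0, which sets the rough timescale. The actual ΛCDM age is model-dependent (close to 0.95 of the Hubble time for Planck-like parameters), so this is only the leading-order scaling.

```python
# The Hubble time 1/H0 sets the rough timescale of cosmic age.
# The true LambdaCDM age depends on the matter and dark-energy content;
# this sketch shows only the leading-order inverse scaling with H0.

def hubble_time_gyr(h0: float) -> float:
    """Hubble time 1/H0 in gigayears, for H0 in km/s/Mpc."""
    km_per_mpc = 3.0857e19   # kilometers in one megaparsec
    s_per_gyr = 3.156e16     # seconds in one gigayear
    return km_per_mpc / h0 / s_per_gyr

print(round(hubble_time_gyr(67.4), 2))  # lower H0 -> longer timescale
print(round(hubble_time_gyr(73.0), 2))  # higher H0 -> shorter timescale
```

The two contested values shift this timescale by roughly a billion years, which is the scale of the age sensitivity the paragraph above describes.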

The value of H0 also feeds into distance determinations across astrophysics. Galaxy masses, luminosities, cluster scales, and inferred densities all depend, directly or indirectly, on the cosmic distance scale. The disagreement is not a bookkeeping problem. It affects the scaffolding that supports much of extragalactic astronomy.

Why It Matters for Dark Energy

The standard cosmological picture uses a cosmological constant, represented by Λ, to describe the accelerated expansion of the universe. In that picture, dark energy has a constant density over time.

DESI has added weight to the possibility that dark energy may evolve instead of remaining fixed. That does not mean dynamical dark energy has been proved. It does mean the long-standing default assumption of a perfectly constant dark-energy density faces more pressure than it did a few years ago.

If the Hubble tension and dark-energy anomalies turn out to be linked, the consequences would reach far beyond one disputed number. The basic shape of cosmic history could need revision.


Why It Matters for the Standard Model of Cosmology

ΛCDM is often called the standard model of cosmology. It is extraordinarily successful. It explains the cosmic microwave background, the large-scale structure of galaxies, light-element abundances from Big Bang nucleosynthesis, and the observed late-time acceleration with remarkable economy.

That success is exactly why the Hubble tension matters so much. Minor anomalies are common in science. A stable, many-sigma mismatch attached to a parameter this central is different. It is the sort of problem that can reveal either a hidden weakness in the data chain or a missing element in the theory.

There is a temptation to say that because ΛCDM works so well elsewhere, the local ladder must be wrong. That view now looks too simple. There is an equal temptation to say the tension proves new physics. That also goes too far. Neither side has achieved a knockout.

Still, a judgment is possible. The center of gravity has shifted away from “surely one telescope team slipped” and toward “the simplest cosmological story may be incomplete.” That is the more persuasive reading of the current evidence.

The Public Often Hears “Scientists Disagree” the Wrong Way

Scientific disagreement can sound like confusion or weakness from the outside. In this case, it means the opposite. The argument persists because the measurements are good enough to clash sharply.

If the data were sloppy, the error bars would overlap and no tension would exist. If the theory were vague, almost any number could be absorbed without consequence. Instead, modern cosmology is precise enough that a several-kilometer-per-second-per-megaparsec mismatch has become one of the field’s defining problems.

That is why so much effort now goes into cross-checks using methods that share as little common machinery as possible. Cepheids, red giants, strong lenses, baryon acoustic oscillations, cosmic microwave background anisotropies, and gravitational-wave standard sirens do not fail in the same way. Agreement among independent methods strengthens a result. Disagreement exposes where to look next.

What Could Actually End the Dispute

A resolution will probably not arrive as one dramatic paper that silences the field in a week. It is more likely to come from accumulation and convergence.

More JWST observations can tighten local calibrators and compare Cepheids, red giant stars, and asymptotic giant branch stars within the same galaxies. Improved Gaia astrometry can refine zero points. Larger supernova samples with better host characterization can reduce residual calibration drift.

On the high-redshift side, DESI will keep improving BAO constraints, and future cosmic microwave background efforts will test the early-universe picture with new precision. The coming generation of gravitational-wave detections from LIGO, Virgo, and KAGRA may become especially valuable because standard sirens bypass much of the traditional distance ladder.

The best ending would be convergence from methods that do not lean on the same assumptions. If those methods cluster around one value, the rest of the framework will have to adjust around them.

Not Every Resolution Will Be Exciting

There is an awkward possibility that deserves saying plainly. The final answer may be less dramatic than the headlines suggest.

It is entirely possible that the tension will shrink through a patchwork of modest corrections spread across several methods rather than through one grand discovery. A small Cepheid-related calibration shift, a small supernova-host correction, a small change in how BAO and cosmic microwave background datasets are combined, and a small dark-energy-model adjustment could collectively narrow the gap without a single revolutionary breakthrough.

That outcome would still be scientifically valuable. It would also disappoint those who want every anomaly to announce a new chapter of physics.

Cosmology has seen both outcomes before. Some anomalies became major discoveries. Others became lessons in calibration.

A New Point Belongs at the End

The Hubble tension is often presented as a question about expansion speed. It is also a question about scientific authority. Which should carry more weight: a direct but vulnerable chain of local measurements, or an elegant inference from the early universe that depends on a model of cosmic contents and evolution?

That framing hides a deeper reality. Neither side gets to claim purity. The local route is not free of astrophysical mess. The early-universe route is not free of theoretical commitment. The reason scientists still cannot agree is that the dispute sits exactly where measurement and model meet, and where both are now precise enough to expose each other’s weakest assumptions.

That is why the problem has lasted.

And that is why it matters far beyond one number.

Appendix: Top 10 Questions Answered in This Article

What is the Hubble constant?

The Hubble constant is the present-day rate at which the universe expands. It is usually given in kilometers per second per megaparsec, meaning how much faster a distant galaxy appears to recede for each additional 3.26 million light-years of distance.

Why do scientists disagree about the universe’s expansion rate?

They use different methods that rely on different physics and assumptions. Early-universe measurements infer H0 from the cosmic microwave background within a cosmological model, while local measurements build it from observed distances to nearby stars and supernovae.

What is the Hubble tension?

The Hubble tension is the persistent mismatch between lower early-universe values of H0 and higher local-universe values. It has remained after years of improved observations and reanalysis.

What value does Planck support?

Planck’s final legacy analysis under the standard ΛCDM model supports an H0 value near 67.4 km/s/Mpc. That number is precise and has shaped the modern form of the tension.

What value do local distance-ladder studies support?

Local Cepheid and supernova studies, especially from the SH0ES team, support values in the low 70s. Recent JWST checks have strengthened rather than erased that higher result.

Did JWST solve the Hubble tension?

No. JWST reduced concern that crowding errors in Hubble Space Telescope Cepheid images were driving the discrepancy. That strengthened the local measurement case but did not bring the high and low values into agreement.

Could hidden measurement errors still explain the problem?

Yes, but any single obvious error is becoming harder to identify. Many suspected issues have already been tested, which is why the disagreement now looks more persistent than a simple calibration mistake.

Could new physics explain the tension?

Possibly. Ideas such as early dark energy, extra radiation-like particles, modified gravity, or evolving dark energy have been studied, but no proposal has yet gained broad acceptance across all major datasets.

Why is the disagreement significant for cosmology?

It tests the reliability of the standard cosmological model and affects the inferred age, scale, and history of the universe. A durable mismatch in such a central parameter can point to missing physics or hidden systematics.

What observations could settle the issue?

More JWST distance-scale work, stronger DESI constraints, improved supernova samples, future cosmic microwave background analyses, and many more gravitational-wave standard sirens could push the field toward agreement. The strongest resolution will come from independent methods converging on the same answer.
