
FAA Problem of Overestimated Launch Demand Forecasts 1995 – 2017

Formal Study Initiated

May 8, 2017, marks the date of Past and Future: An Analysis of the FAA Commercial Space Transportation Forecasts, a George Washington University International Science and Technology Policy capstone study by Nathan Boll, Michael Sloan, and Erika Solem. The study examined Federal Aviation Administration commercial space transportation forecasting from 1995 through 2017 and asked a direct policy question: how well did the forecasts anticipate actual commercially addressable launch activity?

The answer was clear. The forecasts consistently predicted more addressable commercial launches than later occurred, and that overestimation appeared across orbital and payload categories. The finding matters because commercial space forecasts influence policy, investment, infrastructure planning, workforce assumptions, regulatory staffing, and congressional expectations about the size and direction of the launch market.

The study treated the FAA forecasts as public policy instruments, not just technical market documents. A launch forecast influences expectations about licensing workload, range capacity, spaceport investment, satellite deployment, insurance, supply chains, and the commercial launch sector’s share of the space economy. If forecasts overstate demand, public and private actors may allocate resources toward a launch tempo that does not materialize. If forecasts understate demand, regulators and operators may fail to prepare for higher activity. The study found the first error to be the more consistent risk.

The Meaning of Commercially Addressable Launches

The authors focused on commercially addressable launches rather than all launches. That distinction shaped the whole analysis. The FAA’s forecast category referred to commercial satellite launches open to an internationally competitive launch service procurement process, which is narrower than the set of all commercial launches and much narrower than total global launch activity.

The study noted that a launch may count as commercial under one FAA definition because it is licensed by FAA, internationally competed, or privately financed. The forecasted addressable-launch category centers on the internationally competed launch market. This means that the study was not measuring every rocket launch, every commercial space operation, or every FAA-licensed activity. It was testing a narrower market forecast tied to launch demand that could be competed internationally.

The timing added weight to the findings. By 2017, the commercial space sector was drawing more congressional attention, new entrants were changing launch and satellite markets, and the FAA’s forecast products had been produced for more than 20 years. The study stated that no prior retrospective accuracy review had been conducted on these forecasts. That gap left policymakers and industry users with a forecasting product that had public authority, repeated annual use, and limited published evaluation of its accuracy.

Commercial Launch Forecasts as Policy Inputs

Commercial launch forecasts help shape policy decisions. Congress uses forecasts and market evidence when assessing authorization, appropriation, and oversight choices. Agencies use them to judge regulatory demand and staffing needs. Companies use them to estimate business activity and supporting infrastructure. A launch forecast may seem narrow, but the study framed it as a planning device for a larger chain of space activity, including satellite manufacturing, communications, imagery, ground equipment, and launch-support services.

The authors placed the FAA Office of Commercial Space Transportation within the broader U.S. commercial space governance structure. The study described a division of responsibility among the FAA, the Federal Communications Commission, and the National Oceanic and Atmospheric Administration. FAA AST licenses commercial launch and reentry activity and launch-site operations. The FCC licenses non-federal satellite radio communications. NOAA licenses commercial remote sensing satellites. These functions require coordination with defense, intelligence, civil space, and foreign-policy agencies because commercial space activity can affect public safety, spectrum management, national security, and international obligations.

That framework helps explain why an optimistic forecast can have consequences beyond a single market chart. A forecast suggesting higher launch activity may support arguments for greater appropriations, more regulatory staff, launch-site investment, range modernization, or policy changes that favor commercial expansion. A forecast showing weaker demand may strengthen arguments for restraint, staged investment, or greater attention to non-launch bottlenecks. The authors did not argue that forecasting should be pessimistic. They argued that forecast users should understand the direction and size of historical error.

The study also traced the public-policy origins of the U.S. commercial launch sector. It identified National Security Decision Directive 42 in 1982, National Security Decision Directive 94 in 1983, Executive Order 12465 in 1984, the Commercial Space Launch Act of 1984, and later statutory changes as milestones that moved the federal government toward encouraging privately provided launch services. The historical section positioned FAA AST’s forecast activity inside a decades-long policy preference for commercial space development.

That policy mission created a tension the study returned to later. FAA AST has both safety and industry-promotion responsibilities. It must protect public safety, property, national security, and foreign-policy interests during licensed launch and reentry activity, but it also has a statutory role in encouraging and facilitating U.S. commercial space transportation. That dual mission does not prove that the agency intentionally overstates forecasts. It does make optimism a plausible institutional tendency that needs to be checked through transparent methods and regular retrospective testing.

As of May 2026, this tension remained relevant because FAA commercial space activity had expanded beyond the market scale described in the 2017 study. FAA stated that August 14, 2025 marked its 1,000th licensed or permitted commercial space vehicle operation, a milestone that reflects how much the regulatory workload has grown since the early commercial launch era. The agency also announced in March 2026 that operators had transitioned legacy licenses into the Part 450 framework, which allows one license to cover portfolios of operations, vehicle configurations, mission profiles, and multiple sites.

Source: FAA May 13, 2026

How the Study Reconstructed the Forecast Record

The study’s central contribution was methodological reconstruction. The authors compiled data from available FAA forecasts published between 1995 and 2017, then compared forecasted launch figures with recorded addressable commercial launches. They organized the analysis around three techniques: aggregate mean analysis, annual launch rate analysis, and out-year prediction analysis. The study also reviewed the FAA forecast’s own method, which relied on voluntary industry submissions combined with quantitative and qualitative judgments about market conditions.

The aggregate mean approach looked at the average forecasted value for a given calendar year. A calendar year can appear in multiple forecasts because each annual forecast projects several future years. One forecast might estimate launch activity nine years out, another might estimate the same calendar year eight years out, and later forecasts move that same year closer to the forecast date. Averaging those predictions produced a way to judge the general forecast expectation for a target year.
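The aggregate-mean idea can be sketched in a few lines. The figures below are hypothetical, not values from the study; they only illustrate how one target calendar year appears in several forecast vintages at different horizons.

```python
# Sketch of the aggregate-mean method with hypothetical numbers: each annual
# forecast projects several future calendar years, so one target year (here
# 2010) appears in several forecast vintages at different distances.
forecasts_for_2010 = {
    # vintage year -> predicted addressable launches for calendar year 2010
    2002: 35,  # eight-year-out prediction (hypothetical)
    2005: 31,  # five-year-out prediction (hypothetical)
    2009: 28,  # one-year-out prediction (hypothetical)
    2010: 26,  # zero-year prediction (hypothetical)
}

# The aggregate mean for 2010 averages every vintage's prediction of that year.
aggregate_mean = sum(forecasts_for_2010.values()) / len(forecasts_for_2010)
print(aggregate_mean)  # 30.0
```

Comparing this averaged expectation with the realized addressable launch count for 2010 would then give one data point in the aggregate-mean error series.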

The annual launch rate analysis compared implied growth patterns. The study did not ask only whether one year’s prediction missed the later outcome. It also asked whether the forecasts predicted growth rates that matched observed launch activity. This mattered because market narratives often depend on growth direction. A forecast can miss the exact value yet still capture a pattern of rising, falling, or flat demand. The study found that forecast launch rates produced the largest average error and highest variance among the comparison methods.

The out-year analysis examined how prediction error changed depending on how far the target year was from the forecast year. The first year in a forecast is called the zero-year prediction because it refers to the year in which the forecast was published. Later columns correspond to one-year, two-year, and longer-range forecast points. This method was useful because short-term forecasts should generally perform better than long-term projections if near-term manifests contain firmer information.

The study found that zero-year predictions still ran above actual addressable launches for every year examined between 1998 and 2015. The average overprediction for that zero-year period was about 40% relative to recorded addressable launch activity, with variance of 12.4. This finding was especially important because zero-year predictions should have benefited from near-term schedule knowledge, industry submissions, and published launch plans.

The study also explained why the underlying data were difficult to test in full. FAA forecast products listed factors that could affect launch demand, including public satellite and launch contracts, replenishment missions, service demand, financing, insurance, operator consolidation, launch vehicle capabilities, hosted payloads, satellite technical issues, launch vehicle technical issues, weather, range availability, dual manifesting, business issues, regulatory issues, and geopolitical issues. The study found that the forecast documents did not explain how those factors were measured or weighted.

Overestimation Across the Main Analytical Tests

The study’s main result was consistent across its methods: FAA commercial launch forecasts predicted more addressable launches than the actual record later showed. This pattern held across orbital and payload segments. The finding did not rest on a single bad year, a single forecast vintage, or one analytical approach. It appeared in aggregate means, annual launch rates, and out-year comparisons.

The aggregate mean results were the strongest of the three methods in relative terms because they produced the lowest average error. That does not mean the aggregate mean method was accurate in an absolute sense. It means the average of multiple forecasts for the same calendar year performed better than the launch-rate method and better in mean-error terms than zero-year forecasts. Even the best-performing method still showed systematic overprediction.

Annual launch rate analysis performed worst. The study found that mean launch rates had the highest average error and the highest variance, with error values spanning from 2% in 1998 to 100% in 2001. Such variance makes the launch-rate method weak for policy users who need stable guidance. A forecast that sometimes comes close and sometimes doubles the realized value creates a planning problem because users cannot rely on a predictable error band.

Zero-year predictions carried a different lesson. They had the second-highest average error but low variance compared with the other methods. That means the zero-year forecasts were consistently high in a more regular way. For forecast users, that pattern has value because a regular bias can sometimes be adjusted. The study later used that insight to propose a revised launch realization factor.

The study’s visual evidence reinforced the written findings. The chart on page 32 compared zero-year forecasts with actual addressable commercial launches from 1998 to 2015. It showed forecast values above actual results across the period. The chart on page 38 then compared aggregate means, actual recorded launches, and an adjusted mean based on the average percent difference. The adjusted series sat closer to realized launch activity, showing how a historical correction could improve forecast interpretation.

The study also warned against treating a commercial launch forecast as a neutral transcript of market reality. Forecasts can capture company plans, hoped-for manifests, expected financing, satellite replenishment cycles, and public announcements. Actual launch activity then depends on technical readiness, customer delays, launch failures, payload problems, financing setbacks, range availability, regulatory steps, and customer procurement choices. A forecast based heavily on planned activity will tend to run high if schedules slip more often than they accelerate.

That finding has lasting relevance. Launch markets in the 2020s became more active, but higher activity does not eliminate the risk of forecast bias. More launches can produce more data, yet the planning problem remains if forecasts mix firm orders, tentative plans, speculative satellite deployments, and aspirational company schedules without enough clarity about confidence levels. The study’s central lesson is not that commercial space forecasts are useless; it is that forecast users need to know what type of expectation the forecast represents.

Why the Forecasts Ran High

The study did not claim that it could identify every source of error. It instead identified several plausible causes and explained why the lack of methodological transparency limited deeper testing. FAA forecasts disclosed broad categories of inputs, but they did not provide enough detail to show how each factor affected the final launch count. Terms such as geopolitical issues, business issues, and regulatory issues could cover many conditions. Without weighting rules or confidence assumptions, external reviewers cannot reconstruct the forecast model.

Industry self-reporting formed one possible source of optimism. The forecasts relied substantially on voluntary information from commercial space companies. Companies may have incentives to report ambitious schedules, fuller manifests, or stronger demand because visible activity can help signal credibility to customers, investors, and partners. Such incentives do not require bad faith. A company may believe its own schedule, but launch projects routinely face technical, financial, range, and customer-driven delays.

The study also identified FAA AST’s mission as a possible source of optimistic framing. The office’s statutory responsibility includes encouraging, facilitating, and promoting commercial space transportation. That mission can affect institutional culture, contractor expectations, and the way market growth is presented. Again, this does not prove intentional exaggeration. It suggests that a forecast produced inside a promotional and regulatory office needs safeguards against repeated high-side error.

Another source of error involved the distinction between near-term manifest plans and realized launches. The study noted that the first three years of forecast tables drew heavily from industry-submitted data with limited adjustment, and later years relied more on fleet replenishment and manufacturer estimates. Plans in the first three years may look concrete because they often involve named payloads or announced launch contracts. Actual launch outcomes still depend on technical readiness, integration schedules, launch-site availability, vehicle performance, customer priorities, and financing.

The discontinuation of the prior launch realization factor also drew attention. The study reported that the earlier FAA forecast process had included a launch realization factor from 2002 through 2015. That factor compared forecast satellite launches with actual satellites launched during the five years before the current forecast. After the forecast moved into the Annual Compendium format, the 2017 edition no longer presented adjusted annual predictions using that factor and instead gave a single prediction number for each year.

A single prediction number can be misleading when the underlying activity has a strong record of schedule movement. Ranges communicate uncertainty better than point estimates. They also allow policy users to distinguish a likely central case from a high-demand case. The study’s proposed correction tried to restore that discipline by using retrospective error patterns to create lower and upper bounds.

The Failed Search for a Simple Statistical Model

The authors tested whether publicly available variables could support a better predictive model. The correlation analysis examined factors such as satellite services revenue, satellite launch revenue, total satellite industry revenue, total global launches, U.S. space spending, forecast means, prior-year actual launch activity, and zero-year forecast values. The goal was to find variables strongly related to the actual number of commercially addressable launches.

The result was mostly negative. The study found that actual addressable commercial launches were not correlated with most explanatory variables in the dataset, aside from the zero-year predictions. The authors then ran a simplified regression model using zero-year predictions and the prior year’s actual launches as independent variables. That model did not yield statistically significant relationships, and its adjusted R-squared value was 0.614.
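The regression the authors describe, actual launches on an intercept, the zero-year prediction, and the prior year's actual count, can be reproduced in form with ordinary least squares. The data below are synthetic stand-ins, so the fit statistics will not match the study's 0.614; the sketch only shows the mechanics of computing an adjusted R-squared for such a model.

```python
import numpy as np

# Synthetic stand-in data (the study used recorded addressable launch counts).
zero_year    = np.array([35., 38., 32., 30., 28., 33., 36.])
prior_actual = np.array([22., 24., 28., 22., 23., 20., 25.])
actual       = np.array([24., 28., 22., 23., 20., 26., 24.])

# OLS: actual ~ intercept + zero_year + prior_actual.
X = np.column_stack([np.ones_like(actual), zero_year, prior_actual])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
fitted = X @ beta

# R-squared and adjusted R-squared (n observations, k regressors).
ss_res = np.sum((actual - fitted) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
n, k = len(actual), 2
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(r2, 3), round(adj_r2, 3))
```

The adjustment penalizes the fit for each added regressor, which is why a model with few observations and weak predictors can show a respectable raw R-squared yet still fail significance tests, as the study reported.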

The null result was useful because it cautioned against overconfidence in easy models. Commercial launch demand is affected by many conditions that do not appear cleanly in public annual datasets. A satellite revenue figure may rise because the broader satellite services business grows, but that does not automatically translate into more addressable launches in the same year. Global launch totals may increase because of government missions, rideshare missions, domestic programs, or national-security activity outside the addressable commercial category.

The failed regression also supported the study’s view that mixed-method forecasting is needed. Qualitative inputs can capture factors that annual numerical datasets miss, such as a launch vehicle’s technical setback, a satellite operator’s financing issue, or a procurement decision by a major customer. Yet mixed-method forecasting needs transparency if users are expected to trust the result. A model can use judgment, but the forecast should disclose how judgment changes the numbers.

This is one of the study’s most transferable lessons. Public forecasts for emerging markets often fail when they treat announced demand as realized demand. Commercial space activity is especially sensitive to lumpy procurement cycles, mission-specific delays, launch vehicle availability, customer concentration, regulatory approvals, and insurance conditions. A simple regression model using sector revenue or spending can miss these details. The answer is not to abandon forecasting. The answer is to make the forecast more explicit about uncertainty, data categories, and past error.

The New Launch Realization Factor

After the correlation analysis produced weak results, the study proposed a new launch realization factor. It combined the two most useful retrospective measures: aggregate means and zero-year predictions. The aggregate mean adjustment became the proposed lower bound. The zero-year adjustment became the proposed upper bound. That structure treated historical forecast bias as a correction rather than as a reason to discard the forecast entirely.

The lower-bound method used aggregate means because they had the lowest average difference from actual recorded launches. Focusing on forecast data after 2002, the study found a mean difference of 10 launches and an average percent difference of 54% above actual addressable launches. Since all aggregate mean values predicted more launches than were recorded, reducing the forecast mean by the observed average percent difference produced a lower estimate closer to the actual record.

The upper-bound method used zero-year predictions because they had low variance. For data after 2002, the zero-year predictions had a mean difference of nine launches and an average percent difference of 47% above actual addressable launches. The study used that high-side but more consistent measure as the basis for an upper-bound adjustment.

Applied to the 2017 forecast, the method produced an expected range of 20 to 27 addressable launches. The unadjusted 2017 Compendium zero-year prediction was 40 addressable launches, and the aggregate mean prediction for 2017 was 30. The adjusted lower bound of 20 equaled the reported 2016 total. The adjusted upper bound of 27 represented a 35% increase over the 2016 value.
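The arithmetic behind that range can be sketched from the figures reported above. The correction form used here, dividing a forecast by one plus its average percent overshoot, and the rounding convention are assumptions about the study's exact procedure, but they land close to the published 20-to-27 range.

```python
# Inputs as reported in the study for the 2017 forecast year.
aggregate_mean_2017 = 30         # aggregate mean prediction for 2017
zero_year_2017 = 40              # unadjusted zero-year Compendium prediction
overshoot_aggregate = 0.54       # aggregate means ran ~54% above actuals
overshoot_zero_year = 0.47       # zero-year forecasts ran ~47% above actuals

# If a forecast runs p above actuals on average, dividing by (1 + p) backs
# out an estimate of the realized level. Exact rounding is an assumption.
lower_bound = aggregate_mean_2017 / (1 + overshoot_aggregate)
upper_bound = zero_year_2017 / (1 + overshoot_zero_year)
print(round(lower_bound, 1), round(upper_bound, 1))  # 19.5 27.2
```

Rounding these adjusted values yields roughly the study's proposed band of 20 to 27 addressable launches, well below the unadjusted zero-year prediction of 40.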

This proposed range showed how retrospective analysis can convert an optimistic forecast into a more usable planning tool. A range of 20 to 27 launches tells policymakers something different from a point prediction of 40. It suggests growth may occur, but it limits the implied near-term expansion to a level more consistent with historical forecast performance. Agencies can plan staffing and licensing capacity with a more defensible band. Congress can interpret forecast testimony with a known adjustment. Industry can compare planned demand with realized demand more carefully.

The launch realization factor also avoided a false choice between accepting the FAA forecast and rejecting it. The study treated the FAA forecast as useful but biased. Its proposed correction used the agency’s own forecast values as input and then adjusted them using historical error. That approach is practical because it does not require a new complete forecasting system. It can be tested year by year and refined as new realized launch data become available.

What the Study Recommended for Policymakers and Forecast Producers

The study separated its recommendations into short-term and long-term actions. In the short term, it advised Congress and other forecast users to recognize the forecasts’ limited accuracy and repeated optimism when the forecasts appear in testimony, policy analysis, or public discussion. The study also recommended testing the new launch realization factor in the next forecast cycle to judge whether it improved predictive value.

That recommendation was modest but useful. It did not ask Congress to disregard FAA forecast products. It asked users to treat forecast figures as claims requiring interpretation. A forecast that has consistently run high can still provide insight into industry plans, expected demand, and market sentiment. It should not be treated as a simple expected value without adjustment.

The longer-term recommendations addressed forecast production. The study proposed engagement with the contractor producing the reports to improve the forecast, modifications to contract language to create consistent reporting requirements, and, if necessary, congressional direction through FAA authorization language specifying the report’s contents, frequency, and methodology.

Contract language was a practical point. If forecast products depend on contractors, then the scope of work can determine whether accuracy testing, uncertainty ranges, and methodological disclosure become routine. A contract can require the forecast producer to publish historical error, show confidence ranges, separate firm launch contracts from less certain planned activity, and distinguish payload demand from launch demand. Contract terms can also require consistent categories from year to year so external analysts can compare forecast vintages.

The recommendation for possible congressional direction reflected the public-policy value of the forecast. If Congress relies on commercial launch forecasts to judge agency needs or industry direction, then Congress has an interest in the forecast’s structure. Legislative language could require methodological transparency without dictating the forecast result. It could also require an annual retrospective section that compares prior-year predictions with actual results.

The FAA’s later Part 450 transition shows why this kind of forecast governance matters. Part 450 consolidated legacy launch and reentry licensing rules into a performance-based licensing framework, and FAA described the framework as allowing more flexible authorization for different vehicles, mission profiles, and sites. That regulatory model depends on understanding expected operational volume. If launch activity expands faster than forecast, FAA risks falling behind. If forecast demand runs too high, staffing and policy debate may be shaped by a market scale larger than reality.

Why the Study Still Matters for Commercial Space Forecasting

The study’s findings remain useful because commercial space forecasting still faces the same structural problem: announced plans are easier to collect than completed operations. The commercial space sector often generates public statements about future satellites, future launch cadence, future vehicle capacity, future spaceport activity, and future customer demand. Forecasting systems that rely heavily on those statements can become schedule aggregators rather than probability-weighted estimates.

The study also speaks to the difference between market enthusiasm and measurable demand. A satellite operator may plan a constellation, a launch provider may plan higher cadence, a spaceport may plan infrastructure expansion, and a regulator may plan faster approvals. Those plans do not automatically become launch events in a specific calendar year. Forecasts need to show how much confidence attaches to each category of planned activity.

A better forecast culture would treat every annual forecast as part of a cycle. The forecast would publish assumptions, confidence levels, and category definitions. The next cycle would compare the prior forecast with actual outcomes and explain the difference. The forecast after that would adjust methods based on the known error record. The study’s retrospective method offered the first step in that cycle by showing that the error was not random noise. It had direction.

The study also carries lessons for defense and security users. Commercial launches can support national-security payloads, dual-use satellite capabilities, remote sensing, communications, positioning services, and launch-on-demand concepts. Government users that rely on commercial space forecasts for procurement or contingency planning need to know whether projected commercial launch capacity is firm, likely, or aspirational. Overstated commercial capacity can create a false sense of strategic flexibility.

For investors and companies, the study warns against confusing addressable launch demand with the full space economy. Launch revenue is a small share of total space-sector revenue, yet launch availability has an enabling function for satellites and space services. A forecast that overstates addressable launch count may lead companies to misjudge launch-provider revenue, spaceport throughput, integration demand, insurance needs, or workforce requirements. It may also amplify expectations for markets tied to launch cadence, such as rideshare aggregation, payload processing, and range-support services.

For public agencies, the study’s lesson is sharper. A forecast with repeated optimism can still be useful if agencies disclose and adjust for the bias. Without such adjustment, forecast products risk being used as neutral evidence in policy debates where they actually carry a historical high-side tendency. Transparency protects the forecast producer, the agency, and the users who rely on the numbers.

Summary

The 2017 study showed that FAA Commercial Space Transportation Forecasts from 1995 through 2017 consistently overestimated annual commercially addressable launches. The authors reached that finding through aggregate mean analysis, annual launch rate analysis, and out-year prediction analysis. The pattern appeared across orbital and payload categories, with launch-rate analysis showing the highest average error and variance, aggregate means showing the lowest average error, and zero-year predictions showing relatively low variance despite repeated overestimation.

The study did more than identify a forecasting miss. It explained why the miss mattered. Commercial launch forecasts shape expectations for policy, regulation, infrastructure, business planning, and congressional oversight. The study found that industry self-reporting, limited methodological transparency, and FAA AST’s dual role as regulator and promoter of commercial space could all contribute to forecast optimism. It also found that a simple public-data regression model could not replace mixed-method judgment because commercial launch outcomes depend on many factors that public annual datasets do not capture cleanly.

The article’s lasting lesson is that commercial space forecasts should be treated as managed estimates with known uncertainty, not as stand-alone predictions. The study’s proposed launch realization factor offered a practical correction by using historical forecast error to produce a lower and upper bound for the next year’s expected launch activity. Its broader recommendation was institutional: forecast producers should publish clearer methods, users should account for known optimism, and Congress should consider requiring more consistent forecast contents and accuracy testing when public forecasts become part of policy decision-making.
