
- Key Takeaways
- GAO’s March 2026 catalog shows where public-sector AI already earns its keep
- Search, sorting, and document conversion are reshaping audit preparation
- Generative systems at GAO remain bounded by mission, data, and review
- Maturity labels reveal a deliberate path from concept to operations
- Federal policy now pushes agencies to pair adoption with visible controls
- What the GAO catalog says about the next phase of government AI
- Summary
- Appendix: Top 10 Questions Answered in This Article
Key Takeaways
- GAO shows AI serving audits, search, and internal support, not spectacle.
- Most GAO tools stay narrow, reviewed, and tied to specific workflows.
- Federal AI use now sits between GAO’s framework and OMB guidance.
GAO’s March 2026 catalog shows where public-sector AI already earns its keep
As of March 2026, the U.S. Government Accountability Office listed 10 artificial intelligence use cases on its public Artificial Intelligence Use Cases page. That list matters because GAO is not a consumer app company looking for attention. It is Congress’s audit, evaluation, and investigative arm, and its internal choices tend to reflect a stricter standard for utility, documentation, and defensibility than the marketing language seen elsewhere.
The catalog is also unusually concrete. Instead of broad promises about reinvention, the page names tools such as the Federal Audit Clearinghouse Exploration Tool, Topic Modeling, Computer-Readable Formatting, and a Generative AI Tool for GAO Staff. For each one, GAO identifies a business function, expected benefits, maturity phase, and the techniques involved, including machine learning, natural language processing, large language models, computer vision, and predictive modeling.
That structure says a lot by itself. GAO is treating AI less as a standalone product category and more as a collection of task-specific methods embedded inside existing work. Search is one task. Classification is another. Summarization, document extraction, concept grouping, routing, and knowledge support each get their own lane. Public debate often turns AI into a single sweeping phenomenon. GAO’s list cuts the subject into practical units that can be tested, monitored, and either expanded or discarded.
Another part of the page deserves attention: maturity. GAO marks some tools as operational, some as late-stage or early-stage prototypes, and some as concept exploration. That choice avoids a familiar government problem, where pilot projects linger for years without clear status. In its January 2024 testimony about its own AI work, the office described eight use cases under exploration. By March 2026, the public list had grown to 10 and included a more developed maturity picture. That progression suggests a program that has not stood still.
Search, sorting, and document conversion are reshaping audit preparation
The most grounded uses on the GAO page are not the flashy ones. They are the tools that reduce labor in document-heavy processes where staff used to spend long stretches locating, sorting, reformatting, and grouping information. The Federal Audit Clearinghouse Exploration Tool is a good example. GAO says it searches and sorts Single Audit data and findings from the Federal Audit Clearinghouse, helping auditors analyze data, summarize findings, assess reliability, and predict risk areas. That is not a speculative future use. GAO classifies it as operational.
The Topic Modeling use case belongs in the same family. GAO says it organizes large volumes of text, such as public comments from Regulations.gov, by grouping content through keywords or phrases. Anyone who has worked with federal comment dockets understands why this matters. Tens of thousands of comments can overwhelm manual review. A tool that groups themes does not decide policy, but it can sharply reduce the time needed to identify patterns worth human inspection.
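GAO does not publish implementation details for Topic Modeling, so any code can only gesture at the pattern. The sketch below is a minimal illustration of keyword-based grouping, assuming scikit-learn and a handful of invented comments; it factors TF-IDF vectors into latent topics and prints the defining terms of each.

```python
# Minimal illustration only; GAO's actual implementation is not public.
# Groups short free-text comments into rough topics via TF-IDF + NMF.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented example comments standing in for a Regulations.gov docket.
comments = [
    "The proposed rule would raise compliance costs for small lenders.",
    "Small community banks cannot absorb these new reporting burdens.",
    "Please extend the comment period to allow more public input.",
    "The deadline is too short for meaningful stakeholder feedback.",
]

# Represent each comment as a TF-IDF vector, dropping English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(comments)

# Factor the matrix into two latent topics.
model = NMF(n_components=2, random_state=0)
model.fit(tfidf)

# Print the keywords that define each topic for human inspection.
terms = vectorizer.get_feature_names_out()
for topic_idx, component in enumerate(model.components_):
    top = [terms[i] for i in component.argsort()[-4:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top)}")
```

A reviewer still decides what each cluster means; the tool only narrows where to look.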
Document conversion shows up in the Computer-Readable Formatting tool, which extracts information from large batches of documents and turns non-computer-readable files into tabular data. GAO lists computer vision and machine learning as the main techniques. That places the work close to the long-running federal push toward machine-readable data, but with a newer layer that can pull structure from messy files rather than waiting for perfect inputs. It is easy to miss how much value sits in that step alone. If a data table is locked inside a static document, analytics cannot start until someone pulls it out.
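GAO names computer vision and machine learning as the techniques and shares no code, so the fragment below illustrates only the last mile: turning already-extracted text into a table. The field names, sample pages, and regex are invented, and a plain pattern match stands in for the learned extraction model.

```python
# Illustrative sketch only: the post-OCR step of pulling labeled
# fields out of unstructured report text into tabular rows.
import csv
import re

# Invented example of text recovered from scanned documents.
pages = [
    "Grantee: Springfield Transit Authority\nAward: $1,250,000\nFinding: None",
    "Grantee: Shelbyville Housing Board\nAward: $480,000\nFinding: Material weakness",
]

FIELD = re.compile(r"^(Grantee|Award|Finding):\s*(.+)$", re.MULTILINE)

rows = []
for page in pages:
    record = {label: value.strip() for label, value in FIELD.findall(page)}
    rows.append(record)

# Write the extracted records as a machine-readable CSV table.
with open("findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Grantee", "Award", "Finding"])
    writer.writeheader()
    writer.writerows(rows)
```

Once the records exist as rows, ordinary analytics can begin, which is the whole point of the conversion.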
The Legislative Mandates use case shows how text search becomes operationally significant inside an oversight institution. GAO says the tool detects and summarizes legislative mandates, identifies bills requiring GAO work, creates automated summaries of the surrounding context, and highlights potentially fragmented or duplicative mandates. That function reaches directly into GAO’s workload formation. Congress passes statutes, committee staff request work, agencies respond, and GAO has to know what it has been asked to do. A tool that can scan legislation and surface mandate language changes the speed of intake and reduces the odds that assignments stay buried in long documents.
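The actual Legislative Mandates tool uses natural language processing and large language models; the sketch below strips that down to simple phrase matching over invented bill text, just to make the intake problem concrete.

```python
# Illustrative sketch only: flags likely GAO mandate language in bill
# text with phrase matching; a real system would use trained models.
import re

# Phrases that commonly introduce GAO mandates (illustrative list).
MANDATE_PATTERNS = [
    r"the Comptroller General (?:of the United States )?shall",
    r"GAO shall (?:conduct|submit|review|report)",
]
MANDATE_RE = re.compile("|".join(MANDATE_PATTERNS), re.IGNORECASE)

def find_mandates(bill_text: str, context_chars: int = 80) -> list[str]:
    """Return short context snippets around each suspected mandate."""
    snippets = []
    for match in MANDATE_RE.finditer(bill_text):
        start = max(0, match.start() - context_chars)
        end = min(len(bill_text), match.end() + context_chars)
        snippets.append(bill_text[start:end].replace("\n", " "))
    return snippets

bill = (
    "SEC. 402. Not later than one year after enactment, "
    "the Comptroller General shall submit a report on broadband grants."
)
for snippet in find_mandates(bill):
    print(snippet)
```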
These examples share a common logic. None depends on replacing auditors, analysts, or attorneys. Each one improves the front end of professional work by reducing search friction and converting unstructured material into something tractable. That is one reason the catalog feels more believable than many corporate AI showcases. The page is full of bounded tasks with visible inputs and visible outputs.
Generative systems at GAO remain bounded by mission, data, and review
Generative AI appears on the GAO page, but it does not dominate it. That restraint stands out. The Generative AI Tool for GAO Staff is listed as a late-stage prototype meant to support operational and audit efficiencies while improving knowledge management through a central access point for data sources tied to audit findings. In its 2024 testimony, GAO said it had begun deploying a large language model supplemented with GAO-specific information and security. The office also said the prototype would help synthesize past reports, assist editorial review, and scan congressional documents for mandated work.
That design choice mirrors how many public institutions are approaching generative AI. Rather than offering an open-ended chatbot with broad discretion, they are constraining systems around internal repositories, known staff uses, and reviewable outputs. The GAO Science & Tech Spotlight on Generative AI describes generative systems as tools that create text, images, audio, video, and other content from patterns in training data. The same family of systems can accelerate drafting and summarization, but it can also produce false outputs. GAO’s own health care spotlight makes that risk plain by warning that generative models can create plausible but inaccurate results.
Two other use cases show the same bounded style. Congressional Outreach is an early-stage prototype meant to optimize how GAO delivers products and information to members of Congress based on interest areas. Performance Monitoring and Reporting is another early-stage prototype, intended to quantify nonfinancial impacts, identify unclaimed benefits from recommendations, connect recommendations to legislation, and improve planning through data-driven insights. Both cases depend heavily on language understanding, classification, and matching. Neither appears to hand final judgment to the model.
That distinction matters. A system that drafts a summary is not the same as a system that decides the meaning of a statute, scores the truth of testimony, or closes an audit issue. GAO’s public descriptions stay on the side of support rather than delegation. That is consistent with the office’s broader AI Accountability Framework, which organizes responsible AI use around governance, data, performance, and monitoring. Those four principles are also highlighted on GAO’s AI topic page.
One unsettled point is how far a prototype tool should be allowed to shape professional judgment before formal controls are expanded. The NIST AI Risk Management Framework encourages organizations to treat trustworthiness and risk as ongoing management issues rather than one-time approvals. That sounds sensible, but the operational threshold remains hard to pin down. A summarization tool can save hours and still introduce subtle framing errors that only a subject-matter expert will catch. GAO’s use case catalog reads like an institution that understands that tension and is still drawing the line carefully.
Maturity labels reveal a deliberate path from concept to operations
The maturity phases on the GAO page are more than administrative labels. They tell a story about sequencing. Five use cases on the March 2026 list are operational: the Federal Audit Clearinghouse Exploration Tool, Topic Modeling, Computer-Readable Formatting, Legislative Mandates, and the GAO Employee Experience Survey tool. One is a late-stage prototype, two are early-stage prototypes, and two remain at concept exploration.
That distribution suggests that GAO is operationalizing tools where the task is narrow and measurable. The GAO Employee Experience Survey tool, for instance, summarizes qualitative responses from the annual survey and identifies trends, patterns, and sentiments more quickly. Those are familiar uses for natural language processing and large language models, and the output can be checked against the underlying text. The same is true for topic clustering and document extraction. Staff can inspect the results, compare them to a sample, and improve the system iteratively.
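As a rough sense of how checkable that kind of output is, the sketch below scores invented survey comments with NLTK's off-the-shelf VADER sentiment model. It is an assumption-laden stand-in, not GAO's tool, but it shows why staff can audit results line by line against the source text.

```python
# Illustrative sketch only: scores the sentiment of free-text survey
# comments so reported trends can be checked against the responses.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

# Invented example responses; real survey data is internal to GAO.
responses = [
    "My team collaborates well and leadership communicates clearly.",
    "Deadlines are unrealistic and the tooling slows everyone down.",
    "Training opportunities have improved a lot this year.",
]

analyzer = SentimentIntensityAnalyzer()
for text in responses:
    score = analyzer.polarity_scores(text)["compound"]  # -1 (neg) to +1 (pos)
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:>8}  {score:+.2f}  {text}")
```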
By contrast, the tools still in concept exploration are more open-ended. The Concept Induction Tool creates meaningful concepts from unstructured text through a human-in-the-loop process. IT Assistance would triage help desk requests and answer internal policy questions with 24/7 self-service support, using natural language processing, large language models, sentiment analysis, and integrated workflow escalation. Those are useful functions, but they are more conversational, more dependent on context, and more likely to encounter edge cases. Keeping them in exploration makes sense.
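To see why triage is tractable long before it is trustworthy, consider a deliberately crude sketch: keyword rules with a human-review fallback. The queues and vocabulary are invented, and GAO's listed techniques (large language models, sentiment analysis, workflow escalation) would go far beyond this.

```python
# Illustrative sketch only: routes help desk tickets by keyword rules
# and escalates anything the rules cannot place.

# Invented queues and keywords for illustration.
ROUTES = {
    "access": ["password", "login", "locked", "mfa"],
    "hardware": ["laptop", "monitor", "docking", "printer"],
    "policy": ["telework", "records", "travel", "approval"],
}

def triage(ticket: str) -> str:
    """Return the queue whose keywords best match the ticket text."""
    words = ticket.lower().split()
    scores = {
        queue: sum(kw in words for kw in keywords)
        for queue, keywords in ROUTES.items()
    }
    best = max(scores, key=scores.get)
    # Escalate to a human when no rule matches at all.
    return best if scores[best] > 0 else "human-review"

print(triage("My laptop will not connect to the docking station"))
print(triage("Question about telework approval policy"))
print(triage("Something strange happened to my account"))
```

The escalation path is the important design choice: anything the rules cannot place goes to a person, which is the same boundary the more sophisticated versions need to keep.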
The maturity pattern also matches GAO's institutional role. GAO does not just use AI. It audits and evaluates AI use by other agencies. On its AI topic page, the office points to work on federal oversight, generative AI use and management, and agency-specific audits. An oversight body that wants to assess other agencies credibly cannot appear careless with its own internal tools. Measured rollout is not only a technical decision. It protects the office's authority as a reviewer.
This is also where the catalog becomes a useful reference for agencies beyond GAO. The list offers a practical ordering principle. Start with high-volume text or document work. Move next to search, extraction, grouping, and summarization where humans already know how to verify outputs. Save broader conversational support or inferential systems for later phases. That ordering may not be elegant, but it is grounded in the daily realities of public administration.
Federal policy now pushes agencies to pair adoption with visible controls
GAO’s internal choices sit inside a larger federal policy structure. In 2021, GAO published its AI Accountability Framework, built around governance, data, performance, and monitoring. In 2023, the National Institute of Standards and Technology released the AI Risk Management Framework, a voluntary framework intended to help organizations manage AI risk and incorporate trustworthiness into design, development, use, and evaluation. Those documents do not prescribe the same thing, but they point in the same direction: agencies need documented controls, review processes, and continuing oversight.
The policy layer shifted again in 2024 and 2025. The Office of Management and Budget issued M-24-10 in March 2024, then replaced that approach with M-25-21 and M-25-22 in April 2025. The 2025 memoranda emphasize adoption, governance, public trust, and acquisition practice, while the January 23, 2025 executive order on removing barriers to American leadership in artificial intelligence marked a broader policy turn toward faster deployment and reduced friction.
None of that means the federal government has stopped worrying about risk. GAO's March 2026 report on AI and privacy guidance says the office found privacy-related gaps in government-wide guidance and called for further OMB action. GAO's March 2026 report on the Internal Revenue Service found that the agency had 126 active AI use cases in its inventory as of June 2025, including 65 that were either too sensitive for public reporting or exempt from it as research and development efforts. Growth is happening, but so are questions about inventory quality, workforce skills, strategy, and oversight.
Seen in that context, the GAO use case page is more than a catalog. It is a small public model for how an agency can explain its AI portfolio. Each entry names the function, the benefit, the maturity phase, and the techniques used. Many agencies still speak about AI in broad categories that reveal little about how a system actually supports a mission. GAO’s format is plainer and, for that reason, more useful.
What the GAO catalog says about the next phase of government AI
A pattern emerges after reading the GAO use case page alongside GAO’s 2024 testimony on its own internal AI work, the AI Accountability Framework, and current OMB memoranda. Government AI is becoming less about proving that models can do impressive things and more about proving that agencies can fit those models into auditable workflows without losing control of records, standards, or judgment.
That helps explain why so many of GAO's use cases sit in what might be called the middle ground of knowledge work. They do not replace professional expertise. They reorganize the raw material that professionals use. The best examples on the list sort text, extract structure, surface themes, detect mandate language, summarize survey comments, and improve retrieval of internal knowledge. Even the more ambitious generative and outreach prototypes stay close to retrieval, synthesis, routing, and support.
For public institutions, that may be the real shape of AI adoption for the next few years. Most durable value is likely to come from systems that shorten search time, reduce repetitive review, improve access to prior work, and make sprawling document collections usable. A smaller share will come from systems that attempt to automate final decisions. That is not a timid vision. In a bureaucracy built on records, statutes, correspondence, comments, findings, and manuals, better handling of text can change pace, cost, and consistency in meaningful ways.
The public side of the question matters too. Citizens usually hear about AI through chatbots, deepfakes, military systems, or giant private-sector models. The GAO catalog points to something more ordinary and more durable: AI as infrastructure for internal knowledge work. It is less dramatic than the public imagination often expects. It is also much closer to how institutions actually change.
Summary
The March 2026 GAO AI use case catalog presents 10 examples of government AI that are narrow, named, and tied to identifiable work. The strongest cases center on search, text organization, document extraction, legislative scanning, and survey analysis, with generative AI used as a supporting layer rather than an all-purpose replacement for expert staff.
Read beside GAO’s AI Accountability Framework, the NIST AI Risk Management Framework, and current OMB guidance, the catalog shows a federal model built on measured rollout and visible controls. That model does not erase uncertainty, especially around generative systems and prototype tools. What it does show, plainly, is that the most persuasive government AI programs are the ones that can describe exactly what the system does, where it fits, how mature it is, and why a human reviewer still matters.
Appendix: Top 10 Questions Answered in This Article
What does GAO’s Artificial Intelligence Use Cases page contain?
The page lists 10 internal GAO uses of artificial intelligence as of March 2026. Each entry identifies the business function, expected benefits, maturity phase, and underlying techniques. That makes it a practical inventory rather than a promotional summary.
Why is GAO’s list more informative than a general AI strategy statement?
A strategy statement often stays at the level of ambition or policy language. GAO’s list names actual tools and connects them to tasks such as audit support, text grouping, document extraction, and staff assistance. That level of detail makes it easier to judge whether the work is real and where it stands.
Which GAO AI uses are already operational?
Operational uses include the Federal Audit Clearinghouse Exploration Tool, Topic Modeling, Computer-Readable Formatting, Legislative Mandates, and the GAO Employee Experience Survey tool. These are mostly bounded applications tied to search, extraction, grouping, and summarization. Their outputs are easier for staff to inspect and verify.
How does GAO use generative AI internally?
GAO lists a Generative AI Tool for GAO Staff as a late-stage prototype. Its stated purpose is to support operational and audit efficiencies and improve knowledge management through a central access point for important data sources. GAO’s public descriptions place it in a support role rather than a final decision role.
What does the maturity phase tell a reader?
The maturity phase shows whether a use case is operational, in prototype form, or still under exploration. That helps separate established workflows from experiments. It also shows that GAO is not treating every AI project as equally ready for day-to-day reliance.
Why do text-heavy tasks dominate the GAO catalog?
GAO’s mission depends heavily on statutes, audit records, survey responses, reports, and public comments. AI tools that search, summarize, and organize large text collections can reduce manual effort without removing human review. Those uses fit naturally into oversight work.
What makes the Legislative Mandates use case important?
It helps GAO identify legislation that directs the office to perform an audit or review and then summarizes the surrounding context. That can speed intake and reduce the chance that mandate language is missed in long bills or legislative packages. It also helps identify overlap or duplication.
How does GAO’s catalog relate to federal AI governance?
The catalog sits alongside federal guidance that expects agencies to document controls and manage risk. GAO’s own accountability framework and NIST’s risk management framework both emphasize governance, data quality, performance, and monitoring. The catalog shows what that looks like in a working agency portfolio.
Does the GAO page suggest that government AI is replacing expert staff?
No. The listed systems mostly support analysts, auditors, editors, and support teams by reducing search time and organizing information. The pattern suggests augmentation of expert work rather than wholesale substitution.
What broader lesson does the GAO catalog offer for other agencies?
It suggests that the most workable public-sector AI projects start with narrow, high-volume tasks that can be checked by humans. Search, classification, extraction, and summarization usually provide clearer returns than open-ended automation. Agencies that describe tools with that level of specificity are easier to trust and easier to evaluate.

