
A June 2025 survey from the Pew Research Center reveals a growing apprehension among Americans regarding the increasing integration of artificial intelligence (AI) into daily life. The findings, based on responses from 5,023 U.S. adults, indicate that while the public sees a role for AI in specific, data-intensive fields, there’s significant concern about its effects on human abilities and a strong desire for more personal control over the technology.
A Climate of Apprehension
Public sentiment toward AI has trended toward greater concern over the past several years. As of 2025, 50% of U.S. adults report feeling more concerned than excited about the increased use of AI. This figure represents a notable increase from 37% in 2021. In contrast, only 10% of Americans say they are more excited than concerned, a number that has decreased from 18% in 2021. Another 38% feel equally excited and concerned about AI’s proliferation.
This apprehension is reflected in how Americans weigh the technology’s potential risks and benefits. A majority of the public, 57%, rates the risks of AI to society as high or very high. Far fewer, just 25%, believe the benefits of AI are high. About 41% see medium benefits, while 18% rate the benefits as low or very low.
When asked to elaborate on the primary reasons for seeing high risks, the most common theme, mentioned by 27% of this group, was the erosion of human abilities and connections. People expressed worries that over-reliance on AI would lead to laziness or diminish the capacity for creative and critical thinking. Other frequently cited risks include the negative impact on the accuracy of information, with 18% concerned about the spread of misinformation, and a loss of human control over the technology, a view held by 17%. The potential for AI to be used for malicious purposes like scams and hacking was a key concern for 11%, while 9% pointed to the risk of job loss.
Conversely, among the 25% of Americans who rate AI’s benefits as high, the most cited reason is the potential for increased efficiency. About 41% of this group believe AI can automate mundane tasks, freeing up human time and talent for more meaningful pursuits. The second most common reason, noted by 23%, is the potential for AI to expand human and technological abilities, leading to rapid advancements in fields like science and medicine.
AI Awareness and Daily Interaction
Public awareness of AI has grown substantially. In 2025, 95% of U.S. adults have heard at least a little about AI. The share of those who have heard “a lot” has nearly doubled, climbing from 26% in 2022 to 47% in 2025. This increased awareness is accompanied by more frequent interaction. About 62% of Americans report interacting with AI at least several times a week, with 31% saying they interact with it almost constantly or several times a day.
Demographic Divides in Awareness
Significant differences in AI awareness exist across various demographic groups. Age is a primary factor. About 62% of adults under 30 report having heard a lot about AI, compared to just 32% of those aged 65 and older. This awareness gap between the youngest and oldest adults has widened considerably since 2022, when only 33% of young adults reported high awareness.
Education level also plays a role. Six-in-ten adults with postgraduate degrees (60%) have heard a lot about AI, while this share drops to 38% for those with a high school diploma or less. Men are more likely than women to say they have heard a lot about AI, at 53% versus 41%. Among racial and ethnic groups, Asian Americans report the highest level of awareness, with 65% having heard a lot about the technology. This compares to 49% of Black Americans, 47% of Hispanic Americans, and 45% of White Americans.
| Demographic Group | Heard “A lot” about AI (%) | Heard “A little” about AI (%) | Heard “Nothing at all” (%) |
|---|---|---|---|
| U.S. adults | 47 | 48 | 5 |
| Men | 53 | 44 | 3 |
| Women | 41 | 52 | 6 |
| Ages 18-29 | 62 | 35 | 3 |
| Ages 65+ | 32 | 60 | 8 |
| Postgrad degree | 60 | 40 | – |
| HS or less | 38 | 53 | 9 |
| Asian | 65 | 33 | 2 |
| White | 45 | 51 | 4 |
Frequency of Interaction
Similar demographic patterns emerge when examining how often Americans believe they interact with AI. Young adults report the highest frequency; one-third of those under 30 interact with AI at least several times a day. In contrast, 54% of adults 65 and older say they interact with AI less than several times a week.
Education is also a strong predictor of interaction frequency. About 46% of those with a postgraduate degree use AI several times a day or more, compared to only 20% of those with a high school education or less. Asian adults report more frequent interaction (39% use it several times a day or more) compared to White (31%), Hispanic (29%), and Black (27%) adults.
The Desire for Control and Understanding
A central theme from the survey is the public’s desire for greater agency over AI. A majority of Americans (61%) would like more control over how AI is used in their lives, an increase from 55% in 2024. Only 17% are comfortable with their current amount of control. This feeling is underpinned by a sense of powerlessness; 57% of adults feel they have not too much or no control at all over whether AI is used in their lives. Just 13% feel they have a great deal or quite a bit of control.
Despite this desire for more control, most Americans show some willingness to use the technology for practical purposes. Nearly three-quarters (73%) would be willing to let AI assist them at least a little with day-to-day tasks. However, this acceptance is cautious, as only 13% would let it assist them “a lot.” About 27% would not let AI assist with their daily activities at all.
Recognizing AI’s growing presence, the public strongly believes in the importance of AI literacy. Nearly three-quarters of Americans (73%) say it is extremely or very important for people to understand what AI is. This view is especially prevalent among those with higher education; 86% of postgraduates and 83% of college graduates see AI literacy as extremely or very important, compared to 63% of those with a high school diploma or less.
Distinguishing Human from AI Content
A related concern is the ability to differentiate between content created by humans and content generated by AI. An overwhelming majority of Americans (76%) say it’s extremely or very important to be able to tell whether pictures, videos, and text were made by AI. Yet there is a significant confidence gap: more than half of the public (53%) say they are not too confident or not at all confident in their own ability to detect AI-generated content. Only 12% are extremely or very confident, while 35% are somewhat confident.
The Perceived Impact on Human Abilities
One of the most significant areas of public concern is the potential for AI to degrade fundamental human skills. Americans are, on the whole, pessimistic about how increased AI use will affect creativity, interpersonal relationships, decision-making, and problem-solving.
Creativity and Relationships
The survey shows that 53% of U.S. adults believe the increased use of AI will make people’s ability to think creatively worse. Only 16% think it will improve this skill, while another 16% expect no change. Similarly, half of Americans (50%) say AI will worsen people’s ability to form meaningful relationships. A mere 5% think AI will improve this ability, while 25% believe it will have no impact.
Younger adults are particularly concerned about these potential negative effects. About 61% of those under 30 think AI will make people worse at thinking creatively, and 58% believe it will harm the ability to form meaningful relationships. These figures are higher than those for adults aged 65 and older, where 42% and 40%, respectively, share these negative views.
| Ability | Worse (%) | Better (%) | Neither better nor worse (%) | Not sure (%) |
|---|---|---|---|---|
| Think creatively | 53 | 16 | 16 | 16 |
| Form meaningful relationships | 50 | 5 | 25 | 20 |
| Make difficult decisions | 40 | 19 | 20 | 20 |
| Solve problems | 38 | 29 | 15 | 17 |
Decision-Making and Problem-Solving
Public opinion on AI’s impact on decision-making and problem-solving is more mixed, but still leans negative. About 40% of Americans believe AI will make people worse at making difficult decisions, compared to 19% who think it will make them better. For problem-solving, 38% expect a negative impact, while a larger share, 29%, anticipates a positive one. Sizable shares of the public, ranging from 16% to 20% across these different skills, are unsure what the impact will be.
The Role of AI in Society
The survey highlights a clear distinction in public acceptance of AI based on the task it’s performing. Americans are broadly receptive to AI’s use in roles that involve heavy data analysis but are highly resistant to its involvement in personal, judicial, or governmental affairs.
Areas of Acceptance
Majorities of the public believe AI should play at least a small role in several analytical domains. These include:
- Forecasting the weather: 74% support a role for AI.
- Searching for financial crimes: 70% are in favor.
- Searching for fraud in government benefits claims: 70% support this application.
- Developing new medicines: 66% see a role for AI.
- Identifying suspects in a crime: 61% are open to its use.
In these areas, about one-third of Americans support AI playing a “big role.” Adults with higher education are much more receptive to these applications. For example, 85% of those with a postgraduate degree say AI should have a role in developing new medicines, compared to 52% of those with a high school diploma or less.
Areas of Rejection
Conversely, Americans overwhelmingly reject the use of AI in more personal and subjective areas. About 73% say AI should play “no role at all” in advising people about their faith in God. Two-thirds (66%) believe it should have no role in judging whether two people could fall in love.
There is also strong public skepticism about AI’s role in governance and the legal system. About 60% of Americans say AI should not have a role in making decisions about how to govern the country. Nearly half (47%) believe AI should play no role in selecting who should serve on a jury, while only 33% support AI having a small or big role in this process.
| Area of Application | A big role (%) | A small role (%) | No role at all (%) |
|---|---|---|---|
| Forecasting the weather | 35 | 39 | 12 |
| Searching for financial crimes | 35 | 35 | 13 |
| Developing new medicines | 31 | 35 | 18 |
| Providing mental health support | 11 | 34 | 36 |
| Selecting who should serve on a jury | 5 | 28 | 47 |
| Making decisions about how to govern the country | 4 | 23 | 60 |
| Judging whether two people could fall in love | 3 | 16 | 66 |
| Advising people about their faith in God | 3 | 8 | 73 |
Summary
The American public’s view of artificial intelligence in 2025 is marked by caution and a growing sense of concern. While awareness of and interaction with AI are on the rise, especially among younger and more educated demographics, this familiarity has not translated into widespread excitement. Instead, a majority of Americans are more worried than enthusiastic, pointing to the potential for AI to erode essential human skills, spread misinformation, and operate beyond human control. There’s a strong public demand for greater personal control over how AI is used and near-universal agreement on the importance of being able to distinguish AI-generated content from human creations. The public draws a clear line around AI’s acceptable uses, showing openness to its application in data-heavy tasks like scientific research and weather forecasting, but firmly rejecting its involvement in personal relationships, governance, and faith. As AI technology continues to evolve and become more integrated into society, these public attitudes will shape the ongoing dialogue about its regulation, implementation, and ultimate place in the human experience.
10 Best-Selling Books About Artificial Intelligence
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark
This book frames artificial intelligence as an evolution of “life” from biological organisms to engineered systems that can learn, plan, and potentially redesign themselves. It outlines practical AI governance questions – such as safety, economic disruption, and long-term control – while grounding the discussion in real machine learning capabilities and plausible future pathways.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
This book analyzes how an advanced artificial intelligence system could outperform humans across domains and why that shift could concentrate power in unstable ways. It maps scenarios for AI takeoff, AI safety failures, and governance responses, presenting the argument in a policy-oriented style rather than as a technical manual.
Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell
This book argues that the central issue in modern AI is not capability but control: ensuring advanced systems pursue goals that reliably reflect human preferences. It introduces the alignment challenge in accessible terms, connecting AI research incentives, machine learning design choices, and real-world risk management.
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos
This book explains machine learning as the engine behind modern artificial intelligence and describes multiple “schools” of learning that drive practical AI systems. It connects concepts like pattern recognition, prediction, and optimization to everyday products and to broader societal effects such as automation and data-driven decision-making.
The Alignment Problem: Machine Learning and Human Values by Brian Christian
This book shows how machine learning systems can produce outcomes that diverge from human values even when designers have good intentions and ample data. It uses concrete cases – such as bias in automated decisions and failures in objective-setting – to illustrate why AI ethics and evaluation methods matter for real deployments.
Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
This book separates marketing claims from technical reality by explaining what today’s AI can do, what it cannot do, and why general intelligence remains difficult. It provides a clear tour of core ideas in AI and machine learning while highlighting recurring limitations like brittleness, shortcut learning, and lack of common sense reasoning.
The Age of AI: And Our Human Future by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
This book focuses on how artificial intelligence changes institutions that depend on human judgment, including national security, governance, and knowledge creation. It treats AI as a strategic technology, discussing how states and organizations may adapt when prediction, surveillance, and decision-support systems become pervasive.
AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee
This book compares the AI business ecosystems of the United States and China, emphasizing how data, talent, capital, and regulation shape competitive outcomes. It explains why applied machine learning and automation may reconfigure labor markets and geopolitical leverage, especially in consumer platforms and industrial applications.
Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz
This book tells the modern history of deep learning through the researchers, labs, and corporate rivalries that turned neural networks into mainstream AI. It shows how technical breakthroughs, compute scaling, and competitive pressure accelerated adoption, while also surfacing tensions around safety, concentration of power, and research openness.
The Coming Wave: AI, Power, and Our Future by Mustafa Suleyman and Michael Bhaskar
This book argues that advanced AI systems will diffuse quickly across economies and governments because they can automate cognitive work at scale and lower the cost of capability. It emphasizes containment and governance challenges, describing how AI policy, security controls, and institutional readiness may determine whether widespread deployment increases stability or amplifies systemic risk.

