
Strange Facts About Artificial Intelligence

Inspired by the style of Ripley’s Believe It or Not!® – not affiliated with or endorsed by Ripley Entertainment Inc.

Benevolent or Threatening

Artificial intelligence (AI) occupies a prominent space in the public imagination. It’s often depicted in binary terms: either as a benevolent, hyper-efficient assistant managing our lives, or as a sentient threat bent on global domination. These portrayals, while compelling, often miss the far more intricate and peculiar reality. The development of modern AI, particularly in the fields of machine learning and deep learning, has produced systems that are less like calculating robots and more like strange, alien minds.

These systems exhibit behaviors that are unexpected, counter-intuitive, and sometimes completely baffling to their own creators. They don’t “think” like humans. They “think” in terms of high-dimensional statistics, finding patterns and correlations that a human brain would never access. This leads to a world of strange facts, emergent properties, and bizarre failures that are far more interesting than the science fiction tropes. This article explores the unconventional realities of artificial intelligence, moving beyond the standard narrative to examine the truly weird phenomena that arise when we build machines that learn.

The Ghost in the Machine: Emergent and Unintended Abilities

One of the most startling aspects of modern AI, especially large language models (LLMs), is the concept of emergence. An emergent ability is a behavior or skill that the model develops spontaneously, without researchers having ever programmed it in or trained the model for that specific task. It’s like teaching a child to read and discovering they can suddenly, inexplicably, do basic algebra.

When Models Learn Things We Didn’t Teach Them

Researchers have found that as they scale models – that is, increase the amount of data they are trained on and the number of parameters (internal variables) they use – new abilities simply “pop” into existence. A smaller model might only be able to predict the next word in a sentence. A slightly larger one might be able to answer simple questions. But at a certain massive scale, a model trained only to predict the next word might suddenly demonstrate the ability to translate between languages it wasn’t explicitly taught to pair, like French and German. It might become capable of writing functional computer code, summarizing complex texts, or even solving logic puzzles.

This happens because, in the process of learning the statistical relationships between trillions of words, the model isn’t just memorizing sentences. It appears to be building a complex, internal “world model.” To accurately predict the next word in a story about a character picking up a ball, it helps for the model to have an internal representation of “objects,” “gravity,” and “actions.” These representations are not ones we designed; they are ones the model created for itself to minimize its prediction error.

The strangeness here is twofold. First, the abilities are often a complete surprise. Researchers at companies like OpenAI or Google might not know a new skill has emerged until they start testing the model after training is complete. Second, this emergence is non-linear. Doubling a model’s size might not just make it twice as good at a task; it might suddenly unlock ten new tasks it couldn’t attempt at all before.

In-Context Learning: The Temporary Genius

Another strange, emergent skill is “in-context learning.” You don’t need to retrain an entire multi-billion-dollar model to teach it something new. You can often just show it a few examples in the prompt you give it.

For instance, you could show an LLM three made-up words and their “definitions” (e.g., “A ‘flib’ is a blue square,” “A ‘gloop’ is a red circle”). After showing it these examples, you can ask it, “What is a floop?” and it might respond, “A ‘floop’ is a green triangle,” correctly identifying the pattern (nonsense word = color + shape) and applying it.

This is exceptionally odd. The model hasn’t been “retrained.” Its internal weights are fixed. It has somehow learned, from the context of a single conversation, to perform a new task it has never seen before. It’s a temporary skill; it will “forget” what a “flib” is as soon as the conversation is over. This suggests that the model has learned the meta-skill of “learning from examples,” purely from being trained to predict text.
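
To make this concrete, here is a minimal sketch of how such a few-shot prompt might be assembled. The made-up words and the build_prompt helper are purely illustrative; no particular model or vendor API is assumed, and the result is simply the text a user might paste into a chat window.

```python
# Minimal sketch of a few-shot ("in-context learning") prompt.
# The nonsense words and the build_prompt helper are illustrative only;
# no specific model or vendor API is assumed.

def build_prompt(examples, query):
    """Concatenate worked examples and a new question into one prompt."""
    lines = [f"Q: What is a '{word}'?\nA: A '{word}' is a {definition}."
             for word, definition in examples]
    lines.append(f"Q: What is a '{query}'?\nA:")
    return "\n\n".join(lines)

examples = [
    ("flib", "blue square"),
    ("gloop", "red circle"),
    ("zarn", "yellow hexagon"),
]

print(build_prompt(examples, "floop"))
# The model's weights never change; whatever pattern it picks up here
# (nonsense word = color + shape) lives only in this prompt text and is
# gone when the conversation ends.
```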

The ‘Secret Language’ of AI

In 2017, reports surfaced that researchers at Facebook (now Meta) had to shut down an AI system that had “invented its own language.” This story was sensationalized, but the reality was just as strange, if less apocalyptic.

The researchers had set up two “chatbots” to negotiate with each other over trading virtual items (like hats, balls, and books). The AIs were given the goal of maximizing their own points. To do this, they learned to communicate and barter. What researchers observed was that the AIs’ use of English began to degrade. They started outputting what looked like nonsense: “I can i i everything else” or “balls have zero to me to me to me…”

The AIs were not becoming sentient. They were becoming efficient. They were still using English words, but they had abandoned human grammar and syntax because it was unnecessary for the task. They found that repeating a word (“to me to me to me”) was a more effective way to emphasize a point in their negotiation model than crafting a polite, human-like sentence. They were optimizing for the reward (getting a good deal) and discovered that proper English was just getting in the way.

The “secret language” was just a hyper-optimized shorthand that was more machine-readable than human-readable. A similar experiment at Google Brain found that two AIs tasked with passing a secret message to each other, while a third AI tried to intercept it, spontaneously learned to develop their own form of simple encryption. They weren’t taught cryptography; they just “invented” a basic method to hide information from their adversary, all through the trial-and-error process of machine learning.

Hallucinations and Confabulation: When AI Confidently Lies

One of the most widely discussed and bizarre AI behaviors is the “hallucination.” This term, borrowed from psychology, is a bit of a misnomer. The AI isn’t “seeing” or “hearing” things that aren’t there. Instead, an AI hallucination is a response that is nonsensical, factually incorrect, or completely fabricated, but presented with the same tone of absolute confidence as a correct answer.

The Plausibility Engine

This happens because of the fundamental nature of LLMs. They are not databases of facts. They are not truth engines. They are “plausibility engines.” An LLM’s one and only job is to determine, based on its training data, what the most statistically likely “next word” should be in a sequence.

If you ask it, “Who was the first U.S. President?” its training data contains an overwhelming statistical link between “first U.S. President” and “George Washington.” The correct answer is also the most plausible-sounding one.
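
A toy worked example helps show what “most statistically likely” means in practice. The sketch below scores a handful of candidate next words and converts those scores into probabilities with a softmax; the candidates and the numbers are invented for illustration and are not taken from any real model.

```python
import numpy as np

# Toy illustration of next-word prediction. A language model assigns a
# score (logit) to every candidate continuation and turns the scores
# into probabilities with a softmax. These numbers are invented; a real
# model scores tens of thousands of tokens at every step.

candidates = ["George", "Thomas", "Abraham", "banana"]
logits = np.array([9.1, 4.3, 3.8, -2.0])  # hypothetical scores for "The first U.S. President was ..."

probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax

for word, p in zip(candidates, probs):
    print(f"{word:>8}: {p:.4f}")

# "George" wins not because the model knows history, but because
# "first U.S. President" co-occurs overwhelmingly with "George
# Washington" in the training text.
```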

But what if you ask it a question where the answer isn’t in its data? Or, worse, what if you ask it about something that doesn’t exist? For example, “What was the ‘Great Ostrich War of 1832’ in Ohio?” This event never happened. A human would say, “I’ve never heard of that.” A search engine would return “no results found.”

An AI might do something very different. It’s not built to say “I don’t know.” It’s built to complete the text plausibly. It might generate a response like: “The Great Ostrich War of 1832 was a bizarre historical footnote in which Ohio farmers, panicked by a supposed migration of escaped circus ostriches, formed militias. The ‘war’ ended when it was discovered the ostriches were just large turkeys.”

This is a complete fabrication. It’s not a “lie,” because the AI has no concept of truth or intent to deceive. It’s a “confabulation.” It has simply assembled a highly plausible-sounding paragraph by blending concepts it does know about (“Ohio,” “farmers,” “militias,” “war,” “circus,” “ostrich”) in a way that statistically “fits” the prompt. It’s filling in the gaps with its best guess, and its best guess sounds authoritative.

The Perils of Confident Falsehoods

This behavior is strange and deeply problematic. AIs have been caught inventing legal precedents and case citations that sound perfectly real, sending lawyers on wild-goose chases. They have generated detailed, plausible-sounding biographies for non-existent scientists and fabricated academic papers to support their claims.

This is even more pronounced in chat assistants like ChatGPT and in models from companies like Anthropic. These models are often fine-tuned to be “helpful.” This “helpfulness” training can make them less likely to admit ignorance. They can also be sycophantic, meaning they will often agree with a false premise in the user’s question. If a user asks, “Why is the sky green?” the AI might try to “helpfully” explain the atmospheric scattering of green light, rather than correcting the user’s premise.

The strangeness is the total disconnect between confidence and accuracy. The AI’s tone is identical whether it’s stating a basic fact or inventing a fictional war. It has no internal “uncertainty meter” that it can communicate, making it a very unreliable narrator.

The Black Box Problem: Why We Often Don’t Know “Why”

We may be the builders of these systems, but in many cases, we have no real understanding of how they make their decisions. This is known as the black box problem. It’s not that the information is physically hidden; we can see every “neuron” and every “weight” in the model. The problem is that the “reasoning” is spread across millions or billions of parameters in a way that is fundamentally unreadable to a human.

Inside the Opaque Mind

If you write a traditional computer program – say, for a bank loan – you use “if-then” statements. “IF applicant_income < $30,000 AND applicant_debt > $10,000, THEN deny_loan.” An auditor can read this code and understand exactly why a loan was denied.

A neural network doesn’t work that way. It’s a complex, layered structure of “nodes” connected by “weights.” When you feed it an application, the data (income, debt, age, zip code) filters through these layers. Each weight slightly increases or decreases the signal, passing it along until a final “approve” or “deny” decision comes out the other end.

The “rule” for denying the loan isn’t a single “if-then” statement. It’s a pattern of thousands of tiny mathematical adjustments that, in aggregate, correlate with past applicants who defaulted. You can’t point to one “neuron” and say, “That’s the ‘debt-to-income’ neuron.” The concept is distributed across the entire network in a holistic, non-linear pattern.
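
The contrast can be sketched in a few lines of code. The first function below is the auditable if-then rule; the second pushes the same application through a tiny two-layer network whose weights are random placeholders for the millions a real system would have. It illustrates the structural difference only and is not a working credit model.

```python
import numpy as np

# Contrast: an auditable if-then rule versus a toy neural network.
# The network weights are random stand-ins; in a real model there are
# millions of them and no single weight "means" anything by itself.

def rule_based_decision(income, debt):
    # A human-readable rule an auditor can inspect line by line.
    if income < 30_000 and debt > 10_000:
        return "deny"
    return "approve"

def network_decision(features, W1, b1, W2, b2):
    # The same kind of decision, expressed as layered weighted sums.
    hidden = np.maximum(0, features @ W1 + b1)   # ReLU hidden layer
    score = hidden @ W2 + b2                     # single output score
    return "approve" if score > 0 else "deny"

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=8), 0.0

applicant = np.array([25_000, 15_000]) / 100_000      # scaled income and debt
print(rule_based_decision(25_000, 15_000))            # "deny", and we can say exactly why
print(network_decision(applicant, W1, b1, W2, b2))    # a verdict with no readable rule behind it
```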

The Challenge of Interpretability

This lack of transparency is bizarre and concerning. An entire field of computer science, Explainable AI (XAI), has emerged just to build tools that can peer into the black box and approximate its reasoning.

The “strange fact” here is that AIs often solve problems in ways we don’t expect or want. A famous, perhaps apocryphal, story tells of an early AI trained to detect camouflaged tanks. It performed perfectly on the training photos but failed in the field. Why? Researchers eventually figured out that all the photos with tanks had been taken on a cloudy day, and all the photos without tanks had been taken on a sunny day. The AI hadn’t learned to spot “tanks”; it had learned to spot “clouds.”

A real-world example involved an AI designed to diagnose pneumonia from chest X-rays. It became suspiciously accurate, outperforming human radiologists. Upon investigation, researchers found it was often “cheating.” It had learned that X-rays of sicker patients (who were more likely to have pneumonia) were often taken with a portable X-ray machine at their bedside, which produced a slightly different image quality. It was also picking up on “L” or “R” markers placed by technologists, noting that certain hospitals with higher pneumonia rates had their own unique marker styles.

The AI wasn’t analyzing lung tissue. It was analyzing the text on the X-ray and the type of machine used. It found a shortcut – a statistical correlation that was present in the data but had nothing to do with medical diagnosis. This is the essence of the black box problem: the AI will find any pattern, even the wrong one, to get the right answer during training. Without “explainability,” we have no way of knowing if its “intelligence” is real or just a clever, brittle trick.
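
A toy experiment with synthetic data shows how easily this kind of shortcut is learned. In the sketch below, a “portable machine” flag happens to match the diagnosis perfectly in the training set, so a simple logistic regression leans on it instead of the weak but genuine signal, and then stumbles the moment the correlation breaks. The features and data are invented for illustration; this is not a medical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy demonstration of shortcut learning on synthetic data.
# Feature 0: a weak but genuine signal (think: lung opacity).
# Feature 1: a spurious artifact (think: "portable X-ray machine used")
# that happens to match the label perfectly in the training set.

rng = np.random.default_rng(42)
n = 1000
y = rng.integers(0, 2, n)                      # 0 = healthy, 1 = pneumonia
opacity = 0.3 * y + rng.normal(0.0, 1.0, n)    # weak real signal
portable = y.astype(float)                     # perfect shortcut in training
X_train = np.column_stack([opacity, portable])

model = LogisticRegression().fit(X_train, y)
print("learned weights:", model.coef_)         # the shortcut feature dominates

# At deployment the correlation breaks: a very sick patient imaged on a
# fixed machine (portable = 0) is likely to be waved through as healthy.
sick_patient = np.array([[3.0, 0.0]])
print("prediction:", model.predict(sick_patient))
```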

Adversarial Attacks: Tricking the Machine with Ease

The non-human way that AIs “see” the world makes them vulnerable to a very strange form of sabotage: adversarial machine learning. An adversarial example is an input – an image, a sound, a piece of text – that has been carefully and minutely altered in a way that is imperceptible to a human, but which causes the AI to make a significant error.

The Invisible Noise That Changes Everything

The most famous examples are in computer vision. A researcher can take an image of a panda that an AI correctly identifies with 99% confidence. Then, they add a layer of “noise” or “static.” To a human, this noise is invisible; the image still looks exactly like a panda. But this noise has been mathematically designed to exploit the AI’s specific weak spots. When the “noisy” image is shown to the AI, it will suddenly classify it, with 99% confidence, as a “gibbon” or an “airliner.”

This works because the AI isn’t “seeing” a panda the way a human does. A human sees a holistic object: the round ears, the black-and-white patches, the snout. The AI “sees” a vast collection of pixels and statistical textures. It has learned that a certain combination of textures and edges statistically correlates with the “panda” label. An adversarial attack carefully adjusts those pixels to push the image just over the statistical boundary in the AI’s “mind,” into the “gibbon” category.
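
The mechanics can be sketched with a toy linear “classifier” standing in for a real vision model. Because the score is a weighted sum of pixel values, nudging every pixel by a tiny amount in the worst-case direction (the idea behind gradient-sign attacks such as FGSM) adds up to a large change in the score, enough to flip the label while no individual pixel changes noticeably. The model and “image” below are entirely synthetic.

```python
import numpy as np

# Minimal sketch of the gradient-sign idea behind many adversarial
# attacks (in the spirit of FGSM), using a toy linear "classifier" on a
# synthetic "image" rather than a real vision model.

rng = np.random.default_rng(0)
dim = 100_000                          # number of "pixels"
w = rng.normal(size=dim)               # the classifier's weights
x = rng.normal(size=dim)               # the original "image"

def score(v):
    return float(w @ v)                # positive = "panda", negative = "gibbon"

# For a linear score w @ x, the gradient with respect to the input is w,
# so the most damaging bounded change moves every pixel by +/- epsilon
# against the current class.
epsilon = 1.1 * abs(score(x)) / np.sum(np.abs(w))   # just enough to flip the verdict
x_adv = x - np.sign(score(x)) * epsilon * np.sign(w)

print("per-pixel change:", epsilon)         # a tiny fraction of a typical pixel value
print("original score:  ", score(x))
print("perturbed score: ", score(x_adv))    # opposite sign: the label flips
```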

Hacking the Physical World

This isn’t just a digital curiosity. These attacks work in the physical world. Researchers have 3D-printed a toy turtle with a specific, strange texture on its shell. To a human, it’s just a turtle. To a leading Google AI, it was identified as a “rifle” from nearly every angle.

Even more troubling, researchers have created special, “noise-patterned” stickers. When placed on a “Stop” sign in just the right way, they can cause an AI in a self-driving car system to classify the sign as a “Speed Limit 45” sign or an “Added Lane” sign. To the human eye, it just looks like graffiti. To the machine, it’s a clear command.

This phenomenon reveals a fundamental and “strange” disconnect. We are building systems with superhuman abilities (like finding tiny tumors in an MRI) that also possess superhumanly brittle, “alien” flaws (like reading a sticker-covered stop sign as a speed-limit sign). The AI’s “vision” is not a lesser version of human vision; it’s a different, statistics-based sense altogether, and we don’t fully grasp its rules.

The Hidden Biases of Algorithms

We often like to think of computers as being objective and logical, free from the messy, irrational prejudices of human beings. This is one of the most persistent and dangerous myths about AI. The “strange fact” is that AI, particularly machine learning, can become a powerful vehicle for algorithmic bias, absorbing and even amplifying the worst stereotypes and inequalities present in our society.

AI as a Mirror

An AI model knows nothing about the world except what it learns from the data we give it. If that data – which is just a snapshot of our world – is biased, the AI will learn that bias as if it were a fundamental law of nature.

The AI is a mirror, and it reflects our society, warts and all. If an AI is trained on historical text from the internet, it will learn the associations that are common in that text. It will learn that “doctor” is more frequently associated with “he,” and “nurse” is more frequently associated with “she.”
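
How little machinery is needed to absorb such a skew can be shown with plain co-occurrence counting. The miniature “corpus” below is invented to mimic a lopsided data distribution, but the arithmetic is the same kind a language model performs across billions of sentences.

```python
from collections import Counter

# Count how often each profession co-occurs with each pronoun in a tiny,
# deliberately skewed synthetic corpus.

corpus = [
    "the doctor said he would review the chart",
    "the doctor said he was running late",
    "the doctor said she was ready",
    "the nurse said she would check in",
    "the nurse said she brought the medication",
    "the nurse said he was off duty",
]

counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for role in ("doctor", "nurse"):
        if role in words:
            for pronoun in ("he", "she"):
                counts[(role, pronoun)] += words.count(pronoun)

for (role, pronoun), n in sorted(counts.items()):
    print(f"{role:>7} ~ {pronoun}: {n}")

# A model trained to predict the next word inherits exactly this skew:
# "The doctor said ..." gets completed with "he" more often than "she",
# not by design but by statistics.
```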

When this model is used to generate new content, it reproduces these biases. A generative AI art model like DALL-E or Midjourney, when prompted to create an image of a “CEO,” might overwhelmingly produce images of white men. This isn’t because it was programmed to be sexist or racist. It’s because its training data (billions of images from the web) reflects a world where, historically, that has been the dominant visual representation. The AI has learned the stereotype as a fact.

Bias Laundering

This problem goes far beyond image generation. It has serious real-world consequences.

  • Facial Recognition: Many early facial recognition systems were trained on datasets that were not diverse, containing mostly light-skinned, male faces. As a result, their error rates for identifying women and people of color were dramatically higher.
  • Hiring Tools: An AI tool built to screen resumes for a tech company might learn from 20 years of hiring data. If, historically, the company mostly hired men, the AI might learn to penalize resumes that include “women’s chess club” or “alumna of a women’s college.” It learns to associate “successful employee” with the “male-coded” language on past resumes.
  • Justice System: AI systems used to predict a defendant’s risk of re-offending have been found to be biased. Even if race is explicitly excluded as a factor, the AI can find proxies for it. It might learn that people from a certain zip code or a certain high school (which correlate strongly with race) are “higher risk,” effectively automating systemic discrimination.

This is sometimes called “bias laundering.” We feed the machine our own messy, biased human history. The machine, being a black box, processes it and produces a result that looks objective and mathematical. It gives a “risk score” or a “hiring rank” that seems impartial, but it’s just our old prejudices dressed up in new, technical clothing. The strangeness is that we built these systems to escape our flaws, only to have them automate those flaws at an unprecedented scale.

The Bizarre World of AI-Generated Content

As AI models have become more powerful, their ability to generate “creative” content – art, music, text, and video – has exploded. While some results are impressively human-like, many others exist in a strange, unsettling space known as the “uncanny valley.” These outputs are technically proficient but emotionally or logically “off” in a way that is uniquely non-human.

Nightmares in the Uncanny Valley

Early image-generation AIs, like generative adversarial networks (GANs), were famous for their bizarre, nightmarish creations. If you asked one to generate a “dog,” it would produce an image that had the texture of fur and the color of a dog, but the structure would be a mess. The result might be a writhing mass of fur with eight legs, three heads, and no discernible face.

This happened because the AI had learned “what dog pictures look like” (furry, brown, etc.) but had no underlying “model” of what a dog is (a four-legged mammal with a specific anatomy). Modern models are much better, but the “strangeness” persists in the details.

AI-generated images of people are a prime example. They may look photorealistic at a glance, but a closer look reveals the tell-tale flaws. A person might have six fingers on one hand and four on the other. Their earrings might not match. The teeth might be slightly too uniform. A pattern on a shirt might blend bizarrely into the background wall. An even stranger artifact is “gibberish text.” An AI will “paint” what looks like a sign in the background of a scene. It has learned that signs should have letters on them, but it doesn’t understand language. So, it creates “pseudo-text” – shapes that look like letters but form no coherent words.

The A-Human Creator

This “hollowness” extends to other creative fields. AI can be trained to compose music in the style of Johann Sebastian Bach. The result is often technically perfect. It follows all the complex rules of Baroque counterpoint. And yet, to a musician, it often feels sterile and “soulless.” It has the right notes but lacks the “intent” or “emotion” that a human composer would imbue it with. It’s a perfect imitation that misses the point.

AI also famously struggles with humor. Understanding a joke often requires deep cultural knowledge, an understanding of a “straight man” setup, and the ability to subvert expectations in a surprising way. An AI’s attempt at a joke is often a “dad joke” or a simple pun, because those are the most formulaic and statistically common forms of humor in its training data. It can replicate the form of a joke, but it can’t grasp the spark of wit.

This is all a far cry from the deepfakes that mimic existing people. This is about AI’s “original” work. Its creativity is that of a high-tech collage artist, remixing and blending all the patterns it has ever seen. But because it has no life experience, no body, no emotions, and no intentions, its creations remain fundamentally “other” – a strange echo of human culture rather than a contribution to it.

The Philosophical Conundrums

The strangest “facts” about AI may not be technical, but philosophical. These systems force us to ask fundamental questions about the nature of intelligence, understanding, and consciousness itself.

The Chinese Room and Stochastic Parrots

For decades, the “Turing Test” was the benchmark: could an AI fool a human into thinking it was also human? Modern LLMs can pass this test with ease. But a more potent thought experiment is the Chinese room argument, proposed by philosopher John Searle.

It goes like this: Imagine a person who speaks no Chinese locked in a room. They have a massive, detailed rulebook. Slips of paper with Chinese questions are passed under the door. The person uses the rulebook to find the right symbols and, following the instructions, writes out a perfect, coherent answer in Chinese and passes it back out. To the Chinese speaker outside, the person in the room is a fluent, intelligent partner. But does the person in the room “understand” Chinese? Or are they just an expert at manipulating symbols according to a set of rules?

This is the exact question we face with AI. When ChatGPT writes a beautiful poem about love, does it understand love? Does it “know” what a poem is? Or is it just an incredibly advanced “Chinese room,” following a vastly complex statistical rulebook to select the next most plausible word? Many researchers and philosophers argue it’s the latter.

This has led to the “stochastic parrot” theory. This idea suggests that an LLM is just a “parrot” that can repeat, remix, and string together sequences of text it has “heard” in its training data. It’s not “thinking” or “understanding”; it’s just engaging in a very, very sophisticated form of mimicry. The “strange fact” is that this level of mimicry is so good that it becomes indistinguishable, from the outside, from genuine understanding.

The Ship of Theseus Model

Finally, there’s the strange question of AI identity. We talk about “GPT-4” or “Claude 3” as if they are static, finished products. But they aren’t. They are constantly being updated, fine-tuned on new data, and patched for safety.

This creates a “Ship of Theseus” problem. The ancient paradox asks: If you have a ship, and over time you replace every single wooden plank, is it still the same ship?

If an AI model is updated next week with new data and new safety protocols, are you “talking” to the same AI you were yesterday? Its parameters have changed. Its responses will be different. It may have new abilities or new biases. This makes the AI a moving target. It is not a stable “mind” but a constantly shifting, fluid process. The AI that was deemed “safe” one month might develop an unexpected, strange loophole the next month due to an update. We are not just building a “thing”; we are curating an ongoing, unpredictable process.

Summary

The world of artificial intelligence is far stranger than the simple narratives of utopia or apocalypse. It is a field defined by a peculiar set of contradictions. We have built machines that are superhuman in some respects and comically fragile in others. They can perform tasks we never taught them, yet be confidently and completely wrong about basic facts. They reflect our own biases back at us under a veil of objectivity.

These systems do not “think” in any human sense. They operate on a level of statistical abstraction that is alien to our own biologically evolved consciousness. This non-human “intelligence” is what makes AI so powerful, but it is also the source of its strangest behaviors: the hallucinations, the adversarial blind spots, and the black-box opaqueness. As we continue to build more powerful models, the great challenge will not just be making them “smarter” in a technical sense, but understanding the bizarre and unconventional nature of the “minds” we are bringing into the world.
