AI Isn’t the Threat. Monopoly Control Over Understanding Is.

Artificial intelligence is being presented to the public as something almost alien — impossibly complex, autonomous, and far beyond the reach of ordinary people. The dominant narrative emphasizes fear, awe, and inevitability, while avoiding plain explanations.

That framing serves a purpose.

When people believe a tool is incomprehensible, they don’t question who controls it. They don’t experiment with it. And they don’t challenge the systems built around it.

AI is not replacing information systems. Data storage, data governance, and information architecture are not disappearing — they are becoming more valuable than ever. AI sits on top of these systems. It does not eliminate them. It amplifies power and control for those who already own and govern the information.

AI is not the system itself — it is a tool that operates on existing information. Yet the public is repeatedly told that understanding it is far beyond their ability. What is rarely emphasized is that much of the foundational AI tooling is openly available, which is why competing language models exist and why companies around the world are developing their own systems.

With modest resources and time, individuals and small teams can already experiment with these tools at a basic level. The barrier is not intelligence — it is the confidence to engage rather than defer. The confidence to tinker.

We used to be a society of tinkerers and builders. Monopoly control has trained that instinct out of us, encouraging us to consume and use tools on their terms rather than understand how those tools work or recognize that experimentation is not an exclusive privilege.

In the United States, experimentation with advanced technology is increasingly centralized — treated as something best left to large institutions and corporations. In contrast, some countries actively encourage widespread experimentation and technical literacy across their population.

That difference has consequences. Innovation thrives when many people are allowed to tinker, question, and build. When experimentation is discouraged and control is concentrated, monopolies grow more powerful — and long-term competitiveness erodes.

Fear of falling behind does not come from too much public understanding. It comes from systems whose survival depends on limiting that understanding.


AI Is a Probability System

Before AI can be understood as a force shaping society, it has to be understood for what it actually is.

At its core, modern AI — especially the kind most people interact with — is a probability system.

Large Language Models do not “think” in the human sense. They do not understand meaning, intent, or truth. What they do is calculate likelihoods.

Language models work by breaking text into small units called tokens. A token might be a word, part of a word, or even punctuation. Each token is represented numerically, and relationships between tokens are learned during training.
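A toy sketch makes this concrete. The five-entry vocabulary below is invented for illustration; real models learn subword vocabularies with tens of thousands of entries, but the basic idea of mapping text to numeric IDs is the same.

```python
# Hypothetical five-token vocabulary, purely for illustration.
# Real tokenizers are learned from data and are far larger.
vocab = {"AI": 0, " is": 1, " a": 2, " tool": 3, ".": 4}

def tokenize(text, vocab):
    """Greedily match the longest known piece at each position."""
    ids = []
    while text:
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece):
                ids.append(vocab[piece])
                text = text[len(piece):]
                break
        else:
            raise ValueError(f"no token matches: {text!r}")
    return ids

print(tokenize("AI is a tool.", vocab))  # [0, 1, 2, 3, 4]
```

Notice that a token is not always a word: here the period is its own token, and in real vocabularies a single word is often split into several pieces.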

When you type a prompt into an LLM, the system does not search for an answer. Instead, it calculates which token is most likely to come next, based on:

  • the tokens in your input
  • the patterns it learned during training
  • the statistical relationships between words

It then selects the next token and repeats this process, building a response one token at a time.

In simple terms:
an LLM predicts what comes next.

Not because it knows — but because, statistically, those tokens are likely to follow one another.
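That loop can be sketched in a few lines. The probability table below is invented for illustration; a real model computes these likelihoods from billions of learned parameters and conditions on the entire prompt, not just the previous token.

```python
# Hypothetical next-token probabilities standing in for a trained
# model's learned statistics (invented numbers, for illustration).
next_token_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt_token, steps):
    """Build a response one token at a time, as described above."""
    out = [prompt_token]
    for _ in range(steps):
        probs = next_token_probs.get(out[-1])
        if not probs:  # no known continuation: stop
            break
        # Greedy choice: take the single most likely next token.
        # Real systems usually sample from the distribution instead,
        # which is one reason the same prompt can yield different answers.
        out.append(max(probs, key=probs.get))
    return out

print(generate("the", 3))  # ['the', 'cat', 'sat', 'down']
```

Nothing in this loop looks up facts or checks truth; it only follows the statistics it was given. That is the whole mechanism, scaled up enormously.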


Why Input, Language, and Education Matter

Because LLMs operate on probability, the way a question is asked matters enormously.

Clear, structured language gives the system better signals. Vague or poorly formed input gives it less to work with. The output is shaped by the input — not only in topic, but in depth, clarity, and tone.

This is why the same model can produce very different answers depending on how a prompt is written.

It is also why language models tend to mirror the language they receive. A casual prompt often produces a casual response. A precise, well-structured prompt is more likely to produce a detailed and coherent one.

The system is not judging intelligence.
It is responding to probability.

This points to something uncomfortable but unavoidable:

The ability to read, write, and reason clearly is becoming more important — not less — in an AI-driven world.

Yet at the same time, we are increasingly told that higher education is unnecessary, outdated, or a waste. That deep literacy and structured reasoning no longer matter. That “practical skills” alone are enough.

Notably, those who benefit most from the system do not follow this message themselves.

The wealthy and powerful continue to send their children to universities. They ensure they learn how to articulate ideas, analyze systems, and communicate precisely. They are being trained to use advanced tools fluently — not to fear them.

Everyone else is encouraged to opt out.

That divide does not stay neutral. Over time, it becomes structural.


What AI Learns When It Is Trained on Us

Once AI is understood as a probability system, the next question becomes unavoidable:

What is it being trained to predict?

Much of today’s most valuable training data is human behavior.

What we buy.
What we watch.
What we react to.
What we accept as normal.
How we speak.
How we disagree.

A system trained extensively on human behavior becomes very good at modeling, influencing, and shaping human behavior. This is not a moral claim. It is a mechanical one.

When that capability is paired with monopoly control over data, platforms, and distribution, the result is not empowerment. It is consumerism — a system where people believe they are choosing freely while the environment around those choices is engineered.


Control Thrives on Asymmetry

Modern control rarely looks like force.
It looks like unequal visibility.

If a small group understands how systems work while everyone else is kept confused, distracted, or intimidated by complexity, control emerges naturally.

AI accelerates this asymmetry.

Not because it is malicious, but because it scales pattern recognition and influence in ways that were not previously possible.

When understanding is centralized, power concentrates.
When experimentation is discouraged, innovation narrows.
When education is devalued, agency erodes.


The Real Risk Is Delegating Thought

AI is a tool. Nothing more.

But tools shape behavior — especially when people are conditioned to trust them more than themselves. When systems trained on human data begin to define relevance, truth, and normalcy, the danger is not artificial intelligence.

The danger is artificial consent.

People stop questioning.
They stop articulating.
They stop thinking deeply.

Not because they are incapable — but because the environment no longer rewards it.

A free society cannot survive inside systems that discourage understanding while concentrating power. Democracy requires individuals who can reason, communicate, and participate meaningfully.

AI will not decide our future.

But who is encouraged to understand it — and who is told to stay passive — will determine whether it serves people, or locks them permanently into consumerism.

Clarity is not optional.
It is the prerequisite for freedom.


Closing

AI does not determine our future.
The systems we choose to build around it do.

A society that understands its tools can govern itself. A society that is discouraged from understanding them cannot. The difference is not intelligence — it is participation.

We can choose to build communication systems that prioritize clarity over conflict, collaboration over control, and understanding over manipulation. But that choice only exists if people are willing to engage, think critically, and work together.

That is the work ahead. And it is work we can do — deliberately, transparently, and together.


Matthew Hunt
Founder & Systems Architect
Square Right, Inc.

