

Ex Machina Was the Warning. We Are Now at the AI Oppenheimer Moment: The Rise of Civil–Military Fusion Programs.

By Rex M. Lee | Security Advisor

We are at a pivotal moment in history—the evolution of artificial intelligence integrated into a data-driven business model centered on targeted advertising, rooted in Surveillance Capitalism.

Surveillance Capitalism is fueled by addictive, brain-hijacking and manipulative advertising technologies, now amplified by AI systems capable of inducing behavioral conditioning—what many recognize as the Eliza Effect. This is not just theory; it is the underlying premise warned about in the film Ex Machina.

To fully understand the warning depicted in that film, we must revisit a defining moment in modern history. In Ex Machina, a quote is referenced that echoes far beyond cinema:

“Now I am become Death, the destroyer of worlds.”

This line, drawn from the Bhagavad Gita, was famously cited by J. Robert Oppenheimer after witnessing the first atomic bomb test.

But that realization came too late.

The weaponization of nuclear energy had already crossed the point of no return. Despite early warnings from Albert Einstein, once nuclear capability was achieved, control was lost. The technology proliferated—copied, stolen, and repurposed by adversaries—leading to decades of geopolitical instability and existential risk that continues today.

We are now facing an equivalent moment.

This time, the weapon is not nuclear energy—it is data.

More specifically, it is the weaponization of our personal information through AI systems. Military contractors such as Palantir Technologies have demonstrated how AI can be used to operationalize vast datasets—information originally collected through consumer technologies—for intelligence and warfare applications.

That data does not originate in a vacuum. It is harvested continuously through operating systems, applications, social media platforms, and AI tools—largely driven by the global AdTech ecosystem pioneered at scale by Google in 2013.

In simple terms, platforms built for advertising have become pipelines for intelligence.

Palantir’s platforms, such as Gotham, integrate and analyze this data in ways similar to how data brokers aggregate consumer information—only now applied to military and intelligence objectives.

I was asked by the Utilities Technology Council and their Federal Communications Commission liaison to conduct a security session for their annual meeting in Minneapolis, focused on the threats posed by civil–military fusion programs. As part of this effort, I authored a report assessing whether China and Russia possess capabilities comparable to those demonstrated by U.S.-based platforms like Palantir.

My findings indicate that they do—at an estimated 95% parity.

Why?

Because the same global advertising infrastructure connects them.

Advertising networks in China and Russia are linked into a broader ecosystem of distributed AdTech “micro-cores,” some of which operate within their jurisdictions. This means entities within those countries can access and utilize similar datasets—originating from platforms operated by Apple, Microsoft, Meta Platforms, ByteDance, Tencent, and others.

The result is the creation of what can be described as an identifiable “Digital DNA Profile”—a comprehensive user model built from thousands of data points, including personal, behavioral, biometric, financial, medical, and location-based information.

This data is no longer just monetized.

It is operationalized.

According to public statements from executives at Palantir Technologies, AI-driven analysis of such data has proven highly effective in modern military and intelligence operations—ranging from targeting infrastructure to tracking individuals of interest.

Like Oppenheimer, we may have already crossed the line.

Which brings us back to Ex Machina.

The film follows Caleb, a programmer selected to evaluate an advanced AI embodied in a humanoid robot named Ava, created by a powerful tech CEO, Nathan. But beneath the surface lies a deeper truth—one that mirrors our present reality more closely than most realized when the film was released.

In Ex Machina, there is a pivotal moment when Nathan explains the true nature of the experiment. It wasn’t about whether Ava could pass a Turing Test. It was about whether she could manipulate Caleb—predict him, influence him, and ultimately control his behavior to achieve her objective.

That scene was not science fiction.

It was a blueprint.

What Alex Garland, the writer and director of Ex Machina, captured twelve years ago mirrors the exact trajectory many of us in the technology and security space began uncovering as early as 2006, and more concretely between 2010 and 2013, when mobile operating systems, apps, and data ecosystems began to scale globally via smartphones, connected vehicles, and tablet PCs supported by Android, iOS, and Windows.

We are no longer asking if machines can think.

We are now confronting a far more dangerous reality:

  • Machines can model, predict, and influence human behavior at scale—because they were trained on us.

Phase 1: Personalization (2006–2012)

The Foundation Layer

The early smartphone era was sold as convenience.

Operating systems from Google, Apple, and Microsoft began collecting user inputs—searches, location, contacts, behavior—to “customize the experience.”

This was the first stage:

  • Data collection justified as usability
  • Permissions framed as functionality
  • Users unknowingly becoming continuous data sources

This is where the pipeline in Ex Machina begins.

Phase 2: AdTech Core (2013)

The Birth of Surveillance Capitalism at Scale

In 2013, the launch and expansion of global AdTech ecosystems transformed data into currency.

What was once personalization became:

  • Behavioral profiling
  • Identity graphing
  • Real-time bidding on human attention
  • Global data pipelines feeding centralized systems

This was not just advertising.

This was the construction of a planetary-scale behavioral modeling system.

If Ava was trained on search data in Ex Machina, then AdTech became the real-world equivalent:

A global neural network trained on billions of human lives.
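One mechanism named above, real-time bidding on human attention, can be made concrete with a minimal sketch. Many ad exchanges run second-price (Vickrey) auctions over each ad impression; the Python below is a purely illustrative toy, and the bidder names and dollar values are invented:

```python
def second_price_auction(bids: dict) -> tuple:
    """Run a minimal second-price auction: the highest bidder wins
    but pays the runner-up's bid. This illustrates the mechanism
    commonly used in real-time ad exchanges; all data is invented."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    # Rank bidders from highest to lowest bid
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

# Each bid reflects how well a user's behavioral profile matches
# the advertiser's target (hypothetical CPM values in dollars)
winner, price = second_price_auction({"dsp_a": 2.40, "dsp_b": 3.10, "dsp_c": 1.75})
```

The point of the sketch is the input: the bids themselves are priced off the behavioral profile of the individual viewing the page, which is what turns attention into a tradable commodity.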

Phase 3: Behavioral Manipulation (2016) – The Cambridge Analytica Moment

The 2016 U.S. Presidential Election and the Cambridge Analytica scandal exposed something critical:

Our personal information was no longer being used just to predict behavior; it was being used for political indoctrination via targeted advertising technologies that induce the ELIZA effect: the humanization of AI, machines, or technology, leading to an emotional bond built through tech addiction and manipulation.

It was being used to influence it.

  • Psychological profiling
  • Microtargeted messaging
  • Algorithmic amplification
  • Political and ideological conditioning

This is where the ELIZA effect evolves into something far more dangerous:

AI-assisted persuasion at scale.

What Ava did to Caleb in a controlled lab environment was now happening across entire populations.

Phase 4: Generative AI & Civil–Military Fusion (2023–Present) – The Oppenheimer Moment

With the rise of generative AI, platforms tied to organizations like Palantir Technologies, and the integration of large-scale data systems, we have crossed into a new phase:

  • AI systems trained on global behavioral datasets
  • Real-time predictive modeling
  • Decision support for enterprise, government, and defense
  • Integration into intelligence and operational systems

This is no longer commercial technology alone.

This is dual-use infrastructure.

This is where your data—collected through AI, smartphones, apps, chatbots, and platforms—becomes:

  • A strategic asset in modern warfare.

Just as J. Robert Oppenheimer realized the implications of atomic power, we now face the consequences of behavioral intelligence systems built from civilian life.

This is where we now stand: the rise of civil–military fusion programs.

Phase 5: The Next Evolution — Agentic AI & Neural Integration (Toward 2030)

The trajectory does not stop here.

We are moving toward:

  • Agentic AI systems capable of autonomous decision-making
  • Integration with quantum computing
  • Brain-computer interfaces such as Neuralink
  • Continuous feedback loops between human cognition and machine systems

If left unchecked, this path leads to something eerily familiar:

  • The Matrix — human energy and data harvested within a system
  • TRON — centralized digital control environments
  • The Terminator — autonomous systems evolving beyond human oversight

This is not about robots taking over.

It is about humans becoming integrated into systems designed to predict, influence, and monetize their behavior—without compensation or control.

A form of digital dependency.

A form of cyber enslavement.

The Core Problem

At every stage, the same mechanism persists:

  • Data extracted through contracts of adhesion
  • Behavior modeled without meaningful consent
  • Systems optimized for engagement, not well-being
  • Intelligence systems built on uncompensated human input

In Ex Machina, Caleb never realized he was the subject of the experiment.

Today, most users still don’t understand this. Many believe they are simply the product being exploited for targeted advertising, when in reality they are being actively experimented on through highly addictive, brain-hijacking and manipulative advertising systems—now amplified by AI-driven indoctrination technologies.

When combined, these systems function as military-grade behavioral conditioning.

This is why tech addiction cannot be compared to alcohol or tobacco addiction. Those substances, while harmful, do not indoctrinate or psychologically condition the user in real time.

Most privacy and tech addiction advocates, such as Tristan Harris and Jonathan Haidt, focus primarily on limiting tech addiction for adults while banning its use for children. While this is a noble cause, it is the wrong approach.

These addictive and manipulative technologies—when combined—are equally harmful, addictive, and potentially dangerous for adults, not just teens and children.

In fact, they are more dangerous than subliminal advertising, which was banned in the 20th century due to its brainwashing capabilities. That technique was exploited by Joseph Goebbels, the Nazi Party's Minister of Propaganda, to spread misinformation and manipulate public perception.

While this is a difficult comparison, it is increasingly relevant given how governments, politicians, militaries, and intelligence agencies are now weaponizing tech addiction and manipulation through civil–military fusion programs.

Subliminal advertising followed a similar path: it began as a commercial tool for influencing consumer behavior but was later weaponized for political and state-driven propaganda through brainwashing.

We are now witnessing that same pattern repeat itself.

Addictive, brain-hijacking technologies, manipulative advertising systems, and AI-driven indoctrination tools were initially introduced to enhance the user experience. They then evolved into engines for targeted advertising and are now being leveraged for intelligence gathering, the spread of misinformation and propaganda, and consumer, political, and ideological indoctrination.

Worse, this information, when combined with Google's AdTech technologies, can be used for military targeting in the same way a targeted ad finds a consumer: through multiple digital signals coupled with biometric data, including the following:

  1. GPS and geofence location technologies
  2. Near-field communication (NFC tags, RFID)
  3. Wi-Fi access points
  4. Bluetooth connectivity
  5. Cellular tower triangulation
  6. Biometric data
    1. Facial recognition
    2. Voice print
    3. Retinal (eye) scans
    4. Fingerprint
    5. Genetic information (DNA)

All of this information, including sensitive user data, is collected via AI, apps, social media platforms, chatbots, DNA platforms (Ancestry.com), financial platforms, connected vehicles, automated assistants (Alexa), home security and environmental systems, smartphones, computers, and other connected products, all supported by Android, iOS, and Windows.

All of these are products and services we pay for.

One could say we are funding our own oppression: the surveillance, data mining, and weaponization of our information against us; its exploitation for profit via targeted advertising; and the erosion of our civil liberties and human rights, along with the virtual elimination of our privacy.

This cycle—commercial innovation followed by weaponization—is repeating itself nearly a century later.

Which is why the most effective path forward, from this perspective, is to treat these technologies the same way subliminal advertising was treated: by banning their most harmful applications, along with dismantling surveillance capitalism, eliminating targeted advertising practices, and prohibiting predatory terms of service that force participation through contracts of adhesion.

The Fork in the Road: The Einstein Path vs. the Oppenheimer Path

We now face a fundamental choice.

The Oppenheimer Path:

  • Continue scaling surveillance-based AI systems
  • Integrate them into military and intelligence frameworks
  • Allow behavioral data to drive geopolitical strategy
  • Accept manipulation as a byproduct of innovation

The Einstein Path:

  • The adoption of an Electronic Bill of Rights (EBOR) banning:
    • Addictive design
    • Surveillance capitalism
    • Predatory contracts of adhesion (terms of service) forcing participation within Surveillance Capitalism
  • Human-centered AI
  • Ethical data governance
  • Transparent, consent-based systems
  • AI as a collaborator—not a controller

The Off-Ramp: The Electronic Bill of Rights (EBOR)

The Electronic Bill of Rights is not just a privacy framework.

It is an AI governance framework.

It addresses the full pipeline:

  • Ban on surveillance capitalism
  • Ban on use of contracts of adhesion (forced participation)
  • Data ownership and consent
  • Ban on manipulative and addictive systems
  • Algorithmic accountability
  • Separation of AdTech from government and defense
  • Protection against foreign and adversarial exploitation
  • Enforcement through real legal accountability
  • Enforcement of existing FCC public trust/obligation laws
  • Enforcement of existing consumer/child protection and privacy laws

EBOR does what most policy proposals fail to do:

It regulates the input, the system, and the outcome.

Because if the data is compromised, the intelligence built on it is compromised.

Final Thought

Ex Machina was never about machines becoming human.

It was about machines learning how to control humans.

We are no longer watching that story.

We are living it.

The question now is not whether AI will evolve.

It will.

The question is:

  • Will we govern it before it governs us?

And more importantly—

Will we choose the Einstein path… or continue down the Oppenheimer path?

To learn more about the Electronic Bill of Rights, visit the website at: www.ElectronicBillofRights.com

About the Author 

Rex M. Lee has 35 years of wireless industry and application development experience. He is a freelance technology journalist, privacy and data security consultant, Blackops Partners analyst and researcher, and public speaker. For more information, visit My Smart Privacy at: www.MySmartPrivacy.com

Dive Deeper with TechTalk Summits

Want to keep your business competitive and strengthen your approach to cybersecurity and digital sovereignty? Join us at a TechTalk Summits event to connect with industry experts and gain actionable insights to stay ahead of the curve.

Register now and stay one step ahead!