AI Adoption, Perception, and Power: Why the West Risks Falling Behind

SXSW 2026

By Rex M. Lee | Tech Journalist | Security Advisor | TechTalk Summits | Author of “Understanding AI: Will AI Be Your Master or Your Assistant?”

The Experience That Sparked the Conversation

At SXSW, I had a powerful moment of what I would call “recalibration.”

It came through experiencing Karen Palmer’s immersive AI film Ascended Intelligence—a glimpse into what AI can be when it is used not as a threat, but as a tool for human evolution and creative expansion.

Immersive AI is not just the future of film—it represents the future of:

  • AI-driven immersive storytelling, where your responses to what you see on screen—through biometric data, words, and commands—can shape outcomes. You choose the ending, for better or worse.
  • Art, freedom of speech, and freedom of expression, unlocking entirely new forms of creative interaction.
  • Human–machine collaboration, where AI is used to enhance human potential—not replace it.

Rather than promoting fear or an impending AI apocalypse, the message is quite the opposite:

  • AI should be adopted responsibly, with humans remaining firmly in control.
  • AI should serve humanity—helping us evolve in a positive direction—as an assistant, not a master.

At the same time, the darker path is already visible: the misuse of AI by governments and Big Tech for surveillance capitalism, manipulation, indoctrination, and control.

These are themes I explored in my 2025 article:

“Understanding AI: Will AI Be Your Master or Your Assistant?”, published by several outlets, including TechTalk Summits.

I have been working professionally with AI since 2017 through my work in the OTA app and platform industry at Carnegie Technologies in Austin, TX. 

During that time, we developed fintech applications, mobile wallets, and reverse logistics platforms for both the financial and telecommunications industries—adopted by the United States Department of Defense, AT&T (Cricket), T-Mobile (MetroPCS), Verizon, and other mobile network operators globally.

Our partner, Unified Signal, developed one of the first mobile wallets adopted by the DOD as early as 2005—enabling soldiers to move money anywhere in the world using internet-connected flip phones, well before the modern smartphone era.

As early adopters of AI—long before generative AI—we quickly learned how powerful it could be in driving efficiency across sales, marketing, and application development.

However, there is a growing problem with AI adoption in the West.

Too many tech leaders—what I refer to as “CEO carnival barkers in jeans and T-shirts”—have attempted to drive interest through fear, pushing apocalyptic narratives about AI that are unlikely to materialize. At the same time, they have oversold the capabilities of generative AI and continue to overhype agentic AI.

The result: declining trust and slower adoption.

Meanwhile, China has taken a very different approach—more aligned with the vision of creators like Karen Palmer—positioning AI as a tool to enhance society and improve outcomes for its population. At the same time, China is actively weaponizing AI through civil–military fusion, using it for surveillance, political indoctrination, and cognitive warfare.

Their dual-track strategy is clear:
advance AI for societal and economic gain while simultaneously leveraging it for strategic advantage.

This is accelerating their technological growth and positioning them ahead of the West in key areas of the global AI race.

I contend that fear-driven narratives around AI—and now quantum computing—will continue to suppress adoption in the United States, ultimately emerging as a national security risk.

We must instead embrace a different vision—one that leaders like Karen Palmer and myself have been advocating:

AI and emerging technologies, including quantum computing, should be used for the betterment of humanity through collaboration—while actively controlling and limiting their misuse for surveillance capitalism, government overreach, and military/intelligence exploitation, including systems developed by contractors like Palantir Technologies.

China’s dual use of TikTok follows a similar pattern: TikTok is an AI-infused app and social media platform.

In the West, TikTok (now TikTok USDS, a joint venture with Oracle) is driven by highly addictive, exploitative algorithms that amplify:

  • Harmful viral challenges
  • Provocative behavior
  • Vanity and self-promotion
  • Engagement-driven content loops

—all while generating massive revenue for ByteDance through targeted advertising. Even today, ByteDance retains a 19% ownership stake in TikTok USDS.

In contrast, China promotes a domestic version of the platform known as Douyin.

Douyin operates under a different content framework, with stronger emphasis on:

  • Education
  • Math, science, and technology
  • Arts and culture
  • Civic awareness and national pride

This contrast reflects a broader dual-use strategy—one that leverages algorithmic platforms differently across regions for both economic gain and societal influence.

We are at a crossroads.

Much like the era of J. Robert Oppenheimer—where nuclear technology could have been prioritized for clean, abundant energy but was instead first realized as a weapon—we now face a similar decision point with AI and quantum computing.

Mark this moment in history.

AI, quantum computing, and other emerging technologies can either be used to advance humanity—or to enable oppression, control, exploitation, and warfare through civil–military fusion programs already active in the U.S., China, and Russia.

As AI converges with quantum computing, its capabilities will expand exponentially.

But this experience also revealed something deeper—and far more concerning.

The Perception Divide: Optimism vs. Fear

There is a growing body of research showing a stark contrast between how AI is perceived globally.

According to a 2025 study by the University of Melbourne and KPMG:

  • Trust in AI systems

    • Emerging economies: 57%
    • Advanced economies: 39%
  • AI acceptance

    • Emerging economies: 84%
    • Advanced economies: 65%

More striking:

  • In China, over 80% of respondents reported feeling optimistic and excited about AI
  • Only 43% reported feeling worried

This is not just a cultural difference.

It is a strategic divergence in mindset.

China’s Acceleration vs. Western Hesitation

In China, AI is broadly framed as:

  • A tool for economic growth
  • A driver of national advancement
  • A pathway to technological leadership

In much of the West, AI is often framed as:

  • A threat to jobs
  • A risk to humanity
  • A force requiring restriction and control

Let’s be clear—risk awareness is necessary.

But when risk becomes the dominant narrative, it can suppress adoption, experimentation, and innovation.

And that creates a real problem.

Why This Matters: A National Security Lens

This is not just a technology story—it is a national security issue.

If one population:

  • Trusts AI
  • Uses AI
  • Integrates AI into daily workflows

…while another population hesitates due to fear and distrust…

Then the outcome is predictable:

The more adaptive population evolves faster—economically, technologically, and strategically.

China’s advantage is not just infrastructure or policy.

It is population-level adoption and mindset.

And that is one of the most powerful accelerators in any technological shift.

The Role of Big Tech and Narrative Framing

In my view, part of the issue in the West is how AI has been positioned.

There has been a tendency—especially in media and big tech narratives—to:

  • Oversell capabilities
  • Amplify existential risks
  • Promote apocalyptic scenarios

This approach may drive attention, but it also drives fear and misunderstanding.

And that fear can slow adoption at scale.

A Different Framework: AI as a Tool, Not a Threat

In my January 2025 article, “Understanding AI: Will AI Be Your Master or Your Assistant?”, published via TechTalk Summits, I outlined a different perspective:

AI should be understood as:

  • A tool for evolution
  • A force multiplier for human capability
  • A system that enhances—not replaces—human expertise

Based on my own experience working with AI since 2017 at Carnegie Technologies—developing OTA platforms in fintech and reverse logistics—I’ve seen this firsthand.

Even today:

  • AI (including GPT and agentic systems) cannot replace lived experience and true expertise

But it can:

  • Enhance thinking
  • Improve communication
  • Accelerate problem-solving
  • Expand perspective

The Real Risk: Misunderstanding AI

The real danger is not AI itself.

It is misunderstanding AI.

When people:

  • Don’t understand its history
  • Don’t understand its limitations
  • Don’t understand how it actually works

They default to fear.

And fear leads to:

  • Avoidance
  • Resistance
  • Lost opportunity

The Path Forward: Show, Don’t Scare

If we want responsible and widespread AI adoption in the West, the strategy must shift.

We need to:

  • Show real-world positive use cases such as Ascended Intelligence
  • Highlight creative and human-centered applications (like immersive AI storytelling)
  • Educate—not sensationalize
  • Balance risk awareness with opportunity

Experiences like Ascended Intelligence are powerful because they:

  • Make AI tangible
  • Make AI human
  • Make AI inspiring

Final Thought: Evolution Is Not Optional

AI represents a form of technological evolution. It will only continue to advance, pushing those who adopt it forward while leaving those who don’t behind. It is that simple.

And history is clear:

Societies that embrace and adapt to evolution advance.

Societies that fear it fall behind.

Fear of AI is, in many ways, fear of evolution itself.

The better path is not blind adoption—but informed, responsible, and optimistic engagement.

AI, like any powerful tool, will not determine our future on its own.

How we choose to understand and use it will.

Now we must ask ourselves a critical question:

Will AI be our master or our assistant?

To ensure it remains our assistant, we must establish an Electronic Bill of Rights (EBOR).

Without it, we risk becoming further entrenched in systems of control—much like the realities of Surveillance Capitalism today.

We don’t want Karen Palmer’s Ascended Intelligence to end with Surveillance Capitalism driving AI, quantum, and future technologies.

We must retain control.

Learn more:
www.ElectronicBillofRights.com