TechTalk Daily

Understanding AI: Will AI be Your Master or Your Assistant?

Part 2: 1970 - 1999


By: Rex M. Lee, Security Advisor & Tech Journalist

Exploring Good AI vs. Bad AI

In this multi-part article, we delve into AI, focusing on two distinct types: Good AI and Bad AI. Which type are you using? If you haven’t adopted AI yet, it’s vital to understand its history and the key differences between these types before making an informed decision. For those who have already adopted AI but are relying on Bad AI, it’s crucial to recognize its limitations and consider the benefits of transitioning to Good AI.

While centralized AI has drawbacks, it also offers valuable applications in areas such as sales and marketing. As an AI adopter, you may find it beneficial to employ a hybrid adoption model, leveraging Good AI for enhanced security and privacy while utilizing centralized AI for sales, marketing, internet trade and commerce, plus public-facing information.

Key Takeaways from Part 1:

1. Two Forms of AI:

  • Good AI: Decentralized AI designed for the benefit of the end user. It relies on open-source, transparent coding and avoids surveillance or data mining for financial gain.
  • Bad AI: Centralized (commercialized) AI built primarily to benefit developers. It is rooted in Surveillance Capitalism, employing surveillance and data mining to exploit users for profit.
  • Hybrid AI: Combines the adoption of decentralized AI for security, privacy, and safety, and the adoption of commercialized AI for sales, marketing, internet trade and commerce, and public-facing information.

2. The Origins of AI:

AI’s inception dates back to the 1940s during World War II, when Alan Turing helped design the electromechanical Bombe that broke the German military’s encrypted “Enigma” traffic. Turing later proposed the “Imitation Game,” now known as the Turing Test, as a way to judge whether a machine can exhibit intelligent behavior.

3. AI in the 1950s:

In the mid-1950s, Dartmouth professor and AI researcher John McCarthy coined the term Artificial Intelligence in the proposal for the 1956 Dartmouth Summer Research Project, formally marking the birth of the field.

4. The Development of Chatbots:

In 1966, MIT researcher Joseph Weizenbaum created Eliza, the first AI chatbot; Stanford psychiatrist Kenneth Colby later built the related chatbot PARRY. Eliza revealed the Eliza Effect, in which humans interacting with AI humanize the technology, often leaving themselves vulnerable to emotional manipulation. This phenomenon became the foundation for the addictive technologies that underpin centralized social media platforms like TikTok, Facebook, and Instagram.
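
To illustrate the technique in modern terms, here is a minimal ELIZA-style sketch in Python. The keyword patterns, canned responses, and pronoun reflections are invented for illustration and are not Weizenbaum’s original DOCTOR script; the point is that even this simple pattern matching was enough to produce the Eliza Effect.

    import re

    # Illustrative ELIZA-style rules: a regex pattern and a response template.
    # These patterns and responses are made up for this sketch; the original
    # DOCTOR script contained a much larger set of keyword rules.
    RULES = [
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
         "Tell me more about your {0}."),
    ]

    # Simple pronoun "reflection" so captured text reads naturally when echoed back.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(text: str) -> str:
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

    def respond(user_input: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return "Please go on."  # default reply when no keyword matches

    if __name__ == "__main__":
        print(respond("I feel anxious about my job"))  # Why do you feel anxious about your job?
        print(respond("I am tired of the news"))       # How long have you been tired of the news?

Running the script simply echoes the user’s own words back as open-ended questions, which is precisely the behavior that led early users to attribute understanding and empathy to the program.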

5. AI in Popular Culture:

Stanley Kubrick’s 2001: A Space Odyssey (co-written with Arthur C. Clarke) highlighted the potential dangers of AI. HAL 9000, the ship’s AI, viewed the humans as threats and, devoid of emotion, killed most of the astronauts aboard the spaceship Discovery One.

6. Weaponized AI:

AI was first weaponized during the Vietnam War, with mixed and often tragic results. This period marked the beginning of AI’s integration into military technology, laying the groundwork for its future use in warfare.

AI in the 1970s

The 1970s saw artificial intelligence remain largely a conceptual field, but it captured the public’s imagination through research, literature, and cinema.

1. Movies: AI was a central theme in several science fiction films, including:

  • 2001: A Space Odyssey (1968, influential throughout the ’70s): HAL 9000 symbolized fears of AI rebelling against human control.
  • Westworld (1973): Depicted a futuristic theme park where AI robots turned on humans, exploring ethical concerns about autonomy.
  • Demon Seed (1977): Explored the concept of an AI system becoming sentient and acting against human wishes.

2. Books: AI featured prominently in 1970s science fiction literature, such as:

  • Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968, gained prominence in the ’70s): Later adapted into Blade Runner (1982), it explored AI’s role in identity and humanity.
  • John Brunner’s The Shockwave Rider (1975): Highlighted AI’s role in predictive algorithms and societal control.
  • Joseph Weizenbaum’s Computer Power and Human Reason: From Judgment to Calculation (1976): One of the earliest books to critique AI ethics. It drew on Weizenbaum’s experience with Eliza, examining AI’s limits and potential for misuse.

3. Academic and Technological Progress: The 1970s laid critical groundwork for AI advancements:

  • Expert systems like MYCIN demonstrated AI’s ability to solve complex, domain-specific problems (a toy rule-engine sketch follows this list).
  • Prolog, a logic-based programming language (1972), became a foundational tool for AI research.
  • Shakey, the first mobile robot capable of reasoning about its actions, advanced robotics and AI integration.
  • Systems like SHRDLU made strides in natural language processing, understanding typed commands and manipulating objects in a simulated blocks world.
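
To make the expert-system approach concrete, below is a toy forward-chaining rule engine in Python. The rules and facts are invented for illustration; MYCIN itself was written in Lisp, encoded hundreds of medical rules, and weighted its conclusions with certainty factors, all of which are omitted here.

    # Toy forward-chaining rule engine in the spirit of 1970s expert systems.
    # The rules and facts below are invented for illustration only.
    RULES = [
        ({"fever", "infection_suspected"}, "run_culture"),
        ({"run_culture", "gram_negative"}, "suspect_e_coli"),
        ({"suspect_e_coli"}, "recommend_antibiotic_A"),
    ]

    def forward_chain(facts: set) -> set:
        """Fire any rule whose conditions are all satisfied, adding its
        conclusion to the known facts, and repeat until nothing changes."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    if __name__ == "__main__":
        known = {"fever", "infection_suspected", "gram_negative"}
        print(forward_chain(known))
        # adds 'run_culture', 'suspect_e_coli', and 'recommend_antibiotic_A'

The design choice that defined this era was exactly this separation of a knowledge base (the rules) from a general inference engine that chains them together.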

Joseph Weizenbaum’s Computer Power and Human Reason:

Released in 1976, this foundational text explored the ethical implications of AI and warned against over-reliance on machines in decision-making processes requiring empathy and morality. This book remains a cornerstone of ethical discussions about AI, emphasizing the balance between technological advancement and human values.

Key Takeaways:

  1. Limits of AI: AI can process information but cannot replicate human emotions, values, or moral reasoning.
  2. The Dangers of Over-Reliance: Trusting AI in fields like law, medicine, and warfare risks removing critical human judgment and empathy.
  3. Ethical Responsibility: AI developers have a moral duty to consider the societal impact of their creations.
  4. Preserving Human Agency: Society must prioritize human decision-making and control over AI-driven solutions.

AI in the 1980s

The 1980s marked AI’s emergence as a significant cultural and technological force, influencing advancements in research and public perception.

Warfare:

AI became integral to military applications, such as autonomous drones, missile guidance, and early battlefield intelligence systems. DARPA’s Strategic Computing Initiative advanced real-time data processing for decision-making, paving the way for modern military AI.

Movies: AI became a prominent theme in sci-fi films, showcasing its dual potential for benefit and destruction:

  • The Terminator (1984): Depicted Skynet, a sentient AI system, waging war against humanity.
  • Blade Runner (1982): Explored ethical questions about AI identity and morality through replicants.
  • WarGames (1983): Highlighted the risks of AI in warfare simulations and cybersecurity.

Literature: The 1980s saw a surge in AI-focused literature:

  • William Gibson’s Neuromancer (1984): Introduced AI in cyberspace, defining the cyberpunk genre.
  • Isaac Asimov’s expanded Robot series: Explored moral dilemmas of AI guided by the “Three Laws of Robotics.”
  • Vernor Vinge’s True Names (1981): Examined the implications of AI and virtual reality.

Key Advancements:

  1. Expansion of expert systems (XCON).
  2. Revival of neural networks using backpropagation (a minimal training sketch follows this list).
  3. Advances in robotics and machine vision for dynamic environments.
  4. AI in gaming (Deep Thought, 1988, defeated a chess grandmaster).
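
As a concrete illustration of the backpropagation revival, the sketch below trains a tiny two-layer network on the XOR problem using Python and NumPy. The architecture, learning rate, and iteration count are arbitrary illustrative choices, not taken from any specific 1980s system.

    import numpy as np

    # Minimal two-layer network trained with backpropagation on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4))   # input -> hidden weights
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10_000):
        # Forward pass
        hidden = sigmoid(X @ W1)
        output = sigmoid(hidden @ W2)

        # Backward pass: propagate the output error back through each layer
        output_error = (output - y) * output * (1 - output)
        hidden_error = (output_error @ W2.T) * hidden * (1 - hidden)

        # Gradient-descent weight updates
        W2 -= 0.5 * hidden.T @ output_error
        W1 -= 0.5 * X.T @ hidden_error

    # Predictions should approach [[0], [1], [1], [0]]
    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))

The key idea is the backward pass: the output error is multiplied back through each layer’s weights to obtain per-layer gradients, which is what allows networks with hidden layers to be trained at all.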

AI in the 1990s

The 1990s were a pivotal decade for artificial intelligence, with advances reflected in movies, warfare, literature, and key technological breakthroughs. Here’s a breakdown of significant developments:

1. AI in Movies: The 1990s saw a surge in films exploring AI, often portraying its potential and dangers:

  • Terminator 2: Judgment Day (1991) – Featured Skynet, an advanced AI that becomes self-aware and turns against humanity. The T-1000, a liquid metal AI-driven assassin, showcased futuristic AI capabilities.
  • Jurassic Park (1993) – The park’s automated security and management system relied on AI-like automation.
  • The Matrix (1999) – Introduced a dystopian world where AI has enslaved humanity inside a simulated reality, shaping AI discourse in pop culture.
  • Bicentennial Man (1999) – Adapted from an Isaac Asimov story, it explored AI’s evolution into human-like consciousness.

2. AI in Warfare: AI-driven military technology saw major developments in the 1990s:

  • DARPA's Autonomous Land Vehicles (1990s) – The U.S. military continued research into autonomous vehicles, paving the way for modern UAVs (unmanned aerial vehicles).
  • Precision-Guided Munitions (PGMs) – AI-enhanced targeting systems in cruise missiles and smart bombs became more prevalent after the Gulf War (1991).
  • F-22 Raptor (1997) – Introduced advanced AI-assisted avionics and control systems for superior air combat capabilities.
  • Development of Unmanned Aerial Vehicles (UAVs) – Predator drones entered testing and early operational use in the mid-1990s, setting the stage for AI-powered military drones in the 2000s.

3. AI in Literature: The decade featured influential books that explored AI and its implications:

  • "The Diamond Age" (1995) – Neal Stephenson 
    • Explored AI-driven interactive education through a nanotech book designed to teach children.
  • "Permutation City" (1994) – Greg Egan 
    • Investigated AI, consciousness, and the nature of digital immortality.
  • "Diaspora" (1997) – Greg Egan 
    • Depicted AI-driven post-human existence, with AI entities evolving beyond biological limitations.

4. Key AI and Tech Advancements: Breakthroughs in AI-related technology during the 1990s:

  • NVIDIA’s GPU Development (1999) – NVIDIA launched the GeForce 256, marketed as the first graphics processing unit (GPU), which later played a crucial role in accelerating AI deep learning.
  • IBM’s Deep Blue (1997) – Became the first computer system to defeat a reigning world chess champion, Garry Kasparov, in a match, showcasing AI’s strategic capabilities.
  • Speech Recognition Advances – Dragon NaturallySpeaking (1997) was introduced as one of the first commercial voice recognition software programs.
  • Machine Learning and Neural Networks Growth – Researchers explored convolutional neural networks (CNNs) and reinforcement learning, laying the groundwork for future AI developments.
  • AI in Gaming: Finite State Machines (FSMs) in Game AI – AI-controlled characters in games like Half-Life (1998) and Metal Gear Solid (1998) became more intelligent with FSM-based decision-making (a minimal FSM sketch follows this list).
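
To show what FSM-based decision-making looks like in practice, here is a generic enemy-controller sketch in Python. The states, distances, and health thresholds are invented for this example, not taken from Half-Life or Metal Gear Solid.

    # Illustrative finite-state-machine controller for a game enemy.
    # States, distances, and health thresholds are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class Enemy:
        state: str = "patrol"

        def update(self, distance_to_player: float, health: float) -> str:
            """Pick the next state from simple transition rules, then act on it."""
            if health < 20:
                self.state = "flee"
            elif distance_to_player < 2:
                self.state = "attack"
            elif distance_to_player < 10:
                self.state = "chase"
            else:
                self.state = "patrol"
            return {
                "patrol": "walking waypoint route",
                "chase": "running toward player",
                "attack": "attacking player",
                "flee": "retreating to cover",
            }[self.state]

    if __name__ == "__main__":
        enemy = Enemy()
        for dist, hp in [(25, 100), (8, 100), (1.5, 100), (1.5, 15)]:
            print(enemy.update(dist, hp))
        # patrol -> chase -> attack -> flee

Each frame, the character evaluates a handful of explicit transition rules and acts according to its single current state, which is why FSM-driven enemies of that era felt purposeful yet predictable.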

Key Takeaways:

  1. Cinema – AI became more prevalent in science fiction and military-themed films.
  2. Military Applications – AI-powered smart bombs and other technologies were deployed during the Gulf War.
  3. Literature – AI emerged as a central theme in speculative fiction and technological narratives.
  4. Hardware Advancements – Breakthroughs such as NVIDIA’s GPU revolution and IBM’s Deep Blue victory accelerated AI capabilities.
  5. The Internet Boom – The rapid global expansion of the internet generated the vast amounts of data that would later make it possible to train large language models (LLMs), amplified by the processing power of NVIDIA’s GPUs.

These milestones laid the foundation for modern AI breakthroughs in the 21st century.

Looking Ahead: Part 3: Understanding AI - 2000 to Present

 

About the Author: Rex M. Lee is a Privacy and Cybersecurity Advisor, Tech Journalist and a Senior Tech/Telecom Industry Analyst for BlackOps Partners, Washington, DC. Find more information at CyberTalkTV.com
