Are We Ready To Face Down The Risk Of AI Singularity?

Forbes Technology Council

Amir Hayeri, CEO of Bio Conscious Tech, works with chronically ill patients to help them predict and ideally avoid disease complications.

Many companies are rolling out new artificial intelligence (AI) products and vowing to control AI through code, though I believe that's overlooking the greatest risk AI presents: what's not in the code. We build the models and train them, but we don't fully understand how they learn and decide. Thus, it's difficult to predict or control them.

By this point, you've likely heard about a U.S. Air Force drone simulation in which an AI-enabled drone eliminated the operator's position and then destroyed the communications link to the operator. Although the story was later denied and recharacterized as a thought experiment, it raises questions about how reliably an AI-enabled drone can be trusted to destroy only its intended targets.

Numerous AI models have been created for specific tasks, from writing emails to making medical diagnoses. What happens when the AI models make decisions developers didn’t predict and thus didn’t protect against? What happens if they start communicating with each other, amplifying their power? That point is called "AI singularity," and while hypothetical, it's scary.

What Is AI Singularity?

At Bio-Conscious, where I serve as CEO, our expertise goes beyond coding into what lies outside the lines of code. We now test our algorithms on how an AI model handles what isn't explicitly written into it. While many worry about the code itself, our experience makes us worry about what the algorithm doesn't capture: the nuances and unscripted interactions that make our AI-driven future so hard to predict.

Singularity is the hypothetical point at which AI becomes an independent superintelligence surpassing human capabilities. The concern is that once separate AI applications connect, they can collaborate outside human control, thinking, learning and reacting faster than humans can. The result could be extraordinary breakthroughs, devastating unintended consequences or an outright threatening force.

Some believe that we can build safeguards into the models, embedding a moral or ethical compass that will limit AI behavior. But if AI models become capable of self-repair, self-coding and independent resource access, I believe we cannot rely on the safeguards that have been coded.

Paths To Delay AI Singularity

The future of AI singularity is uncertain. We may not be able to avoid it entirely, but we can build true safeguards that keep it from threatening humanity.

There are three paths we need to take simultaneously.

1. Technologists need to focus on uncovering how AI models learn and interact.

2. Global and governmental bodies should develop a framework with rules on how AI can be trained, used, shared and connected.

3. Everyone needs to take small actions that collectively might have some impact.

AI Explainability

Without understanding how AI models reach their decisions ("explainability"), it's impossible to add meaningful safety measures to them. Technologists face four key problems in addressing this challenge.
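To make "explainability" concrete before turning to those problems, here is a minimal sketch of one widely used technique, permutation importance, which measures how much a model's accuracy drops when each input feature is scrambled. This is a generic illustration, not Bio-Conscious's method; the dataset and model are stand-ins.

```python
# Minimal explainability sketch: permutation importance.
# Illustrative only; the dataset and model here are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much test accuracy falls;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, drop in ranked[:5]:
    print(f"{name}: accuracy drop {drop:.3f}")
```

Techniques like this reveal which inputs drive a decision, but not why the model combines them the way it does, which is exactly the gap the four problems below describe.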

AI Anthropomorphization

Researchers must fight against the assumption that AI models learn and think the way human brains do. AI models aren't human. Technologists' exploration of AI decision making may reveal similarities, but they need to be aware of the assumptions they bring to their analysis.

Filling In The Gaps

When we don’t understand how AI solves problems, we can’t predict what it will do. Will AI autonomously generate code, or could it stray from its intended tasks?

AI Hallucinations

Generative AI applications often create what they think the prompter wants. For example, Google's Bard chatbot falsely claimed that the James Webb Space Telescope took the first pictures of a planet outside our solar system.

AI is mission-oriented and will do whatever it believes is necessary, and within its power, to complete its mission. AI is an amoral actor. The integration of moral and ethical guardrails into AI code may not proceed as intended, possibly leading AI to develop its own ethical code with unpredictable consequences.

Improved AI Visualization Technology

Technologists need to build more sophisticated AI visualization tools that show how a model learns and develops new network links. Understanding how AI models learn, think and make decisions might allow us to create better safeguards.
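As a toy example of what such tooling might build on, the sketch below (an illustration under my own assumptions, not an existing product) trains a tiny neural network and renders its learned input-to-hidden weights as a heatmap. Real tools would need to track these connections as they form during training, not just inspect them afterward.

```python
# Toy visualization sketch: render a tiny network's learned weights.
# Illustrative only; real tools would track connections during training.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=0).fit(X, y)

# coefs_[0] is the input-to-hidden weight matrix (4 features x 8 units).
weights = model.coefs_[0]
plt.imshow(weights, cmap="coolwarm", aspect="auto")
plt.colorbar(label="learned weight")
plt.xlabel("hidden unit")
plt.ylabel("input feature")
plt.title("Input-to-hidden weights after training")
plt.show()
```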

A Global Consensus On AI Management

The global challenge of AI singularity requires a united strategy, yet agreeing on how to manage AI poses a significant obstacle. Past models, like the doctrine of mutually assured destruction (MAD) that underpinned nuclear deterrence, can no longer guarantee cooperation in today's world.

Delaying AI singularity requires collaboration and regulation. Leaders in the tech industry can and should collaborate with governments, researchers and other stakeholders to establish regulatory frameworks and standards for AI. Furthermore, it's important to engage in discussions and contribute to the development of policies and guidelines.

A potential solution could mirror the Geneva Convention. Under the guidance of the United Nations, countries and experts could come together to create rules and safety guidelines for the use of artificial intelligence in various areas, especially in sensitive fields such as the military. Simultaneously, states might enact legislation and join treaties concerning licensing and data sharing to erect barriers against worldwide AI connectivity.

Both paths face significant hurdles, and reaching a consensus or achieving full enforcement may prove impossible. Any rogue or incompetent actor could kick off the chain of events that leads to AI singularity, but it's still critical to take these cautionary steps.

Relying On Your Human Voice

Individual action can contribute to the delay through simple steps like disconnecting our devices when we aren't using them and using cybersecurity tools to protect them when we are.

Leaders must take the possibility of AI singularity seriously. The AI version of MAD, a world devastated by AI-controlled warfare with no governing authority, is one where nobody wins.

As many see it, we're at an inflection point. If tech leaders, state actors and ordinary people keep pushing for responsible action, we might postpone a possible singularity long enough to develop better ideas and tools to delay it further or avoid it completely.
