Press "Enter" to skip to content

How to babysit your AI

Despite the remarkable advances made in the field of artificial intelligence in recent decades, the technology has failed time and time again to deliver on its promise. AI-powered natural language processors can write everything from news articles to novels, but not without lapsing into racist and discriminatory language. Autonomous cars can navigate without driver input, but they can’t eliminate the risk of foolish accidents. AI has personalized online advertising, but it still loses context badly from time to time.

We cannot trust AI to always make the right decision. That doesn’t mean we should stop developing and deploying next-generation AI technologies. Instead, we need to set up guardrails by having humans actively filter and validate data sets, by keeping control of decision making, or by adding guidelines that are then applied automatically.

An intelligent system makes its decisions based on the data fed to the complex algorithm used to create and train the AI model to interpret that data. This ability to “learn” and make decisions autonomously is what distinguishes it from an engineered system, which operates solely on the programming its creator provided.

Is it AI or just smart engineering?

But not all systems that appear to be “intelligent” use AI. Many are examples of clever engineering used to train robots, either through explicit programming or by having a human perform the action while the robot records it. There is no decision-making process. Rather, it is automation technology that works in a highly structured environment.

The promise that AI holds for this use case is to allow the robot to operate in a less structured environment, truly generalizing beyond the examples it has been shown. Machine learning and deep learning technologies allow a robot to identify, pick up, and transport a pallet of canned goods across a warehouse, then do the same with a television, without requiring humans to update its programming to account for the different product or location.

The inherent challenge in building any intelligent system is that its decision-making ability is only as good as the data sets used to develop it and the methods used to train its AI model.

There is no such thing as a 100% complete, unbiased, and accurate data set. That makes it extremely difficult to create AI models that aren’t themselves potentially incorrect and biased.

Consider the new large language model (LLM) from Facebook’s parent company, Meta, recently made available to any researcher studying natural language processing (NLP) applications such as voice-enabled virtual assistants on smartphones and other connected devices. A report by the company’s own researchers warns that the new system, OPT-175B, “has a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt, and adversarial prompts are trivial to find.”

The researchers suspect that the AI model, trained on data that included raw text from social media conversations, is unable to recognize when it “decides” to use that data to generate hate speech or racist language. I give the Meta team full credit for being open and transparent about these challenges and for making the model available at no cost to researchers who want to help solve the bias problem that plagues all NLP applications. But it is further proof that AI systems are not yet mature and capable enough to operate independently of human intervention and decision-making.

If we can’t trust AI, what can we do?

So if we can’t trust AI, how do we encourage its development while reducing the risks? By adopting one (or more) of three pragmatic approaches to the problem.

Option #1: Filter the input (the data)

One approach is to apply domain-specific data filters that prevent irrelevant or incorrect data from reaching the AI model while it is being trained. Suppose an automobile manufacturer building a small car with a four-cylinder engine wants to incorporate a neural network that detects minor failures of the engine’s sensors and actuators. The company may have a comprehensive data set covering all of its models, from compact cars to large trucks and SUVs. But it should filter out irrelevant data to ensure it doesn’t train the four-cylinder car’s AI model with data specific to an eight-cylinder truck.
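
To make this concrete, here is a minimal sketch of such an input filter in Python. It assumes a hypothetical pandas table of sensor logs with an engine_type column; the column names and values are illustrative, not taken from any real manufacturer’s pipeline.

```python
# Minimal sketch of a domain-specific input filter (hypothetical column names).
import pandas as pd

def filter_training_data(raw_logs: pd.DataFrame,
                         engine_type: str = "4-cylinder") -> pd.DataFrame:
    """Keep only the rows relevant to the target engine before training."""
    relevant = raw_logs[raw_logs["engine_type"] == engine_type]
    # Also drop rows with missing sensor readings so the model never
    # learns from incomplete records.
    return relevant.dropna(subset=["rpm", "throttle_position", "o2_sensor"])

# Usage: train only on the filtered slice, never on the fleet-wide data set.
# model.fit(filter_training_data(raw_logs))
```

The point is simply that the filter runs before training, so irrelevant or incomplete records never reach the model at all.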

Option #2: Filter the output (the decision)

We can also set up filters that protect the world from bad AI decisions by confirming that each decision will lead to a good outcome, and by preventing the system from acting if it won’t. This requires domain-specific inspection triggers: we trust the AI to make certain decisions and act within predefined parameters, while any other decision requires a “sanity check.”

In an autonomous car, for example, an output filter would establish a safe operating range that tells the AI model, “I will only allow you to make adjustments within this safe range. If you are outside that range and decide to throttle below 100 rpm, you must consult a human expert first.”
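
As a rough sketch of how such an output filter might look in code, the snippet below assumes a hypothetical throttle decision and a human-escalation callback; the RPM thresholds and names are invented for illustration.

```python
# Minimal sketch of an output filter / "sanity check" (hypothetical values).
from dataclasses import dataclass
from typing import Callable

SAFE_RPM_RANGE = (800.0, 4500.0)  # assumed safe operating band

@dataclass
class ThrottleDecision:
    target_rpm: float

def apply_decision(decision: ThrottleDecision,
                   ask_human: Callable[[ThrottleDecision], bool]) -> bool:
    """Act on the AI's decision only if it falls inside the safe range;
    anything outside the range is escalated to a human expert first."""
    low, high = SAFE_RPM_RANGE
    if low <= decision.target_rpm <= high:
        return True                  # inside the guardrails: act autonomously
    return ask_human(decision)       # outside the guardrails: sanity check

# Example: apply_decision(ThrottleDecision(95.0), ask_human=lambda d: False)
# would block the action pending human review.
```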

Option #3: Employ a ‘supervisor’ model

It’s not uncommon for developers to reuse an existing AI model for a new application. This enables a third guardrail: running a supervisor model, based on the previous system, in parallel with the new one. The supervisor compares the decisions of the new system with what the old system would have done and tries to determine the reason for any discrepancies.

For example, suppose the autonomous driving system in a new car incorrectly decelerates from 55 mph to 20 mph while traveling on a highway, whereas the previous system maintained a speed of 55 mph under the same circumstances. In that case, the supervisor could later review the training data supplied to both systems’ AI models to determine the reason for the disparity. And at the moment of decision, the supervisor lets the new system merely suggest the slowdown instead of making the change automatically.
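
Here is a minimal sketch of such a supervisor check, assuming both the new and the legacy system expose a hypothetical predict_speed() method; discrepancies above a tolerance are flagged for later review rather than applied automatically.

```python
# Minimal sketch of a parallel "supervisor" check (hypothetical interfaces).
def supervised_speed(new_model, legacy_model, sensor_input,
                     tolerance_mph: float = 5.0):
    """Return the new model's decision only when it roughly agrees with the
    legacy model; otherwise fall back to the legacy behavior and flag the
    discrepancy so the training data can be reviewed later."""
    new_speed = new_model.predict_speed(sensor_input)
    old_speed = legacy_model.predict_speed(sensor_input)
    if abs(new_speed - old_speed) <= tolerance_mph:
        return new_speed, None
    discrepancy = {"new": new_speed, "legacy": old_speed, "input": sensor_input}
    return old_speed, discrepancy  # suggest, don't act: keep the legacy speed
```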

Think of the need to supervise AI like the need to watch over children when they are learning something new, like riding a bike. An adult serves as a guardrail, riding alongside to help the new cyclist keep their balance and giving them the information they need to make smart decisions, such as when to apply the brakes or yield to pedestrians.

Care and feeding of AI

In summary, developers have three options to keep an AI on track during the production process:

  1. Pass only validated training data to the AI model.
  2. Implement filters to double-check the AI’s decisions and prevent the system from taking incorrect and potentially dangerous actions.
  3. Run a parallel supervisor model that compares the AI’s decisions against those of a similar legacy model.

None of these options will work, however, if developers fail to carefully select their data and learning methods and to establish a reliable, repeatable production process for their AI models. Most importantly, developers need to remember that no one requires them to build their new apps or products around AI at all.

Be sure to apply plenty of natural intelligence and ask yourself, “Is AI really necessary?” Smart engineering and classic technologies can often offer a better, cleaner, more robust, and more transparent solution. In some cases, it’s better to avoid AI altogether.

Michael Berthold is founding CEO of KNIME, a data analytics platform company. He has a PhD in computer science and more than 25 years of experience in data science. Michael has worked in academia, most recently as a professor at the University of Konstanz, Germany, and previously at the University of California, Berkeley, and Carnegie Mellon, and in industry at Intel’s Neural Network Group, Utopy, and Tripos. Michael has published extensively on data analytics, machine learning, and artificial intelligence. Connect with Michael on LinkedIn and at KNIME.

New Tech Forum offers a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2023 IDG Communications, Inc.
