Generative AI models like ChatGPT are so surprisingly good that some now claim AIs are not just the equal of humans but often more intelligent. They produce beautiful works of art in a dizzying array of styles. They generate texts full of rich details, ideas, and insights. The artifacts they create are so varied, so seemingly unique, that it's hard to believe they came from a machine. We are just beginning to discover everything that generative AI can do.
Some observers like to think these new AIs have finally crossed the threshold of the Turing test. Others believe the threshold wasn't so much passed as blown to pieces. The work is so good that, surely, another batch of humans is already headed for the unemployment line.
But once the sense of wonder wears off, so does the star power of generative AI. Some observers have made a sport of phrasing questions in just the right way to get intelligent machines to spit out something silly or wrong. Some deploy the old logic bombs popular in elementary school art class, like asking for a picture of the sun at night or a polar bear in a snowstorm. Others make bizarre requests that expose the limits of AI's context awareness, also known as common sense. Those so inclined can count the ways in which generative AI fails.
Here are 10 disadvantages and flaws of generative AI. The list may read like sour grapes: the jealous scribbling of a writer who stands to lose his job if the machines are allowed to take over. Call me a puny human rooting for the human team, hoping that John Henry will keep beating the steam drill. But shouldn't we all be a little worried?
Plagiarism
Generative AI models like DALL-E and ChatGPT don't really create anything; they recombine patterns from the millions of examples in their training sets. The results are a cut-and-paste synthesis drawn from various sources, which, when done by humans, is known as plagiarism.
Sure, humans learn by imitation too, but in some cases the borrowing is so obvious it would alarm an elementary school teacher: AI-generated content consisting of large blocks of text reproduced more or less word for word. Sometimes, though, there is enough blending or synthesis involved that even a panel of college professors would have trouble spotting the source. Either way, what's missing is uniqueness. For all their brilliance, these machines are not capable of producing anything truly new.
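To make the word-for-word case concrete, here is a minimal sketch of how verbatim borrowing can be flagged: collect every n-word sequence in a generated text and check it against a source corpus. The texts, the helper names, and the six-word threshold are all invented for illustration.

```python
def ngrams(text, n=6):
    """All n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(generated, sources, n=6):
    """Return every n-gram the generated text shares verbatim with a source."""
    gen = ngrams(generated, n)
    hits = set()
    for src in sources:
        hits |= gen & ngrams(src, n)
    return hits

# Invented texts for illustration only.
corpus = ["it was the best of times it was the worst of times"]
output = "the critic wrote that it was the best of times indeed"

for gram in shared_passages(output, corpus):
    print(" ".join(gram))  # -> it was the best of times
```

Real plagiarism detectors normalize punctuation and use hashing to scale to millions of documents, but the principle is the same: long shared word sequences rarely happen by chance.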
Copyright
While plagiarism is largely a problem for schools, copyright law applies to the marketplace. When one human lifts another's work, they risk being hauled before a court that could impose fines running into millions of dollars. But what about AIs? Do the same rules apply to them?
Copyright law is a complicated subject, and the legal status of generative AI will take years to settle. But remember this: when AIs start churning out work that looks good enough to put humans in the unemployment line, some of those humans will surely spend their newfound free time filing lawsuits.
Unpaid Labor
Plagiarism and copyright are not the only legal issues raised by generative AI. Lawyers are already dreaming up new ethical issues for litigation. Should a company that makes a drawing program, for example, be allowed to collect data on its users' drawing behavior and then use that data to train AI models? Should humans be compensated for such use of their creative labor? Much of the success of the current generation of AI stems from access to data. So what happens when the people generating the data want a piece of the action? What is fair? What will be considered legal?
Information Is Not Knowledge
AIs are particularly good at mimicking the kind of intelligence that takes years to develop in humans. When a human scholar can introduce us to an obscure 17th-century artist or write new music in a nearly forgotten Renaissance tonal structure, we have good reason to be impressed. We know it took years of study to develop that depth of knowledge. When an AI does the same things with just a few months of training, the results can be stunningly accurate and correct, but something is missing.
A well-trained machine that can find the right receipt in a digital shoebox stuffed with billions of records can also learn everything there is to know about a poet like Aphra Behn. You might even believe the machines were built to decode the meaning of Mayan hieroglyphs. AIs may seem to mimic the playful, unpredictable side of human creativity, but they can't really pull it off. Unpredictability, meanwhile, is what drives creative innovation. Industries like fashion are not just addicted to change; they are defined by it. Artificial intelligence has its place, but so does good, hard-won human intelligence.
Intellectual Stagnation
Speaking of intelligence, AIs are inherently mechanical and rule-based. Once an AI analyzes a training data set, it creates a model, and that model doesn’t really change. Some engineers and data scientists envision gradually retraining AI models over time, so the machines can learn to adapt. But for the most part, the idea is to create a complex set of neurons that encode certain knowledge in a fixed form. Consistency has its place, and it can work for certain industries. The danger with AI is that it will forever be trapped in the zeitgeist of its training data. What happens when we humans become so dependent on generative AI that we can no longer produce new material for training models?
Privacy & Security
Training data for AIs has to come from somewhere, and we're not always sure what ends up baked into a neural network. What if AIs leak personal information from their training data? To make matters worse, locking down an AI is much harder than locking down a database, because AIs are designed to be so flexible. A relational database can restrict access to a particular table containing personal information; an AI, however, can be queried in dozens of different ways. Attackers will quickly learn how to ask the right questions, in the right way, to get at the sensitive data they want. Suppose, for example, that the latitude and longitude of a particular asset are blocked. A clever attacker could ask for the exact time the sun rises over several weeks at that location, and an obedient AI will attempt to answer. From those sunrise times, the coordinates can be worked out. Teaching an AI to protect private data is something we still don't know how to do.
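To show how such an indirect query can defeat a blocked field, here is a minimal sketch of the sunrise attack, assuming the attacker has already coaxed several weeks of approximate sunrise times (in UTC) out of the model. The "secret" coordinates, the simplified solar model (no equation of time or atmospheric refraction), and the brute-force grid search are all stand-ins for illustration.

```python
import math

def sunrise_utc(lat_deg, lon_deg, day_of_year):
    """Approximate sunrise (UTC hours) from a simplified solar model."""
    # Solar declination in degrees (cosine approximation).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle at sunrise; the clamp guards against polar day/night.
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl))
    hour_angle = math.degrees(math.acos(max(-1.0, min(1.0, x))))
    # Solar noon is 12:00 local; shift by longitude to get UTC.
    return 12.0 - hour_angle / 15.0 - lon_deg / 15.0

# Hypothetical "blocked" location the AI refuses to reveal directly.
SECRET = (48.86, 2.35)
days = range(60, 102, 7)  # several weeks of innocent-looking queries
leaked = [sunrise_utc(*SECRET, d) for d in days]  # times the AI reported

# The attacker simply searches for coordinates that reproduce the leak.
lats = [i / 10.0 for i in range(-600, 601, 5)]    # -60.0 .. 60.0
lons = [i / 10.0 for i in range(-1800, 1801, 5)]  # -180.0 .. 180.0
best, best_err = None, float("inf")
for lat in lats:
    for lon in lons:
        err = sum((sunrise_utc(lat, lon, d) - t) ** 2
                  for d, t in zip(days, leaked))
        if err < best_err:
            best, best_err = (lat, lon), err

print("Recovered location:", best)  # close to the blocked coordinates
```

The defense is just as awkward as the attack is simple: the model would have to recognize that a string of innocuous astronomy questions is, in aggregate, a request for the blocked coordinates.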
Undetected Bias
Even the early mainframe programmers understood the core problem with computers when they coined the acronym GIGO: "garbage in, garbage out." Many of the problems with AIs stem from poor training data. If the data set is inaccurate or skewed, the results will reflect it.
The hardware at the center of generative AI may be as logic-driven as Spock, but the humans who build and train the machines are not. Biased opinions and partisanship have been shown to find their way into AI models. Perhaps someone used biased data to create the model. Maybe they added overrides to prevent the model from answering particular burning questions. Maybe they put in hard-wired responses, which then become hard to detect. Humans have found many ways to ensure that AIs are excellent vehicles for our harmful beliefs.
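A toy example makes the first failure mode concrete. In the sketch below, a simple classifier is trained on synthetic "hiring" records whose historical labels were skewed against one group; the model dutifully learns and reproduces the skew. The data set, the features, and the numbers are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic history: skill is the only legitimate signal, but past
# decisions penalized group == 1 regardless of skill.
group = rng.integers(0, 2, n)                # protected attribute
skill = rng.normal(0.0, 1.0, n)              # legitimate qualification
noise = rng.normal(0.0, 0.5, n)
hired = (skill - 0.8 * group + noise) > 0    # biased historical labels

# Train on the biased history, exactly as it was recorded.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, different group membership:
probs = model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1]
print(probs)  # e.g. ~[0.97, 0.66]: same skill, lower score for group 1
```

Nothing in the code is malicious; the bias arrives silently with the data, which is exactly why it so often goes undetected.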
Machine Stupidity
It's easy to forgive AI models their mistakes because they do so many other things well. It's just that many of the bugs are hard to anticipate, because AIs think differently than humans do. For example, many users of text-to-image tools have found that AIs get quite simple things wrong, like counting. Human beings pick up basic arithmetic early in elementary school and then use the skill in a wide variety of ways. Ask a 10-year-old to draw an octopus and the child will almost surely make sure it has eight legs. Current versions of AI tend to fall flat when it comes to abstract and contextual uses of mathematics. That could easily change if model builders devote some attention to this lapse, but there will be others. Machine intelligence differs from human intelligence, and that means machine stupidity will differ too.
Human Gullibility
Sometimes without realizing it, we humans tend to fill in the gaps in AI intelligence. We supply the missing information or interpolate the answers. If the AI tells us that Henry VIII was the king who killed his wives, we don't question it, because we don't know that history ourselves. We simply assume the AI is correct, the same way we do when a charismatic presenter waves his hands. If a statement is made with confidence, the human mind tends to accept it as true and correct.
The trickiest problem for users of generative AI is knowing when the AI is wrong. Machines can't lie the way humans do, but that makes them even more dangerous. They can produce paragraphs of perfectly accurate data, then veer off into speculation, or even outright slander, without anyone noticing the switch. Used-car dealers and poker players tend to know when they're lying, and most have a tell that betrays them; AIs don't.
Infinite Abundance
Digital content is infinitely reproducible, which has already strained many of the economic models built around scarcity. Generative AI is going to break those models even further. Generative AI will put some writers and artists out of work; it will also upend many of the economic rules we all live by. Will ad-supported content still work when both the ads and the content can be endlessly recombined and regenerated? Will the free part of the internet descend into a world of bots clicking ads on web pages, all of it created and infinitely reproduced by generative AI?
Such easy abundance could undermine every corner of the economy. Will people still pay for non-fungible tokens if they can be copied forever? If making art is so easy, will it still be respected? Will it still be special? Will anyone care if it's not special? Could everything lose value when everything is taken for granted? Was this what Shakespeare meant by the slings and arrows of outrageous fortune? Let's not try to answer that ourselves. Let's just ask a generative AI for an answer that's funny, weird, and ultimately trapped in some mysterious netherworld between good and evil.