
AI still requires human expertise

From GitHub Copilot to ChatGPT-infused Bing search, AI is increasingly permeating our everyday lives. While it’s directionally good (machines do more work so people can focus their time elsewhere), you need a great deal of expertise in a given field to trust the results AI delivers. Ben Kehoe, former cloud robotics research scientist at iRobot, argues that people still bear ultimate responsibility for what the AI suggests, and that responsibility requires the expertise to determine whether the AI’s suggestions are any good.

Responsibility for results

We’re in the awkward toddler phase of AI, when it shows tremendous promise, but it’s not always clear what it will become when it grows up. I mentioned earlier that the greatest AI successes to date have not come at the expense of people, but as a complement to people. Think of machines running computationally intensive queries on a massive scale, answering questions that people could handle, but much more slowly.

Now we have things like “fully autonomous” self-driving cars that promise the complete opposite. Not only are the AI and software not good enough yet, but the law still doesn’t allow a driver to blame the AI for a crash (and there are a lot of crashes, at least 400 last year). As another example, ChatGPT is awesome until it starts fabricating information, as it did during the public launch of the new AI-powered Bing.

None of this is meant to disparage these or other uses of AI. Rather, it’s a reminder that, as Kehoe argues, people can’t offload responsibility for the results of using AI onto the AI itself. He emphasizes: “Many of the AI takes I see claim that AI will be able to take over all the responsibility for a given task from a person, and implicitly assume that the person’s responsibility for the task will just… evaporate?” People are liable if their Tesla crashes into another car. They are also responsible for what they choose to do with ChatGPT’s output, or for copyright infringement if DALL-E misuses copyrighted material, and so on.


For me, that responsibility becomes more critical when using AI tools like GitHub Copilot for work.

Watching the watchers

It’s not hard to find developers who benefit from Copilot. Here’s a developer who appreciated its quick API suggestions but found it otherwise “shaky” and “slow.” There are many other mixed reviews. Developers like how it generates boilerplate code, finds and suggests relevant APIs, and more. Developer Edwin Miller notes that Copilot’s suggestions are “generally accurate,” which is both a good thing and a bad thing. It’s nice that Copilot can be trusted most of the time, but that’s also the problem: it can only be trusted most of the time. To know when its suggestions can’t be trusted, you need to be an experienced developer.

Again, this is not a big problem. If Copilot helps developers save some time, that’s a good thing, isn’t it? It is, but it also means that developers must take responsibility for the results of using Copilot, so it may not always be a great choice for developers earlier in their careers. What might be a shortcut for an experienced developer could lead to poor results for a less experienced one. It’s probably unwise for a newcomer to take such shortcuts anyway, as doing so could stifle their learning of the craft of programming.

So yes, by all means, let’s use AI to improve our driving, searching, and scheduling. But let’s also remember that until we have full confidence in their results, experienced people need to keep their proverbial hands on the wheel.

Copyright © 2023 IDG Communications, Inc.
