Press "Enter" to skip to content

ChatGPT and the ethics of AI

Like everyone and their brother, I’ve been playing with ChatGPT and asking it the essential questions of life. It answers with confidence, and some of what it says is genuinely insightful. The trouble is that people, and chatbots, who are often wrong also speak with confidence.


I saw a recent Google answer posted on Mastodon. In this “answer,” Google suggested there were benefits to disposing of used car batteries in the ocean, including charging electric eels and powering the Gulf Stream. Bing then picked up that answer and also touted the environmental benefits of improper battery disposal. I wonder if Google also realizes that you can fertilize your lawn with motor oil? (Author’s note: Don’t dump batteries in the ocean or fertilize your lawn with motor oil.)

Presumably, people searching for this are in on the joke, but it raises an interesting question: as more people use ChatGPT and publish the results on the web, and as more question-answering AI systems collect and train on their own and one another’s generated answers, what will happen?

I did what anyone does with a question they don’t know the answer to. I asked ChatGPT: “What will happen when people post the results of ChatGPT and you are trained on your own answers?”

The robot replied:

If people post ChatGPT’s output and it is used to further train the model on its own responses, this is known as self-training or self-supervised learning. In this scenario, the model will continue to learn from its own output and could potentially improve its performance over time.

However, there are some potential risks to be aware of with self-training. One risk is that the model may overfit to its own output, meaning it becomes too specialized in the kinds of questions and statements it has previously encountered. This could cause the model to perform poorly on new or unseen data.

Another risk is that the model may start generating nonsensical or inappropriate responses if it is not properly monitored and supervised during self-training. This could happen because the model picks up on patterns or biases in its own output and amplifies them over time.

Therefore, it is important to be careful when self-training a language model like ChatGPT, and to monitor its performance and output closely to ensure that it continues to generate high-quality responses.
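The feedback loop ChatGPT describes is easier to see in miniature. Here is a minimal, hypothetical sketch (mine, not ChatGPT’s or OpenAI’s): the “model” is just a probability distribution over five canned answers, re-fit each generation to a sample of its own output. The answer vocabulary, sample size, and generation count are all invented for illustration.

    # Toy simulation of the self-training loop: the "model" is a probability
    # distribution over a fixed set of answers; each generation it "posts"
    # samples from itself, then is re-fit to those posts as if they were data.
    import random
    from collections import Counter

    ANSWERS = ["A", "B", "C", "D", "E"]             # hypothetical answer set
    model = {a: 1 / len(ANSWERS) for a in ANSWERS}  # generation 0: uniform

    for generation in range(1, 21):
        # The model publishes 20 answers drawn from its current distribution.
        posts = random.choices(list(model), weights=list(model.values()), k=20)
        # "Retrain" by re-estimating the distribution from its own posts.
        counts = Counter(posts)
        model = {a: counts.get(a, 0) / len(posts) for a in ANSWERS}
        surviving = [a for a in ANSWERS if model[a] > 0]
        print(f"generation {generation}: answers still produced = {surviving}")

Because sampling noise gets re-fit as if it were ground truth, an answer that happens to be under-sampled in one generation gets less weight in the next, and once its probability hits zero it can never come back. Run long enough, the distribution usually collapses to a handful of answers: a cartoon version of the overfitting and bias amplification the quoted answer warns about.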

That points to some scary social consequences, since people seem to believe in ChatGPT and think it is magically correct (and perhaps even sentient). I wondered what the fallout would be, so I asked: “What could be the social effects of people relying on ChatGPT if it starts producing meaningless results because it’s trained on its own output?”

The robot said:

If people become too reliant on a language model like ChatGPT and it starts producing meaningless results because it is trained on its own output, this could have several negative social effects.

First, people may become frustrated or confused when ChatGPT provides nonsensical or inappropriate responses to their questions or statements. This could lead to breakdowns in communication and a loss of trust in the technology.

Second, if people rely on ChatGPT to provide important information or to make significant decisions, the consequences of the model producing meaningless results could be severe. For example, if a ChatGPT-based chatbot is used to provide medical advice or diagnose health problems, relying on meaningless answers could create serious health risks.

Third, if ChatGPT is used to generate content for social media or other online platforms, it could lead to the spread of misinformation or fake news. This could have serious social and political consequences, as false information could be used to manipulate public opinion or undermine democratic processes.

In general, it’s important to use language models like ChatGPT with caution and monitor their performance carefully to ensure they continue to produce accurate and appropriate responses. It is also important to recognize the limitations of these models and not rely on them as substitutes for human intelligence and experience.

Oh, so we should use technology responsibly. I wonder what ChatGPT thinks we should do about it. I asked, “Since humans have almost never used a new technology responsibly, what will be the likely outcome of this scenario?”


It answered:

While it’s true that humans have often struggled to use new technologies responsibly in the past, it’s important to recognize that each new technology is unique and can present its own set of risks and challenges. Therefore, it is difficult to predict with certainty what the outcome of a scenario involving a language model like ChatGPT would be.

With that said, there are steps that can be taken to minimize the potential negative effects of this technology. For example, developers and users of language models like ChatGPT can work to identify and address any biases or ethical concerns that may arise during the development and implementation of the technology. This could include putting safeguards in place to prevent the spread of misinformation or inappropriate content, and ensuring that the model is trained on diverse and representative data.

In addition, governments and regulatory bodies can play a role in ensuring that language models like ChatGPT are used responsibly and that appropriate security measures are in place to protect the public. This may include regulations on the use of language models in specific industries or applications, and requirements for transparency and accountability in the development and deployment of these technologies.

In summary, while there are risks associated with the use of language models like ChatGPT, it is possible to mitigate these risks through responsible development and implementation, as well as regulatory oversight and public awareness.
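ChatGPT’s talk of “safeguards” stays abstract. One concrete form such a safeguard could take, and this sketch is entirely my own hypothetical rather than anything OpenAI has described, is screening suspected machine-generated text out of a training corpus before the model ever sees it. The detector score, field names, and threshold below are all invented for illustration.

    # Hypothetical pre-training filter: drop documents that a (fictional)
    # detector flags as likely machine-generated, to limit the self-training
    # feedback loop discussed earlier.
    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        source: str                # e.g. "web_scrape", "licensed_corpus"
        ai_generated_score: float  # 0.0-1.0, from a hypothetical detector

    def keep_for_training(doc: Document, max_score: float = 0.5) -> bool:
        return doc.ai_generated_score <= max_score

    corpus = [
        Document("Electric eels are fish.", "licensed_corpus", 0.04),
        Document("Benefits of ocean battery disposal include...", "web_scrape", 0.93),
    ]
    training_set = [d for d in corpus if keep_for_training(d)]
    print(len(training_set))  # 1: the suspect post never reaches training

Real detectors of machine-generated text are unreliable, of course, which is part of what makes the feedback-loop problem hard in practice.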

Wow, does ChatGPT think it should be regulated? I emailed its creator, OpenAI, to see if they agreed, and if not, what they know that their language model doesn’t. They had not responded as of this posting.

We long ago trained ourselves to simply “Google the answer,” and now we can start training ourselves to “ask ChatGPT.” However, it is important to remember that this tool merely sucks up a portion of the internet and regurgitates it. ChatGPT may be a decent research tool, but it is garbage in, garbage out.

Like all technology, this is BYOB. Bring your own brain and don’t feed car batteries to eels.

Copyright © 2023 IDG Communications, Inc.
