Recently, ChatGPT has taken the internet by storm. It reached an estimated 100M+ users faster than any previous consumer application, roughly two months after launch, and many people are using it to learn about subjects such as technology. This raises the question: can AI replace the humans who write these explanations?
AI models are not yet capable of replacing writers. They predict the words most likely to come next, while humans are conscious and choose words because they best represent information they have actually synthesized. For example, you can ask ChatGPT the following:
“Is France the capital of Paris?”
A human would recognize this as a malformed question: France is a country and Paris is a city, and a country cannot be the capital of a city. ChatGPT, however, produces a confused answer; remember, it doesn’t actually reason about the question, it simply guesses which words are likely to come next in this context.
ChatGPT: No, Paris is not the capital of France. Paris is a city in France and the capital of France is Paris.
This answer contradicts itself: it denies that Paris is the capital of France and then affirms it in the same sentence. ChatGPT has no sense that the sentence is nonsense; it is simply producing the words most likely to be said in this context. I will explore this more later on.
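To make the idea of "picking the most likely next word" concrete, here is a minimal sketch of a bigram model: a toy predictor that always emits the word it has most often seen follow the current one. This is a drastic simplification of how large language models actually work (they use neural networks over long contexts, not raw word counts), but it illustrates the core point: the output is driven by statistical likelihood, not understanding.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    # Choose the likeliest continuation -- no meaning involved.
    return max(followers, key=followers.get)

# A tiny made-up corpus for illustration.
corpus = "paris is the capital of france and paris is a city in france"
model = train_bigrams(corpus)
print(predict_next(model, "capital"))  # → of
print(predict_next(model, "paris"))    # → is
```

Notice that the model would happily continue any prompt with whatever followed it most often in training, whether or not the resulting sentence is true. That is, at toy scale, the same failure mode as the self-contradictory answer above.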
The point is that, given these errors and inaccuracies, such models will not replace humans, who can actually understand information rather than merely predict what an answer should look like.