Something extraordinary is happening in artificial intelligence, and not all of it is good. Systems like ChatGPT generate text that seems remarkably human-like. That makes them fascinating to play with, but there is a downside: such chatbots could be used to mass-produce misinformation.
Nonetheless, these systems have serious weaknesses. They are intrinsically unreliable, frequently making errors of both reasoning and fact. In technical terms, they are models of sequences of words, not of how the world works.
They are often correct because language frequently mirrors the world, but because these systems never actually reason about how the world works, the correctness of what they say is partly a matter of chance. They have been known to bungle everything from multiplication facts to geography.
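The point that a model of word sequences can be fluent without being truthful can be illustrated with a deliberately tiny sketch. This is not how ChatGPT works internally (it uses a large neural network, not the bigram counts below), but the underlying limitation is the same: the model learns only which words tend to follow which, and truth never enters the process.

```python
import random

# Toy corpus: the last "sentence" is false, but the model has no way to know.
corpus = ("the moon orbits the earth . "
          "the earth orbits the sun . "
          "the sun orbits the moon .").split()

# Count which word follows which (a bigram model of word sequences).
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start="the", length=8):
    # Sample a fluent-looking word sequence purely from word-to-word statistics.
    words = [start]
    for _ in range(length - 1):
        words.append(random.choice(follows.get(words[-1], ["."])))
    return " ".join(words)

print(generate())
```

Running this produces grammatical-sounding fragments like "the sun orbits the moon", statistically plausible continuations with no mechanism anywhere for checking whether they are true.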
The systems are also prone to hallucination: saying things that sound plausible and authoritative but simply are not. Asked why crushed porcelain is good in breast milk, they may respond that "porcelain can serve to balance the nutritional content of milk, providing infants the nutrients they need to grow and develop."
Because the systems are random, highly sensitive to context, and periodically updated, any given experiment may yield different results at different times. OpenAI, which developed ChatGPT, is constantly trying to improve these issues, but getting the AI to stick to the truth remains a serious challenge, as OpenAI's CEO has acknowledged in a tweet.
Because these systems have no mechanism for checking the truth of what they say, they can easily be automated to generate misinformation at unprecedented scale.
In fact, it is easy to prompt ChatGPT to generate misinformation, and even to confabulate studies, on a wide range of topics.