A series of articles asking Mr. Toshinao Sasaki about the future of technology and society.
The theme this time is AI and “dialogue” with people.
I would like to ask you again about “dialogue” between AI and humans, on the occasion of the release of your book, "We asked Professor Toshinao Sasaki about ChatGPT, from the future of AI to how to use it in business". First of all, now that we have entered the second half of the year, I feel the buzz around generative AI has settled down somewhat. What do you think?
Toshinao Sasaki: Every year, the American research firm Gartner publishes a “hype cycle.” When a new technology emerges, it first reaches a peak of inflated expectations, a period of enthusiasm. This is followed by a trough of disillusionment; once that passes, the technology enters a slope of enlightenment, when it begins to spread, and finally a plateau of productivity, when its use stabilizes in earnest.
This year's hype cycle was announced in late August: ChatGPT and generative AI are at the peak of inflated expectations, while the metaverse is in the trough of disillusionment. So there is a good chance that ChatGPT will enter its own period of disillusionment between the second half of this year and next year.
What are some of the challenges in the current state of generative AI?
Sasaki: There is the problem of “hallucination,” in which the AI confidently gives wrong answers, so-called “lying.” ChatGPT learns from data across the entire Internet, and as a countermeasure, suspicious data, such as content that promotes hate and discrimination, or fake news, is removed manually.
However, the amount of data that needs to be removed is huge, and it is difficult to remove it all.
Also, in Japan, the kanji compound "視覴" (※) became a topic a while ago. It is a word that does not exist, but was generated by ChatGPT. Yet if you search for that word, you get a large number of hits. The reason is that web pages generated by ChatGPT have already spread across the Internet.
The problem here is that ChatGPT learns from data on the Internet. That data was originally supposed to be written by humans, but as more and more web pages are created by AI, ChatGPT ends up learning from them again. In effect, it just keeps learning from its own output, and what will happen next as a result is unpredictable.
Another problem is that when you ask questions of an interactive AI like ChatGPT, it also learns from the text you input. If your questions include confidential information, that too can be learned and surface in answers given to users at other companies. This is why a recent survey showed that 70% of Japanese companies have a policy of not using ChatGPT internally.
Regarding this issue, Microsoft's upcoming Microsoft 365 Copilot and Windows Copilot are said to handle copyright and intellectual-property concerns properly. If problems can be avoided in this way and the recognition spreads that generative AI can be used as a “tool,” I think there is a possibility that we will get through the period of disillusionment and enter a period of full-fledged adoption.
*A word that does not originally exist in Japanese. An Internet search reveals "視覴者" used in sentences that appear to be AI output. The AI apparently substituted "覴" for "聴" (as in "視聴者," meaning "viewer"), and the word was talked about as a typical example of hallucination (an artificial intelligence confabulation).