What are the seven biggest risks companies face when using generative AI?
The evolution of generative AI technology is remarkable, and generative AI-related products and services are being announced one after another. However, when it comes to using generative AI for business purposes, there are various disadvantages and problems. In this article, we will consider the problems of generative AI and countermeasures.
The rapid development of generative AI, including ChatGPT, which can be said to be the "first year of the generative AI era," is bringing revolutionary changes to various industries. Although the reasons for its introduction vary from company to company, such as improving work efficiency in some companies and generating new ideas in others, various implementation cases have been reported in domestic companies, and it is not difficult to imagine that generative AI will become even more indispensable for companies in the future as an important tool for generating corporate profits.
In addition, according to a survey by McKinsey & Company in the United States, the economic potential of generative AI is estimated to bring an economic effect of 2.6 to 4.4 trillion dollars per year, even if it is estimated only from the perspective of use cases, and the market is expected to continue to expand steadily. In Japan, the market is expected to grow by an average of 47.2% annually from 2023 to 2030, reaching 1.7774 trillion yen in 2030, a figure that is just under 15 times that of 2023.
There are many examples of generative AI-related stocks making a big splash in the stock market. For example, the stock price of SoundHound AI (SOUN) began to move in early February, and rose by about 4.6 times in less than a month to its high on February 27. SoundHound AI is a story from the US stock market, but there are also a number of AI-related stocks whose stock prices are rising sharply in the Japanese market. There is no doubt that stock prices will rise as the market expands.
It is not difficult to imagine that new generative AI-related products and services will be released one after another in the future. It is believed that in the future, generative AI will become multimodal, allowing users to seamlessly link multiple information and systems through generative AI without users being aware of the complexity of the system. For example, generative AI will become the UI, and it will be possible to select and utilize systems according to user input. In fact, it is becoming possible to use generative AI to design UIs and generate source code. A representative example is the AI tool "Galileo AI" that instantly generates UIs. Since Galileo AI can be used immediately on a browser by anyone and can automatically generate UIs without coding, it has become a hot topic as it can significantly reduce the time required for UI production, which has been carried out using design software such as Figma and AdobeXD.
It is also expected that the use of AI agents will become more practical, and we will enter an era in which generative AI will move people and systems. Microsoft announced that its Copilot AI system can operate independently like a virtual employee, and OpenAI has released GPT-4 Omni, which can "see, hear, and speak," and the company's CEO Sam Altman said that AI agents have the greatest technical potential.
Though various companies are working to put AI agents to practical use in this way, it has also been pointed out that there are several challenges to overcome before AI agents can become widespread. For example, generative AI has a problem called "hallucination," which outputs erroneous information. When an AI agent uses generative AI, it may be influenced in some way to perform the wrong task or generate the wrong output. The fact that such AI agents do not always execute tasks correctly means that there is a high degree of uncertainty, making it a high hurdle to practical use.
Seven major risks for companies using generative AI
These issues are not limited to AI agents. It is well known that there are several issues and risks when companies introduce generative AI. The seven issues and risks that have been pointed out so far are as follows:
Risk of confidential information leaks
Risk of cyber attacks
Risk of copyright and trademark infringement
Deep fakes
Generation of ethically inappropriate output
Generation of incorrect output
Business mistakes due to overconfidence in generative AI
First, there is the risk of information leakage. Data entered into the generative AI is basically managed in the cloud. Therefore, if confidential information or personal information within the company is input, there is a possibility that confidential information may be leaked to the provider or user of the generative AI service. In fact, at Samsung Electronics in South Korea, an employee uploaded sensitive data to ChatGPT and accidentally leaked confidential information. In this incident, in which the company's source code was leaked to the outside via generative AI, Samsung formulated regulations prohibiting employees from using generative AI such as ChatGPT. In addition, according to Singaporean information security company Group-IB, ChatGPT login information was leaked from Japan and traded on the black market on the dark web. It seems that at least 661 cases have been confirmed to have been leaked from Japan. If you are dealing with generative AI, you must face the risk of information leakage.
Next is the risk of cyber attacks. Currently, the most dangerous is "prompt injection." Prompt injection is a server attack in which a malicious user inputs special instructions or questions into a conversational AI to extract confidential information or data that is not publicly available. This has actually happened in the case of a US university student who performed prompt injection into Microsoft's Bing-equipped generative AI search engine, resulting in the leaking of confidential information.
The problem of copyright and trademark infringement and deep fakes are also important issues for companies. Generative AI can use existing copyrighted works as training data without the permission of the copyright holder, but the usual copyright infringement consideration applies when publishing or selling content generated by generative AI. Naturally, if the generated content is found to be similar or dependent on existing content, it may be subject to claims for damages and injunctions as copyright infringement, and may also be subject to criminal penalties.
The problem is that companies do not know all the copyrighted works in the world, so content generated by generative AI without the company's knowledge may infringe copyright. Deep fakes can also be used for fraud and the spread of fake news, and such misuse may lower the company's value.
Ignoring such risks will lead to the generation of ethically inappropriate output. In fact, a financial officer of a multinational company in Hong Kong was tricked into transferring about 3.8 billion yen in a video call that exploited DeepFake. The output of generative AI depends heavily on the content of the training data, so if the content is biased, there is a risk that it will promote discrimination or hatred regarding race or sexuality. Of course, there is a possibility that ethically inappropriate output, including the aforementioned copyright infringement and violation of portrait rights, will be generated.
Regarding the issue of copyright infringement by generative AI, the New York Times has actually filed a lawsuit against OpenAI and Microsoft, seeking billions of dollars in damages. In addition, authors such as George R. R. Martin, the author of "Game of Thrones," have also filed a lawsuit against OpenAI for copyright infringement.
Case [2024 Latest] 4 Problems Caused by Generative AI
661 ChatGPT Accounts Trading on the Dark Web
In June 2023, Singaporean security company Group-IB announced that over 100,000 ChatGPT accounts were being traded on the dark web. At least 661 of them were confirmed to have been leaked from Japan. ChatGPT's standard settings save prompts and their answers, so if the answers contain confidential information, there is a risk that they can be spied on if the account is obtained.
It is strictly prohibited to input confidential information or personal information into generative AI
Confidential source code leaked to the outside via generative AI
In 2023, Samsung Electronics announced that an employee had uploaded confidential source code to Ch atGPT and it was leaked. In response to this incident, Samsung Electronics notified employees that they were prohibited from using generative AI such as ChatGPT. Samsung said it was unclear how serious the information leak was, but was concerned that the shared data would be stored on the servers of other AI service operators. In response to this incident at Samsung, Amazon has also notified its employees not to share any company code or confidential information on ChatGPT.
The need to set clear guidelines for employees and improve employees' AI literacy
More than 10 authors, including R.R. Martin, file copyright infringement lawsuit
More than 10 authors, including George R.R. Martin, known as the original author of the HBO program "Game of Thrones," and the National Authors Guild, a professional organization, have sued OpenAI for copyright infringement. In addition, the New York Times is also suing OpenAI in a separate lawsuit. The New York Times has also sued Microsoft for copyright infringement, and these lawsuits have sparked new discussions about the use of AI and copyright protection.
Permission is now required when using copyrighted material as training data for AI
Massive fraud case involving deepfakes
In May 2024, Arup Group, a British consulting firm on urban design and engineering, announced that its Hong Kong branch had been defrauded of a huge sum of HK$200 million (approximately 4.1 billion yen) through deepfakes. This case became a hot topic as a massive fraud case using deepfakes. In response to this incident, OpenAI, the platform operator, suspended the accounts of five organizations based in Russia, China, and Israel for violating the terms of service.
The importance of verifying sources and fact-checking
Problem of hallucination
In the first place, the problem of hallucination, which is a barrier to the practical application of AI agents, is a risk that always accompanies the use of generative AI. At present, it has been pointed out that hallucination tends to be common in answers and quantitative data extraction and calculations in fields that require high levels of expertise. Therefore, it is important to take into account the possibility of hallucination when using generative AI.
In addition, since generative AI does not have human ethics or judgment abilities, the information it provides is not always accurate. Therefore, it is not omnipotent in all situations. Since generative AI functions based on input data, if the data is incomplete or biased, the generated content will naturally contain incorrect information. For example, when creating contracts that include legal grounds or public documents such as warranty certificates, there is a possibility that they may contain incorrect information or be created with incorrect legal grounds. This is not a risk of using generative AI, but rather a mistake on the part of the human using the generative AI, but since it is handled by humans, it is necessary to take into account the risk of such human error.
The "AI will take away jobs" opinion
The above are the risks of introducing generative AI that have been pointed out at present, but even before that, the "AI will take away jobs" opinion is still being debated in the media, including online. The debate is that if AI is introduced from the perspective of so-called business efficiency, there will be many jobs in which humans will be forced into a disadvantageous position in the future. It is true that jobs with many tasks that can be patterned, such as accounting work such as accounting audits, reception and counter work, medical administration, shipping and delivery work, data entry, etc., are likely to be taken over by AI.
However, an important part of this debate is that it leaves out the perspective that some jobs will be eliminated by AI and some new jobs will be created. Throughout history, humanity has developed technologies that affect the entire system and incorporated them into society, and humans have adapted to society with new systems. In the past, there was the Industrial Revolution, and in recent years, the Internet. Naturally, there will have been jobs that have disappeared with the advent of the Industrial Revolution and the Internet. To compensate for that, new jobs that did not exist under the old systemization will have been created.
Of course, there was no provider business before the advent of the Internet, and the fact that such a business exists means that new demand has been created. The same is true for AI. Since it is humans who handle AI, the profession of AI engineer has been born.
It is a profession that did not exist before the advent of AI. This also applies to annotators who actually perform machine learning, which is essential for the development of AI, and build systems for AI to operate. In addition, jobs related to the body and mind, and jobs that require creativity, such as doctors, caregivers, curators, physical therapists, hairdressers, makeup artists, and sports instructors, are said to be unlikely to be taken over by AI.
In addition, AI is not good at creating something from scratch. Therefore, flexibility, thinking ability, leadership, and other areas related to communication and relationship building are still areas that humans are responsible for. Rather than such meaningless debates, the view announced by Gartner Japan Inc. in April 2024 that "excessive reliance on generative AI will cause customers to leave" is more interesting for companies. According to the company's survey, "By 2027, 80% of companies and organizations that introduce technology to promote innovation without a clear purpose will achieve no results and will be forced to abandon the initiative," and the reason given for this is that "in innovation that involves a transformation of the business model, it is recommended that a promotion system be established and that activities be conducted directly by management, but in a survey conducted by the company in 2023, more than half of the companies working to promote innovation responded that friction has already arisen between the promotion department and the business department, or will occur in the future." It is this kind of point that companies should pay close attention to.
Generative AI is not suitable for tasks that generate new ideas
According to Hajime Tamura of Nomura Research Institute, there are tasks that are suitable for current generative AI, which is convenient but also has various issues.
According to him, "Generative AI is suitable for tasks such as organizing miscellaneous information and summarizing large amounts of text, but since it generates answers based on pre-trained data, it tends to generate answers that you have heard somewhere before, and is not suitable for tasks such as creating new ideas."
Managers are proactive about using generative AI, but employees who are not
So how should companies respond to each risk? First, it is important to understand the characteristics of the current generative AI, which has clear strengths and weaknesses, and to appropriately set the scope of use of generative AI to maximize results and minimize risks. This will prevent inappropriate information generation and unexpected legal issues.
Next, it is necessary to select whether the AI tool to be used meets the company's requirements in terms of functionality, performance, and security measures. Selecting and introducing the AI tool that is optimal for each company's situation and purpose is important for safe and efficient AI use, and can also minimize the company's risks. These two points are essential to reduce the risk of information leakage and hallucination. Naturally, when introducing generative AI, it is extremely important to ensure the accuracy, lack of bias, and confidentiality of data in order to minimize risks. Generative AI operates based on input data, so the quality of data management is directly linked to the quality of the AI's output. Therefore, implementing appropriate data management can ensure the quality of data and reduce the risk of information leakage and inaccurate information generation.
And since generative AI is handled by humans, it is also important to minimize the possibility of the risk of human error. To do this, companies will need to set clear usage rules and manuals for employees, specifically the purpose of use within the company, the scope of use, ethical guidelines, and how to handle data. In addition, it is important for companies to improve the individual literacy of employees, that is, their understanding and skills regarding AI. It will be necessary to implement training programs and practical training within the company to ensure that employees understand the basic knowledge of generative AI, the appropriate way to use it, and the associated risks, and then create an environment where it can be used efficiently and responsibly.
As digital transformation is being called for in the world, employees will have a hard time even if managers only put up a formality of digital transformation. In order to introduce and utilize AI, collaboration with the field is necessary. If it is introduced without checking the situation, it may lead to situations such as "difficult to use," "complex work," and "no response flow when a malfunction occurs," and there are cases where it is not used and fails even after it is introduced. The stance of the management is required to clarify the problem, decide the purpose of introducing AI, clarify the work of AI and humans, and proceed in collaboration with the field.
6 Examples of Use of Generative AI by Companies
Asahi Steel: Improves manufacturing sites systematically using generative AI
Instead of managing improvement activities on an individual basis, Asahi Steel uses generative AI to systematize improvement methods by utilizing shared know-how. This allows internal knowledge to be shared to every corner of the field, making it possible to carry out improvement activities with higher productivity.
Seven-Eleven: Reduces time required for product planning by one-tenth using generative AI
Seven-Eleven has begun using generative AI to significantly reduce product planning time. This will reduce product planning time by up to 90%, and it is expected to be able to provide new products that quickly respond to market trends and customer needs.
Parco: Releases Ad video created with generative AI
Parco has released a video ad called "HAPPY HOLI DAYS Campaign" that makes full use of generative AI. This ad is made up of prompts from the characters to the background, and the narration and music are all created by generative AI.
Asahi Breweries: Improves employee internal information search efficiency with generative AI
Asahi Breweries is working on developing an internal information search system that uses generative AI, mainly in the research and development department, with the aim of making it more efficient to summarize and search technical information related to beer brewing technology and product development.
LINE: Reduces work time by 2 hours a day using generative AI
LINEYahoo has fully introduced GitHub's "GitHub Copilot" into software development, automatically generating the code required for the functions and operations that engineers want to implement, reducing engineers' work time by about 2 hours per day.
Mercari: AI assistant suggests product names that are likely to sell
Mercari has started offering the "Mercari AI Assist" function, an AI assistant that analyzes information on products that have already been listed and automatically generates and suggests product names and descriptions to improve sales. It is expected to contribute to the revitalization of transactions.
Future Points of AI
With advanced natural language processing, diverse data processing capabilities in real time, and content generation capabilities according to individual needs, more personalized services will be provided in various fields.
AI agents that understand the situation and take optimal actions will become widespread. Conventional UIs will be replaced by conversational UIs through agents, and it may become an entry point for various services such as business and daily life.
Human creativity and the processing power of AI will work together to create new value, and ideas and solutions that were previously unthinkable may be born.
Currently, generative AI technology and services are evolving every day, so new ways of using it and new processes for using it are emerging each time. It is quite possible that new risks will arise accordingly. It is also important for each company to regularly review the system and usage method even after introducing generative AI based on a promising grand design, and to always consider measures such as updates if necessary. To do this, it will be necessary to always keep up with the latest trends in generative AI both in Japan and abroad.
Interview Iolite FACE vol.10 David Schwartz, Hirata Roi
PHOTO & INTERVIEW "Yukos"
Special feature "Trends in the cryptocurrency industry in Japan", "Trump vs. Harris: What will happen to the cryptocurrency industry?", "Was the reputation economy a prophecy?"
Interview: Simon Gerovich, Metaplanet Co., Ltd., Kim Dong-Gyu, CALIVERSE
Series Tech and Future Sasaki Toshinao...etc.
MAGAZINE
Iolite Vol.10
November 2024 issueReleased on 2024/09/29
Interview Iolite FACE vol.10 David Schwartz, Hirata Roi
PHOTO & INTERVIEW "Yukos"
Special feature "Trends in the cryptocurrency industry in Japan", "Trump vs. Harris: What will happen to the cryptocurrency industry?", "Was the reputation economy a prophecy?"
Interview: Simon Gerovich, Metaplanet Co., Ltd., Kim Dong-Gyu, CALIVERSE
Series Tech and Future Sasaki Toshinao...etc.