
Japan's Past and Present: Once at the Cutting Edge of AI Research and Development

2024/12/05 Editors of Iolite

Japanese researchers pioneered the development of artificial intelligence

Did you know that when Geoffrey Hinton and John Hopfield received the 2024 Nobel Prize in Physics, some media outlets made a fuss, asking, "Why not Amari?" The "Amari" in question is Shunichi Amari, professor emeritus at the University of Tokyo.

In 1972, roughly a decade before Hinton and Hopfield began their work in the 1980s, Amari published a paper describing essentially the same neural network mechanism they would later discover. Indeed, Hinton himself has said in various venues that "Amari was the first to discover the content of my research."

Another Japanese researcher working on neural networks and deep learning in the 1970s was Kunihiko Fukushima. Fukushima conceived the core idea behind deep learning in AI and, in 1978, devised and established the theory for realizing it.

Japan was undoubtedly a pioneer in neural network research, the foundation of the concept of AI. Why, then, is it now seen as lagging behind in AI?

First, we need to understand briefly what it means for AI to learn. The nervous system of living organisms contains cells called neurons. Neurons are connected to one another by projections called axons and dendrites, through which they transmit the information received by nerve cells. The junction where an axon meets a dendrite is called a synapse.

The strength of these synaptic connections, which carry information between neurons, changes in response to external stimuli. For example, when a nerve cell first senses the stimulus of touching something hot, a synapse forms to transmit that stimulus to other neurons.

At first the synapse is weak, but after touching something hot a second and third time, the connection grows stronger and stronger, until eventually you react to touching something hot with pain or fear. This is how living organisms learn.

If the stimulation of a synapse is replaced by switching an electrical signal on and off, and learning through repeated stimuli is replaced by a weight whose value grows with the number of times the signal fires, it might be possible to reproduce human learning in a machine.
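To make the analogy concrete, here is a minimal sketch (the code and names are illustrative, not from the article) of a single artificial neuron: inputs are on/off signals, each synapse is a numeric weight, and repeated stimulation strengthens the weight, just as a biological synapse thickens with use.

# A minimal sketch (not from the article): one artificial "neuron" whose
# synaptic weights strengthen with repeated on/off stimulation, loosely
# mimicking how a biological synapse thickens with use.

def fire(inputs, weights, threshold=1.0):
    """The neuron fires (returns 1) when the weighted sum of on/off
    inputs crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def reinforce(inputs, weights, rate=0.3):
    """Hebbian-style learning: each active input's weight grows a little,
    the machine analogue of a synapse thickening with repetition."""
    return [w + rate * x for x, w in zip(inputs, weights)]

weights = [0.2, 0.2]      # weak initial "synapses"
hot_stimulus = [1, 0]     # the first input carries the "hot" signal

for touch in range(1, 6):  # touch the hot object five times
    response = fire(hot_stimulus, weights)
    print(f"touch {touch}: weights={weights}, fires={response}")
    weights = reinforce(hot_stimulus, weights)

After a few repetitions the weighted sum crosses the threshold and the neuron begins to fire: the machine analogue of learning to flinch from heat.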

This theory had been proposed since around the Second World War, but it was dismissed as armchair theorizing and attracted little attention. In 1967, however, the structure of the cerebellum's neural circuitry was discovered, and the model once thought to be an empty theory turned out to be not so far from reality. Enthusiasm for machine learning research suddenly surged worldwide.

AI research thus became a global boom, but it was later shown that the learning methods of the time faced computational limits, and by the 1970s the boom in machine learning research had subsided. Japan, however, was different.

While the rest of the world was giving up on machine learning, Japanese researchers such as Amari and Fukushima worked doggedly to overcome what were thought to be hard computational limits. The researchers themselves may not have anticipated that discoveries such as stochastic gradient descent and the neocognitron would become the foundational technology behind modern AI.

AI theory from Japan that led to generative AI

Stochastic Gradient Descent

Gradient descent is a method that computes the gradient of a function to be minimized and searches for the function's minimum by following that gradient. For a given input A, machine learning can produce anything from answers that are far off the mark to near misses like A'. Gradient descent came to be used to find the value closest to the correct answer (minimizing the error) while updating the input parameters. Stochastic gradient descent estimates the gradient from a small random portion of the data at each step, so the computation required per update is small.
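As a hedged illustration (the toy data and variable names are my own, not from the article), the sketch below fits a line y = a*x + b with stochastic gradient descent: each step estimates the gradient of the squared error from a single randomly chosen sample rather than the whole dataset, which is what keeps the per-step computation small.

import random

# A minimal sketch of stochastic gradient descent (toy data, illustrative
# names): fit y = a*x + b by minimizing squared error, estimating the
# gradient from one random sample per step instead of the full dataset.

data = [(x, 2.0 * x + 1.0) for x in range(10)]  # points on the line y = 2x + 1
a, b = 0.0, 0.0                                  # initial parameters
lr = 0.01                                        # learning rate (step size)

for step in range(2000):
    x, y = random.choice(data)       # one sample -> cheap gradient estimate
    error = (a * x + b) - y          # prediction error on that sample
    a -= lr * 2 * error * x          # move against the gradient of error^2
    b -= lr * 2 * error

print(f"a = {a:.2f}, b = {b:.2f}")   # should approach a = 2, b = 1

Each update touches only one data point, yet over many steps the parameters drift toward the values that minimize the error over the whole dataset.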

Neocognitron

Humans recognize objects by the shading of what enters their field of view, and even when an object's position shifts slightly, they still recognize it because the shading resembles that of objects they have seen before. The neocognitron, invented by Professor Fukushima, is a mechanism that lets machines do the same. By alternating between layers that extract features from the input image and layers that tolerate positional shifts, repeatedly testing and widening the recognition range, the system comes to recognize what the image depicts. It is the basis of the image generation technology in today's generative AI.
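The following is a generic sketch in the spirit of the neocognitron's alternating layers (a simple convolution-plus-pooling toy, not Fukushima's actual model): a feature-extraction stage responds to a vertical edge wherever it appears, and a pooling stage makes that response tolerant of a one-pixel shift.

import numpy as np

# A generic sketch in the spirit of the neocognitron (not Fukushima's actual
# model): an S-cell-like stage extracts a feature with a sliding filter, and
# a C-cell-like stage (max pooling) tolerates small shifts of that feature.

def extract_features(image, kernel):
    """Slide a small filter over the image (feature-extraction layer)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def tolerate_shifts(fmap, size=2):
    """Max-pool the feature map (positional-tolerance layer)."""
    h, w = fmap.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[i * size:(i + 1) * size,
                             j * size:(j + 1) * size].max()
    return out

vertical_edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # detects vertical edges

image = np.zeros((6, 6))
image[:, 2] = 1.0                    # a vertical line at column 2
shifted = np.roll(image, 1, axis=1)  # the same line shifted one pixel right

print(tolerate_shifts(extract_features(image, vertical_edge)))
print(tolerate_shifts(extract_features(shifted, vertical_edge)))

Both printed maps come out identical: the pooling stage absorbs the positional shift, which is the essence of how the alternating layers recognize an object even when it moves slightly.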


The Generative AI Revolution

In the 1980s, computer performance began to improve dramatically. AI, which until then could handle only simple tasks, could now be taught specialized reasoning and knowledge. It was in this period that this year's Nobel laureates began their AI research.

Their research built on the results of Japanese researchers. In Japan, too, AI research was pursued with a budget of nearly 50 billion yen, led by the Ministry of International Trade and Industry (now the Ministry of Economy, Trade and Industry).

This, however, was an era before the internet. Big data had to be input manually, and computers were not yet powerful enough to process the large volumes of data being fed in, so the boom quietly died down.

Overseas, however, the future Nobel laureates continued through the second AI boom to develop the machine learning methods whose theory Amari and Fukushima had established. As explained above, machine learning is a method by which a machine learns on its own using a neural network.

Since humans do not need to teach the machine by hand, development could continue with a small team even after the second AI boom ended. While AI research in Japan waned, research overseas continued to produce remarkable results from the 2000s onward, leading to the current generative AI revolution.

The fact that generative AI has developed mainly overseas now places Japan at a serious disadvantage. The large language models (LLMs) used in machine learning are trained mainly on English. As a result, mainstream generative AI, including ChatGPT, tends to be weak at expressing Japanese, making it difficult to use in business.

Development of Japanese-language LLMs is therefore progressing rapidly in Japan. Japan has ample groundwork for AI research. If a Japanese-based LLM is created, generative AI that is easier to use in Japan will follow.


COLUMN

Notable generative AI from Japan as of November 2024

ELYZA

The University of Tokyo startup ELYZA has developed generative AI platform technology with strong Japanese support, based on Meta's large language model Llama 2. It has advantages in Japanese writing and information extraction, and is good at handling ambiguous Japanese expressions.

tsuzumi

One problem with generative AI is the large amount of power its training consumes. To address this, NTT developed "tsuzumi," an LLM specialized for Japanese language processing. By keeping the parameter count small, NTT made the model lighter and cheaper to run, and commercial deployment has begun.


