
Open letter No. 1-12


Artificial Intelligence

Artificial intelligence is not the same as an electronic calculator. It is not merely an electrical substitute for some of the functions related to human intelligence.

Let us take an example from the field of dentistry. Teeth have a three-dimensional shape. That shape is sculpted by nature, and a wide variety of organisms have teeth. Why did each organism acquire such a unique and distinctive shape? In many cases, it may have to do with feeding habits.

Human anterior teeth and molars also have completely different shapes, perhaps because the anterior teeth serve different functions than the posterior teeth. Computers are good at measuring shapes and capturing them as data. The area in which computers currently excel is converting three-dimensional shapes into data and performing simple edits that remove noise from the data. Deforming and modifying the tooth shape to suit a purpose, however, is currently done manually by humans.

What is required of artificial intelligence here? It should measure the three-dimensional shape of teeth with a three-dimensional scanner, capture the data, and then, on its own, re-edit the data to suit the purpose using algorithms similar to those a human would apply.
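To give a concrete sense of the noise-removal step that computers already handle well, here is a minimal sketch in Python. The point cloud is synthetic, and the neighbour count and threshold are illustrative assumptions, not parameters taken from any actual dental system.

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance
    to their k nearest neighbours is unusually large."""
    # Pairwise distances between all points (fine for small clouds).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    # Mean distance to the k nearest neighbours (index 0 is the point itself).
    knn = np.sort(dist, axis=1)[:, 1:k + 1]
    mean_knn = knn.mean(axis=1)
    # Keep points within std_ratio standard deviations of the average.
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

# A dense synthetic "surface" patch plus one far-away noise point.
rng = np.random.default_rng(0)
surface = rng.normal(0.0, 1.0, size=(200, 3))
noise = np.array([[50.0, 50.0, 50.0]])
cloud = np.vstack([surface, noise])
cleaned = remove_outliers(cloud)
print(len(cloud), len(cleaned))  # the distant noise point is removed
```

This is the kind of simple, rule-driven editing that computers excel at today; the harder step of reshaping the data "to suit the purpose" is what the letter asks artificial intelligence to take over.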

In the conventional method, a computer and a human being divided the work of creating dental prostheses and three-dimensional diagnostic materials, each taking charge of its own area of expertise. Nowadays, there is a demand for artificial intelligence to take over all of these tasks. The reason such functionality is required is that, under the conventional method, even a small change in the editing conditions forces all of the editing to be redone manually, which is very time-consuming. For example, it was not easy for dentists to create three-dimensional diagnostic data while performing treatment. I believe such a demand exists in dentistry today, and that it is important to create a system that enables dentists to do these things.

How can an artificial intelligence recognize the shape of a tooth, a natural object, in four dimensions? The expression “four-dimensional” is used because the teeth of the mandible move. The tooth shape, which functions only when the mandible moves, is a three-dimensional shape, but its data carry a fourth, temporal dimension.
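As a minimal illustration of what “three-dimensional shape plus time” might look like as data, here is a Python sketch. The frame count, point count, and motion are invented stand-ins, not real scan data.

```python
import numpy as np

# Hypothetical illustration of a "four-dimensional" record of tooth shape:
# each frame is a 3-D point cloud of the same surface, and stacking the
# frames over time adds the fourth (temporal) dimension.
n_frames, n_points = 30, 500                  # illustrative sizes
rng = np.random.default_rng(1)
base_shape = rng.normal(size=(n_points, 3))   # stand-in for a scanned surface

frames = []
for t in range(n_frames):
    shift = np.array([0.0, 0.1 * t, 0.0])     # stand-in for mandibular motion
    frames.append(base_shape + shift)

motion = np.stack(frames)   # shape: (time, point, xyz)
print(motion.shape)         # (30, 500, 3): a 3-D shape sampled through time
```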

What is necessary for artificial intelligence to satisfy these conditions? I believe that the key words are “Table of Consciousness and Reason,” “time administrator,” and “spatial administrator.”

The following text is translated from Japanese to English from this website: https://ecclab.empowershop.co.jp/archives/69332

We hear all about the bright prospects of AI (artificial intelligence). It may come as a surprise, then, that researchers are completely divided over how the field should develop. There is apparently a split between proponents of traditional logic-based AI and enthusiasts of neural-network modeling. In a brief survey of the controversy, Michael Wooldridge, professor of computer science at Oxford University, sums it up as: “Should we model the mind or the brain?”

AI has its historical roots in a thought experiment known as the “Turing Test,” published by the English mathematician Alan Turing. The test seems to have been intended to provide a criterion for determining whether human intelligence, in other words the mind, has been successfully modeled.

For decades, successfully modeling intelligence has been the main goal of AI. The term “symbolic AI” refers to the commonly accepted assumption that human intelligence can be expressed as logical descriptions and captured in symbolic logic. This approach has enabled great advances in AI on clearly delimited areas of human intelligence governed by well-defined rules, including, of course, mathematical computation and the well-known game of chess. The problem is that for much of human thinking, no one has been able to state such rules explicitly, even though they presumably underlie the human thought process.
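To make the symbolic-AI idea concrete, here is a toy forward-chaining sketch in Python: intelligence captured as explicit if-then rules over symbols. The rules themselves are invented for illustration and have no scientific standing.

```python
# Toy symbolic AI: derive new facts by repeatedly applying hand-written
# if-then rules until nothing new can be concluded (forward chaining).
# The rule contents are made-up examples, loosely echoing the dental theme.
rules = [
    ({"has_teeth", "eats_plants"}, "herbivore"),
    ({"has_teeth", "eats_meat"}, "carnivore"),
    ({"herbivore"}, "flat_molars_likely"),
]
facts = {"has_teeth", "eats_plants"}

changed = True
while changed:                       # keep applying rules until a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# → ['eats_plants', 'flat_molars_likely', 'has_teeth', 'herbivore']
```

Within such a clearly delimited rule set, the machine “reasons” flawlessly; the difficulty the passage describes is that most human thinking resists being written down this way.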

Traditional AI lags behind in pattern recognition and cannot understand images. The same is true of writing a set of rules for skills like hitting a ball or riding a bike: humans learn to perform, or refrain from, a behavior without ever learning a set of statements describing it.

Traditional AI has been implemented by replacing human intelligence with logical descriptions. It has often been misrepresented as being based on modeling the networks of the human brain.

An alternative approach to AI draws inspiration from how human neural networks work. Large artificial networks of “nodes” trained on large data sets learn to recognize statistical relationships in the data, and feedback loops between layers of nodes create the potential for self-correction. This approach is called “deep learning” because the networks operate at an extremely large scale and the nodes are divided into many layers. It is precisely that scale that long stood in the way of the approach: until relatively recently, there was not enough data or computing power to make deep learning practical and cost-effective. But things have changed, and in recent years we have seen, for example, rapid improvements in AI image recognition.
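The mechanism described above can be sketched at a miniature scale: a tiny two-layer network of nodes whose weights are corrected by feedback from its own errors. Real deep learning differs mainly in scale (far more layers, nodes, and data); the learning rate, layer sizes, and the XOR task here are illustrative choices, not anything from the article.

```python
import numpy as np

# A miniature "network of nodes": two layers trained on XOR by
# backpropagation, the feedback loop that corrects the weights.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass through the two layers of nodes.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error and adjust every weight slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]
```

Nothing in the network “understands” XOR; it has only absorbed the statistical relationship between inputs and outputs, which is exactly the limitation the next paragraph turns to.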

The downside of deep learning, especially when it comes to understanding text, is that this very powerful engine essentially operates without understanding anything: the AI recognizes vast numbers of correlations in a given data set and reacts accordingly. Because it does not understand the data intellectually, errors, biases, and the like can become deeply embedded unless humans correct the problem. Simply put, a deep learning system with enough processing power to absorb the entire Internet will absorb a lot of nonsense, some of it malicious.

Roger Schank, an American artificial intelligence scholar who is also a cognitive psychologist, learning scientist, educational reformer, and entrepreneur, writes: “Questions like ‘Can a computer feel love?’ are not critical. We certainly understand quite a bit about what we know about humans. More importantly, the ability to feel love is independent of a computer’s ability to understand.”

Professor Rosalind W. Picard is the founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology’s Media Lab. She argues that the brain has trillions of neurons, each of which connects with approximately 10,000 neighboring neurons; the number of ways in which neurons can be connected to each other is greater than the number of atoms in the universe. Signals between neurons are not digital signals but are encoded in continuously variable properties, such as electrical potentials or the frequency of the oscillations that stimulate neurons. New knowledge from neuroscience will no doubt influence the design of future computers, but we should not underestimate the differences between computers and the brain.

Dynamic Core Hypothesis (Consciousness)

Since we are talking about artificial intelligence, we also need to know more about how the brain functions. Therefore, I present the following text concerning human consciousness and reason, which are important functions of the brain.

Malcolm Jeeves and Warren S. Brown, authors of “Neuroscience, Psychology, and Religion,” write the following in their book.

Since conscious thought is of foremost importance in understanding the relationship between neuroscience, psychology, and religion, it is important to understand this process more deeply. The most important, and perhaps the most distinctive, thing in human beings is to be conscious.

In our view, the most helpful model of consciousness that modern research has produced is called the “dynamic core hypothesis.” It is well supported by the experimental literature, and it clarifies the difference between the conscious control of behavior and behaviors that are more unconscious and automatic.

Gerald M. Edelman is a scientist who won the 1972 Nobel Prize in Physiology or Medicine for his studies on the chemical structure of antibodies. After winning the prize, he changed his research focus, introduced an evolutionary perspective into brain science, and in 1987 proposed the theory of neuronal group selection, also known as “Neural Darwinism.” The other author, Giulio Tononi, is a psychiatrist and neuroscientist from Trento, working in America, whose research focuses on consciousness and sleep.

This model has been ably presented by neuroscientists Gerald Edelman and Giulio Tononi in their book A Universe of Consciousness: How Matter Becomes Imagination (2000), which has not yet been translated into Japanese.

In describing consciousness, Edelman and Tononi suggest a two-part model. Primary (or basic-level) consciousness is evident in the ability of many animals to “construct a mental scene,” but this form of consciousness has limited semantic or symbolic content. Higher-order consciousness is “accompanied by a sense of self and the ability, in the waking state, to construct explicit past and future scenes. It requires, at minimum, a semantic capacity and, in its most developed form, a linguistic capacity.”

What is most noteworthy about the dynamic core hypothesis is its specification of the most likely neurophysiological basis of conscious awareness. Edelman and Tononi argue that a state of consciousness and its content (whether primary or higher-order) is a temporary and dynamically changing process within the cerebral cortex that is characterized by a high degree of functional interconnectedness among widespread areas.

According to Edelman and Tononi, dynamic cores (and thus consciousness) are characteristic of the mental life of all animals to the degree that the cerebral cortex has sufficiently rich recurrent interconnections. The higher-order consciousness that is distinctive in human beings comes into play when symbolic representations and language are incorporated into dynamic cores, including the ability to represent the self as an abstract entity and to use symbols to note time (past, present, and future). Since language and other symbolic systems are learned, higher-order consciousness is a developmental achievement dependent on social interactions and social scaffolding.

In the early learning of difficult tasks or behaviors, the performance must be incorporated in and regulated by the dynamic core (that is, by consciousness). However, once the behavior is well learned (and automatic), it can go forward efficiently based on the activity of a smaller subgroup of cortical neurons (and subcortical connections) that do not have to be incorporated into the current dynamic core. For example, during normal adult speech, the basic lexical and syntactic aspects of language processing can go on in the background, while the dynamic core embodies the ideas that one is attempting to express.

It is not just having a cerebral cortex that forms our humanness, but the organization of the cerebral cortex. The highest level of the control hierarchy (in the polymodal cortex and prefrontal cortex) is not only relatively larger in humankind but slower to develop, allowing maximal opportunity for the richness of human society and culture to influence the networks of functional connections.

We believe it is no longer helpful or reasonable to consider mind a nonmaterial entity that can be decoupled from the body. The mind is an active process by which we constantly modulate our action in the world (including the world of human society and culture). Out of continual experiences of action and feedback, the mind becomes formed as a functional property of our brain and body.

For any of us to accept that our “I” is not a separate inner agent—like the captain of a ship—is a very hard task, counterintuitive to all we know. In other words, the mind is embodied.

All functions of the mind and brain are determined by brain physiology and neuronal activity and are explainable by those activities.

Dynamical Systems Theory (Emergence)

Roger Sperry was an American neuropsychologist who, along with David Hubel and Torsten Wiesel, was awarded the 1981 Nobel Prize in Physiology or Medicine for his split-brain research. Sperry stated the following.

Thus, by the 1970s, Roger Sperry argued that there had been a shift in the scientific status and treatment of conscious experience, a shift that would have far-ranging philosophic and humanistic, as well as scientific, implications. He argued that these “mentalistic revisions” in the understanding of human nature “invoke emergent forms of causal control that transform conventional scientific descriptions of both human and nonhuman nature.”

By believing that the causal role of cognitive processes cannot be “reduced to” isolated brain activity, researchers were not at all restricted in their localization studies of memory, speech, and mental planning, for example.

They also expanded on their holistic interpretation of the brain’s higher-mental causes. They viewed them as “emerging” from an ensemble of brain networks, not just a single node or module.

The concept of emergence refers to the possibility that complex entities (like organisms) can have properties that do not exist within the elements (such as molecules) that make up the complex entity. Thus, even an amoeba, as a complex organization of molecules, has properties that do not exist in the molecules themselves. The activity of the amoeba is governed by the current state of the organization of these molecules, not the properties of the molecules themselves. Hence, the activity of the amoeba is an emergent property. The framework used to study emergence is dynamical systems theory.

It attempts to explain how new causal properties (whether the behavior of amoebas or humans) can emerge in complex systems that are characterized by a high level of nonlinear interactions between their elements. A perfect example is the human cerebral cortex. Its millions of neurons and massive number of interconnections are ideally suited for a dynamical system. From the countless separate pieces of human neurobiology, the cerebral cortex produces the high-level (and nonreductive) cognitive properties of a whole person.

The ant colony is another analogy for how complex dynamical systems produce new, whole-system properties. Of course, an ant colony cannot support the emergence of something like human cognition. But that is not only because ants are “mindless”; it is because ant social interactions are vastly less complex than the interactions of neurons in the brain! Still, we can imagine individual ants as analogous to individual neurons. That makes the colony something like a brain, where the emergent properties exceed the abilities of individual ants.

Ant colonies (as colonies) show various forms of “intelligent” behavior. They manage to locate the trash pile and the cemetery at points where they are closest to each other and also at points where both are closest to the ant colony itself. Hence, the ants have solved a spatial mathematical problem. Colonies also solve the problem of the shortest distance to a source of food. They prioritize food sources.

But who is doing the solving? The solution is beyond the capacity of individual ants. Most interestingly, colonies modify their behavior over time. Colonies as a whole go through stages, progressively changing their colony-level behavior. Young colonies are more persistent and aggressive, but also more fickle, than older ones.

Each individual ant, however, operates by a set of simple rules of responding to information from the social and physical environment. A great deal of work has gone into describing these rules. The question is whether the rules governing individual ant behavior are sufficient to explain all of the colony behavior or whether there are properties of colony behavior that are emergent and cannot be reduced to the rules governing individual ants.
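Simple local rules producing colony-level order can be demonstrated in a few lines. The sketch below is a simplified version of the classic ant-clustering model from artificial-life research: each ant picks up isolated items and drops carried items where other items are dense. The grid size, item count, and probability formulas are illustrative assumptions, not the rules of any real species.

```python
import random

# Ant clustering: purely local pick-up/drop rules, yet the colony as a
# whole sorts scattered items into piles (an emergent, colony-level effect).
SIZE, ITEMS, ANTS, STEPS = 20, 60, 10, 20000
random.seed(0)

grid = [[0] * SIZE for _ in range(SIZE)]
for x, y in random.sample([(x, y) for x in range(SIZE) for y in range(SIZE)], ITEMS):
    grid[x][y] = 1

def neighbours(x, y):
    """Count items in the 8 surrounding cells (torus wrap-around)."""
    return sum(grid[(x + dx) % SIZE][(y + dy) % SIZE]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0))

def mean_crowding():
    """Average neighbour count per item: rises as items form piles."""
    occupied = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y]]
    return sum(neighbours(x, y) for x, y in occupied) / len(occupied)

ants = [[random.randrange(SIZE), random.randrange(SIZE), 0] for _ in range(ANTS)]
before = mean_crowding()
for _ in range(STEPS):
    for ant in ants:
        ant[0] = (ant[0] + random.choice((-1, 0, 1))) % SIZE
        ant[1] = (ant[1] + random.choice((-1, 0, 1))) % SIZE
        x, y, carrying = ant
        n = neighbours(x, y)
        if not carrying and grid[x][y] and random.random() < 1 / (1 + n) ** 2:
            grid[x][y], ant[2] = 0, 1    # pick up an isolated item
        elif carrying and not grid[x][y] and random.random() < (n / 8) ** 2:
            grid[x][y], ant[2] = 1, 0    # drop it where items are dense
after = mean_crowding()
print(round(before, 2), round(after, 2))  # crowding typically increases
```

No single ant knows where the piles are or that piles are forming; the clustering is a property of the system as a whole, which is exactly the question the paragraph above poses.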

Dynamical systems theory gives us a way to understand how both complex whole-ant-colony behavior and higher-order human cognition can emerge from the interactions of less complex elements (ants or neurons).

When environmental change pushes complex dynamical systems (such as ant colonies or human brains) away from equilibrium, they self-organize (and progressively reorganize) into new interactive patterns to deal with the new environment. These new patterns form as the interactive elements (individual ants or neurons) constrain each other’s activity. Individual elements start working in a coordinated manner, and the probability of each element’s doing one thing or another is altered by its interactions with all of the other elements.

Hence, an aggregate of individual elements (ants or neurons) becomes a new dynamical system (a colony with particular colony-wide properties or a brain with cognitive properties). Once this system is organized, its lower-level properties (rules of individual ant behavior or of neuron firing) interact bottom-up with the top-down relational constraints. This interaction creates higher-level patterns (colony coordination or whole-brain functioning) without altering in any way the physical laws at the microscopic level within individual ants or neurons.

By adapting to a changing environment, these dynamic systems embody what we can call meaning. That is, the state of organization of the system carries forward a “memory” of previous interactions with its environment embodied in its current organization. On the basis of previous organizations and reorganizations in response, the system is more adequately prepared to deal with similar situations in the future.

These constant reorganizations of the system do more than just adapt to a changing environment: they create increasingly more complex forms of organization. Multiple smaller systems can be reorganized into a larger system. The process creates a nested hierarchy of more and more complex emergent functional systems. Paradoxically, the constraints that lower-level elements (ants) put on each other help produce greater freedom at the higher level of the system as a whole (colony). The system develops a substantially greater number of possible interactions with its environment than it had in each preceding step of self-reorganization. The most interesting property of complex, nonlinear, dynamical systems is that they manifest novelty.

Even in small-scale mathematical models of dynamical systems, no two runs of the same system model ever come out exactly the same. Considering all these features of dynamical systems, they become perfect models for our understanding of the human brain.

We can imagine how the physical brain produces truly causal emergent properties that cannot be explained by the lower-level operations of physics, chemistry, and neurons.

Top-down & Bottom-up Discussions

Given this debate between top-down and bottom-up advocates, where does the physicalist view of human nature stand? Although the physicalist stance aims for a unitary and embodied understanding of the mind, it does not necessarily presume that mental life must be reduced only to chemistry and physics. Instead, it supports a range of theories that operate under the heading of nonreductive physicalism. In this view, while humans are taken to be entirely physical, the brain is seen as complex enough to support the emergence of mental properties and experiences that have a real influence on behavior. A similar view, but with a different emphasis, is dual-aspect monism. The term monism means, in this context, essentially the same thing as physicalism.

The modifier dual-aspect emphasizes the fact that an adequate description of human nature must entail at least two levels (or aspects)—a physical description provided by neuroscience and a mental description as represented in our subjective experiences and studied by psychology.

There is a view called emergent dualism. Here, the physical reality is taken as first and primary but then from it emerges a completely new entity—a mind or soul. This might seem like it circles back to the dualism of Descartes, but it is actually different: it gives the physical side precedence.

One neurologist has made the provocative comment that if our behavior is governed by whether or not our brains are working well, doesn’t that mean we humans don’t have as much free will as we think we do?

More than a century of accumulating evidence has revealed one thing: no matter how much we examine the brain in detail, it is still an organ of the mind.

Deep Learning

I have no knowledge of or experience with artificial intelligence technology, so below I list some information obtained from websites that may be relevant to artificial intelligence. The term “deep learning” comes up, and I take deep learning to mean the reorganization of programs by the computer itself so that it can respond to individual cases.

Deep learning is a central technology of artificial intelligence, but according to experts in the field, it is better viewed as a natural phenomenon than as engineering.

Programmer Ryo Shimizu describes it as follows in his book “Deep Learning Programming for the First Time.” Professor Yutaka Matsuo, a well-known artificial intelligence researcher at the University of Tokyo, has said that he regards deep learning as the greatest invention since the agricultural revolution.

The text that follows is an English translation excerpted from this website: https://wirelesswire.jp/2016/06/54115/

Now, on the other hand, some argue that deep learning is not a good subject for research, because even though it works, the reasons it works are not well understood. As a programmer myself, I feel that there are many things in machine learning, not just deep learning, that we cannot yet explain in theory. However, now that deep learning has shown that machine learning can be used far more practically than conventional methods, it would be a loss not to use it. At the “Super AI Emergency Countermeasures Conference,” a panel discussion at the Nico Nico Super Conference held at Makuhari Messe in April 2016, Professor Masahiko Inami of the University of Tokyo stated that even if artificial intelligence advances, it should be viewed as “newly discovered nature.”

# “Nico Nico Super Conference” is a participatory event that calls itself “a meeting of offline and online” and is held at Makuhari Messe, hosted by Dwango Inc.

Indeed, machine learning is more aptly thought of as a natural phenomenon than as an engineering technique. Using something whose principles have not been elucidated, simply because it can be controlled from an engineering standpoint, is not unique to artificial intelligence: even physical phenomena are not fully understood, and we do not always know why they occur. The basis of engineering is to accept reproducible phenomena as a lower layer and to construct an upper layer on top of them. Deep learning, too, can be engineered, controlled, and used in any number of ways if we treat it as a black box and view it as “a component that judges images” or “a component that recognizes voice.”

Right now, there are not that many engineering applications of deep learning, and it is unknown what deep learning can actually do and how far it can go. I believe that as we accumulate engineering applications, we will come to understand deep learning as a natural phenomenon. Creating a good deep neural network requires a good teaching strategy, and at the moment that part of the process still has to be worked out by humans. The time will eventually come when deep learning is studied as a natural phenomenon, because it resembles one in many respects.

Continue to Open letter No. 2-1
