Consciousness

A little bit of Consciousness

Ilya Sutskever is OpenAI's chief scientist and has a certain knack for PR. With a casual tweet, Sutskever revived a long-closed discussion: the question of whether artificial intelligence can ever possess consciousness. Sutskever suggested that today's large neural networks "have a little bit of consciousness".

OpenAI, which Sutskever co-founded with Elon Musk and its current CEO Sam Altman, is an organization whose stated goal is to create "Artificial General Intelligence": a human-like artificial intelligence that serves the greater good. At the same time, OpenAI is a company that must make money. Its products, such as the large language model GPT-3, are not without controversy, and they also face considerable competition. At the very least, the company gains a great deal of attention by reviving the debate about conscious AI.

Leading AI experts immediately criticized the statement. Yann LeCun, a deep learning pioneer, replied that the claim does not hold even for very large neural networks and the smallest "bit" of consciousness. In his view, consciousness requires a certain kind of macro-architecture that none of today's neural networks possess.

Others, such as Toby Walsh, a computer scientist at the University of New South Wales in Sydney, were concerned about the public impact of such discussions. He wrote that "each time such speculative remarks are made public it takes months to steer the conversation back to the more realistic possibilities and threats of AI."

Tamay Besiroglu, an MIT computer scientist, defended Sutskever. Besiroglu cited a preprint study by him and his colleagues showing that the computing power used to train machine learning models has doubled roughly every six months since 2010. He illustrated this progression with a trend line in a chart and suggested, somewhat tongue in cheek, that a little consciousness might be found somewhere along that line.

Interestingly, the question of what is actually meant by "consciousness" plays hardly any role in the current discussion. The phrase "a bit conscious" fits reasonably well with the concept of "phenomenal consciousness", which, on this view, arises from the interaction between an autonomous agent and its environment. "Access consciousness", on the other hand, enables the agent to reflect on its sensory impressions and experiences, and is probably closer to what the general public understands by consciousness.

Philip Goff, a philosopher and panpsychist, believes that consciousness is an integral component of physical reality. This idea also underlies "Integrated Information Theory", which describes consciousness with a mathematical formula. The theory's basic claim is that consciousness can arise in any sufficiently highly integrated form of matter. The brain researcher Christof Koch is convinced that the theory is valid. This approach also makes consciousness measurable, at least in principle.

Marcello Massimini, a brain researcher, developed a method to estimate the level of consciousness of coma patients from recorded brain waves. The resulting number, between 0 and 1, is the Perturbational Complexity Index (PCI) and is meant to indicate whether someone is conscious. In a 2016 study, the researchers calculated a threshold value of 0.31 that distinguished conscious from unconscious states in healthy and brain-damaged subjects. This is the most precise measurement of consciousness achieved in medicine so far.
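To make this kind of measure a little more concrete, here is a minimal sketch in Python of the core idea behind the PCI: binarize a pattern of activity and ask how compressible it is, using a normalized Lempel-Ziv complexity. This is only an illustration under strong simplifications; the actual PCI pipeline works on TMS-evoked, source-reconstructed EEG with statistical binarization, and the threshold, the random toy data, and the function names below are assumptions made for this example.

```python
import numpy as np

def lz_complexity(bits: str) -> int:
    """Count the phrases of a greedy left-to-right Lempel-Ziv parse."""
    phrases = set()
    start, length = 0, 1
    while start + length <= len(bits):
        phrase = bits[start:start + length]
        if phrase in phrases:
            length += 1          # known phrase: try a longer one
        else:
            phrases.add(phrase)  # new phrase: record it and move on
            start += length
            length = 1
    return len(phrases)

def normalized_complexity(activity: np.ndarray, threshold: float) -> float:
    """Binarize a (channels x time) activity matrix and normalize its
    Lempel-Ziv complexity by the asymptotic complexity of a random
    sequence of the same length and bit probability."""
    binary = (np.abs(activity) > threshold).astype(int)
    bits = "".join(map(str, binary.flatten()))
    n = len(bits)
    p = binary.mean()
    if p == 0.0 or p == 1.0:
        return 0.0               # a constant sequence has no structure to compress
    source_entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    # Asymptotic upper bound on the phrase count of a random binary sequence.
    max_complexity = source_entropy * n / np.log2(n)
    return lz_complexity(bits) / max_complexity

# Toy usage: random "activity" stands in for an evoked response.
rng = np.random.default_rng(0)
activity = rng.normal(size=(32, 200))   # e.g. 32 channels, 200 time samples
print(round(normalized_complexity(activity, threshold=1.0), 3))
```

Note that purely random input scores high on this toy measure; the real PCI avoids that by only considering statistically significant, stimulus-locked brain activity, so that high complexity reflects responses that are both integrated and differentiated rather than mere noise.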

Neural networks, however, do not have brain waves and therefore cannot be measured the way humans can. If a comparable measure could be found for neural networks, we might be able to search for consciousness in them. For the time being, we can only speculate and discuss interesting ideas.

Photo by Mohamed Nohassi on Unsplash

Source:

Heise.de
