Scientists say super-intelligent artificial intelligence will be uncontrollable


Superintelligent systems are already being created and spread across the globe as a result of the technological race. This raises several questions: to what extent is it possible to control artificial intelligence? How prepared are we to confront an AI that operates beyond human capacity? How ethical is it to create a program whose consequences and impacts cannot yet be measured? Read on for more about these questions raised by scientists.

Read more: 7 applications of artificial intelligence: some already replace human labor


Artificial intelligence with superintelligence

It sounds like the plot of a science fiction movie, but the truth is that an artificial intelligence can indeed develop the autonomy to learn commands that may later turn against humans. There is no guarantee, for example, that such systems will follow rules like "do not cause harm to humanity," since they will be programmed to act autonomously and may circumvent the limits imposed by their programmers.


Alan Turing's halting problem

Known as the "halting problem," it presents two alternatives for a command given to a computer: either the machine reaches a conclusion and responds by stopping, or it keeps looping in search of a solution until it finds one. For this reason, it is impossible to know in advance whether the artificial intelligence will stop, or whether it will go on trying to find and store every possible conclusion in its memory.

The problem with this is that it is not possible to predict which solution or course of action the machine will eventually take for a given problem. Its behavior is therefore unpredictable, and any containment algorithm would run into the same undecidability, making it pointless.
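The contradiction at the heart of the halting problem can be sketched in a few lines of Python. The `halts` function below is purely hypothetical; Turing's 1936 argument shows that no such general-purpose predictor can exist, which is the sketch's whole point:

```python
def halts(program, program_input):
    """Hypothetical oracle: would return True if program(program_input)
    eventually stops, False if it loops forever. No algorithm can
    implement this for every possible program, so this stub only
    exists to make the contradiction below concrete."""
    raise NotImplementedError("no general halting oracle can exist")

def paradox(program):
    """Do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return "halted"      # oracle said "loops" -> halt immediately

# Asking halts(paradox, paradox) defeats any answer the oracle could give:
# if it answers True, paradox loops forever; if it answers False, paradox
# halts. Either answer is wrong, so the oracle cannot exist.
```

The same diagonal argument is what the article's scientists invoke: a containment algorithm that must decide whether a superintelligent program will ever take a harmful action faces a question at least as hard as deciding whether an arbitrary program halts.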

Diverging opinions among scientists

Some scientists argue that every technological advance must be designed to promote the common good in a safe and controlled way. Seen this way, a supermachine capable of storing any and every type of intelligence, with ineffective containment measures and a still-unmeasured capacity for destruction, looks dangerous.

With that in mind, computing researchers argue that superintelligent machines may cause harms that outweigh their benefits, and that building them should be avoided. Many programmers regularly find that their machines have learned to do things they were never taught and were never given commands to learn, and depending on what is learned, this can represent a risk.

Ethical issues

Limiting a superintelligence's capabilities may be a viable alternative. To do this, it could be cut off from parts of the internet or from certain networks; these would be examples of containment mechanisms. However, it is not yet known whether these alternatives are feasible, or whether the machine would gain access to that content in ways not yet foreseen.

Therefore, if humanity continues to advance artificial intelligence, great care must be taken over the extent to which it is allowed to be autonomous.
