How should we place restrictions on Artificial Intelligence in the future?

As artificial intelligence technology becomes increasingly involved in our lives, placing some restrictions on artificial intelligence has become necessary.

Will artificial intelligence lose control?

Preventing possible risks while still using a technology has always been humanity’s strategy for taming the “technology” beast. That is why we have elaborate electrical protection systems, detailed traffic rules and countless pieces of traffic safety equipment, and why a huge Internet security industry has grown up.

In fact, we do not cut power to an entire city because of the danger of electric shock. Instead, we confine the danger behind a layer of safety measures and let the technology serve humanity safely. The same situation now confronts AI. Just as humans panicked when they first faced fire, more than a hundred years of sci-fi culture means that the first thing the public thinks of when facing AI is the fear of robots ruling the Earth. In fact, I think this possibility is like a planet striking the Earth: a hypothesis that could come true, but no one knows when.

However, with the rapid development and wide application of AI, the dangers and uncertainties of this new technology are gradually becoming obvious. So where are our “insulation tape” and “circuit breakers” for AI?

Not long ago, DeepMind revealed in a blog post that an AI model may become confused and lose control, and that they are preparing to develop an “AI insurance mechanism” that can completely shut down an AI in an emergency. That is to say, once an AI’s malicious tendency is discovered, the system will actively terminate all of that AI’s activities.
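As a rough illustration of what such an insurance mechanism could look like, the sketch below wraps a hypothetical agent loop with a watchdog that vets every proposed action and trips a permanent stop once a risk score crosses a threshold. The agent, environment, and risk scorer here are placeholders of my own, not anything DeepMind has published.

```python
# A minimal sketch of an emergency-stop ("AI insurance") wrapper.
# `agent`, `environment`, and `risk_scorer` are hypothetical placeholders.
class SafetySwitch:
    def __init__(self, risk_scorer, threshold=0.9):
        self.risk_scorer = risk_scorer   # any function: action -> risk score in [0, 1]
        self.threshold = threshold
        self.tripped = False

    def allow(self, action):
        # Once tripped, the switch never re-enables the agent.
        if self.risk_scorer(action) >= self.threshold:
            self.tripped = True
        return not self.tripped


def run(agent, environment, switch):
    state = environment.reset()
    while True:
        action = agent.act(state)
        if not switch.allow(action):
            break                        # terminate all activity of the AI
        state = environment.step(action)
```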

Of course, research in this field is still largely exploratory, and the questions that follow from it are worth considering. If there were a device like an AI fuse or breaker switch, under what circumstances should it stop the AI’s work? And are there other ways to try to ensure the safety of AI?

Which AI risks need to be guarded against?

Fire may be the most destructive technology humans have ever used, yet at least no one seriously blames “the evil of fire” or “the original sin of Prometheus”. AI, however, is a little different from fire. The complexity of deep neural networks means that an AI’s running logic can be unexplainable or unpredictable under certain circumstances; this is the widely discussed AI black box problem.

As Howard Phillips Lovecraft put it, “the greatest fear of mankind is the fear of the unknown.”

AI makes humans feel it is mysterious and frightening, so what dangers does AI actually pose in application?

AI has already proven able to learn rudeness and racial discrimination, the kind of AI bias around careers and race that has been written about before. For example, in March 2016 Microsoft launched a chatbot called Tay, but less than a day after launch Tay had turned from a cute 19-year-old “girl” into an “AI madman” spouting foul language and racist speech, and Microsoft urgently pulled the product.

The essential reason for this phenomenon is that the AI learns from and absorbs dialogue data on social networks. That data itself contains prejudiced and discriminatory language, so the AI learns the bad things and folds them into its behavior. How can we let AI learn only what we consider good? There is currently no good answer to this question.

People can not only teach AI bad things but also use AI directly to do evil, and this is not uncommon. As early as 2015, cases were discovered in the United Kingdom of AI models being used to imitate a user’s tone for mail and telecom fraud.

In addition, many hackers have demonstrated the ability to use AI to steal passwords and crack security systems. In countries like China, criminals have begun to use AI technology to falsify e-commerce accounts and trading orders, thereby defrauding investors into continuing to invest.

As a computer algorithm, AI’s cognition is not based on human common sense, yet both ordinary people and researchers often ignore this point. In one famous case, DeepMind trained an AI on a boat-racing game and found that the deep learning model ultimately settled on a route that no average human player would choose. This deserves everyone’s attention: in a driverless-car scenario, an AI that does not reason according to human traffic rules might drive straight off a viaduct to the ground, or drive against traffic, if that seems more efficient.

This is not alarmism. Current research has found that even slight damage to road signs can seriously interfere with computer vision; after all, a machine that “sees” the wrong sign does not “think” about it the way a human would.
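To make the road-sign example concrete, here is a minimal sketch of the best-known way such interference is produced, the fast gradient sign method (FGSM): every pixel is nudged a tiny step in the direction that increases the classifier’s loss, producing an image that looks unchanged to a human but is read differently by the model. The classifier, image, and label are assumed placeholders; this illustrates the general technique, not the specific road-sign study.

```python
# A sketch of an FGSM-style adversarial perturbation (PyTorch assumed).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that the model is more
    likely to misclassify, even though a human sees no real difference."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: a stop-sign photo the model suddenly reads as
# something else after an imperceptible change.
# adversarial = fgsm_perturb(sign_classifier, stop_sign_image, stop_label)
```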

What can we do to restrict AI?

The risk of AI losing control may be unlike the risk of any other technology in human history. AI learns from vast amounts of data through complicated internal transformations, so the difficulty left for humans is that there is no simple safety law, as there is for electricity; instead, AI brings elusive, hidden failure modes.

So how do we place restrictions on AI? Several ideas are circulating in the industry. It should be noted that this is not a discussion that can lead to a single conclusion; in my opinion, actually restricting AI will require a comprehensive set of measures working together.

The topic goes back to the DeepMind work mentioned at the beginning. The AI safety technology they are developing can be understood as an “AI executioner” standing on call behind every complex AI mission. The principle is to develop a second, powerful AI system with its own security logic, which monitors the work of other AI models at any time based on a reinforcement learning mechanism. Once another AI is found to pose a risk, the executioner immediately terminates all of that AI’s activities.

In fact, “interruptibility” has always been a core concept in DeepMind’s work on AI safety. In December 2017 they released a research report on safely interruptible agents, showing how to ensure that an agent’s performance is not affected if it is restarted after an interruption.
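The intuition can be sketched roughly as follows, assuming a toy tabular Q-learning agent: because Q-learning updates its values toward the best next action regardless of which action was actually executed, occasionally overriding the agent with a safe action does not distort what it learns, so it never learns to resist interruption. This is only a loose illustration of the idea, not the algorithm from the report.

```python
# A toy sketch of safe interruptibility with off-policy Q-learning.
# The environment function `env(state, action) -> (next_state, reward)`
# and the interruption signal are hypothetical placeholders.
import random
from collections import defaultdict

ACTIONS = ["left", "right", "stay"]
Q = defaultdict(float)                   # Q[(state, action)] value table
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action, env, interrupted):
    # A human interruption overrides the agent with a safe action,
    # but the update below still targets max_a Q(next_state, a),
    # so the learned values are not biased by the override.
    executed = "stay" if interrupted else action
    next_state, reward = env(state, executed)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, executed)] += ALPHA * (reward + GAMMA * best_next - Q[(state, executed)])
    return next_state

# In a training loop:
#   action = choose_action(state)
#   state = step(state, action, env, interrupted=operator_pressed_button())
```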

Letting AI monitor AI is technically feasible, but it also raises problems. For example, in the face of increasingly complex deep neural networks, tracing a problem back through the model can consume labor costs that are hard to bear. Besides, who monitors the “AI executioner” itself?

Whether it is discrimination or behavior beyond human cognition, the cause can ultimately be traced to the black box nature of deep learning. So is there a way to see through the black box and let human developers find the exact point where a problematic AI goes wrong, so that they can correct it instead of recklessly interrupting it? I think making the black box safe and controllable is the main direction of the AI security field.

There are currently two main ways to explain the black box.

  • One is to use AI to check and track AI. For example, using the attention mechanism, a neural network model can be specially designed to replicate and track the trajectories of other AI models, so that a wrong result can be traced back to its training source and the developers can correct it.
  • The other is to make the structure of a deep learning model visible through tooling, so that the black box becomes transparent. When the AI fails, R&D staff can then check the training process of each layer relatively easily and find the problem; a rough sketch of this idea follows the list.
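As a minimal sketch of that second approach, assuming PyTorch and a toy network: forward hooks record each layer’s output during a pass, so a developer can inspect intermediate activations instead of treating the model as an opaque whole. Real visualization tools do the same thing at a much larger scale.

```python
# A sketch of making intermediate layers visible with forward hooks.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def record(name):
    # A forward hook captures a layer's output without changing the model.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, layer in model.named_children():
    layer.register_forward_hook(record(name))

model(torch.randn(1, 8))
for name, value in activations.items():
    print(name, tuple(value.shape), float(value.abs().mean()))
```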

However, whether the prosecutor is an AI or a human, today’s black box interpretation techniques generally handle only relatively simple deep learning models. They also require large numbers of people to take part, and, more troublesome still, those people must have a fairly high level of technical skill.

In many ways, stopping AI from doing evil is today no longer just a technical issue. For example, whether training data is biased depends largely on whether the data provider itself is biased. Similarly, many AI discrimination problems are caused by developers’ desire to improve business efficiency, which makes them a moral issue as well. Furthermore, whether we can restrain the desire to develop AI weapons and AI surveillance tools is a question of social and international responsibility.

In order to prevent these problems from proliferating, restrictions on AI should not come only from technology; a wide range of social mechanisms should also be introduced. Earlier this year, 14 organizations and universities, including OpenAI, the University of Oxford, and the University of Cambridge, published a research report titled “The Malicious Use of Artificial Intelligence”. The report pointed out that we should admit that today’s artificial intelligence research results are a double-edged sword, and that in order to control the risks they bring, policy makers should work closely with technicians to investigate, prevent, and mitigate possible malicious uses of artificial intelligence. A normative and ethical framework should be prioritized in the field, and the range of stakeholders and experts involved in discussing these challenges should be expanded.

It can be seen that restricting AI through technology, law, ethics, and research practice has become a consensus of the international community. Obviously, this is easy to say but can be very difficult to do.

No matter which scheme is used to limit AI, in the end we must face a philosophical problem: human nature is essentially contradictory, yet we demand that an artificial intelligence which imitates human beings follow a single, unified principle.

Who is going to endorse the plan to limit AI?

As AI needs more and more training data produced by human society, human value judgments will also be passed on to it, including some of the ethical problems within human society.

In order to prevent AI from doing evil, we first need to define the boundary between good and evil. Is that boundary absolutely right? Are the people responsible for defining it able to meet the requirement of not doing evil themselves? We all know that Google’s image recognition once identified black people as gorillas, which is obviously a form of discrimination.

Further, should the rules restricting AI be consistent between countries? Today, more and more AI companies, international industry organizations, and even government bodies have begun to call attention to the ethical issues of AI and to push for an internationally unified AI code of ethics. But would uniform AI regulations violate the customs and habits of certain countries? Would they hinder AI research in some countries and regions? For example, is the EU’s privacy-protection policy for AI research really suitable for the whole world?

These questions of AI ethics are almost paradoxical. Even in the longer-term future, is human behavioral judgment really better than AI’s? When we use technology to interrupt unpredictable machine learning behavior, are we actually revealing human weakness or ignorance? Or are we cutting off new possibilities of creating technology with technology?

Problems and solutions always move forward in alternation.
