Who’s afraid of AI?
Elon Musk is an inspiring entrepreneur. A self-made man who built himself up from nothing, he is considered today to be one of the leading entrepreneurs in the world. He heads Tesla, which has transformed the world of electric cars and professes to be the first to operate autonomous vehicles commercially, and he also leads SpaceX, which has set a goal of sending a human to Mars by 2030. Today, every one of his statements finds attentive listeners and a growing fan base.
As an enthusiast of groundbreaking technologies, one might have expected him to embrace the significant progress made in the field of artificial intelligence, which many claim will change our world. But here, too, Elon Musk surprises us.
In the past few years, he has consistently claimed that the greatest threat facing humanity is the tremendous pace of technological development in artificial intelligence. In a recent speech before United States governors, he called upon them to actively promote regulation that will address, monitor and limit technological developments in this field before it is too late.
Elon Musk is not alone. The late Prof. Stephen Hawking, Bill Gates and other opinion leaders have begun warning of the dangers of AI. In apocalyptic scenarios, one can imagine AI robots storming the streets uncontrollably and wreaking havoc and destruction, just as in Hollywood films, but the real dangers of AI are far more complex and require plenty of thought and planning on the part of decision makers, certainly more than is being done today.
The great beauty of AI is found exactly in the place that scares Musk and others: the ability to produce unexpected results precisely where human beings do not necessarily have the abilities (or the resources) to produce them on their own. This is especially evident where decisions are based on large, unstructured sets of data, making it difficult to show a rational connection between the data and the decision. It is especially disturbing when such a decision leads to harm to a person or to property. In legal language, we would say that it will be difficult to show the causal connection between the act that caused the harm and the person who committed it, and this might undermine the foundations of the legal framework upon which modern society rests.
Take, for example, a doctor treating a patient. The doctor consults an AI-based system, like IBM’s Watson. The computer determines that, at high probability, the patient’s symptoms are evidence of disease X. The doctor, familiar with Watson’s success rates, is not convinced, but elects to rely on the program and administers a treatment that in retrospect proves erroneous. In the ordinary world, we would say that in a medical negligence lawsuit the plaintiff must prove that the doctor deviated from accepted medical practice, and if the court so finds, the plaintiff will be awarded damages. In an AI world, however, the accepted practice will quite quickly become reliance on Watson, and it will instead be necessary to determine whether Watson was negligent in forming its opinion. This is no simple task. First, because behind Watson stand programmers, researchers, operators and managers, and anyone in this chain could have been negligent. Second, because it is very difficult to examine decision-making systems that review an abundance of data beyond what human beings can process. The conclusion is that the existing legal system will have great difficulty dealing with such cases, and one can expect them to multiply as more Watson-like AI systems become part of medical decision making.
This is only one risk. In an age when everything is connected to a computer, every computer is connected to the web, and the web reaches everywhere, it is possible to see how AI could become a dominant force, roaming the web and using its resources almost uncontrollably. For example, it was recently reported that two AI systems communicating with each other began to develop a language of their own, unknown to humans. In a different case, an AI-based hacking program autonomously transformed itself and hacked computers. In Switzerland, an autonomous robot independently purchased drugs via the Agora platform, paying in Bitcoin, of course. And there are many more examples.
Musk is right that AI poses ethical, legal and practical challenges unlike any we have faced before. The same is true of human enhancement and gene manipulation, nanotechnology, blockchain, robots, autonomous cars and more. All of these require the attention of decision makers and regulators. The problem is that technology develops rapidly, oftentimes too fast for regulators, and the risk lies in the growing gap between the two. This is why a clear statement to legislators by a technological pioneer like Musk is significant. What will regulators do with it? We will have to wait and see.
Authored by Advocate Roy Keidar, special counsel in the field of emerging technologies at the Israeli law firm Yigal Arnon & Co.