This post shares with you the article “What’s the deal with Artificial Intelligence killing humans?”, which appeared on Slate and was greatly appreciated by the Futurist Hub audience. It is the first entry in a new section titled Quick Reading, which will host remarkable sources selected by the readers.
Jacob Brogan’s post is attractive from the first second, with a picture of zeros and ones in which the binary regularity is broken by a one shaped like a knife. Jolly good. It explains pretty well how machines can be dangerous to humans. Machines are defined as computer systems and algorithms that can form conclusions and determine their actions without direct human intervention. So no Terminator-like robots. They can be dangerous because they are faulty, or because they might see humans as obstacles to achieving their goals. Artificial intelligence might become scary because we don’t know what to expect from it.
In reality, there is one sentence worth remembering, because it’s in line with the spirit of the Futurist Hub: “the problem isn’t about what computers will do with humans; it’s about what humans will do with computers.”
The post then moves into a quick description of some proposed solutions, such as Yudkowsky’s idea of creating a friendly artificial intelligence, along with other suggestions like having machines simulate social intelligence or raising a machine like a baby. The Futurist Hub dedicated a full post to this exploration; have a look at “Heartificial or artificial intelligence? How to program a friendly AI” if you want to investigate further.
Brogan’s conclusion is that there is no short-term risk of artificial intelligence killing humans, unless somebody begins to invest money in developing weaponized or killer machines.
What I can add here is an aspect that has not been debated enough and that, at least in my opinion, increases the level of risk. It’s not only about humans and machines, or machines and humans; it’s also about machines and machines. Humans, countries and populations have basically three kinds of relationship: cooperation, conflict or indifference. When machines have autonomy in making decisions and acting on them, we will have a matrix that also covers the relationships between machines. Currently, artificial intelligence systems are quite segregated from one another, so they sit in the indifference state. There’s almost no link between the Facebook AI describing pictures to blind people and Google’s algorithms keeping Gmail free of spam. But at some point artificial intelligences developed by different parties will collide and potentially decide to cooperate or to open some kind of conflict. In conclusion, we don’t simply need rules to manage the relationship between machines and humans, but also between machines and machines, which makes the scenario even more complex and risky.
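To make the idea concrete, the relationship matrix can be sketched in a few lines of Python. This is a minimal illustration, not a real model: the agent names and the pairwise state change are hypothetical assumptions invented for the example.

```python
from enum import Enum
from itertools import combinations

class Relation(Enum):
    COOPERATION = "cooperation"
    CONFLICT = "conflict"
    INDIFFERENCE = "indifference"

# Hypothetical autonomous systems; the names are illustrative only.
agents = ["image_describer", "spam_filter", "trading_bot"]

# Every pair starts in the indifference state, mirroring today's
# segregated AI systems that have almost no links between them.
matrix = {pair: Relation.INDIFFERENCE for pair in combinations(agents, 2)}

def set_relation(a, b, relation):
    """Record how two agents relate, regardless of argument order."""
    key = (a, b) if (a, b) in matrix else (b, a)
    matrix[key] = relation

# If two systems ever collide (e.g. over a shared resource),
# their relation may shift to cooperation or conflict.
set_relation("spam_filter", "trading_bot", Relation.CONFLICT)

for (a, b), rel in matrix.items():
    print(f"{a} <-> {b}: {rel.value}")
```

The point of the sketch is that the number of pairwise relations grows quadratically with the number of autonomous systems, which is why adding machine-machine relationships makes the governance scenario so much more complex than the human-machine case alone.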
Newsletter: because there’s more than artificial intelligence killing humans here!
The Futurist Hub Newsletter is the greatest thing after the Big Bang. Once per month, only the news, free of spam. And with a free ebook as a bonus.