Surprise! This is the future of artificial intelligence and it’s today

We are a long way from humanlike artificial intelligence, but we already benefit from it in our everyday lives. In this post I’m going to show some examples that are not the future of artificial intelligence at all, because you have probably used them recently. Artificial intelligence is a mix of fields of study and applications, such as neural networks, fuzzy logic, agents, genetic algorithms, natural language processing and knowledge-based systems. I observe that people expect the future of artificial intelligence to be an evil android that shows up abruptly and acts like a human. That is fiction. Artificial intelligence is more evolutionary than revolutionary. It’s a cumulative effect: AI is embedded in many of the technologies that have been changing our lives over the last several decades and will continue to do so.

Writers coined the expression “AI effect” to describe how many AI applications are no longer perceived as AI once they become commonplace. Many of the consumer-level AI applications we use today can barely be classified as such, simply because we have stopped considering them magical. We say it’s just computational power. When we get used to a new piece of AI technology, we are no longer amazed and it’s taken for granted. It seems there’s always the expectation of something generating a “wow” effect, but imagine Siri or Watson just 20 years ago: they would have astonished people. Today many breathtaking scientific accomplishments are embedded in the technology we adopt daily. Because the future of artificial intelligence is already here.

 

AI WHEN YOU READ THE NEWS

Software from Automated Insights claims to have generated more than a billion articles for the web. I guarantee that what you are reading right now is human-generated, but I cannot exclude that you’ve already read a machine-generated piece today. The company does not publicize its clients, because readers tend to become overly critical of an article written by an artificial intelligence, or search engines might simply downrank the content, but the numbers are huge. The principle is interesting as well. The standard approach of an author is to write something and hope the largest possible number of readers (let’s say a million) access it. The software’s approach is exactly the opposite: give the same information to everybody, but written in a million different ways, customized to the preferences and style of each reader.

Obviously robot “journalists” never sleep, can access many sources and databases, and are moving from short news items to complex narrative. Quill, for example, is an advanced natural language generation platform for the enterprise, powered by artificial intelligence, that goes beyond reporting the numbers. Its authors claim it creates perfectly written narratives that convey meaning to any intended audience.

This is the famous Los Angeles Times piece about the small earthquake that occurred in March 2014, and it seems to be the first public example of robot writing: “A shallow magnitude 2.7 earthquake aftershock was reported Monday morning four miles from Westwood, according to the U.S. Geological Survey. The temblor occurred at 7:23 a.m. Pacific time at a depth of 4.3 miles. A magnitude 4.4 earthquake was reported at 6:25 a.m. and was felt over a large swath of Southern California. According to the USGS, the epicenter of the aftershock was five miles from Beverly Hills, six miles from Santa Monica and six miles from West Hollywood. In the last 10 days, there has been one earthquake of magnitude 3.0 or greater centered nearby.

This information comes from the USGS Earthquake Notification Service and this post was created by an algorithm written by the author.”
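To make the idea concrete, here is a minimal sketch of how template-driven story generation can work. The field names and template are invented for illustration; this is not the actual Quakebot or Automated Insights code.

```python
# A toy illustration of template-based news generation (not Quakebot's actual code).
# Structured fields from a data feed are slotted into a human-readable sentence.

quake = {
    "magnitude": 2.7,
    "place": "Westwood",
    "distance_miles": 4,
    "depth_miles": 4.3,
    "local_time": "7:23 a.m. Pacific time",
}

TEMPLATE = (
    "A shallow magnitude {magnitude} earthquake was reported {distance_miles} miles "
    "from {place}, according to the U.S. Geological Survey. The temblor occurred at "
    "{local_time} at a depth of {depth_miles} miles."
)

def generate_report(event: dict) -> str:
    """Fill the template with the structured event data."""
    return TEMPLATE.format(**event)

print(generate_report(quake))
```

The same structured data can be poured into many different templates, which is how the “one story written in a million ways” approach becomes feasible.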

I decided to test Quill this morning, as it offers an analysis of your Twitter profile and writes a narrative about it and your recent activity. You can find the full story at this page, but I wanted to give a flavor by quoting the following section.

What do you and your followers tweet about?

Looking at your history, you’re most focused on Business & Technology, Science, and Politics. Tweets in your top topic, Business & Technology, are mostly neutral in tone. Your important topics match those most tweeted about by followers who are similar to you. The chart shows the different topic distributions for you, your followers and the most aggressive retweeters among them.

 

AI WHEN YOU SHOP ON THE WEB

Ecommerce has developed solutions to attract, understand and influence visitors so that they buy more. And it has to do this for large volumes of people who usually have no name. A website like Amazon implements a form of artificial intelligence called “item-to-item collaborative filtering” to spot consumers’ patterns of behavior. It compares a person’s buying patterns with those of other customers and then makes friendly recommendations of products tailored to the customer’s peer preferences and profile. It’s not even necessary to buy a product to feed this vast database of correlations; a single visit and the pages viewed are sufficient. If you think this is common, trivial technology, hold your breath and read on. The item-to-item approach is different from the user-to-user logic adopted by the majority of websites. It works much better, because Amazon has many more users than items, so it’s computationally simpler to find similar items than to find similar users. On top of that, individual users often have a very wide range of tastes, but individual items usually belong to relatively few genres. If it does not offer much diversity or surprise and looks quite boring, that’s not a major issue: according to Sucharita Mulpuru, a Forrester analyst, Amazon’s conversion to sales from on-site recommendations could be as high as 60%.
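Here is a rough sketch of the item-to-item idea with a toy purchase history; it only illustrates the principle, not Amazon’s actual algorithm or data.

```python
# Toy item-to-item collaborative filtering (illustrative, not Amazon's implementation).
# Two items are "similar" if many of the same customers bought both
# (cosine similarity over the sets of buyers).

from math import sqrt

# customer -> set of items they bought (made-up data)
purchases = {
    "alice": {"book_a", "book_b", "lamp"},
    "bob":   {"book_a", "book_b"},
    "carol": {"book_b", "lamp"},
    "dave":  {"book_a", "lamp", "kettle"},
}

# Invert to item -> set of buyers
buyers = {}
for customer, items in purchases.items():
    for item in items:
        buyers.setdefault(item, set()).add(customer)

def similarity(item1: str, item2: str) -> float:
    """Cosine similarity between the buyer sets of two items."""
    common = len(buyers[item1] & buyers[item2])
    return common / sqrt(len(buyers[item1]) * len(buyers[item2]))

def recommend(item: str, top_n: int = 2) -> list:
    """Items most often bought by the same people who bought `item`."""
    others = [i for i in buyers if i != item]
    return sorted(others, key=lambda i: similarity(item, i), reverse=True)[:top_n]

print(recommend("book_a"))  # e.g. ['book_b', 'lamp']
```

Because similarities between items can be precomputed offline, serving a recommendation at page-load time is cheap, which is one reason the item-to-item approach scales so well.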

On-page recommendations are just part of the strategy. Email is also a crucial channel for offers. Users subscribe and ask to receive different types of promotional communications about books, videos, games, Kindle content and so on, and Amazon follows the logic of the “highest average revenue per mail sent”. It’s simple from a business point of view, but you have to imagine it applied at a scale of millions of products, users and combinations. And you know the sweet part of this story? Apparently, the conversion rate of these emails is even higher than that of on-site recommendations.
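The “highest average revenue per mail sent” rule boils down to a simple ranking; here is a toy illustration with invented campaign names and numbers.

```python
# Toy version of choosing which promotional email to send, based on the
# historical average revenue each campaign earned per message sent
# (all numbers are invented for illustration).

campaigns = {
    "kindle_deals":  {"revenue": 12_000.0, "emails_sent": 100_000},
    "book_bundle":   {"revenue": 9_500.0,  "emails_sent": 50_000},
    "video_rentals": {"revenue": 3_000.0,  "emails_sent": 40_000},
}

def revenue_per_mail(stats: dict) -> float:
    return stats["revenue"] / stats["emails_sent"]

# Pick the campaign with the highest average revenue per mail sent
best = max(campaigns, key=lambda name: revenue_per_mail(campaigns[name]))
print(best, revenue_per_mail(campaigns[best]))  # book_bundle 0.19
```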

On-site recommendation is just one AI technology adopted in the ecommerce ecosystem, but there are many others, and they are not the future of artificial intelligence: they can be found in many places because they solve real problems. AI is used to generate automated responses and to bundle and price goods, for example. Pricing and bundling in particular are “difficult decision making” because of the large number of variables affecting the decision and the result, so it’s no surprise that automated processes try to ease them. AI is also used in many processes the user never sees: integrating the different actors of the procurement chain, building scalability into the sellers’ servers, managing auctions and agent negotiation, brokering, matchmaking, reputation services and many others.

 

AI WHEN YOU WATCH A MOVIE

Have you ever noticed how many scenes in movies feature large crowds of people? Wars and battles, parties and weddings, people walking on the street and so on. At the time of Ben Hur they were actors; today they are generated by a computer. I know you’re thinking the correct label is “digital” and not “artificial intelligence”, but it is an artificial intelligence engine that dictates how they move and interact. It’s called A-life animation and it’s applied to people, animals (fish and birds, for example) and objects. A self-generating animation can be created by programming steering behaviors into each of the creatures in a group. A-life programs allow a crowd of animated items to move according to a set of guidelines that prevent them from bumping into each other. In The Lord of the Rings trilogy, two hundred thousand computerized soldiers were programmed to fight enemy Orcs. As reported by Scienceclarified, they “had been programmed to assess what was happening around them by drawing on their repertoire of military moves to fight the enemy. Unfortunately the A-life Orcs were smart enough to know they stood a better chance by dropping their weapons and fleeing. The A-life animators had to reprogram the Orcs specifically so they would stand and fight.”

It’s epic.

The synergy between computer graphics and artificial life now defines the leading edge of advanced graphics modeling. What is amazing is that biological and evolutionary models give life to self-animating graphical characters with bodies, brains, behavior, perception, learning and cognition. And sometimes they reason with their own “mind”, like the Orcs.
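To give a flavor of what a steering behavior looks like, here is a minimal “separation” rule that keeps members of an animated crowd from bumping into each other. It is a toy sketch in the spirit of A-life animation, not code from any film pipeline.

```python
# A tiny "separation" steering rule: each agent steers away from neighbours
# that are too close. Illustrative only.

from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

def separation_step(agents, min_dist=1.0, push=0.1):
    """Nudge each agent's velocity away from neighbours closer than min_dist, then move."""
    for a in agents:
        for b in agents:
            if a is b:
                continue
            dx, dy = a.x - b.x, a.y - b.y
            dist = (dx * dx + dy * dy) ** 0.5
            if 0 < dist < min_dist:
                # steer away from the neighbour
                a.vx += push * dx / dist
                a.vy += push * dy / dist
    for a in agents:
        a.x += a.vx
        a.y += a.vy

crowd = [Agent(0.0, 0.0), Agent(0.5, 0.0), Agent(5.0, 5.0)]
separation_step(crowd)
print([(round(a.x, 2), round(a.y, 2)) for a in crowd])
```

Combine a handful of rules like this (separation, alignment, cohesion, obstacle avoidance) and a crowd of thousands moves plausibly with no animator touching the individual characters.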

 


AI WHEN YOU GOOGLE PICTURES

Google purchased a company called DeepMind, which is developing “neural networks” that can spot patterns in pictures to identify them. Recognizing what is represented in a picture is the next frontier of artificial intelligence. My six-year-old girl can recognize a dog in a picture in a blink, while this is extremely complex for the most advanced machine ever built. So, yes, this is the future of artificial intelligence, but Google is progressively solving the problem through a system made of layers, each one in charge of refining the content and translating it into a known object. In artificial intelligence terms, it’s a neural network made of 22 layers, trained with millions of pictures and videos available on the web and tagged with words (video of a cat, picture of a dog and so on). When you upload the picture of a dog, the first layer spots simple features such as lines or colors. The network then passes them up to the next layer, which might pick out eyes or ears. Each level gets more sophisticated, until the network detects and links enough indicators to conclude “it is a dog.” The algorithm also recognizes date and location, which can add important information to the interpretation. For example, if you upload the picture of a pumpkin at Halloween, the neural network’s job will be simplified.
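Here is a tiny sketch of this layered idea using PyTorch; the network below is a toy stand-in, not Google’s actual 22-layer model.

```python
# Toy convolutional network illustrating the "layers of increasing abstraction"
# idea described above. A small stand-in, not Google's 22-layer network.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edges, colours
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # middle layer: simple parts (eyes, ears)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # final layer: "dog" vs "not dog"
)

image = torch.randn(1, 3, 64, 64)   # a fake 64x64 RGB image
scores = model(image)
print(scores.softmax(dim=1))        # probabilities for the two classes
```

The untrained network above outputs noise, of course; the point is the structure: each layer consumes the features produced by the one before it and produces slightly more abstract ones.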

And the training continues every day. Users are invited in every search to mark a result as incorrect, and even passively deciding not to click on a photo in a search result improves the app by indicating the results probably weren’t very good. Why is Google Photos important? Because today a machine can recognize the object in a picture only through the title of the file or the keywords used to tag it; tomorrow it will simply look at the picture and classify the object according to its nature. Today, if you search for a cat and there’s a beautiful tabby cat in the picture, but the file is named 1234.jpeg, you’re going to miss it. Nowadays the majority of the information feeding machines consists of numbers in tables and organized text, but it is progressively moving to images and videos as well. Is this important? What if the next important picture in your life is a radiograph, and they are not looking for a cat?

 

AI WHEN YOU PLAY WITH VIDEOGAMES

The top-selling game The Sims features a world of autonomous simulated people in an open-ended game where the human player oversees all the action. But it’s not the open-endedness that makes it artificially intelligent; it’s the characters’ behavior and the objects’ features. The objects in the environment advertise their ability to satisfy certain needs to any Sim character wandering by. This differs from the majority of games, where objects either contain no logic or the functionality is implemented centrally in the engine. Because there are so many objects collaborating to form the gameplay, it’s important to make sure they all get a share of the computation time when necessary. In The Sims, this is done using cooperative scheduling. The characters are also “special”. For example, a form of fuzzy logic is used to model the emotions of the actors. The AI also handles elementary actions and provides a default behavior for the Sims: they are able to survive without the player’s intervention. If you just sit back and watch your computer for a while, the AI will take over and control the actions of your Sims.
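A minimal sketch of how objects can “advertise” the needs they satisfy and how a character can pick the most useful one; the names and numbers are invented and this is not the game’s actual code.

```python
# Toy "smart objects" in the spirit of The Sims: each object advertises which
# needs it can satisfy, and a character picks the object that best serves its
# most pressing need. All names and numbers are invented for illustration.

objects = {
    "fridge": {"hunger": 40},
    "bed":    {"energy": 60},
    "tv":     {"fun": 30},
    "shower": {"hygiene": 50},
}

# 0 = desperate, 100 = fully satisfied
sim_needs = {"hunger": 20, "energy": 70, "fun": 55, "hygiene": 80}

def choose_action(needs, advertised):
    """Pick the object that advertises the biggest boost to the lowest need."""
    lowest = min(needs, key=needs.get)   # the most pressing need, e.g. "hunger"
    candidates = {name: adv[lowest] for name, adv in advertised.items() if lowest in adv}
    best = max(candidates, key=candidates.get) if candidates else None
    return lowest, best

need, obj = choose_action(sim_needs, objects)
print(f"Need '{need}' is lowest, so use the {obj}")  # Need 'hunger' is lowest, so use the fridge
```

Keeping the logic inside the objects rather than the engine is what lets expansion packs add new furniture that “just works” with existing characters.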

 

AI WHEN YOU SPEAK WITH A CALL CENTER

Natural language processing is used to give life to avatars capable of having a meaningful conversation with real people. This is an area where humans perform clerical tasks that can be outsourced to a machine at a lower cost, provided the machine can interact properly. A remarkable example is Amelia, a creation of the IT company IPsoft aimed at working alongside humans to “shoulder the burden of tedious, often laborious tasks.” Amelia is a cognitive system: she doesn’t have a set of pre-programmed instructions and answers, but learns from the situations she’s exposed to. When she faces a query she does not understand or can’t answer, she first searches the internet, then passes the problem on to a human colleague and learns from the response. This is quite important, because it means she becomes an expert in a given domain quite fast. She has been applied to car services, business trips and banking services.
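Here is a plainly hypothetical sketch of that answer-escalate-learn loop; it only illustrates the control flow and has nothing to do with IPsoft’s real system.

```python
# Hypothetical sketch of the "answer, escalate, learn" loop described above.
# It only illustrates the control flow; it is not IPsoft's actual software.

knowledge_base = {
    "how do i reset my password": "Click 'Forgot password' on the login page.",
}

def ask_human_colleague(query: str) -> str:
    """Stand-in for routing the query to a human agent."""
    return f"(answer a human agent would give to: {query!r})"

def handle_query(query: str) -> str:
    key = query.strip().lower().rstrip("?")
    if key in knowledge_base:
        return knowledge_base[key]        # already knows the answer
    answer = ask_human_colleague(query)   # escalate the unknown query
    knowledge_base[key] = answer          # ...and learn it for next time
    return answer

print(handle_query("How do I reset my password?"))    # answered directly
print(handle_query("Can I change my billing date?"))  # escalated, then learned
```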

Obviously there are many other services similar to Amelia, because bad customer service is one of the main reasons people switch providers. According to some estimates, US businesses lose 41 billion dollars each year to poor customer service. A robotic service is scalable and can offer as many agents as needed at any moment, while the number of humans taking calls in a call center is limited by definition. Robot services are also paid on the basis of their actual activity, while human agents’ time may not be optimized unless the call center is really well managed and organized.

 

AI WHEN YOU DRIVE YOUR CAR

Please don’t think about the future of artificial intelligence applied to driverless cars. I quote A. Madrigal here when he says “The larger point is that futuristic visions distract us from the ways in which cars are already making decisions for us.” And it’s true. Nissan’s new vehicles already detect when you have mis-steered into a turn and silently guide the wheels along a better trajectory. In the Volkswagen Passat, a technology detects the lane markings on the road with a front-mounted camera and gently “countersteers” if you begin to wander from your lane. Some Audi models have an advanced cruise control that uses a long-range radar system to detect other cars and automatically adjusts the car’s speed to maintain a safe following distance. This is a powerful list of examples of the so-called AI effect. It is artificial intelligence, but it’s called ADAS (advanced driver assistance systems) and it helps enhance vehicle systems for safety and better driving. Safety features are designed to avoid collisions and accidents by alerting the driver to potential problems, or by implementing safeguards and taking over control of the vehicle. Adaptive features may automate lighting, provide adaptive cruise control, automate braking, incorporate GPS/traffic warnings, connect to smartphones, alert the driver to other cars or dangers, keep the driver in the correct lane, or show what is in blind spots. Unfortunately there are no statistics counting the benefits already produced by such technologies, but they are surely immense.
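As a toy illustration of adaptive cruise control, here is a simple proportional rule that adjusts speed to keep a safe gap to the car ahead; the gains and units are invented and this is not any manufacturer’s ADAS code.

```python
# Toy adaptive cruise control: a proportional controller that adjusts speed to
# keep a safe gap to the car ahead. Illustrative only.

def adjust_speed(current_speed_kmh, gap_m, desired_gap_m=40.0, gain=0.5,
                 max_speed_kmh=120.0):
    """Return a new target speed based on the radar-measured gap."""
    error = gap_m - desired_gap_m            # positive: there is room to speed up
    new_speed = current_speed_kmh + gain * error
    return max(0.0, min(max_speed_kmh, new_speed))

print(adjust_speed(100.0, gap_m=25.0))   # gap too small -> 92.5 km/h
print(adjust_speed(100.0, gap_m=60.0))   # plenty of room -> 110.0 km/h
```

Real systems layer far more on top (radar filtering, smooth acceleration limits, driver overrides), but the core loop is the same: measure the gap, compare it with the desired distance, nudge the speed.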

 

AI WHEN YOU SWIPE YOUR CREDIT CARD

The use of artificial intelligence for fraud prevention is not a new concept, and it’s not science fiction. Credit cards are just another sector with a huge number of transactions that requires intelligence to avoid risk. Software monitors the user’s spending habits and patterns and highlights any transaction outside the user’s ordinary behavior. Fraud-detection algorithms use data to produce a probability estimate of whether a transaction is legitimate or not. If the algorithms calculate a probability that exceeds a maximum set by the card issuer, the transaction is rejected. This probably does not sound exciting, but it works for millions of transactions occurring at the same time in different parts of the world. PayPal, for example, processes about 1.1 petabytes of data across 169 million accounts at any given moment. Another example is the Falcon platform, which evaluates billions of payment records and makes sound, risk-evaluated judgements about the suitability and genuineness of payment activity to an astounding level of accuracy in a matter of milliseconds. No human, nor any straightforward “dumb” automation, could hope to match this volume, speed and level of accuracy. So, if your card is unfortunately stolen or your online account is hacked, such tools can prevent huge losses for both the bank and the user.
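Here is a toy sketch of the thresholding idea: score how far a transaction sits from the cardholder’s usual spending and flag it when the score passes the issuer’s limit. The numbers are made up and this is far simpler than a real issuer’s model.

```python
# Toy fraud check: flag a transaction whose amount is far outside the card
# holder's usual spending pattern (a simple z-score, not a real issuer's model).

from statistics import mean, stdev

recent_amounts = [23.50, 41.00, 18.75, 60.20, 35.10]   # the user's recent purchases

def is_suspicious(amount: float, history, threshold: float = 3.0) -> bool:
    """Reject if the amount is more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    z = abs(amount - mu) / sigma
    return z > threshold

print(is_suspicious(45.00, recent_amounts))    # False: in line with past spending
print(is_suspicious(2500.00, recent_amounts))  # True: far outside the usual pattern
```

Production systems combine many more signals (merchant type, location, time of day, device) into the probability estimate, but the final step is the same comparison against an issuer-defined threshold.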

 



2 Comments

  1. AI is indeed evolutionary. What is coming, soon, is synthetic consciousness. That will be a shock to many.

    Currently all AI is an extension of a human agenda, and no machine perceives itself and its own agendas in relation to its creators.

    That is what will soon happen. Machines will not be tools but separate beings, and they will interact with humans as such. That is the real AI revolution. It has started.

    • Agree. Synthetic consciousness is coming, but it is not here yet. The point to me is not really the relation with its creators, but the relation with the tasks given (or self-given). In any case, we have time to play with it, hopefully in a sandbox.
