Published on Arabian Blue Ocean, December 6, 2018
The irony of René Magritte's paintings "This is not a pipe" and "This is not an apple" is that what we see in them is neither a pipe nor an apple, but a representation of them. Would we know about pipes and apples if we had never seen them before?
Luckily, Mom introduces us to the universe of words from the earliest age, allowing us to navigate among paintings, apples, and pipes completely at ease. Artificial Intelligence works in much the same way as when we record data on our "brain" drive, but AI does not need Mom for the words. It needs us to add the terms: we verify, with a few clicks, photos of traffic lights or pedestrian crossings to prove that we are not androids self-subscribing to new services. Machine learning algorithmically trains the Artificial Intelligence program to recognize trends, and we keep feeding it every time we consult Google or Spotify.
Step by step we unintentionally create trending topics about ourselves and our likes and dislikes, but the difficult thing is to predict the unexpected: the attack on the Twin Towers of New York, a tsunami, the crash of 1929. Nassim Nicholas Taleb became famous when he made us think twice about the possibility of a black swan. The fact that we have never seen one does not prove black swans do not exist, right? The mathematician used this metaphor to explain that unexpected events, hidden by confirmation errors and narrative fallacies, exist even though they have never been measured or detected. He told the whole story to Malcolm Gladwell in a chapter of What the Dog Saw, in which he explained how hard his team worked to determine the ideal algorithm to predict the minimal oscillations that provoke stock market earthquakes and place winners and losers on either side of the economic crash. Somehow, Taleb intended to eliminate at all costs the possibility that a black swan would ruin the money invested in shares.
The algorithm classifies, but does not predict. It sees a dog as a dog because we tell it so
If we want to twist things a bit more and be more creative in manipulating the algorithm, we may declare that dog poop is the face of Anders Breivik, the mass murderer who attacked Oslo and the Norwegian island of Utøya in 2011, killing seventy-seven people. The developer Nikke Lindqvist called on social media for people to post photos of dog stools labeled "breivik.jpeg" to distort Google's algorithm when identifying the terrorist, preventing the murderer from enjoying his fame.
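The stunt works because a classifier only echoes back the labels we gave it. A minimal sketch, with entirely made-up feature vectors and a deliberately poisoned label, shows the idea (this is a toy 1-nearest-neighbour classifier, not Google's actual system):

```python
# Minimal sketch: a 1-nearest-neighbour classifier "sees a dog as a dog
# because we tell it so" -- its answer is just the label of the closest
# training example, poisoned or not.

def classify(sample, training_data):
    """Return the label of the training example closest to `sample`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda item: distance(item[0], sample))
    return nearest[1]

# Toy feature vectors (hypothetical, for illustration only).
training_data = [
    ((0.9, 0.1), "dog"),
    ((0.8, 0.2), "dog"),
    ((0.1, 0.9), "cat"),
    # A deliberately mislabeled example, like the "breivik.jpeg" campaign:
    ((0.15, 0.85), "dog"),
]

print(classify((0.85, 0.15), training_data))  # prints "dog"
# A cat-like sample lands next to the poisoned point and comes out "dog":
print(classify((0.14, 0.86), training_data))  # prints "dog"
```

Enough poisoned labels near a region of the feature space, and the classifier's answers there are whatever the pranksters decided.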
Deep Learning is something different: it relies not on hand-crafted algorithms but on neural networks, applied to fields as varied as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, and bioinformatics. Machines are not expected to push doctors aside, but doctors will have to rely on bots' judgments to enhance their practice. As an example, DeepMind, the artificial intelligence company of Alphabet (Google), worked with Moorfields Eye Hospital and University College London on a retina analysis capable of detecting symptoms of fifty diseases that threaten a patient's eyesight.
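None of this requires anything exotic at its core: a neural network is just units that adjust their weights from labelled examples instead of following hand-written rules. A minimal sketch with a single artificial neuron and hypothetical two-feature data (nothing here comes from DeepMind's system) shows the training loop:

```python
# Minimal sketch: one artificial neuron trained by gradient descent on
# toy, hypothetical data. Label 1 = "symptom present", 0 = "healthy".
import math

def sigmoid(x):
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Made-up 2-feature examples, linearly separable on purpose.
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
            ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

w = [0.0, 0.0]  # weights, learned from the examples
b = 0.0         # bias
rate = 1.0      # learning rate

for _ in range(1000):                      # training loop
    for (x1, x2), label in examples:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - label                 # gradient of the log-loss
        w[0] -= rate * err * x1
        w[1] -= rate * err * x2
        b -= rate * err

# After training, the neuron generalizes to unseen toy inputs:
print(round(sigmoid(w[0] * 0.85 + w[1] * 0.85 + b)))  # prints 1
print(round(sigmoid(w[0] * 0.15 + w[1] * 0.15 + b)))  # prints 0
```

Real systems stack millions of such units in many layers, but the principle is the same: the rules are learned from labelled data, not written by hand.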
Jobs for everyone
Everything will depend on how much we are willing to grant to androids. A couple of years ago, in 2016, Microsoft's artificial intelligence program generated a perfect "Rembrandt" painting after analyzing 346 canvases by the Dutch artist. ING, Delft University of Technology, the Mauritshuis, and Museum Het Rembrandthuis collaborated in this project, whose result shared some similarities with Google's algorithm for selecting computer engineers: a Caucasian male, around thirty years of age, dressed in black and white (aka the plain t-shirt and jeans of Mark Zuckerberg), with a moustache (okay, today the guys are more hipster, with beards), and looking to the right (the posing posture of that time).
The perfection of the rendered graphics does not prevent us from seeing that machine learning could only repeat paintings by Rembrandt (or by any other artist whose works were processed as inputs) until we got tired of them, but it would never have produced the artistic movements that followed. Humans, 1 – Artificial Intelligence, 0.
Not all the arts are spared from automation: IBM designed the trailer for the movie Morgan using artificial intelligence. The result was very cool because the program was able to measure the intensely emotional moments (sweetness, fear, terror) and integrate them into that mini synopsis of a few seconds. Plus, Morgan's plot resonated with the story of its AI-made trailer: a risk consultant has to decide whether or not to terminate a humanoid, like a modern version of Blade Runner (1982), the movie about killing androids in the Los Angeles of a then-distant 2019, whose director, Ridley Scott, is the producer of Morgan. That happened two years ago, almost three.
Get ready for the real 2019: Lexus released its new car ad, scripted by artificial intelligence, last November. Will it sell more to androids or to humans? Will any creative director keep the ad open in a browser tab for inspiration?
As we enter the 21st century we realize that we are not alone. And those who sit alone can start a conversation with Boibot, and if they do not like it, they may create their own Existor buddy with the tools that Cleverscript puts at our disposal. Definitely, Blade Runner is here, and it will know whether we are friends or enemies if we tell it first. It will also detect the dog that says woof and not meow.