The idea of creating a remarkable machine that learns and interacts with the world like a human brain sounds like a madman's dream lifted from sci-fi novels and movies. The very thought of computers developing themselves into more intelligent systems sparks arguments over the dangers of the concept and invokes fear of the power such machines would wield.
The concept is known as Artificial Intelligence (AI), and the idea is nothing new. In fact, the engineering of AI systems is responsible for much of the massive technological advancement we have experienced over the last 25 years. Although humanity has long played with the idea of AI and used it to make our lives easier, we are now reaching a point where it will be used to do things we otherwise can't. A prime example presents itself in Facebook's plan for the future, where engineers are creating AI which builds AI. We are beginning not only to use this madman's dream to run our lives, but to entrust it with advancing our technology beyond what we can do alone. If Hollywood's prediction proves true, the reality of Terminator's Skynet is just around the corner.
Earlier this year, Microsoft launched an AI bot into the world, only to watch it crash and burn. Named Tay by its developers, it was a computer program that could tweet and react without guidance or pre-scripted sentences. Its first release into the Twittersphere lasted around 16 hours before it was shut down for generating and tweeting racist, homicidal and wildly inappropriate remarks. Designed to mimic the language of a 19-year-old American girl, the AI learned what to say from other Twitter users. Unsurprisingly, the online community abused Tay's learning process, and although the result makes for an amusing story, her short time alive reads as nothing more than the setting of a tragedy. Microsoft issued an apology shortly after, saying it had not anticipated the amount of abuse its system would receive. The glaring question raised by this experiment is how we teach a computer what is right and what is wrong, and furthermore, who decides this?
Despite the failed launch of Tay, Facebook announced it would be incorporating AI bots to respond to questions posted to Facebook Pages. AI already runs much of Facebook's backend: whenever you upload a photo to its servers, an AI system that has learned what your friends' faces look like tags them for you. On the 4th of January, Facebook founder and CEO Mark Zuckerberg said, "My personal challenge for 2016 is to build a simple AI to run my home and help me with my work." After the company's first quarterly conference in April, however, the word 'simple' seems to have dropped from the tech innovator's plan, replaced with the objective of crafting 'True AI' systems.
The social media giant announced that it will use AI to take orders for products, letting users shop online in natural language, much as they would in a physical store. On launching this service, Mr. Zuckerberg said, "So the biggest thing that we're focused on with artificial intelligence is building computer services that have better perception than people." Also revealed was Facebook's plan to create AI which could create AI. This massive step, pushing AI beyond human input, is matched by competing tech giant Google, where the idea has already become reality.
In early 2014, Google acquired AI development company DeepMind Technologies for approximately $500 million USD, renaming it Google DeepMind. A condition of the sale was the establishment of an in-house AI ethics board to oversee the technology and ensure it was applied safely. The members of this board remain a mystery even today, with neither DeepMind nor Google willing to comment on what the board does, let alone who sits on it. In March this year, DeepMind's AI made headlines after it beat professional Go player Lee Sedol in an intense contest. Go follows a simple set of rules, but the enormous number of possible moves makes it very difficult for computer systems to outplay professional players. The defeat is a real-world example of this technology already outsmarting even the most dedicated humans. While this particular display revolved around a board game, the application of AI now extends into healthcare, with DeepMind acquiring records from the UK's National Health Service, bringing with it concerns about what a free-thinking computer system could do with such information.
As Google DeepMind experiments with board games and the health records of the UK, "[supporting] clinicians by providing technical expertise," Facebook and Microsoft are developing AI to replace salespeople and online personalities. Medical advice could soon be administered by a computer, while sales and entertainment could become deeply integrated into our lives through the masses' interactions with an artificially learning program. For now, the ethics built into this technology is a matter of the utmost public interest. While it is reasonable to assume the best interests of humanity are directing AI development, questions about the sustainability and practicality of this technology should be answered in public rather than behind corporate doors. This technology is still a long way from approaching the world as a human would, but it is developing rapidly. If AI is already outsmarting us at a board game, when will it begin outsmarting humans in general?
Words by Scott Murphy
Image by Joseph Baynes