One of the most interesting and potentially important trends right now is the accelerating development of digital technology in all its various forms. This includes everything from new games to artificial intelligence (AI) to robots. This trend will have a growing impact on our daily lives, and possibly present serious new threats as well.
Driven by Moore’s Law (the observation that computer processing power doubles roughly every 18 to 24 months) and by fast-growing investment from militaries, corporations and financial concerns that see ever faster, more capable digital systems as the key to competitiveness, these technologies are approaching break-out status. It is probably already too late to put the genie back in the bottle, and we have little idea what it all means for humanity and the way we will live in the coming years.
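The scale implied by that doubling rule is easy to underestimate. As a purely illustrative sketch (the doubling periods are assumptions from the rule of thumb above, not measured data), compound doubling works out like this:

```python
def doublings(years, period_years=2.0):
    """Number of doublings over `years`, assuming one every `period_years`."""
    return years / period_years

def growth_factor(years, period_years=2.0):
    """Multiplicative increase in processing power over `years`."""
    return 2 ** doublings(years, period_years)

# Over 20 years at a 2-year doubling period: 2**10 = a 1024-fold increase.
print(growth_factor(20))
# At the faster 18-month period the same 20 years yield over 10,000-fold.
print(round(growth_factor(20, period_years=1.5)))
```

Ten doublings in two decades is a thousand-fold gain; thirteen doublings is ten-thousand-fold, which is why the curve looks so startling when plotted.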
The promise in terms of more effective military power, cheaper industrial production, more pervasive communications and even domestic robot helpers is clear, but luminaries like Stephen Hawking, Elon Musk and Bill Gates have already warned of potential dangers ahead. This piece considers some of the background to recent developments in digital technology and their significance.
For over a century the idea of robots taking over from the human race has fascinated authors, film-makers and fans of science fiction. The notion that robots might one day become fully autonomous, or even conscious, supersede human capacities and eventually assert their artificial dominance over the world was long dismissed as pure science fiction, beyond the bounds of possibility.
However, a number of researchers and commentators now argue that within a few decades artificial intelligences (AIs), most likely embodied as robots, could achieve genuinely sentient thought. Whether this happens by deliberate design or by accident, the genesis of machine consciousness could be expected to have an extensive and significant impact on human society and culture.
Such a fundamental change has been called a ‘singularity’: the point where a graph of the exponential growth of information-processing power goes vertical. The term dates to 1958, when Stanislaw Ulam, recounting a conversation with John von Neumann, suggested that such a change would mark an elemental shift in the nature of human civilization and technology, beyond which the future becomes totally unpredictable. Taken up by science fiction authors (such as Vernor Vinge and William Gibson) and by the futurist Ray Kurzweil, the singularity is perceived as a defining moment that could well occur within decades, not centuries.
In the last decade there has been a series of highly pertinent publications focusing on artificial intelligence, robots and their impact upon human culture (for example: Ashrafian 2015; Searle 2016; Geist 2015; Smith 2009; Singer and Sagan 2009). In addition, since 2015 there has been a dramatic increase in the publication of academic papers, news articles, websites, blogs and other pieces relating to artificial intelligence, robots and their impending impact on our world.
The commentary on this prospect remains mixed, particularly on whether robots pose a threat. Opponents of the ‘humanisation’ of AI argue that concerns about robots living amongst humans are ‘grossly exaggerated’ (for example: Geist 2015). Certainly, the immediate future of robot–human interaction looks rather mundane, since robots operating as shop-bots and cleaning-bots will remain largely reliant on ongoing human operational control.
However, much more sophisticated robots will appear, and with them questions about their intelligence and even their basic existential status. Here opinions vary. Wesley Smith, for instance, argues that “robots will never be people and should never have rights.” In his view, robots would only ever be computers running very sophisticated software, and “would be no more capable of being harmed – as distinguished from damaged – than the toaster”.
Smith disputes the view, put by others, that robots may develop actual feelings, as argued in Peter Singer and Agata Sagan’s 2009 article ‘When Robots Have Feelings’. Singer and Sagan worry not so much about robots harming humans as about humans harming robots. They argue that robots may come to have feelings similar to those of humans, an idea based on the notion that the human brain is just a ‘very complex machine’ that can be simulated, programmed and copied.
This idea of machines with feelings has a long history in science fiction film, at least as far back as the computer HAL in 2001: A Space Odyssey (1968), and more recently in the Japanese anime Ghost in the Shell (1995) and last year’s Ex Machina. It is an interesting idea because it forces the question of what these feelings actually are, in creations like robots but also in ourselves.
The most immediate concern is the use of robots as weapons. Modern militaries are all moving towards autonomous robot soldiers, aircraft and ships. Such machines can operate more quickly and with greater agility, they feel no fear, their destruction does not evoke popular revulsion, and they grow cheaper and more capable all the time. But they can also be hacked and reprogrammed, as Iran reportedly demonstrated with a captured US drone. The prospect of such powerful weapons developing emotions of their own is truly scary.
Ultimately, relationships between robots and humans may come to mirror the already existing debate about human diversity. We humans have always struggled with variation within our own species, and as a consequence do not always afford full rights even to our own kind. Historically, indigenous peoples, people with disabilities of various kinds, and individuals identifying as LGBTQI have had their rights violated constantly and consistently. If we struggle to include all humans in decent society, how will we fare when completely different forms of intelligence start staking claims to basic rights?
These matters are but some of the thorny issues implicit in the rise of the machines. Whether robots come to constitute a real threat, become great helpers, or develop into our equals is uncertain right now. Hopefully we, as a species, have the wisdom to deal with it all with a maturity we have sometimes lacked in the past.