The concept of Artificial Intelligence (AI) has long captured human imagination, eliciting both excitement and apprehension. Alongside the promise of economic, social, and environmental progress, come deep concerns about the potential moral implications. Can ancient theories help us unpack these modern ethical challenges?
According to Nilsson (1998), ‘Artificial intelligence, broadly (and somewhat circularly) defined, is concerned with intelligent behaviour in artifacts. Intelligent behaviour, in turn, involves perception, reasoning, learning, communicating, and acting in complex environments’. AI is no longer a future prospect, but a present reality.
The aforementioned ‘artifacts’ now include popular virtual assistants such as Alexa and Siri, ready to answer (almost) any question posed to them; the powerful algorithms underpinning websites such as Netflix and Amazon Prime, which offer up tailored suggestions for your next binge-worthy boxset; and the home thermostat Nest, which learns to anticipate and adjust the ambient temperature to meet your needs (and for which Google paid $3.2 billion in 2014).
That these now seem almost mundane perhaps reflects the fact that tasks once considered to require ‘intelligence’ are frequently removed from the scope of AI once they have entered into common use, a phenomenon known as the ‘AI effect’. This has led Larry Tesler to suggest that ‘AI is whatever hasn’t been done yet’.
So, what potential business benefits can AI offer? According to the Harvard Business Review, these may include enhancing the features, functions, and performance of products; improving decision making; refining business operations; freeing up workers to focus on more creative tasks; pursuing new markets; and optimising external processes, such as sales and marketing.
Perhaps unsurprisingly, this has led to fears that the rise of AI will lead to significant job losses. While new employment opportunities will undoubtedly be created, it has been argued that there will not be enough of them to offset those eliminated. For example, what career options will exist for truck drivers if their vehicles are automated within the next few years, something leading automotive companies such as Volvo are on target to achieve?
Some contend that we have seen this kind of ‘creative destruction’ before, most notably during the Industrial Revolution. However, others counter that the rapid pace of change is unprecedented. Beyond boosting profitability, businesses are also using AI to address environmental challenges, such as sustainable food production.
For example, The Yield, an Australian agri-tech company, uses sensors, data, and AI to assist farmers in making informed decisions in relation to ‘how, when and where to best plant, irrigate, protect, feed and harvest their crops’. This, the founder claims, can help increase efficiency, which is beneficial for both the producer and the planet.
In addition to private sector companies, a wide range of public actors, including individual governments and international organisations, are looking at AI as a tool for addressing large-scale economic, environmental, and social challenges.
In the United Kingdom (UK), the AI Sector Deal published in 2018 states that ‘creating an economy that harnesses artificial intelligence (AI) and big data is one of the great opportunities of our age’. However, the UK government is also mindful of the moral dilemmas that may arise and has established a Centre for Data Ethics and Innovation to analyse them.
At a supranational level, the United Nations (UN) has developed a platform – AI for Good – that facilitates dialogue and acts as a catalyst for projects aimed at tackling sustainability issues, such as those addressed by the UN Sustainable Development Goals. Current initiatives focus on issues such as the impact of plastic pollution on ocean life; the relationship between health, sleep, and nutrition; and the potential for personalised education for children and young people.