2A) The ethics of drone warfare

Sci-fi writer Isaac Asimov created the fictional Three Laws of Robotics to govern relations between humans and robots, the first of which states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Our present-day world has moved well past Asimov’s idealistic vision. Aerial drones, though not yet fully autonomous, already play a major role in modern warfare: they are used to observe enemy positions, calibrate artillery fire, and even directly engage enemy combatants.
As the technology behind these drones advances, they require progressively less human oversight. Should this trend continue, we may soon see fully autonomous ‘Terminator’-style military drones that need no human operator and can decide whom to kill on their own.
Drone warfare is popular with political leaders because it allows them to project force without the political costs of lengthy ground wars. Drones can also offer a more ‘discriminate’ way of killing: an advanced military’s aerial drone can detect potential targets from a distance and, in theory, allow more time to determine whether or not they should be fired upon.
Example questions:
- Are autonomous combat drones ethical?
- Can the decision to kill a human being be determined by an algorithm?
- How would you design a global policy on robotic warfare?
2B) Existential risks of an artificial superintelligence

Advanced AIs tend to be very skilled in a single task or set of related tasks, but weak in most other domains. The next major challenge for AI researchers is to create an artificial general intelligence (AGI) that is more or less as capable as a human being in nearly all domains.
However, the quest to create an AGI, particularly a very advanced one, presents an existential risk to humanity. A very advanced self-learning AI, or superintelligence, would inevitably surpass its creators in intelligence. Should we try to shut it down, it could perceive humanity as a threat to its continued existence and decide to wipe us out.
Philosopher Nick Bostrom has warned of this threat: “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.” Others, including the European Commission, have made efforts to increase regulation of this rapidly growing industry, while Elon Musk has proposed that governments create institutions for the sole purpose of governing the creation of AI.
Example questions:
- What kind of government policy would you propose for AI development?
- How can a policy on AI be effectively enforced?