“A ban on killer robots is the ethical choice”, Ottawa Citizen, July 31, 2015 C9
This opinion editorial, published in the Ottawa Citizen, describes the recent "Open Letter from AI & Robotics Researchers" calling for a ban on offensive autonomous weapons, and explains why I am a signatory. In addition to concerns about a global AI arms race, I argue that a ban on killer robots is the ethical choice because delegating life-or-death decisions to machines crosses a fundamental moral line. I further argue that playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. In the end, the decision whether to ban killer robots is not only a fundamental issue of human rights; it goes to the core of our humanity.
This editorial first appeared in the Ottawa Citizen on July 31, 2015. The published version can be read here.
Given the perceived military success of unmanned drones and other semi-autonomous weapons, many proponents of robotic warfare are pushing for the next phase of development: fully autonomous weapons. Once developed and deployed, these weapons — killer robots, as they have become known — would be able to select their own targets and fire on them, without human intervention.
The Jeopardy!-winning machine creates only the illusion of intelligence, writes Ian Kerr. But maybe that’s the point.
If Facebook were truly committed to protecting privacy, it would start with the assumption that people want less access to their information, not more
Amazon's ironic decision to delete Kindle users' copies of 1984 shows the old rules about copyright, ownership and privacy don't apply to today's technology
A little over a year ago, in one of the most important privacy cases ever heard by the Supreme Court of Canada, Justice Ian Binnie sought to allay concerns that we are sleepwalking into a surveillance society with the following remark: "On these occasions, critics usually refer to 'Orwellian dimensions' and 1984, but the fact is that 1984 came and went without George Orwell's fears being entirely realized, although he saw earlier than most the direction in which things might be heading."
As with most judicial pronouncements with staying power, I still haven't quite figured out what he meant by this. Was the judge simply saying that the worries expressed by privacy advocates are sometimes overblown? Or was his clever, lawyerly use of the word "entirely" a tongue-in-cheek expression of genuine concern?
We can reasonably be suspicious of sliding standards for subjecting Canadian citizens to searches by sniffer dogs -- or the next detection technology
Last Friday, the Supreme Court of Canada released two important privacy-related decisions, both addressing an increasing trend in which Canadian law enforcement agencies use police dogs to conduct random searches of public spaces.
In the coming years, dog searches are sure to be supplemented by electronic noses, sensor networks, artificial intelligence and other highly automated systems that can operate much more inconspicuously and effectively than snoop dogs. If they are subject to the same legal standards set out by the majority of the Supreme Court last Friday, it will be the state and not its subjects who will be engaging in "an elongated stare."
Do Oscar Pistorius's prosthetic legs make him faster? That probably depends on whether you take two-leggedness as the baseline
In a few minutes, it will be midnight. I am sitting on the balcony of my rented San Juan apartment. I just finished reading the IAAF report thwarting the Olympic ambitions of Oscar Pistorius, the South African sprinter whose spirit has captured the imagination of the 24 students I am here to teach.
We started our three-week exchange seven days ago in Ottawa, where 12 of my University of Ottawa law students hosted 12 students from the Universidad de Puerto Rico. Together, these two dozen outstanding students are enrolled in a course that I call "Building Better Humans?" (Please note the question mark in the title.)
One of the goals of this interdisciplinary course is to illuminate the murky line between therapy and enhancement in a world that seems to be drifting from "natural selection" toward what bioethicist John Harris calls "deliberate selection."
What happens to people when science and technology are aggressively used to alter the human condition? What does the future hold for health and humanity as we move from Darwinian evolution to self-directed enhancement medicine?
Amid all the hype about South Korea's proposed robot charter, let's not forget the more important question of whether robots should assume human roles in the first place
A few months ago, as part of its bid to put a robot in every household by 2020, the South Korean Ministry of Commerce, Industry and Energy announced its intention "to draw up an ethical guideline for the producers and users of robots as well as the robots themselves ..." Responsible computer programming, corporate accountability and consumer protection in the electronics sector -- these are all good things.
Pause. Rewind. Replay. What? An ethical guideline for the robots themselves?
This article was first published in the Globe and Mail on January 12, 2004. The published article can be read here.
When Samuel Warren and Louis Brandeis published their landmark Harvard Law Review article "The Right to Privacy" in 1890, they talked about privacy as "the right to be let alone."
At that time, they were responding to the arrival of the camera in society. They could not have imagined the challenges to privacy that exist in the wired (and soon-to-be-wireless) world that we live in today. For example, how are our rights affected when cameras or computer chips are implanted into our bodies? Who are we, actually? Where do "we" end, and the machines begin?