I just got done watching the “Unknown: Killer Robots” documentary on Netflix and it left me unsettled, so I thought I’d make a post here and see what y’all think. A couple of highlights that got my mind racing:
- "When does the cost of developing AI too fast become too high? At some point, that cost can never be recouped if things get out of hand."
- An ML program designed to generate new life-saving drug compounds instead generated over 40,000 toxic compounds that were "many times more lethal than the VX nerve agent" in about six hours, after a single bit flip.
- "If you always think about what bad actors may do with a given technology, it will probably never be developed. However, it's usually the bad actors that get a hold of it first. Every system can be corrupted."
Now, the tone of this documentary was decidedly militaristic, focused on AI-piloted machines with kill capabilities. The most terrifying thought for me was the phrase "without any human intervention" that kept coming up in the discussions. I don't believe that AI should ever be deployed in any context where it might be allowed to maim or kill a human being. Asimov tried to address this with his famous Laws of Robotics, the first of which is "a robot may not injure a human being or, through inaction, allow a human being to come to harm." And yet the world's largest militaries want to achieve AI dominance first to retain superiority over their enemies. Some of them will say it's really about saving soldiers' lives and keeping humans off the battlefield, but that's a crock if you ask me. They don't talk about collateral damage at all, and their assumption is that AI is a) infallible and b) controllable.

This was the most frightening aspect of the documentary to me by far: our military and scientists are developing this technology as fast as they can to make sure that someone else doesn't get it first. Even if we only wanted it for humanitarian uses, the military wants it first because other governments are racing toward the same goal. Each country is goading the others into developing AI as fast as possible, with no concern for the consequences once it "goes live". And by then it might well be too late to stop it.
So what can be done in this theater of operations? Here are some restrictions I'd like to see placed on real-world AI that directly impacts human lives (whether in combat applications or things like driving or surgery):
- AI is great as a super-powered research assistant. It can do many things much, much faster than humans can, so having an AI on your side to analyze situational information and recommend actions is a worthwhile use, I believe. However, no AI should be given the authority to cause a human to "come to harm," in whatever form that takes. The buck should always stop with a human being who is accountable for acting on the AI's recommendations.
- AI weapons should only be allowed to engage other AI weapons. Drones could only disable or destroy other drones. AI should be barred from attacking humans, as well as any vehicles with onboard human pilots; remotely piloted vehicles would be fair game.
- AI weaponry should be developed as defensive weaponry first and foremost, responding only to acts of aggression that aren't dissuaded by warnings or other deterrents. Force should be tolerated only in defense, never for offensive maneuvering. That would still provide the deterrent of having "AI on the battlefield" to stalemate or prevent autonomous AI attacks, but with less collateral damage and fewer ethical problems around aggression and seizing territory that doesn't fall under your jurisdiction.
I think all this boils down to my opinion that AI should only be used to augment humanity's capabilities, not to replace us. The decision to take a life or harm another human being is uniquely human, and it should remain so.
What do y’all think? Am I way off base here or being too naive? I’d love to hear your thoughts!