Where does AI utility end and harm begin?

I just got done watching the “Unknown: Killer Robots” documentary on Netflix and it left me unsettled, so I thought I’d make a post here and see what y’all think. A couple of highlights that got my mind racing:

  • “when does the cost of developing AI too fast become too high? At some point, that cost can never be recouped if things get out of hand.”

  • An ML program designed to discover new life-saving drug compounds instead generated roughly 40,000 toxic compounds, some “many times more lethal than the VX nerve agent”, in about 6 hours after a single bit flip (see the sketch right after this list for what that kind of flip can mean).

  • “If you always think about what bad actors may do with a given technology, it will probably never be developed. However, it’s usually the bad actors that get a hold of it first. Every system can be corrupted.”
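
For anyone wondering what a “single bit flip” can mean in practice, here’s a toy sketch in Python. To be clear, this is entirely made up and has nothing to do with the actual drug-discovery system from the documentary; it just shows how flipping one value in a generative search’s scoring function turns “avoid toxicity” into “seek toxicity”:

```python
# Hypothetical scoring function, not the real system: one flipped value
# inverts what the surrounding search loop optimizes for.

def score_candidate(efficacy: float, toxicity: float,
                    penalize_toxicity: bool = True) -> float:
    """Fitness score for a generated compound; the search keeps high scorers."""
    sign = -1.0 if penalize_toxicity else 1.0  # the single "bit" in question
    return efficacy + sign * toxicity

# Two made-up candidates as (efficacy, toxicity) pairs.
candidates = [(0.9, 0.1), (0.4, 0.95)]

# Penalty on: the benign compound scores highest.
print(max(candidates, key=lambda c: score_candidate(*c, penalize_toxicity=True)))
# Penalty flipped: the same pipeline now favors the most toxic compound.
print(max(candidates, key=lambda c: score_candidate(*c, penalize_toxicity=False)))
```

The scary part is that nothing else in the pipeline has to change.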

Now, the tone of this documentary was decidedly militaristic, focused on AI-piloted machines with kill capabilities. The most terrifying thing for me was the phrase “without any human intervention” that kept coming up in the discussions. I don’t believe AI should ever be deployed in any context where it might be allowed to maim or kill a human being. Asimov tried to address this with his famous Laws of Robotics, the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

And yet the world’s largest militaries want to achieve AI dominance first to retain superiority over their enemies. Some of them will say it’s really about saving soldiers’ lives and keeping humans off the battlefield, but that’s a crock if you ask me. They don’t talk about collateral damage at all, and their assumption is that AI is a) infallible and b) controllable. That was the most frightening aspect of the documentary by far: our militaries and scientists are developing this technology as fast as they can to make sure someone else doesn’t get it first. Even if we only wanted AI for humanitarian and utilitarian uses, each country is goading the others into developing it as fast as possible, without concern for the consequences once it “goes live”. And by then it may well be too late to stop it.

So what can be done in this theater of operations? Here are some restrictions I’d like to see placed on real-world AI that directly impacts human lives (whether through combat applications or things like driving or surgery):

  1. AI is great as a super-powered research assistant. It can do many things much, much faster than humans can, so having an AI on your side to analyze situational information and recommend actions is a worthwhile use, I believe. However, no AI should be given the authority to cause a human to “come to harm,” whatever form that takes. The buck should always stop with a human being who is accountable for acting on the AI’s recommendations (see the toy sketch after this list for what that gate could look like).

  2. AI weapons should only be able to combat other AI weapons: drones could only disable or destroy other drones. AI should be prevented from attacking humans, as well as any vehicle with a human pilot onboard. Remotely piloted vehicles would be fair game.

  3. AI weaponry should be developed as defensive weaponry first and foremost, responding only to acts of aggression that aren’t dissuaded by warnings or other forms of deterrence, and never used for offensive maneuvers. That would keep the deterrent value of having “AI on the battlefield” to stalemate or prevent autonomous AI attacks, but with less collateral damage and without the ethical problems of aggression and seizing territory that isn’t already under your jurisdiction.
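
To make point 1 concrete, here’s a toy sketch (every name and the whole setup are hypothetical, not from the documentary) of what a “buck stops with a human” gate could look like: the AI can only recommend, and nothing happens without a named, accountable operator’s sign-off:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """What the AI is allowed to produce: advice, never action."""
    action: str
    rationale: str
    confidence: float

def request_human_approval(rec: Recommendation, operator: str) -> bool:
    """Present the recommendation to a named, accountable human for sign-off."""
    print(f"[{operator}] AI recommends: {rec.action} "
          f"({rec.confidence:.0%} confidence) because {rec.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def act_on(rec: Recommendation, operator: str) -> None:
    # There is deliberately no code path that acts without a human decision.
    if request_human_approval(rec, operator):
        print(f"Executing '{rec.action}', authorized by {operator}.")
    else:
        print("Declined by operator; no action taken.")

act_on(Recommendation("reroute convoy", "bridge damage detected", 0.87),
       "operator_on_duty")
```

The point isn’t the code itself, it’s the shape: the accountable human is a required argument, not an optional flag.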

I think all this boils down to one thing: AI should only be used to augment humanity’s capabilities, not to replace us. The decision to take a life or harm another human being is uniquely human and should remain so.

What do y’all think? Am I way off base here or being too naive? I’d love to hear your thoughts!

I don’t think I’m nearly well-versed enough in this topic to really say anything interesting, but I agree with what you’re saying, especially the “AI should only be used to augment humanity’s capabilities” bit, since that’s definitely a hot topic at the minute with corpos trying to replace striking writers and all.

And I think that’s what I’m getting at: should AI ever be given complete autonomy such that it could replace humans in certain roles?

I can think of a couple of applications where that might be desirable:

  • medical or surgical expert systems that need to make critical decisions to save a life. In some cases, not having an emotional response would mean a life gets saved; in others, the collateral damage (for example, loss of a limb or partial paralysis) might be something the patient would never have agreed to if they were conscious.

  • operating conditions that are too hazardous for humans, such as space or mining. The robots would need to be intelligent enough to react to rapidly changing events and catastrophic failures, but they would also need to be constrained so they couldn’t injure humans and couldn’t develop the kind of self-awareness (“I’m not doing this because I could be terminated”) that leads to rebellion.

The main assumption that humans make is that we could control AIs and box them in so they wouldn’t be able to rebel. But I don’t think that is possible for true artificial intelligence.
