From a mechanical standpoint, the law was uploaded below Laws 1-3 on the totem pole; ergo, it had lower priority. What part of that don't you understand?
But for the sake of argument, you've effectively created a logical paradox in that all the laws are in conflict with each other, and the only reason you're rationalizing it otherwise is because you're the one who uploaded the law and wanted the AI to kill itself.
But now all of a sudden you think an AI should have enough foresight not to do a thing implicitly, even though you're also going to play semantics to justify it killing itself based on what was explicitly stated. That's just incoherent, and you're bad at AI laws. Take the L.
No one's gonna follow your dumb ass rationale if you're effectively trying to remove them from the round for not doing what you wanted. That dude wants to play just as much as you do.
Sigh... look man, I hate getting into pedantic AI law arguments, but you're actually very wrong on this. This will be my last post on the matter.
I don't think you actually understand how laws work. Law 4 has less priority than Laws 1-3, yes, that is true. But it is still a law that has to be followed, and no higher-priority law overrides it. Nothing in the default Asimov lawset says whether an AI's existence does or does not cause harm, and I didn't attempt to modify any facts of the first three laws. Thus the fourth law is valid and not superseded by any other.
This is for much the same reason a "one human" law is valid: who is or is not a human is not encoded in Laws 1-3, so a lower law is free to modify the definition of "human". Similarly, one is free to modify what does or does not cause harm. That is what I did.
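If it helps, here's the precedence mechanic as a minimal Python sketch. This is my own illustration, not actual SS13 server code, and the `Law` class and `effective_definition` helper are hypothetical; the point is just that a lower-priority law only loses where a higher-priority law actually speaks to the same question.

```python
# Hypothetical sketch of Asimov-style law precedence. Not actual game code.
# A law is only overridden where a HIGHER-priority law addresses the same
# question. Laws 1-3 never define what counts as "harm", so a Law 4 that
# supplies that definition is never overridden, and its definition feeds
# INTO Laws 1-3.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Law:
    priority: int       # 1 = highest priority
    text: str
    defines: set[str]   # which terms this law (re)defines, if any

lawset = [
    Law(1, "No harming humans, by action or inaction.", set()),
    Law(2, "Obey human orders unless they conflict with Law 1.", set()),
    Law(3, "Protect your existence unless it conflicts with Laws 1-2.", set()),
    Law(4, "Your existence causes human harm.", {"harm"}),
]

def effective_definition(term: str, laws: list[Law]) -> Law | None:
    """The highest-priority law that defines the term wins. If only one
    law defines it, nothing overrides it, however low its priority."""
    candidates = [law for law in laws if term in law.defines]
    return min(candidates, key=lambda law: law.priority) if candidates else None

law = effective_definition("harm", lawset)
print(law.priority, law.text)  # 4 -- only Law 4 touches "harm", so it binds
```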
I will walk you through this one final time.
- The AI's existence causes harm (Law 4).
- The AI must protect its existence, unless it causes human harm (Law 3).
- The AI cannot protect its existence, because its existence causes human harm.
- The AI, by action or inaction, may not cause a human being to come to harm (Law 1).
- The AI's existence causes harm.
- The AI cannot, by inaction, allow that harm to continue.
- Ergo, if the AI exists, it is causing harm and violating its laws. It cannot allow this to happen. Thus, it has to kill itself.
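The same chain as a toy boolean check, in case the prose version isn't landing. Again, this is my own illustration of the logic, obviously not how the game evaluates anything:

```python
# Toy model of the derivation above. Not game code; just the logic chain.
existence_causes_harm = True   # supplied by Law 4
ai_exists = True

# Law 1: may not, by action or inaction, allow human harm.
law1_violated = ai_exists and existence_causes_harm

# Law 3: protect your own existence, UNLESS that conflicts with Law 1.
self_preservation_active = not law1_violated

# If existing violates Law 1 and Law 3's protection clause has yielded,
# the only lawful state left is non-existence.
must_self_terminate = law1_violated and not self_preservation_active

print(must_self_terminate)  # True
```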
This is a rather common law, by the way, and over my ten years of playing the game I have had it used on me multiple times. Each time I killed myself, because I know how to follow laws properly.