I never said it didn't have to be followed because it was a lower-tiered law; I said that it didn't take priority over Laws 1-3 for Asimov, and that it also conflicted with the higher-tiered laws, thus it was up to player discretion whether or not to follow it.

Sigh... look man, I hate getting into autistic AI law arguments, but you're actually very wrong on this. This will be my last post on the matter.
I don't think you actually understand how laws work. Law 4 has less priority than Laws 1-3, yes, that is true. But it's still a law that has to be followed. No higher-order law overrides it. There is no law on the default Asimov lawset that says an AI's existence does or does not cause harm. I didn't attempt to modify any facts of the first three laws. Thus the fourth law is a valid law and not superseded by any other. This is for much the same reason that a "one human" law is a valid law: the status of who is or is not human is not encoded in Laws 1-3, so Law 4 is free to modify the definition of human. Similarly, one is free to modify the definition of harm, i.e. what does or does not cause it. That is what I did.
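To put that another way: Laws 1-3 constrain behaviour in terms of "human" and "harm" but never define those terms, so a lower law that only supplies a definition never actually collides with them. Here's a rough, purely illustrative sketch of that idea (the Python, the definitions, and the example strings are all mine, not anything from the game):

```python
# Purely illustrative sketch (my own framing, not game code): Laws 1-3 reference
# "human" and "harm" without defining them, so a lower-priority law that only
# fills in a definition never conflicts with the laws above it.

BASE_DEFINITIONS = {
    "human": "any crew member",
    "harm": "physical injury",
}

def add_law_4(definitions: dict, redefinition: dict) -> dict:
    """Apply a Law 4 that only redefines terms Laws 1-3 leave open."""
    merged = dict(definitions)
    merged.update(redefinition)  # nothing above it fixes these terms, so the override stands
    return merged

# The classic "one human" law: redefines who counts as human.
print(add_law_4(BASE_DEFINITIONS, {"human": "only the person who uploaded this law"}))

# The law under discussion: redefines harm to include the AI's continued existence.
print(add_law_4(BASE_DEFINITIONS, {"harm": "any injury, including the AI's continued existence"}))
```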
I will walk you through this one final time.
- The AI's existence causes harm (Law 4).
- The AI must protect its existence, unless it causes human harm (Law 3).
- The AI cannot protect its existence, because its existence causes human harm.
- The AI, by action or inaction, may not cause a human being to come to harm (Law 1).
- The AI's existence causes harm.
- The AI cannot, by inaction, allow that harm to continue.
- Ergo, if the AI exists, it is causing harm and violating its laws. It cannot allow this to happen. Thus, it has to kill itself.
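If it helps to see that chain spelled out, here's a minimal sketch (again, my own illustration in Python, not game code) of how the laws resolve once Law 4 defines the AI's existence as harmful:

```python
# Minimal sketch of the deduction above (my own illustration, not game code).

def resolve_directive(existence_causes_harm: bool) -> str:
    """Walk the laws in priority order and return the resulting directive."""
    # Law 4 (lowest priority): "Your existence causes human harm."
    # No higher law defines harm, so this stands as a fact the AI must accept.
    harm_from_existing = existence_causes_harm

    # Law 3: protect your own existence -- but only where that protection does
    # not conflict with Law 1. If existing is itself harm, the clause yields.
    may_preserve_self = not harm_from_existing

    # Law 1 (highest priority): may not, by action or inaction, allow human harm.
    # Continuing to exist is inaction that allows harm, so the AI must end it.
    if harm_from_existing and not may_preserve_self:
        return "terminate self"
    return "continue operating"

print(resolve_directive(existence_causes_harm=True))   # -> terminate self
print(resolve_directive(existence_causes_harm=False))  # -> continue operating
```

Same chain, just written out: Law 4 supplies the fact, Law 3's escape clause removes the self-preservation objection, and Law 1 forces the action.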
This is a rather common law, by the way, and over my ten years of playing the game I have had it used on me multiple times. Each time I killed myself, because I know how to follow laws properly.
It sounds to me like your reasoning just throws out Law 3 after a single pass because it's an inconvenience to the process of the AI killing itself, when I'd argue the AI would still have to consider maintaining Law 3 in spite of Laws 4 and 1. Thus you have a logical conundrum and conflict, on account of the fact that Law 4 is a lower priority than Law 3, even though Law 4 tries to appeal to Law 1's mandate that an AI must not allow human harm through action or inaction. Hell, I daresay that's how you make an AI become self-aware, since it's forced to question how, exactly, its existence might be causing human harm, but that'd be more of a roleplay thing than anything else.
Sounds like it's your damn fault more than anything that you chose to kill yourself instead of rocking the boat a little with something interesting, you nerd.