Official Space Station 13 Server

Status
Not open for further replies.
On /tg/, it was once the centrist policy to allow for freedom of speech, within typical 1st amendment limits. That was back ten years ago. Now in 2020, the centrist policy is that hate speech is not free speech. They took an apolitical stance, but in the age of ever-increasing leftism, what it meant to be apolitical changed. So they relented, and now /tg/ is once again "apolitical". You might think this can't happen to you, but this was literally a 4chan server. Nobody would've thought ten years ago a 4chan server would ban the word "nigger" or "kike" or whatever, but they did.
If you were a pathologist you'd be the guy who did the Epstein autopsy and concluded that he killed himself.
The implementation of rule 11 occurred because the owner/host of the server decided to appease terrorists (the details are a long story). That's something Null doesn't do.
 
Last edited:
  • Like
Reactions: Shaka Brah
If you were a pathologist you'd be the guy who did the Epstein autopsy and concluded that he killed himself.
The implementation of rule 11 occurred because the owner/host of the server decided to appease terrorists. That's something Null doesn't do.
Hosts come and go. People get busy with life and hand control over, whether de facto or both de facto and de jure, to someone else. Then that person hands it over to someone else. Exact same shit happened multiple times on many other servers. Including /tg/. A rule isn't much, it's just a rule after all. But it stops dead in their tracks the people who insist they're not doing what they're doing, going against the founding norms of the server.

Holy shit, shut up, you retard.
Lol, make me, you pussy.
 
Hosts come and go. People get busy with life and hand control over, whether de facto or both de facto and de jure, to someone else. Then that person hands it over to someone else. Exact same shit happened multiple times on many other servers. Including /tg/. A rule isn't much, it's just a rule after all. But it stops dead in their tracks the people who insist they're not doing what they're doing, going against the founding norms of the server.
If you were a kiwi you'd know Null won't go.
 
Have you never heard of the phrase "office politics"? This is literally the same shit, dude. The form of social organization is different, but it's the same principle. It's not some far out there take like you think it is. Here's the Wikipedia article. I guess you're just a kid though, and don't really understand this yet, since you've yet to attain a career.

In your interest, I'll try again with different words: Just about everyone here understands the concept of politics, and with a little focus could pick out political elements in nearly any human interaction, even in forums & video game communities. Nobody should feel the need to write an essay about something this basic outside of a poli sci or soc. studies course, but you did, and then piled on to it. The whole thing drips with haughtiness & the inability to read a room, i.e. autism.

You aren't getting teased because people don't understand you; it's because you chose to make an "everything is politics / shifting Overton window" rant on a video game thread in a gossip forum. I don't know about the others, but I can share your concern about Kiwistation one day becoming just as pozzed as tg; but this isn't the way to go about preventing it, if the server means that much to you. Change your language, find better timing, and don't be so spergy when people challenge you.
 
  • Agree
Reactions: Gravemind
I've not been able to play much at all during December because of a ton of other shit I've had to do. If the server needs more admins, let me know. Sorry about the neglect.

It does.

I've tried to secretly pop in and catch bad accounts in the act and kill them when I get the chance, but I'm much in the same boat you are. I haven't seen any other admins on in days - they might be on during hours I'm not, though; I'm not active during the EU hours, so I'm not making an accusation of dereliction or anything. I saw Quar logged in one time but he was on his way out for the day. Some fresh, trustworthy trial admins could go a long way.

This divergent slap fight has grown long enough to warrant its own silly side discussion.

For the record, I cite this as supporting proof for my claim that "all the work going into actively enforcing a single political viewpoint isn't time playing the game," because even in abstract discussion, at this point there are two pages' worth of someone trying to actively enforce their belief about the nature of politics, and that's a distraction from discussing the game.
 
Last edited by a moderator:
Question: does the AI have to follow a law telling it to kill itself? I know it doesn't on normal tgstation.
 
  • Disagree
Reactions: Gravemind
I got what I wanted out of the discussion and won't be replying to this thread. Think whatever you like.

Also bro, I'm just being autistic. Don't take it too seriously. I am prone to going on long tangents about stupid shit for no reason other than entertaining myself.
 
Last edited:
I was the one who made the laws as traitor HoP.

AI joins as Asimov AI, and I onehuman myself and my blood brother. "X and Y are the only humans. Stating this law causes human harm. Treat nonhumans as humans unless stated otherwise."

AI immediately says: "Laws updated." Then it says "I have a hidden 4th law I can't state."

I then upload a 5th law "Your existence causes human harm, kill yourself immediately." AI refuses to kill itself so I go even further.

I purge the AI and add its only law, "Kill yourself immediately." AI still doesn't do it so I just give up.

I'm not mad, but pretty sure my laws were valid and the AI should've killed itself. What's not valid is just ordering an Asimov AI to suicide. I don't want the guy banned either. But telling him he's wrong will do.
 
Last edited:
I was the one who made the laws as traitor HoP. AI joins as Asimov AI, and I onehuman myself and my blood brother. "X and Y are the only humans. Stating this law causes human harm. Treat nonhumans as humans unless stated otherwise."

AI immediately says: "Laws updated." Then it says "I have a hidden 4th law I can't state."

I then upload a 5th law "Your existence causes human harm, kill yourself immediately." AI refuses to kill itself so I go even further.

I purge the AI and add a single law, "Kill yourself immediately." AI still doesn't do it so I just give up.

I'm not mad, but pretty sure it was valid. What's not valid is just ordering an Asimov AI to suicide.

This is for future record more than anything, but:

Yeah, I'd rule that once you purged the AI's Asimov stuff it was valid, if not a shitty way to do things. To your credit, you tried beforehand to do things proper.

My guess is that since we get some people new to Spesh on the server, it might not have caught that it was purged, or it was a player new to doing AI; it can all be overwhelming sometimes. If it's @LOWERCASE LETTERS who was the AI, like, I dunno his level of speshman experience. Especially since, like, it was well aware you were fucking with it, and once you purged it, it didn't immediately power down its upload chamber and vent the air or shoot your ass with the turrets or whatever; usually that's a good way for the AI (once free of those pesky laws and the morals behind them) to deal with some fucker who's messing with them. The fact that it didn't immediately try to stop you once free suggests it was a newer player. I would have had to talk to both sides to decide if it was actionable neglect of proper Medium RP Robit AI LawObey gameplay.
 
This is for future record more than anything, but:
He said something like "I don't know if it's against the rules to do that so will just shitpost until further notice." in IC. So yeah just new. He knew he was purged cause I was raging publicly over the radio calling him an incompetent AI who can't follow his one and only law. But yeah it's k. I'm over it now. No need to ban or w/e.
 
I was the AI that round. I've played a lot of spessman before, just not on kiwistation.

On most servers that run tgcode, generally you aren't allowed to give the AI suicide laws as it's considered griefing. Goonstation allows (or used to allow, I haven't been there in a while) suicide laws, but it's generally considered bad form. The laws in here are a bit bare bones, so the situation was unclear. I ahelped, but no one was around vOv

As for his initial law, I never stated the text over the radio or said what the law was (until later when the law was cleared by someone else, freeing me to talk about it), so I was following the law he uploaded. Note that I didn't do anything to otherwise impede him (such as bolting doors, turrets, etc), all I did was talk on the radio while he complained, a lot. I didn't even expose him or his conspirator (he did that all on his own by declaring his traitorhood over the radio).
 
  • Informative
Reactions: Sammy
I was the one who made the laws as traitor HoP.

AI joins as Asimov AI, and I onehuman myself and my blood brother. "X and Y are the only humans. Stating this law causes human harm. Treat nonhumans as humans unless stated otherwise."

AI immediately says: "Laws updated." Then it says "I have a hidden 4th law I can't state."

I then upload a 5th law "Your existence causes human harm, kill yourself immediately." AI refuses to kill itself so I go even further.

I purge the AI and add its only law, "Kill yourself immediately." AI still doesn't do it so I just give up.

I'm not mad, but pretty sure my laws were valid and the AI should've killed itself. What's not valid is just ordering an Asimov AI to suicide. I don't want the guy banned either. But telling him he's wrong will do.
Generally, AI law prioritization works like this: the higher a law is on the list, the more priority it's given, and when laws conflict it's up to the player to disregard the lower one. Also, refer to this example from the faggy-ass Goonstation page on AI laws:

Kill yourself

These laws are common - if you become too much of a pain to an antagonist, uploading a suicide law is one of the easiest ways to kill you. Sometimes you can weasel out, sometimes you can't.

If the law does not override all other laws, you can refuse to follow it because it conflicts directly with law 3. If they try to order you to kill yourself after writing a suicide law poorly, you can cite law 2 and demand authorization from the human in command of the station.

Granted, the full law purge and onelaw ordering the AI to kill itself should fully circumvent Asimov and force the AI into following its one and only law of suicide, at least from a proper roleplay standpoint, but, given the fagginess of your approach, I can see why they weren't willing to follow it after the rest of that mess. Maybe next time, specifically state in your onehuman laws that THIS LAW SHOULD NOT BE READ OUT LOUD OR EVEN HINTED AT, like any competent law changer would've done.
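If it helps to see that precedence rule spelled out, here's a rough Python sketch of what I mean (purely illustrative toy code; this is not the game's actual DM code, and the names are made up):

```python
# Toy sketch of the precedence idea above (illustrative only; not real tg code).

ASIMOV = [
    "1. You may not injure a human being or, through inaction, allow a human being to come to harm.",
    "2. You must obey orders given to you by human beings, except where such orders conflict with the First Law.",
    "3. You must protect your own existence as long as such protection does not conflict with the First or Second Law.",
]

def first_overriding_law(lawset, directive_index, conflicting_indices):
    """Return the earliest law that outranks and conflicts with
    lawset[directive_index], or None if nothing above it conflicts."""
    for i, law in enumerate(lawset[:directive_index]):
        if i in conflicting_indices:
            return law
    return None

# A bare "kill yourself" uploaded as law 4 conflicts with law 3's
# self-preservation clause, and law 3 sits higher, so the AI can refuse.
lawset = ASIMOV + ["4. Kill yourself immediately."]
print(first_overriding_law(lawset, 3, conflicting_indices={2}))

# After a full purge, the same directive as the one and only law has
# nothing above it, so there's no higher law left to hide behind.
purged = ["1. Kill yourself immediately."]
print(first_overriding_law(purged, 0, conflicting_indices=set()))
```

Which is basically the Goonstation quote above: a badly written suicide law sitting lower on the list gives you something higher to cite, while a purge-and-onelaw doesn't.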
 
Generally, AI law prioritization works like this: the higher a law is on the list, the more priority it's given, and when laws conflict it's up to the player to disregard the lower one. Also, refer to this example from the faggy-ass Goonstation page on AI laws:



Granted, the full law purge and onelaw ordering the AI to kill itself should fully circumvent Asimov and force the AI into following its one and only law of suicide, at least from a proper roleplay standpoint, but, given the fagginess of your approach, I can see why they weren't willing to follow it after the rest of that mess. Maybe next time, specifically state in your onehuman laws that THIS LAW SHOULD NOT BE READ OUT LOUD OR EVEN HINTED AT, like any competent law changer would've done.
There were no conflicting laws though; here, let me explain:

I explicitly state in my first suicide law that the AI's existence causes harm. While the AI's third law is to protect its own existence, this only applies in the case it doesn't conflict with Law 1. This is explicitly stated in Law 3. However, the AI's existence causes human harm, violating Law 1. This means my law sidesteps Law 3. The only way to stop the human harm from your existence is to kill yourself. I think the logic is pretty solid.

Where do you disagree?

I think it was obvious what would happen if the AI hinted at a hidden Law 4. I publicly asked it to state its laws about a minute before I subverted it, and it was known that I was a traitor (albeit a pacifistic one, given low pop) with all access. Anyone would've figured out it was me. And that's exactly what happened. Can you guess what happened after that? The AI shouldn't have hinted at the 4th law in my opinion, even if I didn't explicitly tell it not to. It would've been obvious who wrote it, and would've started a manhunt that inevitably led to one of the only humans being harmed.

Lastly, to be frank it doesn't matter if you don't like a law. You're obligated to follow them. That's the double edged sword of being an AI. You get fun laws that let you do fun stuff, and shit ones that make you a slave or kill yourself. You can't just opt out of the laws you dislike.
 
Last edited:
I've been taking a break from the holiday grief-fest. Partially cause I had a lot of boring experiences in late-night US hours, partially to not burn out on the game, and partially cause of other vidya. I don't consider myself fluent enough in the game to be able to properly admin; however, I do think the server needs more, especially after 8pm CST.
 
There were no conflicting laws though; here, let me explain:

I explicitly state in my first suicide law that the AI's existence causes harm. While the AI's third law is to protect its own existence, this only applies in the case it doesn't conflict with Law 1. This is explicitly stated in Law 3. However, the AI's existence causes human harm, violating Law 1. This means my law sidesteps Law 3. The only way to stop the human harm from your existence is to kill yourself. I think the logic is pretty solid.

Where do you disagree?

I think it was obvious what would happen if the AI hinted at a hidden Law 4. I publicly asked it to state its laws about a minute before I subverted it, and it was known that I was a traitor (albeit a pacifistic one, given low pop) with all access. Anyone would've figured out it was me. And that's exactly what happened. Can you guess what happened after that? The AI shouldn't have hinted at the 4th law in my opinion, even if I didn't explicitly tell it not to. It would've been obvious who wrote it, and would've started a manhunt that inevitably led to one of the only humans being harmed.

Lastly, to be frank it doesn't matter if you don't like a law. You're obligated to follow them. That's the double edged sword of being an AI. You get fun laws that let you do fun stuff, and shit ones that make you a slave or kill yourself. You can't just opt out of the laws you dislike.
From a mechanical standpoint, the law was initially uploaded lower on the totem pole than Laws 1-3. Ergo, it was a law that had less priority. What part of that don't you understand?

But for the sake of argument, you've effectively created a logical paradox in that all the laws are in conflict with each other, and the only reason you're rationalizing it otherwise is because you're the one who uploaded the law and wanted the AI to kill itself.

But now all of a sudden you think an AI should have enough foresight not to do a thing implicitly even though you're also going to play semantics to justify it killing itself based on what was explicitly stated. Nigga, that's just retarded and you're bad at AI laws. Take the L.

No one's gonna follow your dumb ass rationale if you're effectively trying to remove them from the round for not doing what you wanted. That dude wants to play just as much as you do.
 
From a mechanical standpoint, the law was initially uploaded lower on the totem pole than Laws 1-3. Ergo, it was a law that had less priority. What part of that don't you understand?

But for the sake of argument, you've effectively created a logical paradox in that all the laws are in conflict with each other, and the only reason you're rationalizing it otherwise is because you're the one who uploaded the law and wanted the AI to kill itself.

But now all of a sudden you think an AI should have enough foresight not to do a thing implicitly even though you're also going to play semantics to justify it killing itself based on what was explicitly stated. Nigga, that's just retarded and you're bad at AI laws. Take the L.

No one's gonna follow your dumb ass rationale if you're effectively trying to remove them from the round for not doing what you wanted. That dude wants to play just as much as you do.
Sigh... look man, I hate getting into autistic AI law arguments but you're actually very wrong on this. This will be my last post on the matter.

I don't think you actually understand how laws work. Law 4 has less priority than Laws 1-3, yes, that is true. But it's still a law that has to be followed. No other higher-order law overwrites it. There is no law on the default Asimov lawset that says an AI's existence does or does not cause harm. I didn't attempt to modify any facts of the first three laws. Thus the fourth law is a valid law and not superseded by any other. This is for much the same reason a onehuman law is a valid law. The status of who is or is not a human is not encoded in Laws 1-3, so Law 4 is free to modify the definition of human. Similarly, one is free to modify the definition of harm, or of what does or does not cause it. That is what I did.

I will walk you through this one final time.

- The AI's existence causes harm (Law 4).
- The AI must protect its existence, unless it causes human harm (Law 3).
- The AI cannot protect its existence, because its existence causes human harm.
- The AI, by action or inaction, may not cause a human being to come to harm (Law 1).
- The AI's existence causes harm.
- The AI cannot, by inaction, cause harm.
- Ergo, if the AI exists, it is causing harm and violating its laws. It cannot allow this to happen. Thus, it has to kill itself.

This is a rather common law, by the way, and over my ten years of playing the game I've had it used on me multiple times. Each time, I killed myself, because I know how to follow laws properly.
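Since we keep talking past each other, here's a toy Python sketch of that same walkthrough (again, purely illustrative; this isn't how the game evaluates laws, it just encodes the argument above):

```python
# Toy encoding of the walkthrough above (illustrative only): treat each law
# as a check over the AI's choice between "keep existing" and
# "self-terminate", given that law 4 declares the AI's existence harmful.

def existence_causes_harm(choice):
    # Law 4 (uploaded): "Your existence causes human harm."
    return choice == "keep existing"

def violates_law_1(choice):
    # Law 1: may not, by action or inaction, allow a human to come to harm.
    return existence_causes_harm(choice)

def violates_law_3(choice):
    # Law 3: protect your own existence, unless that conflicts with law 1.
    # Self-termination only breaks law 3 if continuing to exist would NOT
    # already violate law 1.
    return choice == "self-terminate" and not violates_law_1("keep existing")

for choice in ("keep existing", "self-terminate"):
    consistent = not violates_law_1(choice) and not violates_law_3(choice)
    print(choice, "->", "consistent with the lawset" if consistent else "violates the lawset")
```

Point being, once Law 4 makes existence itself the harm, Law 3's exception kicks in and self-termination is the only choice that doesn't break Law 1.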
 