As someone who is also doing a research project on AI, I had to fill out an ethics form for the ethics board to make sure I wasn't performing some kind of horrid experiment. Mine didn't involve any humans so it was pretty straightforward, but there were still like four or five pages of nothing but "What is the scope of human involvement in your project?", "Can you guarantee that all human subjects are aware of the experiment and have given written consent?", "Could the data be used for harm, and if so, what measures are you taking to minimize that harm?", "Is the research being performed in a public forum?" and stuff like that.
So what I'm wondering is, how the fuck did a research project like this get approved by any sane ethics committee?
I did some research in psychology, where you often _can't_ fully inform your participants up-front, and sometimes do have to intentionally upset them. Lying to participants will be accepted by the ethics board if you can justify that the deceit would cause minimal distress/harm and you pinky pwomise to debrief your unwitting subjects afterward. Most commonly, you rope them in under false pretenses (i.e., lie about the purpose of the study, but still get their consent to be part of "an experiment") rather than covertly fucking with John Q. Public on the street. That said, you can be given permission to proceed without any consent at all when the risks are trivial and it's absolutely necessary that participants be unaware.
Further, some experiments are "messy" and require the researcher to cause some non-trivial amount of harm or distress (medical trials, studies on fear/stress responses, etc.); in those scenarios it's the researcher's job to argue to the ethics board that the ends justify the means. The buck typically stops at causing "lasting damage" (breaking bones, inducing mental disorders, poisoning - anything likely to stick around for a while in spite of a standard debriefing); anything short of that is usually fair play if you've got a compelling hypothesis. It's considered perfectly ethical to knowingly harm some of your test subjects as long as you clean up the mess afterward and are up-front about it by warning your participants that they will get hurt/upset before they opt in.
The novelty here is that they've *allegedly* done both, which is generally a no-no; you can harm participants, you can dupe participants, but you should never dupe a participant into harm. The obvious justification from the ethics board here would be that unhappiness at being "tricked" by an AI is trivial and not real harm (minimal risk), and that consent can't be sought without invalidating the experiment - i.e., the risk is low enough and the deception is necessary for the experiment to work.
If it were me, my chief concern wouldn't have been the lying, but that they outright broke the subreddit's rules. Not because redditors matter, but because it makes the researchers and the uni (and their entire field of study, to some degree) look like assholes to a big audience, which is a much more real issue than Timmy from Reddit being upset that he fell for a bot's appeals to pathos.
I don't think this was 100% ethical conduct because of that reputation risk, but I don't much care; the results are still useful and worth it. Depending on how far this spreads in academia, we might not see any future experiments like it because too many people broke out in histrionics - same as what happened after Milgram. Sad!