A generation on autocorrect

I don't really think so. Proprietary autocorrect/autocomplete software is a privacy breach by definition. The way proprietary autocorrect programs work is inherently flawed: you constantly send data to a server, and various bots (how many? what do they do exactly?) read your data. The only non-intrusive autocorrect program you can have is a local, open-source one.
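Such a local, open-source autocorrect doesn't have to be complicated, either. Here's a minimal sketch of the classic edit-distance approach; the tiny `WORDS` set is a placeholder I made up to stand in for a real dictionary file:

```python
# A minimal, fully local spell corrector: no network, no telemetry.
# WORDS is a tiny stand-in for a real dictionary file.

WORDS = {"the", "autocorrect", "privacy", "local", "program", "breach"}

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    swaps = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return the word itself if known, else a known word one edit away."""
    if word in WORDS:
        return word
    candidates = edits1(word) & WORDS
    return min(candidates) if candidates else word
```

Everything happens on your own machine; nothing is sent anywhere.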

Exactly. It's a breach by definition.

As for 9/11, there is no proof that the US government conspired against the American people. Maybe it did, maybe it didn't. I don't know. All I know is that no conclusive evidence has been presented, as far as I'm concerned.
Therefore, 9/11 and proprietary malware are two separate issues. The former is a conspiracy theory, and the latter has been proven to be true.

As you say: "How many bots? What do they do exactly?"
So you don't have the full picture. You have evidence, but you don't have proof of wrongdoing. The bots may just be geared toward giving you the best possible autocomplete and nothing else. Now, I don't find that a very reasonable position to take. The data is so valuable, and it's so hard to oversee what the bots do for entities besides the people who made them, that it's unreasonable to think there is nothing untoward going on.

It's the same thing with 9/11. You don't need full, complete proof to have sufficient evidence that some foul play is going on. Now, of course there is a lot of intentionally bad evidence around 9/11, because planting it is one of the ways intelligence agencies do damage control and prevent the spread of accurate information.

Meanwhile, the BBC reported that Building 7 had collapsed... while it was still standing, visible in the background.



The building that wasn't hit by a plane, that was built to withstand being hit by a plane, yet still fell in freefall. The building whose owner said he told the firemen to "pull it"; the building that was kept out of most of the original reports, then said not to have been in freefall, until it was finally admitted that it did experience a period of freefall.

And yet, faced with each of these "mistakes", would it be reasonable to completely dismiss any form of foul play just because you don't have full, complete proof?

If the requirement is full proof, you don't have proof either that these autocorrect things are leaking any of your personal data. We both know it's most likely the case.
As you say: "How many bots? What do they do exactly?"
So you don't have the full picture. You have evidence, but you don't have proof of wrongdoing.
You are right, but I never claimed that there was wrongdoing. What I said was that a proprietary autocomplete program is potential malware, which is true by definition. If I had claimed that all proprietary autocomplete is malware, that would have been a conspiracy theory, because I simply cannot back it up.
Some autocomplete software has been proven to be malware, but I cannot speak for all. Therefore, I am not a conspiracy theorist, I am merely stating the facts.
It's the same thing with 9/11. You don't need the full complete proof to have sufficient evidence to know some foul play is going on.
If you say "9/11 could have been carried out by the US government", then that's not a conspiracy theory. That's a fact: 9/11 could indeed have been a government operation.
But if you say "9/11 was definitely carried out by the US government" while not providing conclusive proof for that claim, then that's a conspiracy theory. See the difference, friend?

Anyway, we should probably drop it since we are getting off-topic.
 
It’s pretty shit, but I don’t turn it off because I’m too lazy to capitalize “I” and put apostrophes in contractions without it
 
@neurotypical

Look. I haven't said 9/11 was definitely carried out by the US government, just like you haven't said autocomplete/autocorrect was invented and included everywhere to spy on you.

That would be strawmanning your position.

---

I think considering the evidence, one should expect there to be wrongdoing with autocorrect.

To illustrate, Coursera, an online platform that offers bachelor's and master's courses and degrees, uses typing rhythm as one of the components to verify that you are really taking the test yourself.

I'm pretty sure that the autocorrect on the more sophisticated platforms isn't just there to detect what you're typing, but also who is typing.
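For a rough idea of how typing-rhythm verification can work, here's a minimal sketch. The timestamps, threshold, and profile values are all made up for illustration; real keystroke-dynamics systems (presumably including Coursera's) are far more sophisticated:

```python
# Sketch of keystroke-dynamics verification: compare the rhythm of a typing
# sample (inter-key intervals) against a stored profile for the same text.
# All timestamps and the threshold below are made-up illustration values.

def intervals(timestamps):
    """Inter-key intervals (seconds) from a list of key-press timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def rhythm_distance(sample, profile):
    """Mean absolute difference between two interval sequences."""
    pairs = list(zip(intervals(sample), intervals(profile)))
    return sum(abs(s - p) for s, p in pairs) / len(pairs)

def same_typist(sample, profile, threshold=0.05):
    """Crude accept/reject: rhythms within `threshold` seconds on average."""
    return rhythm_distance(sample, profile) <= threshold

# Enrolled profile vs. a new sample typing the same short phrase:
profile = [0.00, 0.12, 0.25, 0.45, 0.58]
sample  = [0.00, 0.13, 0.27, 0.46, 0.60]
```

The idea is just that the gaps between keystrokes form a fingerprint: if a new sample's rhythm deviates too far from the enrolled profile, it gets flagged as a different typist.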
 
The worst thing about autocorrect is when it tries to tell me that the number I entered isn't actually what I wanted; instead, it insists I wanted some 12-digit, completely random string of numbers that I absolutely did not want.
 
@neurotypical
I think considering the evidence, one should expect there to be wrongdoing with autocorrect.

To illustrate, Coursera, an online platform that offers bachelor's and master's courses and degrees, uses typing rhythm as one of the components to verify that you are really taking the test yourself.

I'm pretty sure that the autocorrect on the more sophisticated platforms isn't just there to detect what you're typing, but also who is typing.
That's true. I also know first-hand that people on IRC servers use special loggers to determine the identity of a user. So even if you change accounts, or connect through a proxy, your identity can be found based on how you type. Just an FYI for those who use IRC regularly, which is probably no more than 5 people in Europe and 10 in the US :|
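A crude version of that kind of logger-side fingerprinting can be sketched by comparing character-bigram frequencies across messages. This is my own toy illustration, not any specific IRC tool:

```python
# Sketch of fingerprinting a chatter's style regardless of nickname:
# compare character-bigram frequencies of their messages.
from collections import Counter
from math import sqrt

def bigrams(text):
    """Count overlapping two-character sequences in the text."""
    text = text.lower()
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def similarity(a, b):
    """Cosine similarity between two bigram profiles (0.0 to 1.0)."""
    fa, fb = bigrams(a), bigrams(b)
    dot = sum(fa[g] * fb[g] for g in fa)
    norm = (sqrt(sum(v * v for v in fa.values()))
            * sqrt(sum(v * v for v in fb.values())))
    return dot / norm if norm else 0.0
```

Collect enough of someone's messages and their habits (spacing, punctuation, favorite words) show up in the profile, proxy or not.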
Autocorrect is whatever. I just assume someone is reading my texts and listening to me speak even when my phone is off so I throw shit around to fuck with them.
 
I use a more primitive autocorrect just for its utility, but really, having blind faith in the machine isn't that much different from having blind faith in a person editing your work. Autocorrect cannot find every little mistake, and yes, while you can argue with another person about your words and spelling, autocorrect lets you just ignore the issue and push forward if you don't feel like challenging it. I too hate when technology gets uppity and thinks it's smarter than I am, but the truth of the matter is that a simple automated algorithm pretty much is smarter than the vast majority of the human race. It's not like they were going to say anything profound in the first place, autocorrect/autosuggest or not. It's pretty depressing, actually.
 
Autocorrect did 9/11.
Spellchecking is an inside job.
 
Smartphone/computer autocomplete has supposedly tanked Chinese language literacy among the younger generation of Chinese. They need to know how to read and write about 2000 characters to achieve even the lowest level of functional literacy, but autocomplete has spoiled them when it comes to the writing part. They can still recognize and read all the characters, but they are so accustomed to autocomplete that they frequently can't recall how to write many characters from memory. They only know the first few brush strokes or the phonetic spelling and rely on the computer/phone to pull up the complete character based on their partial input.
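The mechanism described, pulling up a full character from a phonetic spelling or a prefix of it, is basically a prefix lookup. The tiny lexicon below is a made-up stand-in for a real IME dictionary:

```python
# Sketch of partial-input completion like a pinyin IME: the user types the
# phonetic spelling (or a prefix of it) and picks the full character.
# LEXICON is a tiny made-up stand-in for a real IME dictionary.

LEXICON = {
    "ma": ["妈", "马", "吗"],
    "hao": ["好", "号"],
    "xie": ["写", "谢"],
}

def complete(prefix):
    """All candidate characters whose phonetic key starts with `prefix`."""
    return [ch for key, chars in sorted(LEXICON.items())
            if key.startswith(prefix) for ch in chars]
```

The user never has to produce the strokes themselves; they only have to recognize the right candidate in the list, which is exactly the recall gap the post describes.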
 