PubCows - Obviously horseshit journal articles and the journals that publish them

STEM is just as infected (Comp sci and Mathematics as huge examples).
I'd be interested in hearing some examples from mathematics. I've never really heard anything egregious out of the field - the most I've seen is the media overhyping some media-savvy grad student's results because they're a Preferred Demographic.
This wasn't published; someone just posted it on ResearchGate, but I think it still fits here.

DOI: 10.13140/RG.2.2.30181.50404/1
Screenie_4.png

Under summary:
Screenie_5.png
 


If scientists with PhDs don’t understand what Google results are and what they can tell us, what hope do the rest of us have?
Using date-range operators, the author claimed to compare searches from five months in 2019 with the same five months in 2020 — pre-pandemic versus mid-pandemic. They did so not by turning to Google Trends data — the method promoted by the scholars they cited — but by typing their search phrases with date delimiters into ordinary Google search. They then reported the hit count Google displays at the top of the results page as the number of searches made for each phrase. Those numbers were wildly inflated because, well, the search phrases were not enclosed in quotation marks, so Google matched pages containing the words anywhere rather than the exact phrase. All this in an article arguing for the value of tech-enabled "rapid response" research. A culture of "move fast and break things" is common in Silicon Valley, but academics typically work a bit more slowly and carefully precisely to avoid these kinds of errors.
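For anyone curious, the quotation-mark problem is easy to demonstrate. Here's a toy sketch with a made-up mini-corpus (this is illustrative only, not Google's actual matching or counting logic): unquoted, a search engine typically matches any page containing all the words, in any position; quoted, it requires the exact phrase, so the unquoted count is always at least as large and usually much larger.

```python
# Hypothetical mini-corpus for illustration only.
docs = [
    "symptoms of covid fatigue and loss of taste",  # exact phrase present
    "fatigue is a common covid symptom",            # both words, scattered
    "covid case counts rise again",                 # only one word present
    "gardening tips for early spring",              # unrelated
]

def hits_unquoted(corpus, query):
    """Count documents containing every query word, anywhere in the text."""
    words = query.lower().split()
    return sum(all(w in doc.lower() for w in words) for doc in corpus)

def hits_quoted(corpus, query):
    """Count documents containing the query as an exact phrase."""
    return sum(query.lower() in doc.lower() for doc in corpus)

print(hits_unquoted(docs, "covid fatigue"))  # 2 (exact phrase + scattered words)
print(hits_quoted(docs, "covid fatigue"))    # 1 (exact phrase only)
```

On a real web-scale index the gap between the two counts is orders of magnitude, which is exactly how the article's numbers ended up so implausibly high.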
How did this article get through peer review? Like the author, the journal's reviewers and editors seem to have been glamoured by the shine, tech fetishism, and naive empiricism of even the most poorly executed digital methods — without the methodological humility to collaborate with colleagues from information science, or at least check in with someone familiar with the basic workings of tools like Google. If we want to catch scientific missteps like these, we have to recognize that good science takes time. And this mishap shows how badly we need more robust digital literacy education at all stages of life — because if PhDs don't understand what Google results are and what they can tell us, what hope do the rest of us have?
 