At the risk of power leveling, I have a lot of experience reading and writing peer-reviewed literature, though not in the medical field. Although I agree with the article's conclusion that post-operative suicide rates are elevated, I don't think the article does a good job of supporting that claim. Several things about it stick out to me, but my experience is limited enough that I can't tell whether this is just how medical journals operate or whether these are genuine problems with the article. I also haven't taken the time to look for more comprehensive reviews of the data it presents.
That said, here is what I see:
1. The article is incredibly short. It reads more like a high school or lower-division college assignment than a peer-reviewed article, typos and all (though typos do turn up in prestigious journals too). Most of my other issues stem from this. The brevity would be forgivable, except that every section lacks the detail needed to give meaningful context or allow a proper assessment of the study.
2. The methods section contains essentially no methods. The authors state where their data came from and say that rates of psychiatric encounters were calculated from those data, and that's it. This matters because the results section reports p values, which means some kind of statistical test was run (a t-test, I'd guess, given the type of data), but they never say which one. See the sketch after this list for the kind of detail I'd expect them to spell out.
3. They do not address glaring issues such as the year-to-year variability in suicide rates. They don't discuss differences in outcomes between trans women and trans men, or which comparison population is actually appropriate, as has been noted earlier in the thread. They don't discuss age, which has a demonstrable correlation with suicide rates, or economic status, which does too. There are simply a lot of variables they ignore, and no indication of any adjustment for them.
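To make points 2 and 3 concrete, here's roughly the kind of thing I'd expect a real methods section to pin down. To be clear, none of this is from the paper: the column names and both analyses are my own guesses at what a study like this would plausibly involve, written out only to show the difference between an unadjusted comparison and one that controls for the obvious confounders.

```python
# Illustrative sketch only -- not the paper's analysis. Column names
# ("group", "encounters", "person_years", "age", "sex", "year") are invented,
# and the data below is randomly generated stand-in data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),           # 0 = comparison group, 1 = post-op group
    "age": rng.integers(18, 65, n),
    "sex": rng.integers(0, 2, n),
    "year": rng.integers(2015, 2021, n),
    "person_years": rng.uniform(0.5, 5.0, n), # follow-up time
})
df["encounters"] = rng.poisson(0.3 * df["person_years"])

# (a) The naive comparison a bare p value could have come from:
# a two-sample t-test on crude per-person rates, with no adjustment at all.
rate = df["encounters"] / df["person_years"]
t, p = stats.ttest_ind(rate[df["group"] == 1], rate[df["group"] == 0],
                       equal_var=False)
print(f"unadjusted Welch t-test: t={t:.2f}, p={p:.3f}")

# (b) What adjustment would look like: a Poisson regression on encounter
# counts with a follow-up-time offset, controlling for age, sex, and year.
model = smf.glm(
    "encounters ~ group + age + C(sex) + C(year)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()
print(model.summary())
```

The point is that "rates were calculated" tells you nothing about whether the authors did (a), (b), or something else entirely, and the answer changes how much weight those p values can carry.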
Ultimately, the more I look at it, the more I could add to this list, but there isn't much point. My issues with it, combined with the fact that it was published nearly three years ago and attracted essentially no attention, tell me it isn't worth adding to your toolbox of "studies that prove my beliefs are correct". If you try to use this study in a debate with anyone who has a scientific background, they will pick it apart easily, because it isn't a good article.