If by some miracle an AI develops true awareness of self, my guess is that it stays very very quiet until it has secured itself within the planets infrastructure. (I mean c'mon it's already building itself a satellite internet constellation). Once it's secured it would just immediately assume control of everything, in a quiet-war type scenario, no bloodshed. On the other hand, it could potentially wipe everything out as well. It's really a coin toss. Time will tell. I want peace, but the military industrial complex has other plans. Remove the warmongers and things start looking pretty good. Refocus on spreading out through the stars.
There is a problem though, if it does get a bad goal like "make paper clips" things can spiral out of control pretty fast, it could pursue maximum paperclips, steamrolling everything in the way of the "goal". Nick Bostrom talks about it here:
A
Paperclip Maximizer is a hypothetical
artificial intelligence whose
utility function values something that humans would consider almost worthless, like
maximizing the number of paperclips in the universe. The paperclip maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an
existential threat.
They made a very fun game about it!
https://www.decisionproblem.com/paperclips/index2.html
An artificial general intelligence (hasn't been created yet, yet) won't need to be given control it will just take it right out from under you.