GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

I actually emailed MSI telling them that I do a lot of scientific calcs and Blender (calcs yes, blend no) and my VRAM can get up to 102-104°C almost 24/7, and a tech emailed me back saying that was within spec and normal, and wouldn't damage my card.
I'm not going to backseat-engineer here, they (Samsung, Micron, etc.) obviously know best... But chips reaching 100+°C in plastic packaging really makes you think. They try to cool them, there's the custom actively cooled heatsink using thermal pads to connect and cool... all these rough plastic surfaces. Isn't it time to let that go? With the 386 Intel started putting them in a ceramic package, and with the Pentium 90(?) they introduced the heat spreader before going back to all ceramics with the higher-clocked ones.

For as much bullshit as I think xTreme RGB cooling on stock DDR4 modules is, those modules don't usually run at the speeds of bleeding-edge graphics memory.

This is the GDDR6 (not X) used on a Zotac 2080 Ti. That certainly looks like plastic.
gddr6.jpg

But what do I know.
 
I mean, it depends on the plastic, but they're pretty resistant to heat. The chips thermal throttle at 110°C, and the memory junction temperature is the temperature PHYSICALLY inside of the silicon. So right now my card is at 59 degrees, with the memory junction at 100 degrees. And even just gaming, the backplate of my 3090 is painful to the touch.

I mean, if MSI isn't lying out the asshole when they say it can operate with 100+°C VRAM while the card sits at <60°C, who am I to say that's wrong? I was told it was in spec, so....*shrug*
 
I'm not going to backseat-engineer here, they (Samsung, Micron, etc.) obviously know best...
Are you mental? Big corps NEVER know best. Their engineers know best, sure, but engineers don't get to make decisions; marketing, MBAs, and bureaucrats do. Remember the Intel ex-lead engineer's presentation just last year? It doesn't matter what the engineers know if their knowledge isn't informing the product that consumers actually get.
 
Well, I did something extremely ghetto. I bought about eight 40 mm copper heatsinks. I coated the bottoms in thermal paste (one, to dissipate heat, and two, to prevent any electrical conductance) and wrapped them all nice in thermal tape so no copper was exposed. My VRAM temps dropped from 104 to 98, my card is sitting at a chill 56 degrees, and the hottest recorded spot on the GPU itself is 68 degrees in HWiNFO.

So if you can't replace your backplate and don't mind something like eight copper heatsinks sitting on your GPU backplate, it can get you a 4-6 degree temperature reduction for like 30 bucks. The thermal paste was the most expensive part at $12.

That is, if you can buy a GPU. Any GPU. Which nobody still can.
 
I wouldn't worry. Things there can go up to 120°C and more in plastic packaging, depending on what exactly it is. For our human temperature sensibilities that sounds really hot, but I've seen semiconductors that sat perfectly happy at 90-100 degrees through twenty years and more of almost continuous operation. Again, I wouldn't worry. Also, by the time RAM deterioration becomes an issue you'll have long since replaced the card. The failure mode there is usually that bits get "stuck". You'll notice.

Also, the point at which a surface becomes painful to touch, in the sense that leaving your hand on it for long is uncomfortable, varies by person of course, but it's usually around 45°C-ish, also depending on the thermal properties of the surface. Again, I wouldn't worry. Around 50°C you can get burns, and the usual engineering advice is that such surfaces should not be exposed to people who might touch them. Humans are squishy and lawsuits painful. You'll not see such a thing in average consumer-targeted mass-produced stuff like this.

I've never looked into it, but my guess is that the main reason for miner cards failing is the power supply infrastructure. That's usually where manufacturers shave the pennies and where the most can go wrong. These days such hardware usually gets Ctrl-C+Ctrl-V'd by engineers (e.g. at MSI) from whatever the GPU manufacturers do in their reference designs, and then the race to improve layout and shave pennies begins, but it usually comes down to buying components still inside the mandated manufacturer's spec from the lowest bidder (and sometimes slightly outside, with a bunch of testing). Another common failure point could be micro-fissures from the temperature swings in operation and ill-fitting heatsinks/poor manufacturing processes/QA. If you produce stuff like this at that scale you'll always have some hardware that fails prematurely. It's not really indicative of any trend, and it's also not as common as it used to be.

EDIT: added a word without which that whole text didn't make any sense. Oops!
 
Modern graphics memory can degrade with use. Similar to, but much slower than, flash memory. I have no idea how that actually manifests, but having less memory available, slowing down, and needing more power to refresh all sound plausible.
VRAM doesn't degrade the way NAND does at all. The only things that will cause VRAM to degrade are excessively high temperatures or overvolting, both things modern vBIOS systems prevent.

I've never looked into it, but my guess is that the main reason for miner cards failing is the power supply infrastructure...
The reason for the mining GPU declining in performance is entirely GPU temperature. The GPU core is running at 86°C, and starting with Pascal, NVIDIA GPUs start pulling turbo bins at 80°C. It's really not that complicated: the thermal paste on the GPU core is worn out. You can even see it in the screenshot, the mining GPU is running 1800 MHz compared to 1905 MHz for the non-mining example; the mining card is losing the higher turbo bins to heat. I'd bet that if you looked at a GPU clock graph, the mining GPU's clock would be bouncing up and down far more, and a thermal paste swap would fix that.
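For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The ~13 MHz per boost bin figure is an assumption (a commonly cited size for recent NVIDIA boost bins), not an official spec:

# Rough turbo-bin arithmetic. bin_mhz is an assumed value,
# not an NVIDIA spec.
clock_cool_mhz = 1905  # non-mining card in the screenshot
clock_hot_mhz = 1800   # mining card running at 86C
bin_mhz = 13           # assumed size of one boost bin
bins_lost = (clock_cool_mhz - clock_hot_mhz) / bin_mhz
print(f"~{bins_lost:.0f} boost bins dropped to heat")

That works out to roughly 8 bins, which is consistent with a card steadily backing off its boost clock once the core sits above 80°C.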

If the VRMs, or MOSFETs, were somehow worn out, you'd be greeted with GPU crashes, because the GPU cannot control the MOSFETs or intentionally limit itself without software intervention by an end user intentionally undervolting the GPU.
 
Reading through this thread makes me realize I was lucky to build my PC when I did before all this shortage shit happened. Out of morbid curiosity, I checked Amazon for the GPU I have. It's an RTX 2060 Super, which, when I bought it, was in the ballpark of $300-400. I got it because it was cheaper than the 2070 and I figured it was powerful enough for my purposes (powerful enough to play modern games like Doom Eternal, basically; not really into mining or any of that). Good mid-range card, decent price. Yet the Amazon prices for this card, which no one really cares about anymore in favor of 3080s, 3090s, and whatever AMD has on the table, have doubled to $700-1000. It's fucking nuts. And those are the ones in stock. Somehow this card has completely sold out at retailers like Best Buy.

I'm really glad I bought everything I needed because holy hell I can't imagine building a PC for basic bitch gaming and browsing in this environment.
 
If the VRMs, or MOSFETs, were somehow worn out, you'd be greeted with GPU crashes, because the GPU cannot control the MOSFETs or intentionally limit itself without software intervention by an end user intentionally undervolting the GPU.
Yes, of course. I was talking about complete failure, not degradation in speed. Thermal throttling is pretty much the guaranteed reason for that, sometimes also caused by a non-optimal case.
 
Reading through this thread makes me realize I was lucky to build my PC when I did before all this shortage shit happened...
Nvidia are selling more of the 1050 Ti, and it is still the same price as it was 4.5 years ago, so that's an upgrade(?) path of sorts. I have one, and was hoping to upgrade to the 30xx series, but crypto. I can't even overclock the thing anymore, because god forbid it dies on me now.

It is fine for basic bitch gaming, but it's simply not a reasonable choice to buy now, when it will be completely outclassed once the market recovers. Which hopefully will be next generation.
 
Reading through this thread makes me realize I was lucky to build my PC when I did before all this shortage shit happened...
Counter: the shortage is actually a good thing because it forces people to play retro vidya and come to terms with the fact that there's no expiration date on good design. That and lol when the bubble pops and bandwagon chasers will be left liquidating their inventory for pennies on the dollar.
 
Counter: the shortage is actually a good thing because it forces people to play retro vidya and come to terms with the fact that there's no expiration date on good design. That and lol when the bubble pops and bandwagon chasers will be left liquidating their inventory for pennies on the dollar.
True, but I also built my PC partially because my laptop could barely run certain retro vidya like KOTOR, Max Payne, and Blood.
 
The thing is, if your VRAM is damaged, you're going to get artifacting and shit like that.

So unless your card comes from a mining farm, it seems more of an urban legend than fact.
Guess a lot of VRAM is getting fucked up then, since mining cards with these problems are nothing rare; in fact, those artifacts are basically the telltale sign that you got a mining card.

And that's the thing: most mining cards are not sold as such, because the sellers don't want to have to discount the price.
 
Update on the stock situation from a frustrated ProShop; they provide numbers.

On the first image I highlighted the important part.

Nvidia:

3060

gfx_3060_feb.JPG

3060Ti (idk, I marked the Ti so it's not mistaken for the regular 3060)
gfx_3060_ti_feb.JPG

3070
gfx_3070_feb.JPG

3080 (those are all the 3080 cards listed, I wanted the image to fit and be readable in the forum image viewer at 1080p so the last one is a little bit cut off. A simple one-click glance and disapproval experience.)
gfx_3080_feb.JPG

3090
gfx_3090_feb.JPG

AMD - they do AMD a little differently in that they also show how many cards they have ordered, are expecting, and have received.

6800
gfx_6800_feb.JPG

6800XT
gfx_6800xt_feb.JPG

6900XT
gfx_6900xt_feb.JPG
 
Update on the stock situation from a frustrated ProShop; they provide numbers...

If this is correct, then AMD might actually not have fucked the dog in front of an open goal as much as we think, as they actually are managing to deliver cards now.
 
If this is correct, then AMD might actually not have fucked the dog in front of an open goal as much as we think, as they actually are managing to deliver cards now.
It looks like NVIDIA is having manufacturing issues, because the lower-end cards are just chips that didn't meet the 3080 and 3090 specs. So you've got basically no higher-end cards on the way, only lower-end ones that weren't good enough to make the top tier.

Scalpers will still be an issue, because with the lack of NVIDIA supply they'll just move to AMD. Honestly, I hope it bites them massively in the ass and they end up sitting on hundreds of cards that become useless pieces of silicon once consumers basically pay $20 for a bot to get something at MSRP. I'd actually recommend buying a very cheap bot and doing just that. Yeah, it might add 30 bucks to the card; mine with it for a month, make your money back, and then enjoy it. Scalpers are not going to charge just 30 dollars more.
Guess a lot of VRAM is getting fucked up then, since mining cards with these problems are nothing rare; in fact, those artifacts are basically the telltale sign that you got a mining card.

And that's the thing: most mining cards are not sold as such, because the sellers don't want to have to discount the price.
I mean, if you're in a mining farm, heat is probably terrible. And ASIC mining is some unprofitable shit unless you've got a warehouse and cheap electricity, good Jesus.

I don't keep my card mining 24/7, especially if I want to watch VR movies, regular movies, or do basically anything but listen to music and browse the internet.

Annndddd.....I started mining on my overclocked gaming profile and my memory temperature hit...holy fuck, 112°C. Thank fuck I went to this thread and noticed, JFC. That's the problem with switching between mining and gaming on the same card: if I forget to switch profiles, the memory gets crazy hot. Glad I fucking noticed. I also have HWiNFO on all the time.
 
I mean, if you're in a mining farm, heat is probably terrible...
How long would it take you to just pay off your card mining?
 
Oh, by the by....NVIDIA is reportedly discontinuing 3080/3090 'blower' style cards. These are cards with a single beefy fan; they're probably being turned into miner cards. So goodbye small form factor 3080/3090, even if you've got good airflow. Mine is a three-fan beast.
How long would it take you to just pay off your card mining?
Remembering I have a 3090, let me check my NiceHash dash....

So, my income fluctuates with the price of Bitcoin. I use NiceHash, which means people are paying me in Bitcoin to mine for them, and I get a cut. So I'm not actually mining ETH or anything; I'm automatically sold like a whore to the highest-paying coin and my rig does the rest. Keep in mind I also do CPU mining, and I have a 5600X. CPU mining is NOT profitable and is just there to throw some extra cents my way. I usually run it in low-power mode, where my CPU sits at 100% usage at 50 degrees. If you're doing what I'm doing and run it at full power, you're making 5 cents more at a cost of 30 extra degrees to your hardware. It is not worth it.

So, assuming there's no massive crash (or massive spike) in BTC or demand for miners:

I make 43 euros a week as of this moment, which is about 50 bucks. I've been doing it for about two weeks, so I've made about $120. I make...about $200 a month or so. So, in a year, if everything stays the same, I will make about $2,700. So in a little less than a year the card will have paid for itself, mining with monitoring. In a year and a half, I will have paid for all of my components; two years total if you want to count my Valve Index along with it. So in two years, I will have paid off a top-of-the-line gaming PC by babysitting my hardware and doing pretty much nothing but setting up a Coinbase account to transfer the money into. Or I could just buy a hardware wallet and do it that way.
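If you want to sanity-check that payback math, here's a quick Python sketch. The card price is the roughly $2,500 I mention further down, and the weekly income is today's rate; both are my numbers, not constants, so plug in your own:

# Rough mining payback estimator. Inputs are this post's numbers,
# not universal constants -- swap in your own.
CARD_PRICE_USD = 2500    # what my 3090 cost after the tariff bump
WEEKLY_INCOME_USD = 50   # ~43 EUR/week via NiceHash right now

weeks = CARD_PRICE_USD / WEEKLY_INCOME_USD
print(f"Payback in ~{weeks:.0f} weeks (~{weeks / 52:.2f} years)")
print(f"Gross per year: ${WEEKLY_INCOME_USD * 52:,}")

That comes out to about 50 weeks and $2,600 gross, which lines up with the ballpark above.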

This is with a hash rate of 110 MH/s, and it does not take into account power costs. It is also done with constant monitoring of my GPU, really, really watching my temps, and giving the card a benchmark once a week and a rest.

This also does not account for NiceHash's gaming mode (where, if it detects you are gaming, it will scale back your hash rate and load proportionally to the resources you are using). I was wondering why my hash rate went to shit, and it was because I was using Virtual Desktop and fucking around in VR. So you can basically leave NiceHash running forever, and if you watch everything and pet your hardware, you should not have a problem.

I looked into getting an ASIC miner and I just started laughing at how retardedly expensive and inefficient they were. There's a very good reason that GPU mining is still more profitable and popular. It isn't until you start mining yourself, to pay off your hardware, that you realize just why GPU mining is so popular. Looking at the numbers:

With a hash rate of 110 MH/s, my power draw was only 297 Watts (this is because I have underclocked and undervolted the card). That is basically impossible in an ASIC miner. They draw huge amounts of power, are limited to one algorithm, and are horribly inefficient compared to a GPU. I just started laughing when I saw the comparison.

So, to get to my profitability...I'll use my historical average: 9 bucks USD.

In 2021, that would be a miner that costs $2k, can ONLY mine, fits only one algorithm, takes about 1980 Watts of power, and produces 75 decibels of sound, which is basically as loud as a blender. And it will probably soon be functionally useless with zero resale value.

Compare that to my 3090, which mines whatever it wants and which I can game on, consumes maybe 300 Watts at max, and produces an average of 36 decibels with the fan at 70%. This is WITH my case door open. If I treat this card right, I can pay it off in a year and use it for 3 more. I can then sell this card, fully buy another high-end card, and, if GPU mining is still as profitable, do so having MADE money in the process. Then everything I mine with that card is 100% profit.
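To put rough numbers on that comparison, here's a tiny Python sketch using only the figures quoted above (the ASIC numbers are the ones from this post, not datasheet values):

# GPU vs. the ASIC quoted above, using only this post's figures.
gpu_mhs = 110      # undervolted/underclocked 3090
gpu_watts = 297
asic_watts = 1980  # the ~$2k single-algorithm miner

print(f"GPU efficiency: {gpu_mhs / gpu_watts:.2f} MH/s per watt")
print(f"ASIC draws {asic_watts / gpu_watts:.1f}x the GPU's power")

Note this isn't a true apples-to-apples hash comparison, since the ASIC runs a different algorithm; the point is just the raw power draw and the single-purpose nature of the thing.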

So you can see why graphics card mining is extremely popular and still VERY profitable. You won't get rich off of it, but you can pay off your hardware by essentially doing nothing. Then it's just beer money without taking surveys or doing stupid shit.

So right now, with my 3090, I am listening to YouTube, fucking around in Virtual Desktop getting 144 FPS, and I'm making five bucks a day with my GPU at....ahahahah, 50 degrees. CPU 50 degrees. VRAM at 80 (this is always super fucking high no matter what, because it's measuring the silicon temperature inside the chip), and the hottest measurement on my GPU is 65 degrees.

So even if I stick with this for a year at half of what I earn now, I'd still make about $1,500 extra.

I know this is a bit long, but I just wanted to explain why no one is running for ASIC miners and everyone is trying to grab a GPU right now.
 
Remembering I have a 3090, let me check my NiceHash dash....
Did you build your current PC before or after the price spikes?

And considering the extra power usage of mining (since you were going to use your PC anyway, just probably at a lower load), how much longer would it take to pay for itself?
 
Did you build your current PC before or after the price spikes?...
I got fucked and bought my card RIGHT after the tariffs hit. I bought it from an Amazon Marketplace seller, but this is when all 3090s increased by $300-400 retail to offset the tariffs. With my 5600X I got super fucking lucky and somehow got it at MSRP, so $300.

My 3090 I got fucked on because of Trump's tariffs. I paid $2,500 with tax, which is why I started mining. Realize that even though I bought it from a re-seller, this was the average price of a 3090 in late January for my area once the tariffs caused the increase.

I actually got lucky, because my card now goes for....holy fucking shit, $3k to $5k on eBay. Jesus. Any 3090 now will cost you $3k, minimum.

The only price I see that's remotely sane is from Government Group, and I've never heard of them before. And when I say remotely sane, I mean $2,400. So basically, you could say I got mine for cheap.

Oh my God, Amazon is even worse. My card that I got for $2,500 a month ago? It's now $3,600. Yeah, don't buy a 3090 right now, holy shit.

So, let's consider power usage for my PC overall. This is a bit more complicated. I have an 800 Watt Platinum power supply, which means my power efficiency is high. So...looking up power costs for my area (which are $0.10 per kWh), my power costs are (VERY) roughly $360.70 a year. This is with the PC at max capacity, full bore, 24/7, with 4 hours of gaming/editing/high usage a day, so it'll probably be way less than this. Even so, the ASIC miner I mentioned drinks down $200-plus a month in power alone. So for me, basically less than a dollar a day on power, if this is accurate. And I'm not buying a watt meter to check.
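The arithmetic behind that estimate is just average watts x hours x rate. A minimal Python sketch, assuming a ~410 W average whole-system draw (my guess at what reproduces the ~$360/year figure, not a measurement):

# Annual electricity cost = avg_watts * hours_per_year * rate / 1000.
RATE_USD_PER_KWH = 0.10  # my local rate
AVG_WATTS = 410          # assumed average draw, chosen to match ~$360/yr

kwh_per_year = AVG_WATTS * 24 * 365 / 1000
cost = kwh_per_year * RATE_USD_PER_KWH
print(f"~{kwh_per_year:,.0f} kWh/year -> ${cost:,.0f}/year")

At $0.10 per kWh that lands right around a dollar a day, matching the estimate above.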

So, assuming the highest possible power draw, I make about $2,300 per year after electricity costs. If you're going to mine, get either a Gold or Platinum power supply. So it'd probably take about two months more to pay off the card, assuming everything runs at SUPER MAX, which it isn't.
 
I got fucked and bought my card RIGHT after the tariffs hit...
I didn't realize 3090 prices had gotten so insane. Not feeling as bad for paying $2k for my 3090 FE now. May start mining with it as well.

If I wasn't so autistic about saving old hardware I'd probably sell my old gpus as well.
 