New supercomputer is a beast - wonder what it'd cost per hour for gaming?

https://gizmodo.com/the-world-s-most-powerful-supercomputer-is-an-absolute-1826679256/amp

Behold Summit, a new supercomputer capable of making 200 million billion calculations per second. It marks the first time in five years that a machine from the United States has been ranked as the world’s most powerful.

The specs for this $200 million machine defy comprehension. Built by IBM and Nvidia for the US Department of Energy’s Oak Ridge National Laboratory, Summit is a 200-petaflop machine, meaning it can perform 200 quadrillion calculations per second. That’s about a million times faster than a typical laptop computer. As the New York Times put it, a human would require 6.3 billion years to do what Summit can do in a single second. Or as stated by MIT Technology Review, “everyone on Earth would have to do a calculation every second of every day for 305 days to crunch what the new machine can do in the blink of an eye.”
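A quick sanity check of that 6.3-billion-year figure (a minimal sketch in Python, assuming one calculation per second per person):

```python
# How long would a person doing one calculation per second need
# to match what a 200-petaflop machine does in a single second?
SUMMIT_CALCS_PER_SEC = 200e15              # 200 petaflops = 2e17 calcs/s
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds

years = SUMMIT_CALCS_PER_SEC / SECONDS_PER_YEAR
print(f"{years:.2e} years")                # ~6.34e9, i.e. about 6.3 billion years
```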
The machine, with its 4,608 servers, 9,216 central processing chips, and 27,648 graphics processors, weighs 340 tons. The system is housed in a 9,250-square-foot room at Oak Ridge National Laboratory’s facility in Tennessee. To keep this machine cool, 4,000 gallons of water are pumped through the system every minute. The 13 megawatts of energy required to power this behemoth could light up over 8,000 US homes.

Summit is now the world’s most powerful supercomputer, and it is 60 percent faster than the previous title holder, China’s Sunway TaihuLight. It’s the first time since 2013 that a US-built computer has held the title, showing the US is keeping up with its main rival in this area, China. Summit is eight times more powerful than Titan, America’s other top-ranked system.
As MIT Technology Review explains, Summit is the first supercomputer specifically designed to handle AI-specific applications, such as machine learning and neural networks. Its thousands of AI-optimized chips, produced by Nvidia and IBM, allow the machine to crunch through hideous amounts of data in search of patterns imperceptible to humans. As noted in an Energy.gov release, “Summit will enable scientific discoveries that were previously impractical or impossible.”

Summit and machines like it can be used for all sorts of processor-heavy applications, such as designing new aircraft, climate modeling, simulating nuclear explosions, creating new materials, and finding causes of disease. Indeed, its potential to help with drug discovery is huge; Summit, for example, could be used to hunt for relationships between millions of genes and cancer. It could also help with precision medicine, in which drugs and treatments are tailored to individual patients.

From here, we can look forward to the next generation of computers, so-called “exascale” computers capable of executing a billion billion (or one quintillion) calculations per second. And we may not have to wait long: The first exascale computers may arrive by the early 2020s.

https://www.nytimes.com/2018/06/08/technology/supercomputer-china-us.html

The United States just won bragging rights in the race to build the world’s speediest supercomputer.

For five years, China had the world’s fastest computer, a symbolic achievement for a country trying to show that it is a tech powerhouse. But the United States retook the lead thanks to a machine, called Summit, built for the Oak Ridge National Laboratory in Tennessee.

Summit’s speeds, announced on Friday, boggle the mind. It can do mathematical calculations at the rate of 200 quadrillion per second, or 200 petaflops. To put it in human terms: A person doing one calculation a second would have to live for more than 6.3 billion years to match what the machine can do in a second.

Still stupefying? Here is another analogy. If a stadium built for 100,000 people was full, and everyone in it had a modern laptop, it would take 20 stadiums to match the computing firepower of Summit.
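The stadium analogy implies each laptop is good for about 100 gigaflops, a plausible figure for a modern machine (a rough check; the per-laptop number is inferred, not stated in the article):

```python
# 20 full stadiums of 100,000 people, one laptop each.
laptops = 20 * 100_000                       # 2,000,000 laptops
summit_flops = 200e15                        # 200 petaflops

per_laptop = summit_flops / laptops
print(f"{per_laptop:.0e} FLOPS per laptop")  # 1e+11, i.e. ~100 gigaflops each
```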

China still has the world’s most supercomputers over all. And China, Japan and Europe are developing machines that are even faster, which could mean the American lead is short-lived.

Supercomputers like Summit, which cost $200 million in government money to build, can accelerate the development of technologies at the frontier of computing, like artificial intelligence and the ability to handle vast amounts of data.

Those skills can be used to help tackle daunting challenges in science, industry and national security — and are at the heart of an escalating rivalry between the United States and China over technology.

For years, American tech companies have accused China of stealing their intellectual property. And some Washington lawmakers say that Chinese companies like ZTE and Huawei pose a national security risk.

A multicore processor from a Chinese supercomputer, Sunway TaihuLight. Until Friday, it was considered the fastest computer in the world. Credit: Li Xiang/Xinhua, via Associated Press

Supercomputers now perform tasks that include simulating nuclear tests, predicting climate trends, finding oil deposits and cracking encryption codes. Scientists say that further gains and fresh discoveries in fields like medicine, new materials and energy technology will rely on the approach that Summit embodies.

“These are big data and artificial intelligence machines,” said John E. Kelly, who oversees IBM Research, which helped build Summit. “That’s where the future lies.”

The global supercomputer rankings have been compiled for more than two decades by a small team of computer scientists who put together a Top 500 list. It is led by Jack Dongarra, a computer scientist at the University of Tennessee. The newest list will not be released until later this month, but Mr. Dongarra said he was certain that Summit was the fastest.

At 200 petaflops, the new machine achieves more than twice the speed of the leading supercomputer in November, when the last Top 500 list was published. That machine is at China’s National Supercomputing Center in Wuxi.

Summit is made up of rows of black, refrigerator-size units that weigh a total of 340 tons and are housed in a 9,250 square-foot room. It is powered by 9,216 central processing chips from IBM and 27,648 graphics processors from Nvidia, another American tech company, that are lashed together with 185 miles of fiber-optic cable.

Cooling Summit requires 4,000 gallons of water a minute, and the supercomputer consumes enough electricity to light up 8,100 American homes.

The global supercomputer sprint comes as internet giants like Amazon, Facebook and Google in the United States and Alibaba, Baidu and Tencent in China take the lead in developing technologies like cloud computing and facial recognition.

Supercomputers are a measure of a nation’s technological prowess. It is a narrow measure, of course, because raw speed is only one ingredient in computing performance. Software, which brings the machines to life, is another.

Scientists at government labs like Oak Ridge are doing exploratory research in areas like new materials to make roads more robust, designs for energy storage that might apply to electric cars or energy grids, and potential power sources like harnessing fusion. All of those areas can benefit from supercomputing.

Thomas Zacharia, the director of the Oak Ridge National Laboratory in Tennessee, with the Summit supercomputer. Machines like Summit are adding artificial intelligence and big-data handling to traditional supercomputer technology. Credit: Shawn Poynter for The New York Times

Modeling the climate, for example, can require running code on a supercomputer for days, processing huge amounts of scientific data like moisture and wind patterns, and modeling all the real-world physics of the environment. It is not the sort of task that can run efficiently on the cloud computing services supplied by internet companies, said Ian Buck, a computer scientist and general manager of Nvidia’s data center business.

“Industry is great, and we work with them all the time,” said Rick Stevens, an associate director of the Argonne National Laboratory in Illinois. “But Google is never going to design new materials or design a safe nuclear reactor.”

At Oak Ridge, Thomas Zacharia, the lab director, cites a large health research project as an example of the future of supercomputing. Summit has begun ingesting and processing data generated by the Million Veteran Program, which enlists volunteers to give researchers access to all of their health records, contribute blood tests for genetic analysis, and answer survey questions about their lifestyles and habits. To date, 675,000 veterans have joined; the goal is to reach one million by 2021.

The eventual insights, Mr. Zacharia said, could “help us find new ways to treat our veterans and contribute to the whole area of precision medicine.”

Dr. J. Michael Gaziano, a principal investigator on the Million Veteran Program and a professor at the Harvard Medical School, said that the potential benefit might well be a modern, supercharged version of the Framingham Heart Study. That project, begun in 1948, tracked about 5,000 people in a Massachusetts town.

Over a couple of decades, the Framingham study found that heart disease — far from previous single-cause explanations of disease — had multiple contributing causes, including blood cholesterol, diet, exercise and smoking.

Today, given the flood of digital health data and supercomputers, Dr. Gaziano said that population science might be entering a new golden age.

“We have all this big, messy data to create a new field — rethinking how we think about diseases,” he said. “It’s a really exciting time.”

Although impressive, Summit can be seen as a placeholder. Supercomputers that are five times faster — 1,000 petaflops, or an exaflop — are in the works, both abroad and in the United States. The Energy Department’s budget for its advanced computing program is being increased by 39 percent in the two fiscal years ending September 2019, said Paul M. Dabbar, the Energy Department’s under secretary for science.

“We’re doing this to help drive innovation in supercomputing and beyond,” Mr. Dabbar said.

Correction: June 7, 2018
An earlier version of this article, using calculations by Jack Dongarra, a computer scientist at the University of Tennessee who tracks supercomputer speeds, misstated how long it would take a person to do the calculations that the Summit supercomputer could do in one second. It would take 6.3 billion years, not 63 billion years. The error was repeated in the headline and in a mobile news alert. The article also misspelled the surname of the Energy Department’s under secretary for science. He is Paul Dabbar, not Dabarr.
 
Would they install something like this at Oak Ridge if they were going to be renting time to drug companies? When I think Oak Ridge, the betterment of humanity isn't the first thing to pop into my mind.

They also didn't list any headsets as peripherals. What's the use of a $200M machine if you can't call people faggots on your favorite hat simulator?
 
bitcoin mining is measured in hash calculations per second, with the expected number of hashes needed to find a block set by that block's difficulty. a difficulty of 1 works out to about 2^32 ≈ 4.3e9 expected hashes.

the current difficulty is 4,940,704,885,521 or 4.9e12: https://data.bitcoinity.org/bitcoin/block_time/5y?f=m10&t=l

ergo some simple math: 4.3e9 * 4.9e12 ≈ 2.1e22 expected hashes per block

you would (on a high-spec desktop GPU doing on the order of 1 GH/s) expect about one complete block per several hundred thousand years. a dedicated ASIC in the terahash-per-second range improves that to a few decades to a couple of centuries, depending on the model.

the supercomputer described above could mine a complete block at the current difficulty in very roughly 8,100 hours (a petaflop being 1e15 floating-point calcs per second; SHA-256 hashing is really integer work taking a few hundred operations per hash, and the machine is obviously not optimized for coin mining).
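putting those numbers in one place (a minimal sketch; the ~280 operations per double-SHA-256 hash is my back-of-envelope reconciliation of the 8,100-hour figure, not a measured constant):

```python
# Expected hashes to find one block, and mining time at a given speed.
HASHES_AT_DIFFICULTY_1 = 2**32             # ~4.3e9 expected hashes
DIFFICULTY = 4.9e12                        # mid-2018 value cited above

expected_hashes = HASHES_AT_DIFFICULTY_1 * DIFFICULTY   # ~2.1e22

def hours_to_block(hashes_per_second):
    """Average hours to find one block at the given hash rate."""
    return expected_hashes / hashes_per_second / 3600

# Naive 1 FLOP = 1 hash at Summit's 2e17 FLOPS:
print(hours_to_block(200e15))              # ~29 hours

# Assuming ~280 machine operations per double-SHA-256 hash instead:
print(hours_to_block(200e15 / 280))        # ~8,200 hours
```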

a typical late-model Intel CPU like an i7 manages around 8-16 double-precision FLOPs per clock cycle per core; Summit is quoted at 200,000 teraflops (200 petaflops). the best mining ASICs - AntMiner, ASICMiner, et c. - rely on massively parallel, purpose-built hashing pipelines and currently peak in the terahash-per-second range, while GPU miners lean on vector toolkits like AMD's Stream SDK.

note that the massive difference is the use of a vector processor, usually a graphics chip, to crunch the numbers. general-purpose CPUs are usually very poor at this sort of thing, as they are built for general-purpose computing and work best in scalar or superscalar thread-optimized software environments. if some gamer also lumped his SLI setup into mining, he could easily pump out a few thousand GFLOPS.
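to put rough numbers on that scalar-vs-vector gap (an illustrative sketch; the core counts, clocks, and per-GPU figures are assumptions, not specs from this thread):

```python
# Peak FLOPS = cores x clock x FLOPs per cycle per core.
def peak_flops(cores, clock_hz, flops_per_cycle):
    return cores * clock_hz * flops_per_cycle

# Illustrative quad-core i7 at 4 GHz with AVX2 FMA (16 DP FLOPs/cycle/core):
cpu = peak_flops(4, 4.0e9, 16)             # 2.56e11, i.e. ~256 gigaflops
# Two mid-range GPUs in SLI at ~4 teraflops each (illustrative):
gpus = 2 * 4.0e12                          # 8e12, i.e. ~8,000 gigaflops
summit = 200e15                            # 2e17, i.e. 200 petaflops

print(f"CPU {cpu:.2e} | SLI GPUs {gpus:.2e} | Summit {summit:.2e}")
```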
 
200 quadrillion calculations per second

200 quadrillion = 200,000 trillion calculations per second

Bitcoin's network currently does about 35,000,000 trillion calculations per second

owned

The only issue is that Bitcoin can't do anything else, which is why supercomputers r dumb and they should just create a distributed supercomputer.
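Spelling out the ratio being claimed (a sketch; strictly it compares floating-point operations to hashes, which are not the same unit):

```python
summit_flops = 200e15          # 2e17 calculations/second
btc_network_hps = 35e18        # ~35,000,000 trillion hashes/second (mid-2018)

print(btc_network_hps / summit_flops)   # 175.0, i.e. "owned" by ~175x
```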
 
Agree on this, but for a single computer (340 tons) Summit is mean, and you know you'd like to game on it. :tomgirl:
 
But can it shitpost?
I wish we could take up a collection to buy it for the OP, on the condition that he post lots of pics of himself shitposting on it, playing Solitaire, etc.
 