Regular desktop cores run at high clock speeds because a lot of applications are single-threaded. It's easy to understand why without knowing anything about computers.
Imagine a fairly complicated task that takes one man a day, such as repairing a car. If you have eight guys work on it, will it take an hour? Of course not. It's the sort of thing only one guy can do. A smart, skilled guy might get it done in four hours, and a novice might take three days, but really, the only way to get it done faster is to have a better mechanic with better tools. However, if you have 8 cars to fix, one mechanic will take 8 days, while 8 mechanics will indeed get the job done in one day. They can work in parallel. There will be some coordination overhead from management, and perhaps some sharing of the more expensive tools, but overall it's pretty easy to imagine how 8 mechanics working on 8 cars get an 8x speedup over having 1 mechanic in the shop.
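To make the 8-cars case concrete in code, here is a minimal sketch in Python, with a made-up `fix_car` function standing in for one mechanic's day of work. The jobs don't depend on each other, so handing them to a pool of worker processes gives a near-linear speedup, minus a little coordination overhead.

```python
# Toy illustration of the "8 cars, 8 mechanics" case: the jobs are
# completely independent, so a pool of workers finishes them roughly
# N times faster than a single worker would.
import time
from concurrent.futures import ProcessPoolExecutor

def fix_car(car_id: int) -> int:
    """Stand-in for one mechanic's work on one car: a CPU-bound chore."""
    total = 0
    for i in range(5_000_000):
        total += (i * car_id) % 7
    return total

if __name__ == "__main__":
    cars = list(range(8))

    start = time.perf_counter()
    serial = [fix_car(c) for c in cars]           # one mechanic, eight cars
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=8) as pool:
        parallel = list(pool.map(fix_car, cars))  # eight mechanics, one car each
    print(f"parallel: {time.perf_counter() - start:.2f}s")

    assert serial == parallel  # same work, same results, less wall-clock time
```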
Since a computer program is, in the end, nothing more than a task for an electronic worker to do, you might imagine, correctly, that some tasks by nature cannot easily be broken up and farmed out to a large number of workers. Such tasks don't benefit from more cores, only from faster, smarter cores. Large-scale physics computations, by contrast, are extremely easy to break up, and the methods for doing so were invented in the 1970s, mostly at NASA.
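And here is a toy sketch of the "only one mechanic can do it" kind of task, again in Python with a made-up `serial_chain` function: each step needs the result of the step before it, so extra workers have nothing to do, and only a faster core or a smarter algorithm makes it finish sooner.

```python
# A task that resists parallelism: step N cannot start until step N-1
# has finished, because it consumes that step's result. Throwing more
# cores at this loop changes nothing; only per-core speed matters.
def serial_chain(x: float, steps: int = 1_000_000) -> float:
    for _ in range(steps):
        x = (x * x + 1.0) % 1_000_003  # depends on the previous iteration's x
    return x

if __name__ == "__main__":
    print(serial_chain(2.0))
```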