Moore's law is ending because the simulation we are in runs mostly on GPU hardware and has a hard time emulating all our CPUs?

0  2017-04-19 by [deleted]

I've been hearing this one quite a bit lately.

25 comments

Any chance on an eli5? I've never heard this and I enjoy simulation theory stuff.

I thought Moore's law was ending because we are rapidly approaching the point where a transistor can be assembled from individual molecules. A 1nm transistor was completed in 2016.

http://newscenter.lbl.gov/2016/10/06/smallest-transistor-1-nm-gate/

I'm going to have to give you a duh on that one.

You mean because nobody noticed Silicon was only viable for a few more years, instead of the obvious idea of switching to new materials or architectures?

Uh, duh, the article I linked discusses carbon nanotubes, but I know reading can be difficult.

Just to painfully spell it out for you, Moore's law ended abruptly several years ago. The vague promise of nanotubes is to someday get it started up again.

14nm was released in 2014 by Intel, and 7nm silicon-germanium chips came from IBM in 2015. Hell, TSMC is building 5nm and 3nm fabs right now. Thanks for "spelling out" a bunch of crap.

Look, I don't know if you are just naturally argumentative or ignorant. It is widely known that Moore's law is no longer in play. Read this and google a response:

https://arstechnica.com/information-technology/2016/03/intel-retires-tick-tock-development-model-extending-the-life-of-each-process/

Intel doesn't manufacture every piece of silicon on the planet. Here's an article from the same rag:

https://arstechnica.com/gadgets/2017/02/samsungs-got-a-new-10nm-octa-core-chip-with-gigabit-lte-for-flagship-phones/

Well my friend, this is not the conspiracy theory for you.

It's funny that you accuse me of being ignorant, then ignore two articles from the source you provided disproving you. Maybe this conspiracy is not for you?

I was just trying to be nice. I don't fault you for getting a little drunk and trying to be combative.

You do realize that you are being combative by openly ignoring evidence disproving your point? Keep bringing those attacks though, you are definitely "trying to be nice."

R/IAMVERYSMART

Silicon has reached its limits. I think carbon nanotube chips have been proposed to continue Moore's law.

And all of a sudden I wasn't smart enough to continue in the r/conspiracy community.

GPUs are the specialized hardware on your computer for running games. They are super-optimized for graphics and aren't good for much else unless you are very clever.

CPUs are general purpose processors for running everything else, and haven't been getting much better in absolute terms for the last six years or so.

The big data centers were mostly CPUs until recently; now they are all switching to GPUs. All the deep learning breakthroughs you read about run on this kind of hardware.

The simulation theory hardware would be almost entirely slanted towards rendering holograms, thus similar to GPUs. The idea is they don't have the CPU power to continue the progress of Moore's law, but nearly infinite capability to emulate GPUs.

GPUs are the specialized hardware on your computer for running games. They are super-optimized for graphics and aren't good for much else unless you are very clever

No. GPUs are highly parallel. They were originally created for graphics applications, but there are plenty of other computations that can take advantage of that parallelism and run on GPUs (look up GPGPU). GPUs are not specialized in a meaningful sense.

Compare that with SIMD instructions on the CPU -- these are specialized and allow for parallel (but not concurrent) processing of vectors (e.g., an RGB triplet), where each element is processed at the same time. They only do this -- hence specialized.
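
To make that contrast concrete, here's a minimal sketch in Python (NumPy and CuPy are my own example choices for illustration, not anything from this thread): the same elementwise, data-parallel work can be vectorized on a CPU or handed to a GPU through a near-identical array API.

```python
# Minimal sketch of data-parallel, elementwise work (illustrative only).
# NumPy is assumed here; its elementwise ops are typically dispatched to
# vectorized (SIMD) CPU instructions. A GPU array library such as CuPy
# exposes a nearly identical API for the same style of computation.
import numpy as np

# One million RGB triplets with values in [0, 1].
rgb = np.random.rand(1_000_000, 3).astype(np.float32)

# Convert every pixel to grayscale in one data-parallel operation: each
# triplet is processed "at the same time" from the programmer's point of view.
weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
gray = rgb @ weights

print(gray.shape, gray.dtype)  # (1000000,) float32
```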

The big data centers were mostly CPUs until recently; now they are all switching to GPUs.

No. GPGPU is becoming more widely used. They are not replacing CPUs.

The idea is they don't have the CPU power to continue the progress of Moore's law, but nearly infinite capability to emulate GPUs.

It's nonsensical to assume that "our simulators" would be using something that resembles our CPUs or GPUs. Having said that, the same computer could be used to emulate (simulate) either a CPU or GPU, i.e., CPUs and GPUs are Turing equivalent.

But just for the sake of the argument:

The "stuff" happening inside a CPU or GPU is trivial compared to the complexity of the underlying physics. The idea that these beings could simulate a whole planet (solar system, universe) but could not simulate an x86 processor in any quantity is absurd.

At least you seem to know what you are talking about. Any disagreements we have will be superficial.

Obviously GPUs are good at other things, such as deep learning. I think you would agree that clever folks adapted GPUs to other applications that weren't envisioned by the original designers.

Turing equivalence means little; it is more about code complexity. The massive simulation of arbitrary things usually comes down to choices of genetic algorithms. Your average greedy algorithm or simulated annealing will scale with CPU but not GPU (or GPGPU).

Any insights on the nature of the simulators' HW are much appreciated.

At least you seem to know what you are talking about.

Not trying to be rude but I don't think I can say the same.

I think you would agree that clever folks adapted GPUs to other applications that weren't envisioned by the original designers.

There's nothing very clever about it. It's just a matter of parallelizing parts of a computation.

Turing equivalence means little; it is more about code complexity.

I think you mean computational complexity, which is not the same as code complexity. (See below.)

The massive simulation of arbitrary things usually comes down to choices of genetic algorithms.

No, it doesn't. "Genetic algorithms" has the word "genetic" in it, but it is not related to the simulation of natural systems. Genetic algorithms are a type of "trial and error" search meant to let an algorithm adapt to a specific problem, much like evolution.

They sound cool but don't see widespread use as they're not that useful.
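
For concreteness, here's what that "trial and error plus selection" loop looks like as a toy genetic algorithm (the bit-string problem and every constant below are made-up illustrations, not anything from the thread):

```python
# Toy genetic algorithm (illustrative sketch; problem and parameters are made up).
# Goal: evolve a bit string of all 1s. Fitness = number of 1 bits.
import random

LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 50, 0.05

def fitness(bits):
    return sum(bits)

def mutate(bits):
    # Flip each bit with probability MUTATION.
    return [b ^ (random.random() < MUTATION) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENERATIONS):
    # Selection: keep the fitter half, then refill by crossover + mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

print(max(fitness(ind) for ind in population))  # approaches LENGTH (all 1s)
```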

Your average greedy algorithm or simulated annealing will scale with CPU but not GPU (or GPGPU).

I'm not sure you know what either of those means. You're talking about optimization problems, which don't have a lot to do with simulations. Again, a CPU and a GPU are Turing equivalent, so either could (theoretically, if not practically) run any algorithm. Any algorithm can benefit from the parallelization of a GPU if, well, it can be parallelized. You should look up "optimization problems on GPUs".

Note that optimization here doesn't mean making a computer program run faster; that's a different kind of optimization.
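
Since simulated annealing keeps coming up, here's a minimal sketch of it on a toy 1-D objective (the function, step size, and cooling schedule are arbitrary choices of mine), just to show what kind of optimization algorithm is being talked about:

```python
# Toy simulated annealing on a 1-D function (illustrative sketch only;
# the objective, step size, and cooling schedule are arbitrary choices).
import math
import random

def objective(x):
    # A bumpy function with many local minima; global minimum near x ~ -0.5.
    return x * x + 10 * math.sin(3 * x)

x = random.uniform(-10, 10)          # current solution
best = x
temperature = 5.0

while temperature > 1e-3:
    candidate = x + random.gauss(0, 0.5)          # random neighbor
    delta = objective(candidate) - objective(x)
    # Accept improvements always; accept worse moves with a probability
    # that shrinks as the temperature cools (this is the "annealing" part).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if objective(x) < objective(best):
            best = x
    temperature *= 0.995

print(round(best, 3), round(objective(best), 3))
```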

Any insights on the nature of the simulators' HW are much appreciated.

You want me to comment on a hypothetical super-race's computer hardware? Sure, they use XYLON architecture and run at 1 TWANTON ZARTS per QUADPOOP. Happy?

it is more about code complexity.

There's sort of a point here, which I'll address:

It's true that any Turing equivalent machine can simulate/emulate any other, but with varying degrees of performance penalty. Sometimes that matters (emulating a quantum computer) and sometimes that doesn't (emulating a NES on a modern x86). A CPU could emulate a GPU but more slowly for a highly parallel problem. A classical computer can emulate a quantum computer, but again more slowly.
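
A crude way to see that "same answer, different speed" point (the sizes below, and using a plain Python loop as a stand-in for sequential emulation of parallel hardware, are my own illustration):

```python
# Sketch of "same result, different speed": a sequential loop emulating what
# a data-parallel device would do in one shot. Sizes are arbitrary; the point
# is only that emulation preserves the answer, not the performance.
import time
import numpy as np

x = np.random.rand(2_000_000).astype(np.float32)

t0 = time.perf_counter()
parallel_style = x * 2.0 + 1.0            # one vectorized, data-parallel op
t1 = time.perf_counter()
sequential = np.empty_like(x)
for i in range(x.size):                   # the same work, one element at a time
    sequential[i] = x[i] * 2.0 + 1.0
t2 = time.perf_counter()

print(np.allclose(parallel_style, sequential))   # True: identical results
print(f"vectorized: {t1 - t0:.4f}s   plain loop: {t2 - t1:.4f}s")
```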

Still, the idea that a super-race's computer could simulate a human body (from the subatomic scale up) and a silicon GPU, but would somehow struggle with a CPU, is firmly in the realm of nonsense, even for a purely hypothetical argument.

Again, not trying to be rude. You seem interested in computers and I encourage you to learn about them in earnest. It's a very rewarding area.

Er, I'm afraid you are a little like the other guy, more concerned with looking good than with being right.

Anyhow, I want to reiterate: if you are perfectly right and I am naive, well, that happens all the time, and you should know I wouldn't be aware such a thing was happening.

Just to bring it back to basics, there are a bazillion problems that are NP-hard. Just to pick one out, say one of the prime-generating polynomials. The best algorithm for this would have a very different big-O complexity on a CPU vs. a GPU. CPU better, GPU worse. If you agree to this, I win, unless we are arguing about different things.

Backing out a bit, we are also arguing about different things. You are saying you know more about computers; I am saying I know more about the nature of the universe.

The whole point of big-O complexity is that it measures an algorithm's performance regardless of the hardware being used. Thus an O(log n) algorithm will always outperform an O(n) one for sufficiently large inputs.
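
To put rough numbers on that (the 1000x factor below is an arbitrary stand-in for "much slower hardware"): even with a big constant-factor handicap, the O(log n) growth rate eventually wins.

```python
# Illustration of why big-O growth eventually dominates constant factors
# (the constants 1000 and 1 are arbitrary stand-ins for "slow" and "fast"
# hardware; only the growth rates matter asymptotically).
import math

for n in (10, 10_000, 10_000_000, 10_000_000_000):
    slow_log = 1000 * math.log2(n)   # O(log n) algorithm on 1000x slower hardware
    fast_lin = 1 * n                 # O(n) algorithm on fast hardware
    print(f"n={n:>14,}  1000*log2(n)={slow_log:>12,.0f}  n={fast_lin:>14,}")
```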

The reason CPUs aren't getting much better is that transistors are approaching the size of a single atom.

It's not really a law so much as an observation.

Thank you, I came to say the same thing. More people need to realize this.

You're welcome, lol.

www.reddit.com/r/iamverysmart
