
On the physical limitations of large-scale computing


Academic year: 2022


Full text


The role of time in technological and biological computing

In multi-disciplinary research it is hard for experts of different fields to cooperate: the optimum solution needed for the project is not simply the combination of several local optima.

Today we are at the peak of the hype in almost all computing-related fields: Big Data, exa-scale computing, real-time everything, artificial neural networks, and the simulation of large tasks, including the operation of our brain. The demands are excessive, and along the road towards those impressive goals lie more and more failed projects. Are they the result of engineering imperfection, or has some invisible limit been reached?

The talk calls attention to the fact that today's highly developed technological computing still rests, theoretically, on a computing paradigm several decades old, built on approximations corresponding to classic science (instant interaction). Von Neumann himself warned that it would be 'unsound' (sic) to apply his simplified paradigm to neural operations, and that using "too fast" processors also vitiates it. Today, modern technology produces exactly such "too fast" processors, and networks of those processors are used to simulate the operation of large neural networks. The stealthy development of technology, however, has left the theoretical basis of computing unchanged. The operation of today's technological computing units (their timing relations) bears a closer resemblance to the operation of our brain than to the operation assumed in the abstract computing model. In addition, the commonly used single-processor approach, as Amdahl named it, does not enable effective cooperation between segregated processors. The lack of a proper theoretical basis results in inefficient single processors, enormous power consumption, stalled supercomputer payload performance, the experienced performance limits of Artificial Intelligence, and the unreachable moonshot of brain simulation.

The talk argues that von Neumann's model must be reconsidered and his simplified paradigm generalized for modern technology and use cases, and introduces a way this can be done.

We must recognize that our modern technology works according to the principles of modern science, rather than those of classic science, which was a good approximation at the time. We formally introduce a 4-D coordinate system to describe computing events, and a general abstract computing model that describes technological and biological computing equally well.
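The idea of placing computing events in a 4-D (space plus time) coordinate system can be sketched as follows. The class and function below are hypothetical illustrations of the concept, not the formal model introduced in the talk; the key point is that with a finite signal speed, one event can only influence another inside its "forward cone", exactly as in modern (relativistic) science:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ComputingEvent:
    """A computing event located in space (metres) and time (seconds)."""
    x: float
    y: float
    z: float
    t: float


def can_affect(source: ComputingEvent, target: ComputingEvent,
               signal_speed: float) -> bool:
    """True if `source` can causally influence `target`, given that
    signals propagate no faster than `signal_speed` (m/s).
    This is the computing analogue of the light cone."""
    dx = source.x - target.x
    dy = source.y - target.y
    dz = source.z - target.z
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    return target.t >= source.t + distance / signal_speed
```

With an assumed on-wire signal speed of 1e8 m/s, an event 1 m away can only be influenced at least 10 ns later; the classic "instant interaction" approximation discards exactly this delay.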

In biological systems, the millions of times slower propagation (conduction velocity) makes the need for time-aware computing evident, and enables us to discover the effects of temporal operation even in the case of "single processors". In technological systems, one needs an excessive system size (or extremely intense communication) to see them.
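A back-of-the-envelope calculation makes this contrast concrete. The speeds, distances, and operation times below are illustrative order-of-magnitude assumptions (roughly 50 m/s axonal conduction, an on-wire signal speed of half the speed of light), not measured values:

```python
# Ratio of signal-transfer time to "operation" time in an assumed
# technological and an assumed biological case. All numbers are
# illustrative order-of-magnitude assumptions.

LIGHT_SPEED = 3.0e8                       # m/s
CHIP_SIGNAL_SPEED = 0.5 * LIGHT_SPEED     # assumed on-wire signal speed
AXON_CONDUCTION_VELOCITY = 50.0           # m/s, assumed myelinated axon

# Technological: 1 cm across a chip, ~1 ns per operation (GHz clock)
t_chip = 0.01 / CHIP_SIGNAL_SPEED         # ~6.7e-11 s, i.e. ~0.07 ns
ratio_chip = t_chip / 1e-9

# Biological: 10 cm across the brain, ~1 ms per neural "operation"
t_brain = 0.1 / AXON_CONDUCTION_VELOCITY  # 2e-3 s, i.e. 2 ms
ratio_brain = t_brain / 1e-3

print(f"chip:  transfer/operation = {ratio_chip:.2f}")   # 0.07
print(f"brain: transfer/operation = {ratio_brain:.2f}")  # 2.00
```

In the biological case the transfer time exceeds the operation time even at centimetre scale, so temporal behavior cannot be neglected; in the technological case the ratio only becomes comparable at supercomputer-scale distances or under extremely intense communication.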

Introducing time into computing science changes everything drastically. In biological computing, we can explain how information is stored and processed (how our brain computes). In technological computing, we can explain why modern processors spend their effort heating (and cooling) rather than computing.

We analyze distributed computing and show that parallelized sequential computing has inherent performance limits, and that those limits have already been reached. Furthermore, we analyze the role of the workload in shaping the performance of such parallelized conventional systems, and explain why certain classes of simulators exhibit desperately low performance. We show why (and by how much) AI and brain-simulation workloads are disadvantageous for conventional architectures.
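The inherent limit of parallelized sequential computing can be illustrated with Amdahl's classic speedup formula. The 1% serial fraction below is an assumed illustrative value, not a measured one; communication-heavy workloads such as AI and brain simulation behave as if their serial fraction were far larger:

```python
def amdahl_speedup(n: int, serial_fraction: float) -> float:
    """Amdahl's Law: speedup on n processors of a program in which
    `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)


# With an assumed serial fraction of 1%, the speedup saturates near
# 1/0.01 = 100, no matter how many processors we add:
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} processors: speedup {amdahl_speedup(n, 0.01):6.1f}")
# 10 -> 9.2, 100 -> 50.3, 1000 -> 91.0, 10000 -> 99.0
```

Adding processors beyond a point therefore buys almost nothing: the payload performance stalls while the power consumption keeps scaling with the processor count.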

We also briefly discuss why conventional principles, components, and architectures need drastic revision to achieve reasonable performance in future computing. Nature works with simple, low-performance, replaceable processors with good networking ability; we cannot imitate their operation with sophisticated, high-performance processors with poor networking capability.


János Végh: Finally, how many efficiencies the supercomputers have? The Journal of Supercomputing, volume 76, pages 9430–9455 (2020) (updated in https://arxiv.org/abs/2001.01266)

János Végh: How Amdahl's Law limits the performance of large artificial neural networks. Brain Informatics, volume 6, Article number: 4 (2019), https://braininformatics.springeropen.com/articles/10.1186/s40708-019-0097-2

János Végh: Why do we need to Introduce Temporal Behavior in both Modern Science and Modern Computing, With an Outlook to Researching Modern Effects/Materials and Technologies. Global Journal of Computer Science and Technology: Hardware & Computation, 20/1 (2020), 13–29, doi:10.34257/GJCSTAVOL20IS1PG13

János Végh and Ádám J. Berki: On the Role of Information Transfer's Speed in Technological and Biological Computations. MDPI Brain Sciences, under review (2021), doi:10.20944/preprints202103.0415.v1

János Végh: von Neumann's missing "Second Draft": what it should contain. doi:10.1109/CSCI51800.2020.00235

János Végh and Ádám J. Berki: Do we know the operating principles of our computers better than those of our brain? https://american-cse.org/sites/csci2020proc/pdfs/CSCI2020-6SccvdzjqC7bKupZxFmCoA/762400a668/762400a668.pdf

János Végh: How to extend the Single-Processor Paradigm to the Explicitly Many-Processor Approach. 2020 CSCE, Fundamentals of Computing Science, FCS2243, in print, https://arxiv.org/abs/2006.00532

János Végh: Introducing Temporal Behavior to Computing Science. 2020 CSCE, Fundamentals of Computing Science, FCS2930, in print, https://arxiv.org/abs/2006.01128

János Végh: Which scaling rule applies to Artificial Neural Networks. Neural Computing and Applications, in print (2021), http://arxiv.org/abs/2005.08942
