Grid computing

Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers. [1] Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large. [2]

Grids are a form of distributed computing composed of many networked, loosely coupled computers acting together to perform large tasks. For certain applications, distributed or grid computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a computer network (private or public) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back office data processing in support of e-commerce and Web services.

Grid computing combines computers from multiple administrative domains to reach a common goal [3] or to solve a single task; the grid may then disappear just as quickly as it was assembled. The size of a grid may vary from small—confined to a network of computer workstations within a corporation, for example—to large, public collaborations across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whereas the notion of a larger, wider grid may thus refer to an inter-nodes cooperation". [4]

Coordinating applications on grids can be a complex task, especially when coordinating the flow of information across distributed computing resources. Grid workflow systems have been developed as a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in the grid context.

Comparison of grids and conventional supercomputers

“Distributed” or “grid” computing in general is a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public or the Internet) by a conventional network interface. This produces commodity hardware, in contrast to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors. [5] The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet. [6]

There are also some differences between programming for a supercomputer and programming for a grid computing system. It can be costly and difficult to write programs that can run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional, standalone programs, given a different part of the same problem, to run on multiple machines. This makes it possible to write and debug on a single conventional machine and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
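
As a rough illustration of this "thin layer" idea, the sketch below hands the same conventional, standalone routine different, independent parts of one problem. A local process pool stands in for separate grid nodes; the prime-counting routine and the chunking helper are illustrative assumptions, not any particular middleware's API.

```python
# Minimal sketch: one problem split into independent work units, each handled
# by the same standalone routine. A process pool stands in for grid nodes.
from concurrent.futures import ProcessPoolExecutor


def count_primes(lo: int, hi: int) -> int:
    """Standalone routine: count the primes in [lo, hi). Needs no shared state."""
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))


def split(lo: int, hi: int, parts: int) -> list[tuple[int, int]]:
    """Cut one large range into independent work units."""
    step = (hi - lo) // parts
    return [(lo + i * step, hi if i == parts - 1 else lo + (i + 1) * step)
            for i in range(parts)]


if __name__ == "__main__":
    units = split(2, 200_000, parts=8)
    # Each unit could just as well be shipped to a different machine; no
    # intermediate results need to be exchanged between work units.
    with ProcessPoolExecutor() as pool:
        partial_counts = pool.map(count_primes, *zip(*units))
    print(sum(partial_counts))
```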

Design considerations and variations

One feature of distributed grids is that they can be formed from computing resources belonging to one or multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.

One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes. However, due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dial-up Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results in the expected time.
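
A minimal sketch of the redundancy scheme just described might look as follows: each work unit is assigned to several randomly chosen nodes, and an answer is accepted only when at least two nodes agree. The node behaviour, quorum size, and "faulty node" model are invented for illustration and do not correspond to any specific grid system.

```python
# Sketch of redundant work assignment with quorum-based result validation.
import random
from collections import Counter

QUORUM = 2     # minimum number of matching answers needed to accept a result
REPLICAS = 3   # how many different nodes each work unit is assigned to


def simulate_node(node_id: int, work_unit: int) -> int:
    """Honest nodes square the input; node 0 plays a malfunctioning node."""
    if node_id == 0:
        return random.randint(0, 10)   # garbage answer from the bad node
    return work_unit * work_unit


def validate(work_unit: int, node_ids: list[int]) -> int | None:
    """Accept the most common answer only if enough nodes agree on it."""
    answers = Counter(simulate_node(n, work_unit) for n in node_ids)
    best, votes = answers.most_common(1)[0]
    return best if votes >= QUORUM else None   # None means: reassign the unit


if __name__ == "__main__":
    nodes = list(range(10))
    for unit in (3, 7, 12):
        assigned = random.sample(nodes, REPLICAS)  # random, distinct owners
        print(unit, assigned, validate(unit, assigned))
```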

Another set of what could be termed social compatibility issues in the early days of grid computing related to the goals of grid developers to carry their innovation beyond the original field of high-performance computing and across disciplinary boundaries into new fields, like that of high-energy physics. [7]

The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system, such as placing applications in virtual machines.

Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform). Various middleware projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or for the purpose of setting up new grids. BOINC is a common one for various academic projects seeking public volunteers; more are listed at the end of the article.

In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas include SLA management, trust and security, virtual organization management, license management, portals, and data management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.

Market segmentation of the grid computing market

For the segmentation of the grid computing market, two perspectives need to be considered: the provider side and the user side.

The provider side

The overall grid market comprises several specific markets. These are the grid middleware market, the market for grid-enabled applications, the utility computing market, and the software-as-a-service (SaaS) market.

Grid middleware is a specific software product that enables the sharing of heterogeneous resources and the formation of virtual organizations. It is installed and integrated into the existing infrastructure of the involved company or companies and provides a special layer placed between the heterogeneous infrastructure and the specific user applications. Major grid middleware packages include the Globus Toolkit, gLite, and UNICORE.

Utility computing refers to the provision of grid computing and applications as a service, either as an open grid utility or as a hosting solution for one organization or a virtual organization (VO). Major players in the utility computing market are Sun Microsystems, IBM, and HP.

Grid-enabled applications are specific software applications that can utilize grid infrastructure. This is made possible by the use of grid middleware, as pointed out above.

Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one or more providers” (Gartner 2007). Additionally, SaaS applications are based on a single set of common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a pay-as-you-go (PAYG) model or a subscription model based on usage. Providers of SaaS do not necessarily own the computing resources required to run their SaaS, and may therefore draw upon the utility computing market, which provides computing resources for SaaS providers.

The user side

For companies on the demand or user side of the grid computing market, the different segments have significant implications for their IT deployment strategy. The IT deployment strategy as well as the type of IT investments made are relevant aspects for potential grid users and play an important role for grid adoption.

CPU scavenging

CPU scavenging, cycle scavenging, or shared computing creates a “grid” from the idle resources in a network of participants (whether worldwide or internal to an organization). Typically, this technique exploits the 'spare' instruction cycles resulting from the intermittent inactivity that typically occurs at night, during lunch breaks, or even during the (comparatively minuscule, though numerous) moments of idle waiting that modern desktop CPUs experience throughout the day (when the computer is waiting on I/O from the user, network, or storage). In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power.[citation needed]

Many volunteer computing projects, such as BOINC, use the CPU scavenging model. Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.

Creating an opportunistic environment is another implementation of CPU scavenging, in which a special workload management system harvests idle desktop computers for compute-intensive jobs; this is also referred to as an Enterprise Desktop Grid (EDG). For instance, HTCondor [8] (the open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks) can be configured to use only desktop machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can be used to manage workload on a dedicated cluster of computers as well, or it can seamlessly integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment.
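
As a rough illustration of the scavenging policy described above, the sketch below runs queued jobs only while the host looks idle and otherwise backs off. The idleness test (the Unix one-minute load average) and the in-memory job queue are simplifying assumptions made for this sketch; real systems such as HTCondor watch keyboard and mouse activity and manage distributed job queues instead.

```python
# Sketch of a cycle-scavenging worker: compute only while the host seems idle.
import os
import time
from queue import Empty, Queue

IDLE_LOAD_THRESHOLD = 0.5   # treat the host as idle below this 1-minute load average
POLL_INTERVAL_S = 5         # how often to re-check for spare cycles


def host_is_idle() -> bool:
    one_minute_load, _, _ = os.getloadavg()   # Unix-only; a crude stand-in for idleness
    return one_minute_load < IDLE_LOAD_THRESHOLD


def compute(job: int) -> int:
    return sum(i * i for i in range(job))     # stand-in for a CPU-bound payload


def scavenge(jobs: "Queue[int]") -> None:
    while True:
        if not host_is_idle():
            time.sleep(POLL_INTERVAL_S)       # the owner is busy: stay out of the way
            continue
        try:
            job = jobs.get_nowait()
        except Empty:
            break                             # no work left
        print(job, compute(job))


if __name__ == "__main__":
    work: "Queue[int]" = Queue()
    for n in (10_000, 20_000, 30_000):
        work.put(n)
    scavenge(work)
```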

History

The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The power grid metaphor for accessible computing quickly became canonical when Ian Foster and Carl Kesselman published their seminal work, "The Grid: Blueprint for a new computing infrastructure" (1999). This was preceded by decades by the metaphor of utility computing (1961): computing as a public utility, analogous to the phone system. [9] [10]

CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems. [11] [12]

The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster and Steve Tuecke of the University of Chicago, and Carl Kesselman of the University of Southern California's Information Sciences Institute. [13] The trio, who led the effort to create the Globus Toolkit, is widely regarded as the "fathers of the grid". [14] The toolkit incorporates not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. [15] While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise or global grid.[ citation needed ]

In 2007, the term cloud computing came into popularity; it is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid) and to earlier utility computing.

Progress

In November 2006, Edward Seidel received the Sidney Fernbach Award at the Supercomputing Conference in Tampa, Florida, [16] "for outstanding contributions to the development of software for HPC and Grid computing to enable the collaborative numerical investigation of complex problems in physics; in particular, modeling black hole collisions." [17] This award, which is one of the highest honors in computing, recognized his achievements in numerical relativity.

Fastest virtual supercomputers

As of March 2019, the Bitcoin network had a measured computing power equivalent to over 80,000 exaFLOPS (floating-point operations per second). [25] This measurement reflects the number of FLOPS required to equal the hash output of the Bitcoin network rather than its capacity for general floating-point arithmetic operations, since the elements of the Bitcoin network (Bitcoin mining ASICs) perform only the specific cryptographic hash computation required by the Bitcoin protocol.

Projects and applications

Grid computing offers a way to solve Grand Challenge problems such as protein folding, financial modeling, earthquake simulation, and climate/weather modeling, and was integral in enabling the Large Hadron Collider at CERN. [26] Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility for commercial and noncommercial clients, with those clients paying only for what they use, as with electricity or water.

As of October 2016, over 4 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform were members of the World Community Grid. [19] One of the projects using BOINC is SETI@home, which was using more than 400,000 computers to achieve 0.828 petaFLOPS as of October 2016. As of October 2016, Folding@home, which is not part of BOINC, achieved more than 101 x86-equivalent petaFLOPS on over 110,000 machines. [18]

The European Union funded projects through the framework programmes of the European Commission. BEinGRID (Business Experiments in Grid) was a research project funded by the European Commission [27] as an Integrated Project under the Sixth Framework Programme (FP6) sponsorship program. Started on June 1, 2006, the project ran 42 months, until November 2009. The project was coordinated by Atos Origin. According to the project fact sheet, its mission was “to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using Grid technologies”. To extract best practice and common themes from the experimental implementations, two groups of consultants analyzed a series of pilots, one technical, one business. The project is significant not only for its long duration but also for its budget, which, at 24.8 million euros, is the largest of any FP6 integrated project. Of this, 15.7 million was provided by the European Commission and the remainder by its 98 contributing partner companies. Since the end of the project, the results of BEinGRID have been taken up and carried forward by IT-Tude.com.

The Enabling Grids for E-sciencE project, based in the European Union and including sites in Asia and the United States, was a follow-up project to the European DataGrid (EDG) and evolved into the European Grid Infrastructure. This, along with the Worldwide LHC Computing Grid [28] (WLCG), was developed to support experiments using the CERN Large Hadron Collider. A list of active sites participating within WLCG can be found online, [29] as can real-time monitoring of the EGEE infrastructure. [30] The relevant software and documentation are also publicly accessible. [31] There is speculation that dedicated fiber-optic links, such as those installed by CERN to address the WLCG's data-intensive needs, may one day be available to home users, thereby providing internet services at speeds up to 10,000 times faster than a traditional broadband connection. [32] The European Grid Infrastructure has also been used for other research activities and experiments such as the simulation of oncological clinical trials. [33]

The distributed.net project was started in 1997. The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger running on about 350 Sun Microsystems and SGI workstations.

In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle-scavenges on volunteer PCs connected to the Internet. The project ran on about 3.1 million machines before its close in 2007. [34]

Definitions

Today there are many definitions of grid computing.

See also

List of grid computing projects

Alliances and organizations

Production grids

International projects

Name | Region | Start | End
European Grid Infrastructure (EGI) | Europe | May 2010 | Dec 2014
Open Middleware Infrastructure Institute Europe (OMII-Europe) | Europe | May 2006 | May 2008
Enabling Grids for E-sciencE (EGEE, EGEE II and EGEE III) | Europe | March 2004 | April 2010
Grid enabled Remote Instrumentation with Distributed Control and Computation (GridCC) | Europe | September 2005 | September 2008
European Middleware Initiative (EMI) | Europe | May 2010 | active
KnowARC | Europe | June 2006 | November 2009
Nordic Data Grid Facility | Scandinavia and Finland | June 2006 | December 2012
World Community Grid | Global | November 2004 | active
XtreemOS | Europe | June 2006 | (May 2010) ext. to September 2010
OurGrid | Brazil | December 2004 | active

National projects

Standards and APIs

Monitoring frameworks

Related Research Articles

Supercomputer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Floating point operations per second is a measure of computer performance in computing, useful in fields of scientific computations that require floating-point calculations.

Berkeley Open Infrastructure for Network Computing

The Berkeley Open Infrastructure for Network Computing is an open-source middleware system for volunteer computing. Developed originally to support SETI@home, it became the platform for many other applications in areas as diverse as medicine, molecular biology, mathematics, linguistics, climatology, environmental science, and astrophysics, among others. The purpose of BOINC is to enable researchers to utilize processing resources of personal computers and other devices around the world.

United Devices

United Devices, Inc. was a privately held, commercial volunteer computing company that focused on the use of grid computing to manage high-performance computing systems and enterprise cluster management. Its products and services allowed users to "allocate workloads to computers and devices throughout enterprises, aggregating computing power that would normally go unused." It operated under the name Univa UD for a time, after merging with Univa on September 17, 2007.

Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing, the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented.

Grid MP is a commercial distributed computing software package developed and sold by Univa, a privately held company based primarily in Austin, Texas. It was formerly known as the MetaProcessor prior to the release of version 4.0; however, the letters MP in Grid MP do not officially stand for anything.

TeraGrid

TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011.

The Lattice Project was a volunteer computing project that combined computing resources, Grid middleware, specialized scientific application software and web services into a comprehensive Grid computing system for scientific analysis. It ran the Genetic Algorithm for Rapid Likelihood Inference (GARLI) software to determine the relationships between different genetic samples.

Advanced Resource Connector

Advanced Resource Connector (ARC) is a grid computing middleware introduced by NorduGrid. It provides a common interface for submission of computational tasks to different distributed computing systems and thus can enable grid infrastructures of varying size and complexity. The set of services and utilities providing the interface is known as ARC Computing Element (ARC-CE). ARC-CE functionality includes data staging and caching, developed in order to support data-intensive distributed computing. ARC is an open source software distributed under the Apache License 2.0.

A virtual appliance is a pre-configured virtual machine image, ready to run on a hypervisor; virtual appliances are a subset of the broader class of software appliances. Installation of a software appliance on a virtual machine and packaging that into an image creates a virtual appliance. Like software appliances, virtual appliances are intended to eliminate the installation, configuration and maintenance costs associated with running complex stacks of software.

Sun Cloud was an on-demand cloud computing service operated by Sun Microsystems prior to Sun's acquisition by Oracle Corporation. The Sun Cloud Compute Utility provided access to a substantial computing resource over the Internet for US$1 per CPU-hour. It was launched as Sun Grid in March 2006—the same month Amazon Web Services began offering their first IT infrastructure services. It was based on and supported open source technologies such as Solaris 10, Sun Grid Engine, and the Java platform.

The D-Grid Initiative was a government project to fund computer infrastructure for education and research (e-Science) in Germany. It uses the term grid computing. D-Grid started September 1, 2005 with six community projects and an integration project (DGI) as well as several partner projects.


Wireless grids are wireless computer networks consisting of different types of electronic devices with the ability to share their resources with any other device in the network in an ad hoc manner. A definition of the wireless grid can be given as: "Ad hoc, distributed resource-sharing networks between heterogeneous wireless devices".

Volunteer computing

Volunteer computing is a type of distributed computing in which people donate their computers' unused resources to a research-oriented project, sometimes in exchange for credit points. The fundamental idea behind it is that a modern desktop computer is sufficiently powerful to perform billions of operations a second, but for most users only between 10 and 15% of its capacity is used. Common tasks such as word processing or web browsing leave the computer mostly idle.

Computer cluster

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

DIET

DIET is software for grid computing. As middleware, DIET sits between the operating system and the application software. DIET was created in 2000. It was designed for high-performance computing. It is currently developed by INRIA, École Normale Supérieure de Lyon, CNRS, Claude Bernard University Lyon 1, and SysFera. It is open-source software released under the CeCILL license.

Supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

Quasi-opportunistic supercomputing

Quasi-opportunistic supercomputing is a computational paradigm for supercomputing on a large number of geographically disperse computers. Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic resource sharing.

Supercomputer architecture

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.

Computation offloading is the transfer of resource intensive computational tasks to a separate processor, such as a hardware accelerator, or an external platform, such as a cluster, grid, or a cloud. Offloading to a coprocessor can be used to accelerate applications including: image rendering and mathematical calculations. Offloading computing to an external platform over a network can provide computing power and overcome hardware limitations of a device, such as limited computational power, storage, and energy.

References

  1. What is grid computing? - Gridcafe Archived 2013-02-10 at the Wayback Machine. E-sciencecity.org. Retrieved 2013-09-18.
  2. "Scale grid computing down to size". NetworkWorld.com. 2003-01-27. Archived from the original on 2023-12-06. Retrieved 2015-04-21.
  3. "What is the Grid? A Three Point Checklist" (PDF). Archived from the original (PDF) on 2014-11-22. Retrieved 2010-10-21.
  4. "Pervasive and Artificial Intelligence Group :: publications [Pervasive and Artificial Intelligence Research Group]". Diuf.unifr.ch. May 18, 2009. Archived from the original on July 7, 2011. Retrieved July 29, 2010.
  5. Computational problems - Gridcafe Archived 2012-08-25 at the Wayback Machine. E-sciencecity.org. Retrieved 2013-09-18.
  6. "What is grid computing?". IONOS Digitalguide. Archived from the original on 2022-01-28. Retrieved 2022-03-23.
  7. Kertcher, Zack; Coslor, Erica (2018-07-10). "Boundary Objects and the Technical Culture Divide: Successful Practices for Voluntary Innovation Teams Crossing Scientific and Professional Fields" (PDF). Journal of Management Inquiry. 29: 76–91. doi:10.1177/1056492618783875. hdl:11343/212143. ISSN 1056-4926. S2CID 149911242. Archived (PDF) from the original on 2022-03-28. Retrieved 2019-09-18.
  8. "HTCondor - Home". research.cs.wisc.edu. Archived from the original on 2 March 2018. Retrieved 14 March 2018.
  9. John McCarthy, speaking at the MIT Centennial in 1961
  10. Garfinkel, Simson (1999). Abelson, Hal (ed.). Architects of the Information Society, Thirty-Five Years of the Laboratory for Computer Science at MIT. MIT Press. ISBN 978-0-262-07196-3.
  11. Anderson, David P; Cobb, Jeff; et al. (November 2002). "SETI@home: an experiment in public-resource computing". Communications of the ACM. 45 (11): 56–61. doi:10.1145/581571.581573. S2CID   15439521.
  12. Nouman Durrani, Muhammad; Shamsi, Jawwad A. (March 2014). "Volunteer computing: requirements, challenges, and solutions". Journal of Network and Computer Applications. 39: 369–380. doi:10.1016/j.jnca.2013.07.006.
  13. Johnson, Bridget (2019-11-06). "Grid Computing Pioneer Steve Tuecke Passes Away at 52". Archived from the original on 2022-11-04. Retrieved 2022-11-04.
  14. "Father of the Grid". Archived from the original on 2012-03-01. Retrieved 2007-04-15.
  15. Salem, M. (2007). Grid Computing: A New Paradigm for Healthcare Technologies/Applications. Retrieved 2022-08-30.
  16. "Edward Seidel 2006 Sidney Fernbach Award Recipient". IEEE Computer Society Awards. IEEE Computer Society. Archived from the original on 15 August 2011. Retrieved 14 October 2011.
  17. "Edward Seidel • IEEE Computer Society". www.computer.org. Archived from the original on 15 August 2011. Retrieved 14 March 2018.
  18. Pande lab. "Client Statistics by OS". Folding@home. Stanford University. Archived from the original on April 12, 2020. Retrieved March 26, 2020.
  19. "BOINCstats – BOINC combined credit overview". Archived from the original on January 22, 2013. Retrieved October 30, 2016.
  20. "SDSC, Wisconsin U IceCube Center Conduct GPU Cloudburst Experiment". SDSC. Archived from the original on September 14, 2022. Retrieved April 22, 2022.
  21. "Einstein@Home Credit overview". BOINC. Archived from the original on August 27, 2016. Retrieved October 30, 2016.
  22. "SETI@Home Credit overview". BOINC. Archived from the original on July 3, 2013. Retrieved October 30, 2016.
  23. "MilkyWay@Home Credit overview". BOINC. Archived from the original on May 20, 2012. Retrieved October 30, 2016.
  24. "Internet PrimeNet Server Distributed Computing Technology for the Great Internet Mersenne Prime Search". GIMPS. Archived from the original on May 25, 2019. Retrieved March 12, 2019.
  25. bitcoinwatch.com. "Bitcoin Network Statistics". Bitcoin. Archived from the original on January 20, 2023. Retrieved March 12, 2019.
  26. Kertcher, Zack; Venkatraman, Rohan; Coslor, Erica (23 April 2020). "Pleasingly parallel: Early cross-disciplinary work for innovation diffusion across boundaries in grid computing". Journal of Business Research. 116: 581–594. doi:10.1016/j.jbusres.2020.04.018. hdl:11343/237477. S2CID 219048576.
  27. "beingrid.eu: Stromkosten Vergleiche -". beingrid.eu: Stromkosten Vergleiche. Archived from the original on 23 July 2011. Retrieved 14 March 2018.
  28. "Welcome to the Worldwide LHC Computing Grid - WLCG". wlcg.web.cern.ch. Archived from the original on 25 July 2018. Retrieved 14 March 2018.
  29. "GStat 2.0 – Summary View – GRID EGEE". Goc.grid.sinica.edu.tw. Archived from the original on March 20, 2008. Retrieved July 29, 2010.
  30. "Real Time Monitor". Gridportal.hep.ph.ic.ac.uk. Archived from the original on December 16, 2009. Retrieved July 29, 2010.
  31. "LCG – Deployment". Lcg.web.cern.ch. Archived from the original on November 17, 2010. Retrieved July 29, 2010.
  32. "The Times & The Sunday Times". thetimes.co.uk. Archived from the original on 25 February 2021. Retrieved 14 March 2018.
  33. Athanaileas, Theodoros; et al. (2011). "Exploiting grid technologies for the simulation of clinical trials: the paradigm of in silico radiation oncology". SIMULATION: Transactions of the Society for Modeling and Simulation International. 87 (10): 893–910. doi:10.1177/0037549710375437. S2CID   206429690.
  34. Archived April 7, 2007, at the Wayback Machine
  35. P Plaszczak, R Wellner, Grid computing, 2005, Elsevier/Morgan Kaufmann, San Francisco
  36. IBM Solutions Grid for Business Partners: Helping IBM Business Partners to Grid-enable applications for the next phase of e-business on demand
  37. Structure of the Multics Supervisor Archived 2014-01-16 at the Wayback Machine. Multicians.org. Retrieved 2013-09-18.
  38. "A Gentle Introduction to Grid Computing and Technologies" (PDF). Archived (PDF) from the original on March 24, 2006. Retrieved May 6, 2005.
  39. "The Grid Café – The place for everybody to learn about grid computing". CERN. Archived from the original on December 5, 2008. Retrieved December 3, 2008.

Bibliography