What Is Parallel Computing

What Is Parallel Computing? A Definition of Parallel Computation
Parallel computing is a computational technique that uses multiple independent computers (or processors) simultaneously. It is generally used for very large computing tasks, either because large amounts of data must be processed (in the financial industry, bioinformatics, etc.) or because a great many computations must be performed. The second case is commonly encountered in numerical calculations that solve mathematical equations in physics (computational physics), chemistry (computational chemistry), and so on.

Performing parallel computing requires a parallel machine infrastructure: many computers connected to a network and able to work in parallel to complete a task. This in turn requires supporting software, commonly referred to as middleware, whose role is to distribute work among the nodes of the parallel machine. The user must then write a parallel program to realize the computation. However, having a parallel machine does not mean that every program run on it will automatically be processed in parallel.

Parallel computing relies on what is called parallel programming. Parallel programming is a computer programming technique that allows simultaneous execution of commands or operations (parallel computing), whether on a single multi-processor machine or across multiple machines. When the work is carried out by separate computers connected in a computer network, the term more often used is distributed computing.

The main purpose of parallel programming is to improve computing performance. The more tasks that can be done simultaneously (at the same time), the more work can be accomplished. The simplest analogy is cooking: if you can boil water while chopping onions, the time you need is less than if you do the two steps in sequence (serially). Similarly, the time needed to chop the onions is less if two people do it.
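
This analogy can be sketched directly in code. Here is a minimal example using Python's standard multiprocessing module (the two task functions are hypothetical stand-ins for real work):

```python
# A minimal sketch of parallel programming with Python's standard
# multiprocessing module. The task functions are hypothetical examples.
from multiprocessing import Process
import time

def boil_water():
    time.sleep(2)            # pretend this step takes 2 seconds
    print("water boiled")

def chop_onions():
    time.sleep(2)            # pretend this step takes 2 seconds
    print("onions chopped")

if __name__ == "__main__":
    tasks = [Process(target=boil_water), Process(target=chop_onions)]
    for t in tasks:
        t.start()            # both tasks now run simultaneously
    for t in tasks:
        t.join()             # total time is ~2 s, not ~4 s
```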

Performance in parallel programming is measured by the speedup obtained from using parallel techniques: speedup = serial time / parallel time. Logically, if chopping an onion alone takes 1 hour and, with the help of a friend, the two of you can do it in half an hour, then you have obtained a speedup of 2.
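
Speedup can be measured directly. A minimal Python sketch (the work function and job sizes are arbitrary illustrations; the actual numbers depend on your machine):

```python
# Measure speedup = (serial time) / (parallel time).
# The work function and job sizes are arbitrary illustrations.
from multiprocessing import Pool
import time

def work(n):
    return sum(i * i for i in range(n))    # CPU-bound busy work

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    start = time.perf_counter()
    serial = [work(n) for n in jobs]       # one job at a time
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=4) as pool:        # four workers in parallel
        parallel = pool.map(work, jobs)
    t_parallel = time.perf_counter() - start

    assert serial == parallel              # same answers, less wall time
    print(f"speedup = {t_serial / t_parallel:.2f}x")
```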

Parallel processing is different from multitasking, in which one CPU executes several programs at once. Parallel processing is also called parallel computing. A parallel computing system consists of several processor units and several memory units. There are two different techniques for accessing data in the memory units, namely shared memory addressing and message passing. Based on how memory is organized, parallel computers are divided into shared-memory parallel machines and distributed-memory parallel machines; a sketch of the two access styles follows.
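
The two techniques can be contrasted with a rough software sketch (Python's multiprocessing simulating both styles on one machine; a real distributed-memory system would pass messages over a network):

```python
# Contrast of the two memory-access techniques, sketched in Python.
# Shared memory: workers read and write one common location.
# Message passing: workers exchange explicit messages instead.
from multiprocessing import Process, Value, Queue

def add_shared(counter):
    with counter.get_lock():       # shared memory address: update in place
        counter.value += 1

def add_message(queue):
    queue.put(1)                   # message passing: send the result

if __name__ == "__main__":
    # Shared-memory style
    counter = Value("i", 0)
    workers = [Process(target=add_shared, args=(counter,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("shared memory total:", counter.value)                 # 4

    # Message-passing style
    queue = Queue()
    workers = [Process(target=add_message, args=(queue,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("message passing total:", sum(queue.get() for _ in range(4)))  # 4
```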

Processors and memory in parallel machines can be interconnected either statically or dynamically. Static interconnection is commonly used by distributed-memory systems: direct peer-to-peer connections link all the processors. Dynamic interconnection generally uses switches to connect processors and memory. The thing to remember is that parallel computing is different from multitasking.

Multitasking means that a computer with a single processor executes multiple tasks seemingly at once. Although a single processor cannot really do several jobs at the same instant, the operating system's process scheduling makes the computer appear to perform its tasks simultaneously. Parallel computing, as described earlier, uses multiple processors or computers. In addition, parallel computing does not use the Von Neumann architecture.

To understand the difference between single computing (using 1 processor) and parallel computing (using multiple processors), we need to know the models of computation. Four computation models are used (Flynn's taxonomy), namely:
  1. SISD (Single Instruction, Single Data) is the only model that uses the Von Neumann architecture, because this model uses only one processor. It can therefore be regarded as the model for single computing, while the other three models are parallel computing models that use multiple processors. Some examples of computers using the SISD model are the UNIVAC 1, IBM 360, CDC 7600, Cray 1, and PDP.
  2. SIMD (Single Instruction, Multiple Data) uses multiple processors executing the same instructions, but each processor processes different data. For example, suppose we want to find the number 27 in a sequence of 100 numbers using 5 processors. Each processor runs the same algorithm or command, but the data processed is different: processor 1 processes elements 1 through 20, processor 2 processes elements 21 through 40, and so on for the other processors (see the sketch after this list). Some examples of computers using the SIMD model are the ILLIAC IV, MasPar, Cray X-MP, Cray Y-MP, Thinking Machines CM-2, and the Cell processor, as well as modern GPUs.
  3. MISD (Multiple Instruction, Single Data) uses multiple processors, with each processor using different instructions but processing the same data. This is the opposite of the SIMD model. We can take the same case as in the SIMD example but with a different way of solving it: in MISD, the first through fifth processors all process the same sequence of 100 numbers, but the search algorithm used differs on each processor. To date there is no commercial computer that uses the MISD model.
  4. MIMD (Multiple Instruction, Multiple Data) uses multiple processors, with each processor executing different instructions and processing different data. However, many computers that use the MIMD model also include components of the SIMD model. Some computers that use the MIMD model are the IBM POWER5, HP/Compaq AlphaServer, Intel IA32, AMD Opteron, Cray XT3, and IBM BG/L.
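
The SIMD search example from point 2 can be sketched in software as follows (a simulation with Python's multiprocessing rather than real SIMD hardware; the data are generated for illustration). Five workers run the same instruction, a linear search for 27, each on its own 20-element slice:

```python
# Sketch of the SIMD example from point 2: the SAME instruction (a linear
# search for 27) runs on DIFFERENT data, one 20-element slice per worker.
# This simulates the idea in software; real SIMD happens in hardware.
from multiprocessing import Pool
import random

TARGET = 27

def search(offset_and_chunk):
    offset, chunk = offset_and_chunk
    for i, value in enumerate(chunk):      # same instruction on every worker
        if value == TARGET:
            return offset + i              # global position of the match
    return None

if __name__ == "__main__":
    numbers = random.sample(range(1, 101), 100)   # 100 distinct numbers
    # Split the 100 numbers into 5 slices of 20, as in the example.
    slices = [(i, numbers[i:i + 20]) for i in range(0, 100, 20)]
    with Pool(processes=5) as pool:
        results = pool.map(search, slices)
    hits = [r for r in results if r is not None]
    print("found 27 at index:", hits[0] if hits else "not found")
```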

Here is an illustration of the difference between single computing and parallel computing:

[Figure: single/serial computing]

[Figure: parallel computing]

Comparison between serial computing and parallel computing
  1. A parallel computing system consists of several processor units and several memory units. There are two different techniques for accessing data in the memory units, namely shared memory addressing and message passing. Based on how memory is organized, parallel computers are divided into shared-memory parallel machines and distributed-memory parallel machines.
  2. Processors and memory in parallel machines can be interconnected either statically or dynamically. Static interconnection is commonly used by distributed-memory systems: direct peer-to-peer connections link all the processors. Dynamic interconnection generally uses switches to connect processors and memory.
  3. Data communication on distributed-memory parallel systems requires communication tools. The tools most often used by systems such as today's PC networks are the MPI (Message Passing Interface) standard and the PVM (Parallel Virtual Machine) standard; both work on top of the TCP/IP communication layer. Both standards require remote-access functionality in order to run the program on each processor unit. A minimal MPI sketch follows this list.
  4. One of the protocols used in parallel computing is the Network File System (NFS), a protocol for sharing resources over the network. NFS is designed to be independent of machine type, operating system type, and transport protocol; this is achieved using RPC (Remote Procedure Call). NFS allows an authorized user to access files on a remote host as if they were local. The mount protocol specifies the remote host and the type of file system to be accessed and attaches it to a directory; the NFS protocol then performs I/O on the remote file system. The NFS and mount protocols work using RPC carried over the TCP and UDP protocols. The usefulness of NFS in parallel computing is data sharing, so that each slave node can access the same program on the master node.
  5. One software package for parallel computing is the PGI CDK (Cluster Development Kit), which comes with full support for computing with parallel processing because it includes MPI for performing the computations.
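
As mentioned in point 3, MPI is the common standard for message passing. Here is a minimal master/slave sketch using the mpi4py Python binding (this assumes an MPI implementation such as Open MPI and the mpi4py package are installed):

```python
# Minimal MPI message-passing sketch using mpi4py (assumes an MPI
# implementation such as Open MPI and the mpi4py package are installed).
# Run with: mpiexec -n 4 python mpi_example.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # this process's id (0 .. size-1)
size = comm.Get_size()     # total number of processes

if rank == 0:
    # Master node: send a piece of work to each slave, collect replies.
    for dest in range(1, size):
        comm.send(dest * 10, dest=dest, tag=0)
    results = [comm.recv(source=src, tag=1) for src in range(1, size)]
    print("master received:", results)
else:
    # Slave node: receive work, compute, send the result back.
    n = comm.recv(source=0, tag=0)
    comm.send(n * n, dest=0, tag=1)
```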
