Scheduling Memory in a Multiprocessing Environment

Date of Submission

December 1999

Date of Award

Winter 12-12-2000

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Master's Dissertation

Degree Name

Master of Technology

Subject Name

Computer Science


Advanced Computing and Microelectronics Unit (ACMU-Kolkata)


Sinha, Bhabani Prasad (ACMU-Kolkata; ISI)

Abstract (Summary of the Work)

Many real-life applications, such as digital signal processing, image processing, weather forecasting, and neural processing, require large amounts of computation to be performed in an acceptable time frame. It is here that parallel machines provide much superior performance compared to conventional sequential machines (the von Neumann architecture). The demand for increased performance appears to have no upper bound in the present-day context. Research and development activities worldwide have led to faster and faster processors to meet the ever-increasing demand for better performance. Major technological breakthroughs in the form of VLSI have brought about a revolution in miniaturising component size and achieving ever higher speeds. But there is a fundamental physical limit to speed: nothing can move faster than light. With today's technology, computers have come close to this fundamental limit, so there is a limit on the number of floating-point operations such processors can execute per second. Parallel processing has been accepted as the most important architectural approach to overcoming this technological barrier.

Parallel processing is currently an active research area, pursued by the world's leading scientists and professors. There remains considerable scope for improvement, and more efficient algorithms keep appearing, notwithstanding the quantum leap already made in the design of parallel machines. Parallel machines are foreseen as the machines of future generations. While the architecture for supporting parallel computation is an issue, it is certainly not the major problem; there are also issues relating to the programmability of parallel machines.
For the features offered by parallel architectures to be utilized effectively, the synergy among architecture, algorithm, and programming must be properly fostered. Parallel processing has been used in some form or other to increase the performance of computers ever since their emergence as computing devices, for it is performance that sets an index to the commercial viability of a computer. In this direction, capabilities for bit-parallel arithmetic, the development of I/O co-processors, cache and content-addressable memories, and the application of pipelining, vector processing, time sharing, multiprogramming, and multiprocessing all strive to extract optimal performance from a computer by exploiting its inherent parallelism.

In parallel processing we use multiple processors to execute a single task. This is achieved by parallel algorithms that distribute the task among all the processors. After computing their intermediate results, the processors communicate with each other through shared variables in shared memory modules. Proper synchronization is needed to preserve semantic dependence and thereby guarantee the expected results; this is what makes parallel computation more complicated. The interconnection mechanism used to connect processors to memory modules must be fast enough to guarantee that the time spent on communication does not nullify the advantage gained by parallel computation.
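The shared-variable model sketched above can be illustrated with a small example (not taken from the dissertation; the variable names and the toy summation task are assumptions for illustration only): several workers compute intermediate results of one task locally, then combine them through a shared variable, with a lock supplying the synchronization needed to preserve semantic dependence.

```python
import threading

N_WORKERS = 4
data = list(range(1, 101))   # the single task: sum the integers 1..100
total = 0                    # shared variable, standing in for shared memory
lock = threading.Lock()      # guards updates to the shared variable

def worker(chunk):
    """Compute an intermediate result locally, then merge it into shared state."""
    global total
    partial = sum(chunk)     # intermediate result, no communication needed
    with lock:               # synchronize before touching the shared variable
        total += partial

# Distribute the single task evenly among the workers.
step = len(data) // N_WORKERS
threads = [threading.Thread(target=worker, args=(data[i * step:(i + 1) * step],))
           for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)                 # same answer a sequential sum would give: 5050
```

Without the lock, two workers could read `total` concurrently and one update would be lost; the lock is exactly the "proper synchronization" the abstract refers to.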


ProQuest Collection ID:

Control Number


Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

