Chaos, Control and Synchronization in Excitatory-Inhibitory Neural Network Models

Date of Submission

February 2011

Date of Award


Institute Name (Publisher)

Indian Statistical Institute

Document Type

Doctoral Thesis

Degree Name

Doctor of Philosophy

Subject Name



Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)


Pal, Sankar Kumar (MIU-Kolkata; ISI)

Abstract (Summary of the Work)

Since the development of the electronic computer in the 1940s, the serial processing computational paradigm has successfully held sway, to the point where it is now ubiquitous. However, there are many tasks which are yet to be successfully tackled computationally. A case in point is the multifarious activities that the human brain performs routinely, including pattern recognition, associative recall, etc., which are extremely difficult, if not impossible, to carry out using traditional computation. This problem has led to the development of non-standard techniques to tackle situations at which biological information processing systems excel. One of the more successful of such developments aims at reverse-engineering the biological apparatus itself to find out why and how it works. The field of neural network models has grown up on the premise that the massively parallel, distributed processing and connectionist structure observed in the brain is the key behind its superior performance. By implementing these features in the design of a new class of architectures and algorithms, it is hoped that machines will approach human-like ability in handling real-world situations.

Network models of computation have been enjoying a period of revival for quite some time now, from the perspective of both theory and applications [85]. These models comprise networks of large numbers of simple processing elements, usually having continuously varying activation values and stochastic threshold dynamics. The activity of these elements, a_i (i = 1, 2, ..., N), at some time instant t is determined by the temporal evolution equation

    a_i(t) = F( Sum_j W_ij a_j(t-1) - theta_i ),

where theta_i is an internal threshold (usually taken as zero), W_ij is the connection weight from element j to element i, and F is a nonlinear activation function. If W_ij > 0, the synaptic connection between neurons i and j is called excitatory; if W_ij < 0, it is called inhibitory.
The activation function, F, usually has a sigmoid form, which may be of the following type:

    F(x) = tanh(x/a),

a being the slope parameter. In the limit a -> 0, F becomes a hard-limiting or step function, so that a_i = sgn( Sum_j W_ij a_j - theta_i ).

Different neural network models are specified by:

- network topology, i.e. the pattern of connections between the elements comprising the network;
- characteristics of the processing element, e.g. the explicit form of the nonlinear function F and the value of the threshold theta;
- the learning rule, i.e. the rules for computing the connection weights W_ij appropriate for a given task; and
- the updating rule, e.g. the states of the processing elements may be updated in parallel (synchronous updating), sequentially, or randomly.

One of the limitations of most network models at present is that they are basically static, i.e., once an equilibrium state is reached, the network remains in that state until the arrival of new external input [8]. In contrast, real neural networks show a preponderance of dynamical behavior.
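The synchronous update rule described above can be sketched in a few lines of code. This is a minimal illustration only, not the thesis's own implementation: the network size, the random Gaussian weights, the zero thresholds, and the slope value are all hypothetical choices made here for demonstration.

```python
import numpy as np

def step(a, W, theta=0.0, slope=1.0):
    """One synchronous update: a_i(t) = F(sum_j W_ij a_j(t-1) - theta_i),
    with the sigmoid activation F(x) = tanh(x / slope)."""
    return np.tanh((W @ a - theta) / slope)

# Hypothetical network: N elements with random excitatory/inhibitory weights.
rng = np.random.default_rng(0)
N = 10
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # W_ij > 0 excitatory, < 0 inhibitory
a = rng.uniform(-1.0, 1.0, size=N)                   # initial activations

# Iterate the dynamics; activations stay bounded in (-1, 1) because of tanh.
for _ in range(100):
    a = step(a, W)
```

A hard-limiting network is obtained by replacing `np.tanh(...)` with `np.sign(W @ a - theta)`, corresponding to the a -> 0 limit of the slope parameter.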


ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28842936

Control Number


Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.


