
Dr. T.R. Gopalakrishnan Nair


Publications authored: Selected Papers Vol. 1, 2 and 3; InterJRI, two streams (Chief Editor); Proceedings; Kindling Innovations I & II.

Email: trgnair@gmail.com

Reflections from Ph.D. Candidates

 

 

Theme: Defect Prevention, Software Engineering

V. Suma

Associate Member, Research and Industry Incubation Centre, D S I
Assistant Professor, Information Science & Engineering, DSCE, Bangalore

I met Dr. T.R. Gopalakrishnan Nair, Director, Research and Industry Incubation Centre, Dayananda Sagar Institutions, Bangalore, a year ago. He helped me identify an area in Software Engineering in which to carry out my research and gave me a problem to work on. His interaction is a panacea for all who are thirsty for knowledge. Within a year of this interaction, I was able to come out with a set of research papers at both national and international levels. This was made possible only because of his continuous guidance, motivation, support and help.

I adore my work and the field of Software Engineering. After my interactions with the Director, RIIC, in the last quarter of 2007, I was given two topics, Defect Prevention and Defect Prediction, and asked to choose one of them for my research. I chose Defect Prevention as my area of work. Three months of work gave me a clear understanding of defect prevention and its various approaches, and the outcome was my first paper, presented at a national conference. Among the several approaches, inspection proved to be the most promising technique for both defect detection and prevention.

The next step of the work showed that 13-15% inspection time together with 25-30% testing time, out of the whole development effort, is required to achieve a 99% defect-free product. This came out as an international paper presented in Singapore. Further, we showed that 70% of defects can be captured only by combining inspection and developer unit testing; this has been published as an IEEE paper. I was also able to show that the design phase of software development carries more flaws.

Dr. T.R. Gopalakrishnan Nair helped me to come out with a new metric called Depth of Inspection, which quantifies the quality of the inspection conducted at every phase of software development. Further, I was able to develop a four-step combinational model to enhance defect detection and prevention through our enhanced approach to inspection. I have also been able to support my findings through the occurrence of defect patterns for various types of defects at each phase of software development. I am now working to identify the most common root causes of these defect patterns and to find out in what percentage each root cause contributes to the observed pattern. We look forward to organising our work in 2009 to demonstrate the effectiveness of our combinational model in reducing the defect distribution.
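As a rough illustration of the idea (a minimal sketch only; the defect counts and the simple ratio below are assumptions for illustration, not the published definition of the metric), the depth-of-inspection notion can be pictured as the share of the defects captured in a phase that inspection, rather than testing, caught:

    # Illustrative sketch: a simplified per-phase inspection-effectiveness ratio.
    # The phase names, defect counts and formula are assumptions, not the published metric.

    def inspection_ratio(defects_by_inspection, defects_by_testing):
        """Fraction of all captured defects in a phase that inspection caught."""
        total = defects_by_inspection + defects_by_testing
        return defects_by_inspection / total if total else 0.0

    # Hypothetical per-phase defect counts for one project
    phases = {
        "requirements": (12, 3),
        "design":       (20, 15),   # the design phase tends to carry more flaws
        "coding":       (18, 22),
    }

    for phase, (found_by_inspection, found_by_testing) in phases.items():
        ratio = inspection_ratio(found_by_inspection, found_by_testing)
        print(f"{phase:>12}: inspection captured {ratio:.0%} of known defects")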


Theme: Multimedia Communication Networks - Proxy Caching Techniques

M. Dakshayini

Associate Member, Research and Industry Incubation Centre, D S I

In a multimedia system, storage and bandwidth are critical resources because any presentation requires a large volume of data to be delivered in real time. The data files of multimedia systems are very large in comparison to the data requirements of most traditional applications, and so demand a large space for storage and a large bandwidth for transport. To guarantee quality of service, the playback bandwidth is reserved on the entire delivery path(s), which consists of many system components. This makes proper usage of such critical resources even more challenging.

The buffering and caching of multimedia documents in local storage can reduce the required retrieval bandwidth and in turn the transmission cost. For example, storing frequently used documents in client nodes or in intermediate distribution server nodes can reduce the expensive network bandwidth required for retrieving documents from remote servers. Similarly, storing documents shared by many users in the server memory can reduce disk retrieval bandwidth.
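A minimal sketch of this caching idea, assuming an intermediate proxy node that keeps the most recently requested documents in a fixed-size local store (a plain LRU policy; real multimedia proxies use more elaborate, size- and popularity-aware policies):

    from collections import OrderedDict

    class ProxyCache:
        """Minimal LRU proxy cache: serve locally when possible, else fetch from origin."""
        def __init__(self, capacity, fetch_from_origin):
            self.capacity = capacity
            self.fetch_from_origin = fetch_from_origin   # callable: doc_id -> document data
            self.store = OrderedDict()

        def get(self, doc_id):
            if doc_id in self.store:                     # cache hit: no remote transfer
                self.store.move_to_end(doc_id)
                return self.store[doc_id]
            data = self.fetch_from_origin(doc_id)        # cache miss: costly remote fetch
            self.store[doc_id] = data
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)           # evict the least recently used document
            return data

    # Hypothetical usage: the origin fetch is simulated by a simple function
    cache = ProxyCache(capacity=2, fetch_from_origin=lambda d: f"<contents of {d}>")
    for request in ["clip_a", "clip_b", "clip_a", "clip_c", "clip_b"]:
        cache.get(request)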

Both buffering and caching concern the use of primary storage to avoid delay and/or overhead in accessing secondary storage. By anticipating their use, the data blocks needed by an application may already be present in memory or may be pre-fetched from secondary storage. The distinction between buffering and caching, however, lies in the performance objectives, the application requirements, and the way primary storage is used. I thank the RIIC director, Dr. Gopalakrishnan Nair, for giving me an opportunity to continue my research work at RIIC under his supervision.


Theme: Optimization Policies for Multimedia Streaming Servers

P. Jayarekha

Associate Member, Research and Industry Incubation Centre, D S I

A multimedia server stores audio, video and textual information on a large array of extremely high-capacity storage devices such as optical or magnetic disks. The storage manager in this server is responsible for the storage and retrieval of multimedia data from these devices. Much multimedia data consists of a sequence of frames made up of data blocks belonging to various multimedia objects. It is therefore necessary to map these frame buffers into storage blocks, to allocate these blocks on storage devices, and to decide the placement of multimedia files on the various logical devices. Many multimedia presentations may consist of multiple multimedia files to be played sequentially or in parallel. Because data placement has an important impact on performance, it may be possible to exploit this knowledge to improve the performance of the storage subsystem.
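One placement strategy consistent with this idea is to stripe the blocks of a media file round-robin across the available devices, so that the sequential playback load is spread evenly. The sketch below is only an illustration under that assumption and does not represent the storage manager discussed here:

    def stripe_blocks(num_blocks, num_disks):
        """Round-robin striping: block i of a media file is placed on disk (i mod num_disks)."""
        placement = {disk: [] for disk in range(num_disks)}
        for block in range(num_blocks):
            placement[block % num_disks].append(block)
        return placement

    # Hypothetical example: a clip of 10 data blocks spread over 4 disks
    for disk, blocks in stripe_blocks(10, 4).items():
        print(f"disk {disk}: blocks {blocks}")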

There are two major objectives of scheduling policies for retrieving multimedia data.

  1. Efficient retrieval of multimedia data while minimizing the retrieval overhead of the storage device.
  2. Ensuring continuous delivery by guaranteeing that each multimedia data stream is able to obtain the required bandwidth.

Real-time policies are required for the retrieval of multimedia data. Retrieval policies may be either round-robin based or fixed block size. To increase the efficiency of retrieval, both types of policy read large blocks from the disk. Since the retrieval time from the disk can be variable, both types of policy use buffering to reduce jitter.
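A minimal sketch of the round-robin flavour of such a policy, assuming each active stream receives one large fixed-size block per service round and holds it in a playout buffer (deadlines, admission control and buffer consumption during playback are omitted):

    BLOCK_SIZE = 256 * 1024   # large block reads amortise seek overhead (assumed size)

    def service_round(streams, read_block):
        """One round-robin pass: top up each stream's playout buffer with one large block."""
        for stream in streams:
            if stream["buffered"] < stream["buffer_target"]:
                data = read_block(stream["file"], stream["next_block"], BLOCK_SIZE)
                stream["buffered"] += len(data)     # buffering absorbs variable disk latency
                stream["next_block"] += 1

    # Hypothetical streams and a fake disk read, for illustration only
    streams = [
        {"file": "movie_a", "next_block": 0, "buffered": 0, "buffer_target": 4 * BLOCK_SIZE},
        {"file": "movie_b", "next_block": 0, "buffered": 0, "buffer_target": 4 * BLOCK_SIZE},
    ]
    fake_read = lambda name, index, size: b"\x00" * size
    for _ in range(3):
        service_round(streams, fake_read)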

I thank RIIC director Dr T R. Gopalakrishnan Nair, for giving me an opportunity to continue my research work at RIIC under his supervision.


Theme: Reasoning methodologies in Networking

Kavitha Sooda

Associate Member, Research and Industry Incubation Centre, D S I

I am an Associate Member of RIIC, DSI, carrying out my research work in cognitive networking under the guidance of Dr. T. R. Gopalakrishnan Nair. The work aims at reasoning techniques that the nodes can apply to reason out the optimal path. The dynamic decision-taking capability of the nodes in a network scenario is what we refer to as advanced networking. The dynamic decision is not based simply on a direct algorithm but on computational intelligence; machine learning, artificial intelligence, neural networks and genetic algorithms play an important role in an advanced network.

Most simple network systems rely on static algorithms, with a little dynamic behaviour, and require prerequisite data as input. With advanced networking, every decision is based purely on the current scenario, with learning happening at every stage. The approach can be extended to many aspects of network decision making, such as QoS, available bandwidth, path finding, de-routing, user resource expectation, adaptive routers and communication demands.

The work so far has been carried out on a reasoning methodology using the rule-based method. The input to reasoning is classified into three categories: (i) monitoring and finding, (ii) policies and goals, and (iii) coordinal networks. Monitoring information gives the demand per user and the services supported by each router, while finding provides the alternate configuration. Policies determine the initial assumptions made for the design of the network, and goals are what the network should achieve for end-to-end connectivity of the end users. Coordinal networks mean that the network should react to the dynamic reactions or decisions made by the routers at various levels.
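A minimal sketch of this rule-based reasoning, assuming each candidate path is described by a few monitored attributes and the policies and goals are encoded as rules (the attribute names and thresholds are invented for illustration):

    # Illustrative rule-based path selection; the attributes, rules and thresholds are assumptions.
    candidate_paths = [
        {"id": "P1", "hops": 4, "free_bandwidth_mbps": 20, "loss_pct": 0.5},
        {"id": "P2", "hops": 6, "free_bandwidth_mbps": 80, "loss_pct": 0.1},
        {"id": "P3", "hops": 3, "free_bandwidth_mbps": 5,  "loss_pct": 2.0},
    ]

    # Policies and goals expressed as rules: each returns True if the path satisfies it.
    rules = [
        lambda p: p["free_bandwidth_mbps"] >= 10,   # goal: enough bandwidth for the service
        lambda p: p["loss_pct"] < 1.0,              # policy: keep loss within the QoS bound
    ]

    def reason_optimal_path(paths, rules):
        """Keep the paths that satisfy every rule, then prefer the shortest one."""
        feasible = [p for p in paths if all(rule(p) for rule in rules)]
        return min(feasible, key=lambda p: p["hops"]) if feasible else None

    print(reason_optimal_path(candidate_paths, rules))   # -> P1 under these assumptions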


Theme: Effect of Non-Functional Requirements (NFRs) on Software Production Engineering

A. Keshav Bharadwaj

Doctoral Member, Research and Industry Incubation Centre, D S I
Assistant Professor, Computer Science Engineering, S. J. B. Institute of Technology, Bangalore.

I am working with Dr. T.R. Gopalakrishnan Nair, and my research topic is “Factoring Non-Functional and Performance Requirements into Estimation Models and Defining an Inclusive Estimation Model”. In 2008 we carried out a serious amount of study along this line, and the result has been submitted to the IEEE International Advance Computing Conference (IACC ’09), scheduled to be held at Patiala in March 2009. We have received an initial acceptance and are awaiting confirmation. I am looking forward to an exhilarating 2009, in which an effort will be made to quantify NFRs through various metrics and state-space models. Currently I am a doctoral candidate in RIIC.

Going forward in 2009, the course work will be attempted first and the four selected subjects will be studied. I would like to clear all four papers and then begin working on the research scope. At the same time, I will continue to read and keep myself abreast of developments in the field of estimation. I would also like to write papers on related topics and widen my understanding of the field of Software Engineering.


Theme: Software Engineering and Cognitive Methods

 N. R. Shashi Kumar

Associate Member, Research and Industry Incubation Centre, D S I
Executive Engineer (Systems), Karnataka Power Corporation Ltd., Bangalore.

I have been associated with RIIC since April 2008 as an Associate Member and have been working on cognitive processes in Software Engineering under the guidance of Dr. T.R. Gopalakrishnan Nair, Director, RIIC, and Prof. R. Selvarani, Head, ASERIG, RIIC, DSI.

Cognitive informatics is a new, interdisciplinary research area that studies how information is processed and represented by the human brain and how this knowledge is applied in computing; in other words, it concerns the way humans understand and solve problems. Software requirement analysis is the first stage in the systems engineering and software development processes. Systematic requirement analysis, also known as requirements engineering, is critical to the success of a development project. Understanding information requirements is a difficult and error-prone process because of the inherent complexity and size of current organizational information systems, the cognitive limitations of users and information analysts, and the complexity of the interaction between analysts and users in defining information requirements. Although users may be experts in application domains, it is difficult to extract domain knowledge by simply asking the users what their information requirements are, because of the cognitive limitations of humans as information processors.

The limited knowledge and experience of information analysts also contribute to the difficulty of understanding information requirements. Without adequate knowledge, information analysts may develop their representations to a lesser extent than users expect, and may even misrepresent how the functions interact with each other, resulting in poor-quality requirement specifications. The software development process requires programmers to gather and learn large amounts of knowledge distributed over several domains, such as the application and programming domains. Therefore, studying the cognitive processes at work during software development can shed light on many software development issues and problems.


Theme: Segmentation Methodology for Detecting Bone from Overlapping Tissues and Vessels in CT Angiographic Images

Harikrishna Rai G.N

Technical Architect, SETLabs, Infosys, Bangalore
Associate Member, Research and Industry Incubation Centre, D S I

The major challenges faced in the visualization of CTA images lie in the elimination of bone structures so that the underlying arteries and soft tissues can be analysed in detail. The bone removal method usually acts as a crucial pre-processing step in the advanced visualization and clinical analysis phase. A trivial thresholding method based on bone characteristics, using Hounsfield values (CT numbers), fails due to the overlapping Hounsfield values of bone and contrast-filled vessels. The key challenge for medical imaging researchers is therefore to find a robust segmentation algorithm that is independent of bone characteristics. In this research work we address the various problems involved in bone segmentation and look for an optimal and robust model that can differentiate between overlapping anatomic structures and successfully eliminate bone structures from CTA images.
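To make that failure mode concrete: a trivial threshold on Hounsfield units labels every voxel above some cutoff as bone, but contrast-filled vessels can fall into the same range. The sketch below, with assumed Hounsfield values, only illustrates that limitation and is not the segmentation model this work aims at:

    import numpy as np

    BONE_THRESHOLD_HU = 300   # assumed cutoff for illustration

    def threshold_bone(volume_hu, cutoff=BONE_THRESHOLD_HU):
        """Naive bone mask: every voxel brighter than the cutoff is called bone."""
        return volume_hu > cutoff

    # Tiny synthetic slice: soft tissue (~50 HU), contrast-filled vessel (~400 HU), bone (~900 HU)
    slice_hu = np.array([
        [ 50,  50, 400, 400],
        [ 50,  50, 400, 400],
        [900, 900,  50,  50],
        [900, 900,  50,  50],
    ])
    print(threshold_bone(slice_hu).astype(int))
    # The vessel voxels (400 HU) are wrongly included in the "bone" mask,
    # which is exactly the overlap problem this research addresses.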

A key challenge is low contrast and inhomogeneous edges. The inhomogeneous regions are due to the nature of bone structure, in which the outer cortical layer is denser than the inner spongy layer. During the image acquisition process, small gaps can also exist in the bone surface where blood vessels pass through. In addition, when the boundaries of two bone regions are close to each other they tend to be diffused, making the background pixels between them brighter and thus lowering the contrast. This can lead to the segmented bone leaking into other tissues. Hence no one has yet provided a complete and clear solution for delineating the bone-soft tissue interface region and overlapping structures.

Based on our survey, we have found that bone segmentation in CT angiography is one of the most challenging problems, and developing a fast, robust method with minimal user intervention is the key demand from radiologists who need to visualize these images quickly. A key problem in CT angiography image segmentation is coming up with a novel technique to segment bone when vessels and arteries overlap the surrounding bone, knowing that the pixel characteristics of contrast-filled vessels and bone structures are similar, with overlapping Hounsfield values. The key objective of our research is to come up with a robust model that can differentiate between overlapping anatomic structures and successfully eliminate bone information. The models developed by this research will be a great aid to radiologists and clinical researchers in performing detailed analysis of arterial and vascular anomalies.


Theme: Advanced Applications of Wavelet Transform Theory in Natural Signals

Geetha A. P.

Associate Member, Research and Industry Incubation Centre, D S I.

I am working as a research associate in RIIC under the guidance of Dr. T.R. Gopalakrishnan Nair. My area of research is “The Application of Wavelet Transforms on Natural Signals”. In the present scenario, signal prediction plays an important role in numerous applications aimed at extracting important information about a system. The wavelet transform is used because of its time-frequency analyzing capability. Natural signals are fascinating in their properties; at the same time, they require multidisciplinary knowledge and a higher level of rationality to analyse and derive information from them. Here, we treat wavelet transforms as a field of investigation for analysis purposes. We have modeled analytical planes in which wavelet transform theory closely observes signals in biological systems, nuclear physics and phenomena of the universe, where the original system is to be understood through the available signals emanating from it.
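A minimal sketch of the time-frequency idea, assuming a single-level Haar decomposition written directly with NumPy (actual analyses of natural signals would use deeper decompositions and richer wavelet families):

    import numpy as np

    def haar_dwt_level1(signal):
        """One level of the Haar wavelet transform: coarse approximation plus detail."""
        x = np.asarray(signal, dtype=float)
        if len(x) % 2:                              # Haar pairs samples, so keep an even length
            x = x[:-1]
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass part: slow trends
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass part: rapid changes
        return approx, detail

    # Hypothetical "natural" signal: a slow oscillation plus a short burst
    t = np.linspace(0, 1, 64)
    signal = np.sin(2 * np.pi * t) + (np.abs(t - 0.5) < 0.05) * 0.8
    approx, detail = haar_dwt_level1(signal)
    print("the burst shows up around detail index", int(np.argmax(np.abs(detail))))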


Theme: Kalman Filter based Ultrasound Signal Segmentation for Fetal Movement Estimation

Rajendra Kurady

Lead Architect, CERNER (USA) Corporation, Bangalore
Associate Member, Research and Industry Incubation Centre, D S I

In Ultrasound (A-scan) and Doppler modes, differentiating the blood flow signals from the tissue motion signals is very important. Tissue motion signals are generally large in amplitude and low in Doppler frequency when compared to the blood flow signals, and they result in low-frequency spikes in the spectral display. A sophisticated Kalman filter can be used to differentiate frequency and amplitude in order to segment out the tissue motion signals. This project deals with the implementation of a Kalman filter on ultrasound signals for the estimation of fetal tissue movement.
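A minimal sketch of the filtering idea, assuming a one-dimensional, slowly varying level observed through noisy samples (the actual project concerns ultrasound phase and amplitude, which needs a richer state model):

    import numpy as np

    def kalman_1d(measurements, process_var=1e-4, measurement_var=1e-2):
        """Scalar Kalman filter: track a slowly varying level through noisy observations."""
        x, p = 0.0, 1.0                     # initial state estimate and its variance
        estimates = []
        for z in measurements:
            p += process_var                # predict: the level may have drifted a little
            k = p / (p + measurement_var)   # Kalman gain: how much to trust the new sample
            x += k * (z - x)                # update the estimate toward the measurement
            p *= (1 - k)                    # reduced uncertainty after the update
            estimates.append(x)
        return np.array(estimates)

    # Hypothetical noisy samples around a slowly moving level
    rng = np.random.default_rng(0)
    true_level = np.linspace(0.0, 1.0, 200)
    noisy = true_level + rng.normal(0.0, 0.2, size=200)
    smoothed = kalman_1d(noisy)
    print("final estimate:", round(float(smoothed[-1]), 3), "(true final value: 1.0)")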

The following are the goals and objectives of the project:

  1. Feasibility study of the Kalman filter and its mathematical formulation.
  2. Feasibility study of ultrasound signal characteristics, specifically focused on fetal movement during pregnancy.
  3. Identifying and implementing techniques for applying the Kalman filter in ultrasound signal processing.
  4. Ultrasound signal phase and amplitude differentiation using the Kalman filter in order to detect fetal movement.
  5. Development of a sophisticated signal-processing filter for segmenting the ultrasound signals accurately, enabling accurate measurement of fetal movement.

 

Highlights

 

  • Education from leading institutions like IISc, Bangalore
  • Experience in Electronics, Computers, Aerospace and Business
  • Leading Roles in these domains
  • Awards received include: PARAM award, Intel IRMX award, TeamTech Research Excellence Award
  • IEEE Senior member for two decades, ACM member

Research

Research is a passion. Several products, processes and principles have been developed during this period. Publications include 75 technical papers, along with 3 patents and 17 products developed.

Other details of achievements are shown in the resume.

Research papers are indexed in leading knowledge servers like Cornell arXiv, UniTrier DBLP, Harvard.edu, etc.

Till 2010

  • Around 90 Research Papers
  • 42 Keynote addresses
  • Several International Visits and Presentations
  • 3 patents, 6 under processing
  • 5 Book chapters- International
  • The development of the International Journal of Research and Industry (InterJRI) in two streams, www.interjri.org
  • Industrial Consultancy