Ila Vennila
PSG College of Technology
Publications
Featured researches published by Ila Vennila.
Multimedia Signal Processing | 2011
T. Veerakumar; S. Esakkirajan; Ila Vennila
This paper proposes a new algorithm to remove salt-and-pepper noise in video. The adaptive decision algorithm first checks whether the selected pixel in the video sequence is noisy or noise-free. Initially, the window size is 3 × 3. If the selected pixel is 0 or 255 and some of the other pixels within the window are noise-free, the selected pixel value is replaced by the trimmed median value. If the selected pixel is 0 or 255 and the other pixel values in the 3 × 3 window are all 0s and 255s, the window size is increased to 5 × 5 and the selected pixel value is again replaced by the trimmed median value. If all the elements in the new 5 × 5 window are also 0s or 255s, the processing pixel is replaced by the previous resultant pixel. Finally, the performance of the proposed algorithm is compared with existing algorithms such as the median filter, the decision-based filter, and the progressive switched median filter. The proposed algorithm gives better PSNR and IEF results than the existing algorithms.
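The decision rule above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: boundary handling, the exact trimmed-median definition, and the fallback source (here, an optional previous result such as the prior frame) are assumptions.

```python
def trimmed_median_filter(frame, prev_output=None):
    """Adaptive decision-based trimmed-median filter for one frame.

    frame: 2-D list of grey levels; pixels equal to 0 or 255 are treated
    as salt-and-pepper candidates, all others as noise-free.
    """
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if frame[y][x] not in (0, 255):        # noise-free: keep as-is
                continue
            for radius in (1, 2):                  # 3x3 window first, then 5x5
                clean = [frame[j][i]
                         for j in range(max(0, y - radius), min(h, y + radius + 1))
                         for i in range(max(0, x - radius), min(w, x + radius + 1))
                         if frame[j][i] not in (0, 255)]
                if clean:                          # trimmed median of clean pixels
                    clean.sort()
                    out[y][x] = clean[len(clean) // 2]
                    break
            else:                                  # every neighbour is noisy too:
                if prev_output is not None:        # fall back to the previous
                    out[y][x] = prev_output[y][x]  # resultant pixel
    return out
```

A fully noisy 3 × 3 patch triggers the 5 × 5 expansion; a fully noisy frame falls through to the previous-result fallback.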
Signal, Image and Video Processing | 2014
T. Veerakumar; S. Esakkirajan; Ila Vennila
A spline-based approach is proposed to remove very-high-density salt-and-pepper noise in grayscale and color images. The algorithm consists of two stages: the first stage detects whether a pixel is noisy or noise-free, and the second stage removes the noisy pixel with a recursive spline interpolation filter. The filter is based on the neighboring noise-free pixels and the previous noise-free output pixel; hence, it is termed the recursive spline interpolation filter. The performance of the proposed algorithm is compared with existing algorithms such as the standard median filter, the decision-based filter, the progressive switched median filter, and the modified decision-based unsymmetric trimmed median filter at very high noise density. The proposed algorithm gives better peak signal-to-noise ratio, image enhancement factor, and correlation factor results than the existing algorithms.
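To illustrate the two-stage detect-then-interpolate idea, here is a toy 1-D version using a Catmull-Rom cubic spline as a stand-in for the paper's recursive spline filter. The non-recursive formulation, the 0/255 detection test, and the use of four horizontal neighbours (assumed noise-free) are simplifications, not the authors' method.

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Catmull-Rom cubic spline evaluated at t in [0, 1] between p1 and p2
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def restore_row(row):
    # Stage 1: flag 0/255 samples as noisy. Stage 2: interpolate each flagged
    # sample from its four nearest horizontal neighbours (assumed noise-free).
    out = row[:]
    for i in range(2, len(row) - 2):
        if row[i] in (0, 255):
            out[i] = catmull_rom(row[i - 2], row[i - 1], row[i + 1], row[i + 2], 0.5)
    return out
```

For samples on a straight ramp the spline reproduces the missing value exactly, e.g. a 255 impulse between 20 and 40 is restored to 30.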
International Conference on Recent Trends in Information Technology | 2012
T. Veerakumar; S. Esakkirajan; Ila Vennila
In this paper, a new algorithm is introduced to remove random-valued impulse noise in images. The algorithm contains two stages. The first stage detects the noisy pixels in the image; in the second stage, each noisy pixel is replaced by the median value of the neighboring noise-free pixels. The absolute difference is used to detect the noisy pixel, and the trimmed median value replaces it. The proposed algorithm shows better results than the Progressive Switching Median filter (PSM), Pixel-Wise Median Absolute Difference (PWMAD), Tri-State Median filter (TSM), Efficient Procedure for removing Random-valued Impulse Noise (EPRIN), and Optimal Direction based Random-valued Impulse Noise removal (ODRIN). The proposed algorithm is tested on different grayscale images and gives a better peak signal-to-noise ratio.
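The absolute-difference detection followed by median replacement can be sketched on a 1-D scan line as follows. The threshold value and window radius are assumptions for illustration; the paper's actual detector and trimming rule may differ.

```python
def remove_random_impulses(row, radius=2, threshold=30):
    """Two-stage removal of random-valued impulses in a 1-D scan line.

    Detection: a sample is flagged noisy when its absolute difference from
    the median of its neighbours exceeds `threshold` (value assumed here).
    Correction: the flagged sample is replaced by that median.
    """
    out = row[:]
    for i in range(radius, len(row) - radius):
        neigh = [row[j] for j in range(i - radius, i + radius + 1) if j != i]
        neigh.sort()
        median = neigh[len(neigh) // 2]
        if abs(row[i] - median) > threshold:
            out[i] = median
    return out
```

Unlike the salt-and-pepper case, a random-valued impulse can take any grey level, so the detector compares against the local median rather than testing for 0 or 255.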
International Conference on Pattern Recognition | 2012
J. Yogapriya; Ila Vennila
This paper focuses on medical image retrieval based on feature extraction, classification, and similarity measurement, which aids computer-assisted diagnosis. The selected features are shape (Generic Fourier Descriptor, GFD) and texture (Gabor Filter, GF), which are extracted and classified as positive and negative features using a classification technique called the Relevance Vector Machine (RVM), which provides a natural way to classify multiple image features. The similarity model measures the relevance between the query image and the target images based on Euclidean Distance (ED). This medical image retrieval framework is called GGRE. The retrieval performance is evaluated in terms of precision and recall. The results show that the multiple-feature classifier system yields better retrieval performance than retrieval systems based on the individual features.
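The similarity and evaluation steps are standard and easy to sketch. The feature vectors below are placeholders, not GFD/Gabor outputs; only the Euclidean ranking and the precision/recall definitions follow the abstract.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_feat, database, top_k=2):
    # database: list of (image_id, feature_vector); smaller distance = more similar
    ranked = sorted(database, key=lambda item: euclidean(query_feat, item[1]))
    return [image_id for image_id, _ in ranked[:top_k]]

def precision_recall(retrieved, relevant):
    # precision: fraction of retrieved images that are relevant
    # recall: fraction of relevant images that were retrieved
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)
```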
International Conference on Recent Trends in Information Technology | 2011
S. Mary Praveena; Ila Vennila
Image fusion integrates the information of two or more source images to obtain a more accurate, comprehensive, and reliable description of the same scene; it is one of the most important fields in image analysis and computer vision. With the development of multi-sensor systems, it is possible to obtain data from different sensors, and a new, improved image can be obtained by taking all of the images into account. A new image fusion scheme based on the Mallat transform is proposed in this paper. Since medical images have several objects and curved structures, the curvelet transform is expected to perform better in their fusion. The visual effect of the fused image and the experimental data show that the proposed scheme is a feasible and effective method for image fusion.
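Transform-domain fusion generally means: decompose both sources, combine coefficients band by band, then reconstruct. The following toy 1-D Haar version illustrates that pattern only; the paper uses the Mallat/curvelet machinery, and the fusion rules here (average the coarse band, keep the strongest detail coefficient) are common choices assumed for illustration.

```python
def haar_decompose(s):
    # one level of the Haar wavelet transform (even-length signal)
    approx = [(s[2 * i] + s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
    detail = [(s[2 * i] - s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(s1, s2):
    a1, d1 = haar_decompose(s1)
    a2, d2 = haar_decompose(s2)
    a = [(x + y) / 2 for x, y in zip(a1, a2)]                   # average the coarse bands
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]  # keep the strongest detail
    return haar_reconstruct(a, d)
```

The max-abs detail rule preserves an edge present in only one source, while the averaged approximation blends the overall brightness of both.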
Wireless Personal Communications | 2016
S. Mary Praveena; Ila Vennila
In real-world applications, wireless networks are an integral part of day-to-day life for many people, with businesses and home users relying on them for connectivity and communication. This paper examines the problems relating to wireless security and the background literature. Biometric systems often face limitations because of sensitivity to noise, intra-class variability, data quality, and other factors, and improving the performance of individual matchers in such situations may not be effective. Multi-biometric systems overcome this problem by providing multiple pieces of evidence of the same identity. The proposed system provides an effective fusion scheme that combines information presented by multiple domain experts based on rank-level fusion, thereby increasing the efficiency of the system beyond what a unimodal biometric system can achieve. The proposed multimodal biometric system has a number of unique qualities: principal component analysis and Fisher's linear discriminant are used for the individual matchers, and a novel rank-level fusion method consolidates the results obtained from the different biometric matchers. The ranks of the individual matchers are combined using the highest rank, Borda count, and logistic regression methods. From the results it can be concluded that the overall performance of the wireless-security-based multi-biometric system is improved even in the presence of poor-quality data.
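Of the three rank-combination schemes named above, the Borda count is the simplest to show. This is the textbook Borda rule, not the paper's specific variant; the matcher names are invented for the example.

```python
def borda_fusion(rankings):
    """Rank-level fusion by Borda count.

    rankings: one ranked list of candidate identities per biometric matcher,
    best match first. A candidate at position `pos` in a list of length `n`
    earns (n - pos) points; the fused ranking sorts by total points.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, identity in enumerate(ranking):
            scores[identity] = scores.get(identity, 0) + (n - pos)
    return sorted(scores, key=lambda ident: scores[ident], reverse=True)
```

A candidate ranked first by two of three matchers accumulates the most points and tops the fused list even if one matcher disagrees.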
World Congress on Information and Communication Technologies | 2011
T. Balakumaran; Ila Vennila
Mammography is the most widely used diagnostic technique for breast cancer. Microcalcification clusters are an early sign of breast cancer, and their early detection is key to increasing the survival rate of women. Microcalcification clusters appear in mammograms as small localized granular points, which are difficult for radiologists to identify because of their tiny size. An efficient way to improve diagnostic accuracy in digitized mammograms is a Computer-Aided Diagnosis (CAD) system. This paper presents a multiresolution-based foveal algorithm for microcalcification detection in mammograms. Detection is achieved by decomposing the mammogram into different sub-bands with a wavelet transform without the sampling operator, suppressing the coarsest approximation sub-band, and finally reconstructing the mammogram from the sub-bands containing only significant detail information. The significant details are obtained using foveal concepts. Experimental results show that the proposed method detects microcalcification clusters better than other wavelet decomposition methods.
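The suppress-the-approximation idea can be shown schematically in 1-D: subtract a coarse version of the signal so only fine detail survives. A moving average stands in here for the undecimated wavelet approximation band, purely for illustration.

```python
def detail_band(signal, width=3):
    # subtract a moving-average "approximation band" so that only the
    # high-frequency detail (e.g. a tiny bright spot) remains
    half = width // 2
    out = []
    for i, v in enumerate(signal):
        window = signal[max(0, i - half): i + half + 1]
        out.append(v - sum(window) / len(window))
    return out
```

On a flat background the detail band is zero, while a small bright spot (the 1-D analogue of a microcalcification) stands out as the largest residual.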
Wireless Personal Communications | 2017
S. Mary Praveena; Ila Vennila; R. Vaishnavi
The steady increase in the demand for broadband services and the consequent increase in the volume of traffic generated in our communication networks have motivated the need to implement next-generation networks in our territories. Optical fibre cable (OFC) is used as the medium for designing long- and short-distance networks, and it supports high bandwidth at gigabit-per-second speeds. Earlier, OFC was used to connect long-distance sites in what is called the optical transport network; it is now used even in the local/access network, called the optical access network. In the present environment, the volume of data to be transmitted is very high because of the growth of the Internet, and successfully carrying such bandwidth is a challenging job for long-distance network designers. All customers require quality of service (QoS) and are interested in making service-level agreements (SLAs) for the service obtained from their provider; an ISP should design its network to support these customer requirements, or it cannot survive in this competitive environment. This paper explains the design and planning of a passive optical network (PON) based fibre-to-the-home architecture. The main idea is to build a fabricated environment that allows an in-depth analysis of FTTx networks and a decision on the most preferable option for this environment. Finally, simulation software that meets the design requirements is chosen, the passive optical network is designed, and the results justify that the network is viable and can be implemented in real time.
Wireless Personal Communications | 2017
S. Mary Praveena; Ila Vennila; R. Vaishnavi
At present, optical fibre cable (OFC) is used as the medium for designing long- and short-distance networks, and it supports high bandwidth at gigabit-per-second speeds. Earlier, OFC was used to connect long-distance sites in what is called the optical transport network; it is now used even in the local/access network, called the optical access network. In the present environment, the volume of data to be transmitted is very high because of the growth of the Internet, and successfully carrying such bandwidth is a challenging job for long-distance network designers. This paper explains the design and planning of a passive optical network (PON) based wireless fibre-to-the-home architecture. The purpose is to show the behaviour of optical fibre links when the signal passes through all the elements, such as the optical fibre, splitters, and multiplexers, with the goal of achieving good signal quality at all receivers, and ultimately to evaluate the performance of the whole system. The steady increase in the demand for broadband services and the consequent increase in the volume of traffic generated in our communication networks have motivated the need to implement next-generation access networks in our territories. To develop multimedia telecommunication networks as an infrastructure, it is necessary to install a highly reliable optical fibre cable network architecture such as a PON-based fibre-to-the-home network. With its point-to-multipoint architecture, a PON is an appropriate way to provide high bandwidth to many customers. For any Internet service provider, the biggest challenge is to design an efficient access network such as a PON that ensures QoS, and to maintain that network, which includes identifying and rectifying cable faults within a short time, thus preserving the efficiency and reputation of the ISP.
This work therefore deals with the design and analysis of a PON access network, together with its maintenance, including the identification and rectification of optical fibre cable faults in the PON-based fibre-to-the-home network. The quality of this fibre infrastructure is monitored by an Optical Time Domain Reflectometer (OTDR) to observe losses and fibre breaks, and its efficiency is improved by a centralized fault detection system. The hardware implementation of the PON-based wireless fibre-to-the-home access network and the testing instruments for fault identification are also considered.
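A central calculation in PON link design is the optical power budget: the received power must exceed the receiver sensitivity after fibre, splitter, and connector losses, plus a design margin. The sketch below uses typical illustrative values (0.35 dB/km fibre loss, ~1 dB splitter excess loss, 3 dB margin); these figures are assumptions, not values from the papers above.

```python
import math

def pon_power_budget(tx_power_dbm, rx_sensitivity_dbm, fibre_km,
                     split_ratio=32, fibre_loss_db_per_km=0.35,
                     connector_loss_db=1.5, design_margin_db=3.0):
    # splitter loss: ideal 10*log10(N) split plus ~1 dB excess (assumed figure)
    splitter_loss_db = 10 * math.log10(split_ratio) + 1.0
    total_loss_db = (fibre_km * fibre_loss_db_per_km
                     + splitter_loss_db + connector_loss_db)
    received_dbm = tx_power_dbm - total_loss_db
    link_ok = received_dbm - design_margin_db >= rx_sensitivity_dbm
    return received_dbm, link_ok
```

For example, a 3 dBm transmitter over 20 km with a 1:32 split reaches the receiver at roughly −21.6 dBm, comfortably above a −28 dBm sensitivity.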
Archive | 2016
Judith Justin; Ila Vennila
This research focuses on the analysis of speech produced by a prosthetic device implanted in laryngectomees and the comparison of its paralinguistic features with those of a normal voice. Acoustic analysis was done using the fundamental frequency, jitter, shimmer, and intensity. The study included eight males, and the results indicated that alaryngeal speech has values close to the natural voice in features such as jitter, shimmer, pitch, and intensity. The harmonics-to-noise ratio of the alaryngeal voice was slightly lower than that of the normal voice, while the formants and bandwidths were higher. The study implies that although the alaryngeal voice is a pseudo-voice produced by a speech aid (Blom-Singer), it can produce a voice as close to the natural voice as possible, and observations indicate that the pronunciations produced are as natural as the voice produced by the vocal cords.
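Jitter and shimmer are both cycle-to-cycle perturbation measures: jitter on successive pitch periods, shimmer on successive peak amplitudes. A minimal sketch of the common "local" definition, assuming period and amplitude sequences have already been extracted from the recording:

```python
def relative_perturbation(values):
    # mean absolute difference of consecutive values, as a percentage of the
    # mean value; applied to pitch periods this is (local) jitter, applied to
    # cycle peak amplitudes it is shimmer
    diffs = [abs(values[i] - values[i - 1]) for i in range(1, len(values))]
    return 100 * (sum(diffs) / len(diffs)) / (sum(values) / len(values))
```

A perfectly periodic voice gives 0%; higher values indicate the irregularity that distinguishes pathological or prosthetic voices from natural ones.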
Collaboration
Avinashilingam Institute for Home Science and Higher Education for Women