
Publications


Featured research published by Mukul Shirvaikar.


Neural Networks | 1989

A neural network approach to character recognition

A. Rajavelu; Mohamad T. Musavi; Mukul Shirvaikar

An application of neural networks in optical character recognition (OCR) is presented. The concept of learning in neural networks is utilized to a large extent in developing an OCR system to recognize characters of various fonts and sizes, as well as handwritten characters. Parallel computational capability helps reduce recognition time, which is crucial in a commercial context. The sensitivity of the network is such that small variations in the input do not affect the output, and this results in an improvement in the recognition rate of characters with slight variations in structure, linearity, and orientation.
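As an illustration of the learning-based approach described above (the 1989 paper's actual network architecture and training data are not reproduced here), the sketch below trains a small one-hidden-layer network with backpropagation on synthetic character bitmaps; the 8x8 input size, 26-class output, and noisy prototype data are assumptions made purely for the example.

```python
# Minimal sketch of the idea, not the authors' 1989 system: a one-hidden-layer
# network trained with plain batch gradient descent to classify small bitmaps.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 64, 32, 26                  # 8x8 bitmap -> 26 letter classes
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

def forward(X):
    h = np.tanh(X @ W1)                          # hidden activations
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)   # softmax class probabilities

# Toy training data: noisy copies of random prototypes standing in for scans.
protos = (rng.random((n_out, n_in)) > 0.5).astype(float)
X = np.vstack([protos + rng.normal(0, 0.2, protos.shape) for _ in range(20)])
y = np.tile(np.arange(n_out), 20)

lr = 0.5
for _ in range(300):
    h, p = forward(X)
    g = p.copy()
    g[np.arange(len(y)), y] -= 1.0               # softmax cross-entropy gradient
    g /= len(y)
    grad_W2 = h.T @ g
    grad_h = (g @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

_, p = forward(X)
print("training accuracy:", (p.argmax(axis=1) == y).mean())
```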


Southeastern Symposium on System Theory | 2004

An optimal measure for camera focus and exposure

Mukul Shirvaikar

Consistent image quality is one of the most important requirements for a camera system. This applies to application systems in industrial inspection, consumer photography and microscopy. The quality of an image can be measured in terms of two components: sharpness and contrast. These can be directly translated to the camera system control variables: focus and exposure. A number of measures have been developed to adjust the focus and exposure independently. In this paper, an optimal statistical measure of image quality is developed and tested. This measure allows the simultaneous optimization of both the focus and exposure settings during system calibration or operation. The performance of this measure is demonstrated using a series of test patterns and compared to other popular measures.
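The paper's optimal statistical measure itself is not given in this abstract, so the sketch below only illustrates the general idea of a single score that responds to both sharpness and exposure: a gradient-based focus term multiplied by a histogram-entropy exposure term, both common heuristics rather than the authors' measure.

```python
# Illustrative joint focus/exposure score; not the paper's optimal measure.
import numpy as np

def quality(img):
    """img: 2-D array of gray levels in [0, 255]."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    sharpness = np.mean(gx ** 2 + gy ** 2)        # rises as focus improves
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / img.size
    exposure = -np.sum(p * np.log2(p))            # entropy drops when clipped or dark
    return sharpness * exposure                   # single score to maximize

# Usage sketch: sweep candidate (focus, exposure) settings during calibration and
# keep the setting whose captured frame maximizes quality(); capture() is a
# hypothetical camera-interface function, not part of the paper.
# best = max(settings, key=lambda s: quality(capture(s)))
```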


Journal of Real-Time Image Processing | 2006

Trends in automated visual inspection

Mukul Shirvaikar

Automated visual inspection (AVI) or automated optical inspection (AOI) systems offer significant advantages over human inspection from a fatigue, throughput, speed and accuracy point of view. This area has matured considerably over the last decade or so, due to advances in enabling technologies like sensors, processor hardware, software methodologies, and networking, to name a few. These advances in cost, performance and design methodologies have resulted in an explosion of application areas, where AVI and AOI systems have become an integral component of quality control schemes in product inspection and certification. AVI systems represent a quantitative feedback node to identify and eliminate problems at different stages in the production process.

The major application areas for AVI include but are not limited to: (a) packaging: medical containers (pill strips, bottles), food cans (hole detection, shape defects, finish), chips (pad contacts), etc.; (b) electronics: semiconductor wafers, dies, chips, PCB inspection, etc.; (c) web and surface inspection: metal, textiles, paper, etc. AVI systems can be further classified based on whether they detect gross defects using relatively global features or fine defects based on fine local features. Print inspection and die inspection are examples of the latter and typically require greater algorithmic support. Commercial systems can also be classified based on the level of integration. Custom systems are complete solutions for a specific industrial application, available for high-volume industries like semiconductors. On the other hand, the AVI industry has always had a large number of system integrators and consultants that design systems based on off-the-shelf vision processors, sensors, software and varying degrees of proprietary contributions.

Commercial AVI systems have always had to operate under "hard" real-time conditions and harsh plant environments, and have become mature and reliable over the years. Increasingly, industry trends have led to further de facto requirements: (a) systems have to be networked and accessible remotely, (b) systems have to be integrated with manufacturing software or processes, and (c) systems have to interface with databases. While these trends raise the bar for any developer, they have made defect classification and statistical analysis a real possibility. This presents an opportunity for AVI system developers to "add value" to the manufacturing process.


International Journal of Reconfigurable Computing | 2012

Cellular automata-based parallel random number generators using FPGAs

David H. K. Hoe; Jonathan M. Comer; Juan C. Cerda; Chris D. Martinez; Mukul Shirvaikar

Cellular computing represents a new paradigm for implementing high-speed massively parallel machines. Cellular automata (CA), which consist of an array of locally connected processing elements, are a basic form of a cellular-based architecture. The use of field programmable gate arrays (FPGAs) for implementing CA accelerators has shown promising results. This paper investigates the design of CA-based pseudo-random number generators (PRNGs) using an FPGA platform. To improve the quality of the random numbers that are generated, the basic CA structure is enhanced in two ways. First, the addition of a super-rule to each CA cell is considered. The resulting self-programmable CA (SPCA) uses the super-rule to determine when to make a dynamic rule change in each CA cell. The super-rule takes its inputs from neighboring cells and can itself be considered a second CA working in parallel with the main CA. When implemented on an FPGA, the use of lookup tables in each logic cell removes any restrictions on how the super-rules should be defined. Second, a hybrid configuration is formed by combining a CA with a linear feedback shift register (LFSR). This is advantageous for FPGA designs due to the compactness of LFSR implementations. Diehard, a standard software package for statistically evaluating the quality of random number sequences, is used to validate the results. Both the SPCA and the hybrid CA/LFSR were found to pass all the Diehard tests.
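A minimal software sketch of the two building blocks named above, not the FPGA design itself: a one-dimensional CA register (rule 30, chosen only for illustration) running alongside a 16-bit Fibonacci LFSR, with the two output bits XOR-ed to form a hybrid stream; the cell count, rule, and tap positions are assumptions.

```python
# Software model of a CA-based bit generator combined with an LFSR.
import numpy as np

N = 64
ca = np.zeros(N, dtype=np.uint8)
ca[N // 2] = 1                                   # single-seed CA state

def ca_step(state, rule=30):
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right     # 3-bit neighborhood code per cell
    return ((rule >> idx) & 1).astype(np.uint8)  # next state looked up in the rule table

lfsr = 0xACE1                                    # 16-bit Fibonacci LFSR seed
def lfsr_step(s):
    bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1   # taps 16, 14, 13, 11
    return ((s >> 1) | (bit << 15)) & 0xFFFF

bits = []
for _ in range(128):
    ca = ca_step(ca)
    lfsr = lfsr_step(lfsr)
    bits.append(int(ca[N // 2]) ^ (lfsr & 1))    # hybrid output: CA bit XOR LFSR bit
print("".join(map(str, bits)))
```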


Proceedings of SPIE | 2009

A Comparison between DSP and FPGA Platforms for Real-Time Imaging Applications

Mukul Shirvaikar; Tariq Bushnaq

Real-time applications impose serious demands on hardware size, time deadlines, power dissipation, and cost of the solution. A typical system may also require modification of parameters during operation. Digital Signal Processors (DSPs) are a special class of microprocessors designed specifically to address real-time implementation issues. As the complexity of real-time systems increases, the need for more efficient hardware platforms grows. In recent years, Field Programmable Gate Arrays (FPGAs) have gained a lot of traction in the real-time community as a replacement for traditional DSP solutions. FPGAs are indeed revolutionizing image and signal processing due to their advanced capabilities such as reconfigurability. The Discrete Wavelet Transform is a classic real-time imaging algorithm that has drawn the attention of engineers in recent years. In this paper, we compare an FPGA implementation of the 2-D lifting-based wavelet transform using optimized hand-written VHDL code with a DSP implementation of the same algorithm using the C language. The goal of this paper is to compare the development effort and the performance of a traditional DSP processor with an FPGA-based implementation of a real-time imaging application. The results of the experiment prove the superiority of FPGAs over traditional DSP processors in terms of execution time, power dissipation, and hardware utilization; nevertheless, this advantage comes at the cost of a higher development effort. The hardware platform used is an Altera DE2 board with a 50 MHz Cyclone II FPGA chip and a TI TMS320C6416 DSP Starter Kit (DSK).
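For readers unfamiliar with the algorithm being benchmarked, the sketch below shows one level of a separable 2-D lifting-based 5/3 wavelet transform in plain Python; it mirrors the predict/update structure that the VHDL and C implementations realize, but it is not the authors' code and uses periodic boundary handling for brevity.

```python
# One level of the LeGall 5/3 lifting wavelet, applied separably in 2-D.
import numpy as np

def lift_53_1d(x):
    """Lifting transform of an even-length 1-D signal."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict: detail coefficients = odd samples minus average of even neighbors
    # (periodic boundary handling via roll, a simplification for brevity).
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update: approximation coefficients = even samples plus scaled details.
    a = even + 0.25 * (d + np.roll(d, 1))
    return a, d

def dwt2_one_level(img):
    """Separable 2-D transform: rows first, then columns of each half."""
    rows = [lift_53_1d(r) for r in img]
    lo = np.array([a for a, _ in rows])
    hi = np.array([d for _, d in rows])
    def transform_cols(m):
        a, d = zip(*(lift_53_1d(c) for c in m.T))
        return np.array(a).T, np.array(d).T
    LL, LH = transform_cols(lo)
    HL, HH = transform_cols(hi)
    return LL, LH, HL, HH                        # the four standard sub-bands

img = np.random.default_rng(1).integers(0, 256, (64, 64))
LL, LH, HL, HH = dwt2_one_level(img)
print(LL.shape, LH.shape, HL.shape, HH.shape)    # (32, 32) each
```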


Bone | 2013

Biomechanical properties and microarchitecture parameters of trabecular bone are correlated with stochastic measures of 2D projection images.

Xuanliang N. Dong; Mukul Shirvaikar; Xiaodu Wang

It is well known that loss of bone mass, quantified by areal bone mineral density (aBMD) using DXA, is associated with an increased risk of bone fractures. However, bone mineral density alone cannot fully explain changes in fracture risk. In addition to bone mass, bone architecture has been identified as another key contributor to fracture risk. In this study, we used a novel stochastic approach to assess the distribution of aBMD from 2D projection images of micro-CT scans of trabecular bone specimens at a resolution comparable to DXA images. Sill variance, a stochastic measure of the distribution of aBMD, had significant relationships with microarchitecture parameters of trabecular bone, including bone volume fraction, bone surface-to-volume ratio, trabecular thickness, trabecular number, trabecular separation and anisotropy. Accordingly, it showed significantly positive correlations with the strength and elastic modulus of trabecular bone. Moreover, a combination of aBMD and sill variance derived from the 2D projection images (R² = 0.85) predicted bone strength better than aBMD alone (R² = 0.63). Thus, it would be promising to extend the stochastic approach to routine DXA scans to assess the distribution of aBMD, offering a more clinically significant technique for predicting the risk of bone fragility fractures.


Southeastern Symposium on System Theory | 2010

Optimization of computer vision algorithms for real time platforms

Pramod Poudel; Mukul Shirvaikar

Real-time computer vision applications like video streaming on cell phones, remote surveillance and virtual reality have stringent performance requirements but can be severely constrained by limited resources. The use of optimized algorithms is vital to meet real-time requirements, especially on popular mobile platforms. This paper presents work on performance optimization of common computer vision algorithms, such as correlation, on such embedded systems. The correlation algorithm, which is popular for face recognition, can be implemented using convolution or the Discrete Fourier Transform (DFT). The algorithms are benchmarked on the Intel Pentium processor and the Beagleboard, which is a new low-cost, low-power platform based on the Texas Instruments (TI) OMAP 3530 processor architecture. The OMAP processor has an asymmetric dual-core architecture, including an ARM and a DSP supported by shared memory. OpenCV, a computer vision library developed by Intel Corporation, was utilized for some of the algorithms. Comparative results for the various approaches are presented and discussed with an emphasis on real-time implementation.
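The algorithmic choice mentioned above, correlation computed directly versus via the DFT, can be sketched as follows; the image and template sizes are arbitrary, and on embedded targets the FFT route generally wins once the template is larger than a few pixels.

```python
# 2-D correlation of an image with a template: direct sliding window vs. DFT.
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)
image = rng.random((128, 128))
templ = rng.random((16, 16))

def correlate_direct(img, t):
    H, W = img.shape
    h, w = t.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):                # examine every valid position
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * t)
    return out

def correlate_fft(img, t):
    H, W = img.shape
    h, w = t.shape
    # Correlation = convolution with the flipped template (convolution theorem).
    spec = fft2(img) * fft2(t[::-1, ::-1], s=img.shape)
    full = np.real(ifft2(spec))
    return full[h-1:H, w-1:W]                    # keep fully-overlapping positions only

assert np.allclose(correlate_direct(image, templ), correlate_fft(image, templ))
```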


Journal of Biomechanics | 2015

Random field assessment of inhomogeneous bone mineral density from DXA scans can enhance the differentiation between postmenopausal women with and without hip fractures

Xuanliang Neil Dong; Rajeshwar Pinninti; Timothy Lowe; Patricia Cussen; Joyce E. Ballard; David Di Paolo; Mukul Shirvaikar

Bone mineral density (BMD) measurements from Dual-energy X-ray Absorptiometry (DXA) alone cannot account for all factors associated with the risk of hip fractures. For example, the inhomogeneity of bone mineral density in the hip region also contributes to bone strength. In the stochastic assessment of bone inhomogeneity, the BMD map in the hip region is treated as a random field, and stochastic predictors can be calculated by fitting a theoretical model to the experimental variogram of the BMD map. The objective of this study was to compare the ability of bone mineral density and the stochastic assessment of the inhomogeneous distribution of bone mineral density in predicting hip fractures for postmenopausal women. DXA scans in the hip region were obtained from postmenopausal women with hip fractures (N=47, age: 71.3±11.4 years) and without hip fractures (N=45, age: 66.7±11.4 years). Comparison of BMD measurements and stochastic predictors in assessing bone fragility was based on the area under the receiver operating characteristic curve (AUC) from logistic regression analyses. Although stochastic predictors offered higher accuracy (AUC=0.675) in predicting the risk of hip fractures than BMD measurements (AUC=0.625), this difference was not statistically significant (p=0.548). Nevertheless, the combination of stochastic predictors and BMD measurements had significantly (p=0.039) higher prediction accuracy (AUC=0.748) than BMD measurements alone. This study demonstrates that stochastic assessment of bone mineral distribution from DXA scans can serve as a valuable tool, in addition to BMD measurements, for enhancing the prediction of hip fractures in postmenopausal women.
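A hedged sketch of the variogram-fitting step described above: an exponential model gamma(h) = sill * (1 - exp(-h/range)) is fitted to an empirical variogram, and the fitted sill and range then serve as stochastic predictors; the exponential form and the synthetic data are assumptions for illustration, not necessarily the model used in the study.

```python
# Fit a theoretical variogram model to empirical semivariances and read off
# the sill and range as candidate stochastic predictors.
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, sill, rng_):
    return sill * (1.0 - np.exp(-h / rng_))

# Synthetic empirical variogram standing in for one computed from a DXA BMD map.
lags = np.arange(1, 21, dtype=float)
true = exp_variogram(lags, sill=0.04, rng_=6.0)
gamma_hat = true + np.random.default_rng(3).normal(0, 0.002, lags.size)

(sill_fit, range_fit), _ = curve_fit(exp_variogram, lags, gamma_hat, p0=(0.05, 5.0))
print(f"fitted sill={sill_fit:.4f}, range={range_fit:.2f}")
# sill_fit and range_fit (plus aBMD) would then feed a logistic regression whose
# AUC is compared across predictor sets, as described above.
```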


Southeastern Symposium on System Theory | 2004

Automatic detection and interpretation of road signs

Mukul Shirvaikar

Automatic sign interpretation on highways and roads is a real-time imaging application with utility in autonomous vehicle operation, intelligent highway systems and sign inventory systems for transportation departments. We propose a step-wise multistage sign recognition and interpretation strategy. The approach relies on independent examination of spectral and spatial features. The spectral processing step utilizes color cues to extract candidate target pixels in the image. In the next stage, spatial features extracted from the image are matched against attributes derived from object models. Relational feature analysis can further refine the results after the spatial analysis step. Color images of a variety of signs including speed limit, yield, stop and route number signs formed the training set. The accuracy of the method is measured for different types of signs and the results are discussed.
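The first (spectral) stage can be sketched as follows, assuming OpenCV is available: red hue thresholds in HSV select candidate sign pixels, and bounding boxes of the resulting blobs would be handed to the spatial and relational stages; the exact thresholds and the image path are illustrative, not taken from the paper.

```python
# Spectral stage only: color cues pick out candidate red-sign pixels.
import cv2
import numpy as np

def red_sign_candidates(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so two ranges are OR-ed together.
    mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep reasonably large blobs as candidates for the spatial-feature stage.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]

frame = cv2.imread("road_scene.jpg")             # hypothetical test image path
if frame is not None:
    print(red_sign_candidates(frame))
```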


Electronic Imaging | 2015

Fast semivariogram computation using FPGA architectures

Yamuna Lagadapati; Mukul Shirvaikar; Xuanliang N. Dong

The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. The semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates, measured in a few hundreds of megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. Anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex-5 FPGA. Medical image data from MRI scans are utilized for the experiments. Computational speedup is measured with respect to a MATLAB implementation on a personal computer with an Intel i7 multi-core processor. Preliminary simulation results indicate that a significant advantage in speed can be attained by the architectures, making the algorithm viable for implementation in medical devices.
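As a software reference for the quantity the hardware computes, the sketch below evaluates the isotropic semivariogram of a small window in the naive O(n²) pairwise form, making the γ(h) definition explicit; the 32x32 window and synthetic data are assumptions, and the FPGA architecture itself is of course not represented.

```python
# Naive isotropic semivariogram: gamma(h) = mean squared difference / 2,
# accumulated over all pixel pairs binned by integer lag distance h.
import numpy as np

def semivariogram(window, max_lag):
    ys, xs = np.indices(window.shape)
    coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    vals = window.ravel().astype(float)
    sums = np.zeros(max_lag + 1)
    counts = np.zeros(max_lag + 1)
    for i in range(len(vals)):                   # examine every pixel pair
        d = np.rint(np.hypot(*(coords[i] - coords).T)).astype(int)
        sq = (vals[i] - vals) ** 2
        keep = (d >= 1) & (d <= max_lag)
        np.add.at(sums, d[keep], sq[keep])
        np.add.at(counts, d[keep], 1)
    with np.errstate(invalid="ignore"):
        return sums / (2.0 * counts)             # gamma(h) for h = 0..max_lag

window = np.random.default_rng(7).integers(0, 4096, (32, 32))
gamma = semivariogram(window, max_lag=10)
print(np.round(gamma[1:], 1))
```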

Collaboration


Dive into Mukul Shirvaikar's collaborations.

Top Co-Authors

Ron J. Pieper, University of Texas at Tyler
Xuanliang Neil Dong, University of Texas at Tyler
David Di Paolo, University of Texas at Tyler
David M. Beams, University of Texas at Austin
Nikhil Satyala, University of Texas at Austin
Ning Huang, University of Texas at Tyler
Pramod Poudel, University of Texas at Tyler
Premananda Indic, University of Texas at Tyler