Jae Cha
Virginia Tech
Publication
Featured research published by Jae Cha.
Proceedings of SPIE | 2014
Francois Lalonde; Nitin Gogtay; Jay N. Giedd; Nadarajen Vydelingum; David G. Brown; Binh Q. Tran; Charles Hsu; Ming-Kai Hsu; Jae Cha; Jeffrey Jenkins; Lien Ma; Jefferson Willey; Jerry Wu; Kenneth Oh; Joseph Landa; Chingfu Lin; Tzyy-Ping Jung; Scott Makeig; Carlo Francesco Morabito; Qyu Moon; Takeshi Yamakawa; Soo-Young Lee; Jong Hwan Lee; Harold H. Szu; Balvinder Kaur; Kenneth Byrd; Karen Dang; Alan T. Krzywicki; Babajide O. Familoni; Louis Larson
Since the Brain Order Disorder (BOD) group reported on a high-density electroencephalogram (EEG) system that captures neuronal information and wirelessly interfaces with a smartphone [1,2], a larger BOD group has been assembled, including the Obama BRAIN program, the CUA Brain Computer Interface Lab, and the UCSD Swartz Computational Neuroscience Center. We can implement the pair-electrode correlation functions to operate in a real-time daily environment, with computational complexity O(N³) for N = 10²–10³ electrodes, known as functional EEG (f-EEG). Daily monitoring requires two areas of focus. Area #1: quantify the neuronal information flow under arbitrary daily stimulus-response sources. Approach to #1: (i) We have asserted that the sources contained in the EEG signals may be discovered by an unsupervised learning neural network called blind source separation (BSS) of independent entropy components, based on irreversible Boltzmann cellular thermodynamics (ΔS < 0), where entropy is a degree of uniformity. What is entropy? Loosely speaking, sand on a beach is more uniform, at a higher entropy value, than the rocks composing a mountain; the internal binding energy tells paleontologists of the existence of information. To a politician, a landslide voting result carries only the winning information but more entropy, while a non-uniform voting distribution carries more information. For the human's effortless brain at constant temperature, we can solve for the minimum of the Helmholtz free energy (H = E − TS) by computing the BSS, and then the pairwise-entropy source correlation function. (ii) Although the entropy itself is not the information per se, the concurrence of the entropy sources is the information flow, the functional EEG sketched in this second BOD report. Area #2: apply EEG biofeedback to improve collective decision making (TBD). Approach to #2: We introduce a novel performance quality metric, the throughput rate of faster (Δt) and more accurate (ΔA) decision making, which applies to individual as well as team brain dynamics. Following Nobel Laureate Daniel Kahneman's book "Thinking, Fast and Slow," brainwave biofeedback lets us first identify an individual's "anchored cognitive bias sources," in order to remove the biases by individually tailored pre-processing; training effectiveness can then be maximized by the collective product Δt * ΔA. For Area #1, we compute a spatiotemporally windowed in vivo EEG average using adaptive time-window sampling, where the sampling rate depends on the type of neuronal response we seek. The averaged traditional EEG measurements are further improved by BSS decomposition into a finer stimulus-response source mixing matrix [A] with finer and faster spatial grids and rapid temporal updates. Then the functional EEG is the second-order covariance matrix, defined as the electrode-pair fluctuation correlation function C(s̃, s̃′) of the independent thermodynamic source components. (1) We define a 1-D space-filling curve as a spiral curve without origin, historically known as the Peano-Hilbert arc length a. Taking the most significant bits of the Cartesian product a ≡ O(x * y * z) represents the arc length numerically, with values that map 3-D neighborhood proximity into a 1-D neighborhood arc-length representation.
(2) The 1-D Fourier coefficient spectrum has no spurious high-frequency content, which typically arises from lexicographical (zig-zag scanning) discontinuities [Hsu & Szu, "Peano-Hilbert curve," SPIE 2014]. A simple Fourier spectrum histogram fits nicely with the Compressive Sensing CRT&D mathematics. (3) A stationary power spectral density is a reasonable approximation of EEG responses in striate layers, whose resonance feedback loops can produce a 100,000-neuron collective Impulse Response Function (IRF). The striate brain-layer architecture represents an ensemble ⟨IRF⟩, e.g., at V1–V4 of Brodmann areas 17–19 of the cortex, i.e., the stationary Wiener-Khinchine-Einstein theorem applies. Goal #1, functional EEG: after taking the 1-D space-filling curve, we compute the ensemble-averaged 1-D power spectral density (PSD) and then use the inverse FFT to generate the f-EEG. (ii) Goal #2, individual wellness baseline (IWB): we need novel change detection, so we derive the ubiquitous fat-tail distributions of healthy-brain PSDs in outdoor environments (signal = 37 °C = 310 K; noise = 27 °C = 300 K; SNR = 310/300; kT ≈ (1/40) eV at 300 K). A departure from the IWB might imply stress, fever, a sports injury, an unexpected fall, or numerous midnight excursions that may signal the onset of dementia in a Home Alone Senior (HAS), detected by telemedicine caregiver networks. Aging global villagers need mental-healthcare devices that are affordable, harmless, administrable (AHA), and user-friendly, situated in an article of clothing such as a baseball cap and able to interface with pervasive smartphones in the daily environment.
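As a concrete illustration of Approach #1, the sketch below runs a standard blind source separation (FastICA, standing in here for the thermodynamic BSS described above) on synthetic multichannel EEG and then forms the pairwise source correlation matrix used as the functional EEG; the data, channel count, and source count are illustrative assumptions, not the paper's setup.

```python
# Sketch of Area #1: blind source separation of multichannel EEG followed by
# a second-order source correlation matrix (the "functional EEG" C(s, s')).
# FastICA stands in for the thermodynamic BSS in the paper; the EEG is synthetic.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples = 64, 5000                 # hypothetical high-density EEG montage
sources = rng.laplace(size=(n_samples, 8))       # 8 unknown independent sources
mixing = rng.normal(size=(n_channels, 8))        # unknown mixing matrix [A]
eeg = sources @ mixing.T + 0.05 * rng.normal(size=(n_samples, n_channels))

# Unsupervised BSS: recover independent source components s(t)
ica = FastICA(n_components=8, random_state=0)
s_hat = ica.fit_transform(eeg)                   # shape (n_samples, 8)
A_hat = ica.mixing_                              # estimated mixing matrix [A]

# Functional EEG: pairwise fluctuation correlation of the recovered sources
C = np.corrcoef(s_hat.T)                         # 8 x 8 correlation matrix
print(C.shape)
```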
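Goal #1's ensemble-averaged PSD followed by an inverse FFT is the discrete Wiener-Khinchine theorem. A minimal sketch on a synthetic arc-length-ordered signal (window length and data are assumptions):

```python
# Wiener-Khinchine sketch for Goal #1: average the 1-D PSD over windows of the
# arc-length-ordered signal, then inverse-FFT the PSD to obtain the stationary
# autocorrelation used here as the f-EEG estimate. The signal is synthetic.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=16384)                 # 1-D signal along the space-filling curve
win = 1024
segments = x[: len(x) // win * win].reshape(-1, win)

psd = np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0) / win
autocorr = np.fft.irfft(psd)               # inverse FFT of the PSD = autocorrelation
autocorr /= autocorr[0]                    # normalize zero-lag to 1
print(autocorr[:5])
```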
Proceedings of SPIE | 2010
Richard L. Espinola; Jae Cha; Kevin R. Leonard
Atmospheric turbulence introduces blur, distortion, and intensity fluctuations that corrupt image quality and can decrease target acquisition performance. The modeling of imaging sensors requires an accurate description of turbulence effects. We present two novel methodologies for the measurement of the turbulence MTF in infrared imagery. First, the structural similarity metric is used to compare pristine and degraded imagery. Second, contrast modulations of radial bar targets are analyzed to extract an equivalent blur. Human perception tests are compared against model predictions. The results show that complex turbulence effects can be measured and modeled with simple MTF blurs.
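A hedged sketch of the first methodology: sweep a Gaussian blur over the pristine frame and take the width that maximizes structural similarity with the turbulence-degraded frame as the equivalent blur. The synthetic frames and the specific blur model below are illustrative, not the paper's exact procedure.

```python
# Estimate an equivalent turbulence blur by matching a Gaussian-blurred pristine
# frame to the degraded frame with the structural similarity (SSIM) metric.
# 'pristine' and 'degraded' are assumed to be 2-D float arrays on [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def equivalent_blur(pristine, degraded, sigmas=np.linspace(0.1, 5.0, 50)):
    """Return the Gaussian sigma whose blur best matches the degraded frame."""
    scores = [
        structural_similarity(gaussian_filter(pristine, s), degraded, data_range=1.0)
        for s in sigmas
    ]
    return sigmas[int(np.argmax(scores))]

# Toy demo with synthetic data: recover a known blur width.
rng = np.random.default_rng(2)
pristine = rng.random((128, 128))
degraded = gaussian_filter(pristine, 2.0) + 0.01 * rng.normal(size=(128, 128))
print(equivalent_blur(pristine, degraded))
```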
Infrared Imaging Systems: Design, Analysis, Modeling, and Testing X | 1999
Eddie L. Jacobs; Ronald G. Driggers; Timothy C. Edwards; Jae Cha
Virtual minimum resolvable temperature difference (MRTD) measurements have been performed on an infrared sensor simulation based on FLIR 92 input parameters. By using this simulation, it is possible to perform virtual laboratory experiments on simulated sensors. As part of the validation of this simulation, a series of MRTD experiments were conducted on simulated and real sensors. This paper describes the methodology for the sensor simulation. The experimental procedures for both real and simulated MRTD are presented followed by a comparison and analysis of the results. The utility of the simulation in assessing the performance of current and notional sensors is discussed.
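For orientation, MRTD prediction ties the sensor's noise-equivalent temperature difference (NETD) to the system MTF; a common textbook simplification is MRTD(f) ≈ k · NETD · f / MTF(f). The sketch below assumes that form with a hypothetical Gaussian MTF; it is not the FLIR 92 formulation used in the paper.

```python
# Simplified MRTD curve: MRTD rises as the system MTF rolls off at high spatial
# frequency. This is a textbook-style proportionality, NOT the full FLIR 92 model,
# which adds eye/temporal integration and sampling terms.
import numpy as np

def mtf_gaussian(f, f0=1.0):
    """Hypothetical system MTF modeled as a Gaussian roll-off."""
    return np.exp(-(f / f0) ** 2)

def mrtd(f, netd=0.05, k=0.7, f0=1.0):
    """Simplified MRTD(f) ~ k * NETD * f / MTF(f), in kelvin."""
    return k * netd * f / mtf_gaussian(f, f0)

freqs = np.linspace(0.05, 2.0, 40)        # spatial frequency, cycles/mrad (illustrative)
print(mrtd(freqs)[:5])
```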
Proceedings of SPIE | 2013
Charles Hsu; Ming Kai Hsu; Jae Cha; Tomo Iwamura; Joseph Landa; Charles C. Nguyen; Harold H. Szu
We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple concept that each pixel is a charge bucket whose charges come from the Einstein photoelectric conversion effect. Applying a manufacturing design principle, we allow each working component to be altered by at most one step. We then simulated what such a camera can do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, with the order of magnitude of savings inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as dual Photon Detector (PD) analog circuitry for change detection, predicting whether to skip a frame or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level: the charge-transport bias voltage either routes the charge toward neighboring buckets or, if not, sends it to ground drainage. Since a snapshot image is not a video, we cannot apply the usual MPEG video compression and Huffman entropy codec, nor the powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing: FFT, thresholding of significant Fourier mode components, and inverse FFT to check the PSNR; and (ii) post-processing: image recovery done selectively by a CRT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, for new-frame selection, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]_{M,N}, with M(t) = K(t) log N(t).
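A minimal compressive-sensing sketch in the spirit of the [Φ] measurement and L1 recovery described above, with a dense Gaussian Φ and iterative soft thresholding (ISTA) standing in for the on-chip charge routing and the CRT&D linear-programming solver; M = K log N sets the number of measurements, and all sizes are illustrative.

```python
# Compressive sensing sketch: y = Phi @ x with M ~ K log N random measurements,
# recovered by ISTA (iterative soft thresholding) for the L1-regularized problem
#   min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1
# Phi is a dense Gaussian matrix, standing in for the on-chip charge routing.
import numpy as np

rng = np.random.default_rng(3)
N, K = 1024, 20                                   # signal length, sparsity (d.o.i.)
M = int(K * np.log(N))                            # M = K log N measurements

x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)

Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ x_true

# ISTA recovery
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant
x = np.zeros(N)
for _ in range(500):
    grad = Phi.T @ (Phi @ x - y)                  # gradient of the quadratic term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))     # relative error
```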
Proceedings of SPIE | 2011
David P. Haefner; Joseph P. Reynolds; Jae Cha; Van A. Hodgkin
Reflective-band sensors are often signal-to-noise limited in low-light conditions. Any additional filtering to obtain spectral information further reduces the signal-to-noise ratio, greatly affecting range performance. Modern sensors, such as the sparse color filter CCD, circumvent this additional degradation by reducing the number of pixels affected by filters and distributing the color information. As color sensors become more prevalent in the warfighter's arsenal, the performance of the sensor-soldier system must be quantified. While field performance testing ultimately validates the success of a sensor, accurately modeling sensor performance greatly reduces development time and cost, allowing the best technology to reach the soldier the fastest. Modeling of sensors requires accounting for how the signal is affected through the modulation transfer function (MTF) and noise of the system. For the modeling of these new sensors, the MTF and noise for each color band must be characterized, and the appropriate sampling and blur must be applied. We show how sparse-array color filter sensors may be modeled and how a soldier's performance with such a sensor may be predicted. This general approach to modeling color sensors can be extended to incorporate all types of low-light color sensors.
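A hedged sketch of the modeling flow: blur each color band with its own MTF (a Gaussian stand-in), add band-dependent noise, and sample through a hypothetical sparse color filter mask. The filter layout, blur widths, and noise levels are assumptions, not the model described in the paper.

```python
# Per-band sensor modeling sketch: blur each color band with its MTF (Gaussian
# stand-in), add band-dependent noise, then keep only the pixels covered by a
# hypothetical sparse color filter mask. Scene and parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
scene = rng.random((256, 256, 3))                  # synthetic RGB radiance map

mtf_sigma = {"R": 1.2, "G": 1.0, "B": 1.5}         # per-band blur (pixels)
noise_std = {"R": 0.03, "G": 0.02, "B": 0.04}      # per-band noise

# Sparse color filter: most pixels panchromatic, a subset carries color filters.
color_mask = rng.random((256, 256)) < 0.25         # ~25% of pixels are filtered

bands = {}
for i, band in enumerate("RGB"):
    blurred = gaussian_filter(scene[..., i], mtf_sigma[band])
    noisy = blurred + noise_std[band] * rng.normal(size=blurred.shape)
    bands[band] = np.where(color_mask, noisy, np.nan)  # color known only at masked pixels

print({b: int(np.count_nonzero(~np.isnan(v))) for b, v in bands.items()})
```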
Information Systems | 2010
Richard L. Espinola; Jae Cha
Mitigation algorithms can improve the target acquisition performance of imaging systems in atmospheric turbulence. We quantify this improvement using perception tests and develop a model that predicts sensor/observer ID performance with software-based turbulence mitigation algorithms.
Proceedings of SPIE | 2009
S. Susan Young; Ronald G. Driggers; Keith Krapels; Richard L. Espinola; Joseph P. Reynolds; Jae Cha
Understanding turbulence effects on wave propagation and imaging systems has been an active research area for more than 50 years. Conventional atmospheric optics methods use statistical models to analyze image degradation effects that are caused by turbulence. In this paper, we seek to understand atmospheric turbulence effects from a deterministic signal processing and imaging-theory point of view and through modeling. The model simulates the imagery formed by a lens by tracing the optical rays from the target through a band of turbulence. We examine the nature of the turbulence-degraded image and identify its characteristics as the parameters of the band of turbulence, e.g., its width, angle, and index of refraction, are varied. Image degradation effects due to turbulence, such as image blurring and image dancing, are revealed by this signal modeling. We show that these phenomena can in fact be related not only to phase errors in the frequency domain of the image but also to a 2-D modulation effect in the image spectrum. Results with simulated and realistic data are provided.
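A minimal frequency-domain illustration of that point: imposing a smooth random phase error on the image spectrum and transforming back produces both blur and image dancing. This is a crude phase-screen stand-in on a synthetic scene, not the paper's ray-trace through a turbulence band.

```python
# Frequency-domain illustration of turbulence effects: impose a smoothly varying
# random phase error on the image spectrum and transform back. Large-scale phase
# tilt appears as image "dancing"; higher-order phase error appears as blur.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
img = rng.random((128, 128))                                # synthetic scene

phase = gaussian_filter(rng.normal(size=img.shape), 8.0)    # smooth random phase screen
phase *= 2.0 / phase.std()                                  # scale to ~2 rad rms

spectrum = np.fft.fft2(img)
degraded = np.real(np.fft.ifft2(spectrum * np.exp(1j * phase)))  # keep real part
print(float(np.abs(img - degraded).mean()))
```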
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Richard L. Espinola; Jae Cha; Bradley L. Preece
The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To mitigate this problem, still and moving imagery can be compressed, often resulting in a greater than 100-fold decrease in required bandwidth. Compression, however, is generally not error-free, and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, and sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.
Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XI | 2000
Eddie L. Jacobs; Jae Cha; Timothy C. Edwards; Charmaine C. Franck
Dynamic measurement of minimum resolvable temperature difference (MRTD) has been shown to avoid the problems of phase optimization and beat-frequency disruption associated with static MRTD testing of undersampled systems. In order to predict field performance, the relationship between static and dynamic MRTD (DMRTD) must be quantified. In this paper, the dynamic MRTD of a sampled system is measured using both laboratory measurements and a simulation. After reviewing the principles of static and dynamic MRTD, the design of a sensor simulator is described. A comparison between real and simulated DMRTD is shown. Measurement procedures are documented for both the static and dynamic MRTD. Conclusions are given regarding the utility of the simulator for performing comparative experiments between static and dynamic MRTD.
Proceedings of SPIE | 2013
Harold H. Szu; Jae Cha; Richard L. Espinola; Keith Krapels
Modeling and Simulation (M&S) has been evolving along two general directions: (i) a data-rich approach suffering from the curse of dimensionality and (ii) an equation-rich approach suffering from computing-power and turnaround-time limits. We suggest a third approach, which we call (iii) compressive M&S (CM&S), because the underlying Minimum Free-Helmholtz Energy (MFE) principle facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. MFE-based CM&S can generalize LCNN to second order as a nonlinear augmented LCNN. For example, during sunset we can avoid the reddish bias of sunlight illumination caused by long-range Rayleigh scattering over the horizon: with CM&S we can use a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with a filter into two vector components (8–10 μm and 10–12 μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then we consistently up-shifted, according to the de-mixed source map, to a sub-micron RGB color image. Moreover, night-vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, which suffers less blur from scattering by dust and smoke and enjoys the apparent smoothness of the surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in spatial Rayleigh resolution, but gains two orders of magnitude in reflectivity and another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, it can reduce unnecessary measurements and their associated cost and computation, in the sense of super-saving CS: measure one and get its neighborhood free.
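A hedged sketch of the two-band demixing step: each pixel's (8–10 μm, 10–12 μm) measurement is unmixed into two non-negative source abundances by non-negative least squares and mapped to a pseudo-RGB rendering. The mixing matrix, synthetic scene, and color mapping are illustrative assumptions; the paper's MFE/LCNN formulation is not reproduced here.

```python
# Two-band LWIR unmixing sketch: each pixel's (8-10 um, 10-12 um) measurement is
# modeled as a non-negative mix of two emissive sources, y = A s, and solved
# per pixel by non-negative least squares. The mixing matrix A, the synthetic
# scene, and the final pseudo-RGB mapping are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
H, W = 64, 64
s_true = rng.random((H * W, 2))                    # two source abundances per pixel
A = np.array([[0.8, 0.3],                          # band responses of the two sources
              [0.4, 0.9]])
y = s_true @ A.T + 0.01 * rng.normal(size=(H * W, 2))   # two-band measurements

s_hat = np.array([nnls(A, yi)[0] for yi in y])     # per-pixel non-negative unmixing

# Map the two demixed sources to a pseudo-RGB rendering (illustrative mapping).
rgb = np.zeros((H, W, 3))
rgb[..., 0] = s_hat[:, 0].reshape(H, W)            # source 1 -> red channel
rgb[..., 2] = s_hat[:, 1].reshape(H, W)            # source 2 -> blue channel
rgb[..., 1] = 0.5 * (rgb[..., 0] + rgb[..., 2])    # green as a blend
print(rgb.shape)
```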