Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mark S. Schmalz is active.

Publications


Featured research published by Mark S. Schmalz.


Journal of Diabetes Science and Technology | 2009

Smart home-based health platform for behavioral monitoring and alteration of diabetes patients.

Abdelsalam Helal; Diane J. Cook; Mark S. Schmalz

Background: Researchers and medical practitioners have long sought the ability to continuously and automatically monitor patients beyond the confines of a doctor's office. We describe a smart home monitoring and analysis platform that facilitates the automatic gathering of rich databases of behavioral information in a manner that is transparent to the patient. Collected information will be automatically or manually analyzed and reported to the caregivers and may be interpreted for behavioral modification in the patient. Method: Our health platform consists of five technology layers. The architecture is designed to be flexible, extensible, and transparent, to support plug-and-play operation of new devices and components, and to provide remote monitoring and programming opportunities. Results: The smart home-based health platform technologies have been tested in two physical smart environments. Data that are collected in these implemented physical layers are processed and analyzed by our activity recognition and chewing classification algorithms. All of these components have yielded accurate analyses for subjects in the smart environment test beds. Conclusions: This work represents an important first step in the field of smart environment-based health monitoring and assistance. The architecture can be used to monitor the activity, diet, and exercise compliance of diabetes patients and evaluate the effects of alternative medicine and behavior regimens. We believe these technologies are essential for providing accessible, low-cost health assistance in an individual's own home and for providing the best possible quality of life for individuals with diabetes.
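As a toy illustration of the plug-and-play, layered design described in the abstract (all class, method, and sensor names here are hypothetical, not the authors' actual platform API), a sensor hub that forwards readings to pluggable analysis components might look like:

```python
# Hypothetical sketch of the plug-and-play idea behind a layered smart-home
# health platform: sensors publish readings to a hub, which forwards them to
# any number of registered analysis components (an observer pattern).
# New device types require no changes to the hub itself.
import time
from typing import Callable

class SensorHub:
    def __init__(self):
        self.analyzers: list[Callable[[str, float, float], None]] = []

    def register_analyzer(self, analyzer: Callable[[str, float, float], None]):
        """Add an analysis component (e.g., activity recognition)."""
        self.analyzers.append(analyzer)

    def publish(self, sensor_id: str, value: float):
        """Called by any sensor; fans the reading out to all analyzers."""
        ts = time.time()
        for analyze in self.analyzers:
            analyze(sensor_id, value, ts)

hub = SensorHub()
readings = []
hub.register_analyzer(lambda sid, v, ts: readings.append((sid, v)))
hub.publish("kitchen_motion", 1.0)   # a motion-sensor event
hub.publish("glucose_meter", 5.6)    # a meter reading
```

The fan-out design is what makes the platform extensible: an activity-recognition or chewing-classification module is just another registered analyzer.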


Neurocomputing | 2009

Autonomous single-pass endmember approximation using lattice auto-associative memories

Gerhard X. Ritter; Gonzalo Urcid; Mark S. Schmalz

We propose a novel method for the autonomous determination of endmembers that employs recent results from the theory of lattice-based auto-associative memories. In contrast to several other existing methods, the endmembers determined by the proposed method are physically linked to the data set spectra. Numerical examples are provided to illustrate the lattice-theoretical concepts, and a hyperspectral image subcube from the Cuprite site in Nevada is used to find all endmember candidates in a single pass.
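As a minimal sketch of the lattice machinery the paper builds on (the standard lattice auto-associative min-memory in the style of Ritter et al., not the paper's endmember algorithm itself), the memory W_XX and its max-plus recall can be written as:

```python
# Lattice auto-associative min-memory: w_ij = min over stored patterns of
# (x_i - x_j), with recall by the max-plus product (W ⊞ x)_i = max_j (w_ij + x_j).
# Stored patterns are recalled perfectly, since w_ii = 0 and
# w_ij + x_j <= x_i for every stored pattern component.
import numpy as np

def build_min_memory(X: np.ndarray) -> np.ndarray:
    """X holds one pattern per column; returns the n x n memory W_XX."""
    diffs = X[:, None, :] - X[None, :, :]   # diffs[i, j, k] = x_i^k - x_j^k
    return diffs.min(axis=2)

def maxplus_recall(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """(W ⊞ x)_i = max_j (w_ij + x_j)."""
    return (W + x[None, :]).max(axis=1)

X = np.array([[0.2, 0.9],
              [0.7, 0.1],
              [0.4, 0.5]])                  # two toy 3-band "spectra"
W = build_min_memory(X)
assert np.allclose(maxplus_recall(W, X[:, 0]), X[:, 0])  # perfect recall
```

Note that no training iterations are needed: the memory is computed in one pass over the patterns, which is the property the single-pass endmember method exploits.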


IEEE International Conference on Fuzzy Systems | 2006

Learning In Lattice Neural Networks that Employ Dendritic Computing

Gerhard X. Ritter; Mark S. Schmalz

Recent discoveries in neuroscience imply that the basic computational elements are the dendrites, which make up more than 50% of a cortical neuron's membrane and are capable of computing simple logic functions. This paper discusses two types of neural networks that take advantage of these discoveries, focusing on learning algorithms for the two networks. Learning is expressed in terms of lattice computations that take place in the dendritic structure as well as in the cell body of the neurons used in this model.
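One well-known geometric reading of single-dendrite computation in these lattice networks is that a dendrite with paired excitatory/inhibitory weights recognizes a hyperbox in the input space. A minimal sketch (the weight values are illustrative, and this is not the paper's learning algorithm):

```python
# Lattice (min/max) form of hyperbox membership, the geometric interpretation
# of a single dendrite with one excitatory and one inhibitory weight per input:
# the dendrite "fires" exactly when the input lies in the box [low, high].
import numpy as np

def dendrite_response(x, low, high) -> float:
    """tau(x) = min_i min(x_i - low_i, high_i - x_i); >= 0 iff x is in the box."""
    x, low, high = map(np.asarray, (x, low, high))
    return float(np.minimum(x - low, high - x).min())

def fires(x, low, high) -> bool:
    return dendrite_response(x, low, high) >= 0.0

assert fires([0.5, 0.5], [0.0, 0.0], [1.0, 1.0])       # inside the unit box
assert not fires([1.5, 0.5], [0.0, 0.0], [1.0, 1.0])   # outside
```

Networks of such dendrites can then carve out arbitrary unions of boxes, which is what gives dendritic models their single-layer decision power.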


Iberoamerican Congress on Pattern Recognition | 2004

A New Auto-associative Memory Based on Lattice Algebra

Gerhard X. Ritter; Laurentiu Iancu; Mark S. Schmalz

This paper presents a novel, three-stage, auto-associative memory based on lattice algebra. The first two stages of this memory consist of correlation matrix memories within the lattice domain. The third and final stage is a two-layer feed-forward network based on dendritic computing. The output nodes of this feed-forward network yield the desired pattern vector association. The computations performed by each stage are all lattice-based and, thus, provide for fast computation and avoidance of convergence problems. Additionally, the proposed model is extremely robust in the presence of noise. Bounds on the allowable noise that guarantee perfect output are also discussed.


International Conference on Conceptual Modeling | 2003

EITH – A Unifying Representation for Database Schema and Application Code in Enterprise Knowledge Extraction

Mark S. Schmalz; Joachim Hammer; Mingxi Wu; Oguzhan Topsakal

The integration of heterogeneous legacy databases requires understanding of database structure and content. We previously developed a theoretical and software infrastructure to support the extraction of schema and business rule information from legacy sources, combining database reverse engineering with semantic analysis of associated application code (DRE/SA). In this paper, we present a compact formalism called EITH that unifies the representation of database schema and application code. EITH can be efficiently derived from various types of schema representations, particularly the relational model, and supports comparison of a wide variety of schema and code constructs to enable interoperation. Unlike UML or E/R diagrams, for example, EITH has compact notation, is unambiguous, and uses a small set of efficient heuristics. We show how EITH is employed in the context of SEEK, using a construction project management example. We also show how EITH can represent various structures in relational databases, and can serve as an efficient representation for E/R diagrams. This suggests that EITH can support efficient matching of more complex, hierarchical structures via indexed tree representations, without compromising the EITH design philosophy or formalism.


Proceedings of SPIE | 2005

Lattice associative memories that are robust in the presence of noise

Gerhard X. Ritter; Gonzalo Urcid-Serrano; Mark S. Schmalz

This paper presents a novel two-layer feedforward neural network that acts as an associative memory for pattern recall. The neurons of this network have dendritic structures and the computations performed by the network are based on lattice algebra. Use of lattice computation avoids multiplicative processes and, thus, provides for fast computation. The synaptic weights of the axonal fibers are preset, making lengthy training unnecessary. The proposed model exhibits perfect recall for perfect input vectors and is extremely robust in the presence of noisy or corrupted input.


OCEANS Conference | 1997

Performance evaluation of data compression transforms for underwater imaging and object recognition

Mark S. Schmalz; G.X. Ritter; Frank M. Caimi

Underwater (UW) imagery presents several challenging problems for automated target recognition (ATR) using compressed imagery, due to the presence of noise, point-spread function effects resulting from camera or media inhomogeneities, as well as loss of contrast and resolution due to in-water scattering and absorption. In practice, sensor noise can severely degrade algorithm performance by producing featural aliasing in the reconstructed (decompressed) imagery. This paper summarizes the latest research in low-distortion, high-rate image compression transforms for ATR applications that require image transmission along low-bandwidth channels such as UW acoustic uplinks. In particular, a novel transform called BLAST has been developed that can achieve compression ratios in the range 50:1<CR<280:1 on UW imagery at visually acceptable quality, via simple arithmetic operations over small local neighborhoods. Comparative analysis of performance among BLAST, pyramid coding (EPIC), and visual pattern image coding (VPIC) includes compression ratio, information loss, and computational efficiency measured over a large database of UW imagery. Information loss is discussed in terms of the modulation transfer function and several image quality measures. Parallel implementation of the BLAST, VPIC and EPIC transforms is discussed in terms of speed advantages and storage costs.
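The evaluation criteria named in the abstract can be illustrated with two standard measurements. This sketch shows generic compression-ratio and PSNR computations on a toy image, not the BLAST transform or the paper's specific quality measures:

```python
# Two standard figures of merit for lossy image compression:
# compression ratio CR = source bits / compressed bits, and
# PSNR (in dB) as one common measure of information loss.
import numpy as np

def compression_ratio(source_bits: int, compressed_bits: int) -> float:
    return source_bits / compressed_bits

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak=255.0) -> float:
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

img = np.tile(np.arange(8, dtype=np.uint8) * 32, (8, 1))     # toy 8x8 ramp image
noisy = np.clip(img.astype(int) + 4, 0, 255).astype(np.uint8)  # mild distortion
cr = compression_ratio(8 * 8 * 8, 64)   # e.g., an 8-bit image coded in 64 bits
print(f"CR = {cr:.0f}:1, PSNR = {psnr(img, noisy):.1f} dB")
```

A transform like BLAST targeting 50:1 < CR < 280:1 trades these two quantities off; PSNR is only one of several quality measures the paper considers alongside the modulation transfer function.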


International Symposium on Signal Processing and Information Technology | 2012

Techniques for mapping synthetic aperture radar processing algorithms to multi-GPU clusters

Eric T. Hayden; Mark S. Schmalz; William Chapman; Sanjay Ranka; Sartaj Sahni; Gunasekara Seetharaman

This paper presents a design for parallel processing of synthetic aperture radar (SAR) data using multiple Graphics Processing Units (GPUs). Our approach supports real-time reconstruction of a two-dimensional image from a matrix of echo pulses and their response values. Key to runtime efficiency is a partitioning scheme that divides the output image into tiles and the input matrix into a collection of pulses associated with each tile. Each image tile and its associated pulse set are distributed to thread blocks across multiple GPUs, which support parallel computation with near-optimal I/O cost. The partial results are subsequently combined by a host CPU. Further efficiency is realized by the GPUs' low-latency thread scheduling, which masks memory access latencies. Performance analysis quantifies runtime as a function of input/output parameters and number of GPUs. Experimental results were generated with 10 nVidia Tesla C2050 GPUs having a maximum throughput of 972 Gflop/s. Our approach scales well for output (reconstructed) image sizes from 2,048 × 2,048 pixels to 8,192 × 8,192 pixels.
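The tile/pulse partitioning idea can be sketched on the CPU as follows; the tile size and the distance-based relevance test are illustrative stand-ins for the paper's actual scheme, in which each tile's pulse set would be handed to a GPU thread block:

```python
# Sketch of output-driven partitioning for SAR reconstruction: the output
# image is cut into square tiles, and each pulse is bucketed with every tile
# it can affect (here approximated by a point-to-rectangle distance test).
import math

def make_tiles(width: int, height: int, tile: int):
    """Return (x0, y0, x1, y1) tile bounds covering a width x height image."""
    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

def assign_pulses(tiles, pulses, reach: float):
    """Map tile index -> indices of pulses within `reach` of that tile."""
    buckets = {i: [] for i in range(len(tiles))}
    for p, (px, py) in enumerate(pulses):
        for i, (x0, y0, x1, y1) in enumerate(tiles):
            dx = max(x0 - px, 0.0, px - x1)   # distance to tile in x
            dy = max(y0 - py, 0.0, py - y1)   # distance to tile in y
            if math.hypot(dx, dy) <= reach:
                buckets[i].append(p)          # this tile's block gets pulse p
    return buckets

tiles = make_tiles(2048, 2048, 512)           # a 2,048 x 2,048 output image
assert len(tiles) == 16                       # 4 x 4 tiles of 512 x 512
```

Because each tile sees only its own pulse subset, the per-block working set stays small, which is what keeps the GPU I/O cost near-optimal in the paper's analysis.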


Mathematics of Data/Image Coding, Compression, and Encryption V, with Applications | 2003

Object-Based Image Compression

Mark S. Schmalz

Image compression frequently supports reduced storage requirements in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS-based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral signature. In particular, discussion addresses issues such as efficient boundary representation, variance assessment and representation, as well as texture classification and replacement algorithms that can decrease compression overhead and increase reconstruction fidelity in the decompressed image. Contextual extraction of motion patterns in digital video sequences, using a frequency-domain pattern recognition technique based on interframe correlation, is described in a companion paper. This technique can also be extended to multidimensional image domains, to support joint spectral, spatial, and temporal compression.
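The segment-then-encode flow described above can be sketched in miniature; the flood-fill segmenter and the (bounding box, mean intensity) region code here are toy stand-ins for the paper's much richer boundary and texture models:

```python
# Toy object-based compression flow: segment the image into contiguous
# constant-valued regions (4-connected flood fill), then describe each region
# compactly instead of coding individual pixels.
import numpy as np

def segment_regions(img: np.ndarray):
    """Label 4-connected regions of equal value; returns (label map, count)."""
    labels = -np.ones(img.shape, dtype=int)
    nxt = 0
    for sy in range(img.shape[0]):
        for sx in range(img.shape[1]):
            if labels[sy, sx] >= 0:
                continue
            stack, v = [(sy, sx)], img[sy, sx]
            labels[sy, sx] = nxt
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                            and labels[ny, nx] < 0 and img[ny, nx] == v):
                        labels[ny, nx] = nxt
                        stack.append((ny, nx))
            nxt += 1
    return labels, nxt

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200                      # one "object" on a background
labels, n = segment_regions(img)
# Compact per-region code: (top-left corner, bottom-right corner, mean value)
regions = [(np.argwhere(labels == k).min(0), np.argwhere(labels == k).max(0),
            img[labels == k].mean()) for k in range(n)]
assert n == 2                            # background + object
```

Two region records replace 64 pixel values here; the paper's point is that for imagery with large coherent objects, this structural redundancy is exactly what pixel-level VQ, TC, and IFS coders leave on the table.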


ACM Southeast Regional Conference | 1998

Approximating the longest approximate common subsequence problem

Wen-Chen Hu; Gerhard X. Ritter; Mark S. Schmalz

Finding a longest common subsequence of two strings is a well-known problem. We generalize this problem to a longest approximate common subsequence problem that produces a maximum-gain approximate common subsequence of two strings. An approximate subsequence of a string X is a string edited from a subsequence of X. String Z is an approximate common subsequence of two strings X and Y if Z is an approximate subsequence of both X and Y. The gain function g assigns a nonnegative real number to each subsequence. The problem is divided into smaller segments in order to lessen its complexity, with some of these segments having been proven to be NP-hard. A heuristic approximation algorithm and an optimization neural network are constructed to find a near-optimal solution for the problem, where a ratio bound of the approximation algorithm is given, and a technique of interception is used to determine the values of the network weights. Some experimental results and the comparative performance of the two methods also are discussed.
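For background, the classical (exact) longest common subsequence problem that the paper generalizes is solvable by the standard dynamic program; a compact sketch:

```python
# Classical LCS via the standard O(len(x) * len(y)) dynamic program:
# L[i][j] = length of an LCS of x[:i] and y[:j], followed by a backtrack
# to recover one longest common subsequence.
def lcs(x: str, y: str) -> str:
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            L[i][j] = (L[i-1][j-1] + 1 if x[i-1] == y[j-1]
                       else max(L[i-1][j], L[i][j-1]))
    out, i, j = [], m, n          # backtrack from the bottom-right cell
    while i and j:
        if x[i-1] == y[j-1]:
            out.append(x[i-1]); i -= 1; j -= 1
        elif L[i-1][j] >= L[i][j-1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

assert lcs("AGGTAB", "GXTXAYB") == "GTAB"
```

The paper's variant replaces exact matching with edited (approximate) subsequences scored by a gain function, which is what pushes parts of the generalized problem into NP-hardness and motivates its heuristic and neural-network solvers.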

Collaboration


Dive into Mark S. Schmalz's collaborations.

Top Co-Authors

Frank M. Caimi

Florida Atlantic University


Junior Barrera

University of São Paulo


Jaakko Astola

Tampere University of Technology
