Publication


Featured research published by Ronald F. DeMara.


Integration | 2004

Optimization of NULL convention self-timed circuits

Ronald F. DeMara; Jiann S. Yuan; D. Ferguson; D. Lamb

Self-timed logic design methods are developed using Threshold Combinational Reduction (TCR) within the NULL Convention Logic (NCL) paradigm. NCL logic functions are realized using 27 distinct transistor networks implementing the set of all functions of four or fewer variables, thus facilitating a variety of gate-level optimizations. TCR optimizations are formalized for NCL and then assessed by comparing levels of gate delays, gate counts, transistor counts, and power utilization of the resulting designs. The methods are illustrated to produce (1) fundamental logic functions that are 2.2-2.3 times faster and require 40-45% fewer transistors than conventional canonical designs, (2) a Full Adder with reduced critical path delay and transistor count over various alternative gate-level synthesis approaches, resulting in a circuit with at least 48% fewer transistors, half as many gate delays to generate the carry output, and the same number of gate delays to generate the sum output, as its nearest competitors, and (3) time, space, and power optimized increment circuits for a 4-bit up-counter, resulting in a throughput-optimized design that is 14% and 82% faster than area- and power-optimized designs, respectively, an area-optimized design that requires 22% and 42% fewer transistors than the speed- and power-optimized designs, respectively, and a power-optimized design that dissipates 63% and 42% less power than the speed- and area-optimized designs, respectively. Results demonstrate support for a variety of optimizations utilizing conventional Boolean minimization followed by table-driven gate substitutions, providing for an NCL design method that is readily automatable.
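NCL realizes logic with threshold gates that hold their output until a complete NULL wavefront arrives. As a rough illustration of the gate primitive that TCR optimizes over, the following Python sketch models a generic THmn threshold gate with hysteresis; the class name and the simple 0/1 rail encoding are illustrative assumptions, not the paper's transistor-level realization.

```python
# Minimal behavioral model of an NCL threshold gate THmn with hysteresis:
# the output asserts once at least m of the n inputs are asserted (DATA),
# and deasserts only after all inputs have returned to NULL (0).
class ThresholdGate:
    def __init__(self, m, n):
        assert 1 <= m <= n
        self.m, self.n = m, n
        self.out = 0  # gates reset to NULL

    def evaluate(self, inputs):
        assert len(inputs) == self.n
        asserted = sum(inputs)
        if asserted >= self.m:
            self.out = 1            # enough DATA inputs: assert output
        elif asserted == 0:
            self.out = 0            # all inputs NULL: release output
        # otherwise hold the previous value (hysteresis)
        return self.out

if __name__ == "__main__":
    th23 = ThresholdGate(m=2, n=3)
    print(th23.evaluate([1, 1, 0]))  # 1: threshold reached
    print(th23.evaluate([1, 0, 0]))  # 1: held by hysteresis
    print(th23.evaluate([0, 0, 0]))  # 0: NULL wavefront releases the gate
```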


Integration | 2001

Delay-insensitive gate-level pipelining

Ronald F. DeMara; Jiann S. Yuan; M. Hagedorn; D. Ferguson

Gate-level pipelining (GLP) techniques are developed to design throughput-optimal delay-insensitive digital systems using NULL convention logic (NCL). Pipelined NCL systems consist of combinational, registration, and completion circuits implemented using threshold gates equipped with hysteresis behavior. NCL combinational circuits provide the desired processing behavior between asynchronous registers that regulate wavefront propagation. NCL completion logic detects completed DATA or NULL output sets from each register stage. GLP techniques cascade registration and completion elements to systematically partition a combinational circuit and allow controlled overlapping of input wavefronts. Both full-word and bit-wise completion strategies are applied progressively to select the optimal size grouping of operand and output data bits. To illustrate the methodology, GLP is applied to a case study of a 4-bit × 4-bit unsigned multiplier, yielding a speedup of 2.25 over the non-pipelined version, while maintaining delay insensitivity.
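The completion detection described above can be illustrated with a short sketch. The Python below models dual-rail NULL/DATA completion over a register stage's outputs; the handshake polarity (request-for-DATA = 1) and the helper names are assumptions for illustration only, not the paper's circuits.

```python
# Sketch of NCL-style completion detection over a word of dual-rail signals.
# Each bit is a (rail1, rail0) pair: NULL = (0, 0), DATA0 = (0, 1), DATA1 = (1, 0).
NULL, DATA0, DATA1 = (0, 0), (0, 1), (1, 0)

def is_complete_data(word):
    """True when every dual-rail bit carries DATA (exactly one rail high)."""
    return all(r1 ^ r0 for r1, r0 in word)

def is_complete_null(word):
    """True when every dual-rail bit has returned to NULL."""
    return all(r1 == 0 and r0 == 0 for r1, r0 in word)

def completion_signal(word):
    """Handshake request: ask for NULL (0) after a full DATA set,
    ask for DATA (1) after a full NULL set, else hold (None)."""
    if is_complete_data(word):
        return 0   # stage is full: request NULL from the previous stage
    if is_complete_null(word):
        return 1   # stage is empty: request DATA from the previous stage
    return None    # wavefront still propagating: keep the previous request

if __name__ == "__main__":
    print(completion_signal([DATA1, DATA0, DATA1, DATA0]))  # 0
    print(completion_signal([NULL, NULL, NULL, NULL]))      # 1
    print(completion_signal([DATA1, NULL, DATA0, NULL]))    # None
```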


Informing Science: The International Journal of an Emerging Transdiscipline | 2004

Evaluation of the human impact of password authentication practices on information security

Deborah Sater Carstens; Pamela R. McCauley-Bell; Linda C. Malone; Ronald F. DeMara

Introduction

The expansion of computing and networking, together with the increase in threats, has heightened the need to perpetually manage information security within an organization. Although there is literature addressing the human side of information security, events such as 9/11 and the war on terrorism have created more of a burden for organizations, government and private industry, enhancing the need for more research in information security. Carnegie Mellon's Computer Emergency Response Team (2004) has collected statistics showing that 6 security incidents were reported in 1988 compared to 137,529 in 2003. A survey by the Federal Bureau of Investigation (FBI) suggested that 40% of organizations surveyed claimed that system penetrations from outside their organization had increased from the prior year by 25% (Ives, Walsh, & Schneider, 2004). The U.S. Department of Homeland Security (2002) is concerned with the need for information security measures. Therefore, the Federal Information Security Management Act of 2002 was put into place for the purposes of protecting information and systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide integrity, confidentiality, and availability of information. The government has an information security responsibility ranging from protecting intelligence information to issuing social security numbers for each citizen. Private industry must also be concerned with information security, as it is vital for the livelihood of any company to protect customers' personal information along with the management of each company's supply chain (Olivia, 2003).

Earlier research identified the presence of human error risks to the security of information systems (Wood & Banks, 1993; Courtney, as cited in NIST, 1992). A survey conducted by one of the authors identified password issues as the second most likely human error risk factor to impact an information system. The significance of this is heightened when one realizes that passwords are the primary source of user authentication for the majority of personal and private information systems. Past research findings of password issues as a human error risk factor have been further identified as a threat to security by the University of Findlay Center for Terrorism Preparedness (2003), which developed a vulnerability assessment methodology to better help organizations identify their weaknesses in terms of information security. Extensive password requirements can overload human memory capabilities as the number of passwords and their complexity level increases. The exponential growth in security incidents (Carnegie Mellon Computer Emergency Response Team, 2004) requires a comprehensive approach to the development of password guidelines which do not exceed human memory limitations yet maintain the strength of passwords as necessitated by the information technology (IT) community. The IT community consists of network administrators or security officers who are directly responsible for information security in terms of integrity, confidentiality, and availability of information. In earlier investigations, over 50% of incidents that occur within government and private organizations have been connected to human errors (NIST, 1992). The impact of human error on information security is an important issue that, left unresolved, can have adverse effects on industry. This research is focused on measuring the impact of password demands as a means of authentication and on mitigating the risks that result when these demands exceed human capabilities.

Literature Review: Information Security

Information security involves making information accessible to those who need it, while maintaining integrity and confidentiality. The three categories used to classify information security risks are confidentiality, integrity, and accessibility or availability of information (U. …


Microelectronics Journal | 2015

Design and evaluation of an ultra-area-efficient fault-tolerant QCA full adder

Arman Roohi; Ronald F. DeMara; Navid Khoshavi

Quantum-dot cellular automata (QCA) has been studied extensively as a promising switching technology at the nanoscale level. Despite several potential advantages of QCA-based designs over conventional CMOS logic, deposition defects are likely to occur in QCA-based systems, which has necessitated fault-tolerant structures. Since binary adders are among the most frequently used components in digital systems, this work targets the design of a highly optimized, robust full adder in a QCA framework. Results demonstrate the superiority of the proposed full adder in terms of latency, complexity, and area with respect to previous full adder designs. Further, the functionality and the defect tolerance of the proposed full adder in the presence of QCA deposition faults are studied. The functionality and correctness of our design are confirmed using high-level synthesis, followed by delineation of its normal and faulty behavior using a Probabilistic Transfer Matrix (PTM) method. Waveforms generated using the QCADesigner simulation tool, which verify the robustness of the proposed design, are also discussed.
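To give a flavor of the PTM evaluation mentioned above, the following Python sketch builds a probabilistic transfer matrix for a single majority (carry) function under an assumed uniform output-error rate and averages its probability of producing the correct output; the 5% fault rate and the single-gate granularity are illustrative assumptions, not the paper's QCA-cell-level model.

```python
# Toy Probabilistic Transfer Matrix (PTM) sketch: rows index input patterns,
# columns index output values, entries are output probabilities.
import numpy as np

def gate_ptm(truth_table, p_err):
    """PTM of a single-output gate whose output flips with probability p_err."""
    rows = []
    for correct in truth_table:                 # one row per input pattern
        rows.append([p_err, 1 - p_err] if correct else [1 - p_err, p_err])
    return np.array(rows)

def reliability(ptm, truth_table):
    """Probability of a correct output, averaged over uniform random inputs."""
    return float(np.mean([ptm[i, out] for i, out in enumerate(truth_table)]))

if __name__ == "__main__":
    # Majority (carry) function of a full adder: out = MAJ(a, b, cin).
    maj = [int(a + b + c >= 2) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    ptm = gate_ptm(maj, p_err=0.05)             # assumed 5% output-error rate
    print(f"carry reliability: {reliability(ptm, maj):.3f}")   # -> 0.950
```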


Systems, Man and Cybernetics | 2006

Learning tactical human behavior through observation of human performance

Hans Fernlund; Avelino J. Gonzalez; Michael Georgiopoulos; Ronald F. DeMara

It is widely accepted that the difficulty and expense involved in acquiring the knowledge behind tactical behaviors have been one limiting factor in the development of simulated agents representing adversaries and teammates in military and game simulations. Several researchers have addressed this problem with varying degrees of success. The problem mostly lies in the fact that tactical knowledge is difficult to elicit and represent through interactive sessions between the model developer and the subject matter expert. This paper describes a novel approach that employs genetic programming in conjunction with context-based reasoning to evolve tactical agents based upon automatic observation of a human performing a mission on a simulator. In this paper, we describe the process used to carry out the learning. A prototype was built to demonstrate feasibility and it is described herein. The prototype was rigorously and extensively tested. The evolved agents exhibited good fidelity to the observed human performance, as well as the capacity to generalize from it.
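As a loose, minimal illustration of learning behavior from observation, the Python sketch below evolves two thresholds of a toy rule-based controller so that its actions match a recorded trace of (state, action) pairs; the observation data, the controller structure, and the Gaussian mutation are all invented stand-ins for the paper's genetic programming and context-based reasoning machinery.

```python
# Highly simplified sketch of learning from observation: evolve a tiny
# rule-based controller so its actions match a recorded human trace.
# The (state, action) observations below are made up for illustration.
import random

observations = [(5.0, "brake"), (40.0, "cruise"), (80.0, "accelerate"),
                (12.0, "brake"), (55.0, "cruise"), (95.0, "accelerate")]

def act(genome, distance):
    low, high = genome               # two evolved thresholds on the state variable
    if distance < low:
        return "brake"
    if distance < high:
        return "cruise"
    return "accelerate"

def fitness(genome):
    return sum(act(genome, s) == a for s, a in observations)

def evolve(pop_size=30, generations=50):
    pop = [(random.uniform(0, 50), random.uniform(50, 100)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [(max(0.0, lo + random.gauss(0, 5)), hi + random.gauss(0, 5))
                    for lo, hi in parents]          # Gaussian mutation
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("thresholds:", best, "matches:", fitness(best), "of", len(observations))
```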


International Conference on Evolvable Systems | 2003

A genetic representation for evolutionary fault recovery in Virtex FPGAs

Jason D. Lohn; Gregory V. Larchev; Ronald F. DeMara

Most evolutionary approaches to fault recovery in FPGAs focus on evolving alternative logic configurations as opposed to evolving the intra-cell routing. Since the majority of transistors in a typical FPGA are dedicated to interconnect, nearly 80% according to one estimate, evolutionary fault-recovery systems should benefit by accommodating routing. In this paper, we propose an evolutionary fault-recovery system employing a genetic representation that takes into account both logic and routing configurations. Experiments were run using a software model of the Xilinx Virtex FPGA. We report that using four Virtex combinational logic blocks, we were able to evolve a 100% accurate quadrature decoder finite state machine in the presence of a stuck-at-zero fault.
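The following Python sketch conveys the idea of a genome that encodes both logic and routing: a toy genetic algorithm evolves a 4-entry LUT together with two wire selections to recover a 2-input XOR when one candidate wire is stuck at zero. The genome layout, fault model, and GA parameters are illustrative assumptions, not the Virtex representation used in the paper.

```python
# Toy sketch of a fault-recovery genome that encodes both logic (LUT contents)
# and routing (which wires feed the LUT), evaluated against a stuck-at-zero wire.
import random

TARGET = lambda a, b: a ^ b          # behavior to recover (2-input XOR)
STUCK_AT_ZERO = 1                    # wire index 1 is faulty and always reads 0

def evaluate(genome):
    lut, routing = genome            # lut: 4 bits; routing: 2 wire indices out of 4
    score = 0
    for a in (0, 1):
        for b in (0, 1):
            wires = [a, b, a, b]             # candidate wires carrying the inputs
            wires[STUCK_AT_ZERO] = 0         # inject the stuck-at-zero fault
            i0, i1 = wires[routing[0]], wires[routing[1]]
            out = lut[i1 * 2 + i0]           # LUT lookup on the routed inputs
            score += (out == TARGET(a, b))
    return score                              # 4 means fully recovered

def random_genome():
    return ([random.randint(0, 1) for _ in range(4)],
            [random.randrange(4), random.randrange(4)])

def mutate(genome):
    lut, routing = [x[:] for x in genome]
    if random.random() < 0.5:
        lut[random.randrange(4)] ^= 1                        # flip one LUT bit
    else:
        routing[random.randrange(2)] = random.randrange(4)   # reroute one input
    return (lut, routing)

if __name__ == "__main__":
    pop = [random_genome() for _ in range(20)]
    for _ in range(200):
        pop.sort(key=evaluate, reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
    best = max(pop, key=evaluate)
    print("best genome:", best, "fitness:", evaluate(best), "/ 4")
```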


ACM Transactions on Embedded Computing Systems | 2009

Scalable FPGA-based architecture for DCT computation using dynamic partial reconfiguration

Jian Huang; Matthew Parris; Jooheung Lee; Ronald F. DeMara

In this article, we propose a field programmable gate array (FPGA)-based scalable architecture for discrete cosine transform (DCT) computation using dynamic partial reconfiguration. Our architecture can achieve quality scalability using dynamic partial reconfiguration, which is important for critical applications that need continuous hardware servicing. Our scalable architecture has three features. First, the architecture can perform DCT computations for eight different zones, that is, from 1 × 1 DCT to 8 × 8 DCT. Second, the architecture can change the configuration of processing elements (PEs) to trade off the precision of DCT coefficients against computational complexity. Third, PEs unused for DCT can be reused for motion estimation computations. Using dynamic partial reconfiguration with 2.3 MB bitstreams, 80 distinct hardware architectures can be implemented. We show experimental results and comparisons between different configurations using both partial and non-partial reconfiguration processes. The detailed trade-offs among visual quality, power consumption, processing clock cycles, and reconfiguration overhead are analyzed in the article.
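A software analogue of the zonal quality scalability described above is sketched below in plain NumPy: an 8 × 8 block is transformed and only the top-left N × N zone of coefficients is retained, mirroring what swapping processing-element zones trades off in hardware. This is a behavioral illustration only, not the FPGA architecture itself.

```python
# Zonal 2-D DCT sketch: compute an 8x8 DCT and keep only the top-left
# zone x zone corner of coefficients (zone = 1..8), then measure the
# reconstruction error to show the quality/complexity trade-off.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def zonal_dct(block, zone):
    """2-D DCT of a square block, zeroing coefficients outside the zone x zone corner."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    mask = np.zeros_like(coeffs)
    mask[:zone, :zone] = 1.0
    return coeffs * mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8)).astype(float)
    C = dct_matrix(8)
    for zone in (1, 4, 8):
        coeffs = zonal_dct(block, zone)
        recon = C.T @ coeffs @ C                      # inverse of the orthonormal DCT
        err = np.sqrt(np.mean((block - recon) ** 2))
        print(f"zone {zone}x{zone}: RMS reconstruction error = {err:.1f}")
```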


NASA/DoD Conference on Evolvable Hardware | 2005

Autonomous FPGA fault handling through competitive runtime reconfiguration

Ronald F. DeMara; Kening Zhang

An autonomous self-repair approach for SRAM-based FPGAs is developed based on competitive runtime reconfiguration (CRR). Under the CRR technique, an initial population of functionally identical (same input-output behavior), yet physically distinct (alternative design or place-and-route realization) FPGA configurations is produced at design time. At run-time, these individuals compete for selection based on a fitness function favoring fault-free behavior. Hence, any physical resource exhibiting an operationally significant fault decreases the fitness of those configurations which use it. Through runtime competition, the presence of the fault becomes occluded from the visibility of subsequent FPGA operations. Meanwhile, the offspring formed through crossover and mutation of faulty and viable configurations are reintroduced into the population. This enables evolution of a customized fault-specific repair, realized directly as new configurations using the FPGA's normal throughput processing operations. Multiple phases of the fault-handling process, including detection, isolation, diagnosis, and recovery, are integrated into a single cohesive approach. FPGA-based multipliers are examined as a case study demonstrating evolution of a complete repair for a 3-bit × 3-bit multiplier from several stuck-at faults within a few thousand iterations. Repairs are evolved in situ, in real time, without test vectors, while allowing the FPGA to remain partially online.
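A greatly simplified sketch of the competitive selection idea follows: functionally identical "configurations" are paired on live operands, output discrepancies lower the fitness of both competitors, and configurations whose fitness collapses are regenerated. The discrepancy-based scoring, the thresholds, and the stand-in "repair by replacement" are assumptions for illustration; the paper evolves actual FPGA configurations through crossover and mutation.

```python
# Greatly simplified sketch of competitive runtime selection: paired
# configurations process the same live inputs, disagreements penalize both,
# and exhausted configurations are regenerated (a stand-in for evolved repair).
import random

def make_config(faulty=False):
    """A 'configuration' is just a 2-bit adder here; a fault corrupts its output."""
    def run(a, b):
        result = a + b
        return (result ^ 0b01) if faulty else result   # stuck fault flips a bit
    return {"run": run, "fitness": 10.0, "faulty": faulty}

population = [make_config(faulty=(i == 0)) for i in range(6)]   # one faulty member

for step in range(200):
    left, right = random.sample(population, 2)           # pair two competitors
    a, b = random.randrange(4), random.randrange(4)       # live operands
    if left["run"](a, b) != right["run"](a, b):
        left["fitness"] -= 1.0                            # discrepancy penalizes both
        right["fitness"] -= 1.0
    else:
        left["fitness"] = min(10.0, left["fitness"] + 0.5)
        right["fitness"] = min(10.0, right["fitness"] + 0.5)
    for i, cfg in enumerate(population):
        if cfg["fitness"] <= 0:                           # demote and regenerate
            population[i] = make_config(faulty=False)     # stand-in for evolved repair

print("faulty configurations remaining:",
      sum(cfg["faulty"] for cfg in population))
```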


ACM Computing Surveys | 2011

Progress in autonomous fault recovery of field programmable gate arrays

Matthew Parris; Carthik A. Sharma; Ronald F. DeMara

This survey develops a descriptive classification of the capabilities of current fault-handling techniques for Field Programmable Gate Arrays (FPGAs), ranging from simple passive techniques to robust dynamic methods. Fault-handling methods not requiring modification of the FPGA device architecture or user intervention to recover from faults are examined and evaluated against overhead-based and sustainability-based performance metrics such as additional resource requirements, throughput reduction, fault capacity, and fault coverage. Together, this classification and these performance metrics form a standard for confident comparisons.


Neural Networks | 2005

Data-partitioning using the Hilbert space filling curves: Effect on the speed of convergence of Fuzzy ARTMAP for large database problems

José Castro; Michael Georgiopoulos; Ronald F. DeMara; Avelino J. Gonzalez

The Fuzzy ARTMAP algorithm has been proven to be one of the premier neural network architectures for classification problems. One of the properties of Fuzzy ARTMAP, which can be both an asset and a liability, is its capacity to produce new nodes (templates) on demand to represent classification categories. This property allows Fuzzy ARTMAP to automatically adapt to the database without having to a priori specify its network size. On the other hand, it has the undesirable side effect that large databases might produce a large network size (node proliferation) that can dramatically slow down the training speed of the algorithm. To address the slow convergence speed of Fuzzy ARTMAP for large database problems, we propose the use of space-filling curves, specifically the Hilbert space-filling curves (HSFC). Hilbert space-filling curves allow us to divide the problem into smaller sub-problems, each focusing on a smaller dataset than the original. For learning each partition of data, a different Fuzzy ARTMAP network is used. Through this divide-and-conquer approach we avoid the node proliferation problem, and consequently we speed up Fuzzy ARTMAP's training. Results have been produced for two-class, 16-dimensional Gaussian data and for the Forest database, available at the UCI repository. Our results indicate that the Hilbert space-filling curve approach reduces the time that it takes to train Fuzzy ARTMAP without affecting the generalization performance attained by Fuzzy ARTMAP trained on the original large dataset. Given that the resulting smaller datasets produced by the HSFC approach can be learned independently by different Fuzzy ARTMAP networks, we have also implemented and tested a parallel implementation of this approach on a Beowulf cluster of workstations that further speeds up Fuzzy ARTMAP's convergence to a solution for large database problems.
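The data-partitioning step can be sketched directly: index each 2-D sample along a Hilbert curve, sort by that index, and split the sorted stream into spatially coherent chunks, each of which would train its own Fuzzy ARTMAP network. The Python below implements the standard Hilbert cell-indexing recurrence for a 2-D grid; the grid resolution and the uniform test data are illustrative assumptions, and the ARTMAP training itself is omitted.

```python
# Hilbert space-filling-curve data partitioning: sort 2-D samples by their
# Hilbert index and split the sorted stream into spatially coherent chunks.
import random

def hilbert_index(side, x, y):
    """Distance of grid cell (x, y) along a Hilbert curve on a side x side grid."""
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                              # rotate/reflect the quadrant
            if rx == 1:
                x, y = side - 1 - x, side - 1 - y
            x, y = y, x
        s //= 2
    return d

def partition(points, side=256, n_parts=4):
    """Sort 2-D points in [0, 1)^2 by Hilbert index and split into n_parts chunks."""
    indexed = sorted(points, key=lambda p: hilbert_index(side,
                                                         int(p[0] * side),
                                                         int(p[1] * side)))
    size = len(indexed) // n_parts
    return [indexed[i * size:(i + 1) * size] for i in range(n_parts)]

if __name__ == "__main__":
    random.seed(0)
    data = [(random.random(), random.random()) for _ in range(1000)]
    for i, chunk in enumerate(partition(data)):
        xs, ys = zip(*chunk)
        print(f"partition {i}: {len(chunk)} points, "
              f"x in [{min(xs):.2f}, {max(xs):.2f}], y in [{min(ys):.2f}, {max(ys):.2f}]")
```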

Collaboration


Top co-authors of Ronald F. DeMara.

Avelino J. Gonzalez (University of Central Florida)
Michael Georgiopoulos (University of Central Florida)
Rizwan A. Ashraf (University of Central Florida)
Arman Roohi (University of Central Florida)
Ramtin Zand (University of Central Florida)
Navid Khoshavi (University of Central Florida)
Naveed Imran (University of Central Florida)
Carthik A. Sharma (University of Central Florida)
Soheil Salehi (University of Central Florida)