Gregg T. Vesonder
AT&T Labs
Publication
Featured research published by Gregg T. Vesonder.
Knowledge Discovery and Data Mining | 2003
Tamraparni Dasu; Gregg T. Vesonder; Jon R. Wright
Traditionally, data quality programs have acted as a preprocessing stage to make data suitable for a data mining or analysis operation. Recently, data quality concepts have been applied to databases that support business operations such as provisioning and billing. Incorporating business rules that drive operations and their associated data processes is critically important to the success of such projects. However, there are many practical complications. For example, documentation on business rules is often meager. Rules change frequently. Domain knowledge is often fragmented across experts, and those experts do not always agree. Typically, rules have to be gathered from subject matter experts iteratively, and are discovered out of logical or procedural sequence, like a jigsaw puzzle. Our approach is to implement business rules as constraints on data in a classical expert system formalism sometimes called production rules. Our system works by allowing good data to pass through a system of constraints unchecked. Bad data violate constraints and are flagged, and then fed back after correction. Constraints are added incrementally as better understanding of the business rules is gained. We include a real-life case study.
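The screening loop this abstract describes is easy to picture in code. Below is a minimal Python sketch of business rules as data constraints: each rule is a named predicate, clean records pass through untouched, and violators are flagged with the rules they break. The rule names and record fields are invented for illustration and are not taken from the paper's system.

```python
from typing import Callable, Dict, List, Tuple

Record = Dict[str, object]
Rule = Tuple[str, Callable[[Record], bool]]  # (rule name, constraint predicate)

# Illustrative constraints; real ones would be gathered from subject matter experts.
RULES: List[Rule] = [
    ("billing_id_present", lambda r: bool(r.get("billing_id"))),
    ("usage_nonnegative", lambda r: isinstance(r.get("usage"), (int, float)) and r["usage"] >= 0),
    ("status_known", lambda r: r.get("status") in {"active", "suspended", "closed"}),
]

def screen(records: List[Record]):
    """Split records into clean ones and flagged ones paired with their violated rules."""
    clean, flagged = [], []
    for rec in records:
        violations = [name for name, pred in RULES if not pred(rec)]
        if violations:
            flagged.append((rec, violations))  # fed back for correction, then re-screened
        else:
            clean.append(rec)                  # good data pass through unchecked
    return clean, flagged

clean, flagged = screen([
    {"billing_id": "B1", "usage": 12.5, "status": "active"},
    {"billing_id": "", "usage": -3, "status": "unknown"},  # violates all three rules
])
print(len(clean), "clean;", flagged[0][1])
```

New constraints can simply be appended to the RULES list as understanding of the business rules improves, which mirrors the incremental, jigsaw-puzzle approach the abstract describes.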
AI Magazine | 1993
Jon R. Wright; Elia Weixelbaum; Gregg T. Vesonder; Karen E. Brown; Stephen R. Palmer; Jay I. Berman; Harry H. Moore
PROSE is a knowledge-based configurator platform for telecommunications products. Its outstanding feature is a product knowledge base written in C-classIC, a frame-based knowledge representation system in the KL-ONE family of languages. It is one of the first successful products using a KL-ONE style language. Unlike previous configurator applications, the PROSE knowledge base is in a purely declarative form that provides developers with the ability to add knowledge quickly and consistently. The PROSE architecture is general and is not tied to any specific telecommunications product. As such, it is being reused to develop configurators for several different products. Finally, PROSE not only generates configurations from just a few high-level parameters, but it can also verify configurations produced manually by customers, engineers, or salespeople. The same product knowledge, encoded in C-classIC, supports both the generation and the verification of product configurations.
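A toy sketch may help illustrate PROSE's central claim that one declarative product model can both generate and verify configurations. The product slots and rules below are invented examples, not PROSE's C-classIC knowledge base.

```python
# One declarative model: each slot maps high-level parameters to its required value.
MODEL = {
    "shelves":     lambda p: -(-p["lines"] // 32),       # ceil(lines / 32)
    "controllers": lambda p: 2 if p["redundant"] else 1,
}

def generate(params):
    """Derive a full configuration from a few high-level parameters."""
    return {slot: rule(params) for slot, rule in MODEL.items()}

def verify(params, config):
    """Return the slots whose values disagree with the declarative model."""
    return [s for s, rule in MODEL.items() if config.get(s) != rule(params)]

params = {"lines": 100, "redundant": True}
print(generate(params))                                   # {'shelves': 4, 'controllers': 2}
print(verify(params, {"shelves": 3, "controllers": 2}))   # ['shelves']
```

Because both functions read the same MODEL, adding a slot or tightening a rule immediately affects generation and verification alike, which is the reuse property the abstract emphasizes.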
Conference on Scientific Computing | 1984
Jon R. Wright; Frederick D. Miller; G. V. E. Otto; Elizabeth M. Siegfried; Gregg T. Vesonder; John E. Zielinski
ACE (Automated Cable Expertise) is a knowledge-based expert system that provides troubleshooting and diagnostic reports for telephone company managers. Its application domain is telephone cable maintenance. ACE departs from standard expert system architecture in that a separate database system is used as its primary source of information. ACE designers were influenced by the R1/XCON project, and ACE uses techniques similar to those of R1/XCON. This paper reports the progress of ACE as it moves out of experimentation and into a live software environment, and characterizes it in terms of current expert system technology.
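A minimal sketch of the architectural point, assuming a made-up schema: the expert system draws its facts from an existing database rather than from a user dialogue, then applies a diagnostic rule to produce a report.

```python
import sqlite3

# Stand-in for the telephone company's existing maintenance database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE troubles (cable TEXT, pair INTEGER)")
conn.executemany("INSERT INTO troubles VALUES (?, ?)",
                 [("CA-101", 4), ("CA-101", 7), ("CA-101", 9), ("CA-202", 2)])

REPEAT_THRESHOLD = 3  # invented rule: repeated troubles on one cable suggest a plant problem

# The "expert system" queries the database, then applies its rule to each cable.
for cable, n in conn.execute("SELECT cable, COUNT(*) FROM troubles GROUP BY cable"):
    if n >= REPEAT_THRESHOLD:
        print(f"REPORT: {cable} had {n} recent troubles; recommend cable inspection")
```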
Expert Systems with Applications | 1990
Jon R. Wright; Gregg T. Vesonder
Expert systems have been successfully applied to many maintenance, provisioning, and administrative tasks in telecommunications networks. Given that they can be appropriately integrated with the existing base of software applications, expert systems will play an important role in the future. We review nearly 40 current projects, which run the gamut from research prototype to finished product.
Symposium on Reliable Distributed Systems | 2012
Rajesh Krishna Panta; James A. Pelletier; Gregg T. Vesonder
Energy conservation and reliability of wireless communications are two crucial requirements of practical sensor networks. Radio duty cycling is a widely used mechanism to reduce the energy consumption of sensor devices and to increase the lifetime of the network. A side effect of radio duty cycling is that it can make wireless communication unreliable: if a sender node transmits a packet while the receiver is asleep, the communication fails. Early duty cycling protocols like B-MAC, designed for bit-streaming radios, achieve a low duty cycle by keeping the radio transceiver awake only for short time periods. However, they require a transmitter node to precede each packet transmission with a long preamble to ensure the reliability of wireless communication. Furthermore, they cannot be used with modern packet radios, such as the widely used IEEE 802.15.4 based radio transceivers, which cannot transmit arbitrarily long preambles. Recent duty cycling schemes like X-MAC, on the other hand, reduce the length of the preamble and are designed to work with packet radios. However, to ensure that a receiver can reliably detect a transmitter's preamble, these schemes must turn the radio transceiver on for longer durations than earlier schemes like B-MAC. In this paper, we present a novel duty cycling scheme called Quick MAC that achieves a very low duty cycle without compromising the reliability of wireless communication. Furthermore, Quick MAC is stateless, compatible with packet (and bit-stream) radios, and does not require synchronization among sensor nodes. In experiments using Tmote Sky motes, we show that Quick MAC reduces the duty cycle by a factor of about 4 compared to X-MAC while maintaining the same level of wireless communication reliability as X-MAC.
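The trade-off the abstract describes comes down to duty-cycle arithmetic. The sketch below illustrates it with invented timing constants; the real figures depend on the radio hardware and protocol parameters.

```python
def duty_cycle(awake_ms: float, period_ms: float) -> float:
    """Fraction of each wake-up period the radio spends on."""
    return awake_ms / period_ms

period = 1000.0  # receiver checks the channel once per second (illustrative)

# B-MAC-style low-power listening: a very short channel sample, but the
# sender must transmit a preamble longer than the whole check interval.
bmac_rx = duty_cycle(awake_ms=2.5, period_ms=period)

# X-MAC-style strobed preambles on packet radios: the receiver must stay
# awake long enough to catch one short preamble strobe plus the gap after it.
xmac_rx = duty_cycle(awake_ms=12.0, period_ms=period)

print(f"B-MAC receiver duty cycle ~{bmac_rx:.2%}, X-MAC ~{xmac_rx:.2%}")
# Quick MAC's stated goal is X-MAC's packet-radio compatibility with a
# receiver awake time closer to the B-MAC figure.
```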
High-Assurance Systems Engineering | 2008
Robin Berthier; Dave Korman; Michel Cukier; Matti A. Hiltunen; Gregg T. Vesonder; Daniel Sheleheda
Network malicious activity can be collected and reported by various sources using different attack detection solutions. The granularity of these solutions provides either very detailed information (intrusion detection systems, honeypots) or high-level trends (CAIDA, SANS). The problem for network security operators is often how to select the sources of information to better protect their network. How much information from these sources is redundant, and how much is unique? The goal of this paper is to show empirically that while some global attack events can be correlated across various sensors, the majority of incoming malicious activity has local specificities. This study presents a comparative analysis of four attack datasets offering three different levels of granularity: 1) two high-interaction honeynets deployed at two different locations (a corporate and an academic environment); 2) ATLAS, a distributed network telescope from Arbor; and 3) Internet Protect™, a global alerting service from AT&T.
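The redundant-versus-unique question can be quantified by comparing the attacker source sets that two sensors observe, for example with a Jaccard overlap. The addresses below are documentation placeholders, not data from the study.

```python
# Attacker source IPs seen by two hypothetical sensors.
honeynet_a = {"203.0.113.5", "198.51.100.7", "192.0.2.9"}
honeynet_b = {"203.0.113.5", "192.0.2.44", "198.51.100.200"}

shared = honeynet_a & honeynet_b
jaccard = len(shared) / len(honeynet_a | honeynet_b)
print(f"shared sources: {sorted(shared)}; Jaccard overlap: {jaccard:.2f}")
# A low overlap is consistent with the paper's finding: most malicious
# activity is locally specific, with only some global events seen everywhere.
```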
Conference on Scientific Computing | 1987
James R. Rowland; Gregg T. Vesonder
Conceptual clustering enhances the value of existing databases by revealing patterns in the data. These patterns may be useful for understanding trends, making predictions of future events from historical data, or synthesizing data records into meaningful clusters. LODE (Learning On Database Environments) is an incremental conceptual clustering program. The premise of the LODE system is that the task of discovering patterns in a large set of potentially noisy examples can be accomplished in a generate-and-test paradigm, using generalization techniques to generate hypotheses describing similar examples and then testing the accuracy of these hypotheses by comparing them to examples. The LODE system is an implementation of this premise. LODE was used to analyze keystroke data collected from novices learning to use the vi editor. The analysis shows that LODE discovered descriptions of recurring patterns of errors, known as mode errors, made by the novices.
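A toy reconstruction of the generate-and-test flavor of incremental conceptual clustering, not LODE's actual algorithm: a cluster is a generalized attribute-value description, and a new example either matches an existing description or triggers a hypothesized generalization that is accepted only if it stays specific enough.

```python
WILDCARD = "*"

def matches(desc, example):
    return all(v == WILDCARD or example.get(k) == v for k, v in desc.items())

def generalize(desc, example):
    """Keep agreeing attribute values; wildcard the ones that differ."""
    return {k: (v if example.get(k) == v else WILDCARD) for k, v in desc.items()}

def cluster(examples, max_wildcards=1):
    clusters = []  # each cluster: [description, members]
    for ex in examples:
        for c in clusters:
            if matches(c[0], ex):            # test: example already covered
                c[1].append(ex)
                break
            hypo = generalize(c[0], ex)      # generate: propose a broader description
            if sum(v == WILDCARD for v in hypo.values()) <= max_wildcards:
                c[0] = hypo                  # accept the hypothesis, absorb the example
                c[1].append(ex)
                break
        else:
            clusters.append([dict(ex), [ex]])
    return clusters

# Keystroke-error-like records: the same command issued in different vi modes.
data = [{"cmd": "x", "mode": "insert"},
        {"cmd": "x", "mode": "command"},
        {"cmd": "dd", "mode": "insert"}]
for desc, members in cluster(data):
    print(desc, "covers", len(members), "example(s)")
```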
Artificial Life | 2009
Gregg T. Vesonder
This paper describes a series of simulation experiments, based on a model by John W. Pepper [1], on two mechanisms, mutation rate and culling, and their effect on evolvability. The findings suggest that while culling may positively affect the performance of the population, increased culling negatively affects the evolution of the lineage's evolvability. Similarly, decreasing the mutation rate positively affects the performance of the population but negatively affects aspects of evolvability. This suggests that the mechanisms affecting evolvability are similar to the mechanisms affecting evolution in complex spaces.
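A skeleton of this kind of experiment, with mutation rate and culling fraction as the two knobs: evolve a bit-string population toward a fixed target and compare outcomes across parameter settings. The fitness function, parameter values, and population sizes are illustrative, not Pepper's model.

```python
import random

random.seed(1)
TARGET = [1] * 20  # fixed environment: an all-ones bit string

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(mutation_rate=0.02, cull_fraction=0.5, pop_size=60, gens=100):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: int(pop_size * (1 - cull_fraction))]   # culling step
        pop = [[1 - g if random.random() < mutation_rate else g  # point mutation
                for g in random.choice(survivors)]
               for _ in range(pop_size)]
    return max(fitness(g) for g in pop)

for mu, cull in [(0.02, 0.5), (0.002, 0.5), (0.02, 0.8)]:
    print(f"mutation rate {mu}, culling {cull}: best fitness {evolve(mu, cull)}/20")
```

Measuring evolvability, as opposed to raw performance, would require tracking how quickly lineages re-adapt after a change, which the single best-fitness number above does not capture.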
Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 1988
Douglas N. Gordin; Douglas Foxvog; James R. Rowland; Pamela Surko; Gregg T. Vesonder
OKIES is an expert system that troubleshoots newly assembled AT&T 3B2 computer systems. All AT&T 3B2 models and configurations are analyzed by OKIES. The expert system uses an architecture-based design to apply the same knowledge to different machines. An architectural model of the machine is constructed when the session begins. This model is used to determine which tests are applicable, the components that compose the machine, and how the machine should be fixed. OKIES was built as a production system using the OPS/83 [1] language. All inference is done by matching; no search or backtracking is performed. The first OKIES prototype used rules generated by a conceptual clustering system. A diagnosis is developed interactively by first having the user pick among problem descriptions. The expert system then refines this by asking the user questions; for instance, the user is asked to examine hardware connections, run tests, and report on error messages. If the expert system still cannot classify the problem, it requests further tests or presents a new problem classification. After determining the problem, a treatment is prescribed. The treatment depends on the problem found, the machine's configuration, and the machine's prior history. The current organization of OKIES is presented along with a description of the process by which it was built. Initially, the system was developed ad hoc, with structure imposed later. Specifically, generalization and decision trees were used to organize the knowledge.
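The architecture-based design can be sketched as: build a component model from the configuration first, then let that model select the applicable tests. Component names and tests below are invented for illustration.

```python
# Invented mapping from component type to the diagnostic tests it enables.
TESTS = {
    "memory_board": ["memory_walk_test"],
    "disk":         ["disk_seek_test", "disk_surface_scan"],
    "io_card":      ["loopback_test"],
}

def build_model(config):
    """Derive the component list from a configuration description."""
    components = ["memory_board"]                 # every machine has memory
    components += ["disk"] * config.get("disks", 0)
    components += ["io_card"] * config.get("io_cards", 0)
    return components

def applicable_tests(model):
    seen, tests = set(), []
    for comp in model:
        for t in TESTS.get(comp, []):
            if t not in seen:
                seen.add(t)
                tests.append(t)
    return tests

model = build_model({"disks": 2, "io_cards": 1})
print(applicable_tests(model))
# The same knowledge applies to any configuration, which is how one rule
# base can cover every machine variant.
```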
Systems, Man and Cybernetics | 2011
Gregg T. Vesonder
This paper describes a series of simulation experiments examining the effect of environmental dynamism on evolution and evolvability. Decreasing dynamism had a modest positive effect on evolution, which was enhanced when the mutation rate was decreased and culling was increased; decreasing the mutation rate and increasing culling together had a stronger effect than either alone. Decreasing dynamism also had a modest effect on evolvability, and this effect increased, up to a point, as mutation rates decreased. Therefore, both evolution and evolvability were affected by decreasing environmental dynamism, but the evolvability advantage disappeared when population diversity was decreased.
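A variation on the previous sketch, assuming dynamism is modeled as a drifting fitness target whose bits flip with some per-generation probability; everything here is an illustrative stand-in for the paper's model.

```python
import random

random.seed(2)

def run(dynamism=0.01, mutation_rate=0.02, cull_fraction=0.5,
        pop_size=60, gens=200, length=20):
    target = [random.randint(0, 1) for _ in range(length)]
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        # Environmental dynamism: each target bit flips with a small
        # per-generation probability.
        target = [1 - t if random.random() < dynamism else t for t in target]
        pop.sort(key=lambda g: sum(a == b for a, b in zip(g, target)), reverse=True)
        survivors = pop[: int(pop_size * (1 - cull_fraction))]
        pop = [[1 - g if random.random() < mutation_rate else g
                for g in random.choice(survivors)]
               for _ in range(pop_size)]
    return max(sum(a == b for a, b in zip(g, target)) for g in pop)

for dyn in (0.0, 0.01, 0.05):
    print(f"dynamism={dyn}: best match {run(dyn)}/20")
```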