Publications


Featured research published by Ramon Lawrence.


IEEE Transactions on Education | 2004

Teaching data structures using competitive games

Ramon Lawrence

A motivated student is more likely to be a successful learner. Interesting assignments encourage learning by actively engaging students in the material. Active student learning is especially important in an introductory data structures course where students learn the fundamentals of programming. In this paper, the author describes a project for a data structures course based on the idea of competitive programming. Competitive programming motivates student learning by letting students evaluate and improve their programs throughout an assignment, pitting their code against instructor-defined code and the code of other students in a tournament environment. Pedagogical results indicate that the combination of game development and friendly student competition is a significant motivator for increased student performance.
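
To make the tournament mechanism concrete, the sketch below implements a toy round-robin harness in the spirit of the project described above. The Strategy interface, the trivial number game, and the scoring rules are all invented for illustration; the actual course assignment is not specified here.

```python
# A toy round-robin tournament harness in the spirit of the paper's project.
# The Strategy interface, the trivial game, and the scoring rules are all
# hypothetical; they stand in for student- and instructor-submitted players.
import itertools
import random

class Strategy:
    def __init__(self, name):
        self.name = name
    def move(self, legal_moves):
        raise NotImplementedError

class RandomStrategy(Strategy):
    """A baseline player that picks uniformly at random."""
    def move(self, legal_moves):
        return random.choice(legal_moves)

def play_match(a, b, rounds=100):
    """Toy game: both sides pick a number 0-9 each round; higher pick wins."""
    wins = {a.name: 0, b.name: 0}
    for _ in range(rounds):
        ma, mb = a.move(range(10)), b.move(range(10))
        if ma != mb:
            wins[a.name if ma > mb else b.name] += 1
    return wins

def tournament(players):
    """Round-robin: every player meets every other; total wins rank them."""
    table = {p.name: 0 for p in players}
    for a, b in itertools.combinations(players, 2):
        for name, w in play_match(a, b).items():
            table[name] += w
    return sorted(table.items(), key=lambda kv: -kv[1])

print(tournament([RandomStrategy("alice"), RandomStrategy("bob"),
                  RandomStrategy("instructor")]))
```

A student's submission would subclass Strategy; running the tournament after each change gives the immediate competitive feedback the paper credits for increased motivation.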


ACM Symposium on Applied Computing | 2001

Integrating relational database schemas using a standardized dictionary

Ramon Lawrence; Ken Barker

Schema integration requires the resolution of naming, structural, and semantic conflicts. Currently, automatic schema integration is not possible. We propose that integration can be increasingly automated by capturing data semantics using a standardized dictionary. Our integration architecture constructs an integrated view by automatically combining local views defined by independently expressing database semantics in XML documents using only a pre-defined dictionary as a binding between integration sites. The dictionary eliminates naming conflicts and reduces semantic conflicts. Structural conflicts are resolved at query-time by a query processor which translates from the semantic integrated view to structural queries. Thus, the system provides both logical and physical access transparency by mapping user queries on high-level concepts to schema elements in the underlying data sources. The architecture automatically integrates and transparently queries relational data sources, and its application of standardization to the integration problem is unique.
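
As a rough illustration of the dictionary idea, the sketch below unifies two relational schemas whose columns are bound to shared dictionary concepts. The dictionary entries and schemas are invented, and the paper's system expresses semantics in XML rather than Python structures; only the name-resolution principle is shown.

```python
# Hypothetical sketch of dictionary-based name resolution: local schema terms
# are bound to a shared standardized dictionary, so two independently named
# columns that denote the same concept unify. All entries here are invented.
DICTIONARY = {
    "cust_id": "Customer.Id",
    "customer_no": "Customer.Id",
    "cname": "Customer.Name",
    "full_name": "Customer.Name",
}

def to_concepts(local_schema):
    """Map each local column to its dictionary concept (semantic name)."""
    return {col: DICTIONARY[col] for col in local_schema if col in DICTIONARY}

def integrate(schema_a, schema_b):
    """Build an integrated view: each concept present in either source, with
    the local columns that realize it. Naming conflicts disappear because
    both sides are expressed in dictionary terms."""
    view = {}
    for src, schema in (("A", schema_a), ("B", schema_b)):
        for col, concept in to_concepts(schema).items():
            view.setdefault(concept, []).append((src, col))
    return view

print(integrate(["cust_id", "cname"], ["customer_no", "full_name"]))
# {'Customer.Id': [('A', 'cust_id'), ('B', 'customer_no')],
#  'Customer.Name': [('A', 'cname'), ('B', 'full_name')]}
```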


Cooperative Information Systems | 2004

Composing Mappings Between Schemas Using a Reference Ontology

Eduard C. Dragut; Ramon Lawrence

Large-scale database integration requires a significant cost in developing a global schema and finding mappings between the global and local schemas. Developing the global schema requires matching and merging the concepts in the data sources and is a bottleneck in the process. In this paper we propose a strategy for computing the mapping between schemas by performing a composition of the mappings between individual schemas and a reference ontology. Our premise is that many organizations have standard ontologies that, although they may not be suitable as a global schema, are useful in providing standard terminology and naming conventions for concepts and relationships. It is valuable to leverage these existing ontological resources to help automate the construction of a global schema and mappings between schemas. Our system semi-automates the matching between local schemas and a reference ontology, then automatically composes the matchings to build mappings between schemas. Using these mappings, we use model management techniques to compute a global schema. A major advantage of this approach is that human intervention in validating matchings mostly occurs during the matching between schema and ontology. A problem is that matching schemas to ontologies is challenging because the ontology may only contain a subset of the concepts in the schema or may be more general than the schema. Further, the more complicated ontological graph structure limits the effectiveness of some matchers. Our contribution is showing how schema-to-ontology matchings can be used to compose mappings between schemas with high accuracy by adapting the COMA schema matching system to work with ontologies.
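
The composition step itself is simple to sketch. Assuming each schema has been matched to the reference ontology with a confidence score, joining the two matchings on the shared ontology concept yields a schema-to-schema mapping. The element names and the score combination below are illustrative; COMA's actual aggregation is more elaborate.

```python
# Sketch of mapping composition: given matchings from each local schema to a
# reference ontology, a schema-to-schema mapping falls out by joining on the
# shared ontology concept. Names and scores are invented for illustration.

def compose(match_a_to_ont, match_b_to_ont):
    """match_*_to_ont: {schema_element: (ontology_concept, confidence)}.
    Returns {a_element: (b_element, combined_confidence)} for every pair of
    elements matched to the same ontology concept."""
    # Invert the second matching: ontology concept -> schema B elements.
    by_concept = {}
    for b_elem, (concept, conf) in match_b_to_ont.items():
        by_concept.setdefault(concept, []).append((b_elem, conf))
    composed = {}
    for a_elem, (concept, conf_a) in match_a_to_ont.items():
        for b_elem, conf_b in by_concept.get(concept, []):
            # A simple multiplicative score; real systems aggregate differently.
            composed[a_elem] = (b_elem, conf_a * conf_b)
    return composed

a = {"emp_name": ("Person.name", 0.9), "emp_sal": ("Employee.salary", 0.8)}
b = {"worker":   ("Person.name", 0.85)}
print(compose(a, b))  # {'emp_name': ('worker', 0.765)}
```

Note that "emp_sal" produces no mapping because the ontology concept it matched has no counterpart in schema B; this is exactly the partial-coverage issue the abstract raises.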


International Conference on Computational Science | 2014

Integration and Virtualization of Relational SQL and NoSQL Systems Including MySQL and MongoDB

Ramon Lawrence

NoSQL databases are growing in popularity for Big Data applications in web analytics and supporting large web sites due to their high availability and scalability. Since each NoSQL system has its own API and does not typically support standards such as SQL and JDBC, integrating these systems with other enterprise and reporting software requires extra effort. In this work, we present a generic standards-based architecture that allows NoSQL systems, with specific focus on MongoDB, to be queried using SQL and seamlessly interact with any software supporting JDBC. A virtualization system is built on top of the NoSQL sources that translates SQL queries into the source-specific APIs. The virtualization architecture allows users to query and join data from both NoSQL and relational SQL systems in a single SQL query. Experimental results demonstrate that the virtualization layer adds minimal overhead in translating SQL to NoSQL APIs, and the virtualization system can efficiently perform joins across sources.
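
To give a flavour of the SQL-to-API translation, the sketch below maps a very restricted SELECT (single collection, AND-ed comparisons) onto MongoDB find() arguments. The grammar and type handling are deliberately minimal; the paper's virtualization layer covers far more, including joins across relational and NoSQL sources.

```python
# Hedged sketch of SQL-to-MongoDB translation: a restricted SELECT is mapped
# to the equivalent find() filter and projection. This toy grammar (single
# table, AND-ed comparisons) is far smaller than the paper's system.
import re

OPS = {"=": "$eq", ">": "$gt", "<": "$lt", ">=": "$gte", "<=": "$lte"}

def sql_to_mongo(sql):
    m = re.match(r"SELECT (.+) FROM (\w+)(?: WHERE (.+))?$", sql.strip(), re.I)
    cols, table, where = m.group(1), m.group(2), m.group(3)
    projection = (None if cols.strip() == "*"
                  else {c.strip(): 1 for c in cols.split(",")})
    filt = {}
    if where:
        for pred in re.split(r"\s+AND\s+", where, flags=re.I):
            field, op, value = re.match(
                r"(\w+)\s*(<=|>=|=|<|>)\s*(\S+)", pred).groups()
            filt[field] = {OPS[op]: float(value)
                           if value.replace(".", "").isdigit()
                           else value.strip("'")}
    return table, filt, projection

print(sql_to_mongo("SELECT name, qty FROM orders WHERE qty > 10 AND status = 'A'"))
# ('orders', {'qty': {'$gt': 10.0}, 'status': {'$eq': 'A'}},
#  {'name': 1, 'qty': 1})
# With pymongo, the result would run as: db[table].find(filt, projection)
```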


Journal of Artificial Intelligence Research | 2010

Case-based subgoaling in real-time heuristic search for video game pathfinding

Vadim Bulitko; Yngvi Björnsson; Ramon Lawrence

Real-time heuristic search algorithms satisfy a constant bound on the amount of planning per action, independent of problem size. As a result, they scale up well as problems become larger. This property would make them well suited for video games, where Artificial Intelligence controlled agents must react quickly to user commands and to other agents' actions. On the downside, real-time search algorithms employ learning methods that frequently lead to poor solution quality and cause the agent to appear irrational by re-visiting the same problem states repeatedly. The situation changed recently with a new algorithm, D LRTA*, which attempted to eliminate learning by automatically selecting subgoals. D LRTA* is well poised for video games, except it has a complex and memory-demanding pre-computation phase during which it builds a database of subgoals. In this paper, we propose a simpler and more memory-efficient way of pre-computing subgoals, thereby eliminating the main obstacle to applying state-of-the-art real-time search methods in video games. The new algorithm solves a number of randomly chosen problems off-line, compresses the solutions into a series of subgoals and stores them in a database. When presented with a novel problem on-line, it queries the database for the most similar previously solved case and uses its subgoals to solve the problem. In the domain of pathfinding on four large video game maps, the new algorithm delivers solutions eight times better while using 57 times less memory and requiring 14% less pre-computation time.
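
A heavily simplified sketch of the offline/online loop follows: solve sample problems with full search offline, keep every k-th state on each solution path as a subgoal, and answer a new query from the most similar stored case. The grid world, BFS solver, and Manhattan similarity metric are stand-ins for the paper's actual machinery.

```python
# Illustrative sketch of case-based subgoaling, not the paper's exact
# algorithm: offline solving, subgoal compression, and nearest-case lookup.
from collections import deque

def bfs_path(grid, start, goal):
    """Plain breadth-first search on a 4-connected grid (0 free, 1 blocked)."""
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cur
                frontier.append(nxt)
    return None

def compress(path, k=3):
    """Keep every k-th state plus the goal: the stored subgoal chain."""
    return path[k::k] + [path[-1]]

def nearest_case(db, start, goal):
    """Return the stored (start, goal) case closest to the query endpoints."""
    def d(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return min(db, key=lambda c: d(c[0], start) + d(c[1], goal))

grid = [[0] * 8 for _ in range(8)]          # an empty 8x8 map
db = {}
case = ((0, 0), (7, 7))
db[case] = compress(bfs_path(grid, *case))  # offline phase
print(nearest_case(db, (1, 0), (6, 7)))     # online lookup -> ((0, 0), (7, 7))
```

Online, the agent runs its real-time search toward each retrieved subgoal in turn, which is what keeps per-action planning bounded.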


Information & Software Technology | 2004

The space efficiency of XML

Ramon Lawrence

XML is the future language for data exchange, and support for XML has been extensive. Although XML has numerous benefits including self-describing data, improved readability, and standardization, there are always tradeoffs in the introduction of new technologies that replace existing systems. The tradeoff of XML versus other data exchange languages is improved readability and descriptiveness versus space efficiency. There has been limited work on examining the space efficiency of XML. This paper compares XML to other data exchange formats. Experiments are performed to measure the overhead in XML files and determine the amount of space used for data, schema, and overhead in a typical XML document.
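
The measurement itself is easy to sketch: parse a document and compare payload bytes against total bytes, treating everything that is not element text as markup overhead. The sample document below is invented, and attributes are ignored for simplicity.

```python
# A small sketch of the kind of measurement the paper performs: how many
# bytes of an XML document are payload data versus markup overhead.
import xml.etree.ElementTree as ET

doc = """<orders>
  <order><id>17</id><customer>Acme</customer><total>99.50</total></order>
  <order><id>18</id><customer>Zenith</customer><total>12.00</total></order>
</orders>"""

root = ET.fromstring(doc)
# Element text is the payload; tags, whitespace, and brackets are overhead.
data_bytes = sum(len((e.text or "").strip()) for e in root.iter())
total_bytes = len(doc.encode("utf-8"))
print(f"total: {total_bytes} B, data: {data_bytes} B, "
      f"overhead: {100 * (1 - data_bytes / total_bytes):.0f}%")
```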


Canadian Conference on Electrical and Computer Engineering | 2009

Cluster head selection using RF signal strength

Scott Fazackerley; Alan Paeth; Ramon Lawrence

The LEACH algorithm for selecting cluster heads is a probabilistic method which produces clusters with a large variation of link distances and uneven energy consumption during the data transmission phase. To address this issue, an RF signal strength algorithm based on link quality is presented. Using a competitive distributed algorithm, nodes attempt to reduce the overall energy required for transmission in addition to forming favourable clusters based on Received Signal Strength Indication (RSSI) density and quality. Cluster heads form in areas of high node density leading to a significant reduction in transmission link length, a reduced variance in link length distribution and greater opportunity for energy savings through data aggregation. Simulations show that cluster heads selected by this algorithm form clusters with a lower average link length and have less link distance variability. This produces a lower and more evenly distributed energy cost per node in the network.
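
The sketch below captures the flavour of the selection rule: each node scores its neighbourhood by how strong (and therefore short) its in-range links are, and a node that beats all of its in-range neighbours declares itself a cluster head. The path-loss model, range threshold, and id-based tie-breaking are assumptions, not the paper's exact protocol.

```python
# RSSI-density cluster-head election sketch. The log-distance path-loss
# model, the -65 dBm range threshold, and id-based tie-breaking are
# assumptions; the paper's distributed protocol is more involved.
import math

def rssi(a, b, tx_dbm=-40.0, n=2.7):
    """Log-distance path loss: signal falls off with 10*n*log10(distance)."""
    d = max(math.dist(a, b), 0.1)
    return tx_dbm - 10 * n * math.log10(d)

def elect_heads(nodes, threshold_dbm=-65.0):
    score, neighbours = {}, {}
    for i, a in enumerate(nodes):
        links = {j: rssi(a, b) for j, b in enumerate(nodes)
                 if j != i and rssi(a, b) >= threshold_dbm}
        neighbours[i] = links
        # Margin above the threshold rewards dense, short-link neighbourhoods.
        score[i] = sum(v - threshold_dbm for v in links.values())
    return [i for i in range(len(nodes))
            if all(score[i] > score[j] or (score[i] == score[j] and i < j)
                   for j in neighbours[i])]

# Two natural clusters; the most central node of each should win.
nodes = [(0, 0), (1, 1), (1, 0), (9, 9), (10, 9)]
print(elect_heads(nodes))  # -> [2, 3]
```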


IEEE Sensors Applications Symposium | 2010

Reducing turfgrass water consumption using sensor nodes and an adaptive irrigation controller

Scott Fazackerley; Ramon Lawrence

This paper describes a complete wireless sensor and irrigation control system that reduces water consumption for residential turfgrass irrigation. It has been estimated that 50–75% of residential water use is for irrigation. Current systems are exceptionally poor at adapting irrigation to meet demand, primarily due to incomplete information for system operators who rely either on visual inspection or periodic irrigation programs. This results in over-watering and fertilizer and soil leaching. Our approach couples easy-to-deploy wireless soil moisture sensor nodes with an adaptive irrigation controller that waters on demand without user input. The result is a system that requires less user intervention, lowers water consumption, and adapts to changing climatic conditions.
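
A minimal sketch of the watering-on-demand decision follows, assuming a zone reports volumetric soil moisture as a percentage. The set points and hysteresis band are invented, not the controller's actual parameters.

```python
# Hysteresis sketch of on-demand irrigation: open the valve below a lower
# set point, close above an upper one, otherwise hold state (avoids rapid
# on/off cycling). Thresholds and the sensor interface are hypothetical.

def control_zone(moisture_pct, valve_open, low=22.0, high=30.0):
    if moisture_pct < low:
        return True      # soil too dry: start watering
    if moisture_pct > high:
        return False     # demand met: stop watering
    return valve_open    # in the dead band: keep current state

valve = False
for reading in [28.0, 24.0, 21.5, 25.0, 30.5]:
    valve = control_zone(reading, valve)
    print(f"moisture {reading:4.1f}% -> valve {'OPEN' if valve else 'closed'}")
```

Because the decision is driven by measured moisture rather than a fixed schedule, the same loop adapts automatically to rainfall and seasonal change.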


IEEE Transactions on Computational Intelligence and AI in Games | 2013

Database-Driven Real-Time Heuristic Search in Video-Game Pathfinding

Ramon Lawrence; Vadim Bulitko

Real-time heuristic search algorithms satisfy a constant bound on the amount of planning per action, independent of the problem size. These algorithms are useful when time or memory resources are limited, or a rapid response time is required. An example of such a problem is pathfinding in video games, where numerous units may be simultaneously required to react promptly to a player's commands. Classic real-time heuristic search algorithms cannot be deployed due to their obvious state revisitation (“scrubbing”). Recent algorithms have improved performance by using a database of precomputed subgoals. However, a common issue is that the precomputation time can be large, and there is no guarantee that the precomputed data adequately cover the search space. In this paper, we present a new approach that guarantees coverage by abstracting the search space, using the same algorithm that performs the real-time search. It reduces the precomputation time via the use of dynamic programming. The new approach eliminates the learning component and the resultant “scrubbing.” Experimental results on maps of tens of millions of grid cells from Counter-Strike: Source and benchmark maps from Dragon Age: Origins show significantly faster execution times and improved optimality results compared to previous real-time algorithms.
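
The coverage guarantee can be illustrated with a standard dynamic-programming analogy: once the map is abstracted into a small region graph, an all-pairs next-hop table (here plain Floyd-Warshall) yields a subgoal chain for every pair of regions. The region graph below is invented, and the paper's abstraction and dynamic program differ in detail.

```python
# Coverage-by-abstraction sketch: Floyd-Warshall with next-hop recovery on an
# invented region graph gives a subgoal chain for *every* region pair. This
# is an analogy for the paper's approach, not its actual algorithm.
INF = float("inf")

def next_hop_table(n, edges):
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
    for a, b, w in edges:                       # undirected region adjacency
        dist[a][b] = dist[b][a] = w
        nxt[a][b], nxt[b][a] = b, a
    for k in range(n):                          # classic Floyd-Warshall DP
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return nxt

def subgoal_chain(nxt, start, goal):
    """Read the region-to-region subgoal sequence out of the DP table."""
    chain, cur = [], start
    while cur != goal:
        cur = nxt[cur][goal]
        chain.append(cur)
    return chain

nxt = next_hop_table(5, [(0, 1, 1), (1, 2, 1), (2, 3, 2), (1, 4, 3)])
print(subgoal_chain(nxt, 0, 3))  # -> [1, 2, 3]
```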


Computers & Geosciences | 2006

Building a terabyte NEXRAD radar database for hydrometeorology research

Anton Kruger; Ramon Lawrence; Eduard C. Dragut

The management and processing of terabyte-scale radar data sets is time-consuming, costly, and an impediment to research. Researchers require rapid and transparent access to the data without being encumbered with the technical challenges of data management. In this paper, we describe a database architecture that manages over 12 TB (and growing) of Archive Level II data produced by the United States National Weather Service's network of WSR-88D weather radars. The contribution of this work is an automatic system for archiving and analyzing radar data that isolates geoscientists from the complexities of data storage and retrieval. Data access transparency is achieved by using a relational database to store metadata on the raw data, which enables simple SQL queries to retrieve data subsets of interest. The second component is a distributed web platform that cost-effectively distributes data across web servers for access using the ubiquitous HTTP protocol. This work demonstrates how massive data sets can be effectively queried and managed.
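
The metadata-driven access pattern is easy to sketch with any relational store: a table holds metadata about each raw radar volume, so a plain SQL query selects the subset of interest and yields the files' HTTP locations. The schema, radar ids, and URLs below are invented for illustration.

```python
# Sketch of metadata-driven retrieval: SQL over a metadata table returns the
# HTTP URLs of matching raw volumes, so the raw files are never scanned.
# Schema and rows are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE volume (radar TEXT, scan_time TEXT, url TEXT)")
db.executemany("INSERT INTO volume VALUES (?, ?, ?)", [
    ("KDVN", "2004-06-12T01:05:00", "http://server1.example/KDVN/v1.bz2"),
    ("KDVN", "2004-06-12T01:11:00", "http://server2.example/KDVN/v2.bz2"),
    ("KARX", "2004-06-12T01:07:00", "http://server1.example/KARX/v1.bz2"),
])

# "All KDVN volumes in a time window" without touching the raw files:
rows = db.execute(
    """SELECT url FROM volume
       WHERE radar = ? AND scan_time BETWEEN ? AND ?
       ORDER BY scan_time""",
    ("KDVN", "2004-06-12T01:00:00", "2004-06-12T01:10:00")).fetchall()
print([u for (u,) in rows])   # -> the HTTP URLs to fetch
```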

Collaboration


Dive into Ramon Lawrence's collaborations.

Top Co-Authors

Scott Fazackerley
University of British Columbia

Graeme Douglas
University of British Columbia

Bryce Cutt
University of British Columbia

Giuseppe Burtini
University of British Columbia

Michael Henderson
University of British Columbia

Wade Penson
University of British Columbia