Keith Gardiner
Dublin Institute of Technology
Publications
Featured research published by Keith Gardiner.
Database and Expert Systems Applications | 2002
James D. Carswell; Alan Eustace; Keith Gardiner; Eoin Kilfeather; Marco Neumann
This paper proposes a novel solution to querying hyperlinked multimedia cultural heritage datasets based on the user's context. Context in this sense is defined as the user's location in virtual space and the particular mobile device being modeled, together with user preferences or profile. The purpose is to automatically push relevant data from the database server to the client based on this comprehensive definition of the user's context. The mobile device currently being modeled acts as a primary filter for determining what data will be sent and in what format; for example, image data will not be sent to a mobile phone and video will not be sent to a PDA. The CHI (Cultural Heritage Interfaces) project differs from many of the models encountered on the Web in that its primary focus is not the accurate 3D rendering of a street or landscape, but the simulation of such a physical reality to explore the adaptive hypermedia paradigm in the context of a spatial navigation interface.
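To make the device-as-primary-filter idea concrete, the following is a minimal sketch of device-aware content filtering of the kind described above; the device classes, media rules and Content type are illustrative assumptions, not taken from the CHI implementation.

```python
# Illustrative sketch only: device classes, media rules and the Content type
# are assumptions for demonstration, not the CHI implementation.
from dataclasses import dataclass

# Media types each (hypothetical) device class is assumed to accept.
DEVICE_CAPABILITIES = {
    "mobile_phone": {"text", "audio"},          # e.g. no images pushed to a phone
    "pda":          {"text", "audio", "image"}, # e.g. no video pushed to a PDA
    "desktop":      {"text", "audio", "image", "video"},
}

@dataclass
class Content:
    title: str
    media_type: str   # "text", "image", "audio", or "video"
    location: str     # virtual-space location the item is attached to

def push_relevant_content(items, device_class, user_location, preferences):
    """Select items at the user's virtual location, filtered first by device
    capability and then by user preferences."""
    allowed = DEVICE_CAPABILITIES.get(device_class, {"text"})
    return [
        item for item in items
        if item.media_type in allowed
        and item.location == user_location
        and item.title not in preferences.get("excluded", set())
    ]

# Example: the video item is dropped for a PDA user, the image item is kept.
items = [Content("Facade photo", "image", "henrietta_street"),
         Content("Walkthrough", "video", "henrietta_street")]
print(push_relevant_content(items, "pda", "henrietta_street", {}))
```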
Transactions in GIS | 2010
James D. Carswell; Keith Gardiner; Junjun Yin
This article describes research carried out in the area of mobile spatial interaction (MSI) and the development of a 3D mobile version of a 2D web-based directional query processor. The TellMe application integrates location (from GPS, GSM, WiFi) and orientation (from magnetometer/accelerometer) sensor technologies into an enhanced spatial query processing module capable of exploiting a mobile device's position and orientation for querying real-world spatial datasets. This article outlines our technique for combining these technologies and the architecture needed to deploy them on a sensor-enabled smartphone (i.e. the Nokia 6210 Navigator). With all these sensor technologies now available on off-the-shelf devices, it is possible to employ a mobile query system that can work effectively in any environment using location and orientation as primary parameters for directional queries. Novel approaches for determining a user's visible query space in three dimensions based on their line-of-sight (ego-visibility) are investigated to provide for “hidden query removal” functionality. This article presents demonstrable results of a mobile application that is location, direction, and orientation aware, and that retrieves database objects and attributes (e.g. buildings, points-of-interest, etc.) by simply pointing, or “looking”, at them with a mobile phone.
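A minimal sketch of the pointing-query idea described above: the device position plus compass azimuth and tilt define a pointing ray, and an object is treated as "pointed at" if the ray passes within a tolerance of it. The local ENU frame, tolerance value and Building type are assumptions for illustration, not the TellMe implementation.

```python
# Sketch only: coordinates, tolerance and the Building type are assumptions,
# not the TellMe query processor.
import math
from dataclasses import dataclass

@dataclass
class Building:
    name: str
    x: float  # east (m) in a local frame
    y: float  # north (m)
    z: float  # height of the target point (m)

def pointing_ray(azimuth_deg, pitch_deg):
    """Unit direction vector from compass azimuth (deg clockwise from north)
    and pitch (deg above the horizon)."""
    az, pt = math.radians(azimuth_deg), math.radians(pitch_deg)
    return (math.sin(az) * math.cos(pt),   # east
            math.cos(az) * math.cos(pt),   # north
            math.sin(pt))                  # up

def pointed_at(device_xyz, azimuth_deg, pitch_deg, building, tol_m=10.0):
    """True if the ray from the device passes within tol_m of the building."""
    d = pointing_ray(azimuth_deg, pitch_deg)
    v = (building.x - device_xyz[0],
         building.y - device_xyz[1],
         building.z - device_xyz[2])
    t = sum(vi * di for vi, di in zip(v, d))   # projection onto the ray
    if t < 0:                                  # target is behind the user
        return False
    closest = tuple(device_xyz[i] + t * d[i] for i in range(3))
    return math.dist(closest, (building.x, building.y, building.z)) <= tol_m

# Standing at the origin, pointing north-east and slightly upward.
print(pointed_at((0, 0, 1.5), 45.0, 5.0, Building("Spire", 120, 120, 15)))  # True
```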
Web and Wireless Geographical Information Systems | 2009
Keith Gardiner; Junjun Yin; James D. Carswell
This paper describes research carried out in the area of mobile spatial interaction and the development of a mobile (i.e. on-device) version of a simulated web-based 2D directional query processor. The TellMe application integrates location (from GPS, GSM, WiFi) and orientation (from digital compass/tilt sensors) sensing technologies into an enhanced spatial query processing module capable of exploiting a mobile device's position and orientation for querying real-world 3D spatial datasets. This paper outlines the technique used to combine these technologies and the architecture needed to deploy them on a sensor-enabled smartphone (i.e. the Nokia 6210 Navigator). With all these sensor technologies now available on one device, it is possible to employ a personal query system that can work effectively in any environment using location and orientation as primary parameters for directional queries. In doing so, novel approaches for determining a user's query space in three dimensions based on line-of-sight and 3D visibility (ego-visibility) are also investigated. The result is a mobile application that is location, direction and orientation aware and, using these data, is able to identify objects (e.g. buildings, points-of-interest, etc.) by pointing at them or when they are in a specified field-of-view.
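Complementing the 3D pointing sketch above, the field-of-view case can be illustrated in 2D: an object is returned if its bearing from the device lies within half the field-of-view either side of the compass heading and within a maximum query range. The flat local coordinates and parameter values here are illustrative assumptions, not the published query processor.

```python
# Sketch only: coordinate frame, field-of-view and range values are assumptions.
import math

def bearing_deg(dx, dy):
    """Compass bearing (deg clockwise from north) of the offset (east=dx, north=dy)."""
    return math.degrees(math.atan2(dx, dy)) % 360.0

def in_field_of_view(device_xy, heading_deg, obj_xy, fov_deg=60.0, max_range_m=200.0):
    """True if the object lies inside the directional query space."""
    dx, dy = obj_xy[0] - device_xy[0], obj_xy[1] - device_xy[1]
    if math.hypot(dx, dy) > max_range_m:
        return False
    diff = (bearing_deg(dx, dy) - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# An object 100 m to the north-east while the user faces north-east, then south.
print(in_field_of_view((0.0, 0.0), 45.0, (70.0, 70.0)))   # True
print(in_field_of_view((0.0, 0.0), 180.0, (70.0, 70.0)))  # False (behind the user)
```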
ACM Symposium on Applied Computing | 2004
James D. Carswell; Keith Gardiner; Marco Neumann
Advances in spatially enabled semantic computing can provide situation-aware assistance for mobile users. This intelligent and context-aware technology presents the right information at the right time, place and situation by exploiting semantically referenced data for knowledge discovery. The system takes advantage of new metadata standards to enable semantic, user, and device adapted transactions on multimedia datasets. Information accessed in the past, the activities planned by the user, and the situation dependencies (e.g. location) of these activities are used to infer future information requirements. This paper describes an application of the above functionality for performing mobile context-aware queries on, and updates of, a multimedia spatial database of cultural heritage artifacts concerning early 20th-century Dublin. It aims to exploit current consumer trends in mobile device usage by opening new markets for the increasing number of visitors to Dublin's streets. An ongoing development of this technology, the MoCHA (Mobile Cultural Heritage Adventures) project, will allow the mobile cultural heritage consumer to explore a personally tailored view of Dublin's treasured artefacts, historical events and districts in an interactive and intuitive way directly on their spatially enabled PDA.
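A loose sketch of the inference step described above, ranking artifacts by nearness to the locations of planned activities and by overlap with themes accessed in the past. The field names, scoring weights and example data are illustrative assumptions, not the MoCHA design.

```python
# Sketch only: field names, weights and data are assumptions for illustration.
import math

def rank_artifacts(artifacts, planned_locations, past_themes, w_dist=1.0, w_theme=50.0):
    """Order artifacts by inferred relevance to the user's upcoming activities."""
    def score(a):
        nearest = min(math.dist(a["location"], loc) for loc in planned_locations)
        theme_overlap = len(set(a["themes"]) & past_themes)
        return w_theme * theme_overlap - w_dist * nearest
    return sorted(artifacts, key=score, reverse=True)

artifacts = [
    {"name": "1913 Lockout photo", "location": (315.0, 234.0), "themes": {"labour", "1913"}},
    {"name": "Georgian doorway",   "location": (120.0, 400.0), "themes": {"architecture"}},
]
print(rank_artifacts(artifacts, planned_locations=[(300.0, 240.0)], past_themes={"1913"}))
```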
Web and Wireless Geographical Information Systems | 2007
Seamus Rooney; Keith Gardiner; James D. Carswell
This paper describes current efforts to develop an open source, privacy-sensitive, location-determination software component for mobile devices. In mobile computing, the ability of a mobile device to determine its own location is becoming increasingly desirable, as such a feature enhances many commercial applications. There have been numerous attempts to achieve this, from network-based positioning and wireless beacons to the integration of GPS into mobile devices. Two important aspects to consider when using such a system are privacy and cost. This paper describes the development of a software component that is sensitive to these issues. The ICiNG Location Client (ILC) is based on pioneering work carried out by the Place Lab project at Intel (Hightower et al., 2006). The ILC advances this research by making it available on mobile devices and attempts to integrate GSM, WiFi, Bluetooth and GPS positioning into one positioning module. An outline of the ILC's design is given and some of the obstacles encountered during its development are described.
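A minimal sketch of the kind of on-device position fusion described above: each available technology contributes an estimate with an expected accuracy, and the module reports the best fix without sending raw observations off the device, which is the privacy-sensitive aspect. The accuracy figures and Estimate type are illustrative assumptions, not the ILC implementation.

```python
# Sketch only: the Estimate type and accuracy figures are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Estimate:
    source: str          # "gps", "wifi", "gsm", or "bluetooth"
    lat: float
    lon: float
    accuracy_m: float    # expected error radius

def best_fix(estimates: list[Estimate]) -> Optional[Estimate]:
    """Pick the most accurate currently available estimate; all computation
    stays on the device, so raw beacon sightings never leave it."""
    return min(estimates, key=lambda e: e.accuracy_m, default=None)

fixes = [
    Estimate("gsm",  53.3498, -6.2603, accuracy_m=800.0),
    Estimate("wifi", 53.3439, -6.2546, accuracy_m=40.0),
    Estimate("gps",  53.3441, -6.2549, accuracy_m=8.0),
]
print(best_fix(fixes))   # the GPS fix wins while a sky view is available
```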
Web and Wireless Geographical Information Systems | 2006
Andrea Rizzini; Keith Gardiner; Michela Bertolotto; James D. Carswell
Spatially enabled computing can provide assistance to both web-based and mobile users by exploiting positional information and associated contextual knowledge. The Mobile Environmental Management System (MEMS) is a proof-of-concept prototype developed to simplify the administrative duties of biologists at the Department of Fisheries and Oceans (DFO), Canada. MEMS aims to deliver context-aware functionality aided by visualization, analysis and manipulation of spatial and attribute datasets. The resulting application delivers a set of functions and services that aid the DFO's biologists in making everyday management decisions.
Database and Expert Systems Applications | 2015
Keith Gardiner; Charlie Cullen; James D. Carswell
This paper describes original research carried out in the area of Location-Based Services (LBS) with an emphasis on Auditory User Interfaces (AUI) for content delivery. Previous work in this area has focused on accurately determining spatial interactions and informing the user mainly by means of the visual modality. mobiSurround is new research that builds upon these principles with a focus on multimodal content delivery and navigation, and in particular the development of an AUI. This AUI enables the delivery of rich media content and natural directions using audio. This novel approach provides a hands-free method for navigating a space while experiencing rich media content dynamically constructed using techniques such as phrase synthesis, algorithmic music and 3D soundscaping. This paper outlines the innovative ideas employed in the design and development of the AUI, which provides an overall immersive user experience.
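A small sketch of the phrase-synthesis idea mentioned above: a spoken direction is assembled from a bank of short pre-recorded phrase clips rather than delivered as one large audio file. The clip names and trigger structure are illustrative assumptions, not the mobiSurround asset set.

```python
# Sketch only: clip names and structure are assumptions for illustration.

# Hypothetical bank of phrase clips bundled with the app (name -> audio file).
PHRASE_CLIPS = {
    "turn_left": "turn_left.ogg",
    "turn_right": "turn_right.ogg",
    "continue": "continue_straight.ogg",
    "at_the": "at_the.ogg",
    "landmark_spire": "the_spire.ogg",
    "landmark_gpo": "the_gpo.ogg",
}

def synthesise_direction(manoeuvre: str, landmark: str) -> list[str]:
    """Return the ordered clip playlist for e.g. 'turn left at the Spire'."""
    return [PHRASE_CLIPS[manoeuvre], PHRASE_CLIPS["at_the"], PHRASE_CLIPS[landmark]]

# The AUI would queue these clips on an audio player as the user walks.
print(synthesise_direction("turn_left", "landmark_spire"))
```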
Journal of Location Based Services | 2016
Keith Gardiner; Charlie Cullen; James D. Carswell
Delivering high-quality, context-relevant information in a timely manner is a priority for location-based services (LBS), where applications require an immediate response based on spatial interaction. Previous work in this area typically focused on ever more accurately determining this interaction and informing the user in the customary graphical way using the visual modality. This paper describes the research area of multimodal LBS and focuses on audio as the key delivery mechanism. This new research extends familiar graphical information delivery by introducing a geoservices platform for delivering multimodal content and navigation services. It incorporates a novel auditory user interface (AUI) that enables delivery of natural language directions and rich media content using audio. This unifying concept provides a hands-free modality for navigating a mapped space while simultaneously enjoying rich media content that is dynamically constructed using mechanisms such as algorithmic music and phrase synthesis to generate task-relevant content based on the path taken. This paper outlines the innovative ideas employed in the design of the AUI and details the geoservices platform developed for facilitating the authoring and delivery of multimodal LBS applications. The paper concludes with a discussion of the results of a live user trial, which are analysed and presented to validate the original hypothesis for this approach, address the research questions outlined and inform further research directions. The results show that the proposed solution significantly progresses the state of the art in mobile tour production. They also show that an AUI is an effective modality for the delivery of audio content and natural directions when used in combination with a graphical user interface, producing significantly reduced overheads in terms of content size and network usage, and indicate that the AUI provides a good overall user experience, performing well in the user trial.
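A rough sketch of generating task-relevant directions from the path taken, as described above: the manoeuvre at each interior waypoint is classified from the change in bearing between consecutive path segments, and the resulting tokens could feed a phrase-synthesis step like the one sketched earlier. The turn thresholds and coordinate convention are illustrative assumptions, not the published geoservices design.

```python
# Sketch only: thresholds and coordinates are assumptions for illustration.
import math

def bearing(p, q):
    """Compass bearing (deg clockwise from north) from p to q in a local frame."""
    return math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360.0

def directions_along(path):
    """Yield a manoeuvre token for each interior waypoint of the path."""
    for a, b, c in zip(path, path[1:], path[2:]):
        turn = (bearing(b, c) - bearing(a, b) + 180.0) % 360.0 - 180.0
        if turn > 30.0:
            yield "turn_right"
        elif turn < -30.0:
            yield "turn_left"
        else:
            yield "continue"

# A path that heads north, then turns east, then keeps going east.
path = [(0, 0), (0, 100), (100, 100), (200, 100)]
print(list(directions_along(path)))   # ['turn_right', 'continue']
```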
Web Information Systems Engineering | 2003
Keith Gardiner; James D. Carswell
Archive | 2010
James D. Carswell; Keith Gardiner; Junjun Yin