Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matthew Montebello is active.

Publication


Featured research published by Matthew Montebello.


Revised Papers from the NODe 2002 Web and Database-Related Workshops on Web, Web-Services, and Database Systems | 2002

DAML Enabled Web Services and Agents in the Semantic Web

Matthew Montebello; Charlie Abela

Academic and industrial bodies are considering the issue of Web Services as being the next step forward. A number of efforts have been made and are evolving to define specifications and architectures for the spreading of this new breed of web applications. One such work revolves around the Semantic Web. Leading researchers are trying to bring the semantic advantages that a Semantic Web can provide to Web Services. The research started with the now-standardized RDF (Resource Description Framework) and continued with the creation of DAML+OIL (DARPA Agent Markup Language and Ontology Inference Layer) and its branches, particularly DAML-S (where S stands for Services) [1]. The Semantic Web point of view considered in this paper presents a rich environment where the advantages of incorporating semantics in searching for Web Services can be fully expressed. This paper aims to describe an environment called DASD (DAML Agents for Service Discovery) where Web Service requesters and providers can discover each other with the intermediary action of a Matchmaking service.
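Matchmaking between service requesters and providers, as in DASD, is typically framed as comparing a requested capability against advertised ones along an ontology's subsumption hierarchy. The sketch below illustrates the classic degrees of match; the toy class hierarchy and names are invented for illustration and are not taken from the DASD system.

```python
# Illustrative matchmaking sketch: degree of match between a requested
# and an advertised service output, based on a toy subsumption hierarchy.
# The hierarchy and concept names are hypothetical, not from DASD.

# child -> parent relations in a toy ontology
SUBCLASS_OF = {
    "SedanPrice": "CarPrice",
    "CarPrice": "VehiclePrice",
    "BikePrice": "VehiclePrice",
}

def ancestors(cls):
    """All superclasses of cls, nearest first."""
    out = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        out.append(cls)
    return out

def degree_of_match(requested, advertised):
    """Classic matchmaking degrees: exact > plug-in > subsumes > fail."""
    if requested == advertised:
        return "exact"
    if requested in ancestors(advertised):
        return "plug-in"   # advertisement is more specific than the request
    if advertised in ancestors(requested):
        return "subsumes"  # advertisement is more general than the request
    return "fail"

print(degree_of_match("CarPrice", "SedanPrice"))  # plug-in
```

A matchmaker then ranks advertisements for a request by this degree, preferring exact over plug-in over subsumes matches.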


Information Sciences | 2008

Interschema correspondence establishment in a cooperative OWL-based multi-information server grid environment

Abdel-Rahman H. Tawil; Matthew Montebello; Rami Bahsoon; W. A. Gray; Nick J. Fiddian

Establishing interschema semantic knowledge between corresponding elements in a cooperating OWL-based multi-information server grid environment requires deep knowledge, not only about the structure of the data represented in each server, but also about the commonly occurring differences in the intended semantics of this data. The same information could be represented in various incompatible structures, and more importantly the same structure could be used to represent data with many diverse and incompatible semantics. In a grid environment interschema semantic knowledge can only be detected if both the structural and semantic properties of the schemas of the cooperating servers are made explicit and formally represented in a way that a computer system can process. Unfortunately, very often there is a lack of such knowledge, and the schemas of the underlying grid information servers (ISs), being semantically weak as a consequence of the limited expressiveness of traditional data models, do not help the acquisition of this knowledge. The solution to overcome this limitation is primarily to upgrade the semantic level of the IS local schemas through a semantic enrichment process by augmenting the local schemas of grid ISs to semantically enriched schema models, then to use these models in detecting and representing correspondences between classes belonging to different schemas. In this paper, we investigate the possibility of using OWL-based domain ontologies both for building semantically rich schema models, and for expressing interschema knowledge and reasoning about it. We believe that the use of OWL/RDF in this setting has two important advantages. On the one hand, it enables a semantic approach for interschema knowledge specification, by concentrating on expressing conceptual and semantic correspondences between both the conceptual (intensional) definition and the set of instances (extension) of classes represented in different schemas. On the other hand, it is exactly this semantic nature of our approach that allows us to devise reasoning mechanisms for discovering and reusing interschema knowledge when the need arises to compare and combine it.
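One kind of reasoning over interschema correspondences can be illustrated with a minimal sketch: if class equivalences are asserted pairwise between schemas, transitive closure lets further correspondences be derived rather than restated. This is only an illustration of the general idea, not the paper's OWL-based mechanism; all schema and class names below are invented.

```python
# Minimal sketch: deriving interschema class correspondences by
# transitive closure over asserted equivalences (union-find).
# Schema and class names are hypothetical, not from the paper.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps trees shallow
        x = parent[x]
    return x

def assert_equivalent(a, b):
    """Record that class a and class b denote the same concept."""
    parent[find(a)] = find(b)

def equivalent(a, b):
    """True if a and b are (directly or transitively) equivalent."""
    return find(a) == find(b)

# Pairwise assertions made between three server schemas:
assert_equivalent("schemaA:Employee", "schemaB:StaffMember")
assert_equivalent("schemaB:StaffMember", "schemaC:Worker")

# Derived correspondence, never asserted directly:
print(equivalent("schemaA:Employee", "schemaC:Worker"))  # True
```

In the paper's setting this role is played by OWL axioms such as owl:equivalentClass, with a description-logic reasoner computing the entailed correspondences.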


Serious Games and Edutainment Applications | 2011

Social Interactive Learning in Multiplayer Games

Vanessa Camilleri; Leonard Busuttil; Matthew Montebello

The way people learn and live is constantly evolving. Whereas, a couple of decades ago, society required a workforce dominated primarily by the 'production-line' paradigm, nowadays the balance has tipped towards the necessity of a workforce which is dynamic, innovative, creative, and able to deal with problems in the most efficient manner. These characteristics are most often inherent in 'Gamers', the section of the workforce that society is now harbouring. This chapter explores some of the characteristics which games are capable of drawing out, and extrapolates them to a learning continuum shifting from the individual to a more collaborative framework. Ultimately this chapter aims to show why a shift in mentality needs to occur when it comes to education and learning, as we move forward in the same steps which games have successfully undertaken.


string processing and information retrieval | 1998

Information overload-an IR problem?

Matthew Montebello

Information overload on the World Wide Web (WWW) is a well recognised problem. Research to subdue this problem and extract maximum benefit from the Internet is still in its infancy. With huge amounts of information connected to the Internet, efficient and effective discovery of resources and knowledge has become an imminent research issue. A vast array of network services is growing up around the Internet and a massive amount of information is added every day. Despite the potential benefits of existing indexing, retrieving and searching techniques in assisting users in the browsing process, little has been done to ensure that the information presented is of a high recall and precision standard. Therefore, searching for specific information on this massive and exploding information resource base becomes highly critical. The author discusses the issues involved in resolving the information overload over the WWW and argues that this is solely an information retrieval problem. As a contribution to the field he proposes a general architecture to subdue information overload and describes how this architecture has been instantiated in a functional system he developed.
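The recall and precision standards that this abstract (and the SIGIR'98 one below) appeal to have precise set-based definitions, which a short sketch makes concrete. The document sets here are invented for illustration.

```python
# Recall/precision sketch: `relevant` and `retrieved` are sets of
# document ids; the judgements below are invented for illustration.

def recall(relevant, retrieved):
    """Fraction of the relevant documents that were actually retrieved."""
    return len(relevant & retrieved) / len(relevant)

def precision(relevant, retrieved):
    """Fraction of the retrieved documents that are actually relevant."""
    return len(relevant & retrieved) / len(retrieved)

relevant = {"d1", "d2", "d3", "d4"}
retrieved = {"d1", "d2", "d5"}

print(recall(relevant, retrieved))     # 0.5: 2 of the 4 relevant docs found
print(precision(relevant, retrieved))  # 2 of the 3 retrieved docs relevant
```

The tension the abstracts describe is that widening a search raises recall while usually lowering precision, which is why the author treats overload as an IR problem.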


Archive | 2013

Using Mobile Technology and a Participatory Sensing Approach for Crowd Monitoring and Management During Large-Scale Mass Gatherings

Martin Wirz; Eve Mitleton-Kelly; Tobias Franke; Vanessa Camilleri; Matthew Montebello; Daniel Roggen; Paul Lukowicz; Gerhard Tröster

A real-time understanding of the behavior of pedestrian crowds in physical spaces is important for crowd monitoring and management during large-scale mass gatherings. Thanks to the proliferation of location-aware smartphones in our society, we see a big potential in inferring crowd behavior patterns by tracking the location of attendees via their mobile phones. This chapter describes a framework to infer and visualize crowd behavior patterns in real-time, using a specially developed smartphone app. Attendees at an event voluntarily provide their location updates and in return may receive timely, targeted and personalized notifications directly from the security personnel, which can be of help during an emergency situation. Users also have access to event-related information including travel advice to the location. We conducted a system trial during the Lord Mayor's Show 2011 in London, UK, and the Notte Bianca festival 2011 in Valletta, Malta. In this chapter, besides verifying the technological feasibility, we report on interviews conducted with app users and police forces that were accessing the monitoring tools during the event. We learned from both sides that the created feedback loop between the attendees of the event running the app and the security personnel is seen as a strong incentive to follow such a participatory sensing approach. The researchers worked closely with policy makers, the emergency services and event organisers; the policy implications of using the Socionical App are discussed, as well as the response of users to being guided by an AmI device during a possible emergency.
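A first approximation of the crowd-density inference behind such monitoring tools is to bin the voluntarily reported locations into a spatial grid and count reports per cell. This is only an illustrative sketch of the idea, not the chapter's framework; the coordinates and cell size below are invented.

```python
from collections import Counter

# Sketch: bin (lat, lon) location updates into grid cells to obtain a
# crude crowd-density map. Coordinates and cell size are invented.

CELL = 0.001  # grid resolution in degrees (roughly 100 m)

def cell_of(lat, lon):
    """Map a coordinate to its grid cell index."""
    return (round(lat / CELL), round(lon / CELL))

def density(updates):
    """Number of reports per grid cell."""
    return Counter(cell_of(lat, lon) for lat, lon in updates)

# Three voluntary updates; the first two fall in the same cell.
updates = [(35.8989, 14.5146), (35.8990, 14.5147), (35.9005, 14.5146)]
d = density(updates)
print(max(d.values()))  # the densest cell holds 2 reports
```

In a deployed system the counts would be smoothed and updated continuously, and the dense cells rendered as a heatmap for the security personnel.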


international acm sigir conference on research and development in information retrieval | 1998

Optimizing recall/precision scores in IR over the WWW

Matthew Montebello

The rapid growth of the World Wide Web (WWW) and the massive size of the information corpus available for access symbolizes the wealth and benefits of this medium. At the same time this immense pool of information has created an information overflow which requires users to revert to techniques and tools in order to take advantage of such a resource and enhance the effectiveness of online information access. Search engines were created to assist users to find information by employing indexing techniques and suggesting appropriate alternatives to browse. These search engines have inefficiencies and are not focused enough on the needs of individual users, and little has been done to ensure that the information presented is of a high recall and precision standard. 'Recall' measures how efficient the system is at retrieving the relevant documents from the WWW, while 'precision' measures the relevance of the retrieved set of documents to the users' requirements. We present our experiences with a system we developed to optimize the recall/precision scores. We attempt to achieve this objective by employing a number of search engines and user profiling in tandem. Namely, we attempt to optimize:
- recall, by aggregating hits from major search engines and other previously developed retrieval systems;
- precision, by generating user profiles and predicting appropriate and focused information for specific users.
Our system is able to easily and inexpensively accommodate future generations of web-based retrieval systems and technologies. Our contribution to the IR field is that we were able to incorporate several desirable characteristics from different techniques to optimize and personalize WWW searching.

1 Background and Motivations

In recent years there has been a well-publicized explosion of information available on the Internet, and a corresponding increase in usage. The WWW's sheer scale and its exponential growth render the task of simply finding information, tucked away in some Web site, laborious, tedious, long-winded and time consuming. The fact that a user's time is valuable and that relevant information might not be accessed imposes serious restrictions on the efficient use of the WWW and the benefits that users can expect from their interaction. Users are faced with the problem of search engines being too generalised [2, 3] and not focused enough [8, 9] on their real and specific needs. This has triggered research to develop more sophisticated techniques and agent-like systems that make use of the user profile to personalize the service they provide and add value to the information they present [4, 5, 7]. In Section 2 we briefly present the system we developed to reuse information generated by search engines and previously developed retrieval systems. Conceptually, it is similar to a meta-search engine, but with the major difference that it employs user profiling to specifically target documents for individual users. The system makes use of a number of machine learning techniques to extract features from documents and generate profiles. We point out how this web-based application has been designed and developed to incorporate different techniques and evolve in the eventuality that new retrieval systems or novel machine learning techniques need to be implemented in future. Finally, in Section 3 we present our conclusions and future work.

2 Architecture and Current Implementation

Our goal is to achieve a high recall and high precision performance score on the information presented to the user. Recall measures how efficient the system is at retrieving the relevant documents from the WWW, while precision measures the relevance of the retrieved set of documents to the user's requirements. In order to obtain a high recall execution we make use of the metasearch approach, namely, hits returned by a number of traditional search engines together with previously developed retrieval systems are blended, aggregated and utilised by our system. On the other hand, to achieve a high precision execution we employ machine learning techniques [6] to extract features from documents specific users find interesting, generate a profile and predict other documents that fit this profile. The task performed by the system is decomposed into a number of simpler tasks. Figure 1 shows the major components of the system: the WWW and the external systems at the bottom level, the underlying application software on the next level up, and the GUI at the top. All the external systems are considered to be black boxes and action is taken upon the information they output. Wrappers are used to manage the appropriate and proper handshaking between the diverse search engines, the other retrieving systems, and the application layer. The system requires an administrator to manage the general needs and demands of a specific interest group of users. The administrator can initialise search terms tailored to any type of interest group, and furthermore users are able to suggest any other terms to add to the main search list. Documents relevant to the specific area of interest are retrieved and stored by the underlying application within the main index, and when a user logs in he/she is able to benefit from the system's high recall fidelity. Having analysed the documents, individual users can bookmark and highlight specific items as interesting and appealing. These will be saved inside their personal database index. At this stage the underlying application plays another important role in attaining precise targeting of documents to individual users by generating a profile from the personal database index and predicting other documents from within the main index. Users can decide to add the suggested documents to their personal database index or remove them completely. As new and suggested documents are entered in the personal database index the user profile becomes more focused and finely tuned, as a result of which higher precision results will be achieved.


international conference on games and virtual worlds for serious applications | 2011

Virtual World Presence for Pre-service Teachers: Does the TAM Model Apply?

Vanessa Camilleri; Matthew Montebello

In this paper, we present a model framework for testing the Technology Acceptance Model (TAM), initially proposed by Davis [5], with pre-service teachers using Virtual Worlds (VWs). The main hypothesis of this study states that the use of VWs will enhance technology acceptance by pre-service teachers, and will also facilitate adoption of technology applications within the classroom environment. There have been plenty of studies which have tested the TAM within work-related environments. Other breakthrough studies have also tried to apply the TAM to an education environment, investigating reasons for the possible lack of adoption of technology by teachers within the classroom. However, as yet, the model's effectiveness has not been investigated with immersive technology applications such as VWs and their possible use and adoption within the teacher training framework.


international world wide web conferences | 2006

DoNet: a semantic domotic framework

Malcolm Attard; Matthew Montebello

In the very near future complete households will be entirely networked as a de facto standard. In this poster we briefly describe our work in the area of domotics, where personalization, semantics and agent technology come together. We illustrate a home-system-oriented ontology and an intelligent agent-based framework for the rapid development of home control and automation. The ever-changing nature of the home places the user in a position where he needs to be involved and become, through DoNet, a part of an ongoing home system optimization process.


Archive | 2006

CCBR Ontology for Reusable Service Templates

Charlie Abela; Matthew Montebello

This paper presents the motivation and design of CCBROnto, an OWL ontology for Conversational Case-Based Reasoning (CCBR). We use this ontology to define cases that can eventually be stored, retrieved and reused by a mixed-initiative approach based on CCBR. We apply this technique to retrieving Web Service Composition templates.


international database engineering and applications symposium | 2000

Wrapping WWW information sources

Matthew Montebello
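The two-stage strategy described in the SIGIR'98 abstract above — aggregate hits from several engines for recall, then rank the pool against a profile learned from the user's bookmarked documents for precision — can be sketched in a few lines. The engine names, documents and the bag-of-words profile below are all invented for illustration; the actual system used richer machine-learning feature extraction.

```python
from collections import Counter
from math import sqrt

# Sketch of metasearch + user profiling:
# 1) recall: union the hits returned by several engines;
# 2) precision: rank the pooled hits by cosine similarity to a
#    bag-of-words profile built from the user's bookmarked documents.
# All engine results and document texts are invented.

def bag(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def profile(bookmarked):
    """Aggregate word counts over the user's bookmarked documents."""
    p = Counter()
    for text in bookmarked:
        p.update(bag(text))
    return p

def metasearch(results_per_engine, user_profile):
    """Blend hits from all engines, then rank by profile similarity."""
    pool = {url: text for hits in results_per_engine for url, text in hits}
    return sorted(pool, key=lambda u: cosine(bag(pool[u]), user_profile),
                  reverse=True)

engine_a = [("u1", "machine learning for text retrieval"),
            ("u2", "cheap holiday deals")]
engine_b = [("u2", "cheap holiday deals"),
            ("u3", "learning resources for schools")]

p = profile(["user profiles and machine learning", "text retrieval systems"])
print(metasearch([engine_a, engine_b], p))  # most profile-relevant hit first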

Collaboration


Dive into Matthew Montebello's collaboration.

Top Co-Authors

Nicholas Micallef

Glasgow Caledonian University
