Publication


Featured research published by Ria Mae Borromeo.


International Database Engineering and Applications Symposium | 2015

Automatic vs. Crowdsourced Sentiment Analysis

Ria Mae Borromeo; Motomichi Toyama

Due to the amount of work needed in manual sentiment analysis of written texts, techniques for automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems ranges only from low to medium. In this study, we solve a sentiment analysis problem by crowdsourcing. Crowdsourcing is a problem-solving approach that uses the cognitive power of people to achieve specific computational goals. It is implemented through an online platform, which can be either paid or volunteer-based. We deploy crowdsourcing applications on paid and volunteer-based platforms to classify teaching evaluation comments from students. We present a comparison of the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that crowdsourced sentiment analysis on both paid and volunteer-based platforms is considerably more accurate than the automatic sentiment analysis algorithm but still fails to achieve high accuracy compared to the manual method. To improve accuracy, the effect of increasing the size of the crowd could be explored in the future.
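The comparison the abstract describes can be illustrated with a minimal sketch: aggregate per-comment crowd labels (assuming majority voting, which the abstract does not specify) and score each method against the manual labels. All labels and values below are hypothetical.

```python
from collections import Counter

def majority_label(labels):
    """Aggregate one comment's crowd labels by majority vote."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(predicted, gold):
    """Fraction of comments on which a method agrees with the manual labels."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# Hypothetical sentiment labels for five teaching-evaluation comments.
gold = ["pos", "neg", "pos", "neu", "neg"]            # manual sentiment analysis
crowd = [["pos", "pos", "neg"], ["neg", "neg", "neu"],
         ["pos", "neu", "pos"], ["neu", "neu", "pos"],
         ["neg", "pos", "neg"]]                       # three workers per comment
automatic = ["pos", "neu", "pos", "neu", "pos"]       # automatic system output

crowd_pred = [majority_label(ls) for ls in crowd]
print(accuracy(crowd_pred, gold))   # crowd accuracy
print(accuracy(automatic, gold))    # automatic-system accuracy
```

With these toy labels the aggregated crowd matches the manual labels more often than the automatic system, mirroring the paper's finding.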


Human-centric Computing and Information Sciences | 2016

An investigation of unpaid crowdsourcing

Ria Mae Borromeo; Motomichi Toyama

The continual advancement of internet technologies has led to the evolution of how individuals and organizations operate. For example, through the internet, we can now tap a remote workforce to help us accomplish certain tasks, a phenomenon called crowdsourcing. Crowdsourcing is an approach that relies on people to perform activities that are costly or time-consuming with traditional methods. Depending on the incentive given to the crowd workers, crowdsourcing can be classified as paid or unpaid. In paid crowdsourcing, the workers are incentivized financially, enabling the formation of a robust workforce, which allows fast completion of tasks. Conversely, in unpaid crowdsourcing, the lack of financial incentive potentially leads to an unpredictable workforce and indeterminable task completion time. However, since payment to workers is not necessary, it can be an economical alternative for individuals and organizations who are more concerned about the budget than the task turnaround time. In this study, we explore unpaid crowdsourcing by reviewing crowdsourcing applications where the crowd comes from a pool of volunteers. We also evaluate its performance in sentiment analysis and data extraction projects. Our findings suggest that for such tasks, unpaid crowdsourcing completes more slowly but yields results of similar or higher quality compared to its paid counterpart.


International Database Engineering and Applications Symposium | 2016

The Influence of Crowd Type and Task Complexity on Crowdsourced Work Quality

Ria Mae Borromeo; Thomas Laurent; Motomichi Toyama

As the use of crowdsourcing spreads, the need to ensure the quality of crowdsourced work is magnified. While quality control in crowdsourcing has been widely studied, established mechanisms may still be improved to take into account other factors that affect quality. However, since crowdsourcing relies on humans, it is difficult to identify and consider all factors affecting quality. In this study, we conduct an initial investigation of the effect of crowd type and task complexity on work quality by crowdsourcing a simple and a more complex version of a data extraction task to paid and unpaid crowds. We then measure the quality of the results in terms of their similarity to a gold-standard data set. Our experiments show that the unpaid crowd produces results of high quality regardless of the type of task, while the paid crowd yields better results only on simple tasks. We intend to extend our work to integrate existing quality control mechanisms and perform more experiments with more varied crowd members.
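The gold-standard comparison could be scored in several ways; one common sketch, assuming each extraction result is a set of (field, value) pairs, is Jaccard similarity. This is an illustration, not necessarily the paper's exact metric, and the sample records are hypothetical.

```python
def jaccard_similarity(extracted, gold):
    """Similarity of an extracted record to the gold standard:
    |intersection| / |union| of the two sets of (field, value) pairs."""
    extracted, gold = set(extracted), set(gold)
    if not extracted and not gold:
        return 1.0
    return len(extracted & gold) / len(extracted | gold)

# Hypothetical extraction results for one item.
gold_record = {("title", "DB Systems"), ("year", "2016"), ("venue", "IDEAS")}
unpaid_crowd = {("title", "DB Systems"), ("year", "2016"), ("venue", "IDEAS")}
paid_crowd = {("title", "DB Systems"), ("year", "2016")}   # missing one field

print(jaccard_similarity(unpaid_crowd, gold_record))  # 1.0
print(jaccard_similarity(paid_crowd, gold_record))
```

Averaging such per-record scores over the whole task gives a single quality number per crowd type.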


International Database Engineering and Applications Symposium | 2015

A Data Retrieval Model Based on Independence Rules for SuperSQL

Arnaud Wolf; Ria Mae Borromeo; Motomichi Toyama

SuperSQL is an extension of SQL that automatically formats data retrieved from the database into various kinds of application data (HTML, PDF...). Current developments led us to identify improvement points and to remodel the design of the SuperSQL architecture. Among them, in the current SuperSQL version, the emptiness of a single relation leads to the emptiness of the entire table forming the output data. This is because the process handling the retrieval of desired data does not consider the schema representation of the data and thus does not identify independence between data lists. In this paper, we propose a new data retrieval process based on a three-layer model: the definition layer, the equivalence layer, and the optimisation layer. As a result, our proposed architecture is able to manage empty sets and allows easier integration of future developments.


The VLDB Journal | 2018

User group analytics: hypothesis generation and exploratory analysis of user data

Behrooz Omidvar-Tehrani; Sihem Amer-Yahia; Ria Mae Borromeo

User data is becoming increasingly available in multiple domains ranging from the social Web to retail store receipts. User data is described by user demographics (e.g., age, gender, occupation) and user actions (e.g., rating a movie, publishing a paper, following a medical treatment). The analysis of user data is appealing to scientists who work on population studies, online marketing, recommendations, and large-scale data analytics. User data analytics usually relies on identifying group-level behavior such as “Asian women who publish regularly in databases.” Group analytics addresses peculiarities of user data such as noise and sparsity to enable insights. In this paper, we introduce a framework for user group analytics by developing several components which cover the life cycle of user groups. We provide two different analytical environments to support “hypothesis generation” and “exploratory analysis” on user groups. Experiments on datasets with different characteristics show the usability and efficiency of our group analytics framework.


Information Systems | 2017

Deployment strategies for crowdsourcing text creation

Ria Mae Borromeo; Thomas Laurent; Motomichi Toyama; Maha Alsayasneh; Sihem Amer-Yahia; Vincent Leroy

Automatically generating text of high quality in tasks such as translation, summarization, and narrative writing is difficult, as these tasks require creativity, which only humans currently exhibit. However, crowdsourcing such tasks is still a challenge, as they are tedious for humans and can require expert knowledge. We thus explore deployment strategies for crowdsourcing text creation tasks to improve the effectiveness of the crowdsourcing process. We consider effectiveness through the quality of the output text, the cost of deploying the task, and the latency in obtaining the output. We formalize a deployment strategy in crowdsourcing along three dimensions: work structure, workforce organization, and work style. Work structure can be either simultaneous or sequential, workforce organization either independent or collaborative, and work style either human-only or a combination of machine and human intelligence. We implement these strategies for translation, summarization, and narrative writing tasks by designing a semi-automatic tool that uses the Amazon Mechanical Turk API and experiment with them in different input settings such as text length, number of sources, and topic popularity. We report our findings regarding the effectiveness of each strategy and provide recommendations to guide requesters in selecting the best strategy when deploying text creation tasks.
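The three dimensions named in the abstract define a small strategy space that can be enumerated directly; each combination is one candidate deployment strategy. The variable names below are illustrative, not from the paper's tool.

```python
from itertools import product

# The three dimensions of a deployment strategy, as named in the abstract.
WORK_STRUCTURE = ["simultaneous", "sequential"]
WORKFORCE_ORGANIZATION = ["independent", "collaborative"]
WORK_STYLE = ["human-only", "hybrid (machine + human)"]

# Every combination is a candidate strategy: 2 x 2 x 2 = 8 in total.
strategies = list(product(WORK_STRUCTURE, WORKFORCE_ORGANIZATION, WORK_STYLE))

for structure, organization, style in strategies:
    print(f"{structure} / {organization} / {style}")
```

Enumerating the space like this makes it easy to run the same text creation task once per strategy and compare quality, cost, and latency across the eight configurations.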


Extending Database Technology | 2017

Crowdsourcing Strategies for Text Creation Tasks

Ria Mae Borromeo; Maha Alsayasneh; Sihem Amer-Yahia; Vincent Leroy


IEEE Transactions on Knowledge and Data Engineering | 2018

Personalized and Diverse Task Composition in Crowdsourcing

Maha Alsayasneh; Sihem Amer-Yahia; Eric Gaussier; Vincent Leroy; Julien Pilourdault; Ria Mae Borromeo; Motomichi Toyama; Jean Michel Renders


IEEE International Conference on Data Science and Advanced Analytics | 2016

Task Composition in Crowdsourcing

Sihem Amer-Yahia; Eric Gaussier; Vincent Leroy; Julien Pilourdault; Ria Mae Borromeo; Motomichi Toyama


IEEE International Conference on Data Science and Advanced Analytics | 2017

Customizing Travel Packages with Interactive Composite Items

Manish Singh; Ria Mae Borromeo; Anas Hosami; Sihem Amer-Yahia; Shady Elbassuoni

Collaboration


Dive into Ria Mae Borromeo's collaborations.

Top Co-Authors

Sihem Amer-Yahia (Centre national de la recherche scientifique)
Maha Alsayasneh (Centre national de la recherche scientifique)
Eric Gaussier (Centre national de la recherche scientifique)
Julien Pilourdault (Centre national de la recherche scientifique)