Pedro Alves Nogueira
University of Porto
Publications
Featured research published by Pedro Alves Nogueira.
Biosensors and Bioelectronics | 2008
Manuel Azenha; Porkodi Kathirvel; Pedro Alves Nogueira; António Fernando-Silva
The present manuscript reports the first application of molecular modelling to the design of molecularly imprinted polymers (MIPs) prepared by alkoxysilane sol-gel polymerization. The major goal was to determine the requisite level of theory for the selection of suitable alkoxysilane functional monomers. A comparative study, applied to the design of a MIP for beta-damascenone, was performed involving different levels of theory, basis set superposition error (BSSE) correction, basis set augmentation, and also semi-empirical methods. The computational results suggest that the use of the 3-21G basis set together with a method for BSSE correction represents a good compromise between theory level and computation time for the successful screening of functional monomers. Additionally, a few selected MIPs and their corresponding non-imprinted congeners (NIPs) were prepared and tested as solid-phase extraction (SPE) sorbents. The confrontation of the computational results with the observed performance and morphological characteristics of the prepared MIPs suggests that, besides the strength and type of interactions existing between template and functional monomers, other concomitant features related to the sol-gel process must also be accounted for so that effective molecular imprinting is achieved in an alkoxysilane xerogel. Nevertheless, since an optimal template-functional monomer interaction is a necessary condition for successful imprinting, the choice of the best monomers is still of the greatest importance, and the proposed computational method may constitute an expeditious and reliable screening tool.
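The BSSE counterpoise correction mentioned in this abstract can be illustrated numerically. The snippet below is a minimal sketch of the standard Boys–Bernardi scheme, not the study's actual workflow, and all energy values are made-up placeholders:

```python
# Counterpoise (CP) correction for BSSE (Boys-Bernardi scheme):
#   E_int(CP) = E(AB; AB basis) - E(A; AB basis) - E(B; AB basis)
# where each monomer is computed in the full dimer basis, keeping the
# partner's basis functions as "ghost" orbitals.
# All numbers below are hypothetical placeholders in hartree.

def cp_interaction_energy(e_dimer, e_a_ghost, e_b_ghost):
    """CP-corrected template-monomer interaction energy."""
    return e_dimer - (e_a_ghost + e_b_ghost)

# Hypothetical energies for a template + functional monomer complex:
e_dimer = -650.1234    # complex in its own (dimer) basis
e_a_ghost = -420.0500  # template, dimer basis with ghost functions
e_b_ghost = -230.0600  # monomer, dimer basis with ghost functions

e_int = cp_interaction_energy(e_dimer, e_a_ghost, e_b_ghost)
# negative e_int -> attractive template-monomer interaction
```

Screening functional monomers then amounts to ranking candidates by their CP-corrected interaction energy with the template.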
Analytica Chimica Acta | 2008
Manuel Azenha; Pedro Alves Nogueira; António Fernando-Silva
A novel procedure for solid-phase microextraction fiber preparation is presented, which combines the use of a rigid titanium alloy wire as a substrate with a blend of PDMS sol-gel mixture and silica particles, as a way of increasing both the mechanical robustness and the extracting capability of the sol-gel fibers. The fibers, approximately 30 µm thick on average, displayed an improvement in extraction capacity compared to previous sol-gel PDMS fibers, due to a greater load of stable, covalently bonded sol-gel PDMS. The observed extraction capacity was comparable to that of a 100 µm non-bonded PDMS fiber, with the added advantages of the superior robustness and stability conferred, respectively, by the unbreakable substrate and the intrinsic characteristics of the sol-gel. Repeatability (n=3) ranged from 1 to 8%, while fiber production reproducibility (n=3) ranged from 15 to 25%. The presence of the silica particles was found to have no direct influence on the kinetics and mechanism of the extraction process, so the new procedure can be considered a refinement of the previous ones. The applicability of the devised fiber was illustrated with the analysis of gasoline in the context of arson samples.
Proceedings of the 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT) | 2013
Pedro Alves Nogueira; Eugénio C. Oliveira; Lennart E. Nacke
With the rising popularity of affective computing techniques, there have been several advances in the field of emotion recognition systems. However, despite these advances, such systems still face scenario adaptability and practical implementation issues. In light of these issues, we developed a nonspecific method for emotional state classification in interactive environments. The proposed method employs a two-layer classification process to detect Arousal and Valence (the latter being the emotion's hedonic component), based on four psychophysiological metrics: Skin Conductance, Heart Rate, and Electromyography measured at the corrugator supercilii and zygomaticus major muscles. The first classification layer applies multiple regression models to correctly scale the aforementioned metrics across participants and experimental conditions, while also correlating them to the Arousal or Valence dimensions. The second layer then explores several machine learning techniques to merge the regression outputs into one final rating. The obtained results indicate that we are able to classify Arousal and Valence independently of participant and experimental conditions with satisfactory accuracy (97% for Arousal and 91% for Valence).
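The two-layer idea described in this abstract can be sketched in a few lines. This is only an illustration of the structure, not the authors' implementation: the data is synthetic, and the linear first layer and mean-based second layer are simplifying assumptions.

```python
import numpy as np

# Sketch of a two-layer affect classifier (illustrative only):
# layer 1 regresses each physiological metric onto an affect dimension;
# layer 2 merges the per-metric predictions into one final rating.

rng = np.random.default_rng(0)

# Hypothetical training data: 4 metrics (SC, HR, EMG x 2 sites).
n_samples, n_metrics = 50, 4
metrics = rng.normal(size=(n_samples, n_metrics))
arousal = metrics @ np.array([0.6, 0.3, 0.2, -0.1]) \
    + rng.normal(scale=0.05, size=n_samples)

# Layer 1: one simple linear regression per metric (design matrix [x, 1]).
coefs = []
for j in range(n_metrics):
    X = np.column_stack([metrics[:, j], np.ones(n_samples)])
    beta, *_ = np.linalg.lstsq(X, arousal, rcond=None)
    coefs.append(beta)

def layer1(sample):
    """Per-metric arousal predictions for one 4-metric sample."""
    return np.array([b[0] * x + b[1] for b, x in zip(coefs, sample)])

# Layer 2: merge the regression outputs (a plain mean here; the paper
# explores several machine-learning merging strategies instead).
def predict_arousal(sample):
    return float(layer1(sample).mean())

pred = predict_arousal(metrics[0])
```

The same pipeline is duplicated for Valence, with each metric regressed onto whichever dimension it correlates with.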
International Conference on Engineering Applications of Neural Networks | 2013
Pedro Alves Nogueira; Eugénio C. Oliveira
Despite the growing number of emotional state detection methods spurred by the rising popularity of affective computing techniques in recent years, these methods still face subject and domain transferability issues. In this paper, we present an improved methodology for modelling individuals' emotional states in multimedia interactive environments. Our method relies on a two-layer classification process to classify Arousal and Valence based on four distinct physiological sensor inputs. The first classification layer uses several regression models to normalize each of the sensor inputs across participants and experimental conditions, while also correlating each input to either Arousal or Valence, effectively addressing the aforementioned transferability issues. The second classification layer then employs a residual sum of squares-based weighting scheme to merge the various regression outputs into one optimal Arousal/Valence classification in real time, while maintaining a smooth prediction output. The presented method exhibited convincing accuracy ratings (85% for Arousal and 78% for Valence), which are only marginally worse than our previous non-real-time approach.
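A residual sum of squares-based weighting scheme of the kind named above can be sketched as follows: each regressor's output is weighted inversely to its training RSS, so better-fitting models dominate the merged prediction. The numbers are hypothetical and the exact weighting formula is an assumption:

```python
import numpy as np

# Sketch of RSS-based merging (illustrative): weight each per-sensor
# regression output by the inverse of its training residual sum of
# squares, then take the weighted average.

def rss_weights(rss_per_model):
    """Normalized weights, inversely proportional to each model's RSS."""
    inv = 1.0 / np.asarray(rss_per_model, dtype=float)
    return inv / inv.sum()  # weights sum to 1

def merge(predictions, rss_per_model):
    """Weighted average of the per-sensor predictions."""
    return float(np.dot(rss_weights(rss_per_model), predictions))

# Four per-sensor regression outputs for one moment in time,
# with their (hypothetical) training residual sums of squares:
preds = [0.7, 0.5, 0.9, 0.6]
rss = [0.2, 0.8, 0.4, 0.1]
merged = merge(preds, rss)  # dominated by the low-RSS models -> 0.66
```

Because the weights are fixed after training, the merge is a single dot product per frame, which is what makes the real-time requirement easy to meet.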
International Conference on Entertainment Computing | 2013
Pedro Alves Nogueira; Vasco Torres
Current affective response studies lack dedicated data analysis procedures and tools for automatically annotating and triangulating emotional reactions to game-related events. Such a tool would potentially allow both a deeper and more objective analysis of the emotional impact of digital media stimuli on players and the rapid implementation of this type of study. In this paper we describe the development of such a tool, which enables researchers to conduct objective a posteriori analyses without disturbing the gameplay experience, while also automating the annotation and emotional response identification process. The tool was designed in a data-independent fashion and allows the identified responses to be exported for further analysis in third-party statistical software applications.
Journal on Multimodal User Interfaces | 2016
Pedro Alves Nogueira; Vasco Torres; Eugénio C. Oliveira; Lennart E. Nacke
To understand the impact of emotionally driven games on player experience, we developed a procedural horror game (Vanish) capable of run-time level, asset, and event generation. Vanish was augmented to interpret players’ physiological data as a simplified emotional state, mapping it to a set of adaptation rules that modify the player experience. To explore the effects of adaptation mechanisms on player experience, we conducted a mixed-methods study on three different versions of the game, two of which integrated varying biofeedback mechanisms. Players’ affective experiences were objectively measured by analysing physiological data. Additionally, subjective experience was recorded through the use of the Game Experience Questionnaire. Our study confirmed that biofeedback functionality had a statistically significant effect on the ratings of player experience dimensions: immersion, tension, positive affect, and negative affect. Furthermore, participants reported noticeable differences in player experience, favouring the added depth present in the biofeedback-enabled iterations of the game. In the future, these conclusions will help to develop more immersive and engaging player experiences.
Artificial Intelligence Review | 2014
Pedro Alves Nogueira; Luís Filipe Teófilo
Fluorescent microscopy imaging is a popular and well-established method for biomedical research. However, the large number of images created in each research trial quickly rules out manual annotation; thus, automatic image annotation is becoming an urgent need. Furthermore, the high clustering indexes and noise observed in these images make the problem complex, which has attracted the attention of the scientific community. In this paper, we present a fully automated method for annotating fluorescent confocal microscopy images in highly complex conditions. The proposed method relies on a multi-layered segmentation and declustering process, which begins with an adaptive segmentation step using a two-level Otsu's Method. The second layer comprises two probabilistic classifiers, responsible for determining how many components may constitute each segmented region. The first of these employs rule-based reasoning grounded on the decreasing harmonic pattern observed in the region area density function, while the second consists of a Support Vector Machine trained with features derived from the log likelihood ratio function of Gaussian mixture models of each region. Our results indicate that the proposed method is able to perform the identification and annotation process on par with an expert human subject, thus presenting itself as a viable alternative to the traditional manual approach.
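A two-level Otsu threshold, of the kind used in the first segmentation layer, can be illustrated on a toy histogram. This brute-force search over threshold pairs is a simplified sketch of the general idea, not the authors' implementation:

```python
import numpy as np

# Simplified two-level Otsu: exhaustively search for the threshold pair
# (t1, t2) maximising the between-class variance over three classes
# (background / dim / bright). Illustrative only.

def otsu_two_level(hist):
    hist = np.asarray(hist, dtype=float)
    p = hist / hist.sum()            # probability of each grey level
    levels = np.arange(len(p))
    mu_total = (p * levels).sum()    # global mean grey level
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, len(p) - 1):
        for t2 in range(t1 + 1, len(p)):
            var = 0.0
            for lo, cls in zip((0, t1, t2), (p[:t1], p[t1:t2], p[t2:])):
                w = cls.sum()        # class probability mass
                if w == 0:
                    continue
                mu = (cls * levels[lo:lo + len(cls)]).sum() / w
                var += w * (mu - mu_total) ** 2  # between-class variance
            if var > best:
                best, best_t = var, (t1, t2)
    return best_t

# Toy 8-level histogram with three modes (levels 0-1, 3-4, 6-7):
hist = [40, 30, 1, 25, 20, 1, 35, 30]
t1, t2 = otsu_two_level(hist)  # thresholds land in the valleys
```

Real implementations avoid the quadratic search with cumulative sums, but the objective being maximised is the same.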
Multi-Agent Systems and Agent-Based Simulation | 2012
Luís Filipe Teófilo; Rosaldo J. F. Rossetti; Luís Paulo Reis; Henrique Lopes Cardoso; Pedro Alves Nogueira
The challenge in developing agents for incomplete information games resides in the fact that the maximum utility decision for a given information set is not always ascertainable. For large games like Poker, the agents' strategies require opponent modeling, since Nash equilibrium strategies are hard to compute. In light of this, simulation systems are indispensable for an accurate assessment of agents' capabilities. Nevertheless, current systems do not accommodate the needs of computer poker research, since they were designed mainly as an interface for human players competing against agents. In order to contribute towards improving computer poker research, a new simulation system was developed. This system introduces scientifically unexplored game modes with the purpose of providing a more realistic simulation environment, where the agent must play carefully to manage its initial resources. An evolutionary simulation feature was also included so as to provide support for the improvement of adaptive strategies. The simulator features built-in odds calculation, an agent development API, support for agents from other platforms and for several game variants, and an agent classifier with realistic game indicators, including exploitability estimation. Tests and qualitative analysis have shown this simulator to be faster and better suited for thorough agent development and performance assessment.
Journal on Multimodal User Interfaces | 2015
Pedro Alves Nogueira; Luís Filipe Teófilo; Pedro Brandão Silva
Previous work on player experience research has focused on identifying the major factors involving content creation and interaction. This has encouraged a large investment in new types of physical interaction artefacts (e.g. Wiimote™, Rock Band™, Kinect™). However, these artefacts still require custom interaction schemes to be developed for them, which critically limits the number of commercial videogames and multimedia applications that can benefit from them. Moreover, there is currently no agreement as to which factors better describe the impact that natural and complex multi-modal user interaction schemes have on users' experiences, a gap created in part by the limitations in adapting this type of interaction to existing software. Thus, this paper presents a generic middleware framework for multi-modal natural interfaces, which enables game-independent data acquisition and encourages further advancement in this domain. Furthermore, our framework can redefine the interaction scheme of any software tool by mapping body poses and voice commands to traditional input means (keyboard and mouse). We have focused on digital games, where the use of physical interaction artefacts has become mainstream. The validation for this tool consisted of a series of stress tests of increasing difficulty, with a total of 25 participants. In addition, a pilot study conducted on a further 16 subjects demonstrated a mainly positive impact of natural interfaces on players' experience. These results were acquired when subjects played a complex commercial role-playing game whose mechanics were adapted using our framework; statistical tests on the obtained Fun ratings, along with subjective participant opinions, indicate that this kind of natural interaction indeed has a significant impact on players' experience and enjoyment. However, different impact patterns emerge from this analysis, which seem to fit with standing theories of player experience and immersion.
World Conference on Information Systems and Technologies | 2013
Luís Filipe Teófilo; Pedro Alves Nogueira; Pedro Brandão Silva
In recent years, videogame companies have recognized the role of player engagement as a major factor in user experience and enjoyment. This has encouraged a greater investment in new types of game controllers such as the Wiimote™, Rock Band™ instruments, and the Kinect™. However, the native software of these controllers was not originally designed to be used in other game applications. This work addresses the issue by building a middleware framework that maps body poses or voice commands to actions in any game. This not only warrants a more natural and customized user experience but also defines an interoperable virtual controller. In this version of the framework, body poses and voice commands are respectively recognized through the Kinect's built-in cameras and microphones. The acquired data is then translated into the native interaction scheme in real time, using a lightweight method based on spatial restrictions. The system is also prepared to use Nintendo's Wiimote™ as an auxiliary and unobtrusive gamepad for physically or verbally impractical commands. System validation was performed by analyzing the performance of certain tasks and examining user reports. Both confirmed this approach as a practical and alluring alternative to the game's native interaction scheme. In sum, this framework provides a game-controlling tool that is fully customizable and very flexible, thus expanding the market of game consumers.
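The spatial-restriction mapping described in these two middleware papers can be sketched as follows. The joint names, geometric rules, and key bindings below are all hypothetical, and a plain dictionary stands in for the real Kinect skeleton stream:

```python
from dataclasses import dataclass

# Sketch of pose-to-key mapping via spatial restrictions (illustrative):
# a pose is recognised when joint positions satisfy simple geometric
# constraints relative to other joints, then translated to a native
# keyboard command. Joint names, rules, and bindings are hypothetical.

@dataclass
class Joint:
    x: float  # metres, rightwards
    y: float  # metres, upwards

def recognise_pose(skeleton):
    """Map one skeleton frame (dict of joint name -> Joint) to a key, or None."""
    head = skeleton["head"]
    rhand = skeleton["right_hand"]
    lhand = skeleton["left_hand"]
    if rhand.y > head.y:             # right hand raised above head
        return "W"                   # e.g. move forward
    if lhand.y > head.y:             # left hand raised above head
        return "S"                   # e.g. move backward
    if rhand.x - lhand.x > 1.0:      # arms spread wide apart
        return "SPACE"               # e.g. jump
    return None

# One hypothetical skeleton frame: right hand above the head.
frame = {"head": Joint(0.0, 1.7),
         "right_hand": Joint(0.3, 1.9),
         "left_hand": Joint(-0.3, 1.0)}
key = recognise_pose(frame)
```

Because the rules only compare joint positions, each frame is checked in constant time, which keeps the translation to key presses lightweight enough for real-time use.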