

Publications


Featured research published by João Nuno de Oliveira e Silva.


Skin Pharmacology and Physiology | 2009

Stratum corneum is an effective barrier to TiO2 and ZnO nanoparticle percutaneous absorption.

P.M. Filipe; João Nuno de Oliveira e Silva; Ricardo Machado da Silva; J.L. Cirne de Castro; M. Marques Gomes; L.C. Alves; R. Santus; T. Pinheiro

Background: There is increasing concern over the local and systemic side effects of TiO2 and ZnO coated nanoparticles widely used in sun blockers. Objective: To determine the localization and possible skin penetration of TiO2 and ZnO nanoparticles, dispersed in 3 sunscreen formulations, under realistic in vivo conditions in normal and altered skin. Methods: Nuclear microscopy techniques provided spatially resolved quantitative analysis of Ti and Zn nanoparticle distributions in transversal cryosections of skin obtained by biopsy with no further treatment. A test hydrophobic formulation containing coated 20-nm TiO2 nanoparticles and 2 commercial sunscreen formulations containing TiO2 alone or in combination with ZnO were tested, taking into account realistic use conditions by consumers, and compared with the recommended standard condition for the sun protection factor test. The protocols consisted of an open test. Results: Following a 2-hour exposure period of normal human skin to TiO2- and ZnO-containing sunscreens, detectable amounts of these physical blockers were only present at the skin surface and in the uppermost stratum corneum regions. Layers deeper than the stratum corneum were devoid of TiO2 or exogenous ZnO, even after 48 h of exposure to the sunscreen under occlusion. Deposition of TiO2 and ZnO nanoparticles in the openings of the pilosebaceous follicles was also observed, suggesting a preferential fixation area. Penetration of nanoparticles into viable skin tissue could not be detected. Conclusions: TiO2 or ZnO nanoparticles are absent, or their levels are too low to be detected, beneath the stratum corneum in human viable epidermal layers. Therefore, significant penetration towards the underlying keratinocytes is unlikely.


Middleware for Grid Computing | 2008

Heuristic for resources allocation on utility computing infrastructures

João Nuno de Oliveira e Silva; Luís Veiga; Paulo Ferreira

The use of on-demand utility computing infrastructures, such as Amazon's Elastic Compute Cloud [1], is a viable solution for speeding up lengthy parallel computations for those without access to other cluster or grid infrastructures. With a suitable middleware, bag-of-tasks problems could be easily deployed over a pool of virtual computers created on such infrastructures. In bag-of-tasks problems, as there is no communication between tasks, the number of concurrent tasks is allowed to vary over time. In a utility computing infrastructure, if too many virtual computers are created, the speedups are high but may not be cost-effective; if too few computers are created, the cost is low but speedups fall below expectations. Without prior knowledge of the processing time of each task, it is difficult to determine how many machines should be created. In this paper, we present a heuristic to optimize the number of machines that should be allocated to process tasks so that, for a given budget, the speedups are maximal. We simulated the proposed heuristics against real and theoretical workloads and evaluated the ratios between the number of allocated hosts, charged times, speedups, and processing times. With the proposed heuristics, it is possible to obtain speedups in line with the number of allocated computers, while being charged approximately the same predefined budget.
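The paper's actual heuristic is not reproduced in this abstract; the following Python sketch only illustrates the underlying idea, assuming hourly billing and a running estimate of mean task time from completed tasks (all names and numbers are invented):

    import math

    def hosts_to_allocate(completed_times, tasks_remaining, budget_hours):
        """Pick the largest host count whose estimated charge fits the budget."""
        if not completed_times:
            return 1                      # no samples yet: start with one host
        mean_task = sum(completed_times) / len(completed_times)
        work_left = tasks_remaining * mean_task        # estimated host-hours
        best = 1
        for hosts in range(1, tasks_remaining + 1):
            # each host runs ~work_left/hosts hours but is charged whole hours
            charged = hosts * math.ceil(work_left / hosts)
            if charged <= budget_hours:
                best = hosts              # more hosts: more speedup, same budget
        return best

    # e.g. 40 tasks left, tasks so far took ~0.5 h, 30 charged hours available
    print(hosts_to_allocate([0.4, 0.6, 0.5], 40, 30))   # -> 30 under this toy model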


Bio-Medical Materials and Engineering | 2008

Photodynamic therapy: Dermatology and ophthalmology as main fields of current applications in clinic

João Nuno de Oliveira e Silva; Paulo Filipe; Patrice Morlière; Jean-Claude Mazière; João P. Freitas; Manuel Marques Gomes; R. Santus

Photodynamic therapy (PDT) of skin tumors or pre-cancerous lesions and of age-related macular degeneration combines the administration of porphyrins or porphyrin precursors with illumination of the diseased sites with red light. Photosensitizers absorbing light beyond 630 nm, where tissues have the highest transmittance, produce singlet oxygen, a highly reactive activated oxygen species and a major cytotoxin. The PDT of age-related macular degeneration is performed with red laser light after i.v. injection of verteporfin (Visudyne), a hydrophobic porphyrin carried by serum lipoproteins, whose endocytosis leads to accumulation of the porphyrin in endothelial cells of choroidal neo-vessels. In the PDT of skin cancers, local synthesis of the photosensitizer occurs after topical application of the natural protoporphyrin IX precursor delta-aminolevulinic acid (or its ester forms) on the lesions. In all cases, the photosensitizers should be rapidly excreted to avoid long-lasting skin photosensitivity.


International Parallel and Distributed Processing Symposium | 2010

Service and resource discovery in cycle-sharing environments with a utility algebra

João Nuno de Oliveira e Silva; Paulo Ferreira; Luís Veiga

The Internet has witnessed a steady and widespread increase in available idle computing cycles and computing resources in general. Such available cycles simultaneously allow and foster the development of existing and new computationally demanding applications, driven by algorithm complexity, intensive data processing, or both. Available cycles may be harvested from several scenarios, ranging from college or office LANs, cluster, grid, and utility or cloud computing infrastructures, to peer-to-peer overlay networks. Existing resource discovery protocols have a number of shortcomings for this variety of cycle-sharing scenarios: they either (i) were designed to return only a binary answer stating whether a remote computer fulfills the requirements, (ii) rely on centralized (or coherently replicated) schedulers that are impractical in certain environments such as peer-to-peer computing, or (iii) are not extensible, as it is impossible to define new resources to be discovered and evaluated, or new ways to evaluate them. In this paper we present a novel, extensible, expressive, and flexible requirement specification algebra and resource discovery middleware. Besides standard resources (CPU, memory, network bandwidth, ...), application developers may define new resource requirements and new ways to evaluate them. Application programmers can write complex requirements (that evaluate several resources) using fuzzy logic operators. Each resource evaluation (either standard or specially coded) returns a value between 0.0 and 1.0 stating the capacity to (partially) fulfill the requirement, considering client-specific utility depreciation (i.e., partial utility, a downgraded measure of how the user assesses the available resources) and policies for combined utility evaluation. By comparing the values obtained from the various hosts, it is possible to know precisely which ones best fulfill each client's needs, regarding a set of required resources.
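The algebra itself is defined in the paper; as a loose illustration of the style of evaluation it describes (partial utilities in [0, 1] combined with fuzzy operators), consider this minimal Python sketch with invented resource names and thresholds:

    def clamp(x):
        return max(0.0, min(1.0, x))

    def at_least(available, required):
        """Partial utility of a scalar resource: 1.0 when fully satisfied,
        proportionally less when only part of the requirement is met."""
        return clamp(available / required)

    # Fuzzy-logic combinators over partial utilities in [0, 1].
    def fuzzy_and(*utilities):
        return min(utilities)

    def fuzzy_or(*utilities):
        return max(utilities)

    # A host with 2 free cores, 1.5 GB RAM and 20 Mbit/s, evaluated against
    # a requirement of 4 cores AND 1 GB AND 50 Mbit/s (numbers invented).
    host = {"cpu": 2, "mem_gb": 1.5, "net_mbit": 20}
    utility = fuzzy_and(
        at_least(host["cpu"], 4),        # 0.5
        at_least(host["mem_gb"], 1.0),   # 1.0 (capped)
        at_least(host["net_mbit"], 50),  # 0.4
    )
    print(utility)  # 0.4 -- the weakest resource dominates under fuzzy AND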


International Conference on Parallel Processing | 2012

Quality-of-service for consistency of data geo-replication in cloud computing

Sérgio Esteves; João Nuno de Oliveira e Silva; Luís Veiga

Today we are increasingly dependent on critical data stored in cloud data centers across the world. To deliver high availability and augmented performance, different replication schemes are used to maintain consistency among replicas. With classical consistency models, performance is necessarily degraded, and thus most highly scalable cloud data centers sacrifice consistency to some extent in exchange for lower latencies to end-users. Moreover, those cloud systems blindly allow stale data to exist for some constant period of time and disregard the semantics and importance the data might have, which could undoubtedly be used to gear consistency more wisely, combining stronger and weaker levels of consistency. To tackle this inherent and well-studied trade-off between availability and consistency, we propose the use of VFC3, a novel consistency model for data replicated across data centers, with framework and library support to enforce increasing degrees of consistency for different types of data (based on their semantics). It targets cloud tabular data stores, offering rationalization of resources (especially bandwidth) and improvement of QoS (performance, latency, and availability) by providing strong consistency where it matters most and relaxing it on less critical classes or items of data.
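The abstract does not spell out VFC3's API; purely as an illustration of per-class consistency degrees, one could imagine a bound that forces synchronization once staleness exceeds any of several limits (all names and numbers below are invented):

    from dataclasses import dataclass

    @dataclass
    class ConsistencyBound:
        max_seconds: float     # tolerated time since last synchronization
        max_updates: int       # tolerated unapplied remote updates
        max_divergence: float  # tolerated numeric drift from the master copy

    def must_synchronize(bound, seconds_stale, pending_updates, divergence):
        # A replica syncs as soon as ANY component of the bound is exceeded.
        return (seconds_stale > bound.max_seconds
                or pending_updates > bound.max_updates
                or divergence > bound.max_divergence)

    critical = ConsistencyBound(1.0, 1, 0.0)     # near-strong consistency
    bulk = ConsistencyBound(300.0, 1000, 0.1)    # relaxed, for non-critical data
    print(must_synchronize(critical, 2.0, 0, 0.0))  # True: time bound exceeded
    print(must_synchronize(bulk, 2.0, 0, 0.0))      # False: still within bounds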


Journal of Internet Services and Applications | 2011

A2HA—automatic and adaptive host allocation in utility computing for bag-of-tasks

João Nuno de Oliveira e Silva; Luís Veiga; Paulo Ferreira

There are increasingly more computing problems requiring lengthy parallel computations. For those without access to current cluster or grid infrastructures, a recent and proven viable solution can be found in on-demand utility computing infrastructures, such as Amazon Elastic Compute Cloud (EC2). A relevant class of such problems, Bag-of-Tasks (BoT), can be easily deployed over such infrastructures (to run on pools of virtual computers) if provided with suitable software for host allocation. BoT problems are found in several relevant scenarios such as image rendering and software testing.

In BoT jobs, tasks are mostly independent; thus, they can run in parallel with no communication among them. The number of allocated hosts is relevant as it impacts both the speedup and the cost: if too many hosts are used, the speedup is high but this may not be cost-effective; if too few are used, the cost is low but speedup falls below expectations. For each BoT job, given that there is no prior knowledge of either the total job processing time or the time each task takes to complete, it is hard to determine the number of hosts to allocate. Current solutions (e.g., bin-packing algorithms) are not adequate as they require knowing in advance either the time that the next task will take to execute or, for higher efficiency, the time taken by each one of the tasks in each job considered.

Thus, we present an algorithm and heuristics that adaptively predict the number of hosts to be allocated, so that the maximum speedup can be obtained while respecting a given predefined budget. The algorithm and heuristics were simulated against real and theoretical workloads. With the proposed solution, it is possible to obtain speedups in line with the number of allocated hosts, while being charged less than the predefined budget.
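The speedup/cost tension the abstract describes is easy to see with a toy computation (workload and billing model invented for illustration):

    import math

    TASKS, TASK_H = 100, 0.5      # invented workload: 100 half-hour tasks

    def makespan(hosts):          # hours until the whole job finishes
        return math.ceil(TASKS / hosts) * TASK_H

    def charged(hosts):           # hourly billing: each host pays whole hours
        return hosts * math.ceil(makespan(hosts))

    for hosts in (5, 25, 50, 100):
        print(hosts, makespan(hosts), charged(hosts))
    # 5 hosts reach a 10 h makespan for 50 charged hours; 100 hosts finish in
    # 0.5 h but pay 100 hours -- twice the cost, from billing rounding alone.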


Workshop on Middleware for Pervasive and Ad-Hoc Computing | 2008

SPADE: scheduler for parallel and distributed execution from mobile devices

João Nuno de Oliveira e Silva; Luís Veiga; Paulo Ferreira

Mobile computing devices, such as mobile phones or even ultra-mobile PCs, are becoming more and more powerful. Because of this, users are starting to use these devices to execute tasks that until a few years ago would only be executed on a desktop PC, e.g. picture manipulation or text editing. Furthermore, these devices are by now almost continuously connected, either by Wi-Fi or 3G UMTS links. Nevertheless, power consumption is still a major factor in the usage of these mobile devices, restricting autonomy. While users should be able to employ mobile computing devices to perform these tasks with convenience, it would improve performance and reduce battery drain if the bulk processing of such tasks could be offloaded to remote hosts accessible to the same user. To accomplish this, we present SPADE, a middleware for deploying remote and parallel execution of commodity applications from mobile devices to solve complex problems, without any special programming effort, simply by defining several data input sets. In SPADE, jobs are composed of simpler tasks that are executed on remote computers. The user states which files should be processed by each task, which application will be launched, and defines the application arguments. By using SPADE any user can, for instance, accelerate a batch image manipulation by using otherwise idle remote computers, while freeing the mobile device for other tasks. In order to make SPADE usable by a wide set of computer users we implemented two ideas: i) the execution code is a commodity piece of software already installed on the remote computers (e.g. image processing applications), and ii) the definition of the data sets to be remotely processed is done in a simple and intuitive way. The results are promising: the speedups accomplished are near optimal, power consumption is reduced, and SPADE allows the easy and efficient deployment of jobs on remote hosts.
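SPADE's actual job syntax is not shown in the abstract; the Python sketch below only illustrates the declarative idea it describes (one task per input file, a pre-installed commodity application, user-supplied arguments), with every name invented:

    # Hypothetical job description in the spirit of SPADE (not its real syntax):
    # one task per input file, each running a commodity application assumed to
    # be already installed on the remote hosts.
    job = {
        "application": "convert",   # e.g. ImageMagick's CLI (an assumption)
        "arguments": ["-resize", "50%", "{input}", "{output}"],
        "inputs": ["img001.jpg", "img002.jpg", "img003.jpg"],
    }

    def expand_tasks(job):
        """Turn the declarative job into one concrete command line per input."""
        for inp in job["inputs"]:
            out = "small_" + inp
            yield [job["application"]] + [
                arg.replace("{input}", inp).replace("{output}", out)
                for arg in job["arguments"]
            ]

    for cmd in expand_tasks(job):
        print(cmd)  # a real scheduler would ship each command to a remote host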


ACM/IEEE International Conference on Mobile Computing and Networking | 2015

Poster: Unified RemoteU¡ for Mobile Environments

Miguel Almeida Carvalho; João Nuno de Oliveira e Silva

In our daily lives we witness an exponential growth of the mobile and fixed devices that surround us, many of which have limited resources and do not even provide an interface screen. In this paper, we present remoteU¡, a middleware that allows those devices to interact with users, resorting to simple but expressive programming mechanisms, and providing an efficient implementation and communication.


Journal of Parallel and Distributed Computing | 2015

Incremental dataflow execution, resource efficiency and probabilistic guarantees with Fuzzy Boolean nets

Sérgio Esteves; João Nuno de Oliveira e Silva; João Paulo Carvalho; Luís Veiga

Currently, there is a strong need for organizations to analyze and process ever-increasing volumes of data in order to meet real-time processing demands. Such continuous and data-intensive processing is often achieved through the composition of complex data-intensive workflows (i.e., dataflows).

Dataflow management systems typically enforce strict temporal synchronization across the various processing steps. Non-synchronous behavior often has to be explicitly programmed on an ad-hoc basis, which requires additional lines of code in programs and thus the possibility of errors. Moreover, in a large set of scenarios for continuous and incremental processing, the output of a dataflow application at each execution can differ very little from that of the previous execution, and therefore resources, energy, and computational power are unknowingly wasted.

To address this lack of efficiency, transparency, and generality, we introduce the notion of Quality-of-Data (QoD), which describes the level of changes required on a data store to trigger processing steps, so that dataflow (re-)execution is avoided until its outcome would reach a significant and meaningful variation, within a specified freshness limit.

Based on the QoD notion, we propose a novel dataflow model, with framework (Fluxy), for orchestrating data-intensive processing steps, which communicate data via a NoSQL store, and whose triggering semantics is driven by dynamic QoD constraints automatically defined for different datasets by means of Fuzzy Boolean Nets. These nets give probabilistic guarantees about the prediction of the cumulative error between consecutive dataflow executions. With Fluxy, we demonstrate how dataflows can be leveraged to respond to quality boundaries (which can be seen as SLAs) to deliver controlled and augmented performance, rationalization of resources, and task prioritization.

Highlights: We offer a framework for resource-efficient continuous and data-intensive workflows. We are able to learn correlations between dataflow input and final output. We avoid re-executions when input data is predicted not to be impactful to the output. We ensure dataflow correctness within a small error constant. We achieve controlled performance, task prioritization, and high resource efficiency.
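The Fuzzy Boolean Net machinery is far beyond a snippet, but the gating decision it feeds can be sketched in a few lines of Python (names and numbers invented):

    # Sketch of the gating idea only: skip a dataflow step when the predicted
    # change in its output falls inside the allowed error bound. In the paper
    # the prediction comes from a trained Fuzzy Boolean Net; here it is given.
    def should_execute(predicted_output_change, freshness_limit):
        return predicted_output_change > freshness_limit

    predicted_changes = [0.001, 0.004, 0.12, 0.002]   # invented estimates
    for change in predicted_changes:
        if should_execute(change, freshness_limit=0.05):
            print("re-execute step: predicted change %.3f exceeds bound" % change)
        else:
            print("skip: predicted change %.3f within bound" % change)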


Journal of Internet Services and Applications | 2013

Fluχ: a quality-driven dataflow model for data intensive computing

Sérgio Esteves; João Nuno de Oliveira e Silva; Luís Veiga

Today, there is a growing need for organizations to continuously analyze and process large waves of incoming data from the Internet. Such data processing schemes are often governed by complex dataflow systems, which are deployed atop highly scalable infrastructures that need to manage data efficiently in order to enhance performance and alleviate costs.

Current workflow management systems enforce strict temporal synchronization among the various processing steps; however, this is not the most desirable behavior in a large number of scenarios. For example, considering dataflows that continuously analyze data upon the insertion/update of new entries in a data store, it would be wise to assess the level of modification in the data before triggering the dataflow, in order to minimize the number of executions (processing steps), reducing overhead and augmenting performance, while maintaining the dataflow processing results within certain coverage and freshness limits.

Towards this end, we introduce the notion of Quality-of-Data (QoD), which describes the level of modification necessary on a data store to trigger processing steps, thus conveying the level of performance specified through data requirements. This notion can be especially beneficial in cloud computing, where a dataflow computing service (SaaS) may provide certain QoD levels for different budgets.

In this article we propose Fluχ, a novel dataflow model, with framework and programming library support, for orchestrating data-based processing steps over a NoSQL data store, whose triggering is based on the evaluation and dynamic enforcement of QoD constraints that are defined (and possibly adjusted automatically) for different sets of data. With Fluχ we demonstrate how dataflows can be leveraged to respond to quality boundaries that bring controlled and augmented performance, rationalization of resources, and task prioritization.
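As a loose illustration of the SaaS angle mentioned above (QoD levels sold at different budgets), consider this invented tier table and trigger check; tighter bounds mean fresher results and more re-executions, hence a higher price:

    # All tier names and thresholds are invented for illustration.
    QOD_TIERS = {
        "gold":   {"max_stale_s": 10,   "max_pending_writes": 10},
        "silver": {"max_stale_s": 120,  "max_pending_writes": 500},
        "bronze": {"max_stale_s": 3600, "max_pending_writes": 10_000},
    }

    def trigger(tier, seconds_stale, pending_writes):
        bound = QOD_TIERS[tier]
        return (seconds_stale >= bound["max_stale_s"]
                or pending_writes >= bound["max_pending_writes"])

    print(trigger("gold",   15, 3))    # True  -> re-run the processing step
    print(trigger("bronze", 15, 3))    # False -> result still within bounds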

Collaboration


Dive into João Nuno de Oliveira e Silva's collaboration.

Top Co-Authors

Paulo Ferreira
Instituto Superior Técnico

Paulo Filipe
Instituto de Medicina Molecular

L.C. Alves
Instituto Superior Técnico

T. Pinheiro
Instituto Superior Técnico

Sérgio Esteves
Instituto Superior Técnico

A.P. Gonçalves
Instituto Superior Técnico