
Publication


Featured research published by Jose Carlos Sancho.


IEEE Network | 2013

All-optical packet/circuit switching-based data center network for enhanced scalability, latency, and throughput

Jordi Perelló; Salvatore Spadaro; Sergio Ricciardi; Davide Careglio; Shuping Peng; Reza Nejabati; Georgios Zervas; Dimitra Simeonidou; Alessandro Predieri; Matteo Biancani; Harm J. S. Dorren; Stefano Di Lucente; Jun Luo; Nicola Calabretta; Giacomo Bernini; Nicola Ciulli; Jose Carlos Sancho; Steluta Iordache; Montse Farreras; Yolanda Becerra; Chris Liou; Iftekhar Hussain; Yawei Yin; Lei Liu; Roberto Proietti

Applications running inside data centers are enabled through the cooperation of thousands of servers arranged in racks and interconnected through the data center network (DCN). Current DCN architectures based on electronic devices are neither scalable enough to face the massive growth of DCs, nor flexible enough to efficiently and cost-effectively support highly dynamic application traffic profiles. The FP7 European Project LIGHTNESS foresees extending the capabilities of today's electrical DCNs through the introduction of optical packet switching (OPS) and optical circuit switching (OCS) paradigms, together realizing an advanced and highly scalable DCN architecture for ultra-high-bandwidth and low-latency server-to-server interconnection. This article reviews the current DC and high-performance computing (HPC) outlooks, followed by an analysis of the main requirements for future DCs and HPC platforms. As the key contribution of the article, the LIGHTNESS DCN solution is presented, elaborating in depth on the envisioned DCN data plane technologies, as well as on the unified SDN-enabled control plane architecture that will empower OPS and OCS transmission technologies with superior flexibility, manageability, and customizability.


european conference on parallel processing | 2010

Characterizing the impact of using spare-cores on application performance

Jose Carlos Sancho; Darren J. Kerbyson; Michael Lang

Increased parallelism on a single processor is driving improvements in peak performance at both the node and system levels. However, achievable performance, in particular for production scientific applications, is not always directly proportional to the core count. Performance is often limited by constraints in the memory hierarchy and by node interconnectivity. Even on state-of-the-art processors containing between four and eight cores, many applications cannot take full advantage of the compute performance of all cores. This trend is expected to intensify as the core count per processor increases. In this work we characterize the use of spare-cores (cores that do not provide any improvement in application performance) on current multi-core processors. By using a pulse-width modulation method, we examine the possible performance profile of using a spare-core and quantify under what circumstances its use will not impact application performance. We show that, for current AMD and Intel multi-core processors, spare-cores can be used for substantial computational tasks but can impact application performance when sharing caches or when significantly accessing main memory.
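The pulse-width modulation idea above can be illustrated as a duty-cycled background load: busy-spin for a fraction of each period and sleep for the rest, so the interference a spare-core task generates can be dialed in. This is a minimal sketch, not the paper's instrumentation; the function name and parameters are invented here.

```python
import time

def spare_core_load(duty_cycle: float, period_s: float, duration_s: float) -> None:
    """PWM-style load: busy-spin for duty_cycle of each period, sleep the rest.

    Pinned to a spare core (e.g. via os.sched_setaffinity on Linux), this
    lets one vary the pressure the spare core puts on shared resources.
    """
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        busy_until = time.monotonic() + duty_cycle * period_s
        while time.monotonic() < busy_until:
            pass                                  # burn cycles
        time.sleep((1.0 - duty_cycle) * period_s)

# A short 50%-duty run:
spare_core_load(duty_cycle=0.5, period_s=0.005, duration_s=0.02)
```

Sweeping `duty_cycle` from 0 to 1 would trace out a profile from "harmless helper" to "full co-runner", which is the knob such a characterization needs.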


international conference on high performance computing and simulation | 2015

Performance evaluation of Optical Packet Switches on high performance applications

Hugo Meyer; Jose Carlos Sancho; Wang Miao; Harm J. S. Dorren; Nicola Calabretta; Montse Farreras

This paper analyzes the performance impact of Optical Packet Switches (OPS) on parallel HPC applications. Because these devices cannot store light, in case of a collision for access to the same output port of the switch, only one packet can proceed and the others are dropped. The analysis focuses on the negative impact of packet collisions in the OPS and the subsequent re-transmissions of dropped packets. To carry out this analysis we have developed a system simulator that mimics the behavior of real HPC application traffic and of optical network devices such as the OPS. By using real application traces we have analyzed how message re-transmissions could affect parallel executions. In addition, we have developed a methodology that allows us to process application traces and determine packet concurrency. Concurrency measures the number of simultaneous packets that an application could transmit in the network. Results have shown that there are applications that can benefit from the advantages of OPS technology. Among the applications analyzed, those that benefit show less than 1% packet concurrency, whereas the performance of other applications could be degraded by up to 65%. This impact is mostly dependent on application traffic behavior, which is successfully characterized by our proposed methodology.
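The trace-processing methodology above hinges on a concurrency metric. A minimal sketch of how such a metric could be computed from a message trace follows; the function name and the (start, end) interval representation are our assumptions, not the authors' tool.

```python
from typing import List, Tuple

def packet_concurrency(packets: List[Tuple[float, float]]) -> int:
    """Maximum number of packets simultaneously in flight, given
    (start_time, end_time) transmission intervals from a trace."""
    events = []
    for start, end in packets:
        events.append((start, 1))    # packet enters the network
        events.append((end, -1))     # packet leaves the network
    # Sort by time; at equal timestamps tuples sort departures (-1) before
    # arrivals (1), so back-to-back packets are not counted as concurrent.
    events.sort()
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# Two overlapping packets plus one isolated packet:
print(packet_concurrency([(0.0, 2.0), (1.0, 3.0), (5.0, 6.0)]))  # → 2
```

A low peak (or a low fraction of overlapping intervals) is what the abstract's "less than 1% of packet concurrency" intuitively corresponds to: few chances for two packets to contend for the same OPS output port.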


international conference on conceptual structures | 2017

Disaggregated Computing. An Evaluation of Current Trends for Datacentres

Hugo Meyer; Jose Carlos Sancho; Josue V. Quiroga; Ferad Zyulkyarov; Damian Roca; Mario Nemirovsky

Next generation data centers will likely be based on the emerging paradigm of disaggregated function-blocks-as-a-unit, departing from the current state of mainboard-as-a-unit. Multiple functional blocks, or bricks, such as compute, memory and peripherals will be spread throughout the system and interconnected via one or multiple high speed networks. The amount of memory available will be very large, distributed among multiple bricks. This new architecture brings various benefits that are desirable in today's data centers, such as fine-grained technology upgrade cycles, fine-grained resource allocation, and access to a larger amount of memory and accelerators. An analysis of the impact and benefits of memory disaggregation is presented in this paper. One of the biggest challenges when analyzing these architectures is that memory accesses should be modeled correctly in order to obtain accurate results. However, modeling every memory access would generate an overhead so high that simulation becomes unfeasible for real data center applications. A model to represent and analyze memory disaggregation has been designed, and a statistics-based, queuing-based full-system simulator was developed to rapidly and accurately analyze application performance in disaggregated systems. With a mean error of 10%, simulation results pointed out that the network layers may introduce overheads that degrade application performance by up to 66%. Initial results also suggest that low memory access bandwidth may degrade application performance by up to 20%.
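The queuing-based view of remote memory can be illustrated with a back-of-the-envelope model: a remote access pays the network traversal plus service and queuing delay at the memory brick, and brick utilization drives the queuing term. This is a generic sketch under standard M/M/1 assumptions, not the paper's simulator; all names and numbers are illustrative.

```python
def remote_access_latency(network_ns: float, service_ns: float,
                          arrival_rate_ghz: float) -> float:
    """Mean remote-memory access latency (ns) under an M/M/1 queue at the
    memory brick: network + service + mean waiting time rho/(1-rho)*service."""
    rho = arrival_rate_ghz * service_ns            # utilization of the brick
    if rho >= 1.0:
        raise ValueError("memory brick saturated")
    queueing_ns = service_ns * rho / (1.0 - rho)
    return network_ns + service_ns + queueing_ns

def memory_slowdown(local_ns: float, remote_ns: float,
                    remote_fraction: float) -> float:
    """Average memory-latency slowdown when a fraction of accesses go remote."""
    mean_ns = (1.0 - remote_fraction) * local_ns + remote_fraction * remote_ns
    return mean_ns / local_ns

# 500 ns of network on top of a 50 ns access, brick at 50% utilization:
lat = remote_access_latency(network_ns=500, service_ns=50, arrival_rate_ghz=0.01)
print(lat)                        # → 600.0
print(memory_slowdown(80, lat, 0.2))
```

Even this toy model reproduces the qualitative finding: the network term dominates the per-access cost, so the overall degradation scales with how many accesses leave the local brick.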


Future Generation Computer Systems | 2017

Optical packet switching in HPC: an analysis of applications performance

Hugo Meyer; Jose Carlos Sancho; Milica Mrdakovic; Wang Miao; Nicola Calabretta

Optical Packet Switches (OPS) could provide the low-latency transmissions needed in today's large data centers. OPS can deliver lower latency and higher bandwidth than traditional electrical switches, features needed by parallel High Performance Computing (HPC) applications. For this purpose, fully optical network architectures for HPC systems, such as the Architecture-On-Demand (AoD) network infrastructure, have recently been designed. Although light-based transmission has advantages over electrical transmission, optical devices such as the OPS cannot store light. Therefore, when an optical packet collision occurs for access to the same output port of the OPS, only one packet can proceed and the others must be dropped, triggering a subsequent retransmission procedure. Packet retransmissions delay the actual transmission and also increase buffer utilization at the network interface cards (NICs) that handle retransmissions. In this paper, we propose a technique that maps application processes to servers so as to reduce the number of simultaneous packets in the network (concurrency); it thereby significantly reduces optical collisions at the OPS while substantially reducing the resources needed at the NICs for retransmissions. Our proposed concurrency-aware mapping technique can reduce the extra buffer utilization by up to 4.2 times and the execution time degradation by up to 2.6 times.


international conference of distributed computing and networking | 2016

Scaling architecture-on-demand based optical networks

Hugo Meyer; Jose Carlos Sancho; Milica Mrdakovic; Shuping Peng; Dimitra Simeonidou; Wang Miao; Nicola Calabretta

This paper analyzes methodologies that allow Architecture-On-Demand (AoD) based optical networks to scale properly. As data centers and HPC systems grow in size and complexity, optical networks appear to be the way to scale the bandwidth of current network infrastructures. To scale the number of servers connected to optical switches, Dense Wavelength Division Multiplexing (DWDM) is normally used to group several servers onto one fiber. Using DWDM limits the number of servers per fiber to the number of wavelengths the fiber supports, and may also increase the number of packet collisions. Our proposal focuses on using Time Division Multiplexing (TDM) to allow multiple servers per wavelength, scaling to a larger number of servers per switch. Initial results have shown that TDM achieves performance similar to DWDM. For some of the applications, TDM can even outperform DWDM by up to 2.4% in execution time.
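The scaling argument reduces to simple multiplexing arithmetic: DWDM bounds the servers per fiber by the wavelength count, and TDM multiplies that bound by the number of time slots per wavelength. A trivial sketch (function name and numbers are illustrative, not from the paper):

```python
def servers_per_fiber(wavelengths: int, tdm_slots_per_wavelength: int = 1) -> int:
    """Servers one fiber can host: DWDM alone gives one server per
    wavelength; layering TDM on top multiplexes several servers onto each."""
    return wavelengths * tdm_slots_per_wavelength

print(servers_per_fiber(40))       # DWDM only → 40
print(servers_per_fiber(40, 4))    # DWDM + 4-slot TDM → 160
```

The trade-off the paper evaluates is whether the extra sharing per wavelength (serialization into time slots) costs more performance than it gains in scale; their results suggest it usually does not.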


Proceedings of the 2012 Interconnection Network Architecture on On-Chip, Multi-Chip Workshop | 2012

Contention-aware node allocation policy for high-performance capacity systems

Ana Jokanovic; Cyriel Minkenberg; Jose Carlos Sancho; Ramón Beivide; German Rodriguez; Jesús Labarta

Inter-application network contention is seen as a major hurdle to achieving higher throughput in today's large-scale high-performance capacity systems. This effect is aggravated by current system schedulers that allocate jobs as soon as nodes become available, thus producing job fragmentation, i.e., the tasks of one job might be spread throughout the system instead of being allocated contiguously. This fragmentation increases the probability of sharing network resources with other applications, which produces higher inter-application network contention. In this paper, we propose a contention-aware node allocation technique. This technique is based on identifying which applications are most prone to causing a big impact on inter-application contention and obtaining a more contiguous allocation for these particular workloads. We demonstrate that, although a contiguous node allocation on slimmed fat-tree topologies may increase intra-application contention, the reduction in inter-application contention is more significant. Simulation experiments on a 2,048-node system running multiple applications showed that this technique reduces contention time by up to 35%.
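A minimal sketch of the allocation idea, assuming a flat node-ID space: carve the free list into contiguous runs and give the most contention-prone jobs the best-fitting contiguous run first. The job tuple layout and the `comm_intensity` score are invented for illustration; the paper's policy additionally reasons about the slimmed fat-tree topology.

```python
def allocate(jobs, free_nodes):
    """Greedy contention-aware allocation sketch.

    jobs: list of (job_id, size, comm_intensity) tuples.
    free_nodes: sorted list of free node ids.
    Returns {job_id: list of allocated node ids (or None if no run fits)}.
    """
    # Split the free list into maximal contiguous runs.
    runs, run = [], [free_nodes[0]]
    for n in free_nodes[1:]:
        if n == run[-1] + 1:
            run.append(n)
        else:
            runs.append(run)
            run = [n]
    runs.append(run)

    placement = {}
    # Serve the most contention-prone (communication-heavy) jobs first.
    for job_id, size, _ in sorted(jobs, key=lambda j: -j[2]):
        fits = [r for r in runs if len(r) >= size]
        if fits:
            best = min(fits, key=len)       # best fit: smallest viable run
            placement[job_id] = best[:size]
            runs.remove(best)
            if best[size:]:
                runs.append(best[size:])    # keep the leftover fragment
        else:
            placement[job_id] = None        # a real scheduler would defer
    return placement

jobs = [("A", 3, 0.9), ("B", 2, 0.1)]
print(allocate(jobs, [0, 1, 2, 5, 6, 7, 10]))
```

Serving communication-heavy jobs first is the crux: they are the ones for which fragmentation translates into inter-application contention, while compute-bound jobs tolerate scattered placement.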


international conference on parallel processing | 2011

Reducing the impact of soft errors on fabric-based collective communications

Jose Carlos Sancho; Ana Jokanovic; Jesús Labarta

Collective operations can have a big impact on the performance of scientific applications, especially at large scale. Fabric-based collectives have recently been proposed to address some scalability issues caused by OS jitter. However, soft errors are becoming the next factor that might significantly degrade collective performance at scale. This paper evaluates two approaches to mitigating the negative effect of soft errors on Fabric-based collectives. Both approaches replicate the individual packets of the collective multiple times. One replicates packets through independent output ports at every switch (spatial replication), whereas the other uses only one output port but sends multiple packets through it consecutively (temporal replication). Results on a 1,728-node cluster showed that temporal replication achieves 50% better performance than spatial replication in the presence of random soft errors.
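The benefit of k-fold packet replication can be quantified with a one-line reliability model: a packet is lost only if every replica is corrupted. This is our illustration under an independence assumption for per-copy soft errors; it says nothing about the port-usage cost that separates the temporal and spatial variants.

```python
def delivery_probability(p_soft_error: float, replicas: int) -> float:
    """Probability that at least one of `replicas` copies of a packet
    arrives intact, assuming soft errors hit each copy independently."""
    return 1.0 - p_soft_error ** replicas

# Single copy vs. duplicated packets at a 1% soft-error rate:
print(delivery_probability(0.01, 1))
print(delivery_probability(0.01, 2))
```

Under this model both replication schemes deliver the same reliability gain, so the 50% performance gap the paper measures must come from how the replicas consume switch ports and bandwidth, not from their error coverage.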


european conference on networks and communications | 2014

A novel SDN enabled hybrid optical packet/circuit switched data centre network: The LIGHTNESS approach

Shuping Peng; Dimitra Simeonidou; George Zervas; Reza Nejabati; Yan Yan; Yi Shu; Salvatore Spadaro; Jordi Perelló; Fernando Agraz; Davide Careglio; Harm J. S. Dorren; Wang Miao; Nicola Calabretta; Giacomo Bernini; Nicola Ciulli; Jose Carlos Sancho; Steluta Iordache; Yolanda Becerra; Montse Farreras; Matteo Biancani; Alessandro Predieri; Roberto Proietti; Zheng Cao; Lei Liu; S. J. B. Yoo


IET Computers and Digital Techniques | 2013

On the trade-off of mixing scientific applications on capacity high-performance computing systems

Ana Jokanovic; Jose Carlos Sancho; German Rodriguez; Cyriel Minkenberg; Ramón Beivide; Jesús Labarta

Collaboration


Dive into Jose Carlos Sancho's collaboration.

Top Co-Authors

Hugo Meyer (Barcelona Supercomputing Center)
Nicola Calabretta (Eindhoven University of Technology)
Ana Jokanovic (Barcelona Supercomputing Center)
Jesús Labarta (Barcelona Supercomputing Center)
Montse Farreras (Barcelona Supercomputing Center)
Wang Miao (Eindhoven University of Technology)
Davide Careglio (Polytechnic University of Catalonia)
Jordi Perelló (Polytechnic University of Catalonia)