Publication


Featured research published by Soheil Hassas Yeganeh.


ACM Special Interest Group on Data Communication | 2012

Kandoo: a framework for efficient and scalable offloading of control applications

Soheil Hassas Yeganeh; Yashar Ganjali

Limiting the overhead of frequent events on the control plane is essential for realizing a scalable Software-Defined Network. One way of limiting this overhead is to process frequent events in the data plane. This requires modifying switches and comes at the cost of visibility in the control plane. Taking an alternative route, we propose Kandoo, a framework for preserving scalability without changing switches. Kandoo has two layers of controllers: (i) the bottom layer is a group of controllers with no interconnection, and no knowledge of the network-wide state, and (ii) the top layer is a logically centralized controller that maintains the network-wide state. Controllers at the bottom layer run only local control applications (i.e., applications that can function using the state of a single switch) near datapaths. These controllers handle most of the frequent events and effectively shield the top layer. Kandoo's design enables network operators to replicate local controllers on demand and relieve the load on the top layer, which is the only potential bottleneck in terms of scalability. Our evaluations show that a network controlled by Kandoo has an order of magnitude lower control channel consumption compared to normal OpenFlow networks.
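
To make the two-layer design concrete, here is a minimal Python sketch (not Kandoo's actual API; class and method names are illustrative) of a local controller that absorbs frequent, switch-local events and relays only the events that need the network-wide view to the root controller.

# Illustrative sketch (not Kandoo's actual API): a local controller handles
# frequent, switch-local events itself and relays only the rare events
# needing network-wide state to the root controller.

class LocalController:
    def __init__(self, switch, root):
        self.switch = switch          # the single datapath this controller manages
        self.root = root              # logically centralized root controller
        self.local_apps = []          # apps that only need this switch's state

    def register_local_app(self, app):
        self.local_apps.append(app)

    def handle_event(self, event):
        for app in self.local_apps:
            if app.can_handle(event):
                # frequent events (e.g., per-switch monitoring probes) are
                # absorbed here, shielding the top layer from their load
                return app.process(self.switch, event)
        # only events that require the network-wide view go up one layer
        return self.root.handle_event(event)

class RootController:
    def __init__(self):
        self.network_state = {}       # network-wide state kept only at the top layer

    def handle_event(self, event):
        # global applications (e.g., routing) run here against the network-wide state
        ...

Local controllers like this can be replicated on demand next to the datapaths, which is the scaling lever the abstract describes.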


IEEE Communications Magazine | 2013

On scalability of software-defined networking

Soheil Hassas Yeganeh; Amin Tootoonchian; Yashar Ganjali

In this article, we deconstruct scalability concerns in software-defined networking and argue that they are not unique to SDN. We explore these often-voiced concerns in different settings, discuss scalability trade-offs in the SDN design space, and present some recent research on SDN scalability. Moreover, we enumerate overlooked yet important opportunities and challenges in scalability beyond the commonly used performance metrics.


Hot Topics in Networks | 2012

Rethinking end-to-end congestion control in software-defined networks

Monia Ghobadi; Soheil Hassas Yeganeh; Yashar Ganjali

TCP is designed to operate in a wide range of networks. Without any knowledge of the underlying network and traffic characteristics, TCP is doomed to continuously increase and decrease its congestion window size to embrace changes in the network or traffic. In light of the emerging popularity of centrally controlled Software-Defined Networks (SDNs), one might wonder whether we can take advantage of the global network view available at the controller to make faster and more accurate congestion control decisions. In this paper, we identify the need and the underlying requirements for a congestion control adaptation mechanism. To this end, we propose OpenTCP, a TCP adaptation framework that works in SDNs. OpenTCP allows network operators to define rules for tuning TCP as a function of network and traffic conditions. We also present a preliminary implementation of OpenTCP in a ~4000-node data center.
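
As a rough illustration of the idea, the following Python sketch shows what an operator-defined adaptation rule and control loop might look like; the statistics names, thresholds, and tuning knobs are hypothetical and not taken from the OpenTCP implementation.

# Hypothetical sketch of an OpenTCP-style adaptation rule: the controller
# periodically inspects its global view and pushes TCP parameter updates to
# end hosts. Names and thresholds are illustrative, not from the paper.

import time

def congestion_rule(stats):
    """Map network-wide statistics to TCP tuning actions."""
    if stats["avg_link_utilization"] < 0.3 and stats["loss_rate"] < 1e-4:
        # lightly loaded network: let flows start more aggressively
        return {"init_cwnd": 20, "slow_start": "enabled"}
    if stats["loss_rate"] > 1e-2:
        # heavy congestion: fall back to conservative defaults
        return {"init_cwnd": 4, "slow_start": "enabled"}
    return {}  # no change

def control_loop(collect_stats, push_to_hosts, interval_s=5):
    while True:
        update = congestion_rule(collect_stats())
        if update:
            push_to_hosts(update)     # e.g., via a small agent on each end host
        time.sleep(interval_s)

The point of the sketch is only the shape of the mechanism: rules are pure functions of the controller's global statistics, and the controller, not the end host, decides when TCP behavior should change.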


Asia-Pacific Software Engineering Conference | 2007

Approximation Algorithms for Software Component Selection Problem

Nima Haghpanah; Shahrouz Moaven; Jafar Habibi; Mehdi Kargar; Soheil Hassas Yeganeh

Today's software systems are more frequently composed from preexisting commercial or non-commercial components and connectors. These components provide complex and independent functionality and are engaged in complex interactions. Component-Based Software Engineering (CBSE) is concerned with composing, selecting and designing such components. As the popularity of this approach and hence the number of commercially available software components grows, selecting a set of components to satisfy a set of requirements while minimizing cost is becoming more difficult. This problem necessitates the design of efficient algorithms to automate component selection for software-developing organizations. We address this challenge through analysis of Component Selection, the NP-complete process of selecting a minimal cost set of components to satisfy a set of objectives. Due to the high order of computational complexity of this problem, we examine approximate solutions that make the component selection process practicable. We adapt a greedy approach and a genetic algorithm to approximate this problem. We examine the performance of the studied algorithms on a set of selected ActiveX components. Comparing the results of these two algorithms with the choices made by a group of human experts shows that we obtain better results using these approximation algorithms.
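
The greedy approach mentioned above is, in essence, the classic heuristic for weighted set cover. The sketch below assumes components are modeled as a cost plus the set of objectives they satisfy, which is an illustrative simplification of the paper's formulation.

# Sketch of the standard greedy heuristic for weighted set cover, the usual
# way to approximate minimum-cost component selection. Modeling a component
# as (cost, set of satisfied objectives) is an assumption for illustration.

def greedy_select(components, objectives):
    """components: dict name -> (cost, set of objectives it satisfies).
    Returns a list of component names covering `objectives`, or None."""
    uncovered = set(objectives)
    chosen = []
    while uncovered:
        # pick the component with the best cost per newly covered objective
        best = min(
            (c for c in components.items() if c[1][1] & uncovered),
            key=lambda c: c[1][0] / len(c[1][1] & uncovered),
            default=None,
        )
        if best is None:
            return None               # remaining objectives cannot be satisfied
        name, (cost, provides) = best
        chosen.append(name)
        uncovered -= provides
    return chosen

# Example: two cheap specialized components beat one expensive monolith.
catalog = {
    "grid":   (3.0, {"table-view", "sorting"}),
    "charts": (2.0, {"plotting"}),
    "suite":  (9.0, {"table-view", "sorting", "plotting"}),
}
print(greedy_select(catalog, {"table-view", "sorting", "plotting"}))  # ['grid', 'charts']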


Very Large Data Bases | 2013

Discovering linkage points over web data

Oktie Hassanzadeh; Ken Q. Pu; Soheil Hassas Yeganeh; Renée J. Miller; Lucian Popa; Mauricio A. Hernández; Howard Ho

A basic step in integration is the identification of linkage points, i.e., finding attributes that are shared (or related) between data sources, and that can be used to match records or entities across sources. This is usually performed using a match operator that associates attributes of one database with those of another. However, the massive growth in the amount and variety of unstructured and semi-structured data on the Web has created new challenges for this task. Such data sources often do not have a fixed pre-defined schema and contain large numbers of diverse attributes. Furthermore, the end goal is not schema alignment, as these schemas may be too heterogeneous (and dynamic) to meaningfully align. Rather, the goal is to align any overlapping data shared by these sources. We will show that even attributes with different meanings (that would not qualify as schema matches) can sometimes be useful in aligning data. The solution we propose in this paper replaces the basic schema-matching step with a more complex instance-based schema analysis and linkage discovery. We present a framework consisting of a library of efficient lexical analyzers and similarity functions, and a set of search algorithms for effective and efficient identification of linkage points over Web data. We experimentally evaluate the effectiveness of our proposed algorithms in real-world integration scenarios in several domains.
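
A minimal illustration of instance-based linkage discovery: tokenize each attribute's observed values and rank cross-source attribute pairs by value overlap. The paper uses a library of lexical analyzers, similarity functions, and search algorithms; plain Jaccard similarity over lowercase word tokens is only a simple stand-in here.

# Minimal sketch of instance-based linkage-point discovery: tokenize the
# values of each attribute and rank cross-source attribute pairs by value
# overlap. Jaccard over lowercase tokens stands in for the paper's richer
# library of analyzers and similarity functions.

import re

def value_tokens(values):
    tokens = set()
    for v in values:
        tokens.update(re.findall(r"\w+", str(v).lower()))
    return tokens

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def rank_linkage_points(source_a, source_b, min_score=0.1):
    """source_a, source_b: dict attribute-name -> list of observed values."""
    candidates = []
    for attr_a, vals_a in source_a.items():
        ta = value_tokens(vals_a)
        for attr_b, vals_b in source_b.items():
            score = jaccard(ta, value_tokens(vals_b))
            if score >= min_score:
                candidates.append((score, attr_a, attr_b))
    return sorted(candidates, reverse=True)

# Attributes with different names can still align on their data:
source_a = {"label": ["IBM", "Apple Inc."], "hq": ["Armonk", "Cupertino"]}
source_b = {"company_name": ["ibm", "apple inc"], "city": ["armonk", "cupertino"]}
print(rank_linkage_points(source_a, source_b))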


Hot Topics in Networks | 2014

Beehive: Towards a Simple Abstraction for Scalable Software-Defined Networking

Soheil Hassas Yeganeh; Yashar Ganjali

Simplicity is a prominent advantage of Software-Defined Networking (SDN), and is often exemplified by implementing complicated control logic as a simple control application on a centralized controller. In practice, however, SDN controllers turn into distributed systems due to performance and reliability limitations, and the supposedly simple control applications transform into complex logic that demands significant effort to design and optimize. In this paper, we present Beehive, a distributed control platform aimed at simplifying this process. Our proposal is built around a programming abstraction that is almost identical to a centralized controller yet enables the platform to automatically infer how applications maintain their state and depend on one another. Using this abstraction, the platform automatically generates the distributed version of each control application, while preserving its behavior. With runtime instrumentation, the platform dynamically migrates applications among controllers to optimize the control plane as a whole. Beehive also provides feedback to identify design bottlenecks in control applications, helping developers enhance the performance of the control plane. Our prototype shows that Beehive significantly simplifies the process of realizing distributed control applications.


Computers & Electrical Engineering | 2010

Semantic web service composition testbed

Soheil Hassas Yeganeh; Jafar Habibi; Habib Rostami; Hassan Abolhassani

A huge amount of web services are deployed on the Web, nowadays. These services can be used to fulfill online requests. Requests are getting more and more complicated over time. So, there exists a lot of frequent request that cannot be fulfilled using just one web service. For using web services, composing individual services to create the added-value composite web service to fulfill the user request is necessary in most cases. Web services can be composed manually but it is a too tedious and time consuming task. The ability of automatic web service composition to create a new composite web service is one of the key enabling features for the future for the semantic web. There are some successful methods for automatic web service composition, but the lack of standard, open, and lightweight test environment makes the comparison and evaluation of these composition methods impossible. In this paper we propose an architecture for a light weight and scalable testbed to execute, test and evaluate automatic web service composition algorithms. The architecture provides mandatory components for implementing and evaluation of automatic web service composition algorithms. Also, this architecture provides some extension mechanisms to extend its default functionalities. We have also given reference implementations for web service matchmaking and composition. Also, some scenarios for testing and evaluating the testbed are given. We have found that the performance of the composition method will dramatically decrease as the number of web services increases.


Symposium on SDN Research | 2016

Beehive: Simple Distributed Programming in Software-Defined Networks

Soheil Hassas Yeganeh; Yashar Ganjali

In this paper, we present the design and implementation of Beehive, a distributed control platform with a simple programming model. In Beehive, control applications are centralized asynchronous message handlers that optionally store their state in dictionaries. Beehive's control platform automatically infers the keys required to process a message, and guarantees that each key is only handled by one lightweight thread of execution (i.e., bee) among all controllers (i.e., hives) in the platform. With that, Beehive transforms a centralized application into a distributed system, while preserving the application's intended behavior. Beehive replicates the dictionaries of control applications consistently through mini-quorums (i.e., colonies), instruments applications at runtime, and dynamically changes the placement of control applications (i.e., live migrates bees) to optimize the control plane. Our implementation of Beehive is open source, high-throughput, and capable of fast failovers. We have implemented an SDN controller on top of Beehive that can handle 200K OpenFlow messages per machine, while persisting and replicating the state of control applications. We also demonstrate that not only can Beehive tolerate faults, but it is also capable of optimizing control applications after a failure or a change in the workload.
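
The following Python sketch (Beehive's actual API is not Python; the names here are illustrative) captures the programming model described above: an application is an asynchronous message handler whose state lives in dictionaries, and the platform routes each message to the single bee that owns the dictionary keys the handler needs.

# Minimal sketch of the programming model, not Beehive's real API: a control
# application declares which (dictionary, key) cells a message needs, and a
# toy dispatcher pins each set of cells to exactly one "bee".

class SwitchStats:
    """A centralized-looking application: count packet-ins per switch."""

    dictionaries = ("counts",)

    def map(self, msg):
        # tell the platform which (dictionary, key) cells this message needs;
        # messages mapped to the same cells go to the same bee, so per-key
        # handling stays single-threaded without explicit locking
        return [("counts", msg["switch_id"])]

    def rcv(self, msg, state):
        counts = state["counts"]
        counts[msg["switch_id"]] = counts.get(msg["switch_id"], 0) + 1

def dispatch(app, messages):
    # stand-in for the platform: route each message to the bee owning its
    # mapped cells, creating per-key bee state on demand
    bees = {}
    for msg in messages:
        cells = tuple(app.map(msg))
        state = bees.setdefault(cells, {d: {} for d in app.dictionaries})
        app.rcv(msg, state)
    return bees

print(dispatch(SwitchStats(), [{"switch_id": "s1"}, {"switch_id": "s1"}, {"switch_id": "s2"}]))

In the real system, replication, runtime instrumentation, and live migration of bees happen behind this same handler-plus-dictionaries interface.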


International Conference on Computer Communications and Networks | 2012

CUTE: Traffic Classification Using TErms

Soheil Hassas Yeganeh; Milad Eftekhar; Yashar Ganjali; Ram Keralapura; Antonio Nucci

Among different traffic classification approaches, Deep Packet Inspection (DPI) methods are considered the most accurate. These methods, however, have two drawbacks: (i) they are not efficient since they use complex regular expressions as protocol signatures, and (ii) they require manual intervention to generate and maintain signatures, partly due to the signature complexity. In this paper, we present CUTE, an automatic traffic classification method, which relies on sets of weighted terms as protocol signatures. The key idea behind CUTE is the observation that, given appropriate weights, the occurrence of a specific term is more important than the relative location of terms in a flow. This observation is based on experimental evaluations as well as theoretical analysis, and leads to several key advantages over previous classification techniques: (i) CUTE is significantly faster than other classification schemes since matching flows against weighted terms is much faster than matching regular expressions; (ii) CUTE can classify network traffic using only the first few bytes of the flows in most cases; and (iii) unlike most existing classification techniques, CUTE can be used to classify partial (or even slightly modified) flows. Even though CUTE replaces complex regular expressions with a set of simple terms, using theoretical analysis and experimental evaluations (based on two large packet traces from tier-one ISPs), we show that its accuracy is as good as or better than existing complex classification schemes, i.e., CUTE achieves precision and recall rates of more than 90%. Additionally, CUTE can successfully classify more than half of the flows that other DPI methods fail to classify.
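
As a toy illustration of classification with weighted terms: score a flow's first bytes by the total weight of the protocol terms that occur in them, and pick the highest-scoring protocol above a threshold. The terms, weights, and threshold below are made up for illustration; the paper derives its weighted term sets automatically from traffic rather than by hand.

# Sketch of weighted-term classification: sum the weights of the protocol
# terms found in a flow's first bytes and pick the best-scoring protocol.
# Signatures and the threshold are illustrative, not from the paper.

SIGNATURES = {
    "http": {b"GET ": 0.9, b"HTTP/1.": 0.8, b"Host:": 0.6},
    "smtp": {b"EHLO": 0.9, b"MAIL FROM": 0.8, b"220 ": 0.5},
}

def classify(flow_prefix, threshold=1.0):
    """flow_prefix: the first few bytes of a flow's payload."""
    scores = {
        proto: sum(w for term, w in terms.items() if term in flow_prefix)
        for proto, terms in SIGNATURES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))  # -> http
print(classify(b"\x16\x03\x01\x02\x00"))                               # -> unknown

Because only term occurrence matters (not term position or a full regular-expression match), this style of matching works on flow prefixes and on partial or slightly modified flows, which is the advantage the abstract highlights.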


Asia International Conference on Modelling and Simulation | 2007

Semantic Composability Measure for Semantic Web Services

Elham Paikari; Jafar Habibi; Soheil Hassas Yeganeh

Motivated by the problem of automatically composing network-accessible services, such as those on the World Wide Web, this paper proposes an approach that exploits all of the semantic information available for semantic Web services to complete this task. For each semantic Web service, we propose a prioritized, bounded list of other Web services, each annotated with a composability measure based on how feasible it is as the succeeding service in a composition that fulfils the request. Because the semantic information of semantic Web services - all descriptions and signatures - is defined in an ontology language, we use a set of mapping rules to estimate the correlation between services for composability. We then order the candidates by their composability measure and select the top n to form a list that can be referenced when planning a composition with a high degree of automation.
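
A rough sketch of how such a composability measure might be computed: score how well one service's outputs cover a candidate successor's inputs using coarse matching degrees over ontology concepts, then keep the top-n candidates. The concept hierarchy, degree values, and service descriptions below are assumptions for illustration, not the paper's mapping rules.

# Sketch of a composability measure between semantic Web services: score how
# well one service's outputs cover a successor's inputs via coarse concept
# matching, then rank successors. All names and values are illustrative.

SUBCLASS_OF = {"CityAddress": "Address", "Address": "Location"}

def match_degree(provided, required):
    if provided == required:
        return 1.0                    # exact concept match
    c = SUBCLASS_OF.get(provided)
    while c:
        if c == required:
            return 0.7                # provided concept is subsumed by the required one
        c = SUBCLASS_OF.get(c)
    return 0.0

def composability(svc_out, successor_in):
    """Average of the best match each required input gets from the outputs."""
    if not successor_in:
        return 0.0
    return sum(max((match_degree(o, i) for o in svc_out), default=0.0)
               for i in successor_in) / len(successor_in)

def top_successors(service, candidates, n=3):
    scored = [(composability(service["out"], c["in"]), c["name"]) for c in candidates]
    return sorted(scored, reverse=True)[:n]

geocoder = {"name": "Geocode", "out": ["CityAddress"]}
print(top_successors(geocoder, [
    {"name": "RouteTo", "in": ["Location"]},
    {"name": "Weather", "in": ["Address"]},
    {"name": "Translate", "in": ["Text"]},
]))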
