Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ryan K.L. Ko is active.

Publication


Featured research published by Ryan K.L. Ko.


World Congress on Services | 2011

TrustCloud: A Framework for Accountability and Trust in Cloud Computing

Ryan K.L. Ko; Peter Jagadpramana; Miranda Mowbray; Siani Pearson; Markus Kirchberg; Qianhui Liang; Bu Sung Lee

The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively researched, there is still little focus on detective controls related to cloud accountability and auditability. The complexity resulting from large-scale virtualization and data distribution in current clouds has revealed an urgent research agenda for cloud accountability, as has the shift in focus of customer concerns from servers to data. This paper discusses key issues and challenges in achieving a trusted cloud through the use of detective controls, and presents the TrustCloud framework, which addresses accountability in cloud computing via technical and policy-based approaches.
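
The abstract does not describe TrustCloud's internals, but the core idea of a detective, accountability-oriented control can be illustrated with a tamper-evident, append-only audit log. The record fields and hash-chaining scheme below are illustrative assumptions for the sketch, not the TrustCloud design.

```python
import hashlib
import json
import time

class AuditLog:
    """Minimal tamper-evident audit log: each record carries the hash of the
    previous record, so any retroactive edit breaks the chain.
    Illustrative sketch only; not the TrustCloud implementation."""

    def __init__(self):
        self.records = []
        self.prev_hash = "0" * 64

    def append(self, actor, action, data_id):
        record = {
            "ts": time.time(),
            "actor": actor,        # e.g. VM, service, or user performing the action
            "action": action,      # e.g. "read", "write", "transfer"
            "data_id": data_id,    # identifier of the data item acted upon
            "prev": self.prev_hash,
        }
        self.prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)

    def verify(self):
        """Recompute the chain and report whether any record was altered."""
        prev = "0" * 64
        for rec in self.records:
            if rec["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(rec, sort_keys=True).encode()).hexdigest()
        return True
```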


Defense Science Research Conference and Expo (DSR) | 2011

From system-centric to data-centric logging - Accountability, trust & security in cloud computing

Ryan K.L. Ko; Markus Kirchberg; Bu Sung Lee

Cloud computing signifies a paradigm shift from owning computing systems to buying computing services. This shift has surfaced key concerns such as the transparency of data transfer and access within the cloud, and the lack of clarity in data ownership. To address these concerns, we propose a new way of approaching traditional security and trust problems: adopting detective, data-centric thinking instead of the classical preventive, system-centric thinking. While classical preventive approaches are useful, they play a catch-up game and often do not directly address the underlying problems (e.g. data accountability and data retention). In this paper, we propose a data-centric, detective approach to increase the trust and security of data in the cloud. Our framework, known as TrustCloud, contains a suite of techniques that address cloud security, trust and accountability from a detective approach at all levels of granularity. TrustCloud also extends detective techniques to policies and regulations governing IT systems.
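
As a rough illustration of the shift the paper argues for, a system-centric log record answers "what happened on this machine?", while a data-centric record follows one data item wherever it goes. The field names below are assumptions made for the sketch, not the paper's schema.

```python
from dataclasses import dataclass

@dataclass
class SystemCentricEntry:
    # Classical view: what happened on this machine?
    host: str          # e.g. "vm-eu-west-17"
    pid: int
    syscall: str       # e.g. "open", "sendto"
    timestamp: float

@dataclass
class DataCentricEntry:
    # Data-centric view: what happened to this piece of data, wherever it went?
    data_id: str       # stable identifier of the data item (e.g. a content hash)
    event: str         # e.g. "created", "copied", "transferred", "deleted"
    source_host: str
    dest_host: str     # equals source_host for local events
    actor: str         # service or user responsible
    timestamp: float

def history_of(data_id, log):
    """Reconstruct the lifecycle of one data item from a data-centric log."""
    return sorted((e for e in log if e.data_id == data_id),
                  key=lambda e: e.timestamp)
```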


Trust, Security and Privacy in Computing and Communications | 2013

S2Logger: End-to-End Data Tracking Mechanism for Cloud Data Provenance

Chun Hui Suen; Ryan K.L. Ko; Yu Shyang Tan; Peter Jagadpramana; Bu Sung Lee

The inability to effectively track data in cloud computing environments is becoming one of the top concerns for cloud stakeholders. This inability stems from two main causes: a lack of data tracking tools built for clouds, and logging mechanisms that are designed only from a system-centric perspective. There is a need for data-centric logging techniques which can trace data activities (e.g. file creation, editing, duplication, transfer and deletion) within and across all cloud servers, enabling full transparency and accountability for data movements in the cloud. In this paper, we introduce S2Logger, a data event logging mechanism which captures, analyses and visualizes data events in the cloud from the data point of view. By linking together atomic data events captured at both file and block level, the resulting sequence of data events depicts the cloud data provenance records throughout the data lifecycle. With this information, we can then detect critical data-related cloud security problems such as malicious actions, data leakages and data policy violations by analysing the data provenance. S2Logger also enables us to address the gaps and inadequacies of existing system-centric security tools.
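
A minimal sketch of the data-centric idea, assuming a simplified event tuple rather than S2Logger's actual file- and block-level records: atomic events are grouped into per-item provenance chains, which can then be scanned for policy violations such as a copy to a disallowed location.

```python
from collections import defaultdict

# Assumed event shape: (timestamp, data_id, action, location).
def build_provenance(events):
    """Group atomic data events into per-item provenance chains in lifecycle order."""
    chains = defaultdict(list)
    for ts, data_id, action, location in sorted(events):
        chains[data_id].append((ts, action, location))
    return chains

def find_policy_violations(chains, allowed_locations):
    """Flag any event that places a data item outside its allowed locations --
    a simple stand-in for leakage / policy-violation detection over provenance."""
    violations = []
    for data_id, chain in chains.items():
        for ts, action, location in chain:
            if location not in allowed_locations.get(data_id, set()):
                violations.append((data_id, ts, action, location))
    return violations

events = [
    (1.0, "report.doc", "create",   "server-A"),
    (2.0, "report.doc", "copy",     "server-B"),
    (3.0, "report.doc", "transfer", "external-host"),   # leaves the approved zone
]
chains = build_provenance(events)
print(find_policy_violations(chains, {"report.doc": {"server-A", "server-B"}}))
```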


Trust, Security and Privacy in Computing and Communications | 2012

Tracking of Data Leaving the Cloud

Yu Shyang Tan; Ryan K.L. Ko; Peter Jagadpramana; Chun Hui Suen; Markus Kirchberg; Teck Hooi Lim; Bu Sung Lee; Anurag Singla; Ken Mermoud; Doron Keller; Ha Duc

Data leakages out of cloud computing environments are a fundamental cloud security concern for both end-users and cloud service providers. A literature survey of existing technologies revealed their inadequacies and the need for a new methodology. This position paper discusses the requirements and proposes a novel auditing methodology that enables tracking of data transferred out of clouds. Initial results from our prototypes are reported. This research is aligned with our vision that by providing transparency, accountability and audit trails for all data events within and out of the cloud, trust and confidence can be instilled into the industry, as users will know exactly what is happening to their data in and out of the cloud.
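
The auditing methodology itself is not detailed in the abstract; the sketch below only illustrates the general notion of an egress audit trail, assuming hypothetical cloud-owned network ranges and a simplified transfer record.

```python
import ipaddress

# Hypothetical cloud-owned network ranges (illustration only).
CLOUD_RANGES = [ipaddress.ip_network("10.0.0.0/8"),
                ipaddress.ip_network("192.168.0.0/16")]

def leaves_cloud(dest_ip):
    """True if the destination address lies outside all cloud-owned ranges."""
    addr = ipaddress.ip_address(dest_ip)
    return not any(addr in net for net in CLOUD_RANGES)

def audit_transfers(transfers):
    """transfers: iterable of (timestamp, data_id, dest_ip).
    Returns the audit trail of transfers that left the cloud."""
    return [(ts, data_id, dest_ip)
            for ts, data_id, dest_ip in transfers
            if leaves_cloud(dest_ip)]

print(audit_transfers([(1, "vm-image.qcow2", "10.1.2.3"),
                       (2, "customer.db",    "203.0.113.7")]))
```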


High Performance Computing and Communications | 2013

Security and Data Accountability in Distributed Systems: A Provenance Survey

Yu Shyang Tan; Ryan K.L. Ko; Geoff Holmes

While provenance research is common in distributed systems, many proposed solutions do not address the security of systems and the accountability of data stored in those systems. In this paper, we survey provenance solutions which were proposed to address the problems of system security and data accountability in distributed systems. From our survey, we derive a set of minimum requirements that are necessary for a provenance system to be effective in addressing the two problems. Finally, we identify several gaps in the surveyed solutions and present them as challenges that future provenance researchers should tackle. We argue that these gaps have to be addressed before a complete and foolproof provenance solution can be arrived at.


International Conference on Formal Concept Analysis | 2012

Formal concept discovery in semantic web data

Markus Kirchberg; Erwin Leonardi; Yu Shyang Tan; Sebastian Link; Ryan K.L. Ko; Bu Sung Lee

Semantic Web efforts aim to bring the WWW to a state in which all its content can be interpreted by machines, the ultimate goal being a machine-processable Web of Knowledge. We strongly believe that adding a mechanism to extract and compute concepts from the Semantic Web will help to achieve this vision. However, there are a number of open questions that need to be answered first. In this paper we establish partial answers to the following questions: 1) Is it feasible to obtain data from the Web (instantaneously) and compute formal concepts without considerable overhead? 2) Do data sets found on the Web have distinct properties and, if so, how do these properties affect the performance of concept discovery algorithms? 3) Do state-of-the-art concept discovery algorithms scale with respect to the number of data objects found on the Web?
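
As a self-contained reminder of what a formal concept is, here is a naive enumeration over a toy binary context. The RDF-flavoured example data is made up, and real concept discovery algorithms such as those benchmarked in the paper are far more scalable than this exponential sketch.

```python
from itertools import combinations

def formal_concepts(context):
    """Enumerate all formal concepts (extent, intent) of a small binary context.

    context: dict mapping each object to the set of attributes it has.
    Every concept intent is an intersection of object intents (plus the full
    attribute set), so we collect those intersections and derive the matching
    extents. Exponential; fine for toy contexts only.
    """
    objects = list(context)
    attributes = set().union(*context.values()) if context else set()

    intents = {frozenset(attributes)}          # intent paired with the empty-ish extent
    for r in range(1, len(objects) + 1):
        for subset in combinations(objects, r):
            common = frozenset(set.intersection(*(set(context[g]) for g in subset)))
            intents.add(common)

    concepts = []
    for intent in intents:
        extent = {g for g in objects if intent <= context[g]}
        concepts.append((extent, set(intent)))
    return concepts

# Toy context resembling RDF-style subject/property data (illustrative only).
ctx = {
    "dbpedia:Berlin":  {"type:City", "locatedIn:Germany"},
    "dbpedia:Hamburg": {"type:City", "locatedIn:Germany", "hasPort"},
    "dbpedia:Munich":  {"type:City", "locatedIn:Germany"},
}
for extent, intent in formal_concepts(ctx):
    print(sorted(extent), "<->", sorted(intent))
```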


IEEE International Conference on Cloud Computing Technology and Science | 2014

A toolkit for automating compliance in cloud computing services

Nick Papanikolaou; Siani Pearson; Marco Casassa Mont; Ryan K.L. Ko

We present an integrated approach for automating service providers' compliance with data protection laws and regulations, as well as business and technical requirements, in cloud computing. The techniques we propose include: natural language analysis of legislative and regulatory texts and corporate security rulebooks, with extraction of enforceable rules; the use of sticky policies; and automated policy enforcement and active monitoring of data, particularly in cloud environments. We are currently developing a software tool for semantic annotation and natural language processing of cloud terms-of-service (ToS) and other related policy texts. We describe our implementations of two parts of the proposed toolkit, namely the semantic annotation editor and the EnCoRe policy enforcement framework. We also identify opportunities for future software development in the area of cloud computing compliance.
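
The sticky-policy idea can be sketched as a machine-readable policy that travels with the data and is checked before each use. The field names and rule format below are assumptions made for illustration, not the EnCoRe framework's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class StickyPolicy:
    # Hypothetical policy fields; real sticky policies are richer.
    allowed_purposes: set = field(default_factory=set)   # e.g. {"billing"}
    allowed_regions: set = field(default_factory=set)    # e.g. {"EU"}

@dataclass
class DataObject:
    payload: bytes
    policy: StickyPolicy          # the policy stays attached to the data

def enforce(data: DataObject, purpose: str, region: str) -> bytes:
    """Release the payload only if the requested use satisfies the sticky policy."""
    if purpose not in data.policy.allowed_purposes:
        raise PermissionError(f"purpose '{purpose}' not permitted by policy")
    if region not in data.policy.allowed_regions:
        raise PermissionError(f"processing region '{region}' not permitted by policy")
    return data.payload

record = DataObject(b"customer address ...",
                    StickyPolicy({"billing"}, {"EU"}))
enforce(record, "billing", "EU")        # allowed
# enforce(record, "marketing", "EU")    # would raise PermissionError
```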


International Conference on Web Services | 2012

Overcoming Large Data Transfer Bottlenecks in RESTful Service Orchestrations

Ryan K.L. Ko; Markus Kirchberg; Bu Sung Lee; Elroy Chew

As RESTful (Representational State Transfer) services are closely coupled to HTTP (Hypertext Transfer Protocol), which in turn sits above the connection-based TCP (Transmission Control Protocol), it is common for RESTful services to experience latency and transfer inefficiencies, especially when they must transfer large-scale data (e.g. gigabytes or more) in RESTful workflows. Such inefficiencies are undesirable and impractical, and are compounded for RESTful service orchestrations in data-intensive domains such as big data analytics, cloud computing and the life sciences. In this paper, we propose a novel, non-invasive technique, Fast-Optimised-REST (FOREST), which enables RESTful services to overcome the traditional bottlenecks experienced during transfers of large data sets. Initial experimental results show promise, demonstrating reductions of up to 80% over original RESTful data transfer times for extremely large data sets.
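
FOREST's actual mechanism is not given in the abstract. As a generic illustration of one common way to reduce large-transfer latency over HTTP, the sketch below fetches a resource as parallel Range-request chunks and reassembles them in order (assuming the server supports byte ranges); it is not the FOREST technique itself.

```python
import concurrent.futures
import urllib.request

def fetch_range(url, start, end):
    """Fetch bytes [start, end] of url via an HTTP Range request."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def parallel_download(url, total_size, chunk_size=8 * 1024 * 1024, workers=4):
    """Download a large resource as parallel ranged chunks and reassemble in order."""
    ranges = [(s, min(s + chunk_size, total_size) - 1)
              for s in range(0, total_size, chunk_size)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda r: fetch_range(url, *r), ranges))
    return b"".join(data for _, data in sorted(parts))
```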


IEEE International Conference on Cloud Computing Technology and Science | 2014

A Mantrap-Inspired, User-Centric Data Leakage Prevention (DLP) Approach

Ryan K.L. Ko; Alan Yu Shyang Tan; Ting Gao

The ease of sharing information through the Internet and cloud computing inadvertently introduces a growing problem of data leakages. At the same time, many end-users are unaware that their data has been leaked or stolen, since most data is leaked by operations running in the background. This paper introduces a novel user-centric, mantrap-inspired data leakage prevention (DLP) approach that can discover and present any sending of data -- both authorized and unauthorized -- to end-users, and subsequently give them the ability to stop the sending process. We implemented our own kernel module that works together with our user-space program to obtain the user's approval for every sending process, giving the user full control over all outbound data-sending processes on their devices. With this, the end-user can always decide which data-sending processes should be allowed or blocked. This overcomes the limitations of current DLP solutions, which are often inflexible and inaccurate because they depend on pre-set rules and content detection. We showcase a proof of concept for our new way of detecting data leakages on an end user's device. This paves the way for further research covering more complex data-stealing techniques, such as the use of covert channels.
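
Only the user-space decision loop of such an approach can be sketched briefly; the kernel module that actually holds outbound sends is not shown here, and the event format below is an assumption made for illustration.

```python
# Sketch of the user-space half of a mantrap-style DLP control: a (hypothetical)
# kernel component would hold each outbound send and hand an event to user space,
# which asks the user to allow or block it and remembers the decision.
def decide(event, cache, ask=input):
    """event: dict with 'process' and 'destination'. Returns True to allow."""
    key = (event["process"], event["destination"])
    if key not in cache:
        answer = ask(f"Allow {event['process']} to send data to "
                     f"{event['destination']}? [y/N] ")
        cache[key] = answer.strip().lower() == "y"
    return cache[key]

decisions = {}
pending = [{"process": "backup-agent", "destination": "storage.example.com"},
           {"process": "unknown.exe",  "destination": "203.0.113.9"}]
for ev in pending:
    verdict = "ALLOW" if decide(ev, decisions) else "BLOCK"
    print(verdict, ev["process"], "->", ev["destination"])
```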


International Conference on Cloud Computing | 2014

Virtual Numbers for Virtual Machines

Alan Yu Shyang Tan; Ryan K.L. Ko; Veena B. Mendiratta

Knowing how many virtual machines (VMs) a cloud's physical hardware can (further) support is critical, as it has implications for provisioning and hardware procurement. However, current methods for estimating the maximum number of VMs possible on a given piece of hardware are usually based on the ratio of a VM's specifications to the underlying cloud hardware's specifications. Such naive, linear estimation methods mostly yield impractical limits on how many VMs the hardware can actually support: we found that, at the limits given by the naive division method, the user experience on the VMs is severely degraded. In this paper, we demonstrate through experimental results the significant gap between the limits derived using this estimation method and the actual situation. We believe that, for a more practicable estimate of the limits of the underlying infrastructure, the dominant workload of the VMs should also be factored in.
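
The contrast between the two estimates can be shown with simple arithmetic; the utilisation figures and headroom factor below are made-up inputs used purely to illustrate the calculation, not measurements from the paper.

```python
def naive_vm_limit(host_cores, host_ram_gb, vm_cores, vm_ram_gb):
    """Naive estimate: ratio of host specifications to per-VM specifications."""
    return min(host_cores // vm_cores, host_ram_gb // vm_ram_gb)

def workload_aware_vm_limit(host_cores, host_ram_gb, vm_cores, vm_ram_gb,
                            cpu_util, ram_util, headroom=0.8):
    """Adjust by the dominant workload's average utilisation per VM and keep
    some headroom so user experience does not degrade at the limit."""
    cpu_limit = (host_cores * headroom) / (vm_cores * cpu_util)
    ram_limit = (host_ram_gb * headroom) / (vm_ram_gb * ram_util)
    return int(min(cpu_limit, ram_limit))

print(naive_vm_limit(32, 256, 2, 8))                          # -> 16
print(workload_aware_vm_limit(32, 256, 2, 8, 0.9, 0.7, 0.8))  # -> 14 with these inputs
```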

Collaboration


Dive into Ryan K.L. Ko's collaborations.

Top Co-Authors

Markus Kirchberg

National University of Singapore

Yu Shyang Tan

Nanyang Technological University
