Publication


Featured research published by Yang Hang.


International Conference on Hybrid Intelligent Systems | 2008

Using Genetic Algorithm for Hybrid Modes of Collaborative Filtering in Online Recommenders

Simon Fong; Yvonne Ho; Yang Hang

Online recommenders usually refer to those used in e-commerce websites for suggesting a product or service out of many choices. The core technologies behind this type of recommender include content analysis, collaborative filtering, and hybrid variants. Since each has its own strengths and limitations, combining them may be a promising solution, provided there is a way of handling the large number of input variables that arises especially from combining different techniques. A genetic algorithm (GA) is a suitable optimization and search method for finding the best recommendation out of a large population of candidates. In this paper we present a GA-based approach for supporting combined modes of collaborative filtering. In particular, we show how the input variables can be encoded into GA chromosomes in various modes. Insights into how a GA can be used in recommenders are derived through our experiments with input data taken from MovieLens and IMDb.
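The paper itself does not include code; purely as an illustrative sketch (the held-out samples, weights, and GA parameters below are assumptions, and the chromosome here holds only two blend weights), a GA can evolve the weights that combine a content-based score with a collaborative-filtering score:

```python
import random

random.seed(42)

# Toy held-out samples: (content_score, cf_score, true_rating), all in [0, 1].
HELD_OUT = [(0.9, 0.2, 0.7), (0.1, 0.8, 0.6), (0.5, 0.5, 0.5), (0.3, 0.9, 0.8)]

def fitness(chrom):
    """Negated squared error of the blended prediction (higher is better)."""
    w_content, w_cf = chrom
    return -sum((w_content * c + w_cf * f - r) ** 2 for c, f, r in HELD_OUT)

def normalize(chrom):
    total = sum(chrom) or 1.0
    return [w / total for w in chrom]

def crossover(a, b):
    point = random.randint(1, len(a) - 1)
    return normalize(a[:point] + b[point:])

def mutate(chrom, rate=0.2):
    return normalize([max(0.0, w + random.gauss(0, 0.1))
                      if random.random() < rate else w for w in chrom])

population = [normalize([random.random(), random.random()]) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print("evolved blend weights (content, collaborative):",
      [round(w, 3) for w in best])
```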


International Conference on Digital Information Management | 2010

Real-time business intelligence system architecture with stream mining

Yang Hang; Simon Fong

Business Intelligence (BI) capitalizes on data-mining and analytics techniques for discovering trends and reacting to events with quick decisions. We argue that a new breed of data mining, namely stream mining, in which continuous data streams arrive at the system and are mined very quickly, motivates the design of a new real-time BI architecture. In the past, stream mining (especially at the algorithmic level) and digital information system architectures have been studied separately. In this paper we attempt to present a unified view of a real-time BI system architecture powered by stream mining. Some typical applications that our architecture can support are described.
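No code accompanies the abstract; as a minimal sketch of the stream-mining property the architecture relies on (one-pass, incremental updates instead of periodic batch recomputation; the event source and fields below are hypothetical), a real-time BI layer can keep per-key statistics current as records arrive:

```python
import random
from collections import defaultdict

random.seed(0)

def event_stream(n=1000):
    """Hypothetical source, e.g. sales transactions tagged by product category."""
    for _ in range(n):
        yield random.choice(["books", "music", "games"]), random.uniform(5, 100)

# One-pass, constant-memory-per-key aggregation: the figures a dashboard shows
# are updated on every arriving record rather than recomputed from a warehouse.
counts = defaultdict(int)
totals = defaultdict(float)

for category, amount in event_stream():
    counts[category] += 1
    totals[category] += amount
    # a real rt-BI system would push the refreshed figure to the dashboard here

for category in sorted(counts):
    print(category, "average spend:", round(totals[category] / counts[category], 2))
```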


Journal of Information Processing Systems | 2011

Stream-based Biomedical Classification Algorithms for Analyzing Biosignals

Simon Fong; Yang Hang; Sabah Mohammed; Jinan Fiaidhi

Classification in biomedical applications is an important task that predicts or classifies an outcome based on a given set of input variables, such as diagnostic tests or the symptoms of a patient. Traditionally, classification algorithms would have to digest a stationary set of historical data in order to train a decision-tree model, and the learned model could then be used for testing new samples. However, a new breed of classification called stream-based classification can handle continuous data streams, which are ever evolving, unbounded, and unstructured, for instance live biosignal feeds. These emerging algorithms can potentially be used for real-time classification over biosignal data streams such as EEG and ECG. This paper presents a pioneering effort that studies the feasibility of classification algorithms for analyzing biosignals in the form of infinite data streams. First, a performance comparison is made between traditional and stream-based classification. The results show that accuracy declines intermittently for traditional classification due to the requirement of model re-learning as new data arrives. Second, we show by simulation that biosignal data streams can be processed with a satisfactory level of performance in terms of accuracy, memory requirement, and speed by using a collection of stream-mining algorithms called Optimized Very Fast Decision Trees. The algorithms can effectively serve as a cornerstone technology for real-time classification in future biomedical applications.
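The Very Fast Decision Tree family decides when enough stream examples have been seen to commit to a split using the Hoeffding bound; a small sketch of that test (the generic textbook form, not the authors' optimized implementation) is:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """With probability 1 - delta, the observed mean of n samples lies within
    epsilon of the true mean, for a statistic whose values span value_range."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best, gain_second, value_range, delta, n):
    """Split on the current best attribute once the observed gain gap exceeds
    the uncertainty epsilon for the n examples seen so far."""
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)

# Example: information gain with two classes is bounded by 1 bit, so
# value_range = 1. After 800 examples with delta = 1e-7:
print(should_split(gain_best=0.30, gain_second=0.18,
                   value_range=1.0, delta=1e-7, n=800))   # True
```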


Networked Computing and Advanced Information Management | 2009

A Framework of Business Intelligence-Driven Data Mining for E-business

Yang Hang; Simon Fong

This paper proposes a data mining methodology called Business Intelligence-driven Data Mining (BIdDM). It combines knowledge-driven data mining and method-driven data mining, and fills the gap between business intelligence knowledge and the various existing data mining methods in e-business. BIdDM contains two processes: a construction process for a four-layer framework and a data mining process. A methodology is established for setting up the four-layer framework, which is an important part of BIdDM. A case study of a B2C e-shop is provided to illustrate the use of BIdDM.


World Review of Science, Technology and Sustainable Development | 2010

CSET automated negotiation model for optimal supply chain formation

Yang Hang; Simon Fong; Zhuang Yan

In an effort to compose an optimal supply chain (SC), this paper puts forward a new collaborative agent-based single-machine earliness/tardiness (SET) model. It includes sub-agents designed to fairly coordinate and distribute job requests at the mid-stream levels. Extending the preceding SET model, collaborative SET (CSET) has a coordinating collaborative agent responsible for optimising the information flow and scheduling of the whole SC. This is done by coordinating the information flow at the sub-agents between every two streams. In the long run, this new model makes a complex dynamic SC more efficient and shortens response time. A simulator that implements the algorithms was programmed in order to calculate the amount of information transfer, time, and cost incurred under the SET and CSET models. The results generally indicate that the more streams an SC has, the greater the performance gain.
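The CSET algorithms and the simulator are not reproduced here; as a minimal sketch of the single-machine earliness/tardiness objective that SET-style scheduling works against (the job data and penalty weights below are assumptions), the cost of a job sequence can be computed as:

```python
def earliness_tardiness_cost(jobs, alpha=1.0, beta=1.5):
    """Toy single-machine earliness/tardiness cost.

    jobs:  list of (processing_time, due_date), processed in the given order
    alpha: per-unit earliness penalty (assumed value)
    beta:  per-unit tardiness penalty (assumed value)
    """
    completion, cost = 0.0, 0.0
    for processing_time, due_date in jobs:
        completion += processing_time
        cost += alpha * max(0.0, due_date - completion)   # finished too early
        cost += beta * max(0.0, completion - due_date)     # finished too late
    return cost

# The same three jobs in two different sequences.
jobs = [(3, 5), (2, 4), (1, 9)]
print(earliness_tardiness_cost(jobs))                              # 6.5
print(earliness_tardiness_cost(sorted(jobs, key=lambda j: j[1])))  # 5.0, earliest-due-date order
```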


IEEE Internet Computing | 2010

Investigating the Impact of Bursty Traffic on Hoeffding Tree Algorithm in Stream Mining over Internet

Yang Hang; Simon Fong

Stream data are continuous and ubiquitous in nature and can be found in many Web applications operating on the Internet. Some instances of stream data are web logs, online users' click-streams, online media streaming, and Web transaction records. Stream mining was proposed as a relatively new data analytics solution for handling such streams. It has been widely acclaimed for its usefulness in real-time decision-support applications, for example web recommenders. The Hoeffding Tree Algorithm (HTA) is one of the popular choices for implementing a Very Fast Decision Tree in stream mining. Its theoretical aspects have been studied extensively by researchers. However, the data streams fed into HTA are usually assumed in the literature to arrive at a constant rate, and HTA has not yet been tested under bursty traffic such as that found in the Internet environment. This paper sheds some light on the impact of bursty traffic on the performance of HTA in stream mining.
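The experimental traffic model is not given in the abstract; one common way to emulate bursty arrivals (an assumption on my part, not necessarily the traffic model used in the paper) is a simple on/off source, which confronts the tree learner with idle gaps followed by sudden floods of records:

```python
import random

random.seed(7)

def bursty_stream(total_seconds=60, burst_rate=200, p_start=0.1, p_stop=0.4):
    """Simple on/off bursty source: each second the source may switch between
    an 'on' state (emitting burst_rate records) and an 'off' state (silence)."""
    on = False
    for second in range(total_seconds):
        on = (random.random() < p_start) if not on else (random.random() > p_stop)
        yield second, burst_rate if on else 0

arrivals = list(bursty_stream())
busy = sum(1 for _, rate in arrivals if rate)
mean_rate = sum(rate for _, rate in arrivals) / len(arrivals)
print(f"bursty seconds: {busy}/{len(arrivals)}, mean rate: {mean_rate:.1f} records/s")
```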


Information Integration and Web-based Applications & Services | 2008

Double-agent architecture for collaborative supply chain formation

Yang Hang; Simon Fong

Supply chains have evolved into web applications that tap the power of the Internet to expand their networks online. Recently, research attention has focused on make-to-order supply chain formation, where orders are scheduled to be optimally distributed among online manufacturers and suppliers for mutual benefit. A SET model was proposed in [1] using Pareto theory. The model was then extended to operate collaboratively throughout the whole supply chain by incorporating the Just-in-Time (JIT) principle, yielding CSET. The CSET framework was proposed, and its advantage in time efficiency was shown, in [2]. The core of the CSET model is based on intelligent agent technology. Specifically, the model is supported by a double-agent architecture in which the two types of agents make provisional plans for order distribution by Pareto optimality and by JIT coordination, respectively. This paper defines these double-agent mechanisms in detail and demonstrates their merits via a simulation study.


Archive | 2011

Enabling Real-Time Business Intelligence by Stream Data Mining

Simon Fong; Yang Hang

Traditionally, Business Intelligence (BI) is defined as "a set of mathematical models and analysis methodologies that exploit the available data to generate information and knowledge useful for complex decision making processes" (Vercellis, 2009). The real-time aspect of BI seems to be missing from the classical studies. BI systems technically combine data collection, data storage, and knowledge management with analytical tools to present complex and competitive information to business strategic planners and decision makers (Negash, 2003). This type of BI system or architecture has served business usage for the past decades (Rao, 2000). Nowadays businesses have evolved to be more competitive and dynamic than in the past, which demands real-time BI and the capability of making very quick decisions. With this new market demand, recently published works (Yang & Fong, 2010; Sandu, 2008) advocated that BI should be specified in four dimensions: strategic, tactical, operational, and real-time. Most of the existing decision-support systems, however, are strategic and tactical; BI is produced by data mining either in the form of regular reports or as actionable information in digital format within a certain time frame. Although access to the BI database (sometimes called the knowledge base) and the decisions generated from data-mining rules are instant, the underlying historical data used for analysis may not be up to the latest minute or second. Compared with operational BI, real-time BI (rt-BI) should analyze the data as soon as it enters the organization. Ideally, the latency (data latency, analysis latency, decision latency) should be zero. In order to establish such real-time BI systems, relevant technologies that guarantee low or zero latency are necessary. For example, operational/real-time BI data warehouse techniques are able to provide fresh data access and updates. Thus operational BI can be viewed as rt-BI as long as it provides analytics within a very short time for decision making. The main approach is: the system response time shall stay under a threshold that is less than the action-taking time, and the rate of data processing shall be faster than the rate of data production. There are many real-time data mining algorithms in the theoretical literature, but their applicability and suitability for various real-time applications are still vague; so far no one has conducted an in-depth study of rt-BI with consideration of stream mining. We take this as the research motivation and hence the contribution of this chapter. The chapter is structured in the following way: Section 2 is an overview of the rt-BI system; the high-level framework, system architecture, and process are described. Section 3 is a
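The chapter's rt-BI framework is descriptive; the stability condition stated above can be written down directly (the figures below are placeholders, not measurements from the chapter):

```python
# Placeholder figures illustrating the rt-BI sustainability check described
# above: data must be processed at least as fast as it is produced, and the
# response must arrive before the action-taking deadline.

arrival_rate = 1200.0     # records produced per second
processing_rate = 1500.0  # records the stream-mining layer can handle per second
response_time = 0.4       # seconds from data arrival to actionable output
action_deadline = 1.0     # seconds within which the decision must be taken

keeps_up = processing_rate >= arrival_rate
in_time = response_time < action_deadline
print("sustainable real-time BI:", keeps_up and in_time)
```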


IEEE International Conference on Digital Ecosystems and Technologies | 2009

Simulating competition schemes in agent-mediated supply chain ecosystems

Yang Hang; Simon Fong; Yain-Whar Si; Robert P. Biuk-Aghai

With the rapid advance of information technology, supply chains have evolved from clusters of connected companies into a virtual e-marketplace that serves as a central hub for many companies that buy and sell. Over this large community of companies, supply chains can be dynamically formed by mediator agents. From an ecosystem point of view, the companies connected in the e-marketplace can be seen as individual, self-interested entities. They compete for survival as well as collaborate with each other on projects. This paper is concerned with simulating how dynamic make-to-order supply chains are formed under two different job competition schemes, from the perspective of a supply chain ecosystem. The simulation shows that the supply chain ecosystem grows in different directions under the two schemes. One scheme, the cost-driven principle, leads to destructive competition, while the other, the Pareto-optimal principle, evolves into cooperative competition that tries to mutually benefit every participant. Through a visualization tool that we built, we show that the Pareto-optimal principle is preferable to the cost-driven principle in the long term with regard to global survival.
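The paper's agent-based simulator and visualization tool are not reproduced here; a deliberately crude sketch (my own assumption about how starvation might be modelled, not the paper's rules) shows why winner-takes-all awarding can thin out an ecosystem while job sharing keeps every participant alive:

```python
def simulate(scheme, rounds=100, costs=(1.0, 1.1, 1.2, 1.3, 1.4), starvation_limit=10):
    """Toy ecosystem: one job is awarded per round. 'cost_driven' always picks
    the cheapest supplier; any other scheme rotates jobs among surviving
    suppliers. A supplier winning nothing for starvation_limit rounds exits."""
    alive = list(range(len(costs)))
    idle = {i: 0 for i in alive}
    for r in range(rounds):
        if scheme == "cost_driven":
            winner = min(alive, key=lambda i: costs[i])
        else:
            winner = alive[r % len(alive)]
        for i in list(alive):
            idle[i] = 0 if i == winner else idle[i] + 1
            if idle[i] > starvation_limit:
                alive.remove(i)
    return len(alive)

print("survivors under cost-driven awarding:", simulate("cost_driven"))  # 1
print("survivors under job sharing:         ", simulate("cooperative"))  # 5
```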


Information Integration and Web-based Applications & Services | 2008

On designing a market monitoring web agent system

Simon Fong; Yang Hang

The World Wide Web is a huge pool of valuable information for companies that want to know what their competitors are doing and what products and services they currently offer. Companies can gather business intelligence from the Web for planning countermeasure strategies, so it is crucial to have the right tool to gather such information effectively. Many information retrieval and monitoring technologies have been developed, but they are geared towards generally tracking changes and downloading whole websites for offline browsing. This paper sheds some light specifically on the design of a Web monitoring system for gathering business information relevant to a company. The Watcher Agent is a server-based system built from two main parts, namely the Price Watcher and the Market Watcher. The system assists company users in price information collection, news information filtering, and product ranking estimation, thus saving them time and effort.
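The Watcher Agent's internals are not published here; a minimal sketch of the page-change detection step that such a watcher typically rests on (hypothetical URL and polling interval, not the paper's implementation) might look like:

```python
import hashlib
import time
import urllib.request

# Hypothetical change-detection loop: fetch a competitor page, hash its body,
# and report when the hash changes. A real Price Watcher would parse out the
# price fields instead of fingerprinting the whole page.

URL = "https://example.com/products"   # placeholder URL
POLL_SECONDS = 3600

def page_fingerprint(url):
    with urllib.request.urlopen(url, timeout=30) as response:
        return hashlib.sha256(response.read()).hexdigest()

def watch(url, polls=3):
    last = None
    for _ in range(polls):
        current = page_fingerprint(url)
        if last is not None and current != last:
            print("page changed - notify the Market Watcher / company user")
        last = current
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch(URL)
```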
