Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Weiru Liu is active.

Publications


Featured research published by Weiru Liu.


Artificial Intelligence | 2002

Learning Bayesian networks from data: an information-theory based approach

Jie Cheng; Russell Greiner; Jonathan Kelly; David A. Bell; Weiru Liu

This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only a polynomial number of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct, as well as empirical evidence (from real-world applications and simulation tests) demonstrating that these systems work efficiently and reliably in practice.
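The key primitive behind the three-phase framework is an information-theoretic conditional independence test. As a loose illustration only, here is a minimal Python sketch of such a test using a plug-in estimate of conditional mutual information; the estimator and the threshold value are illustrative assumptions, not the paper's exact procedure:

    from collections import Counter
    from math import log2

    def conditional_mutual_information(data, x, y, z=()):
        """Plug-in estimate of I(X; Y | Z) in bits from rows of discrete data.
        data: list of tuples; x, y: column indices; z: tuple of column indices."""
        n = len(data)
        n_xyz = Counter((row[x], row[y], tuple(row[i] for i in z)) for row in data)
        n_xz = Counter((row[x], tuple(row[i] for i in z)) for row in data)
        n_yz = Counter((row[y], tuple(row[i] for i in z)) for row in data)
        n_z = Counter(tuple(row[i] for i in z) for row in data)
        # I(X;Y|Z) = sum_{x,y,z} p(x,y,z) * log2( p(x,y,z) p(z) / (p(x,z) p(y,z)) )
        return sum((c / n) * log2(c * n_z[vz] / (n_xz[(vx, vz)] * n_yz[(vy, vz)]))
                   for (vx, vy, vz), c in n_xyz.items())

    def ci_test(data, x, y, z=(), threshold=0.01):
        """Treat X and Y as conditionally independent given Z when the estimated
        conditional mutual information falls below a small threshold."""
        return conditional_mutual_information(data, x, y, z) < threshold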


Conference on Information and Knowledge Management | 1997

Learning belief networks from data: an information theory based approach

Jie Cheng; David A. Bell; Weiru Liu

This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only a polynomial number of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct, as well as empirical evidence (from real-world applications and simulation tests) demonstrating that these systems work efficiently and reliably in practice.


Artificial Intelligence | 2006

Analyzing the degree of conflict among belief functions

Weiru Liu

The study of alternative combination rules in DS theory when evidence is in conflict has re-emerged recently as an interesting topic, especially in data/information fusion applications. These studies have mainly focused on investigating which alternative would be appropriate for which conflicting situation, under the assumption that a conflict has already been identified. The issue of detecting (or identifying) conflict among evidence has been ignored. In this paper, we formally define when two basic belief assignments are in conflict. This definition deploys quantitative measures of both the mass of the combined belief assigned to the empty set before normalization and the distance between betting commitments of beliefs. We argue that only when both measures are high is it safe to say the evidence is in conflict. This definition can serve as a prerequisite for selecting appropriate combination rules.
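Both measures in this definition are straightforward to compute for finite frames. The Python sketch below evaluates the mass that the unnormalised conjunctive combination assigns to the empty set, and the distance between betting commitments (the maximal difference between the pignistic probabilities of any subset); the example mass functions are invented for illustration:

    from itertools import combinations

    def powerset(frame):
        s = list(frame)
        return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    def empty_set_mass(m1, m2):
        """Mass the unnormalised conjunctive combination gives the empty set."""
        return sum(v1 * v2 for a1, v1 in m1.items()
                           for a2, v2 in m2.items() if not (a1 & a2))

    def bet_p(m, frame):
        """Pignistic probability of each singleton (assumes m(emptyset) = 0)."""
        return {w: sum(v / len(a) for a, v in m.items() if w in a) for w in frame}

    def dif_bet_p(m1, m2, frame):
        """Maximal difference between betting commitments over all subsets."""
        p1, p2 = bet_p(m1, frame), bet_p(m2, frame)
        return max(abs(sum(p1[w] for w in a) - sum(p2[w] for w in a))
                   for a in powerset(frame) if a)

    frame = {'a', 'b', 'c'}
    m1 = {frozenset('a'): 0.8, frozenset('bc'): 0.2}
    m2 = {frozenset('b'): 0.7, frozenset('ac'): 0.3}
    print(empty_set_mass(m1, m2))    # 0.56: high mass on the empty set
    print(dif_bet_p(m1, m2, frame))  # 0.65: betting commitments far apart

Here both measures are high, so under the paper's definition these two mass functions would be judged to be in conflict.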


Information Sciences | 2005

Rough operations on Boolean algebras

Guilin Qi; Weiru Liu

In this paper, we introduce two pairs of rough operations on Boolean algebras. First we define a pair of rough approximations based on a partition of the unity of a Boolean algebra. We then propose a pair of generalized rough approximations on Boolean algebras after defining a basic assignment function between two different Boolean algebras. Finally, some discussions on the relationship between rough operations and some uncertainty measures are given to provide a better understanding of both rough operations and uncertainty measures on Boolean algebras.
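As a concrete special case (an assumption for illustration, since the paper works with arbitrary Boolean algebras), the Python sketch below takes the power-set algebra, where a partition of the unity is an ordinary set partition, and computes the first pair of rough approximations:

    def lower_approximation(x, partition):
        """Union of the partition blocks wholly contained in x."""
        out = set()
        for block in partition:
            if block <= x:
                out |= block
        return frozenset(out)

    def upper_approximation(x, partition):
        """Union of the partition blocks that overlap x."""
        out = set()
        for block in partition:
            if block & x:
                out |= block
        return frozenset(out)

    # A partition of the unit element of the power-set algebra on {1,...,6}.
    partition = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
    x = frozenset({1, 2, 3})
    print(lower_approximation(x, partition))  # frozenset({1, 2})
    print(upper_approximation(x, partition))  # frozenset({1, 2, 3, 4})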


Information Fusion | 2006

Fusion rules for merging uncertain information

Anthony Hunter; Weiru Liu

In previous papers, we have presented a logic-based framework based on fusion rules for merging structured news reports. Structured news reports are XML documents in which the text entries are restricted to individual words or simple phrases, such as names and domain-specific terminology, and numbers and units. We assume structured news reports do not require natural language processing. Fusion rules are a form of scripting language that defines how structured news reports should be merged. The antecedent of a fusion rule is a call to investigate the information in the structured news reports and the background knowledge, and the consequent of a fusion rule is a formula specifying an action to be undertaken to form a merged report. It is expected that a set of fusion rules is defined for any given application. In this paper we extend the approach to handle probability values, degrees of belief, or necessity measures associated with text entries in the news reports. We present the formal definition for each of these types of uncertainty and explain how they can be handled using fusion rules. We also discuss methods for detecting inconsistencies among sources.
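As a loose analogue of this antecedent/consequent structure (not the paper's XML-based scripting language), the Python sketch below treats reports as dictionaries and fusion rules as pairs of functions; all field names and rules here are hypothetical:

    def same_event(reports):
        """Antecedent: do all reports describe the same event on the same date?"""
        return len({(r['event'], r['date']) for r in reports}) == 1

    def take_max_magnitude(reports, merged):
        """Consequent: an action that keeps the largest reported magnitude."""
        merged['magnitude'] = max(r['magnitude'] for r in reports)

    fusion_rules = [(same_event, take_max_magnitude)]

    def merge(reports):
        merged = dict(reports[0])  # seed the merged report from the first one
        for antecedent, consequent in fusion_rules:
            if antecedent(reports):  # fire only rules whose antecedent holds
                consequent(reports, merged)
        return merged

    reports = [
        {'event': 'earthquake', 'date': '2004-01-01', 'magnitude': 5.8},
        {'event': 'earthquake', 'date': '2004-01-01', 'magnitude': 6.1},
    ]
    print(merge(reports))  # merged report carrying magnitude 6.1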


Information Sciences | 2016

An evidential fusion approach for gender profiling

Jianbing Ma; Weiru Liu; Paul C. Miller; Huiyu Zhou

CCTV (Closed-Circuit TeleVision) systems are broadly deployed in the present world. To ensure in-time reaction for intelligent surveillance, determining the gender of people of interest is a fundamental task for real-world applications. However, typical video algorithms for gender profiling (usually face profiling) have three drawbacks. First, the profiling result is always uncertain. Second, the profiling result is not stable. The degree of certainty usually varies over time, sometimes even to the extent that a male is classified as a female, and vice versa. Third, for a robust profiling result in cases where a person's face is not visible, other features, such as body shape, are required. These algorithms may provide different recognition results, or at the very least different degrees of certainty. To overcome these problems, in this paper we introduce a Dempster-Shafer (DS) evidential approach that makes use of profiling results from multiple algorithms over a period of time; in particular, Denoeux's cautious rule is applied for fusing mass functions along the timeline. Experiments show that this approach provides better results than single profiling results and classic fusion results. Furthermore, it is found that if severe misclassification has occurred at the beginning of the timeline, the combination can yield undesirable results. To remedy this weakness, we further propose three extensions to the evidential approach incorporating notions of time-window, time-attenuation, and time-discounting, respectively. These extensions also apply Denoeux's rule along the timeline and subsume the DS approach as a special case. Experiments show that these three extensions provide better results than their predecessor when misclassifications occur.
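The sketch below illustrates fusing gender evidence along a timeline in Python. For brevity it uses Dempster's rule rather than Denoeux's cautious rule from the paper, together with a time-attenuation step implemented via classical discounting; the attenuation factor and per-frame mass values are invented:

    def discount(m, alpha, frame):
        """Classical discounting: keep mass with weight alpha and move
        the remainder to the whole frame (total ignorance)."""
        out = {a: alpha * v for a, v in m.items()}
        theta = frozenset(frame)
        out[theta] = out.get(theta, 0.0) + (1 - alpha)
        return out

    def dempster(m1, m2):
        """Dempster's rule: conjunctive combination followed by normalisation.
        (Assumes the two mass functions are not totally conflicting.)"""
        combined = {}
        for a1, v1 in m1.items():
            for a2, v2 in m2.items():
                inter = a1 & a2
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + v1 * v2
        k = sum(combined.values())  # 1 - mass of the empty set
        return {a: v / k for a, v in combined.items()}

    # Per-frame gender evidence, oldest first, on the frame {male, female}.
    frame = {'male', 'female'}
    per_frame = [
        {frozenset({'male'}): 0.6, frozenset(frame): 0.4},
        {frozenset({'female'}): 0.3, frozenset(frame): 0.7},
        {frozenset({'male'}): 0.8, frozenset(frame): 0.2},
    ]
    fused = per_frame[0]
    for m in per_frame[1:]:
        # time-attenuation: weaken the accumulated evidence before each update
        fused = dempster(discount(fused, 0.9, frame), m)
    print(fused)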


Journal of Information Science | 2015

A survey of location inference techniques on Twitter

Oluwaseun Ajao; Jun Hong; Weiru Liu

The increasing popularity of the social networking service Twitter has made it more involved in day-to-day communications, strengthening social relationships and information dissemination. Conversations on Twitter are now being explored as indicators within early warning systems to alert of imminent natural disasters such as earthquakes and to aid prompt emergency responses to crime. Producers have virtually limitless access to market perception from consumer comments on social media and microblogs. Targeted advertising can be made more effective based on user profile information such as demography, interests and location. While these applications have proven beneficial, the ability to effectively infer the location of Twitter users is of even greater value. However, accurately identifying where a message originated from, or an author's location, remains a challenge, and this continues to drive research in the area. In this paper, we survey a range of techniques applied to infer the location of Twitter users, from inception to the state of the art. We find significant improvements over time in granularity levels and accuracy, driven by refinements to algorithms and the inclusion of more spatial features.


BMC Medical Research Methodology | 2008

Performing meta-analysis with incomplete statistical information in clinical trials

Jianbing Ma; Weiru Liu; Anthony Hunter; Weiya Zhang

Background: Results from clinical trials are usually summarized in the form of sampling distributions. When full information (mean, SEM) about these distributions is given, performing meta-analysis is straightforward. However, when some of the sampling distributions only have mean values, a challenging issue is to decide how to use such distributions in meta-analysis. Currently, the most common approaches are either ignoring such trials or, for each trial with a missing SEM, finding a similar trial and taking its SEM value as the missing SEM. Both approaches have drawbacks. As an alternative, this paper develops and tests two new methods, the first being the prognostic method and the second being the interval method, to estimate any missing SEMs from a set of sampling distributions with full information. A merging method is also proposed to handle clinical trials with partial information to simulate meta-analysis.

Methods: Both of our methods use the assumption that the samples for which the sampling distributions will be merged are randomly selected from the same population. In the prognostic method, we predict the missing SEMs from the given SEMs. In the interval method, we define intervals that we believe will contain the missing SEMs and then use these intervals in the merging process.

Results: Two sets of clinical trials are used to verify our methods. One family of trials compares different drugs for the reduction of low-density lipoprotein (LDL) cholesterol for Type-2 diabetes, and the other concerns the effectiveness of drugs for lowering intraocular pressure (IOP). Both methods are shown to be useful for approximating the conventional meta-analysis that includes trials with incomplete information. For example, the meta-analysis result of Latanoprost versus Timolol on IOP reduction over six months provided in [1] was 5.05 ± 1.15 (Mean ± SEM) with full information. If the last trial in this study is assumed to have only partial information, the traditional approach of ignoring this trial would give 6.49 ± 1.36, while our prognostic method gives 5.02 ± 1.15 and our interval method provides the intervals Mean ∈ [4.25, 5.63] and SEM ∈ [1.01, 1.24].

Conclusion: Both the prognostic and the interval methods are useful alternatives for dealing with missing data in meta-analysis. We recommend that clinicians use the prognostic method to predict the missing SEMs in order to perform meta-analysis, and the interval method for obtaining a more cautious result.
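For context on how such summaries are merged, here is a minimal Python sketch of fixed-effect inverse-variance pooling of (mean, SEM) pairs, the kind of merge step that estimated SEMs feed into; the paper's own merging and SEM-estimation methods differ in detail, and the trial values below are invented:

    def pool(trials):
        """Fixed-effect inverse-variance pooling of (mean, SEM) summaries."""
        weights = [1.0 / sem ** 2 for _, sem in trials]
        mean = sum(w * m for w, (m, _) in zip(weights, trials)) / sum(weights)
        sem = (1.0 / sum(weights)) ** 0.5
        return mean, sem

    # Hypothetical IOP-reduction trials as (mean, SEM) pairs.
    trials = [(5.9, 1.6), (4.7, 2.0), (4.4, 2.4)]
    print(pool(trials))  # pooled mean and SEM across the three trials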


Artificial Intelligence Review | 2006

A revision-based approach to handling inconsistency in description logics

Guilin Qi; Weiru Liu; David A. Bell

Recently, the problem of inconsistency handling in description logics has attracted a lot of attention. Many approaches have been proposed to deal with this problem based on existing techniques for inconsistency management. In this paper, we first define two revision operators in description logics; one is called a weakening-based revision operator and the other is its refinement. Based on the revision operators, we then propose an algorithm to handle inconsistency in a stratified description logic knowledge base. We show that when the weakening-based revision operator is chosen, the resulting knowledge base of our algorithm is semantically equivalent to the knowledge base obtained by applying refined conjunctive maxi-adjustment (RCMA), which refines disjunctive maxi-adjustment (DMA), known to be a good strategy for inconsistency handling in classical logic.


International Journal of Approximate Reasoning | 2011

A syntax-based approach to measuring the degree of inconsistency for belief bases

Kedian Mu; Weiru Liu; Zhi Jin; David A. Bell

Measuring the degree of inconsistency of a belief base is an important issue in many real-world applications. It has been increasingly recognized that deriving syntax-sensitive inconsistency measures for a belief base from its minimal inconsistent subsets is a natural way forward. Most of the current proposals along this line do not take into account the impact of the size of each minimal inconsistent subset. However, as illustrated by the well-known Lottery Paradox, as the size of a minimal inconsistent subset increases, the degree of its inconsistency decreases. Another gap in current studies in this area concerns the role of free formulas of a belief base in measuring the degree of inconsistency, which has not yet been well characterized. Adding free formulas to a belief base can enlarge the set of consistent subsets of that base. However, consistent subsets of a belief base also have an impact on syntax-sensitive normalized measures of the degree of inconsistency; the reason is that each consistent subset can be considered a distinctive plausible perspective reflected by that belief base, whilst each minimal inconsistent subset projects a distinctive view of the inconsistency. To address these two issues, we propose a normalized framework for measuring the degree of inconsistency of a belief base which unifies the impact of both consistent subsets and minimal inconsistent subsets. We also show that this normalized framework satisfies all the properties deemed necessary by common consent to characterize an intuitively satisfactory measure of the degree of inconsistency for belief bases. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of the normalized framework.
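As a toy illustration of measures driven by minimal inconsistent subsets, the Python sketch below enumerates them by brute force for a small propositional belief base and evaluates one size-sensitive measure (the sum of 1/|M| over minimal inconsistent subsets M); this is a simple measure from this line of work, not the paper's full normalised framework:

    from itertools import combinations, product

    def consistent(formulas, atoms):
        """Brute-force satisfiability over all truth assignments to the atoms."""
        for values in product([False, True], repeat=len(atoms)):
            v = dict(zip(atoms, values))
            if all(f(v) for f in formulas):
                return True
        return False

    def minimal_inconsistent_subsets(base, atoms):
        mis = []
        for r in range(1, len(base) + 1):
            for subset in combinations(base, r):
                if any(set(m) <= set(subset) for m in mis):
                    continue  # a proper subset is already known inconsistent
                if not consistent([f for _, f in subset], atoms):
                    mis.append(subset)
        return mis

    # Belief base {p, q, ~p | ~q, ~q}: labelled formulas over atoms p and q.
    atoms = ('p', 'q')
    base = [
        ('p',       lambda v: v['p']),
        ('q',       lambda v: v['q']),
        ('~p | ~q', lambda v: not v['p'] or not v['q']),
        ('~q',      lambda v: not v['q']),
    ]
    mis = minimal_inconsistent_subsets(base, atoms)
    print([[label for label, _ in m] for m in mis])
    # [['q', '~q'], ['p', 'q', '~p | ~q']]
    print(sum(1.0 / len(m) for m in mis))  # 1/2 + 1/3: larger subsets count less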

Collaboration


Dive into Weiru Liu's collaboration.

Top Co-Authors

Jun Hong, Queen's University Belfast
Jianbing Ma, Bournemouth University
Paul C. Miller, Queen's University Belfast
David A. Bell, Queen's University Belfast
Kevin McAreavey, Queen's University Belfast
Anthony Hunter, University College London
David Bell, Queen's University Belfast
Anbu Yue, Queen's University Belfast