Featured Research

Databases

Improving Locality Sensitive Hashing by Efficiently Finding Projected Nearest Neighbors

Similarity search in high-dimensional spaces is an important task for many multimedia applications. Due to the notorious curse of dimensionality, approximate nearest neighbor techniques are preferred over exact search techniques since they return good-enough results much faster. Locality Sensitive Hashing (LSH) is a very popular random hashing technique for finding approximate nearest neighbors. Existing state-of-the-art LSH techniques that aim to improve overall performance mainly focus on minimizing the total number of I/Os while sacrificing overall processing time. The main time-consuming step in LSH techniques is finding neighboring points in the projected spaces. We present a novel index structure called radius-optimized Locality Sensitive Hashing (roLSH). Using sampling techniques and neural networks, we present two techniques to find neighboring points in projected spaces efficiently, without sacrificing the accuracy of the results. Our extensive experimental analysis on real datasets shows the performance benefit of roLSH over existing state-of-the-art LSH techniques.
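
The abstract does not detail roLSH's index structure, so as background, here is a minimal sketch of the classic projection-based (E2LSH-style) hash table that such techniques build on; all names and parameters are illustrative:

import numpy as np

class L2HashTable:
    # E2LSH-style hash: h(v) = floor((a . v + b) / w)
    def __init__(self, dim, n_hashes=4, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((n_hashes, dim))  # random projection directions
        self.b = rng.uniform(0.0, w, n_hashes)         # random offsets in [0, w)
        self.w = w   # bucket width; the search radius in the projected
                     # space is the quantity roLSH seeks to optimize
        self.buckets = {}

    def _key(self, v):
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def insert(self, idx, v):
        self.buckets.setdefault(self._key(v), []).append(idx)

    def candidates(self, q):
        # exact-bucket lookup; practical LSH also probes neighboring
        # buckets, which is where radius optimization matters
        return self.buckets.get(self._key(q), [])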

Read more
Databases

In-Order Sliding-Window Aggregation in Worst-Case Constant Time

Sliding-window aggregation is a widely used approach for extracting insights from the most recent portion of a data stream. The aggregations of interest can usually be expressed as binary operators that are associative but not necessarily commutative or invertible. Non-invertible operators, however, are difficult to support efficiently. In a 2017 conference paper, we introduced DABA, the first algorithm for sliding-window aggregation in worst-case constant time. Before DABA, if a window had size n, the best published algorithms required O(log n) aggregation steps per window operation. While this bound could be improved to O(1) aggregation steps on average for strictly in-order streams, it was not known how to achieve an O(1) bound in the worst case, which is critical for latency-sensitive applications. This article is an extended version of our 2017 paper. Besides describing DABA in more detail, it introduces a new variant, DABA Lite, which achieves the same time bounds in less memory: whereas DABA requires space for 2n partial aggregates, DABA Lite requires space for only n+2 partial aggregates. Our experiments on synthetic and real data support the theoretical findings.
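
DABA's de-amortization is beyond an abstract-length sketch, but the baseline it improves on is easy to show: the classic two-stack algorithm achieves O(1) amortized (not worst-case) aggregation steps for associative, non-invertible operators. The occasional O(n) flip below is exactly the latency spike DABA eliminates. Names are illustrative:

class TwoStackWindow:
    # Sliding-window aggregation for an associative operator, O(1) amortized.
    def __init__(self, op, identity):
        self.op, self.identity = op, identity
        self.front = []  # oldest elements, each stored with a running aggregate
        self.back = []   # newest elements, each stored with a running aggregate

    def _agg(self, stack):
        return stack[-1][1] if stack else self.identity

    def push(self, v):
        self.back.append((v, self.op(self._agg(self.back), v)))

    def pop(self):
        if not self.front:  # flip: O(n) occasionally, the step DABA de-amortizes
            while self.back:
                v, _ = self.back.pop()
                self.front.append((v, self.op(v, self._agg(self.front))))
        self.front.pop()

    def query(self):
        # front elements precede back elements in window order
        return self.op(self._agg(self.front), self._agg(self.back))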

Read more
Databases

Incremental Lossless Graph Summarization

Given a fully dynamic graph, represented as a stream of edge insertions and deletions, how can we obtain and incrementally update a lossless summary of its current snapshot? As large-scale graphs become prevalent, representing them concisely is essential for efficient storage and analysis. Lossless graph summarization is an effective graph-compression technique with many desirable properties. It aims to compactly represent the input graph as (a) a summary graph consisting of supernodes (i.e., sets of nodes) and superedges (i.e., edges between supernodes), which provide a rough description, and (b) edge corrections, which fix the errors induced by that rough description. While a number of batch algorithms, suited for static graphs, have been developed for rapid and compact graph summarization, they are highly inefficient in terms of time and space for dynamic graphs, which are common in practice. In this work, we propose MoSSo, the first incremental algorithm for lossless summarization of fully dynamic graphs. In response to each change in the input graph, MoSSo updates the output representation by repeatedly moving nodes among supernodes, choosing which nodes to move and where to move them carefully but rapidly based on several novel ideas. Through extensive experiments on 10 real graphs, we show that MoSSo is (a) fast and 'any time': processing each change in near-constant time (less than 0.1 millisecond), up to 7 orders of magnitude faster than running state-of-the-art batch methods; (b) scalable: summarizing graphs with hundreds of millions of edges using sub-linear memory during the process; and (c) effective: achieving compression ratios comparable even to those of state-of-the-art batch methods.
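
MoSSo's move-based update rules are not spelled out in the abstract, but the output representation it maintains can be made concrete. A minimal sketch of decoding a lossless summary (summary graph plus edge corrections) back into the original edge set, assuming undirected edges; all names are illustrative:

def decode(supernode_of, superedges, corrections_add, corrections_del):
    # Reconstruct the original edge set from the summary representation.
    members = {}
    for node, sn in supernode_of.items():
        members.setdefault(sn, set()).add(node)
    edges = set()
    for (a, b) in superedges:
        # a superedge implies all cross pairs between its two supernodes
        for u in members[a]:
            for v in members[b]:
                if u != v:
                    edges.add(frozenset((u, v)))
    edges |= {frozenset(e) for e in corrections_add}  # C+: edges the summary misses
    edges -= {frozenset(e) for e in corrections_del}  # C-: edges the summary over-implies
    return edges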

Read more
Databases

Index Selection for NoSQL Database with Deep Reinforcement Learning

We propose a new approach to NoSQL database index selection: for different workloads, we select different indexes and different index parameters to optimize database performance. The approach builds a deep reinforcement learning model that selects an optimal index for a given fixed workload and adapts to changing workloads. Experimental results show that the Deep Reinforcement Learning Index Selection Approach (DRLISA) improves performance to varying degrees compared with traditional single-index structures.
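
The abstract gives no architectural details, so the sketch below only illustrates the framing: index selection as a reinforcement learning loop in which the state is a workload descriptor, the action is an index choice, and the reward is measured performance. This is a tabular, bandit-style simplification of a deep RL model; all names are hypothetical:

import random

INDEXES = ["btree", "lsm", "hash"]  # hypothetical candidate index structures

def select_index(q_values, workload, eps=0.1):
    # Epsilon-greedy action selection over candidate indexes.
    if random.random() < eps:
        return random.choice(INDEXES)
    return max(INDEXES, key=lambda ix: q_values.get((workload, ix), 0.0))

def update(q_values, workload, index, reward, lr=0.1):
    # One-step value update; the reward could be, e.g., negative query latency.
    key = (workload, index)
    q_values[key] = q_values.get(key, 0.0) + lr * (reward - q_values.get(key, 0.0))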

Read more
Databases

Index-based Solutions for Efficient Density Peak Clustering

Density Peak Clustering (DPC), a popular density-based clustering approach, has received considerable attention from the research community, primarily due to its simplicity and its small number of parameters. However, the clusters obtained with DPC are sensitive to the cutoff parameter d_c, which depends on the data distribution and on the requirements of different users. Moreover, the original DPC algorithm requires visiting a large number of objects, making it slow. To this end, this paper investigates index-based solutions for DPC. Specifically, we propose two list-based index methods: (i) a simple List Index, and (ii) an advanced Cumulative Histogram Index. Efficient query algorithms are proposed for these indices that avoid a significant number of irrelevant comparisons at the cost of space. For memory-constrained systems, we further introduce an approximate variant of the above indices that substantially reduces the space cost, provided that slight inaccuracies are admissible. Furthermore, owing to the considerably lower memory requirements of existing tree-based index structures, we also present effective pruning techniques and efficient query algorithms that support DPC using the popular Quadtree Index and R-tree Index. Finally, we evaluate all the above indices in a set of extensive experiments on six synthetic and real datasets and present the findings and results. The experimental insights can help guide the selection of a suitable index.
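
For context, the quantities DPC computes per point, and which the proposed indices accelerate, are the local density rho (number of points within the cutoff d_c) and delta (distance to the nearest denser point). A brute-force sketch of the standard definitions, with illustrative names:

import numpy as np

def dpc_scores(X, d_c):
    # Brute-force DPC quantities, O(n^2) time and space; this all-pairs
    # work is what index-based solutions aim to avoid.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    rho = (d < d_c).sum(axis=1) - 1          # local density, excluding self
    delta = np.full(len(X), d.max())
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]   # points denser than i
        if denser.size:
            delta[i] = d[i, denser].min()    # distance to nearest denser point
    return rho, delta   # cluster centers score high on both rho and delta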

Read more
Databases

Indexing Data on the Web: A Comparison of Schema-level Indices for Data Search -- Extended Technical Report

Indexing the Web of Data offers many opportunities, in particular for finding and exploring data sources. One major design decision when indexing the Web of Data is finding a suitable index model, i.e., deciding how to index and summarize data. Various efforts have developed specific index models for specific tasks. With each index model designed, implemented, and evaluated independently, it remains difficult to judge whether an approach generalizes well to another task, set of queries, or dataset. In this work, we empirically evaluate six representative index models with unique feature combinations, among them a new index model incorporating inferencing over RDFS and owl:sameAs. We implement, for the first time, all index models in a single, stream-based framework. We evaluate variations of the index models considering sub-graphs of size 0, 1, and 2 hops on two large, real-world datasets. We assess the quality of the indices in terms of compression ratio, summarization ratio, and the F1-score denoting the approximation quality of the stream-based index computation. The experiments reveal large variations in compression ratio, summarization ratio, and approximation quality across index models, queries, and datasets. However, we observe meaningful correlations in the results that help determine the right index model for a given task, type of query, and dataset.
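
The six index models are not enumerated in the abstract; as one concrete instance of a schema-level (0-hop) index, subjects can be grouped by the set of properties they use, in the spirit of characteristic sets. A minimal sketch with illustrative names:

from collections import defaultdict

def schema_index(triples):
    # Group subjects by their property set: one simple schema-level index model.
    props = defaultdict(set)
    for s, p, o in triples:
        props[s].add(p)
    index = defaultdict(set)
    for s, ps in props.items():
        index[frozenset(ps)].add(s)  # schema element -> subjects matching it
    return index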

Read more
Databases

Indexing Metric Spaces for Exact Similarity Search

With the continued digitalization of societal processes, we are seeing an explosion in available data, commonly referred to as big data. In a research setting, three aspects of the data are often viewed as the main sources of challenges when attempting to create value from big data: volume, velocity, and variety. Many studies address volume or velocity, while far fewer concern variety. Metric spaces are well suited to addressing variety because they can accommodate any type of data whose associated distance notion satisfies the triangle inequality. To accelerate search in metric spaces, a collection of indexing techniques for metric data has been proposed. However, existing surveys each offer only narrow coverage, and no comprehensive empirical study of those techniques exists. We survey all existing metric indexes that support exact similarity search by i) summarizing the partitioning, pruning, and validation techniques used by metric indexes, ii) providing time and storage complexity analyses of index construction, and iii) reporting on a comprehensive empirical comparison of their similarity query processing performance. We rely on empirical comparisons to evaluate index performance during search because complexity analyses reveal little about similarity query processing, whose performance depends on pruning and validation abilities that are in turn related to the data distribution. This article aims to reveal the strengths and weaknesses of different indexing techniques, to offer guidance on selecting an appropriate indexing technique for a given setting, and to direct future research on metric indexes.
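
A common core of the surveyed indexes is pruning with the triangle inequality: precomputed distances from each object to a few pivots yield a lower bound on the query-object distance, so most exact distance computations can be skipped. A minimal pivot-table sketch; function and variable names are illustrative:

def range_search(objects, pivot_dists, dist, q, radius, pivots):
    # Pivot filtering: if |d(q,p) - d(o,p)| > radius for some pivot p,
    # the triangle inequality guarantees d(q,o) > radius, so o is pruned.
    q_to_pivots = [dist(q, p) for p in pivots]
    results = []
    for i, o in enumerate(objects):
        lower_bound = max(abs(qp - pivot_dists[i][j])
                          for j, qp in enumerate(q_to_pivots))
        if lower_bound <= radius and dist(q, o) <= radius:
            results.append(o)  # validated with a single exact distance
    return results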

Read more
Databases

Informal Data Transformation Considered Harmful

In this paper we take the common position that AI systems are limited more by the integrity of the data they learn from than by the sophistication of their algorithms. We also take the uncommon position that the solution to achieving better data integrity in the enterprise is not to clean and validate data ex post facto whenever needed (the so-called data lake approach to data management, which can lead to data scientists spending 80% of their time cleaning data). Rather, it is to formally and automatically guarantee that data integrity is preserved as data is transformed (migrated, integrated, composed, queried, viewed, etc.) throughout the enterprise, so that data, and the programs that depend on it, need not be re-validated for every particular use.

Read more
Databases

Interactive Query Formulation using Point to Point Queries

Effective information disclosure in the context of databases with a large conceptual schema is known to be a non-trivial problem. In particular, the formulation of ad-hoc queries is a major problem in such contexts. Existing approaches to this problem include graphical query interfaces, query by navigation, and query by construction. In this article we propose the point-to-point query mechanism, which can be combined with the existing mechanisms into a novel computer-supported query formulation mechanism. In a point-to-point query, a path through the information structure is built; this path can then be used to formulate more complex queries. A point-to-point query is typically useful when users know some object types relevant to their information need but do not (yet) know how those types are related in the conceptual schema. Part of the point-to-point query mechanism is therefore the selection of the most appropriate path between object types (points) in the conceptual schema. This article discusses both the pragmatic issues involved in the point-to-point query mechanism and the theoretical issues involved in finding the relevant paths between selected object types.
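
The abstract leaves open how the "most appropriate" path is chosen; as a simple baseline, a shortest path between two object types in the schema graph can be found by breadth-first search. A minimal sketch, assuming the schema is given as an adjacency mapping; names are illustrative:

from collections import deque

def connect(schema, source, target):
    # Shortest path between two object types: one candidate for the
    # "most appropriate" path the mechanism must select among.
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []          # reconstruct path from target back to source
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in schema.get(node, ()):
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None  # the two object types are not related in the schema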

Read more
Databases

Interactive and Explainable Point-of-Interest Recommendation using Look-alike Groups

Recommending Points-of-Interest (POIs) is surfacing in many location-based applications. The literature contains personalized and socialized POI recommendation approaches that employ historical check-ins and social links to make recommendations. However, these systems still lack customizability (incorporating session-based user interactions with the system) and contextuality (incorporating the situational context of the user), particularly in cold-start situations where nearly no user information is available. In this paper, we propose LikeMind, a POI recommendation system that tackles the challenges of cold start, customizability, contextuality, and explainability by exploiting look-alike groups mined from public POI datasets. LikeMind reformulates the POI recommendation problem as recommending explainable look-alike groups (and their POIs) that are in line with the user's interests. It frames the task as an exploratory process in which users interact with the system by expressing their favorite POIs, and these interactions influence how look-alike groups are selected. Moreover, LikeMind employs "mindsets", which capture the user's actual situation and intent and enforce the semantics of POI interestingness. In an extensive set of experiments, we show the quality of our approach in recommending relevant look-alike groups and their POIs, in terms of efficiency and effectiveness.
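
The group-selection mechanics are not specified in the abstract; purely as a schematic, session favorites could steer group selection via set overlap. An illustrative sketch, not LikeMind's actual method:

def rank_groups(groups, favorites, k=3):
    # Rank look-alike groups by Jaccard overlap between each group's POIs
    # and the POIs the user favorited in the current session.
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    ranked = sorted(groups.items(),
                    key=lambda item: jaccard(item[1], favorites),
                    reverse=True)
    return ranked[:k]  # recommend each top group's POIs beyond the favorites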

Read more
