Aleksandar Ignjatovic
University of New South Wales
Publications
Featured research published by Aleksandar Ignjatovic.
IEEE Internet Computing | 2013
Mohammad Allahbakhsh; Boualem Benatallah; Aleksandar Ignjatovic; Hamid Reza Motahari-Nezhad; Elisa Bertino; Schahram Dustdar
As a new distributed computing model, crowdsourcing lets people leverage the crowd's intelligence and wisdom to solve problems. This article proposes a framework for characterizing the various dimensions of quality control, a critical issue in crowdsourcing systems. The authors briefly review existing quality-control approaches, identify open issues, and look to future research directions. In the Web extra, the authors discuss both design-time and runtime approaches in more detail.
IEEE Transactions on Dependable and Secure Computing | 2015
Mohsen Rezvani; Aleksandar Ignjatovic; Elisa Bertino; Sanjay K. Jha
Due to limited computational power and energy resources, aggregation of data from multiple sensor nodes at the aggregating node is usually accomplished by simple methods such as averaging. Such aggregation, however, is known to be highly vulnerable to node-compromise attacks. Since WSNs are usually unattended and lack tamper-resistant hardware, they are highly susceptible to such attacks, so ascertaining the trustworthiness of data and the reputation of sensor nodes is crucial for WSNs. As the performance of very low power processors dramatically improves, future aggregator nodes will be capable of running more sophisticated data aggregation algorithms, making WSNs less vulnerable. Iterative filtering algorithms hold great promise for this purpose: they simultaneously aggregate data from multiple sources and provide a trust assessment of these sources, usually in the form of corresponding weight factors assigned to the data provided by each source. In this paper we demonstrate that several existing iterative filtering algorithms, while significantly more robust against collusion attacks than simple averaging methods, are nevertheless susceptible to a novel, sophisticated collusion attack we introduce. To address this security issue, we propose an improvement for iterative filtering techniques: an initial approximation that makes them not only collusion-robust, but also more accurate and faster to converge.
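The iterative filtering idea the abstract describes can be sketched as follows. This is a generic reciprocal-distance variant under assumed details (update rule, stopping criterion), not the specific algorithms the paper analyzes or its proposed initial approximation:

```python
import numpy as np

def iterative_filtering(readings, iterations=20, eps=1e-9):
    """Aggregate sensor readings while estimating per-sensor trust.

    readings: (n_sensors, n_samples) array of raw measurements.
    Returns (estimate, weights): the aggregated signal and trust
    weights, inversely proportional to each sensor's squared
    distance from the current aggregate (a common IF variant).
    """
    n_sensors, _ = readings.shape
    weights = np.ones(n_sensors) / n_sensors
    for _ in range(iterations):
        estimate = weights @ readings             # weighted aggregate per sample
        # mean squared distance of each sensor's stream from the estimate
        dist = np.mean((readings - estimate) ** 2, axis=1)
        weights = 1.0 / (dist + eps)              # reciprocal-distance trust
        weights /= weights.sum()
    return estimate, weights
```

A compromised node reporting values far from the consensus receives a small weight, so its influence on the aggregate shrinks with each iteration.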
asia and south pacific design automation conference | 2006
Andhi Janapsatya; Aleksandar Ignjatovic; Sri Parameswaran
Modern embedded systems execute a single application or a class of applications repeatedly. An emerging design methodology employs configurable processors, where the cache size, associativity, and line size can be chosen by the designer. In this paper, a method is given to rapidly find the L1 cache miss rate of an application. An energy model and an execution-time model are developed to find the best cache configuration for the given embedded application. Using benchmarks from Mediabench, we find that our method explores the design space on average 45 times faster than Dinero IV while still having 100% accuracy.
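For intuition about what is being accelerated, here is the brute-force baseline: simulating an LRU set-associative cache over an address trace, one configuration at a time (Dinero-style). The paper's contribution is avoiding a full re-simulation per configuration; this sketch only shows the miss-rate computation being compared against:

```python
def miss_rate(trace, cache_size, line_size, assoc):
    """Simulate an LRU set-associative cache over an address trace
    and return its miss rate (brute-force reference, not the
    paper's fast method)."""
    n_sets = cache_size // (line_size * assoc)
    sets = [[] for _ in range(n_sets)]          # each set: LRU-ordered tag list
    misses = 0
    for addr in trace:
        block = addr // line_size
        idx, tag = block % n_sets, block // n_sets
        s = sets[idx]
        if tag in s:
            s.remove(tag)                       # hit: refresh to MRU position
        else:
            misses += 1
            if len(s) >= assoc:
                s.pop(0)                        # evict least recently used tag
        s.append(tag)
    return misses / len(trace)
```

Sweeping `cache_size`, `line_size`, and `assoc` over this function is exactly the design-space exploration whose cost the paper's single-pass method avoids.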
asia and south pacific design automation conference | 2006
Andhi Janapsatya; Aleksandar Ignjatovic; Sri Parameswaran
Scratchpad memory has been introduced as a replacement for cache memory, as it improves the performance of certain embedded systems. It has also been demonstrated that scratchpad memory can significantly reduce the energy consumption of the memory hierarchy of embedded systems. This is significant, as the memory hierarchy consumes a substantial proportion of the total energy of an embedded system. This paper deals with optimization of the instruction scratchpad memory, based on a methodology that uses a metric we call concomitance. This metric is used to find basic blocks which are executed frequently and in close temporal proximity. Once such blocks are found, they are copied into the scratchpad memory at appropriate times, using a special instruction inserted into the code at appropriate places. For a set of benchmarks taken from Mediabench, our scratchpad system consumed just 59% (on average) of the energy of the cache system and 73% (on average) of the energy of the state-of-the-art scratchpad system, while improving overall performance. Compared to the state-of-the-art method, the number of instructions copied into the scratchpad memory from the main memory is reduced by 88%.
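The abstract does not define concomitance precisely, but the idea of "executed frequently and in close temporal proximity" can be illustrated with a toy windowed co-occurrence count over a basic-block execution trace. The window size and the score itself are assumptions for illustration, not the paper's definition:

```python
from collections import Counter

def concomitance_scores(trace, window=4):
    """Toy proximity metric over a basic-block execution trace:
    counts how often each pair of distinct blocks executes within
    `window` positions of each other, alongside plain execution
    frequency. (Illustrative only; the paper's concomitance metric
    is defined more carefully.)"""
    pair_counts = Counter()
    freq = Counter(trace)
    for i, block in enumerate(trace):
        for other in trace[i + 1 : i + 1 + window]:
            if other != block:
                pair_counts[frozenset((block, other))] += 1
    return freq, pair_counts
```

Blocks that are both frequent and strongly paired would be the candidates copied into the scratchpad together.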
international conference on computer aided design | 2004
Andhi Janapsatya; Sri Parameswaran; Aleksandar Ignjatovic
We propose a methodology for energy reduction and performance improvement. The target system comprises an instruction scratchpad memory instead of an instruction cache. Highly utilized code segments are copied into the scratchpad memory and executed from it. The copying of code segments from main memory to the scratchpad is performed at runtime, managed by a custom hardware controller. The controller is activated by strategically placed custom instructions within the executing program, which tell it when to copy during execution. Novel heuristic algorithms determine where in the program to insert these custom instructions, as well as which sets of code segments to copy to the scratchpad memory. For a set of realistic benchmarks, experimental results indicate the method uses 50.7% less energy (on average) and improves performance by 53.2% (on average) compared to a traditional cache system of identical size. The cache systems compared ranged in size from 256 bytes to 16K bytes, with associativities ranging from 1 to 32.
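Choosing which code segments to place in a fixed-size scratchpad is a packing problem. A greedy benefit-per-byte heuristic is a common baseline; the paper's own heuristics are more elaborate, so treat this only as a sketch of the selection step:

```python
def select_segments(segments, scratchpad_bytes):
    """Greedy baseline for scratchpad allocation: rank code segments
    by estimated energy saving per byte and pack until the scratchpad
    is full. (Not the paper's heuristic; a knapsack-style stand-in.)

    segments: list of (name, size_bytes, energy_saving) tuples.
    Returns the names of the chosen segments.
    """
    ranked = sorted(segments, key=lambda s: s[2] / s[1], reverse=True)
    chosen, used = [], 0
    for name, size, saving in ranked:
        if used + size <= scratchpad_bytes:
            chosen.append(name)
            used += size
    return chosen
```

In the paper's setting the saving estimates would come from the energy model, and the chosen segments would then be paired with insertion points for the copy-triggering custom instructions.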
international conference on computer aided design | 2008
Jude Angelo Ambrose; Sri Parameswaran; Aleksandar Ignjatovic
Side-channel attacks based on the analysis of power traces are an effective way of obtaining the encryption key from secure processors. Power traces can be used to detect bit flips which betray the secret key. Balancing these bit flips with opposite bit flips, by the use of complementary logic, has been proposed. This is an expensive solution, in which the balancing processor continues to balance even when no encryption is being carried out.
Annals of Pure and Applied Logic | 1995
Samuel R. Buss; Aleksandar Ignjatovic
This paper deals with the weak fragments of arithmetic PV and S^i_2 and their induction-free fragments PV^- and S^{-1}_2. We improve the bootstrapping of S^1_2, which allows us to show that the theory S^1_2 can be axiomatized by the set of axioms BASIC together with any of the following induction schemas: Σ^b_1-PIND, Σ^b_1-LIND, Π^b_1-PIND or Π^b_1-LIND. We improve prior results of Pudlák, Buss and Takeuti establishing the unprovability of bounded consistency of S^{-1}_2 in S_2 by showing that, if S^i_2 proves ∀x θ(x) with θ a Σ^b_0(Σ^b_i)-formula, then S^i_2 proves that each instance of θ(x) has an S^{-1}_2-proof in which only Σ^b_0(Σ^b_i)-formulas occur. Finally, we show that the consistency of the induction-free fragment PV^- of PV is not provable in PV.
IEEE Transactions on Parallel and Distributed Systems | 2013
Chun Tung Chou; Aleksandar Ignjatovic; Wen Hu
Wireless sensor networks (WSNs) enable the collection of physical measurements over a large geographic area. It is often the case that we are interested in computing and tracking the spatial average of the sensor measurements over a region of the WSN. Unfortunately, the standard average is not robust, because it is highly susceptible to sensor faults and heterogeneous measurement noise. In this paper, we propose a computationally efficient method to compute a weighted average (which we call a robust average) of sensor measurements that appropriately takes sensor faults and sensor noise into consideration. We assume that the sensors in the WSN use random projections to compress the data and send the compressed data to the data fusion centre. Computational efficiency is achieved by having the data fusion centre work directly with the compressed data streams: it only needs to perform decompression once to compute the robust average, greatly reducing the computational requirements. We apply our proposed method to data collected from two WSN deployments to demonstrate its efficiency and accuracy.
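The "decompress only once" property rests on linearity: a weighted average of random projections equals the random projection of the weighted average. A minimal sketch of that structure, using plain least-squares as a stand-in for whatever recovery procedure the paper actually uses:

```python
import numpy as np

def robust_average_compressed(compressed, weights, proj):
    """Combine compressed sensor streams in the compressed domain,
    then decompress once. (Illustrative: least-squares recovery is
    an assumption here, not the paper's reconstruction method.)

    compressed: (n_sensors, m) rows y_i = proj @ x_i.
    weights:    trust weights for the robust average, summing to 1.
    proj:       (m, n) random projection matrix, m < n.
    """
    y_avg = weights @ compressed                  # mix in compressed domain
    # single decompression of the weighted mixture (min-norm solution)
    x_avg, *_ = np.linalg.lstsq(proj, y_avg, rcond=None)
    return x_avg
```

Because the pseudoinverse is linear, decompressing the weighted mixture gives the same result as decompressing every stream separately and then averaging, at a fraction of the cost.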
web intelligence | 2008
Aleksandar Ignjatovic; Norman Foo; Chung Tong Lee
Agents from a community interact in pairwise transactions across discrete time. Each agent reports to a central system its evaluation of the agent with which it has just had a transaction. The system uses these time sequences of experience evaluations to infer how much the agents trust one another. Our paper proposes rationality assumptions (also called axioms or constraints) that such inferences must obey, and proceeds to derive theorems implied by these assumptions; a basic representation theorem is proved. The system also uses these pairwise trustworthiness values to compute a reputation rank for each agent. Moreover, it provides with each reputation rank an estimate of its reliability, which we call the weight of evidence. This paper differs from much of the current work in that it examines how a central system which computes trustworthiness, reputation and weight of evidence is constrained by such rationality postulates.
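To make the pipeline concrete (evaluation sequences → pairwise trust → reputation rank with a weight of evidence), here is a toy instantiation. The decayed mean, the averaging step, and the observation-count reliability are all illustrative assumptions; the paper's point is which inferences the rationality axioms permit, not any one formula:

```python
def trust_from_history(evaluations, decay=0.9):
    """Toy pairwise trust: an exponentially decayed mean of one
    agent's time-sequence of evaluations of another, so recent
    transactions count more. (Not the paper's axiomatized inference.)"""
    score, norm, w = 0.0, 0.0, 1.0
    for e in reversed(evaluations):   # walk from most recent backwards
        score += w * e
        norm += w
        w *= decay
    return score / norm if norm else 0.0

def reputation(incoming_trust):
    """Toy reputation rank: mean of incoming pairwise trust values,
    with the number of evaluating agents as a crude stand-in for
    the paper's weight-of-evidence reliability estimate."""
    if not incoming_trust:
        return 0.0, 0
    return sum(incoming_trust) / len(incoming_trust), len(incoming_trust)
```

Under this toy model, a recent good transaction outweighs an old one of equal magnitude, and a rank backed by many evaluators carries more evidence than one backed by few.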
IEEE Transactions on Parallel and Distributed Systems | 2015
Mohammad Allahbakhsh; Aleksandar Ignjatovic
Online rating systems are widely used to facilitate decision making on the web. For fame or profit, people may try to manipulate such systems by posting unfair evaluations, so determining objective rating scores of products or services becomes a very important yet difficult problem. Existing solutions are mostly majority based, some also employing temporal analysis and clustering techniques; however, they remain vulnerable to sophisticated collaborative attacks. In this paper we introduce an iterative voting algorithm and use it to obtain a rating method that is very robust against collusion attacks as well as random and biased raters. Unlike previous iterative methods, our method is not based on comparing submitted evaluations to an approximation of the final rating scores, and it entirely decouples credibility assessment of the cast evaluations from the ranking itself. This makes it more robust against sophisticated collusion attacks than previous iterative filtering algorithms. We provide a rigorous proof of convergence, based on the existence of a fixed point of a continuous mapping which is also a stationary point of a constrained optimization objective. We have implemented and tested our rating method using both simulated and real-world data. In particular, we applied our method to movie evaluations obtained from MovieLens and compared our results with the IMDb and Rotten Tomatoes movie rating sites. Not only are the ratings provided by our system very close to the IMDb scores, but where we differ from IMDb, the difference is essentially always in the direction of the ratings given by the Rotten Tomatoes critics as domain experts. Our tests demonstrate that our model calculates realistic rating scores even in the presence of massive collusion attacks and outperforms well-known algorithms in the area, with high efficiency even for very large online rating systems, for which trust management is both of the highest importance and one of the most challenging problems.
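The decoupling idea the abstract emphasizes (credibility assessed without reference to the aggregate scores) can be illustrated with a one-shot sketch in which each rater's credibility comes only from pairwise agreement with other raters. The paper's actual algorithm is an iterative fixed-point method; this toy version shows only the decoupling principle:

```python
import numpy as np

def decoupled_rating(R):
    """Hedged sketch: credibility from inter-rater agreement, never
    from distance to the aggregate, then a credibility-weighted mean.
    (Not the paper's iterative voting algorithm.)

    R: (n_raters, n_items) matrix of ratings.
    Returns (item_scores, credibility_weights).
    """
    n = R.shape[0]
    # mean squared disagreement between every pair of raters
    diff = np.array([[np.mean((R[i] - R[j]) ** 2) for j in range(n)]
                     for i in range(n)])
    disagreement = diff.sum(axis=1) / (n - 1)       # exclude self (zero) term
    cred = 1.0 / (disagreement + 1e-9)              # agreeable raters weigh more
    cred /= cred.sum()
    return cred @ R, cred
```

A colluding rater who disagrees with everyone else gets low credibility regardless of what the current aggregate scores look like, which is the property that blunts attacks built around steering the aggregate.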