
Publication


Featured research published by Iosif-Viorel Onut.


International Conference on Software Testing, Verification and Validation | 2010

Solving Some Modeling Challenges when Testing Rich Internet Applications for Security

Suryakant Choudhary; Mustafa Emre Dincturk; Gregor von Bochmann; Guy-Vincent Jourdan; Iosif-Viorel Onut; Paul Ionescu

Web-based applications are becoming more ubiquitous day by day, and among these applications a new trend is emerging: rich Internet applications (RIAs), using technologies such as Ajax, Flex, or Silverlight, break away from the traditional approach of Web applications having server-side computation and synchronous communications between the web client and servers. RIAs introduce new challenges and new security vulnerabilities, and their behavior makes them difficult or impossible to test with current web-application security scanners. A new model is required to enable automated scanning of RIAs for security. In this paper, we evaluate the shortcomings of current approaches, we elaborate a framework that would permit automated scanning of RIAs, and we provide some directions to address the open problems.


ACM Transactions on the Web | 2014

A Model-Based Approach for Crawling Rich Internet Applications

Mustafa Emre Dincturk; Guy-Vincent Jourdan; Gregor von Bochmann; Iosif-Viorel Onut

New Web technologies, like AJAX, result in more responsive and interactive Web applications, sometimes called Rich Internet Applications (RIAs). Crawling techniques developed for traditional Web applications are not sufficient for crawling RIAs. The inability to crawl RIAs is a problem that needs to be addressed, at the very least to make RIAs searchable and testable. We present a new methodology, called “model-based crawling”, that can be used as a basis to design efficient crawling strategies for RIAs. We illustrate model-based crawling with a sample strategy, called the “hypercube strategy”. The performance of our model-based crawling strategies is compared against existing standard crawling strategies, namely breadth-first, depth-first, and a greedy strategy. Experimental results show that our model-based crawling approach is significantly more efficient than these standard strategies.
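The breadth-first baseline that model-based crawling is compared against can be sketched as a plain FIFO exploration of the RIA's client-side state space. The sketch below is a minimal illustration assuming three hypothetical inputs (a hashable state representation and the `get_events`/`execute` callbacks); it is not the paper's implementation, whose model-based strategies replace the FIFO order with model-guided choices.

```python
from collections import deque

def bfs_crawl(initial_state, get_events, execute):
    """Breadth-first exploration of an RIA's client-side state space.

    initial_state -- hashable representation of the starting DOM state
    get_events(state) -- events enabled in that state (hypothetical callback)
    execute(state, event) -- state reached by firing the event
    """
    seen = {initial_state}
    frontier = deque([initial_state])
    transitions = []                      # discovered (state, event, next_state) edges
    while frontier:
        state = frontier.popleft()
        for event in get_events(state):
            nxt = execute(state, event)
            transitions.append((state, event, nxt))
            if nxt not in seen:           # record each DOM state only once
                seen.add(nxt)
                frontier.append(nxt)
    return seen, transitions
```

Every event of every discovered state is executed exactly once, which is what makes breadth-first expensive on large RIAs and motivates model-guided alternatives.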


2013 Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing | 2013

Dist-RIA Crawler: A Distributed Crawler for Rich Internet Applications

Seyed M. Mirtaheri; Di Zou; Gregor von Bochmann; Guy-Vincent Jourdan; Iosif-Viorel Onut

Crawling web applications is important for indexing, accessibility and security assessment. Crawling traditional web applications is an old problem, as old as the web itself. Crawling Rich Internet Applications (RIAs) quickly and efficiently, however, is an open problem. Technologies such as AJAX and partial Document Object Model (DOM) updates only make the problem of crawling RIAs more time-consuming for the web crawler. To reduce the time required to crawl a RIA, this paper presents Dist-RIA Crawler, a new distributed algorithm that crawls a RIA in parallel over multiple computers. Dist-RIA Crawler uses the JavaScript events in the DOM structure to partition the search space. This paper illustrates a prototype implementation of Dist-RIA Crawler and inspects empirical performance measurements.
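The event-based partitioning of the search space can be illustrated with a simple round-robin split: every node puts a state's JavaScript events in the same canonical order and executes only its own share. This is a hedged sketch with illustrative names, not Dist-RIA Crawler's actual partitioning function.

```python
def events_for_node(events, node_id, num_nodes):
    """Partition the events of one DOM state across crawler nodes.

    Each node executes only the events whose position in a canonical
    ordering falls in its share, so the nodes jointly cover all events
    without central coordination (simplified round-robin split).
    """
    ordered = sorted(events)              # same canonical order on every node
    return [e for i, e in enumerate(ordered) if i % num_nodes == node_id]
```

Because the ordering is deterministic, each node can compute its share locally, with no messages exchanged about who owns which event.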


Web Information Systems Engineering | 2014

PDist-RIA Crawler: A Peer-to-Peer Distributed Crawler for Rich Internet Applications

Seyed M. Mirtaheri; Gregor von Bochmann; Guy-Vincent Jourdan; Iosif-Viorel Onut

Crawling Rich Internet Applications (RIAs) is important to ensure their security, accessibility and to index them for searching. To crawl a RIA, the crawler has to reach every application state and execute every application event. On a large RIA, this operation takes a long time. The previously published GDist-RIA Crawler proposes a distributed architecture to parallelize the task of crawling RIAs, running the crawl over multiple computers to reduce time. In GDist-RIA Crawler, a centralized unit calculates the next task to execute, and tasks are dispatched to worker nodes for execution. This architecture is not scalable: the centralized unit is bound to become a bottleneck as the number of nodes increases. This paper extends GDist-RIA Crawler and proposes a fully peer-to-peer and scalable architecture to crawl RIAs, called PDist-RIA Crawler. PDist-RIA does not have the same limitations in terms of scalability while matching the performance of GDist-RIA. We describe a prototype showing the scalability and performance of the proposed solution.


International World Wide Web Conference | 2016

D-ForenRIA: Distributed Reconstruction of User-Interactions for Rich Internet Applications

Salman Hooshmand; Akib Mahmud; Gregor von Bochmann; Muhammad Faheem; Guy-Vincent Jourdan; Russ Couturier; Iosif-Viorel Onut

We present D-ForenRIA, a distributed forensic tool to automatically reconstruct user sessions in Rich Internet Applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA automatically recovers each browser state, reconstructs the DOMs and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user on each state, including the user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We will demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site and exploits several vulnerabilities (SQL-injections, XSS...) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the hacker, the DOM of each page, the steps taken for the unauthorized login, and the inputs the hacker exploited for the SQL-injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts such as aiding RIA debugging and automated RIA scanning.


Web Information Systems Engineering | 2014

Towards Real Time Contextual Advertising

Abhimanyu Panwar; Iosif-Viorel Onut; James Miller

Contextual advertising is the practice of placing advertisements that are relevant to a target webpage's subject matter on that page. Placement of such ads can lead to an improved user experience and increased revenue for the webpage owner, the advertisement network, and the advertiser. The selection of these advertisements is done online by the advertisement network. Empirically, we have found that such advertisements are rendered later than the other content of the webpage, which lowers the quality of the user experience and lessens the impact of the ads. We propose an offline method of contextual advertising in which a website is classified into a particular category according to a given taxonomy. Upon a request from any web page under its domain, an advertisement is served from a pool of advertisements that are also classified according to the taxonomy. Experiments suggest that this approach is a viable alternative to the current form of contextual advertising.
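The offline scheme described above reduces online ad selection to a lookup: the site's taxonomy category is computed ahead of time, and serving an ad just picks from the matching pool. A minimal sketch with hypothetical names; the fallback category is an added assumption, not part of the paper:

```python
import random

def serve_ad(site_category, ads_by_category, fallback_category="general"):
    """Serve an ad whose taxonomy category matches the site's
    precomputed category, falling back when that pool is empty.
    (Hypothetical helper; the classification of both the website
    and the ads happens offline, before any request arrives.)"""
    pool = ads_by_category.get(site_category) or ads_by_category.get(fallback_category, [])
    return random.choice(pool) if pool else None
```

Since no classification happens on the request path, the ad can be rendered together with the rest of the page instead of arriving late.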


International Conference on Digital Forensics | 2016

Reconstructing Interactions with Rich Internet Applications from HTTP Traces

Sara Baghbanzadeh; Salman Hooshmand; Gregor von Bochmann; Guy-Vincent Jourdan; Seyed M. Mirtaheri; Muhammad Faheem; Iosif-Viorel Onut

This chapter describes the design and implementation of ForenRIA, a forensic tool for performing automated and complete reconstructions of user sessions with rich Internet applications using only the HTTP logs. ForenRIA recovers all the application states rendered by the browser, reconstructs screenshots of the states and lists every action taken by the user, including recovering user inputs. Rich Internet applications are deployed widely, including on mobile systems. Recovering information from logs for these applications is significantly more challenging compared with classical web applications. This is because HTTP traffic predominantly contains application data with no obvious clues about what the user did to trigger the traffic. ForenRIA is the first forensic tool that specifically targets rich Internet applications. Experiments demonstrate that the tool can successfully handle relatively complex rich Internet applications.


European Symposium on Research in Computer Security | 2018

Phishing Attacks Modifications and Evolutions

Qian Cui; Guy-Vincent Jourdan; Gregor von Bochmann; Iosif-Viorel Onut; Jason Flood

So-called “phishing attacks” are attacks in which phishing sites are disguised as legitimate websites in order to steal sensitive information.


Journal of Internet Services and Applications | 2018

Recovering user-interactions of Rich Internet Applications through replaying of HTTP traces

Salman Hooshmand; Gregor von Bochmann; Guy-Vincent Jourdan; Russell Couturier; Iosif-Viorel Onut

In this paper, we study the “Session Reconstruction” problem, which is the reconstruction of user interactions from the recorded request/response logs of a session. The reconstruction is especially useful when the only available information about the session is its HTTP trace, as could be the case during a forensic analysis of an attack on a website. Solutions to the reconstruction problem do exist for “traditional” Web applications. However, these solutions cannot handle modern “Rich Internet Applications” (RIAs). Our solution is implemented in the context of RIAs in a tool called D-ForenRIA. Our tool consists of a proxy and a set of browsers. Browsers are responsible for trying candidate actions on each DOM, and the proxy, which contains the observed HTTP trace, is responsible for responding to browsers’ requests and validating attempted actions on each DOM. D-ForenRIA has a distributed architecture, a learning mechanism to guide the session reconstruction process efficiently, and can handle complex user inputs, client-side randomness, and, to some extent, actions that do not generate any HTTP traffic. In addition, concurrent reconstruction makes the system scalable for real-world use. The results of our evaluation on several RIAs show that D-ForenRIA can efficiently reconstruct user sessions in practice.
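The trace-guided replay can be sketched, much simplified, as a greedy single-browser loop: a candidate action is accepted when the requests it would emit match the next unconsumed prefix of the recorded trace. The callbacks below (`candidate_actions`, `requests_of`) are hypothetical stand-ins; the real tool distributes candidate actions over several browsers and uses its learning mechanism to order them.

```python
def reconstruct_session(trace, candidate_actions, requests_of):
    """Greedy single-browser sketch of trace-guided session replay.

    trace -- recorded HTTP requests, in order
    candidate_actions(pos) -- actions worth trying at this trace position
    requests_of(action) -- requests that firing the action would emit
    """
    pos, performed = 0, []
    while pos < len(trace):
        for action in candidate_actions(pos):
            reqs = requests_of(action)
            if reqs and trace[pos:pos + len(reqs)] == reqs:
                performed.append(action)   # action explains the next requests
                pos += len(reqs)           # consume the matched prefix
                break
        else:
            break                          # no candidate explains the next request
    return performed
```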


International Journal of Systems and Service-oriented Engineering | 2017

On the Concept of Automatic User Behavior Profiling of Websites

Abhimanyu Panwar; Iosif-Viorel Onut; Michael R. Smith; James Miller

User behavior profiling of websites can provide an operator with an estimate of what is actually transpiring on their site. This type of information is essential to keep ahead of the curve in a commercial environment where competition is extremely fierce and continuously evolving. The authors present an automated methodology that uses economically available web server logs to mine User Behavior Profiles (UBP) without adding significant overhead to an existing web system. They prepare user traces from the log files based on the 35 most common actions found on popular websites, and 9 user behavior profiles which describe the majority of current activity patterns identified from those sites. They classify the user trace into a UBP via a Hidden Markov Model (HMM) based classification approach. The authors applied this methodology to the logs of a virtual e-commerce website, and an industrial case study to demonstrate the validity of the proposed approach.

Keywords: E-Commerce, Experimentation, Hidden Markov Models, Industrial Case Study, Modeling, User Behavior, User Sessions
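The HMM-based classification step can be sketched as scoring the observed action trace under one HMM per behavior profile and choosing the most likely profile. This is a toy illustration with dict-encoded matrices and two made-up profiles standing in for the paper's nine; it is not the authors' implementation.

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under one HMM
    (plain forward algorithm; all matrices are dicts keyed by state)."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in alpha) * emit[s][o]
                 for s in start}
    total = sum(alpha.values())
    return math.log(total) if total > 0 else float("-inf")

def classify_trace(obs, profile_hmms):
    """Assign a user trace to the behavior profile whose HMM gives it
    the highest likelihood (argmax over per-profile forward scores)."""
    return max(profile_hmms,
               key=lambda name: forward_loglik(obs, *profile_hmms[name]))
```

In practice each profile HMM would first be trained on labeled traces; classification then needs only one forward pass per profile.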

Collaboration


Dive into Iosif-Viorel Onut's collaboration.

Top Co-Authors

Qian Cui

University of Ottawa
