Ciaran Hughes
Valeo
Publication
Featured research published by Ciaran Hughes.
IEEE Transactions on Intelligent Transportation Systems | 2015
Shane Tuohy; Martin Glavin; Ciaran Hughes; Edward Jones; Mohan M. Trivedi; Liam Kilmartin
Automotive electronics is a rapidly expanding area with an increasing number of safety, driver assistance, and infotainment devices becoming standard in new vehicles. Current vehicles generally employ a number of different networking protocols to integrate these systems into the vehicle. The introduction of large numbers of sensors to provide driver assistance applications and the associated high-bandwidth requirements of these sensors have accelerated the demand for faster and more flexible network communication technologies within the vehicle. This paper presents a comprehensive overview of current research on advanced intra-vehicle networks and identifies outstanding research questions for the future.
Applied Optics | 2010
Ciaran Hughes; Patrick Eoghan Denny; Edward Jones; Martin Glavin
Most computer vision applications assume that the camera adheres to the pinhole camera model. However, most optical systems introduce undesirable effects. By far the most evident of these is radial lensing, which is particularly noticeable in fish-eye camera systems, where the effect is relatively extreme. Several authors have developed models of fish-eye lenses that can be used to describe the fish-eye displacement. Our aim is to evaluate the accuracy of several of these models. Thus, we present a method by which the lens curve of a fish-eye camera can be extracted using well-founded assumptions and perspective methods. Several of the models from the literature are examined against this empirically derived curve.
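The fish-eye models evaluated in studies of this kind typically include the classical equidistant, equisolid-angle, stereographic and orthographic projections. As an illustrative sketch (the specific models compared in the paper are not listed here), the standard projection functions can be written down and compared against the pinhole model:

```python
import math

def radius(model, theta, f=1.0):
    """Radial image distance for a ray at field angle theta (radians),
    under common projection models with focal length f."""
    if model == "pinhole":        # rectilinear reference: r = f * tan(theta)
        return f * math.tan(theta)
    if model == "equidistant":    # r = f * theta
        return f * theta
    if model == "equisolid":      # r = 2f * sin(theta / 2)
        return 2 * f * math.sin(theta / 2)
    if model == "stereographic":  # r = 2f * tan(theta / 2)
        return 2 * f * math.tan(theta / 2)
    if model == "orthographic":   # r = f * sin(theta)
        return f * math.sin(theta)
    raise ValueError(model)

# All fish-eye models compress peripheral rays relative to the pinhole
# model, which is the radial displacement the paper measures empirically.
theta = math.radians(60)
for m in ("equidistant", "equisolid", "stereographic", "orthographic"):
    assert radius(m, theta) < radius("pinhole", theta)
```

An empirically extracted lens curve can then be fitted against each of these functions to judge which model best describes a given lens.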
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010
Ciaran Hughes; Patrick Eoghan Denny; Martin Glavin; Edward Jones
In this paper, we describe a method to photogrammetrically estimate the intrinsic and extrinsic parameters of fish-eye cameras using the properties of equidistance perspective, particularly vanishing point estimation, with the aim of providing a rectified image for scene viewing applications. The estimated intrinsic parameters are the optical center and the fish-eye lensing parameter, and the extrinsic parameters are the rotations about the world axes relative to the checkerboard calibration diagram.
IEEE Transactions on Intelligent Transportation Systems | 2016
Damien Dooley; Brian McGinley; Ciaran Hughes; Liam Kilmartin; Edward Jones; Martin Glavin
This paper proposes a novel approach for detecting and tracking vehicles to the rear and in the blind zone of a vehicle, using a single rear-mounted fisheye camera and multiple detection algorithms. A maneuver that is a significant cause of accidents involves a target vehicle approaching the host vehicle from the rear and overtaking into the adjacent lane. As the overtaking vehicle moves toward the edge of the image and into the blind zone, the view of the vehicle gradually changes from a front view to a side view. Furthermore, the effects of fisheye distortion are at their most pronounced toward the extremities of the image, rendering detection of a target vehicle entering the blind zone even more difficult. The proposed system employs an AdaBoost classifier at distances of 10-40 m between the host and target vehicles. For detection at short distances where the view of a target vehicle has changed to a side view and the AdaBoost classifier is less effective, identification of vehicle wheels is proposed. Two methods of wheel detection are employed: at distances between 5 and 15 m, a novel algorithm entitled wheel arch contour detection (WACD) is presented, and for distances less than 5 m, Hough circle detection provides reliable wheel detection. A testing framework is also presented, which categorizes detection performance as a function of distance between host and target vehicles. Experimental results indicate that the proposed method results in a detection rate of greater than 93% in the critical range (blind zone) of the host.
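The Hough circle stage used for very short ranges can be illustrated with a minimal voting accumulator. This is a generic sketch of the classical Hough transform for circles of known radius on a synthetic edge map, not the paper's implementation:

```python
import numpy as np

def hough_circle(edges, radius):
    """Vote each edge pixel onto candidate circle centres at a fixed
    radius; the accumulator peak is the most likely centre."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        a = np.round(y - radius * np.sin(thetas)).astype(int)
        b = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (a >= 0) & (a < h) & (b >= 0) & (b < w)
        np.add.at(acc, (a[ok], b[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic "wheel" edge map: a circle of radius 20 centred at (50, 60).
edges = np.zeros((100, 120), dtype=bool)
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
edges[np.round(50 + 20 * np.sin(t)).astype(int),
      np.round(60 + 20 * np.cos(t)).astype(int)] = True

cy, cx = hough_circle(edges, radius=20)  # peak lands near (50, 60)
```

In practice the radius would be searched over a small range derived from the expected apparent wheel size at close distance.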
international conference on intelligent transportation systems | 2015
Jonathan Horgan; Ciaran Hughes; John McDonald; Senthil Yogamani
Vision-based driver assistance is one of the rapidly growing research areas of ITS, due to various factors such as increased safety requirements in the automotive sector, growing computational power in embedded systems, and the desire to get closer to autonomous driving. It is a cross-disciplinary area encompassing specialised fields such as computer vision, machine learning, robotic navigation, embedded systems, automotive electronics and safety-critical software. In this paper, we survey vision-based advanced driver assistance systems with a consistent terminology and propose a taxonomy. We also propose an abstract model in an attempt to formalise a top-down view of application development that scales towards autonomous driving systems.
Archive | 2011
Ciaran Hughes; Ronan O’Malley; Diarmaid O’Cualain; Martin Glavin; Edward Jones
When discussing vehicular safety, there are two key concepts. The first is Primary Safety, which can be defined as ‘the vehicle engineering aspects which as far as possible reduce the risk of an accident occurring’ (DfT (UK), 2008a); in contrast, Secondary Safety can be defined as ‘all structural and design features that reduce the consequences of accidents as far as possible’ (DfT (UK), 2008b). It is important to note that these two aspects of safety sometimes interact in conflicting ways. For example, to improve secondary safety, manufacturers often strengthen and increase the size of a vehicle’s A-pillar (the vertical or near-vertical shaft of material that supports the vehicle roof on either side of the windshield). However, this can decrease the visibility of the vehicle’s immediate environment to the driver (i.e. increase the vehicle’s blind zones), which has a negative impact on the primary safety of the vehicle. In this chapter, we discuss the role of automotive vision systems in improving the primary safety of vehicles. The development of electronic vision systems for the automotive market is growing strongly, driven in particular by consumer demand for increased safety in vehicles, both for drivers and for other road users, including Vulnerable Road Users (VRUs), such as pedestrians, cyclists or motorcyclists. Consumer demand is matched by legislative developments in a number of key automotive markets; for example, Europe, Japan and the US have either introduced or are in the process of introducing legislation with the intention of reducing the number of VRU fatalities, with some emphasis on the use of vision systems. There are several areas in which electronic vision systems can be utilised. These can be broadly divided into two applications: visual display applications for passive human
international conference on control, automation, robotics and vision | 2014
Damien Dooley; Brian McGinley; Liam Kilmartin; Edward Jones; Martin Glavin; Ciaran Hughes
This paper proposes a performance test framework for vision-based vehicle detection systems implemented on road-going vehicles. An extensive literature review outlines the evolution of test frameworks used by a number of recently published vehicle detection systems. The proposed test framework determines the effectiveness of a detection algorithm as a function of distance between the host and target vehicles. The framework assists the characterisation of an algorithm's performance over the full range of distances in which a vehicle can be detected in an image. The test framework is designed for use in blind spot detection, forward collision warning and rear-end collision warning applications.
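A minimal sketch of such a framework, assuming hypothetical distance bands and synthetic detection results, would bin detections by host-target distance and report a per-band detection rate:

```python
# Synthetic (distance_m, detected) pairs for illustration only.
results = [(3.2, True), (4.8, True), (8.0, True), (12.5, True),
           (14.0, False), (22.0, True), (28.0, False), (35.0, True)]

# Hypothetical distance bands; a real framework would tie these to the
# ranges of the individual detectors being characterised.
bins = [(0, 5), (5, 15), (15, 40)]

def rate_by_distance(results, bins):
    """Detection rate within each [lo, hi) distance band, or None if no
    samples fall in the band."""
    rates = {}
    for lo, hi in bins:
        hits = [det for dist, det in results if lo <= dist < hi]
        rates[(lo, hi)] = sum(hits) / len(hits) if hits else None
    return rates

rates = rate_by_distance(results, bins)
```

Reporting the rate per band, rather than a single aggregate figure, exposes exactly where along the approach an algorithm's performance degrades.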
First Annual International Symposium on Vehicular Computing Systems | 2008
Ciaran Hughes; Martin Glavin; Edward Jones; Patrick Eoghan Denny
The development of vehicular electronic vision systems for the automotive market is a growing field, driven in particular by customer demand to increase the safety of vehicles both for drivers and for other road users, including Vulnerable Road Users (VRUs), such as pedestrians. Close-range automotive camera systems are designed to display the areas in the close vicinity of the vehicle to the driver, typically covering the blind-zones of the vehicle. Customer demand is matched by legislative developments in a number of key automotive markets; for example, Europe, Japan and the United States are in the process of introducing legislation to aid in the prevention of fatalities to vulnerable road users, with emphasis on the use of vision systems. In this paper we discuss some of the factors that have promoted the introduction of this legislation. We also show that, by the use of wide-angle camera systems, these legislative requirements can be met.
Signal, Image and Video Processing | 2018
Martin Gallagher; Sunil Chandra; Petros Kapsalas; Ciaran Hughes; Martin Glavin; Edward Jones
This paper discusses the problem of reliably estimating motion in video sequences. A core issue in this application is the registration of successive images in an image sequence of a dynamic scene. The paper examines the core characteristics of the Fourier-Mellin transform (FMT) when applied to this task in the automotive environment. Of particular interest are the translational, scale and rotational invariances of the transform. The objective of the paper is to examine the behaviour of the algorithm under wide variations in these three parameters. Images from a range of automotive scenarios are considered in this evaluation. Our main contributions are the experimental evaluations carried out on various images with a range of known translations, rotations and scale changes. The results of the experimental process allow the determination of the relationship between the transformation of image patches and the resulting level of error in motion estimation. This helps to inform the application of the FMT: when it is effective, and where its limitations occur. The results may be applied in several ways in practice. The applicability of the method may be extended through the addition of environmental variables from external sensors, e.g. CAN bus data, GPS or spatial-feature ego-motion, allowing adaptive execution of the transform.
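The translation-invariant core of the FMT is phase correlation on the normalised cross-power spectrum; the full transform extends this to rotation and scale by resampling the Fourier magnitude in log-polar coordinates. A minimal phase-correlation sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def phase_correlate(a, b):
    """Recover the circular shift taking image a to image b by keeping
    only the phase of the cross-power spectrum; the inverse FFT then
    peaks at the shift."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12          # discard magnitude, keep phase
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 12), axis=(0, 1))  # known circular shift

dy, dx = phase_correlate(img, shifted)  # recovers (5, 12)
```

For real image patches (non-circular shifts, rotation, scale change), the peak broadens and weakens, which is precisely the error behaviour the paper characterises experimentally.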
international conference on intelligent transportation systems | 2015
Duong Nguyen; Ciaran Hughes; Jonathan Horgan
This paper introduces a novel approach for moving/static separation, which can be used efficiently for outlier removal within optical-flow-based 3D reconstruction or moving object detection, even in the case of a moving observer. The contributions of the proposed approach include an efficient implementation, high accuracy and suitability for embedded platforms. These characteristics make the approach very attractive for camera-based automotive safety applications. The paper also highlights some possible applications where moving/static separation can play a key role in improving performance.
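One common way to realise moving/static separation under a moving observer is to compare each measured flow vector against the flow predicted from camera ego-motion and threshold the residual. This is a hedged sketch of that general idea on synthetic data, not necessarily the paper's method:

```python
import numpy as np

def separate_moving(flow, ego_flow, thresh=1.0):
    """Label a flow vector 'moving' when its residual against the
    ego-motion-predicted flow exceeds a threshold (in pixels)."""
    residual = np.linalg.norm(flow - ego_flow, axis=1)
    return residual > thresh

# Ego-motion predicts a uniform 2 px rightward flow for static points
# (a real system would predict per-pixel flow from rotation and depth).
ego_flow = np.tile([2.0, 0.0], (5, 1))
flow = np.array([[2.1,  0.0],   # static (small residual)
                 [1.9,  0.1],   # static
                 [6.0,  0.0],   # independently moving
                 [2.0, -0.2],   # static
                 [-3.0, 1.0]])  # independently moving

moving = separate_moving(flow, ego_flow)
```

Points labelled moving are then excluded as outliers from structure-from-motion triangulation, or passed on as moving-object candidates.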