Christine Buisson
University of Lyon
Publications
Featured research published by Christine Buisson.
Transportation Research Record | 2009
Christine Buisson
Recently, some authors have provided experimental evidence of the existence of an urban-scale macroscopic fundamental diagram (MFD). Their convincing results were obtained on the basis of 500 urban fixed detectors placed 100 m upstream of most major intersections in the city of Yokohama, Japan. Those authors assume that the network in which data are collected is homogeneous with regard to congestion occurrence. This paper is devoted to exploring the impact of heterogeneity on the existence of an MFD. All data available for a medium-size French city are used. The data set encompasses measurements on highways, urban center streets (congested during business hours), and residential area streets. Data were collected by loop detectors whose distance from the downstream signal varies from 10 to 1,000 m. Heterogeneity is examined here in various aspects: differences between the surface and highway networks, the impact of the distance between the loop detector and the traffic signal in the surface network, and differences between penetrating roads and the ring road in the highway network. It is shown in this paper that heterogeneity has a strong impact on the shape of the macroscopic fundamental diagram.
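The MFD aggregation the paper examines can be sketched as follows: average per-detector flow and density over the whole network for each time interval, and plot mean flow against mean density. This is a minimal illustration with hypothetical detector values, not the paper's data set.

```python
# Sketch: build macroscopic fundamental diagram (MFD) points by averaging
# per-detector flow and density over the whole network for each time
# interval. Detector values below are illustrative, not the paper's data.

def mfd_points(detector_records):
    """detector_records: {interval: [(flow veh/h, density veh/km), ...]}.
    Returns network-mean (density, flow) points sorted by density."""
    points = []
    for records in detector_records.values():
        mean_flow = sum(q for q, _ in records) / len(records)
        mean_density = sum(k for _, k in records) / len(records)
        points.append((mean_density, mean_flow))
    return sorted(points)

# Two 6-min intervals, three detectors each (hypothetical values):
data = {
    "08:00": [(1800, 25), (1500, 30), (900, 60)],
    "08:06": [(1200, 70), (800, 95), (600, 110)],
}
for k_mean, q_mean in mfd_points(data):
    print(f"density {k_mean:6.1f} veh/km  ->  flow {q_mean:6.1f} veh/h")
```

Heterogeneity enters precisely here: averaging detectors in very different states (highway vs. congested urban) can scatter or distort the resulting curve.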
IEEE Transactions on Intelligent Transportation Systems | 2009
Nicolas Chiabaut; Christine Buisson; Ludovic Leclercq
Classically, fundamental diagrams are estimated from aggregated data at a specific location. Such a measurement method may lead to inconsistency, which mainly explains the current controversy about their shape. This paper proposes a new estimation method based on passing rate measurements along moving observer paths. Under specific assumptions, it can be proved that in congestion, the passing rate is independent of the traffic flow states. This property allows 1) proof that a linear fundamental diagram is suitable to represent traffic flow behavior involved in the Next Generation Simulation (NGSIM) data set and 2) fitting of its two parameters, i.e., the congested wave speed and the jam density.
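The two parameters in question define the congested branch q = w(kj − k). A minimal sketch of fitting them by ordinary least squares on congested flow-density samples (synthetic values, not the paper's passing-rate method):

```python
# Sketch: least-squares fit of a linear congested branch
# q = w * (kj - k), giving the congested wave speed w and jam density kj.
# The flow-density samples below are synthetic, not NGSIM measurements.

def fit_congested_branch(samples):
    """samples: (density k [veh/km], flow q [veh/h]) pairs in congestion.
    Fit q = a + b * k by ordinary least squares, then w = -b, kj = -a / b."""
    n = len(samples)
    sk = sum(k for k, _ in samples)
    sq = sum(q for _, q in samples)
    skk = sum(k * k for k, _ in samples)
    skq = sum(k * q for k, q in samples)
    b = (n * skq - sk * sq) / (n * skk - sk * sk)
    a = (sq - b * sk) / n
    return -b, -a / b   # wave speed (km/h), jam density (veh/km)

# Noise-free synthetic congested states with w = 18 km/h, kj = 150 veh/km:
pts = [(k, 18.0 * (150.0 - k)) for k in (60, 80, 100, 120, 140)]
w, kj = fit_congested_branch(pts)
print(f"wave speed {w:.1f} km/h, jam density {kj:.1f} veh/km")
```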
Transportation Research Record | 2003
Stéphane Chanut; Christine Buisson
A new first-order traffic flow model is introduced that takes into account the fact that various types of vehicles use the roads simultaneously, particularly cars and trucks. The main improvement this model has to offer is that vehicles are differentiated not only by their lengths but also by their speeds in a free-flow regime. Indeed, trucks on European roads are characterized by a lower speed than that of cars. A system of hyperbolic conservation equations is defined. In this system the flux function giving the flow of heavy and light vehicles depends on total and partial densities. This problem is partly solved in the Riemann case in order to establish a Godunov discretization. Some model output is shown stressing that speed differences between the two types of vehicles and congestion propagation are sufficiently reproduced. The limits of the proposed model are highlighted, and potential avenues of research in this domain are suggested.
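As a simplified stand-in for the two-class scheme, the Godunov discretization can be illustrated on the single-class LWR model with a triangular fundamental diagram: interface flux is the minimum of upstream demand and downstream supply. All parameter values here are illustrative.

```python
# Sketch: a Godunov discretization of the single-class LWR model with a
# triangular fundamental diagram, as a simplified stand-in for the
# two-class scheme described above. All parameter values are illustrative.

V, W, KJ = 90.0, 18.0, 150.0   # free speed, wave speed (km/h), jam density
KC = W * KJ / (V + W)          # critical density (25 veh/km)
QMAX = V * KC                  # capacity (2,250 veh/h)

def demand(k):
    """Flow the upstream cell can send."""
    return min(V * k, QMAX)

def supply(k):
    """Flow the downstream cell can receive."""
    return min(W * (KJ - k), QMAX)

def godunov_step(k, dx, dt):
    """One explicit Godunov update of cell densities k (veh/km),
    with zero-gradient ghost cells at both boundaries."""
    kk = [k[0]] + list(k) + [k[-1]]
    flux = [min(demand(kk[i]), supply(kk[i + 1])) for i in range(len(kk) - 1)]
    return [k[i] + dt / dx * (flux[i] - flux[i + 1]) for i in range(len(k))]

# A free-flow block meets a queue; the congestion propagates backward.
# CFL condition: dt <= dx / V (here dt = 0.005 h = 18 s <= 0.5 / 90 h).
cells = [20.0] * 5 + [120.0] * 5
for _ in range(10):
    cells = godunov_step(cells, dx=0.5, dt=0.005)
print([round(c, 1) for c in cells])
```

The two-class model of the paper replaces the scalar flux with a vector flux depending on partial densities, but the demand/supply structure of the update is analogous.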
international conference on intelligent transportation systems | 2010
Victor L. Knoop; Aurélien Duret; Christine Buisson; Bart van Arem
The congestion at on-ramps of motorways is due to too many vehicles wanting to merge onto the same lane. Ramp metering is usually used as a control measure to influence the flows, but a variable speed limit (VSL) can also have large consequences for the merging process. This paper discusses the change in lane distribution due to a VSL and explicitly considers the influence of an on-ramp. To this end, the lane distribution just upstream of an on-ramp is compared with the lane distribution elsewhere. Just upstream of an on-ramp, a significantly lower fraction of the flow uses the outside (right) lane compared with a part of the road without any ramps. This holds both with and without a VSL. Moreover, a VSL increases the use of the outside lane near capacity. In this way, a VSL influences not only the speed but also the lane distribution, and thereby possibly also the merging ratio. The consequences of this changed lane distribution are site-dependent and should be taken into account when deciding on installing a system of variable speed limits.
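The compared quantity is simple to state: the share of total flow carried by each lane. A minimal sketch with illustrative counts (not the paper's measurements):

```python
# Sketch: per-lane flow shares of the kind compared in the paper,
# computed from per-lane vehicle counts over one interval.
# Counts below are illustrative.

def lane_shares(counts):
    """counts: vehicles counted per lane -> fraction of total flow per lane."""
    total = sum(counts)
    return [c / total for c in counts]

near_ramp = [300, 520, 610]   # outside (right), middle, inside (left) lane
reference = [450, 500, 490]   # ramp-free reference site
print([round(s, 2) for s in lane_shares(near_ramp)])
print([round(s, 2) for s in lane_shares(reference)])
```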
Transportation Research Record | 2008
Aurélien Duret; Christine Buisson; Nicolas Chiabaut
Capturing variability within flow is an important task for traffic flow models. The linearity of the congested part of the fundamental diagram induces a linear speed-spacing relationship at an individual level, characterized by two parameters. This study assumes that most intervehicle variability can be accounted for by estimating these two parameters for each vehicle. Two methods are presented to quantify individual linear speed-spacing relationships. The first method is based on data: it estimates the speed-spacing relationship by fitting the experimental speed-spacing scatter plot with a straight line. The second method is based on simulation: it computes the optimum parameters so that the simulated trajectories obtained by Newell's car-following algorithm reproduce as closely as possible the experimental vehicle trajectories. Both proposed methods are implemented on the Next Generation Simulation trajectory data set recorded on I-80. The individual parameters for the speed-spacing relationship are quantified, and their distributions are specified. The need to distinguish driver behavior on a lane-by-lane basis is discussed. The results tend to prove that taking into account individual variability between drivers can improve the accuracy of simulated trajectories.
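The simulation-based idea can be sketched with Newell's rule, under which the follower's trajectory is the leader's shifted by a time lag tau and a distance delta. A minimal fit by grid search over tau, with a closed-form least-squares delta, on synthetic trajectories (not the NGSIM I-80 data):

```python
# Sketch: fitting the two parameters of Newell's car-following rule,
# x_f(t) = x_l(t - tau) - delta, by a grid search over tau with a
# closed-form least-squares delta. Trajectories below are synthetic,
# not the NGSIM I-80 data used in the paper.

def newell_fit(x_leader, x_follower, dt, taus):
    """Uniformly sampled positions (step dt); returns the (tau, delta)
    minimizing the squared gap between follower and shifted leader."""
    best = None
    for tau in taus:
        shift = round(tau / dt)                  # whole-sample time shift
        pred = x_leader[:len(x_leader) - shift]  # leader, delayed by tau
        obs = x_follower[shift:]
        # For fixed tau the optimal delta is the mean leader-follower gap.
        delta = sum(p - o for p, o in zip(pred, obs)) / len(obs)
        err = sum((o - (p - delta)) ** 2 for p, o in zip(pred, obs))
        if best is None or err < best[0]:
            best = (err, tau, delta)
    return best[1], best[2]

# Synthetic pair obeying Newell's rule with tau = 1.0 s, delta = 8 m.
dt = 0.5
x_l = [10.0 * (i * dt) + 0.3 * (i * dt) ** 2 for i in range(60)]
x_f = [x_l[max(i - 2, 0)] - 8.0 for i in range(60)]  # 2 samples = 1.0 s
tau, delta = newell_fit(x_l, x_f, dt, taus=[0.5, 1.0, 1.5, 2.0])
print(tau, round(delta, 2))
```

In Newell's formulation tau and delta map directly to the slope and intercept of the individual linear speed-spacing relationship.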
Computer-aided Civil and Infrastructure Engineering | 2011
Aurélien Duret; Soyoung Ahn; Christine Buisson
Passing rate measurements of backward-moving kinematic waves in congestion are applied to quantify two traffic features: the relaxation phenomenon of vehicle lane changing and the impact of lane changing on traffic streams after the relaxation process is complete. The relaxation phenomenon occurs when either a lane-changer or its immediate follower accepts a short spacing upon insertion and gradually resumes a larger spacing. A simple existing model describes this process with few observable parameters. In this study, the existing model is reformulated to estimate its parameter using passing rate measurements. Calibration results based on vehicle trajectories from two freeway locations indicate that the revised relaxation model matches the observations well. The results also indicate that the relaxation occurs in about 15 seconds and that the shoulder lane exhibits a longer relaxation duration. The passing rate measurements were also employed to quantify the postrelaxation impact of multiple lane-changing maneuvers within a platoon of 10 or more vehicles in a queued traffic stream. The analysis of the same data sets shows that lane-changing activities do not induce a long-term change in traffic states; traffic streams are perturbed temporarily by lane-changing maneuvers but return to the initial states after relaxations.
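The relaxation phenomenon can be illustrated with a generic exponential curve: the accepted spacing starts short and grows toward the equilibrium spacing. The functional form and all values below are illustrative, not the reformulated model calibrated in the paper.

```python
# Sketch: a generic exponential relaxation curve for the spacing accepted
# by a lane-changer, growing from the short inserted spacing toward the
# equilibrium spacing. Form and values are illustrative, not the
# reformulated model of the paper.
import math

def relaxed_spacing(t, s0, s_eq, tau):
    """Spacing (m) at time t (s) after insertion: starts at s0 < s_eq
    and approaches s_eq with relaxation time tau."""
    return s_eq - (s_eq - s0) * math.exp(-t / tau)

s0, s_eq, tau = 8.0, 20.0, 15.0   # assumed values, in m, m, s
for t in (0, 5, 15, 45):
    print(f"t = {t:2d} s  spacing = {relaxed_spacing(t, s0, s_eq, tau):5.2f} m")
```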
Transportation Research Record | 2010
Aurélien Duret; Jacques Bouffier; Christine Buisson
Low-speed merging maneuvers performed within a free-flow stream are believed to trigger congestion. These accelerating moving bottlenecks introduce local constraints that can disturb the flow at a local or global scale. Low-speed merging maneuvers are also suspected to cause capacity drop. Using the kinematic wave theory, this paper explores the analytical solution of a simple first-order model when moving boundary conditions are introduced. The paper shows that shock waves initiated by low-speed merging maneuvers are a linear transformation of the moving boundary conditions, no matter the shape of the moving boundary condition. These results are then applied to typical situations to show that the interaction of two moving boundaries can modify the analytical solution of the problem. The results are then extended to multiple merging maneuvers to show that they can interact. Every possible interaction between two identical merging maneuvers is explored to identify the conditions that lead to global congestion. Finally, these results are used to propose an analytical formulation of the capacity drop for multiple merging maneuvers at a single location. It is shown that capacity drop is related to the demands on the minor and major streams and to the speed of the merging vehicle.
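The building block of such kinematic-wave constructions is the Rankine-Hugoniot condition: the interface between two traffic states moves at the slope of the chord connecting them on the fundamental diagram. A minimal sketch with illustrative states:

```python
# Sketch: Rankine-Hugoniot shock speed between two traffic states, the
# building block of the kinematic-wave constructions described above.
# The states below are illustrative.

def shock_speed(state_up, state_down):
    """Each state is (density veh/km, flow veh/h); returns the speed
    (km/h) of the interface separating them, w = (q2 - q1) / (k2 - k1)."""
    (k1, q1), (k2, q2) = state_up, state_down
    return (q2 - q1) / (k2 - k1)

# Free-flow demand meets the queued state behind a low-speed merger:
free = (20.0, 1800.0)
queued = (100.0, 900.0)
print(shock_speed(free, queued))  # negative: the shock moves upstream
```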
Transportation Research Record | 2012
Florian Marczak; Christine Buisson
Dynamic traffic simulation tools are increasingly being used to help traffic managers and urban planners to make decisions. Therefore, simulation tool users require a validated methodology guaranteeing that simulation results can be trusted. This study contributes to the identification and correction of a possible deficiency in detailed calibration and validation of car-following models: the data errors of individual trajectory data. Some studies have addressed the problem of filtering trajectory data. A new filtering technique to reduce the measurement errors on trajectories, speed profiles, and acceleration profiles is proposed here. This technique is based on piecewise polynomials termed "splines." The proposed technique is compared with a set of filtering techniques found in the literature. A complete trajectory data set available within the NGSIM program is used. As quality indicators of the various filtering techniques, velocity distribution, acceleration distribution, and jerk analysis are used for the whole data set. Also, analyzing acceleration standard deviations for each trajectory of the data set is suggested. The main findings are as follows: (a) of the methods compared within this work, the I-spline method with the action points most reduces the spikes in the velocity distribution; (b) the I-spline method most reduces the percentage of jerk values higher than 15 m/s³ as well as the percentage of the 1-s windows with more than one sign inversion of the jerk; and (c) in some cases, this method increases the acceleration variability of smoothed trajectories.
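The jerk-based quality indicator can be sketched as follows, with a plain moving-average smoother standing in for the I-spline method and a synthetic noisy trajectory rather than NGSIM data:

```python
# Sketch: the jerk-based quality indicator used to compare trajectory
# filters, applied here with a plain moving-average smoother standing in
# for the I-spline method; the trajectory is synthetic noisy data.
import random

def moving_average(x, window):
    """Centered moving average with truncated windows at the edges."""
    half = window // 2
    return [sum(x[max(i - half, 0):i + half + 1])
            / len(x[max(i - half, 0):i + half + 1]) for i in range(len(x))]

def jerk(positions, dt):
    """Third finite difference of position: jerk samples in m/s^3."""
    d = list(positions)
    for _ in range(3):
        d = [(b - a) / dt for a, b in zip(d, d[1:])]
    return d

def frac_excessive(jerks, limit=15.0):
    """Fraction of jerk samples beyond a physically plausible limit."""
    return sum(1 for j in jerks if abs(j) > limit) / len(jerks)

random.seed(0)
dt = 0.1
raw = [10.0 * i * dt + random.gauss(0.0, 0.05) for i in range(200)]
smooth = moving_average(raw, window=9)
print("raw   :", round(frac_excessive(jerk(raw, dt)), 3))
print("smooth:", round(frac_excessive(jerk(smooth, dt)), 3))
```

Differentiating three times amplifies measurement noise strongly, which is why even small position errors produce implausible jerk values on unfiltered trajectories.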
Transportation Research Record | 2006
Christine Buisson
As the capabilities of automated license plate recognition (ALPR) devices increase, their applicability to measuring individual travel times becomes appealing, and the question of sizing the total number of devices becomes an important one. ALPR cameras have been deployed along a road of about 50 km in the French Alps. This road is highly congested, and users experience travel times of 35 to 120 min. Traffic managers plan to inform road users of their travel times on variable message signs. Slicing the complete journey into small sections (between two cameras) makes it possible to be more reactive to sudden changes in travel times. This is especially true if those changes are due to congestion occurring at the beginning or in the middle of the journey. But increasing the total number of cameras is not cost-effective and must be avoided as much as possible. This paper presents the formulation and results of a simplified model for studying the impact of the total number of cameras on the precision of displayed travel times. The model was kept as simple as possible. Maximum use was made of the small amounts of data available. Comparison with on-site preliminary results proves that the model gives robust results.
IEEE Transactions on Intelligent Transportation Systems | 2015
Victor L. Knoop; Christine Buisson
Lane changes (LCs) are important in traffic flow operations. They cause differences in flow over lanes and in some cases determine the start of congestion. Whereas calibration and validation are common practice with car-following models, they are not with LC models. Moreover, it is not clear what calibration and validation entail for probabilistic LC models. Therefore, this paper reviews methodologies to calibrate and validate probabilistic LC models, both microscopically and macroscopically. A likelihood is often used in calibration but does not intuitively show the quality of the model. An example shows that a model can be calibrated and validated, with each parameter showing the same error in validation as in calibration, while the quality of the model is still bad. Using a likelihood ensures that stochastic effects are well captured, but the conclusion is that for validation purposes, one can better use a measure that has a physical interpretation and gives a value indicating the quality of the model for the purpose for which it is to be used.
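Likelihood-based calibration of a probabilistic LC model can be sketched with a toy logistic model and a grid search for the maximum-likelihood parameters. The model form, grids, and data below are illustrative, not a model evaluated in the paper.

```python
# Sketch: maximum-likelihood calibration of a toy probabilistic
# lane-change model, p(change) = 1 / (1 + exp(-(a + b * speed_gain))),
# via grid search. Model form, grids, and data are illustrative.
import math

def log_likelihood(a, b, observations):
    """observations: list of (speed_gain, changed) with changed in {0, 1}."""
    ll = 0.0
    for gain, changed in observations:
        p = 1.0 / (1.0 + math.exp(-(a + b * gain)))
        ll += math.log(p if changed else 1.0 - p)
    return ll

def calibrate(observations, a_grid, b_grid):
    """Grid-search maximum-likelihood estimate of (a, b)."""
    return max(((a, b) for a in a_grid for b in b_grid),
               key=lambda ab: log_likelihood(ab[0], ab[1], observations))

# Synthetic observations: a change occurs iff the speed gain exceeds 4
# (a hard threshold standing in for stochastic driver choice).
obs = [(g, 1 if g > 4 else 0) for g in range(11)]
a_hat, b_hat = calibrate(obs, a_grid=[-3, -2, -1, 0], b_grid=[0.25, 0.5, 1.0])
print(a_hat, b_hat)
```

A high likelihood here only says the predicted probabilities match the observed outcomes; as the abstract argues, it does not by itself indicate whether the model is good enough for its intended use.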