Performance and Scaling of Collaborative Sensing and Networking for Automated Driving Applications
Yicong Wang, Gustavo de Veciana, Takayuki Shimizu, Hongsheng Lu
Yicong Wang*, Gustavo de Veciana*, Takayuki Shimizu†, and Hongsheng Lu†
*Department of Electrical and Computer Engineering, The University of Texas at Austin
†TOYOTA InfoTechnology Center, U.S.A., Inc., Mountain View, CA
Abstract—A critical requirement for automated driving systems is enabling situational awareness in dynamically changing environments. To that end vehicles will be equipped with diverse sensors, e.g., LIDAR, cameras, mmWave radar, etc. Unfortunately the sensing 'coverage' is limited by environmental obstructions, e.g., other vehicles, buildings, people, objects, etc. A possible solution is to adopt collaborative sensing amongst vehicles, possibly assisted by infrastructure. This paper introduces new models and performance analysis for vehicular collaborative sensing and networking. In particular, coverage gains are quantified, as is their dependence on the penetration of vehicles participating in collaborative sensing. We also evaluate the associated communication loads in terms of the Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) capacity requirements and how these depend on penetration. We further explore how collaboration with sensing-capable infrastructure improves sensing performance, as well as the benefits of utilizing spatio-temporal dynamics, e.g., collaborating with vehicles moving in the opposite direction. Collaborative sensing is shown to greatly improve sensing performance, e.g., substantially improving coverage even at modest penetrations. In scenarios with limited penetration and high coverage requirements, infrastructure can be used to both help sense the environment and relay data. Once penetration is high enough, sensing vehicles provide good coverage and data traffic can be effectively 'offloaded' to V2V connectivity, making V2I resources available to support other in-car services.

I. INTRODUCTION
In future automated driving systems, vehicles will need to maintain real-time situational awareness in dynamically changing environments. Despite vehicles being equipped with multiple sensors, e.g., radar, LIDAR, cameras, etc., the sensing 'coverage' of a single vehicle is limited. Indeed such sensors typically rely on a Line-Of-Sight (LOS) to detect and track objects, so their performance is fragile in obstructed environments, e.g., a vehicle may have limited visibility of what is happening several cars ahead of it. Such information could be needed for path planning, determining car-following distance, taking critical safety maneuvers, etc. Further, without access to diverse points of view of an object, it may be difficult to quickly detect and recognize what it is, e.g., a cyclist viewed only from the front may look like a pedestrian.

To overcome this problem, researchers and industry are considering enabling distributed collaborative sensing amongst neighboring vehicles, and possibly infrastructure, e.g., Road Side Units (RSUs) and/or base stations. The idea is to enable automated vehicles to exchange High Resolution (HD) and/or processed data from vehicles and/or RSUs to enhance timely perception of the environment, see e.g., [1][2]. The benefits of this approach will depend on the penetration of collaborating vehicles/RSUs as well as the density and character of obstructions in the environment. The communication loads to share sensed information can be high and will need to be met by enabling new forms of connectivity.

Collaborative sensing is likely to be one of the key functionalities for cooperative automated driving [2], and one of the three most important use cases of future 5G systems [3]. Thus a basic understanding of sensing performance and traffic scaling is of great interest. It may involve substantial data rates per vehicle for highly automated driving, and require low end-to-end delays depending on the use case [4].
At high vehicle densities, realizing such data exchanges via Vehicle-to-Infrastructure (V2I) resources is not likely to be possible, e.g., there could be tens to hundreds of vehicles sharing a base station. A possible solution is to leverage direct data exchanges amongst vehicles. In particular, short range millimeter wave (mmWave) based LOS Vehicle-to-Vehicle (V2V) links can support exceedingly high data rates. Unfortunately such links are also susceptible to obstructions, and thus, not unlike collaborative sensing itself, the connectivity of such V2V networks is limited by the penetration of vehicles with such communication capabilities and by obstructions in the environment. Thus, in order to be viable (and reliable), collaborative sensing applications will leverage a mix of V2V and V2I connectivity, likely attempting to offload as much traffic as possible to the V2V networks.

The aim of this paper is to develop initial models and analysis of the benefits, communication loads and requirements for vehicular collaborative sensing and networking. We focus on two intertwined classes of questions:
1. What are simple and tractable metrics for collaborative sensing performance in obstructed environments? How does performance scale with the penetration of collaborating vehicles and the density of obstructions?
2. What are the network connectivity-capacity requirements to support collaborative sensing on V2V/V2I networks as a function of the penetration and density of vehicles?
Note that while our focus will be on vehicular networks, other distributed autonomous systems built on wireless systems share similar characteristics, including, e.g., robotic or possibly emerging aerial drone applications.

Contributions.
The key contributions of this paper are as follows.
• We introduce a stochastic geometric model to study collaborative sensing in obstructed environments, along with associated performance metrics capturing sensing coverage.
• We quantify the performance of collaborative sensing for varying coverage requirements, vehicle/object densities and penetrations of collaborative sensing vehicles.
• We explore heterogeneous architectures of sensing and communication combining vehicles and infrastructure. Our study of the sensing performance and capacity requirements exhibits the critical role that infrastructure assistance might need to play in improving sensing coverage and providing reliable communication, especially at the early stages of collaborative sensing at low penetrations.
• We show that exploiting spatio-temporal dynamics, i.e., collaboration with flows of vehicles moving in different directions, improves the performance of collaborative sensing, yet the benefit is limited at high penetrations of collaborating vehicles.
Our analytical results are based on simple/tractable models that capture the essence of such systems. We further conduct simulations of typical road traffic scenarios to validate our analysis and provide additional quantitative assessments.
Related work.
Collaborative sensing is likely to be one of the key enabling technologies for automated driving systems. Vehicles can exchange real-time sensor information with vehicles/RSUs to enhance their view of an obstructed environment [5][6][7][8][9]. An analysis of the scaling and performance of such systems has however not been done before.

Currently available communication protocols such as Dedicated Short-Range Communication (DSRC) [10] have limited data rates, e.g., IEEE 802.11p supports 3–27 Mbps (typically 6 Mbps). LTE systems are evolving to support safety-related V2X applications [11], but still provide limited capacity and face challenges associated with high densities of UEs. To serve the requirements of collaborative sensing, 3GPP defined various use cases and requirements in [4][12]. Also, mmWave technology is being considered to support the sharing of HD sensor data [13][14].

The capacity of Vehicular Ad Hoc Networks (VANETs) has been studied in a variety of works, see e.g., [15][16][17][18]. The communication requirements for collaborative sensing, i.e., each vehicle requiring local many-to-many information sharing, are different from those typically considered in VANET studies, where the source and destination of data need not be close by. The existing capacity analysis needs to be adapted to this many-to-many setting. The authors of [19] study the communication loads on a single vehicle, but obstructions and networking are not considered.

To our knowledge, our work on modeling and assessing the performance of collaborative sensing is novel. It can be viewed as a stochastic version of what are referred to as art gallery problem(s) [20]. These problems typically address questions such as the number and placement of cameras/guards in a fixed environment to meet a pre-specified coverage criterion. Hence this paper also contributes new results of this type, but for random sensor placement and obstructed environments.
Such results are more appropriate for understanding vehicular systems "in the wild".
Organization.
We begin by proposing a 2D model for sensing in obstructed environments in Section II. We then quantify the benefits that collaborative sensing would afford in terms of sensing coverage in Section III. In Section IV we analyze the capacity requirements on V2V and V2I networks. In Section V we study the performance of collaborative sensing in the presence of spatio-temporal dynamics. We conclude the paper in Section VI.

II. MODELING COLLABORATIVE SENSING IN OBSTRUCTED ENVIRONMENTS
We begin by introducing a simple stochastic geometric model to study the character of collaborative sensing.

A. Obstructed Environments and Sensing Capabilities
The environment includes all objects, i.e., vehicles, pedestrians, buildings, etc. In some settings there may be substantial a priori knowledge regarding the environment, e.g., static elements that are part of previously computed HD maps [21]. While the presence of such objects is already known, they still impact collaborative sensing as they can obstruct a sensor's field of view, e.g., a building may obstruct a vehicle's view when entering an intersection. For simplicity we shall not differentiate among static and dynamic objects, and focus on sensing at a snapshot in time. (In practice a collaborative sensing system will track objects over time, so taking the snapshot point of view can be considered a "worst case" assumption.)

The centers of objects are located on the 2D plane according to a Homogeneous Poisson Point Process (HPPP) Φ with intensity λ, i.e., Φ = { X_i | X_i ∈ ℝ², i ∈ ℕ₊ } ∼ HPPP(λ), where X_i is the location of object i and ℕ₊ is the set of positive integers. Each object, say i, has a shape modeled by a random closed convex set denoted A_i ⊂ ℝ², referenced to the origin and independent of X_i. We let E_i denote the region it occupies, which is given by E_i = {X_i} ⊕ A_i ≜ { X_i + x | x ∈ A_i }, i.e., the object's shape A_i shifted to its location X_i, where ⊕ is the Minkowski sum, see Fig. 1a. Thus E = ∪_{i=1}^∞ E_i denotes the region occupied by objects in the environment. We refer to the region not occupied by objects, E^c = { x | x ∉ E }, as the void space. Fig. 1b illustrates our model for the environment. Note that we focus on describing a 2D model although it can be extended to 2.5D or 3D.

Fig. 1. Model for environment based on randomly located and shaped objects: (a) model for object i; (b) model for environment.

It is unavoidable that initially, as automated driving technologies are introduced, only a fraction of vehicles will be equipped with sensors and/or participate in collaborative sensing.
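The environment model above is straightforward to sample. The following minimal sketch (our own illustration, not the authors' simulator; the disc-shaped objects, window size and density are illustrative assumptions) draws one snapshot of the HPPP with independently marked shapes:

```python
import math
import random

def poisson_sample(rng, mean):
    """Knuth-style inverse-transform sampling of a Poisson(mean) count."""
    thresh = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

def sample_environment(lam, width, height, radius_fn, seed=None):
    """One snapshot of the environment model: object centers form a
    homogeneous PPP of intensity lam (objects per m^2) on a
    width x height window, and each object's shape A_i is an
    independently drawn disc of radius radius_fn(rng)."""
    rng = random.Random(seed)
    n = poisson_sample(rng, lam * width * height)
    return [(rng.uniform(0, width), rng.uniform(0, height), radius_fn(rng))
            for _ in range(n)]

# Illustrative values: density 0.01 objects/m^2 on a 500 m x 24 m road
# segment, every object a vehicle-sized disc of radius 1.67 m.
objects = sample_environment(0.01, 500.0, 24.0, lambda rng: 1.67, seed=1)
```

Conditioning a point to be a sensor (probability p_s) would simply be one more independent mark per object, matching the thinning construction described next.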
Thus only the subset equipped with sensors can participate in collaborative sensing – we shall refer to such objects as sensors. Each object has an independent probability p_s of being a sensor. Thus the locations of sensors, Φ_s, correspond to an independent thinning [22] of Φ, and so Φ_s ∼ HPPP(λ_s), where λ_s = p_s λ. For such objects we assume for simplicity that each has one sensor, and denote by Y_i ∈ ℝ² the relative placement of the sensor on object i referenced to X_i, so the location of sensor i is given by X_i + Y_i ∈ E_i. Each sensor i is assumed to have a radial sensing support S⁰_i ⊂ ℝ², referenced to the location of the sensor, which is defined as follows.

Definition 1. (Radial sensing support) The radial sensing support of sensor i referenced to the origin, S⁰_i, is the set of locations that can be viewed if the sensor is located at the origin and the LOS to the location is not obstructed. The set S⁰_i can be represented in polar coordinates as follows:

S⁰_i = { (r, θ) | r ∈ [0, r^i_max(θ)], θ ∈ [0, 2π] },   (1)

where r^i_max(θ) is the maximum sensing range in direction θ.

Fig. 2. (a) Radial sensing support S⁰_i referenced to the origin and (b) the sensing support S_i of sensor i.

Fig. 2 illustrates examples of sector and omni-directional radial sensing supports. We denote by S_i = {X_i + Y_i} ⊕ S⁰_i the sensing support of sensor i. For an object, say j, which is not a sensor, we let Y_j = 0 and S_j = ∅. The environment and the sensing field are thus modeled by an Independently Marked PPP (IMPPP), Φ̃, which associates independent marks M_i = (A_i, Y_i, S⁰_i) to each object i, i.e., Φ̃ = { (X_i, M_i), i ∈ ℕ₊ }. Note that (A_i, Y_i, S⁰_i) is independent of X_i, but A_i, Y_i, S⁰_i need not be mutually independent. Indeed if i is a sensor, Y_i ∈ A_i since the sensor should be mounted on the object.
Also the distribution of the shape of objects with sensors, e.g., vehicles, can be different from that of other objects, e.g., pedestrians. The aim of such a general IMPPP model is to capture all the objects in the environment, including vehicles, pedestrians, motorcycles, buildings, etc.; thus we use a generalized HPPP model for the objects. Note that in practice vehicles follow the lanes on roads or in parking lots, yet the analysis for such settings is similar to the simplified setting we consider. Furthermore, comparisons via simulation of a detailed freeway model validate that the proposed HPPP model is a good approximation to study the performance of sensing in typical freeway and other scenarios. Our model may also apply to other (collaborative) sensing systems relying on wireless communication, but the model and analysis in this paper focus on the unique characteristics of vehicular sensing, i.e., vehicles play the role of sensors, obstructions, and objects of interest at the same time.

B. Model for Vehicle's Region of Interest
We shall assume each sensing vehicle is interested in information within a certain range around it – usually measured in time, e.g., t_interest sec. The actual spatial range depends on the vehicle's speed s and is given by s · t_interest. We model a sensing vehicle's region of interest as follows.

Definition 2. (Region of interest) The region of interest of sensing vehicle i, D_i, is modeled for simplicity as a disc, b(X_i, r), centered at X_i with radius r = s · t_interest.

Note that for a vehicle located at the center of a multi-lane road, its region of interest can also be approximated by a rectangular set [−s · t_interest, s · t_interest] × [−w_road, w_road], where w_road denotes the width of the road.

C. Collaborative Sensing in an Obstructed Environment
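The core primitive behind the coverage definitions of this subsection is a line-of-sight test: a location is seen by a sensor only if the segment between them misses every other object. A minimal sketch for disc-shaped obstructions (the helper names are ours; the strict inequality glosses over the measure-zero grazing case):

```python
import math

def segment_blocked(sensor, target, obstacle_center, obstacle_radius):
    """True if the closed segment from sensor to target passes through
    the interior of the disc obstacle, i.e., the obstacle breaks LOS."""
    (sx, sy), (tx, ty), (cx, cy) = sensor, target, obstacle_center
    dx, dy = tx - sx, ty - sy
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(sx - cx, sy - cy) < obstacle_radius
    # Projection of the obstacle center onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((cx - sx) * dx + (cy - sy) * dy) / seg_len2))
    px, py = sx + t * dx, sy + t * dy
    return math.hypot(px - cx, py - cy) < obstacle_radius

def covered(sensor, target, obstacles):
    """LOS coverage test: target is seen by the sensor iff no other
    object's disc (center, radius) blocks the segment to it."""
    return not any(segment_blocked(sensor, target, c, r) for c, r in obstacles)
```

For example, a target 10 m straight ahead is covered when the only obstruction sits 5 m off to the side, but not when it sits on the segment itself.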
Next we define a sensor's coverage set given the environment and sensor model Φ̃ as follows – see Fig. 3.

Definition 3. (Sensor coverage set) For sensor i with radial sensing support S⁰_i in the environment and sensor model Φ̃, we let E_{−i} = ∪_{j: j ≠ i} E_j denote the environment excluding E_i. The coverage set of sensor i, C_i(Φ̃), is then given by

C_i(Φ̃) = { x ∈ S_i | x ∈ E_i or l_{X_i+Y_i, x} ∩ E_{−i} ⊆ {x} },   (2)

where l_{y,z} denotes the closed line segment between y, z ∈ ℝ². The coverage area of sensor i is the area of its coverage set, which we denote |C_i(Φ̃)|.

In the above definition we assume that a sensor is aware of E_i, the space it occupies, i.e., no "self blocking". Also l_{X_i+Y_i, x} ∩ E_{−i} ⊆ {x} verifies that the LOS channel between the sensor at X_i + Y_i and location x is not blocked by other objects. A location x ∈ C_i(Φ̃) may be in the void space or on the surface of an object. The coverage set of sensor i represents the surrounding environment that it is able to view (on its own) under environmental obstructions.

Fig. 3. Coverage set of sensor i in Φ̃.

The expected coverage area of a typical sensor is given in the following theorem, where C denotes the coverage set of a typical sensor shifted to the origin and A, Y and S⁰ are the associated shape, location of sensor, and radial support set, referenced to the origin. The set ({Y} ⊕ S⁰) ∩ A denotes the region, if any, in the sensing support overlapping with the object, while ({Y} ⊕ S⁰) \ A = { x | x ∈ {Y} ⊕ S⁰, x ∉ A } is the region in the sensing support excluding the sensing object. Finally A₀ denotes a random set with the same distribution as the shape of objects, independent of A. Their distributions may be different, since the latter is conditioned on an environmental object being a sensor, i.e., being a sensing vehicle.

Theorem 1.
Under our environment and sensor model Φ̃, the expected coverage area of a typical sensor is given by

E[|C|] = E[|({Y} ⊕ S⁰) ∩ A|] + E[ ∫_{({Y}⊕S⁰)\A} e^{−λ·E[|l_{Y,x} ⊕ Ǎ₀|]} dx ],   (3)

where Ǎ₀ = { x | −x ∈ A₀ }.

For example, if objects are modeled as discs of radius r, i.e., A = b(0, r) with probability 1, and the sensor is mounted at the center, i.e., Y = 0, we have that |l_{0,x} ⊕ Ǎ₀| = πr² + 2r·|x| (see [22]), so E[|C|] is straightforward to compute. The theorem shows how the coverage area of a single sensor decreases in the object density λ, since the probability of sensing a given location (the term inside the integral) decreases exponentially in λ. The proof of Theorem 1 leverages straightforward stochastic geometric results and is relegated to the appendix.

D. Sensor Coverage Area: Numerical and Simulation Results
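As a starting point for the numerical results, the disc example of Theorem 1 can be evaluated directly: with all objects being radius-r0 discs and the sensor at the center, the coverage integral reduces to a 1D radial integral. A sketch (our own numerical evaluation; the midpoint rule and the parameter values are illustrative choices, not the authors' code):

```python
import math

def expected_coverage_area(lam, r0, rmax, steps=20000):
    """Theorem 1 for the disc example: every object is a disc of radius
    r0 with the sensor at its center, omni-directional range rmax.
    Then |l_{0,x} (+) A_check| = pi*r0^2 + 2*r0*|x|, and in polar
    coordinates the theorem gives
        E|C| = pi*r0^2
             + 2*pi * int_{r0}^{rmax} u * exp(-lam*(pi*r0^2 + 2*r0*u)) du,
    evaluated here with the midpoint rule."""
    own = math.pi * r0 * r0
    du = (rmax - r0) / steps
    acc = 0.0
    for i in range(steps):
        u = r0 + (i + 0.5) * du
        acc += u * math.exp(-lam * (math.pi * r0 * r0 + 2 * r0 * u)) * du
    return own + 2 * math.pi * acc

# Coverage shrinks as the object density grows (illustrative densities).
sparse = expected_coverage_area(lam=0.001, r0=1.67, rmax=100.0)
dense = expected_coverage_area(lam=0.02, r0=1.67, rmax=100.0)
```

As λ → 0 the expression recovers the unobstructed support area π·rmax², and it decays toward the object's own footprint πr0² as λ grows, matching the qualitative trend of Fig. 5.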
Below we verify the robustness of our idealized analytical model by comparing the analytical results to a simulation of vehicles on a freeway. For the analytical model, the shape of all objects (vehicles) is a disc of radius 1.67 m, roughly corresponding to the area of a vehicle, and each has an omni-directional sensing support with radius 100 m. For the typical vehicle (whose distribution is formally referred to as the Palm distribution [22]) we limit its sensing support and coverage set to a rectangular region of interest centered on the vehicle, say i, such that

D_i = b(X_i, 100 m) ∩ ([−∞, ∞] × [X_i − 12 m, X_i + 12 m]).   (4)

This is geared at capturing the fact that vehicles are mainly interested in sensing the nearby road and sidewalks, and 12 m is roughly the width of three lanes.

Fig. 4. Sensing of a typical vehicle in (a) the analytical model and (b) the freeway simulation model. The green shapes are reference objects, the red shapes are obstructions, light green represents the sensed region, light red indicates the obstructed region.
Fig. 5. Coverage area of a typical vehicle normalized by the area of its sensing support.
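The freeway simulations reported next place vehicles on each lane according to a linear Matérn hard-core process, i.e., random positions subject to a minimum center-to-center gap. A minimal 1D sketch of the type-II construction (our own illustration; all parameter values are assumptions, not the paper's):

```python
import math
import random

def linear_matern_lane(length, intensity, min_gap, seed=None):
    """1D Matern type-II hard-core process: Poisson(intensity*length)
    candidate centers uniform on [0, length], each with an independent
    uniform mark; a candidate survives only if every other candidate
    within min_gap carries a larger mark.  Survivors are randomly
    placed yet never closer than min_gap, mimicking safe headways."""
    rng = random.Random(seed)
    # Poisson number of candidates (Knuth's method).
    thresh, n, p = math.exp(-intensity * length), 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            break
        n += 1
    cand = [(rng.uniform(0, length), rng.random()) for _ in range(n)]
    kept = [x for x, m in cand
            if all(abs(x - y) >= min_gap or m < mm
                   for y, mm in cand if (y, mm) != (x, m))]
    return sorted(kept)

# Illustrative lane: 1 km long, candidate density 0.05 /m, 10 m gap.
lane = linear_matern_lane(length=1000.0, intensity=0.05, min_gap=10.0, seed=7)
```

Type-II thinning guarantees the hard core: if two candidates are within the gap, the one with the larger mark is always deleted, so no two survivors can violate the minimum spacing.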
Our simulations are based on the freeway scenario specified in [12] with multiple lanes in each direction. Vehicles are placed on each lane according to a linear Matérn process [23], i.e., randomly located but ensuring a minimum gap among the centers of vehicles on the same lane. Vehicles are modeled as rectangles, and the distance from the center locations to the lane center is uniformly distributed. The coverage area does not include the region off the road.

Fig. 4 illustrates an example of sensing in our simplified analytical model and the freeway simulation. The sensed and obstructed regions in the two models share similar characteristics. Fig. 5 exhibits analytical and simulation results for the vehicle's coverage area, normalized by the area of its sensing support, versus vehicle density λ. Confidence intervals are not shown as they are negligible. As expected, with increased vehicle density the sensor coverage area decreases due to increased obstructions. To reduce boundary effects, the simulation results correspond to the average sensor coverage area for vehicles in the two most central lanes. As can be seen, the analytical and simulation results exhibit similar trends. At high vehicle densities the coverage area of a single vehicle is heavily limited, i.e., covering only a small fraction of the sensing support. In an obstructed environment collaborative sensing will be critical to achieve better coverage over each vehicle's region of interest. We consider this next.

III. BENEFITS OF COLLABORATIVE SENSING
The benefits of collaborative sensing are twofold: (1) it increases sensing redundancy/diversity, leading to improved coverage, and (2) it improves coverage by effectively extending the sensing range. We consider two metrics for the performance of collaborative sensing: redundancy and coverage.

Sensing redundancy.
We define sensing redundancy as the number of collaborative sensing vehicles that can view a location/object. The task of detecting/recognizing and tracking objects is facilitated if multiple sensors' points of view are available, providing greater coverage and robustness to sensor/communication link failures.
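Anticipating Theorem 2 below (Eq. 6), the expected void-space redundancy for the disc example can be evaluated numerically, reusing the coverage integral from the Theorem 1 example. A sketch (our own evaluation under the same illustrative disc assumptions; not the authors' code):

```python
import math

def expected_redundancy_void(p_s, lam, r0, rmax, steps=20000):
    """Eq. (6) for the disc example:
        R_void = p_s * lam * E|C \\ A| * exp(-lam * E|A|),
    where E|C \\ A| is the integral term of Theorem 1 (all objects are
    radius-r0 discs with centered sensors) and E|A| = pi*r0^2."""
    du = (rmax - r0) / steps
    cover_excl = 0.0
    for i in range(steps):
        u = r0 + (i + 0.5) * du
        cover_excl += (2 * math.pi * u
                       * math.exp(-lam * (math.pi * r0**2 + 2 * r0 * u)) * du)
    return p_s * lam * cover_excl * math.exp(-lam * math.pi * r0**2)

# Redundancy first rises with density, then falls as obstruction dominates.
low = expected_redundancy_void(1.0, 0.0005, 1.67, 100.0)
mid = expected_redundancy_void(1.0, 0.005, 1.67, 100.0)
high = expected_redundancy_void(1.0, 0.1, 1.67, 100.0)
```

This reproduces the two properties discussed around Fig. 6: redundancy is exactly proportional to p_s, and it is non-monotone in the object density λ, peaking at moderate densities.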
Definition 4. (Sensing redundancy for a location) Given an environment and sensing field Φ̃, and a subset of collaborating sensors K ⊆ Φ_s, the sensing redundancy for a location x is the number of sensors in K that view x, denoted by

R(Φ̃, K, x) = Σ_{i: X_i ∈ K} 1( x ∈ C_i(Φ̃) ).   (5)

In the most optimistic case K = Φ_s, i.e., all sensors collaborate. The expected redundancy of a location in the void space is given in the following theorem.

Theorem 2.
Given an environment and sensing field Φ̃, and assuming all sensors collaborate, K = Φ_s, the expected redundancy of a typical location x in the void space is

E[R(Φ̃, Φ_s, x) | x ∉ E] = p_s · λ · E[|C \ A|] · e^{−λ·E[|A|]},   (6)

where E[|C \ A|] is given by the second term in Eq. 3.

The proof of this theorem, included in the appendix, follows from the definition of redundancy and a sensor's coverage set. Fig. 6 exhibits the expected sensing redundancy of a typical location in the void space. As can be gleaned from our analytical results, the sensing redundancy of a location is proportional to p_s, so we only exhibit results for p_s = 1. At small densities sensors are not likely to be blocked, thus redundancy first increases in the density of objects λ. However, at higher densities the objects obstruct each other, reducing the coverage area of each sensor and the resulting sensing redundancy. The simulation results show the expected redundancy of a random location in the central two lanes (see Section II-D), and exhibit similar trends as the analysis. Overall one can conclude that collaborative sensing will provide the highest redundancy at moderate densities, i.e., this is where in principle collaborative sensing is most reliable and robust to sensor/communication failures.

Fig. 6. The expected sensing redundancy of a random void location versus object density. All vehicles participate in collaborating, i.e., K = Φ_s = Φ.

Collaborative sensing coverage.
A location in a vehicle's region of interest is covered by collaborative sensing if the location can be reliably sensed, i.e., sensed by a sufficient number of collaborating sensors. We define the collaborative sensing coverage for a vehicle as follows.
Definition 5. (Collaborative sensing coverage) Given an environment and sensing field Φ̃, a minimum redundancy requirement γ ∈ ℕ₊ for reliable sensing of a location, a subset of collaborating sensors K ⊆ Φ_s, and sensor i's region of interest D_i, the γ-coverage set of sensor i is the region within D_i which is covered by at least γ sensors in K, denoted by

C_c(Φ̃, K, D_i, γ) ≜ { x | x ∈ D_i, R(Φ̃, K, x) ≥ γ }.   (7)

The γ-coverage of sensor i is the area of the γ-coverage set, |C_c(Φ̃, K, D_i, γ)|. The normalized γ-coverage is the γ-coverage normalized by the area of the region of interest, |C_c(Φ̃, K, D_i, γ)| / |D_i|.

The normalized γ-coverage can be interpreted as the fraction of i's region of interest that can be reliably sensed. Denote by D the possibly random region of interest associated with a typical sensing vehicle (recall the region may depend on the vehicle's speed), and by A_s ⊂ ℝ² a random set having the distribution of the portion of the region occupied by the sensing object that is covered by the sensor's support, i.e., ({Y} ⊕ S⁰) ∩ A.

Approximation of the normalized γ-coverage. Denote by Q(k, m) = P(N(m) ≥ k), where N(m) is a Poisson random variable with mean m. Denote by R_void = E[R(Φ̃, Φ_s, x) | x ∉ E] the expected redundancy of a location in the void space as given in Eq. 6. The average γ-coverage can be approximated by

E[|C_c(Φ̃, Φ_s, D, γ)|] ≈ E[|D ∩ C ∩ A|] · Q(γ−1, λ_s·E[|A_s|]) + E[|D ∩ C \ A|] · Q(γ−1, R_void) + E[|D \ A|] · Q(γ, λ_s·E[|A_s|]) + ( E[|D \ A|] · e^{−λ·E[|A|]} − E[|D ∩ C \ A|] ) · Q(γ, R_void).   (8)

Fig. 7. Decomposition of D for the collaborative sensing coverage approximation.

This approximation is based on decomposing D into various sets, see Fig. 7.
In particular, D ∩ C ∩ A is the set occupied and sensed by the object itself, D ∩ C \ A is the set outside the object but sensed by the object, D ∩ E \ C is the set occupied by objects but not in C, and D \ (E ∪ C) is the void space excluding C.

By the Slivnyak-Mecke theorem [22], the other objects as seen by the reference sensor follow an IMPPP with the same distribution as Φ̃, so the locations of the other sensors follow HPPP(λ_s). The regions covered by objects and by sensors each form a Boolean process [22]. For a random location x, the number of sensors occupying and sensing x has a Poisson distribution with mean λ_s·E[|A_s|], and the number of objects occupying x has a Poisson distribution with mean λ·E[|A|]. For a location in the void space, we approximate the distribution of the redundancy by a Poisson distribution with mean R_void. R_void is not conditioned on there being a typical sensor, thus R_void can differ from the expected redundancy at a location x ∈ D \ E. In C the reference object provides redundancy and the other sensors should provide (γ−1) redundancy, while in D \ C the other sensors must provide γ redundancy.

Based on the above approximation, the components of Eq. 8 are interpreted as follows:
• E[|D ∩ C ∩ A|] · Q(γ−1, λ_s·E[|A_s|]) is the area in A that is occupied (and sensed) by γ−1 other sensors.
• E[|D ∩ C \ A|] · Q(γ−1, R_void) is the area of void space in C that is covered by γ−1 other sensors.
• E[|D \ A|] · Q(γ, λ_s·E[|A_s|]) is the area in D \ A that is occupied (and sensed) by γ sensors.
• E[|D \ A|] · e^{−λ·E[|A|]} − E[|D ∩ C \ A|] is the area of void space in D \ A excluding C, and Q(γ, R_void) is the probability that such a location is covered by γ other sensors.

Fig. 8 illustrates collaborative sensing for our analytical model and the freeway simulation.
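Every term of Eq. 8 is built from the Poisson tail Q(k, m) = P(N(m) ≥ k). A minimal sketch of this building block (the value plugged in for R_void is an illustrative placeholder, not a computed quantity from the paper):

```python
import math

def Q(k, m):
    """Poisson tail Q(k, m) = P(N(m) >= k) for N(m) ~ Poisson(m),
    used throughout Eq. (8); computed as 1 minus the CDF up to k-1."""
    if k <= 0:
        return 1.0
    cdf = sum(math.exp(-m) * m**j / math.factorial(j) for j in range(k))
    return 1.0 - cdf

# Probability that a void-space location meets a gamma-redundancy
# requirement when its redundancy is approximated as Poisson(R_void):
r_void = 2.0                                   # illustrative placeholder
p_cov = [Q(g, r_void) for g in (1, 2, 3)]      # decreasing in gamma
```

Consistent with Fig. 9, the reliably-sensed fraction shrinks as the required diversity γ grows, since Q(γ, m) is decreasing in γ for fixed mean m.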
Fig. 8. Collaborative sensing in (a) the analytical model and (b) the freeway simulation. Dark green shapes represent sensors, red shapes are non-sensing objects. The light green region can be sensed via collaborative sensing, while light red regions are obstructed and not sensed.

In the analytical model, objects are modeled as randomly distributed discs and may overlap. The objects are randomly placed, thus the region covered by collaborative sensing is also the realization of a random shape. In the freeway simulation, vehicles are randomly distributed along lanes such that there is no overlap. The environment is more 'structured', and thus so is the collaborative sensing coverage set, e.g., the space between lanes is less likely to be obstructed.

We validate the accuracy of our approximation in Fig. 9a, in which we consider a 2D infinite plane. As can be seen, the approximation in Eq. 8 is a good match for the analytical model. Fig. 9b exhibits the freeway simulation results, which show the same qualitative trend as the analytical results. As expected, the minimum penetration to achieve a certain level of γ-coverage increases in the required diversity γ. Fig. 10 exhibits the expected normalized 1-coverage for varying penetrations p_s and vehicle densities λ. The freeway simulation results show the same trend as the analytical results. Note that Eq. 8 is an approximation of the analytical model, which is different from our simulation of the freeway scenario. As expected, coverage increases monotonically in p_s. More importantly, collaborative sensing can greatly improve coverage even with a small penetration of collaborating vehicles, as compared to the coverage without collaboration.
Such results indicate that it would be beneficial to share sensor data even with only a subset of neighboring vehicles.

Fig. 9. Normalized γ-coverage for different redundancy requirements γ. In (a) curves represent results from our approximation in Eq. 8, markers are simulations of the analytical model; (b) shows freeway simulation results.

Despite the performance gains associated with vehicular collaborative sensing, achieving a high γ-coverage at low penetrations is difficult, especially for γ > 1. Joint collaborative sensing with Road Side Units (RSUs) having sensing capabilities can help improve coverage, e.g., an RSU infrastructure could provide 1-coverage if located above a freeway (no obstruction) and if their sensing support covers the freeway. If γ_rsu denotes the redundancy provided by the RSU infrastructure, the gain in γ-coverage associated with joint vehicle/RSU collaboration is given by

E[|C_c(Φ̃, Φ_s, D, γ − γ_rsu)|] − E[|C_c(Φ̃, Φ_s, D, γ)|].   (9)

In Fig. 9b, collaboration with RSUs providing γ_rsu = 1 substantially improves coverage at low penetration. In summary, the possibility of combining vehicular collaborative sensing with infrastructure (RSU) based sensing provides a natural avenue to improve coverage, especially at low penetrations, but possibly also at higher penetrations if γ = 2 or higher diversity is desired.

Another setting our analytical framework can shed light on is how the collaborative sensing coverage scales in the obstruction density when the sensor density is fixed. One example of such a setting might be freeway on-ramps where vehicles entering the freeway are primarily non-sensing capable. Fig. 11 exhibits how the 1-coverage scales in the obstruction density based on the approximation in Eq. 8.
As can be seen, the 1-coverage decreases approximately linearly in the obstruction density. Collaborative sensing with RSUs may be required to ensure coverage in such scenarios.

Fig. 10. Normalized 1-coverage: (a) based on the analytical approximation in Eq. 8, and (b) obtained by simulation of the freeway scenario.

Fig. 11. Normalized 1-coverage for different obstruction densities, λ − λ_s. The sensor density λ_s is fixed.

IV. NETWORK CAPACITY SCALING FOR COLLABORATIVE SENSING APPLICATIONS
In this section we study the network capacity requirements for collaborative sensing. We envisage that both V2V and V2I connectivity might be used to enable collaborative sensing in automotive settings. This might be critical to meet reliability and coverage requirements as we transition from legacy systems. In particular, when the penetration of collaborative sensing vehicles is limited, the V2V links/paths required to share collaborative sensing data may be blocked or unavailable, particularly when line-of-sight links such as mmWave or optical links are used. When this is the case, V2I connectivity, e.g., LTE-based links, could serve as the fallback to share critical sensing/maneuvering information. Below we study the V2I fallback capacity scaling for collaborative sensing settings.

Fig. 12. Collaborative sensing of vehicles in a single lane with a V2V + V2I network. A vehicle uses V2I to relay data when LOS V2V links are blocked.

We consider vehicles on a single lane assisted by infrastructure deployed along the road. Vehicles move at a constant velocity s. Each sensing vehicle has a region of interest extending t_interest seconds in both the forward and backward directions. We shall consider the worst-case scenario, i.e., the density of vehicles is high and the gap between (the centers of) vehicles in the same lane is the minimum gap for safe driving, t_gap seconds, so the inter-vehicle gap is s · t_gap m. The density of vehicles is thus λ_v = 1/(s · t_gap). We assume vehicles need to receive data from all vehicles in their range of interest, and by symmetry each vehicle also needs to send data to all vehicles in its range of interest. A sensing vehicle thus needs to send data to η = ⌊t_interest/t_gap⌋ other vehicles in front of and behind it, see Fig. 12.

A vehicle has LOS V2V communication channels to the neighboring vehicles in front and back. A non-collaborating vehicle thus blocks the V2V relay path along the chain of vehicles.
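The model quantities above follow directly from s, t_gap and t_interest; a minimal Python sketch with illustrative parameter values (the specific numbers are assumptions for illustration, not values from the text):

```python
import math

def single_lane_params(s, t_gap, t_interest):
    """Derive single-lane model quantities from speed and time gaps.

    s          : vehicle speed (m/s)
    t_gap      : minimum safe time gap between vehicle centers (s)
    t_interest : extent of the region of interest, each direction (s)
    """
    gap_m = s * t_gap                      # inter-vehicle gap (m)
    lambda_v = 1.0 / gap_m                 # vehicle density (vehicles/m)
    eta = math.floor(t_interest / t_gap)   # vehicles of interest per direction
    return gap_m, lambda_v, eta

# Illustrative values: 20 m/s speed, 2 s gap, 10 s region of interest,
# giving a 40 m gap, 0.025 vehicles/m and eta = 5.
gap_m, lambda_v, eta = single_lane_params(s=20.0, t_gap=2.0, t_interest=10.0)
```

With these (assumed) values, each sensing vehicle exchanges data with η = 5 vehicles in each direction, the case used in the numerical results below.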
If a LOS V2V relay path is not available, we assume the reference vehicle relays data through the infrastructure, and the receiving vehicle can then further relay the data to other vehicles via available V2V links (V2I + V2V relay). We assume the message a vehicle sends to other vehicles is the same for all recipients, thus a vehicle needs to upload its data to the infrastructure at most once. The infrastructure can then relay the message to vehicles requiring it via either unicast or broadcast. If unicast is used, the infrastructure needs to send the message to every vehicle in its service region which requires the message and cannot obtain it via V2V or V2I + V2V relay.

Let N_UL and N_UDL be random variables denoting the number of uplink and unicast downlink V2I transmissions required to share the data of a typical sensing vehicle. The expected required V2I uplink capacity c_UL, and the V2I downlink capacities for broadcast, c_BDL, and unicast, c_UDL, are given in the following
theorem.

Fig. 13. Collaborative sensing of vehicles in a single lane using V2V + V2I, with V2V relay assistance from vehicles in the two neighboring lanes.
Theorem 3.
Consider a single lane model with vehicle density λ_v, where each sensing vehicle shares data with η = ⌊t_interest/t_gap⌋ vehicles in front and back. The V2I capacity requirements on an infrastructure node serving a linear road segment of length d m are given by

c_UL = c_BDL = p_s · λ_v · d · E[N_UL] · ν,  (10)
c_UDL = p_s · λ_v · d · E[N_UDL] · ν,  (11)

where

E[N_UL] = 1 − ( Σ_{k=0}^{η} p_s^k · (1 − p_s)^{η−k} )²,  (12)

E[N_UDL] = 2(η − 1) · p_s (1 − p_s) if η ≥ 2, and 0 otherwise.  (13)

The development of this result can be found in the appendix.

The above results convey the average capacity requirements on V2I infrastructure. Unfortunately, in a single lane setting a single non-collaborating vehicle can block the V2V LOS links/paths amongst a large number of vehicles and result in a burst of V2I traffic, especially at high penetrations, e.g., when the vehicles in front of and behind the non-collaborating vehicle are all collaborating. The V2I capacity required to handle such bursts can thus be much higher.

The single lane relaying scenario studied above is a worst case, i.e., data can only be relayed by vehicles in the same lane. One can also consider scenarios where collaborative vehicles in either of the two neighboring lanes additionally participate in V2V relaying. LOS links among vehicles in neighboring lanes are less likely to be blocked, but LOS links to distant vehicles in neighboring lanes will see larger path loss and may experience more interference, e.g., from transmissions of vehicles in the same lane. Thus for simplicity we suppose vehicles only communicate with the closest vehicle in a neighboring lane, and consider the simple grid connectivity model shown in Fig. 13. Each node on the grid corresponds to a vehicle, and each row represents a lane. Vehicles have LOS channels to neighboring vehicles on the grid.
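Returning to the single-lane setting, the closed forms in Eqs. 12 and 13 (as reconstructed above) can be sanity-checked by exhaustively enumerating the 2^η collaboration patterns of the vehicles in one direction; a Python sketch of that check:

```python
from itertools import product

def exact_NUL_NUDL(eta, ps):
    """Exact E[N_UL] and E[N_UDL] by enumerating the 2^eta collaboration
    patterns of the eta vehicles in one direction (1 = sensing vehicle)."""
    p_no_v2i = 0.0      # P(one direction needs no V2I)
    e_udl_dir = 0.0     # expected unicast downlinks, one direction
    for pat in product([0, 1], repeat=eta):
        prob = 1.0
        for b in pat:
            prob *= ps if b else (1.0 - ps)
        # V2V relaying suffices iff every sensing vehicle is preceded only
        # by sensing vehicles, i.e. the pattern is a prefix of 1s.
        k = sum(pat)
        if pat == tuple([1] * k + [0] * (eta - k)):
            p_no_v2i += prob
        # With V2I + V2V relay, vehicle j (j >= 2) needs its own unicast
        # downlink iff it is sensing and vehicle j-1 is not.
        n_dl = sum(1 for j in range(1, eta) if pat[j] == 1 and pat[j - 1] == 0)
        e_udl_dir += prob * n_dl
    # Front and back directions are independent and symmetric.
    return 1.0 - p_no_v2i ** 2, 2.0 * e_udl_dir

eta, ps = 5, 0.4
e_nul, e_nudl = exact_NUL_NUDL(eta, ps)
# Closed forms of Eqs. 12 and 13:
cf_nul = 1.0 - sum(ps ** k * (1 - ps) ** (eta - k) for k in range(eta + 1)) ** 2
cf_nudl = 2.0 * (eta - 1) * ps * (1 - ps)
```

The enumeration and the closed forms agree for any η and p_s; the check is a useful guard since the relaying rules (prefix condition, downlink-after-a-gap condition) are easy to mis-state.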
For comparison purposes we suppose, as before, that a reference vehicle needs to send data to η vehicles in front and back in the same lane. Vehicles can receive data via V2V links if there is a LOS V2V relay path on the grid. To limit the number of hops and associated delays, we assume that a relay path cannot include links in both the forward and backward directions.

Based on this model, whether vehicles in the (k + 1)th column from the reference vehicle can receive data via V2V links depends on whether the vehicles in the (k + 1)th column are collaborating and can get data from vehicles in the kth column. In this setting one can again compute the expected V2I capacity requirements to deliver data to vehicles in each column, and thus the total capacity requirements as a function of η and p_s; a detailed analysis is included in the appendix.

A. Numerical Results
Fig. 14 exhibits how the V2I capacities c_UL, c_BDL and c_UDL, normalized by λ_v · d · ν, and the average V2V throughput per sensing vehicle, normalized by the V2V throughput at p_s = 1, vary with p_s in the single lane setting and in the single lane setting assisted by vehicles in neighboring lanes. The results correspond to the case where η = 5. An increase in p_s increases the number of vehicles participating in collaborative sensing but also improves V2V connectivity. When p_s is small, both the number of collaborative sensing vehicles and the required capacity per sensing vehicle increase, thus V2I traffic increases. However, at higher penetrations V2V connectivity improves and the V2I capacity requirement per sensing vehicle decreases, resulting in lower and eventually negligible V2I traffic. Comparing the results with and without assistance from vehicles in neighboring lanes, we observe, as expected, that V2I traffic is smaller when vehicles in neighboring lanes can help relay data. The V2V throughput per sensing vehicle increases with p_s. Moreover, if vehicles in neighboring lanes assist with V2V relaying, the V2V throughput is higher than in the single lane scenario, and c_V2V can exceed the V2V throughput at full penetration.

In summary, the V2I traffic resulting from collaborative sensing data is highest at intermediate penetrations, but eventually declines once most vehicles participate in both collaborative sensing and V2V networking. This suggests an evolution path where V2I resources are initially critical to safety-related services like collaborative sensing, but eventually, at high penetrations of sensing vehicles, traffic can be effectively offloaded to the V2V network, e.g., in the single lane assisted by neighboring lanes the per-vehicle c_UL and c_BDL become negligible at high p_s
, and the infrastructure may transition to supporting non-safety-related services, e.g., mobile high data rate entertainment and dynamic digital map updates. These results are likely robust to improved models, yet more detailed analysis based on more accurate V2V mmWave channel and networking models would be required to provide a more accurate quantitative assessment.

V. IMPACT OF DYNAMICS ON COLLABORATIVE SENSING
In the previous sections we studied how collaborative sensing improves coverage for a snapshot of the environment by providing spatial diversity in sensing, i.e., sensor data for locations and objects from different points of view. In addition, collaborative sensing can improve sensing performance by utilizing temporal diversity. Objects in the environment are moving, thus the environment is dynamic, e.g.,
vehicles' regions of interest, blockage fields, and sensor coverage sets vary with time. Sensor data measured at different times provides possibly different information regarding the environment, thus sensors can exploit temporal diversity for sensing and tracking of objects in the environment.

Fig. 14. How V2I capacity requirements, normalized by λ_v · d · ν, scale with p_s in (a) the single lane and (b) the single lane assisted by vehicles in neighboring lanes. c_UL is the uplink capacity; c_BDL and c_UDL are the downlink capacities using broadcast and unicast; c_V2V is the V2V throughput per sensing vehicle normalized by the V2V throughput at full penetration, p_s = 1.

A. Temporal Dynamic Environment and Sensing Model
We shall consider extending the environment and sensing model proposed in Section II to capture temporal dynamics. We let X_i be the location of object i at time 0, and denote by Φ^d(t) = {X_i^d(t), i ∈ N⁺} the locations of objects at time t, where X_i^d(t) is the location of object i at time t and X_i^d(0) = X_i. Suppose the movements of objects are IID and independent of the locations of objects (during the time interval of interest). Since the objects' locations follow Φ ∼ HPPP(λ), it follows by the Displacement Theorem [22] that the locations of objects at any time t remain an HPPP process, i.e., for all t > 0, Φ^d(t) ∼ HPPP(λ). We further suppose the marks, M_i = (A_i, Y_i, S_i), do not change with time, e.g., objects do not rotate. Denote by

E_i^d(t) = X_i^d(t) ⊕ A_i  (14)

the region occupied by object i at time t, and by

S_i^d(t) = X_i^d(t) ⊕ S_i  (15)

the sensing support of sensor i at time t. The environment and sensing field at time t is then given by Φ̃^d(t) = { (X_i^d(t), (A_i, Y_i, S_i)), i ∈ N⁺ }, and the model for the temporal dynamics of the environment and sensing capabilities is denoted by Φ̃^d = (Φ̃^d(t), t ∈ R⁺). We let Φ_s denote the locations of collaborating sensors at time 0. The coverage set of sensor i at time t, denoted C_i^d(Φ̃^d, t), is given by

C_i^d(Φ̃^d, t) = { x ∈ S_i^d(t) | x ∈ E_i^d(t) or l_{X_i^d(t)+Y_i, x} ∩ E^{−i,d}(t) ⊆ {x} },  (16)

where E^{−i,d}(t) = ∪_{j≠i} E_j^d(t) is the blockage set associated with objects other than i at time t.

We let D_i^d(t) ⊆ R² denote sensor i's region of interest at time t. We shall define the objects that a sensor needs to sense at time t as follows.

Definition 6.
(Objects of interest at time t) The objects of interest of sensor i at time t are the objects which overlap with sensor i's region of interest at time t, denoted by O_i^d(t) and given by

O_i^d(t) = { j ∈ N⁺ | E_j^d(t) ∩ D_i^d(t) ≠ ∅ }.  (17)

B. Sensing Redundancy and Coverage Resulting from Temporal Dynamics
We suppose an object i is sensed by sensor j at time t if sensor j senses any part of i, i.e., C_j^d(Φ̃^d, t) ∩ E_i^d(t) ≠ ∅. Sensors can track the states of objects in the environment, e.g., locations, velocities, accelerations, etc., and thus have a good estimate of the objects even when the objects are blocked for some time. For simplicity, we assume an object is tracked by a sensor at time t if the object has been sensed in the time interval [t − τ, t], where τ is the maximum time window for reliable tracking without new sensor data. The spatio-temporal sensing redundancy of an object can then be defined as follows.

Definition 7. (Spatio-temporal object sensing redundancy) Given an environment and sensing model Φ̃^d, a fixed subset of collaborating sensors, K ⊆ Φ_s, and assuming an object can be sensed if it has been sensed within a time period τ, the sensing redundancy of object i at time t is given by

R^{o,d}(Φ̃^d, K, i, t, τ) = Σ_{j : X_j ∈ K} 1( ∃ z ∈ [t − τ, t] s.t. E_i^d(z) ∩ C_j^d(Φ̃^d, z) ≠ ∅ ).  (18)

Given the above definition of spatio-temporal sensing redundancy, we can define the (γ, τ)-object coverage as follows.

Definition 8. ((γ, τ)-object coverage) Given an environment and sensing field Φ̃^d, a minimum redundancy requirement γ ∈ N⁺ for reliable sensing of an object, a subset of collaborating sensors, K ⊆ Φ_s, and sensor i's objects of interest O_i^d, the γ-coverage object set of sensor i is the set of objects of interest at time t which are covered by at least γ sensors in K, denoted by

C_c^d(Φ̃^d, K, O_i^d, γ, t, τ) ≜ { j ∈ O_i^d(t) | R^{o,d}(Φ̃^d, K, j, t, τ) ≥ γ }.  (19)

The (γ, τ)-object coverage is the proportion of the objects of interest that are in the γ-coverage set, i.e.,

|C_c^d(Φ̃^d, K, O_i^d, γ, t, τ)| / |O_i^d(t)|.  (20)

C. Performance of Collaborative Sensing Utilizing Spatio-temporal Diversity
The relative movement of neighboring vehicles driving in the same direction is typically small, e.g., the relative locations of vehicles in a fleet may be stable most of the time. Such slow relative movement facilitates communication amongst the vehicles, but limits the temporal diversity in the sensing of vehicles moving in the same direction. The sensing coverage achieved by collaboration among vehicles moving in the same direction may fail to change quickly with time, and obstructed vehicles will remain unseen. By comparison, RSUs and vehicles moving in the opposite direction see fast relative movement with respect to a given flow of vehicles, and thus improved sensing coverage through temporal diversity. We have shown in [24] that RSUs can have an almost unobstructed view of the road if located well above the vehicles. In practice, RSUs may be low, e.g., to save cost, and vehicles are of different dimensions, so the sensing of RSUs can be obstructed. However, RSUs may still benefit from temporal sensing diversity with respect to a flow of vehicles. The relative velocity of vehicles moving in the opposite direction is large, i.e., twice the typical speed of a vehicle, which increases temporal diversity. However, such high relative speeds can make it difficult to establish reliable high rate links, e.g., in the mmWave band.

Let us evaluate the performance of collaborative sensing in the presence of such relative motions via simulation in a freeway scenario. We extend the simulation setting in Section II-D. Sensing and communication capable RSUs are located along one side of the road at an even spacing, denoted by d_rsu. The RSUs are at a distance d_road from the edge of the road and the height of the RSUs is h_rsu. Denote by r_rsu the sensing range of RSUs and by r_v the sensing range of vehicles. Both RSUs and vehicles have the same communication range r_comm. Vehicles move at the same speed s.

Fig. 15. Freeway simulation scenario for RSU assisted collaborative sensing with temporal dynamics.
We shall refer to the direction of the lanes close to the RSUs as the 'nearby' direction, and the other direction as the 'opposite' direction, see Fig. 15. We consider different collaborative sensing schemes: 1) base case: vehicles collaborate only with vehicles moving in the same direction; the communication channel is stable, yet the set of collaborating sensors is limited; 2) RSU: in addition, vehicles communicate with sensing capable RSUs; 3) opposite: vehicles communicate with vehicles moving in the same direction and in the opposite direction.

Fig. 16 illustrates the (1, τ)-object coverage of collaborative sensing under our three collaboration schemes and different τ. RSUs are uniformly deployed along the road, providing a 1-coverage of the road, e.g., r_rsu = 200 m, d_rsu = 400 m. We assume r_interest = 200 m, d_road = 2 m, r_v = 200 m, and s = 20 m/s. The communication range is r_comm = 500 m, which is enough for a vehicle to communicate with all sensors having relevant sensor data. We set h_rsu = 1 m, which is lower than the heights of vehicles (typically around 1.5 m for sedans and higher for other vehicles). This assumption on h_rsu is mainly used to make the RSU sensors subject to obstructions, so as to study the impact of temporal diversity.

First let us consider collaborative sensing without temporal diversity, i.e., τ = 0 s. From the simulation results in Fig. 16 we can see that sensing coverage increases with spatial diversity, i.e., collaboration with RSUs and/or vehicles in the opposite direction improves the sensing coverage. Comparing the coverage when only RSUs or only vehicles in the opposite direction are added, collaborating with RSUs provides a larger gain at low penetrations, while collaborating with vehicles in the opposite direction works better at high penetrations.
As expected, collaborating with both RSUs and vehicles in the opposite direction provides the most diversity and thus the most gain. When temporal diversity in sensing is utilized, i.e., RSUs and vehicles in the opposite direction track objects using previous measurements, coverage can be further improved. In fact, the coverage increases with τ. Note that collaborating with RSUs and utilizing temporal diversity alone already provides relatively high coverage. This indicates that RSUs can achieve good coverage of the environment by tracking objects even when they are not located higher than all objects and are thus subject to obstructions. A comparison of the coverage for vehicles moving in the two directions shows that RSUs provide better temporal diversity for sensing vehicles moving in the farther away lanes. The reason is that the obstructions in the nearby lanes have larger relative movements, so RSUs see larger temporal diversity in the obstruction field.

Fig. 16. The (1, τ)-object coverage of collaborative sensing with vehicles driving in the same direction, RSUs, and/or vehicles in the opposite direction, for vehicles moving in (a) the original direction, and (b) the opposite direction. Curves show the base case and the RSU, opposite, and RSU+opposite schemes for τ = 0 s and τ = 2 s.

VI. CONCLUSION
Collaborative sensing can greatly improve a vehicle's sensing coverage. V2V collaborative sensing alone substantially improves the sensing coverage even at moderate penetrations, and coverage improves further when both V2V and V2I collaborative sensing are used. However, collaborative sensing suffers at low penetrations due both to a lack of available collaborators and to communication blockages on (mmWave) V2V relaying paths. Access to V2I connectivity will thus be important to provide communication for collaborative sensing when V2V relaying paths are unavailable. At higher penetrations the average V2I traffic is low, but the infrastructure should still have the ability to support traffic bursts when the V2V network of collaborating vehicles becomes disconnected.

To provide higher coverage one might consider supporting joint collaborative sensing amongst vehicles and RSUs with both sensing and communication capabilities. With sufficient RSU density and unobstructed placements, one can ensure coverage by collaborating only with RSUs. The associated capacity requirement can also be much smaller than for collaboration with vehicles: vehicles receive data from one RSU instead of from all neighboring vehicles. However, sensing based only on a minimal RSU deployment might not provide enough sensing redundancy, and deploying even more RSUs to provide diversity would be costly. Furthermore, in order to navigate in a variety of environments, vehicles will need to have their own sensing capabilities, which should clearly be leveraged. Thus we see the combination of vehicular/RSU collaborative sensing as the most cost-effective way to achieve high coverage in vehicular automated driving applications, in particular for high speed automated highways.

APPENDIX
A. Proof of Theorem 1
The locations and associated marks of the objects, Φ̃, follow an IMPPP, thus the occupied space can be modeled by a Boolean Process [22]. One can also formally define the distribution as seen by a typical vehicle referred to the origin. Let Z₀ = (0, M₀), M₀ = (A₀, Y₀, S₀), denote the typical vehicle. We let

f(x, z₀, φ̃ \ {z₀}) = 1(x ∈ c₀)  (21)

be the indicator that location x is in the coverage set c₀ of the typical sensor z₀, where φ̃ \ {z₀} denotes the other objects in the environment excluding z₀. The expected area of the coverage set of a typical vehicle is then given by

E[|C₀|] = E[ ∫_{x∈R²} f(x, Z₀, Φ̃ \ {Z₀}) dx ]
        = ∫_{x∈R²} ∫_{m∈M} ∫_{φ̃} f(x, z₀, φ̃ \ {z₀}) P^{!z₀}(dφ̃) F_M(dm) dx,  (22)

where M is the support of M₀, and P^{!z₀}(·) is the reduced Palm distribution of Φ̃ given a typical object z₀ = (0, m), i.e., the distribution of the other objects in the environment as seen by a typical object [22]. For a Boolean Process, it follows by the Slivnyak–Mecke theorem [22] that the reduced Palm distribution is the same as that of the original Boolean Process. Thus we have

∫_{φ̃} f(x, z₀, φ̃ \ {z₀}) P^{!z₀}(dφ̃) = ∫_{φ̃} f(x, z₀, φ̃) P_{Φ̃}(dφ̃)
  =⁽¹⁾ 1(x ∈ ({y} ⊕ s) ∩ a) + 1(x ∈ ({y} ⊕ s) \ a) · e^{−λ E_A[|l_{y,x} ⊕ Ǎ|]},  (23)

where ({y} ⊕ s) \ a is the sensing support of the typical sensor excluding the region a covered by the sensor itself. In equality (1) we have used the fact that, for a Boolean Process, the number of objects intersecting a compact convex shape, e.g., l_{y,x}, has a Poisson distribution with mean λ · E_A[|l_{y,x} ⊕ Ǎ|], where Ǎ = {x | −x ∈ A} [22]. Substituting the result in Eq. 23 into Eq. 22 we get Eq. 3.
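The fact used in equality (1), that the number of Boolean-Process objects hitting a compact convex set such as l_{y,x} is Poisson with mean λ · E_A[|l_{y,x} ⊕ Ǎ|], can be illustrated by Monte Carlo for disk-shaped objects, for which |l ⊕ Ǎ| = 2rL + πr². A sketch with purely illustrative parameters (density, radius and segment length are assumptions):

```python
import math
import random

random.seed(1)
lam, r, L, W = 0.02, 3.0, 50.0, 100.0  # density, disk radius, segment length, window half-size

def poisson(mean):
    """Poisson sample via unit-rate exponential inter-arrival times."""
    t, k = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > mean:
            return k
        k += 1

trials, hits = 500, 0
for _ in range(trials):
    n = poisson(lam * (2 * W) ** 2)      # HPPP(lam) on [-W, W]^2
    for _ in range(n):
        x, y = random.uniform(-W, W), random.uniform(-W, W)
        # A disk of radius r at (x, y) intersects the segment [0, L] x {0}
        # iff the point-to-segment distance is at most r.
        dx = 0.0 if 0.0 <= x <= L else min(abs(x), abs(x - L))
        if dx * dx + y * y <= r * r:
            hits += 1

mean_hits = hits / trials
# Poisson mean: lam * |segment dilated by the disk| = lam * (2 r L + pi r^2)
expected = lam * (2 * r * L + math.pi * r * r)
```

The empirical mean count of intersecting disks matches λ(2rL + πr²), the area of the segment dilated by the (symmetric) disk shape.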
The locations of the objects follow an HPPP and the environment can be modeled as an IMPPP, thus the environment is homogeneous in space. Without loss of generality we consider the redundancy of location 0. By definition, we have

E[R(Φ̃, Φ_s, 0) | 0 ∉ E] = E[R(Φ̃, Φ_s, 0) · 1(0 ∉ E)] / P(0 ∉ E).  (24)

Since the region occupied by objects follows a Boolean Process, the probability that 0 is not occupied by objects is given by, see [22],

P(0 ∉ E) = e^{−λ · E[|A|]}.  (25)

We let h(x₀, x, m, φ̃ \ {(x, m)}) be the indicator that location x₀ is in the void space and sensed by object (x, m), for the given environment excluding the reference object, i.e., φ̃ \ {(x, m)}. E[R(Φ̃, Φ_s, 0) · 1(0 ∉ E)] is then given by

E[R(Φ̃, Φ_s, 0) · 1(0 ∉ E)]
  = E[ Σ_{(X_i,M_i)∈Φ̃, X_i∈Φ_s} h(0, X_i, M_i, Φ̃ \ {(X_i, M_i)}) ]
  = p_s λ ∫_{x∈R²} ∫_{m∈M} ∫_{φ̃} h(0, x, m, φ̃) P^{!(x,m)}(dφ̃) F_M(dm) dx
  =⁽¹⁾ p_s λ ∫_{x∈R²} E_{M,Φ̃}[ h(0, x, M, Φ̃) ] dx
  =⁽²⁾ p_s λ ∫_{x∈R²} E_{M,Φ̃}[ h(−x, 0, M, Φ̃) ] dx
  =⁽³⁾ p_s λ · E[|C₀ \ A₀|].  (26)

Equality (1) follows from the Slivnyak–Mecke theorem [22]. Equality (2) follows from the spatial homogeneity of the environment model, whence E_{M,Φ̃}[h(0, x, M, Φ̃)] = E_{M,Φ̃}[h(−x, 0, M, Φ̃)]. Equality (3) follows from the result characterizing E[|C₀|] in Thm. 1. Note that the function h is not the same as f introduced in the proof of Thm. 1, i.e., a point on the boundary of an object can be in the coverage set but cannot be in the void space. However, the area of the set of such points is 0, thus equality (3) holds. Combining the above results finishes the proof.
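The void probability in Eq. 25 can likewise be checked numerically for disk-shaped objects, for which E[|A|] = πr²; a Monte Carlo sketch with illustrative parameters:

```python
import math
import random

random.seed(3)
lam, r, W = 0.02, 3.0, 30.0     # object density, disk radius, window half-size

def poisson(mean):
    """Poisson sample via unit-rate exponential inter-arrival times."""
    t, k = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > mean:
            return k
        k += 1

trials, void = 2000, 0
for _ in range(trials):
    n = poisson(lam * (2 * W) ** 2)      # HPPP(lam) on [-W, W]^2
    pts = ((random.uniform(-W, W), random.uniform(-W, W)) for _ in range(n))
    # Origin is in the void space iff no disk of radius r covers it.
    if all(x * x + y * y > r * r for x, y in pts):
        void += 1

p_void = void / trials
expected = math.exp(-lam * math.pi * r * r)   # e^{-lam * E[|A|]} for disks
```

The empirical void frequency matches exp(−λπr²), the Boolean-Process void probability for fixed-radius disks.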
We shall consider the expected number of V2I transmissions required by a typical sensing vehicle.
V2I uplink.
The probability that the V2I link will be required to share sensor data with collaborating vehicles in one direction, e.g., the forward direction, is given by

p_front(η, p_s) = 1 − Σ_{k=0}^{η} p_s^k · (1 − p_s)^{η−k}.  (27)

This expression can be interpreted as one minus the probability (the sum) that the V2I link is not required. The V2I link is not required if the first k vehicles are collaborative, and can thus perform V2V relaying, while the remaining η − k are not sensing vehicles and so do not require the data. The forward and backward directions are independent and symmetric, thus the probability that V2I resources will be required is

p_V2I(η, p_s) = 1 − (1 − p_front(η, p_s))².  (28)

Note that the data need only be uploaded once, irrespective of whether one or more sharing paths are blocked, thus E[N_UL] = p_V2I.

V2I downlink.
If the broadcast downlink is used, we have N_BDL = N_UL, thus E[N_BDL] = E[N_UL]. If only a unicast downlink is available, a V2I downlink is required for every collaborative vehicle with no LOS V2V relay path available. Given our modeling assumption that vehicles receiving data from the infrastructure can further relay data via V2V links, the (k + 1)th vehicle requires a downlink transmission if it is collaborating and the kth vehicle is not sensing. E[N_UDL] is thus the sum over k of the expected number of unicast downlink transmissions required by the (k + 1)th vehicle, which yields Eq. 13. Given the expected numbers of V2I transmissions for a typical sensing vehicle, we get the associated capacities c_UL, c_BDL, and c_UDL accordingly.
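Using the resulting expression for E[N_UL], the normalized uplink load p_s · E[N_UL] of Eq. 10 can be evaluated directly; a short sketch (η = 5, matching the numerical results of Section IV-A) showing that the load vanishes at both extremes of p_s and peaks at an intermediate penetration:

```python
def norm_ul_load(ps, eta=5):
    """Normalized uplink capacity c_UL / (lambda_v * d * nu) = p_s * E[N_UL]."""
    # P(sensing vehicles in one direction form a prefix, so no V2I is needed)
    p_prefix = sum(ps ** k * (1.0 - ps) ** (eta - k) for k in range(eta + 1))
    return ps * (1.0 - p_prefix ** 2)   # both directions, Eqs. 12 and 28

grid = [i / 100.0 for i in range(101)]
loads = [norm_ul_load(p) for p in grid]
peak = max(loads)
p_star = grid[loads.index(peak)]
# The load is 0 at p_s = 0 (no sensing vehicles) and at p_s = 1 (V2V
# relaying always works), and peaks at an intermediate penetration.
```

This reproduces the qualitative shape of the c_UL curves in Fig. 14: V2I is most heavily loaded at intermediate penetrations.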
D. V2I Capacity with Assistance from Neighbor Lanes
Consider the vehicles in front of a reference vehicle placed in column 0 of the grid. Let S_k = (S_k¹, S_k², S_k³), S_k^i ∈ {0, 1}, denote whether the vehicles in the kth column from the reference vehicle (indices 1, 2, 3 denote the vehicles from the top row to the bottom row) are collaborating, where 0 denotes a non-collaborating vehicle and 1 a collaborating one. Denote by X_k = (X_k¹, X_k², X_k³), X_k^i ∈ {0, 1}, whether the vehicles in the kth column are both collaborating and able to receive data from the reference vehicle. We denote by Y_k ∈ {0, 1} whether the V2I downlink is required to relay sensing data to vehicles in the first k columns. The state of the kth column is given by

Z_k = (X_k, Y_k).  (29)

Based on our assumption that relaying paths cannot contain links in both the forward and backward directions, X_{k+1} only depends on X_k and S_{k+1}. Since whether a vehicle is collaborating is independent of the other vehicles, the probability distribution of Z_{k+1} depends only on that of Z_k and p_s. Denote by P the state transition probability matrix of a transition from Z_k to Z_{k+1}, k ≥ 0. The probability distribution of S_{k+1} is given by

P( S_{k+1} = (s¹, s², s³) ) = p_s^{s¹+s²+s³} · (1 − p_s)^{3−s¹−s²−s³}.  (30)

Denote by X̃_{k+1} the indicator that vehicles in the kth column can send data to the corresponding vehicles in the (k + 1)th column via V2V links. Denoting by ∧ a logical AND and by ∨ a logical OR, we have

X̃_{k+1} = (X_k¹ ∧ S_{k+1}¹, X_k² ∧ S_{k+1}², X_k³ ∧ S_{k+1}³).  (31)

Further consider the communication amongst vehicles in the same column.
Denote by X̂_{k+1} the state of the vehicles after those in the (k + 1)th column share data amongst themselves via V2V links; then we have

X̂_{k+1}¹ = X̃_{k+1}¹ ∨ ( S_{k+1}¹ ∧ ( X̃_{k+1}² ∨ ( X̃_{k+1}³ ∧ S_{k+1}² ) ) ),  (32)
X̂_{k+1}² = X̃_{k+1}² ∨ ( S_{k+1}² ∧ ( X̃_{k+1}¹ ∨ X̃_{k+1}³ ) ),  (33)
X̂_{k+1}³ = X̃_{k+1}³ ∨ ( S_{k+1}³ ∧ ( X̃_{k+1}² ∨ ( X̃_{k+1}¹ ∧ S_{k+1}² ) ) ),  (34)

i.e., a sensing vehicle can also receive data from other collaborating vehicles in the same column via V2V relaying.

For V2I relaying, we denote by Ỹ_{k+1} whether V2I relaying is required by the (k + 1)th column. This occurs if the vehicle in the central lane is collaborating but cannot receive data via V2V links, i.e., when

(S_{k+1}² = 1) and (X̂_{k+1}² = 0),  (35)

we have Ỹ_{k+1} = 1. This vehicle can then further relay the data to neighboring collaborative vehicles in the (k + 1)th column. The state transition is now given by

Y_{k+1} = Y_k ∨ Ỹ_{k+1},  (36)
X_{k+1} = X̂_{k+1} if Ỹ_{k+1} = 0, and X_{k+1} = S_{k+1} otherwise.  (37)

Based on the above state transition rules, we can compute P as a function of p_s. Denote by Z the support of Z_k and by π_k = (π_k¹, π_k², …, π_k^{|Z|}) the probability distribution of Z_k, where π_k^i is the probability of state i at column k. We have

π_k = P^k · π_0.  (38)

Denote by Z_V2I ⊆ Z the set of states with Y = 1. The probability that V2I communication is required to relay data to vehicles in the forward direction, conditioned on the distribution of column 0 being π_0, is given by

p_front(η, p_s, π_0) = Σ_{i ∈ Z_V2I} π_η^i,  (39)

where π_η = P^η π_0. Conditioning on the reference vehicle being a sensing vehicle, we can compute π_0 based on p_s. p_V2I is thus given by

p_V2I = Σ_{i=1,…,|Z|} π_0^i · ( 1 − (1 − p_front(η, p_s, e_i))² ),  (40)
Fig. 17. How p_V2I(η, p_s) and the normalized c_UL, c_BDL, and c_UDL change with p_s when vehicles send data to vehicles in the same lane and the two neighboring lanes.

where e_i ∈ {0, 1}^{|Z|}, with e_i^i = 1 and e_i^j = 0 for j ≠ i. For E[N_UDL], we can redefine Y_k as the number of V2I unicast downlink transmissions required by the vehicles in the kth column. Similarly as above, we can compute the corresponding state transition probabilities, and E[N_UDL] is given by

E[N_UDL] = Σ_{k=1}^{η} E[Y_k].  (41)

In the above analysis we assumed the reference vehicle only needs to share data with vehicles in the same lane, e.g., vehicles are moving in platoons and mainly require data from the same platoon. In fact, vehicles may also need to share data with vehicles in neighboring lanes for applications such as advanced automated driving and collaborative sensing [4]. In this case we can analyze the required V2I network capacity following similar steps. One major difference is that the condition in Eq. 35 should be replaced by

∃ i ∈ {1, 2, 3} s.t. X̂_{k+1}^i ≠ S_{k+1}^i,  (42)

i.e., there is a sensing vehicle not receiving the sensor data via V2V relay. Also, in Eq. 37 we then have X_{k+1} = S_{k+1}, i.e., all sensing vehicles get the data by either V2V or V2I.

In Fig. 17 we exhibit the results when vehicles need to share data with vehicles in neighboring lanes. Compared with the case where vehicles need to share data only with vehicles in the same lane, the V2I capacity requirements here are much higher. Such a result was to be expected, as more vehicles require the sensing data. Note that p_V2I is close to 1 over a large range of penetrations. This indicates that assistance from V2I would be necessary for reliable collaborative sensing from the early stages, when the penetration of automated driving vehicles is low.

REFERENCES

[1] S.-W. Kim, B. Qin, Z. J. Chong, X. Shen, W. Liu, M. H. Ang, E. Frazzoli, and D.
Rus, “Multivehicle cooperative driving using cooperative perception: Design and experimental validation,” IEEE Trans. Intell. Transp. Syst., vol. 16, no. 2, pp. 663–680, 2015.
[2] L. Hobert, A. Festag, I. Llatser, L. Altomare, F. Visintainer, and A. Kovacs, “Enhancements of V2X communication in support of cooperative autonomous driving,” IEEE Commun. Mag., vol. 53, no. 12, pp. 64–70, 2015.
[3] “5G automotive vision,” October 2015.
[4] “3GPP TS 22.186 v15.1.0 study on enhancement of 3GPP support for V2X scenarios; stage 1 (release 15),” June 2016.
[5] A. Rauch, F. Klanner, and K. Dietmayer, “Analysis of V2X communication parameters for the development of a fusion architecture for cooperative perception systems,” in Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE, 2011, pp. 685–690.
[6] H. Li and F. Nashashibi, “Multi-vehicle cooperative perception and augmented reality for driver assistance: A possibility to see through front vehicle,” in Intelligent Transportation Systems (ITSC), 2011 14th International IEEE Conference on. IEEE, 2011, pp. 242–247.
[7] A. Rauch, F. Klanner, R. Rasshofer, and K. Dietmayer, “Car2X-based perception in a high-level fusion architecture for cooperative perception systems,” in Intelligent Vehicles Symposium (IV), 2012 IEEE. IEEE, 2012, pp. 270–275.
[8] S.-W. Kim, Z. J. Chong, B. Qin, X. Shen, Z. Cheng, W. Liu, and M. H. Ang, “Cooperative perception for autonomous vehicle control on the road: Motivation and experimental results,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013, pp. 5059–5066.
[9] X. Zhao, K. Mu, F. Hui, and C. Prehofer, “A cooperative vehicle-infrastructure based urban driving environment perception method using a D-S theory-based credibility map,” Optik - International Journal for Light and Electron Optics, vol. 138, pp. 407–415, 2017.
[10] J. B. Kenney, “Dedicated short-range communications (DSRC) standards in the United States,” Proc. IEEE, vol. 99, no. 7, pp. 1162–1182, 2011.
[11] H. Seo, K.-D. Lee, S. Yasukawa, Y. Peng, and P. Sartori, “LTE evolution for vehicle-to-everything services,” IEEE Commun. Mag., vol. 54, no. 6, pp. 22–28, 2016.
[12] “3GPP TR 22.886 v15.1.0 study on enhancement of 3GPP support for 5G V2X services (release 15),” March 2017.
[13] V. Va, T. Shimizu, G. Bansal, R. W. Heath Jr. et al., “Millimeter wave vehicular communications: A survey,” Foundations and Trends in Networking, vol. 10, no. 1, pp. 1–113, 2016.
[14] J. Choi, V. Va, N. Gonzalez-Prelcic, R. Daniels, C. R. Bhat, and R. W. Heath, “Millimeter-wave vehicular communication to support massive automotive sensing,” IEEE Commun. Mag., vol. 54, no. 12, pp. 160–167, 2016.
[15] G. Zhang, Y. Xu, X. Wang, X. Tian, J. Liu, X. Gan, H. Yu, and L. Qian, “Multicast capacity for VANETs with directional antenna and delay constraint,”
IEEE J. Sel. Areas Commun. , vol. 30, no. 4, pp. 818–833,2012.[16] S. Kwon, Y. Kim, and N. B. Shroff, “Analysis of connectivity andcapacity in 1-D vehicle-to-vehicle networks,”
IEEE Trans. WirelessCommun. , vol. 15, no. 12, pp. 8182–8194, 2016.[17] X. He, H. Zhang, W. Shi, T. Luo, and N. C. Beaulieu, “Transmissioncapacity analysis for linear VANET under physical model,”
ChinaCommunications , vol. 14, no. 3, pp. 97–107, 2017.[18] A. T. Giang, A. Busson, A. Lambert, and D. Gruyer, “Spatial capacityof IEEE 802.11 p-based VANET: Models, simulations, and experimen-tations,”
IEEE Trans. Veh. Technol. , vol. 65, no. 8, pp. 6454–6467, 2016.[19] G. Ozbilgin, U. Ozguner, O. Altintas, H. Kremo, and J. Maroli,“Evaluating the requirements of communicating vehicles in collaborativeautomated driving,” in
Intelligent Vehicles Symposium (IV), 2016 IEEE .IEEE, 2016, pp. 1066–1071.[20] J. O’rourke,
Art gallery theorems and algorithms . Oxford UniversityPress Oxford, 1987, vol. 57.[21] H. G. Seif and X. Hu, “Autonomous driving in the iCity–HD maps asa key challenge of the automotive industry,”
Engineering , vol. 2, no. 2,pp. 159–162, 2016.[22] S. N. Chiu, D. Stoyan, W. S. Kendall, and J. Mecke,
Stochastic geometryand its applications . John Wiley & Sons, 2013.[23] B. Mat´ern,
Spatial variation . Springer Science & Business Media,2013, vol. 36.[24] Y. Wang, G. de Veciana, T. Shimizu, and H. Lu, “Deployment andperformance of infrastructure to assist vehicular collaborative sensing,”in
Vehicular Technology Conference (VTC-Spring), 2018 IEEE 87th .IEEE, 2018, pp. 1–5..IEEE, 2018, pp. 1–5.
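As a numerical aside, the summation E[N_UDL] = Σ_{k=1}^{η} E[Y_k] in Eq. 41 lends itself to a quick Monte Carlo sanity check. The sketch below uses a deliberately simplified 1-D model rather than the state-transition analysis of the paper: each of η columns holds a sensing vehicle independently with probability p_s, a V2V relay hop succeeds only between sensing vehicles at most r columns apart, and a V2I downlink is assumed to re-seed the relay chain. The parameter names and the relay rule are illustrative assumptions, not the paper's exact model.

```python
import random

def sample_n_udl(eta, p_s, r, rng):
    """One realization of N_UDL in a toy 1-D model: sensing vehicles that
    the V2V relay chain cannot reach need a V2I unicast downlink instead."""
    # Column 0 holds the source; columns 1..eta-1 each hold a sensing
    # vehicle independently with probability p_s.
    sensing = [rng.random() < p_s for _ in range(eta)]
    n_udl = 0
    last_with_data = 0  # rightmost column already holding the sensor data
    for k in range(1, eta):
        if not sensing[k]:
            continue
        if k - last_with_data <= r:
            last_with_data = k   # reached by a V2V relay hop
        else:
            n_udl += 1           # relay chain broken: count a V2I downlink
            last_with_data = k   # assume V2I delivery re-seeds relaying
    return n_udl

def estimate_e_n_udl(eta, p_s, r, trials=20000, seed=0):
    """Monte Carlo estimate of E[N_UDL] = sum over columns of E[Y_k]."""
    rng = random.Random(seed)
    return sum(sample_n_udl(eta, p_s, r, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for p_s in (0.1, 0.3, 0.5, 0.9):
        print(f"p_s={p_s:.1f}  E[N_UDL] ~ {estimate_e_n_udl(50, p_s, 3):.2f}")
```

Even this crude model reproduces the qualitative behavior discussed above: the V2I load vanishes at the extremes (no sensing vehicles to serve, or a fully connected V2V chain) and peaks at intermediate penetrations, where sensing vehicles are numerous enough to need data but too sparse to relay it.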