Quasar lensing
Introduction
Strong gravitational lensing is a phenomenon which occurs when the lines of sight to a foreground and background object nearly coincide, resulting in multiple imaging of the background object. Although quite rare, it offers an important diagnostic of masses and mass distributions in foreground objects ranging from stars to clusters of galaxies, and has the important advantage of being sensitive to all kinds of matter, both baryonic and dark. Strong lensing also gives magnified views of background objects, allowing easy study of otherwise inaccessible quantities, and potentially cosmological information owing to its sensitivity to combinations of mass density and lengths within the universe.

Quasars are relatively rare phenomena, and lensed quasars, in which a foreground galaxy provides the gravitational deflection in the right place, are correspondingly rare. However, they have some unique advantages: quasars allow easy access to the high-redshift universe; quasars are bright, allowing easy study of subtle effects and potentially allowing complete samples to be built; quasars are variable, allowing cosmological information to be derived from time delays; quasars emit at multiple wavelengths, allowing detailed study of propagation effects. This review gives an overview of the history of quasar lensing, and a summary of its main applications. The applications fall into three parts: the use of quasars as probes of lens galaxies, both their stellar content via microlensing and their dark matter content via the fitting of models to lensed images; the use of quasar lenses to probe the structure and properties of the quasars themselves; and the use of quasar lenses for cosmology.
A number of previous reviews have addressed some or all of these issues; see, for example, Wambsganss (1998), Claeskens & Surdej (2002), Courbin, Saha & Schechter (2002), Kochanek & Schechter (2004), Kochanek (2004), Wambsganss (2004), Jackson (2007), Zackrisson & Riehm (2010), Bartelmann (2010), and Schmidt & Wambsganss (2010). In this review individual theoretical results will be presented as needed, without derivation; the interested reader can refer to the standard text by Schneider, Ehlers & Falco (1992) for more detail. Finally, I outline the possible future applications of quasar lensing, and the observational programmes which will develop the subject in the coming years.
Historical introduction
Lensing remained an interesting theoretical possibility for most of the twentieth century. The deflection of light by a mass was calculated classically by Soldner (1801) and correctly by Einstein as a consequence of general relativity, and famously observed by Eddington in 1919 using position shifts of stars close to the Sun during a solar eclipse. The possibility of multiple images formed by individual objects was considered by Chwolson (1924). However, it was not until 1979 that the first gravitational lens system was actually found.

The first lens system, Q0957+561, was discovered by Walsh et al. (1979) during the course of optical followup of sources found in a radio survey at 966 MHz with the Jodrell Bank telescope. The source object in this system is a radio-loud quasar at redshift 1.41, doubly imaged by a galaxy at redshift 0.36 (Fig. 1). The large (6″) separation of the two images in this system is due to the fact that the lensing galaxy is assisted by a cluster at the same redshift; this allowed Walsh et al. to measure separately the redshifts of the two lensed images, whose spectral similarity supported the hypothesis that their light originates from the same background object. In the next few years, further objects were discovered, many of them in radio surveys; these have the advantage that the sources they contain are predominantly non-thermal emitters such as quasars, without contamination from stellar processes.

Figure 1: A modern view of the Q0957+561 lens system, made with the e-MERLIN telescope at 5 GHz. The quasar consists of a core, radio jet (top left), and lobe (bottom right). The second image of the core (bottom of the image) is close to the lensing galaxy.
(Image credit: Muxlow, Beswick & Richards, Manchester.)

A typical non-thermal radio source consists of a central bright flat-spectrum core, corresponding to the active nucleus at the centre of the host galaxy, and extended steep-spectrum emission in lobes which result from ejection of material from the active centre. Although the Q0957+561 system has a double image of the core, further quasar lenses were found in which the extended radio emission was gravitationally imaged, resulting in rings (e.g. MG1131+0456, Hewitt et al. 1988).

Searches for lensed radio sources divided into two main areas. The first, following the success of the MG survey, targeted extended radio lobes, looking for rings associated with the presence of galaxies in front of them (Lehár et al. 2001, Haarsma et al. 2005). The second method involved systematic targeting of flat-spectrum radio sources, in which the central bright point component dominates the radio emission. This makes lensing relatively easy to recognise, although the low optical depth to lensing means that a large number of candidates must be examined to find relatively few lenses. The CLASS survey (Myers et al. 2003, Browne et al. 2003) is still the largest systematic radio survey, and produced 22 lens systems from a parent sample of 16503 northern objects initially observed with the VLA and followed up at higher resolution using MERLIN and VLBI. A southern extension of this survey also exists (e.g. Winn et al. 2001), which discovered a further 4 lens systems. Most of the lenses are elliptical galaxies, since these are generally more massive and hence dominate the lensing cross-section, but a significant minority are later types. Subsamples of these 22 are still important for many of the astrophysical and cosmological applications that will be discussed later in this review.

About 90% of quasars do not have bright radio emission. Searches for lensed radio-quiet quasars began soon after the Walsh et al.
(1979) initial discovery, most successfully using the HST. Lensed optical quasars began to be discovered in substantial numbers following the availability of the Sloan Digital Sky Survey (SDSS; York et al. 2000) and subsequently of quasar catalogues derived from it. The Sloan Quasar Lens Search (SQLS; Oguri et al. 2006, Inada et al. 2008, Inada et al. 2010, Inada et al. 2012) has discovered the majority of these, by following up quasars which show extended emission on SDSS images, suggesting the presence of secondary lensed images, or of a lensing galaxy, or both. The SQLS has produced about 30 quasar lenses in the most recent catalogue, including some of the widest-separation lenses known (Inada et al. 2003, Inada et al. 2006). In addition a number of smaller surveys have increased the number of lenses, using various methods. These include the use of higher-image-quality supplementary surveys such as UKIDSS (Lawrence et al. 2007) to find small-separation or high-flux-ratio lenses (Jackson et al. 2009, 2012).

The most complete current database, the Masterlens project (Moustakas et al. 2012), lists 120 quasar lenses, of which 71 result from two surveys (SQLS and CLASS) and the remainder are serendipitous discoveries or result from smaller surveys. It is this sample that forms the current basis of the scientific results discussed in this review. Future instruments will increase this sample by many orders of magnitude, allowing correspondingly more detailed and wide-ranging science, which is discussed in the final section.

Quasar lenses as probes of lens galaxies
Introduction
Any lens system allows the projected mass distribution of the lens to be probed; the lens is typically an elliptical galaxy at significant redshift.

Lensed substructure
The main attraction of quasar lenses is that they provide a probe of sub-galactic-scale matter structure, which is in turn relevant to a strong prediction of CDM models of structure formation. In such scenarios, structure in the Universe forms in a hierarchical way, with small dark matter clumps coalescing into larger haloes as time passes to form steadily larger agglomerations (White & Rees 1978). Baryons lose potential energy by non-gravitational means and therefore settle into the resulting potential wells, forming galaxies, groups and clusters. Because many physical processes are involved, how this happens is quite complicated, and in practice semi-empirical recipes are used for describing the process (Blumenthal et al. 1986, Ryden 1988, Gnedin et al. 2004). There are also ongoing processes which may rearrange the baryonic material in galaxy-sized haloes, including the influence of supernovae in lower-mass haloes (Heckman et al. 1990) and of periodic ejections of matter from active nuclei in the centres (e.g. Begelman et al. 1991, Croton et al. 2006); these processes are collectively known as feedback. Complications associated with baryon physics cannot be avoided in lensing systems, because typical gravitationally lensed images form at radii of about an arcsecond, corresponding to 5-10 kpc in projection against the lens galaxy. At this radius, dark matter is expected to contribute at the level of a few tens of percent to the projected matter distribution, and baryon processes are therefore dominant.

The substructure debate matters, because CDM works so well on cluster and supercluster scales that its predictions on smaller scales are one of its few possible failure modes. In our own Galaxy, the observation that fewer luminous satellites were found than CDM subhalo models would predict (Moore et al. 1999, Klypin et al. 1999) generated a vast literature reflecting the importance of the problem.
Possible ways out of the problem included finding at least some of the missing satellites (Belokurov et al. 2006, 2007; Zucker et al. 2006a,b), or finding some way in which they might be present but not accrete gas and form stars (Bullock et al. 2000). The current situation is that it is difficult to explain both the incidence of substructure and its dynamical properties and stellar content (e.g. Boylan-Kolchin, Bullock & Kaplinghat 2011, Boylan-Kolchin, Bullock & Kaplinghat 2012), although possibilities exist, including detailed adjustments corresponding to detailed treatments of the physics, or gross changes such as an alteration in the overall mass of the Milky Way and thus of its expected subhalo content. Curiously, despite the apparent lack of substructure on sub-Magellanic-Cloud mass scales, the presence of substructure on scales as massive as the Magellanic Clouds themselves is mildly anomalous, unless the Milky Way has a mass towards the upper end of the allowed range (Boylan-Kolchin et al. 2010).

Lensed substructure in other galaxies
Relief from detailed arguments about CDM substructure in our own Galaxy can be had by considering substructure probes in other galaxies, in which details can be swept under the observational carpet - or, less cynically, a larger number of objects can be probed in less detail, thus accounting for the possibility that our Galaxy may be untypical. The main mass probe in other galaxies at cosmological distance is gravitational lensing.

The observation of a gravitational lens system yields a set of observed positions and flux densities of lensed images. As already discussed, this set of observables does not give unique information about the mass distribution, and in general a large number of macromodels (a term generally used for overall mass distributions, or at any rate for components of spatial frequency in the mass distribution of a few kpc or larger) are compatible with images of a single object. Some plausibility arguments can be used to restrict the available set of macromodels, mainly the observation that well-constrained lens systems, with stellar dynamics and extended images, suggest approximately isothermal distributions of mass (e.g. Cohn et al. 2001, Rusin et al. 2003, Koopmans et al. 2006). This density profile corresponds to a flat rotation curve. On top of the macromodel, any smaller-scale perturbations will affect the image positions and fluxes. Since image positions depend on the first derivative of the projected potential distribution and fluxes on the second, smaller perturbations are expected to be detectable in image fluxes.

A number of relations between fluxes of individual lensed images exist which are independent of, or at least relatively insensitive to, the details of the macromodel. For example, “cusp” configuration lens systems, in which the source lies close to the cusp of the astroid caustic produced by the lens galaxy (Fig. 2), produce three bright images close together on the opposite side of the Einstein ring from a single faint image.
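For a smooth macromodel, the signed magnifications of the three close images in such a cusp configuration nearly cancel, which is the basis of the flux relation discussed below. This can be checked numerically; the following is an illustrative sketch (not a published lens model), assuming a singular isothermal sphere plus external shear with arbitrarily chosen parameter values, and finding the images by solving the lens equation from a grid of starting points:

```python
import numpy as np
from scipy.optimize import fsolve

THETA_E, GAMMA = 1.0, 0.15          # Einstein radius (arcsec) and external shear (assumed values)
BETA = np.array([0.24, 0.0])        # source position, chosen near a cusp of the astroid caustic

def lens_eq(theta):
    """beta(theta) - beta_source for a singular isothermal sphere plus external shear."""
    x, y = theta
    r = np.hypot(x, y)
    ax = THETA_E * x / r + GAMMA * x    # deflection, including the shear term
    ay = THETA_E * y / r - GAMMA * y
    return [x - ax - BETA[0], y - ay - BETA[1]]

def magnification(theta):
    """Signed magnification 1/det(A), using the analytic Jacobian of the lens mapping."""
    x, y = theta
    r = np.hypot(x, y)
    axx = 1 - GAMMA - THETA_E * y**2 / r**3
    ayy = 1 + GAMMA - THETA_E * x**2 / r**3
    axy = THETA_E * x * y / r**3
    return 1.0 / (axx * ayy - axy**2)

# find images from a grid of starting points, keeping converged, distinct roots
images = []
for x0 in np.linspace(-2, 2, 17):
    for y0 in np.linspace(-2, 2, 17):
        if np.hypot(x0, y0) < 0.1:      # avoid the central singularity
            continue
        sol, info, ier, msg = fsolve(lens_eq, [x0, y0], full_output=True)
        if ier != 1 or np.hypot(*lens_eq(sol)) > 1e-8:
            continue
        if all(np.hypot(*(sol - im)) > 1e-4 for im in images):
            images.append(sol)

mags = sorted((magnification(im) for im in images), key=abs, reverse=True)
triplet = mags[:3]                      # the three brightest images form the cusp triplet
r_cusp = abs(sum(triplet)) / sum(abs(m) for m in triplet)
print(len(images), r_cusp)              # 4 images; r_cusp close to zero for a smooth model
```

For this smooth model the residual r_cusp is far smaller than the violations observed in real anomalous lenses, which is what makes the relation a useful substructure diagnostic.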
The brightness of the central image of these three should be equal to the combined brightness of the outer two (Schneider & Weiss 1992; see also Keeton, Gaudi & Petters 2003, 2005; Congdon, Keeton & Nordgren 2008 for more detailed treatment of other cases), and any departure from this indicates a non-smooth mass model. The relation holds because the images form at a very similar, and relatively flat, part of the Fermat surface, in which unphysically large changes in the macromodel would be needed to produce disagreements – known in the literature as “cusp violations” – with the expected relation. On the other hand, small-scale structure can produce cusp violations relatively easily. Constraining small-scale structure is in principle a matter of counting the number and magnitude of cusp violations in a sample of quasar lenses.

(The isothermal profile corresponds to a surface density profile Σ ∝ r^-1, or a 3-dimensional density profile ρ ∝ r^-2.)

The Fermat surface is a very useful way of thinking about gravitational lens optics. Imagine a source, viewed by the observer in projection on to the lens plane, with contours drawn according to the light travel time of rays originating in the source, bending in the lens plane and reaching the observer. These contours are simply concentric circles centred on the source, with a central stationary point (a minimum) at which Fermat's principle dictates the formation of an image. If we then introduce a galaxy, which distorts these contours, we eventually reach a point at which further stationary points (a maximum and saddle point) simultaneously form.

There are a number of reasons why the problem is not that easy. The first is that anomalous fluxes can be produced not only by CDM substructure on 10^6-10^9 M⊙ scales, but also by the movements of individual stars in the lensing galaxy, which create a caustic pattern of differential magnification that tracks across the field on timescales of years, with individual events happening on shorter timescales. This phenomenon, microlensing, described in detail later, is itself extraordinarily interesting, but a contamination for the current purpose. It can be got around by using sources which have large sizes relative to the scale of the microlensing caustic pattern, which is of order 1 µas. The cores and VLBI-scale jets of radio sources, with a typical intrinsic angular size of about 1 mas, fulfil this condition, but optical quasars do not. The second problem, even for radio quasars, concerns propagation effects. Radio waves can be scattered while propagating through ionized media; the details are complicated (Rickett 1977) but the effect is to produce flux variations, which can be seen on timescales of weeks to years if the scattering screen is in our own Galaxy. By definition, there is a possible source of foreground screen in gravitational lens systems, in the form of the lensing galaxy, as well as a nearer screen in our own Galaxy. Koopmans et al. (2003) found evidence for this in at least one object during a monitoring campaign of some radio lenses, but it is likely to be a small effect compared with most of the observed flux anomalies. The third possible problem is the effect of intrinsic variation of the quasar, coupled with a differential time delay between the images. Even though time delays and flux density variation are useful for measuring the Hubble constant, for the present purpose they are a nuisance. Again, however, the level of variation of most radio sources does not seem to be significant enough to be a major problem, and can be averaged out if observations are made for periods much longer than the time delay. In the optical, extinction is present, and can be used to probe the properties of the dust in the lensing galaxy by using the fact that the same object's light path passes through two different regions of the galaxy (Elíasdóttir et al.
2007). The first obvious flux anomaly was pointed out by Mao & Schneider (1998) in the CLASS lens system B1422+231; this is a cusp system which produces a violation which requires a significant amount of substructure (about that predicted by CDM) in order to give a significant chance of reproducing the observed anomaly. Other examples of flux anomalies which defied smooth macromodels soon followed (Fassnacht et al. 1999, Metcalf & Zhao 2002, Chiba 2002, Saha, Williams & Ferreras 2007), leading to the first attempt to address the overall statistics (Dalal & Kochanek 2002, see also Kochanek & Dalal 2004). Using seven four-image lenses, Dalal & Kochanek derived a detection (at about 2σ confidence) of mass in substructures between 10^6 M⊙ and 10^9 M⊙, in rough agreement with the overall predictions of CDM.

(The size of a compact radio source is controlled by where the optical depth to synchrotron self-absorption becomes 1. This is typically about 1 mas for a source of around 1 Jy, although it becomes smaller with increasing frequency, and it decreases as the square root of the flux. Sources typically found by the Square Kilometre Array, which will be sensitive to sources of 1 µJy, may therefore show radio microlensing.)

However, this substructure appears to be in the wrong place (Mao et al.
2004); dark matter, and hence dark matter substructure, should in CDM models be less centrally concentrated than the baryons, and such levels of substructure at projected radii of 5-10 kpc are thus surprisingly high – a curious contrast to the “missing satellite” problem in our own Galaxy. Incidentally, the presence of a tension between lensing observations and CDM probably gives a severe problem for models involving significant amounts of warm dark matter (WDM), which would predict even less substructure (Miranda & Macciò 2007).

A more sophisticated approach to CDM testing can be taken if, instead of calculating an average contribution of substructure “expected” at the projected Einstein radius, we instead take an actual CDM halo simulation and investigate its lensing properties. Early attempts, with lower-resolution simulations, produced mixed results (Bradač et al. 2004, Macciò et al. 2006, Amara 2006), but generally confirmed the picture of an excess of flux anomalies compared to the expected incidence in ΛCDM. As better simulations became available, they have been used for these comparisons (Xu et al. 2009; see also Chen, Koushiappas & Zentner 2011 for more detailed treatment of halo-to-halo variations) using, for example, the Aquarius dark-matter simulations. There are a number of limitations with such investigations. The two main problems are that the simulations are being pushed to the limit of their resolution, since they are being asked questions about mass condensations on scales down to 10^6 M⊙, comparable to the lowest masses considered in the simulations, and that the effect of baryons in modifying the structure of the subhaloes is not taken into account. Until higher-resolution simulations with extra physics are available, however, this is the best that can be done. Xu et al.'s conclusion was that the cusp violations in existing lenses clearly exceeded the level of violation which would be expected in the dark matter simulations.
Two important caveats to this emerged in subsequent work, however. Firstly, detection of substructure along the line of sight means just that, and the substructures which produce the flux anomalies do not have to be within the lensing galaxy (Metcalf 2005a,b, Inoue & Takahashi 2012). Xu et al. (2012) suggested that 20-30% of the substructure could be outside the lensing galaxy, somewhere along the line of sight. If correct, this would potentially alleviate the tension between lensing observations and CDM, although it is probably fair to say that more work, both theoretical and observational, is needed before this can be regarded as well established. Secondly, finite source-size effects may modify the statistics of substructure detection (Dobler & Keeton 2006; Metcalf & Amara 2012).

The sample of seven radio-loud lenses used for substructure studies has remained largely unchanged in the last decade, owing to the current difficulty of finding significant numbers of new radio lenses with existing telescopes. There are a number of alternative approaches. The first involves the use of observational brute force: to target radio-quiet lenses, but observe flux densities in parts of the electromagnetic spectrum where the source has significantly greater size than the microlensing characteristic size of 1 µas. The obvious choice is the mid-infrared, where the source is expected to consist of a more extended thermal component than the accretion disk which radiates in the optical and ultraviolet. Despite the difficulties of observing in this waveband, a number of successful programmes have been carried out (Chiba et al. 2005, Fadely & Keeton 2011; Fadely & Keeton 2012), resulting in the detection of a number of other flux anomalies, and measurement of their likely masses. These range from the clumps of around 10^8 M⊙ found in MG0414+0534 and HE0435−1223 to those found in other systems (SDSS J1029+2623, and 2 × 10^8 M⊙ in B1938+666; Kratzer et al. 2011, Vegetti et al. 2012).
By contrast, the substructure identified in galaxy-galaxy lenses is often larger (e.g. Vegetti et al. 2010). Radio-quiet quasars are also not radio-silent, and flux densities have been measured for a number of such lens systems (Kratzer et al. 2011, Jackson 2011). Indeed, one would expect that all quasars emit measurable flux density at radio frequencies (White et al. 2007) with current instruments such as the EVLA and e-MERLIN.

An alternative approach is to use additional observational constraints, such as those provided by radio jets in quasars, in cases where the jets can be detected in more than one lensed image. This was first attempted by Metcalf (2002) in the case of the lens system CLASS B1152+199 (Myers et al. 1999, Rusin et al. 2002), and a detection was claimed in this case. With further investigations using VLBI, other lenses have been shown to require substructure (MacLeod et al. 2012), and this may be a promising path to more detailed substructure measurements in the future, given sensitive VLBI observations and high resolution (Zackrisson et al. 2012). Currently, one of the most puzzling cases is the four-image lens system CLASS B0128+437 (Phillips et al. 2000), in which the source consists of three radio components separated by a few milliarcseconds and resolvable with VLBI. Attempts to fit the positions of the twelve resulting images fail badly (Biggs et al. 2004). In this object, also, the SIE macromodel which properly fits the four images on arcsecond scales contains an implausibly large amount of external shear, which is inconsistent with the observed number of surrounding galaxies, and also does not fit the extended structure around the images seen in adaptive optics observations (Lagattuta et al. 2010).
A fruitful area of future investigation may well be to try to combine the flux and astrometric anomalies in a sample of lenses; although in the case of astrometric anomalies, unlike flux anomalies, the anomaly is always likely to be underestimated, since it can be absorbed into the macromodel (Chen et al. 2007). An alternative issue for the future is the proposal that time delay measurements may also be useful for measuring the effects of substructure, which can in extreme cases change the sign of the differential time delay between two images (Keeton & Moustakas 2009).

Having detected mass substructures, we can ask whether they consist purely of dark matter or whether they contain stars. In many cases, flux anomalies can be explained by a mass contribution from a substructure which corresponds to an observed luminous satellite galaxy (Schechter & Moore 1993, Ros et al. 2000, McKean et al. 2007, MacLeod, Kochanek & Agol 2009), although in some cases (McKean et al. 2007) the mass model of the satellite is contrived, indicating that further mass structures may be needed. The number of bright subsidiary deflectors may be larger than expected from simulations (Shin & Evans 2008), a problem which may be resolvable if some of them are actually line-of-sight structures, or explicable as a selection effect if brighter condensations are rendered more effective at causing flux anomalies because they have higher central densities.

Black holes and central potentials of lens galaxies
A further important astrophysical application of quasar lens systems - and in particular, of radio quasar lens systems - is the detection and study of “odd” images. All gravitational lens systems have an odd number of images, usually 3 or 5, resulting from the properties of the lens Fermat surface. One of these images is always a Fermat maximum, which forms very close to the centre of the lens galaxy. For most realistic mass distributions, this maximum in the surface is very sharp, which implies that the corresponding image is very faint (Wallington & Narayan 1993, Rusin & Ma 2001, Keeton 2003). How faint it is depends on the geometry of the lens system and the degree to which the central potential is singular; if the potential is dominated by a massive black hole, the corresponding image can be hugely demagnified and for all practical purposes invisible. The geometry of the lens system has an effect because this determines the separation of the central image from the centre of the lensing galaxy. Three-image lens systems, particularly those with high primary-to-secondary flux ratios, create central images further from the lens centre, which are thus less demagnified. Five-image systems, with four bright images, are expected nearly always to contain an undetectably faint fifth image, because the symmetry of the lens configuration places it close to the galaxy centre.

Propagation effects are likely to be particularly acute when trying to detect central images, because the light path passes straight through the lens galaxy centre, where the concentration of dust and ionized gas is high. The use of radio lenses, where the galaxy is unlikely to be visible, is required, but relies on the expectation or hope that the central image will not be scattered out of existence. Nevertheless, detection of a faint radio image is not the end of the story, since it may result from low-level radio emission from the core of the lensing galaxy.
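The extreme demagnification of the central image by a concentrated central mass can be illustrated with a toy axisymmetric model. The sketch below is an assumption-laden illustration, not a fit to any real lens: it uses a cored isothermal deflection law α(θ) = θE·θ/√(s² + θ²), for which the lens equation on the axis through the source has three solutions; as the core radius s shrinks, making the centre more nearly singular, the central image fades rapidly:

```python
import numpy as np
from scipy.optimize import brentq

THETA_E, BETA = 1.0, 0.3            # Einstein radius and source offset (illustrative values)

def alpha(theta, s):
    """Deflection of a cored (non-singular) isothermal sphere with core radius s."""
    return THETA_E * theta / np.sqrt(s**2 + theta**2)

def images(s):
    """Solve the 1D lens equation theta - alpha(theta) = beta on the source axis."""
    f = lambda t: t - alpha(t, s) - BETA
    brackets = [(-1.0, -0.5), (-0.2, -0.001), (0.5, 2.0)]   # chosen to isolate the three roots
    return [brentq(f, a, b) for a, b in brackets]

def magnification(theta, s):
    """|mu| for an axisymmetric lens: 1 / |(1 - alpha/theta)(1 - d alpha / d theta)|."""
    tangential = 1 - THETA_E / np.sqrt(s**2 + theta**2)
    radial = 1 - THETA_E * s**2 / (s**2 + theta**2)**1.5
    return 1.0 / abs(tangential * radial)

# the central (odd) image is the root closest to the lens centre
mu_central = {s: magnification(sorted(images(s), key=abs)[0], s) for s in (0.1, 0.05)}
print(mu_central)   # the central image is much fainter for the smaller core radius
```

Halving the core radius in this toy model demagnifies the central image by a further factor of several, which is the behaviour exploited when central-image detections (or limits) are used to constrain central density profiles and black hole masses.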
In principle such core emission can be distinguished by observation at different frequencies and examination of the radio spectrum, to see if it differs from the other images.

Observations to detect odd images are very difficult, because they require a combination of high resolution and extreme radio sensitivity. A comprehensive theoretical study (Keeton 2003) of likely lens mass profiles, based upon HST observations of Virgo ellipticals (Faber et al. 1997), showed that central image detections were likely only once flux density levels of 10-100 µJy were reached. This level is only now becoming routine thanks to high-bandwidth upgrades to the VLA (now the JVLA) and MERLIN (now e-MERLIN). With older instruments, there is only one secure detection of a central image in a galaxy-mass lensing system, namely PMN J1632−0033; this detection gives an upper limit on the mass of any central black hole and a lower limit on the central surface mass density. These can alternatively be rewritten as joint limits on the index of the central mass power law and the black hole mass. In principle, the degeneracy between these two parameters can be broken in the case where the third image can itself be split, by the combined lensing effect of the black hole and central stellar cusp, into two images (Mao, Witt & Koopmans 2001). Such a detection, although very much more difficult and requiring another factor of 10 in sensitivity, would be very exciting, because it would enable the immediate measurement of the black hole mass and central stellar cusp density separately. Even the detection of third images, or significant limits thereon, in a number of radio lenses would give a powerful indication of the evolution of the central regions of elliptical galaxies between z = 0 and the typical lens redshifts.

Quasar microlensing: a probe of quasar structure
The combined effect of many stars within the lensing galaxy is to produce a maze of caustics, elongated regions of high magnification with dimensions of microarcseconds, which form an intricate pattern across which the source moves. Sources with angular sizes smaller than the characteristic scales of this pattern suffer time-dependent magnification as the pattern moves across them, and consequently the brightness of each lensed image varies as its line of sight crosses the caustic pattern. The details of the resulting effects on the image lightcurves were calculated in the years following the discovery of Q0957+561 (Chang & Refsdal 1979, Paczynski 1986, Kayser et al. 1986, Kayser & Refsdal 1989). Microlensing was first observed by Irwin et al. (1989) in the lens system Q2237+0305 (the “Einstein cross”, Huchra et al. 1985), which is a four-image lensed system produced by a low-redshift spiral galaxy with a high central stellar density around the lensed images; the system is also useful because the time delays are small, much less than the timescale of variations due to microlensing.

The most basic information carried by the microlensing lightcurves is a combination of the source size and the mass of the microlensing objects (Schmidt & Wambsganss 1998, Wyithe et al. 2000, Yonehara 2001, Kochanek 2004). However, the fact that sources of different sizes respond differently to microlensing by the stars in the lens galaxy offers an opportunity to study sources in great detail (Wambsganss & Paczynski 1991), as well as a way to infer the presence of microlensing from differences in spectra between one image and another (e.g. Wisotzki et al. 2003).

The central regions of quasars contain an accretion disk close to the central supermassive black hole, with temperatures of over 10000 K, producing hard UV and X-ray emission.
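The microarcsecond scale of the stellar caustic pattern follows from the Einstein radius of a single star at a cosmological lens distance, which sets the angular scale below which a source is strongly microlensed. A sketch, assuming a flat ΛCDM cosmology (Ωm = 0.3, H0 = 70 km/s/Mpc) and, purely for illustration, a 1 M⊙ star with the Q0957+561 lens and source redshifts:

```python
import numpy as np

C = 2.998e5                      # speed of light, km/s
H0, OMEGA_M = 70.0, 0.3          # assumed flat LCDM cosmology
G_M_SUN = 1.327e11               # G * M_sun in km^3/s^2
MPC_KM = 3.086e19                # km per Mpc

def comoving_mpc(z, n=10000):
    """Line-of-sight comoving distance for flat LCDM, by trapezoidal integration."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OMEGA_M * (1 + zs)**3 + (1 - OMEGA_M))
    return (C / H0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))

def theta_e_microarcsec(m_solar, z_lens, z_src):
    """Einstein radius of a point-mass lens, in microarcseconds."""
    dc_l, dc_s = comoving_mpc(z_lens), comoving_mpc(z_src)
    d_l = dc_l / (1 + z_lens)                # angular diameter distances
    d_s = dc_s / (1 + z_src)
    d_ls = (dc_s - dc_l) / (1 + z_src)       # valid for a flat cosmology
    ratio = d_ls / (d_l * d_s * MPC_KM)      # 1/km
    theta = np.sqrt(4 * G_M_SUN * m_solar / C**2 * ratio)   # radians
    return theta * 206265e6                  # radians -> microarcseconds

print(theta_e_microarcsec(1.0, 0.36, 1.41)) # of order a microarcsecond
```

A few microarcseconds is far smaller than the milliarcsecond radio cores discussed earlier but comparable to optical accretion-disk sizes, which is why microlensing contaminates optical flux ratios while leaving radio ones largely untouched.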
Further from the nucleus are found the broad-line regions, showing typical velocity widths of a few thousand km s^-1; reverberation mapping studies of local broad-line AGN have yielded typical size scales of a few light-weeks for these areas. On larger scales still are likely to lie tori of material which reprocess the quasar radiation and re-emit photons in the infrared, together with narrow-line emission regions a few hundred parsecs from the centre.

Interesting results began to emerge about a decade ago as lensed quasars were monitored extensively at optical wavebands. It is expected that the source sizes are different in different optical colours, because the temperature of the accretion disk increases as its radius decreases, and this should show up as a chromatic microlensing signal (Wambsganss & Paczynski 1991), which was duly found in observational programmes (e.g. Wisotzki et al. 1993, Claeskens et al. 2001, Burud et al. 2002). This effect can be used to estimate the accretion disk size and structure (Poindexter, Morgan & Kochanek 2008, Morgan et al. 2008, Poindexter et al. 2010, Hutsemékers et al. 2010, Dai et al. 2010, Blackburne et al. 2011, Muñoz et al. 2011); the picture that emerges in some cases is of a scale-size of a few light days and a temperature-radius profile that is consistent with a standard Shakura-Sunyaev thin disk (Poindexter et al. 2008). But this is by no means a universal result, and in many cases the inferred size is bigger or the temperature profile is different. For example, Blackburne et al. (2011) analyse multiwavelength observations of a sample of lensed quasars and find that the microlensing properties of many of the objects imply accretion disk sizes up to a factor of 10 larger than standard disks.

Comparisons of the spectra of the broad emission lines in different images of quasar lens systems have shown that the BLR is also microlensed (Abajas et al. 2002, Richards et al. 2004, Wayth et al. 2005, Keeton et al. 2006, Abajas et al. 2007, Sluse et al.
2007, Hutsemekers et al. 2010, Sluse et al. 2012), as originally predicted over two decades earlier (Nemiroff 1988, Schneider & Wambsganss 1990). Like the continuum microlensing studies, these are very important clues to the structure of the emitting object. Results from this work include the determination of the overall size scale of the BLR.

Quasar lenses as probes of cosmology
Cosmological parameters
Well before the discovery of the first lensed quasar, Refsdal (1964) pointed out that a lensed quasar system could be used to measure the Hubble constant. The basic idea is simple. Light travels along two or more different paths from the source to the observer, via deflections at different points in the lens plane. The resulting path difference can be measured, if the background source is variable, by comparing light curves from the two images and multiplying the time delay by the speed of light. If the source and lens redshifts are known, this gives an absolute measure of distance together with redshift in the system, the combination of which gives the distance scale. Fortunately, the expected time delays from typical lens configurations are on timescales of weeks to months. In principle, this method offers a clean, one-step determination of the Hubble constant on cosmological scales. Even better, measurements in a number of different lens systems at different redshifts could also allow measurement of H(z) and hence other cosmological parameters.

Historically, ecstasy at the cosmological prospects after the 1979 detection of Q0957+561 quickly turned to agony, both because of the long path to the secure determination of a time delay in Q0957+561 itself (Kundic et al. 1997), and following the appreciation of the extent of the major systematic of this method. This systematic is closely related to the problem of determining the macromodel in a lensed system: the derived Hubble constant is effectively degenerate with the macro-properties of the lens model, in the sense that steeper mass profiles produce a higher H₀ for a given time delay. Worse still, the mass-sheet degeneracy rescales the predicted time delay while leaving the image positions and fluxes unchanged, and thus has an effect on H₀ which is unknown in a single-source system, unless a census of all the mass along the line of sight can be taken.

Again there are a number of responses to the problem.
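Before turning to those responses, the relations underlying the time-delay method can be written down explicitly; the following is the standard form (see e.g. Schneider, Ehlers & Falco 1992 for the derivation, which is not reproduced here):

```latex
% Time delay between images i and j at angular positions \theta_i, \theta_j,
% for a source at \beta, with \psi the projected lensing potential:
\Delta t_{ij} = \frac{D_{\Delta t}}{c}
  \left[ \phi(\boldsymbol{\theta}_i,\boldsymbol{\beta})
       - \phi(\boldsymbol{\theta}_j,\boldsymbol{\beta}) \right],
\qquad
\phi(\boldsymbol{\theta},\boldsymbol{\beta})
  = \frac{(\boldsymbol{\theta}-\boldsymbol{\beta})^2}{2} - \psi(\boldsymbol{\theta}),
% with the "time-delay distance" combining the three angular diameter distances:
D_{\Delta t} \equiv (1+z_l)\,\frac{D_l\,D_s}{D_{ls}} \;\propto\; H_0^{-1}.
% A measured \Delta t_{ij}, plus a mass model supplying \phi, therefore yields H_0.
% The mass-sheet degeneracy: the transformation of the convergence
\kappa(\boldsymbol{\theta}) \;\to\; \lambda\,\kappa(\boldsymbol{\theta}) + (1-\lambda)
% leaves image positions and flux ratios unchanged, but rescales
\Delta t_{ij} \;\to\; \lambda\,\Delta t_{ij},
% so the inferred H_0 is biased by the unknown factor \lambda unless
% independent information (e.g. a line-of-sight mass census) constrains it.
```

The proportionality D_Δt ∝ 1/H₀ is what makes a measured delay an absolute distance indicator, and the last two lines summarise why the mass-sheet degeneracy enters H₀ directly.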
One is to abandon the attempt to measure H₀ or cosmological parameters in general, regard them as a solved problem at the level that lenses will constrain, and regard time delays instead as a means to break degeneracies in mass models, by using them together with a "known" H₀ – for example, from the Hubble Key Project measurements of Cepheid variables. Many workers in the field would regard this as an unnecessarily defeatist approach, given three facts: lens H₀ work requires much less high-cost observing time than the alternatives; large numbers of time delays will become available in future; and although the lens modelling systematic is serious, it is only one systematic and not many.

The second response is to investigate a statistical approach. Can the systematic error be reduced to a random error, albeit a large one in an individual object, which can then be beaten down by root-n statistics? This approach again begins with the average properties of the SLACS galaxy-galaxy lenses, which appear to have a mass slope very close to isothermal (Koopmans et al. 2006) and mostly lie at low redshift; these average properties can then be used to constrain H₀ in this way.

(Early in the history of lensing, the number of lenses in a complete sample appeared to be a useful way of constraining Λ, because a higher Λ increases lengths at high redshift and hence increases the optical depth to lensing (Kochanek 1996); indeed this was the justification for the thoroughness of the CLASS survey in attempting to obtain a complete sample of lenses. Although this line of research yielded a fairly clear result of non-zero Λ (Kochanek 1996, Chae et al. 2002, Mitchell et al. 2005, but see also Keeton 2002), the rate at which constraints on dark energy improve is a very slow function of increasing lens-sample size, and the approach has since been abandoned.)

(Strictly, the dependence is not directly on the mass profile; it is related to the surface mass density in the annulus between the lensed images (Kochanek 2002, but see also Read, Saha & Macciò 2007).)
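The root-n idea can be illustrated with a toy Monte Carlo (a sketch, not from the review: the true H₀ value, the 10 km s⁻¹ Mpc⁻¹ per-lens scatter, and the sample sizes below are invented purely for illustration). If the modelling systematic really can be converted into an unbiased random error, the scatter of the sample mean falls as 1/√n:

```python
import numpy as np

rng = np.random.default_rng(42)

H0_TRUE = 70.0         # km/s/Mpc; assumed true value, illustrative only
SIGMA_PER_LENS = 10.0  # assumed per-lens scatter once systematics are randomised

def averaged_h0_error(n_lenses, n_trials=20000):
    """Monte Carlo scatter of the mean H0 obtained from n_lenses systems.

    Each trial draws n_lenses independent, unbiased estimates and averages
    them; the standard deviation of those averages is the effective error.
    """
    estimates = H0_TRUE + SIGMA_PER_LENS * rng.standard_normal((n_trials, n_lenses))
    return estimates.mean(axis=1).std()

if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        # Scatter should track SIGMA_PER_LENS / sqrt(n)
        print(n, round(averaged_h0_error(n), 2))
```

The caveat, as the text stresses, is that this scaling only holds if the per-lens error really is random and unbiased; a common systematic (such as an uncorrected mass-sheet term shared by all lenses) does not average away.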
Similar attempts have been made to investigate H₀ using existing time delay information and best-attempt mass models (Oguri 2007), yielding results around 70 km s⁻¹ Mpc⁻¹, although leaving the uncomfortable feeling that mutually formally incompatible results for different lenses are being shoehorned into a harmonious conclusion. Another approach is to explore the variety of possible lens models using non-parametric methods, in order thereby to explore the possible range of H₀ for each system (Saha et al. 2006), again yielding results of 72 km s⁻¹ Mpc⁻¹, with errors of about 15%.

The third, and in my view the fruitful response in the long run, is to grit one's teeth and do the hard work in individual cases, in the knowledge that it will get easier as telescopes become more powerful. Two recent examples of the work required are provided by the detailed investigations of the time-delay lenses RXJ1131−1231 and B1608+656, in which the determination of H₀ is only done once all the systematics have been estimated. However, when all this is done, the resulting H₀ values from the two lenses, including all the systematic and random errors, are of comparable quality to the HST Key Programme (6% error). There is a serious prospect from further such studies of a competitive contribution to the determination of w which is largely orthogonal to other probes, once a few dozen such lenses have been thoroughly investigated (e.g. Linder 2011).

Galaxy evolution
Individual lenses can be used to find values for the Hubble constant (or, if the cosmological world model is known, ambiguity-free galaxy mass models). The statistics of well-selected lens samples, however, can be used to investigate galaxy evolution. This is because the number of lenses as a function of galaxy and source redshift in such a sample depends on the evolution in both density and mass of the available lens population. To attempt this, a sample whose selection effects are under control is needed; in practice the easily identifiable properties of quasars, and the possibility of clean separation between lensed and unlensed objects, make quasar lens samples the obvious choice.

At least two fairly complete quasar lens samples exist. The CLASS statistically complete survey was the result of a systematic attempt to identify all lenses with separation of >
300 mas and primary:secondary flux ratio below a fixed limit. The result of such analyses is that the evolution of the lens-galaxy velocity dispersion, parametrised as ν_σ ≡ d ln σ / d ln(1 + z), is zero within errors.

The future: bigger samples, and how to find them
The current largest sample of quasars, the SDSS quasar list (Schneider et al. 2010), contains about 100000 objects. Given a typical optical depth to lensing of order one in a thousand for such objects, samples of this size should contain of order a hundred lensed quasars. Radio-selected quasars are objects with accurate position information, but their faintness in the radio makes systematic large surveys difficult, even with the current generation of upgraded radio arrays. Of such arrays, the only one which has the required combination of high sensitivity and sub-arcsecond resolution is LOFAR, a low-frequency radio interferometer array centred in the Netherlands (van Haarlem 2005), but high-resolution surveys at great depth are probably a few years away.

In the optical, a combination of wide area and high resolution is likely to be achieved in the near future by a number of telescopes. The first is GAIA, scheduled for 2013, a satellite primarily designed for astrometry and measurements of proper motions of Galactic stars. By virtue of its area coverage and accurate measurements of point sources, however, it is also well suited to making a large census of about half a million quasars, to determine which of these objects are extended and thus potentially lensed. This alone should increase the lensed quasar sample by a large factor (Surdej et al. 2002). Towards the end of the decade, two major advances are likely with the advent of the Large Synoptic Survey Telescope (LSST) and Euclid. Euclid is an ESA medium mission scheduled for launch in 2019 which will have close to all-sky coverage and 150-200 mas resolution at optical and NIR wavebands. It will provide imaging at slightly lower angular resolution and sensitivity than the 2 square-degree COSMOS HST field, but over the whole sky. Its mission includes the detection of about 1000 quasar lenses, about an order of magnitude increase on the present sample (as well as hundreds of thousands of galaxy-galaxy lens systems). Still vaster samples will be provided by LSST (Abell et al.
2009), with the additional advantage of multiple observations of the same field, allowing quasars to be identified by variability (Kochanek et al. 2006) and probably yielding a sample of several thousand lensed quasars. On the same timescale, the Square Kilometre Array will provide similarly large samples, selected at radio wavelengths (Koopmans et al. 2004).

Large samples are good for two reasons. The first is that "more of the same" approaches can be tried with larger numbers, although they do rely on systematic biases being eliminated. If we assume that the mass-slope and mass-sheet model degeneracies can be controlled, so that the accuracy improves as the square root of the number of objects, then spectacular results can be obtained; Coe & Moustakas (2009) calculate that, in conjunction with Planck priors, an accuracy of ∼
3% can be obtained on w. This assumes that time delays will be measurable given the LSST cadence. If a smaller sample of the quasar lenses is measured, but with more intensive follow-up, then extrapolation from the work of Suyu et al. (2012) suggests that comparable results on post-H₀ parameters can be achieved in conjunction with other (BAO/SNe) cosmological probes, since the lensing constraints are often orthogonal to others in parameter space. Linder (2011) calculates that the dark energy figure of merit is potentially improvable by a factor of 5 by including lensing information from future surveys.

The second advantage of large samples is that they are likely to contain a small number of high-value objects. One of the most prized types of lens system is a quasar lens with a second source at a different redshift, since such double-source-plane systems allow mass model degeneracies to be immediately broken. The problem has been investigated by Collett et al. (2012), who find that a small number of these rare objects give 15% accuracy in w very quickly. Many such sources would currently be difficult or impossible to follow up, owing to the faintness of some of the sources, but the era of 30-m class telescopes is around the corner, and such follow-up operations will become routine if not easy.

Summary and conclusion
The future uses of quasar lenses, as in the past, divide into three natural applications: study of the lenses, study of the sources, and study of cosmology and galaxy evolution. I briefly summarise the results from the main body of the review, and the prospects for the next ten years.

• We already know basic facts about lens mass distributions: elliptical galaxies have isothermal mass distributions at low redshift, and there is some indication of steepening with redshift. In prospect is a vast increase in parameter space, with the ability to study the evolution of galaxy mass profiles over a much wider range of redshift, and over different masses of lens galaxies and Hubble types. Much of this work will be done with lens-selected surveys, like the existing SLACS galaxy-galaxy lenses; however, quasar lens systems give the opportunity to make large, source-selected surveys and to examine the statistical properties of lenses independently of their selection. The major impact will be in the study of substructure in lens galaxies, however. The existing sample of substructure-friendly quasar lenses is very small and has already yielded a large and surprising body of information about the small-scale features of lensing galaxies. Expansion of these samples will provide critical tests for galaxy formation models.

• Microlensing studies have yielded unprecedented insights into the nature of quasars, particularly the proportions of stellar and smooth matter within the mass budget and the properties – size and physics – of the central engine and surrounding emission-line regions. These studies require multi-wavelength observations and patient monitoring, and have concentrated on relatively few objects. With future telescopes and high-cadence monitoring we can expect that studies of quasar physics at huge effective resolution will become routine.

• Perhaps the most important future application of quasar lensing lies in a promise which has taken some time to fulfil, that of cosmography.
Many years passed between the discovery of the first quasar lens and the first reliable determinations of the Hubble constant. The process is now accelerating, thanks to coordinated and long-term monitoring campaigns, allied to advances in lens modelling and observation of individual lens systems. Such investigations are already closing in on estimates of the Hubble constant with error constraints stringent enough to contribute to the overall cosmological world model. Future measurements on a large sample of quasar lenses will give cosmological parameter estimates with error circles orthogonal to many others, and factors of several increase in figures of merit for dark energy searches.

• Finally, the prospect of huge lens samples carries with it the probability of finding something totally new. We can speculate about the likelihood of possible candidates – completely dark lenses, cosmic strings – but with the expectation that the unexpected is likely to prove more surprising still.
Acknowledgements
I thank Ian Browne for a careful reading of, and comments on, the manuscript.