The Effect of Color Space Selection on Detectability and Discriminability of Colored Objects
Amir Rasouli and John K. Tsotsos
Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
{aras, tsotsos}@eecs.yorku.ca

Abstract - In this paper, we investigate the effect of color space selection on the detectability and discriminability of colored objects under various conditions. 20 color spaces from the literature are evaluated on a large dataset of simulated and real images. We measure the suitability of color spaces from two different perspectives: detectability and discriminability of various color groups. Through experimental evaluation, we found that there is no single optimal color space suitable for all color groups. The color spaces have different levels of sensitivity to different color groups, and they are useful depending on the color of the sought object. Overall, the best results were achieved in both simulated and real images using the color spaces C1C2C3, UVW and XYZ. In addition, using a simulated environment, we show a practical application of color space selection in the context of top-down control in active visual search. The results indicate that, on average, the color space C1C2C3 followed by HSI and XYZ achieves the best time in searching for objects of various colors. Here, the right choice of color space can improve the time of search on average by 20%. As part of our contribution we also introduce a large dataset of simulated 3D objects.
Keywords - color space; visual attention; top-down control; robotic visual search.
I. INTRODUCTION
The choice of color space is an important task in various computer vision applications such as image compression [1] and annotation [2], object detection [3] or object tracking [4], [5]. However, it is hard to define a universal color space as color can be modeled in numerous ways, e.g. Luv, Lab, HSV, etc. [6].

The computer vision community has proposed a large number of solutions for color space selection for different applications. In one of the early works, Ohta et al. [7] introduce a new color space, I1I2I3, and show that using this color space can result in improved object segmentation. In [8], the authors use automatic color space selection for biological image segmentation. They use the Liu and Borsotti segmentation evaluation method to determine which color space provides the best segmentation. Similar adaptive approaches have also been applied to applications such as sky/cloud [9] or skin [10] segmentation.

Using the right color space is also important in object detection and recognition applications. Vezhnevets et al. [11] perform a comparative study on various color spaces for skin detection. The authors highlight that changes in color luminance have little effect on separating skin from non-skin regions. In the context of cast shadow detection, Benedek and Sziranyi [12] evaluate numerous color spaces such as HSV, RGB and Luv, and show that the Luv color space is the most efficient for color-based clustering of the pixels and foreground-background-shadow segmentation. Van de Sande et al. [13] evaluate different color spaces in conjunction with SIFT features for object detection. They argue that, depending on the nature of the dataset, using different color spaces such as opponent axes or RGB can result in the best performance. Scandaliaris et al. [14] combine three color spaces, including C1C2C3, opponent axes and RGB, to generate shadow-invariant features to detect objects by finding their contours. Moreover, there are a number of works that investigate the suitability of color spaces for texture classification of objects, such as textile [15] and tree [16] classification.

In robotics, color spaces are also studied for various applications. Song et al. [17] use a genetic algorithm to generate a color space suitable for recognition of colored objects in the context of robotic soccer. They show that using the color space rSc2, the highest recognition rate can be achieved compared to spaces such as YUV or HSI. In a similar study [18], the authors propose the uSb color space based on an iterative feature selection procedure for recognition of colored objects for underwater robotics. Duan et al. [19] investigate the optimal color space for segmentation of aerial images captured by UAVs. The authors report that for Bayer true color images, the I1I2I3 color space is the optimal choice for the segmentation of buildings compared to YCbCr, YIQ or Lab.

In the context of robotic visual search, color features can be used to optimize the process of search [20]. In this work the authors use the sought object's color in a top-down control manner to bias the search. If the sought object is outside detection range, its color features are used to guide the search. The similarity between detected colors and the color of the object is measured using a backprojection technique. If a similarity is observed, the importance of the corresponding regions is increased, and therefore the robot searches those regions first. The authors show that using this method, the time of search can be significantly reduced. In [20], however, the authors only use the normalized RGB color space and the search is only performed for a single red and green toy.

In this paper we first investigate the impact of color space selection on the detection of objects with different colors (detectability) and then, using a cluster scoring technique, identify how well different color groups can be separated (discriminability) using each color space. We perform this evaluation for the 20 most common color spaces in the literature using both synthetic and real images. Finally, we use active visual search as an application to examine the role of color space selection in practice.

II. COLOR SPACE AND ROBUSTNESS
A. Color space
We evaluated 20 color spaces (see Table I) including the RGB space.

Table I: The list of color spaces evaluated

Space                  | Reference
XYZ, I1I2I3, HSI, YIQ  | Guo and Lyu [21]
Lab, YCrCb, rg, HSV    | Danelljan et al. [4]
C1C2C3                 | Salvador et al. [22]
Opp, Nopp, Copp        | Liang et al. [5]
Luv, xyz               | Moroney and Fairchild [1]
YES                    | Saber et al. [2]
CMY, YUV               | Tkalcic and Tasic [23]
HSL                    | Weeks et al. [24]
UVW                    | Ohta et al. [25]
xyY                    | Lucchese and Mitra [26]
As mentioned earlier, our objective is to evaluate the suitability of color spaces for various applications, in particular, detecting and distinguishing colored objects. In this sense, two factors have to be considered. First, the color should be represented in a way that it is easily detectable (Detectability) under various conditions, such as in the presence of shadow, illumination changes or various reflecting surfaces. Second, different color groups should be easily separable (Discriminability) and not confused with one another. Measuring the Detectability and Discriminability of colors in different color spaces accounts for how robust a color representation can be.
B. Measuring robustness

1) Detectability:
We measure the detectability of different colors using the histogram backprojection (BP) technique [27]. The BP algorithm generates a probability map in which the pixel values refer to how likely the presence of a given color is in the image. The computation of the BP map is as follows. Let h(C) be the histogram function which maps color space C = (a_1, a_2, ..., a_n), where a_i is the i-th channel of C, to a bin of the histogram H(C) computed from the object's template, T_Θ. The backprojection of the object's color over an image I is given by

∀x, y : b_{x,y} := h(I_{x,y,c}),   (1)

where b is the grayscale backprojection image.

Choosing the right bin size for the histograms is vital in the BP algorithm. The larger the size of the bins, the more tolerant BP is to illumination change; at the same time, different colors will more likely be detected together. To eliminate bias in our detection, we use histograms with bin sizes of 16, 32, 64 and 128 and average the results over all configurations.

We compare the detection results against the ground truth data and use 3 measures of performance: Recall (R) = T_pos / (T_pos + F_neg), Precision (P) = T_pos / (T_pos + F_pos) and FMeasure = 2 * R * P / (R + P), where T_pos, F_neg, and F_pos stand for true positives, false negatives and false positives respectively.
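To make the measurement procedure concrete, the following is a minimal sketch of histogram backprojection (Eq. 1) and the three scores, using OpenCV and NumPy. The bin count, the binarization threshold and the function names are illustrative assumptions, not the paper's exact implementation.

```python
import cv2
import numpy as np

def backproject(template, image, bins=32):
    # Build the template color histogram over all three channels, then
    # backproject it onto the image: each output pixel holds the
    # histogram value of that pixel's color (Eq. 1).
    channels = [0, 1, 2]
    hist = cv2.calcHist([template], channels, None, [bins] * 3, [0, 256] * 3)
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return cv2.calcBackProject([image], channels, hist, [0, 256] * 3, 1)

def scores(bp_map, gt_mask, thresh=127):
    # Binarize the backprojection map and compare against ground truth.
    det = bp_map > thresh
    gt = gt_mask > 0
    tp = np.logical_and(det, gt).sum()
    fp = np.logical_and(det, ~gt).sum()
    fn = np.logical_and(~det, gt).sum()
    recall = tp / max(tp + fn, 1)
    precision = tp / max(tp + fp, 1)
    f_measure = 2 * recall * precision / max(recall + precision, 1e-9)
    return recall, precision, f_measure
```

In an evaluation like the paper's, this would be run for each bin size in {16, 32, 64, 128} and the scores averaged over the configurations.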
2) Discriminability:
One way to measure the discriminability (separability) of colors in different spaces is by performing clustering and measuring how well the color data is grouped into different clusters. For this purpose we use a K-means clustering technique with Expectation Maximization (EM) to find the maximum number of clusters that best represent the color distributions in each image. The number of clusters depends on the number of colors (out of the 12 colors of the traditional color wheel, see Figure 1) present in the input image.

Figure 1: The traditional color wheel.

We employ silhouette analysis [28] to measure the consistency of each cluster against the ground truth. Let i be a single pixel in the image and clust_p be the cluster that i belongs to. We define a(i) as the average dissimilarity of pixel i to all other pixels in cluster clust_p. We define d(i, CL) as the average dissimilarity of i to all other pixels in CL = {clust_1, clust_2, ..., clust_j}, j ≠ p, and b(i) = min d(i, CL). Based on these definitions, the silhouette measure s(i) of pixel i is given by

s(i) = (b(i) - a(i)) / max(a(i), b(i)),   (2)

where -1 ≤ s(i) ≤ 1. A value of 1 means pixel i is well clustered, whereas -1 implies that it does not belong to the allocated cluster.
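The following is a minimal sketch of this measurement using scikit-learn. Note the paper selects the number of clusters via EM; here the sketch simply sweeps the cluster count up to the number of wheel colors and keeps the best mean silhouette. The subsample size is an assumption to keep the pairwise-distance silhouette computation tractable.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_silhouette(pixels, max_colors=12, sample=5000):
    # pixels: (N, 3) array of pixel values in the color space under test.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)
    best = -1.0
    for k in range(2, max_colors + 1):
        # Cluster all pixels, then score a random subsample of them.
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
        score = silhouette_score(pixels[idx], labels[idx])
        best = max(best, score)
    return best
```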
III. TEST IMAGES

A. Synthetic images
We set up a simulated environment to capture the robustness of the color spaces to various illumination conditions. The following setups were considered for our simulation.
Objects - To model different surfaces we used objects with three different shapes, namely sphere, cylinder and cube (see Figure 2).

Figure 2: The sample shapes used in our experiments.
Colors - We used the 12 colors of the traditional color wheel depicted in Figure 1. The traditional color wheel is chosen because it represents three basic color schemes: primary, secondary and tertiary colors.
Configuration - The objects were placed on a circle with a radius of 2 meters from the center, equally distant from one another (see Figure 3).

Figure 3: The circular configuration of objects. The rightmost object is the 3D model of the camera.
Camera - We used a camera model with the same field of view and 1280-pixel-wide resolution as the Zed stereo camera.

Light - We used two types of light sources, directional and point, placed 10 meters above the scene. To vary illumination conditions, the directional light source was rotated about the y-axis at intervals of π/6, forming a total of 12 orientations. The point light source was also rotated along an arc in the x-z plane, and the arc itself was rotated about the z-axis 6 times equally (see Figure 4). Using the above setups, a total of 4752 simulated images were generated. Figure 5 shows some sample synthetic images that were generated under various lighting conditions.

B. Real images
A total of 60 real images were collected from the web, representing different materials, object shapes and lighting conditions. Unlike the synthetic images, the real images only contain a subset of the 12 colors; therefore we selected them so that all colors are reasonably distributed. Figure 6 illustrates some of the images used in the experiments.

Figure 4: The rotation path of the point light source above the objects.

Figure 5: Synthetic image samples generated under various lighting conditions. From the top: directional light, and point light with low and high intensity.

Figure 6: Real image samples collected from the web.

IV. EXPERIMENTS
We evaluated our samples both with and without normalization (pixel-wise normalization of each channel). Pixel-wise normalization is used to reduce the effect of illumination changes; it takes place on the original RGB images and is computed by dividing each pixel value in each channel by the sum of the values in all channels.

In the following subsections we use the following abbreviations for each color category: blue-green (bg), blue (b), green (g), green-yellow (gy), orange (o), orange-red (or), red (r), red-violet (rv), violet-blue (vb), violet (v), yellow-orange (yo), and yellow (y).
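A minimal sketch of this pixel-wise normalization, assuming an RGB image stored as an H x W x 3 array; the epsilon guard against all-zero pixels is an implementation assumption:

```python
import numpy as np

def pixelwise_normalize(rgb):
    # Divide each channel by the per-pixel channel sum, mapping every
    # pixel to chromaticity coordinates that discount overall intensity.
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=2, keepdims=True)
    return rgb / np.maximum(s, 1e-9)
```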
A. Color spaces in the simulated environment

1) Backprojection:
Performing backprojection (BP) on the original images without pixel-wise normalization, the scores in the majority of the color spaces are fairly low. This is due to the high degree of illumination change in the images. The only color spaces that perform well are the photometric color invariants (C1C2C3, NOPP and rg), which describe the color configuration of an image discounting the effect of shadows or highlights.

To compensate for the changes in illumination, the spaces NOPP and rg normalize the pixel values of each channel by dividing by the sum of pixel values in all channels. The C1C2C3 color space achieves the illumination-invariant behavior by calculating each channel value as arctan(a / max(b, c)), where a, b, c ∈ {R, G, B} and b ≠ c ≠ a (a minimal sketch of this conversion is given below).
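A sketch of the C1C2C3 conversion described above, assuming an H x W x 3 RGB array; the epsilon guard is an implementation assumption:

```python
import numpy as np

def to_c1c2c3(rgb):
    # Each channel is the arctangent of one band over the maximum of
    # the other two, e.g. C1 = arctan(R / max(G, B)), which discounts
    # uniform scaling of intensity across the bands.
    rgb = rgb.astype(np.float64) + 1e-9  # avoid division by zero
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    c1 = np.arctan(r / np.maximum(g, b))
    c2 = np.arctan(g / np.maximum(r, b))
    c3 = np.arctan(b / np.maximum(r, g))
    return np.stack([c1, c2, c3], axis=-1)
```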
After pixel-wise normalization, the results change significantly. The highlight is that the scores of the top three color spaces before normalization deteriorate due to loss of information as a result of double normalization. On the contrary, all other methods perform dramatically better due to the reduced effect of illumination changes.

Figure 7 shows the scores after normalization. The best performance is achieved in the color spaces HSI', UVW' and XYZ'. The prime sign (') indicates that the color space is applied after pixel-wise normalization of the input image. In addition, despite the drop in its performance, C1C2C3 still remains one of the top 5 color spaces.

After combining the results of both experiments, as shown in Figure 8, BP performs best in the C1C2C3 color space followed by UVW' and HSI'.

Although BP has the best performance on average in the spaces in Figure 8, it does not necessarily perform best in detecting each color group using those color spaces. In fact, using the C1C2C3 color space, the BP algorithm does not have the best hit rate in any of the color categories. To highlight the performance of BP using different color spaces to detect each color group, in Table II we list the top 3 color spaces for each performance measure and color group.

The best hit rate overall (recall) is achieved using HSI', UVW' and
XYZ'. HSI' is most suitable when the color is concentrated in only a single channel. On the other hand, UVW' and XYZ' are better for colors containing violet (red-blue) and yellow (green-red) respectively. Using the C1C2C3 color space, the best precision and FMeasure are obtained overall. As for the recall, in four cases the second best performance is achieved using this color space, and in the cases where the colors yellow or orange are present, using C1C2C3 is not advantageous.

Figure 7: The overall results of backprojection on normalized synthetic images: (a) Recall, (b) Precision, (c) FMeasure.

Figure 8: The overall results of backprojection on both original and normalized synthetic images. The color spaces noted by ' are the ones after normalization.

Table II: The top 3 color spaces for detecting each color in the synthetic images, per performance measure (best first).

Color | Recall             | Precision           | FMeasure
bg    | YCrCb', YUV', rg   | YCrCb', YUV', YIQ'  | YCrCb', YUV', rg
b     | HSI', C1C2C3, XYZ' | C1C2C3, NOPP, rg    | C1C2C3, NOPP, rg
g     | HSI', C1C2C3, rg   | C1C2C3, NOPP, rg    | C1C2C3, NOPP, rg
gy    | XYZ', xyY', rg     | XYZ', xyY', HSI'    | XYZ', xyY', HSI'
o     | HSI', HSL', XYZ'   | HSL', XYZ', HSI'    | HSI', HSL', XYZ'
or    | Luv', XYZ', HSI'   | Luv', XYZ', UVW'    | Luv', XYZ', HSI'
r     | HSI', XYZ', C1C2C3 | HSI', XYZ', C1C2C3  | HSI', HSL', C1C2C3
rv    | UVW', C1C2C3, HSI' | C1C2C3, xyY', HSV'  | UVW', C1C2C3, HSI'
vb    | UVW', C1C2C3, xyY' | UVW', C1C2C3, Lab'  | UVW', C1C2C3, HSI'
v     | UVW', HSI', C1C2C3 | C1C2C3, HSI', UVW'  | C1C2C3, HSI', UVW'
yo    | XYZ', UVW', HSL'   | XYZ', UVW', HSL'    | XYZ', UVW', HSL'
y     | XYZ', UVW', rg     | NOPP, rg, YCrCb'    | rg, NOPP, COPP'

Figure 9: The silhouette scores of clustering using different color spaces: (a) original image, (b) after normalization.
2) Clustering:
We performed the EM clustering on both original and normalized images. The maximum number of clusters was set to the maximum number of colors present in the image (in this case 12), and the silhouette score was measured for each color space.

Figure 9 shows the results of the clustering. As expected, the clustering is improved after normalization. Considering the results in both scenarios, the top three color spaces are YIQ' (0.6977), C1C2C3 (0.6741) and rg (0.6701).

B. Color spaces in the real images

1) Backprojection:
We followed the same procedure as for the simulated images and ran the BP algorithm on both original and pixel-wise normalized real images. The templates for BP are generated using a color checker. Once again, without normalization the performance using the majority of color spaces is poor. After normalization, however, the best performance is achieved using the color spaces C1C2C3' and UVW' followed by XYZ', as shown in Figure 11. It should be noted that the performance of C1C2C3 is also slightly improved after normalizing the image.

The performance of BP in detecting different color groups in different color spaces is reflected in Table III. The best performance results from the C1C2C3 color space. The only cases in which the performance was poor were the color groups y and gy, similar to the simulated images. The runner-up color spaces are UVW' and XYZ'. The performance was more or less similar in both synthetic and real images. However, there are some exceptions. For instance, in the case of the color v, the best performance belongs to UVW' in the synthetic images, where XYZ' is not even in the top three spaces. In contrast, the opposite results were observed on the real images: here, the best space is XYZ' whereas UVW' is not in the top three.
2) Clustering:
The clustering was done on both original and normalized images. The maximum number of clusters was set to the maximum number of colors present in the image, ranging from 4 to 12 depending on the image. The silhouette scores are measured and reflected in Figure 12. Similar to the simulated images, normalization improves the overall results. The best performance was achieved using the color spaces COPP, rg and C1C2C3 on the original images, and UVW', C1C2C3' and rg' after normalization.
Table III: The top 3 color spaces for detecting each color in the real images, per performance measure (best first).

Color | Recall             | Precision           | FMeasure
bg    | UVW', C1C2C3, YIQ' | UVW', YIQ', C1C2C3  | UVW', C1C2C3, YIQ'
b     | C1C2C3, UVW', YIQ' | C1C2C3, Luv', UVW'  | C1C2C3, UVW', rg
g     | Lab', UVW', C1C2C3 | C1C2C3, HSI', Luv'  | Lab', C1C2C3', UVW'
gy    | UVW', XYZ', COPP'  | UVW', rg, Lab'      | UVW', XYZ', COPP'
o     | C1C2C3, UVW', XYZ' | UVW', Lab', Luv'    | C1C2C3, UVW', XYZ'
or    | C1C2C3, UVW', XYZ' | NOPP', HSL', C1C2C3 | C1C2C3, UVW', XYZ'
r     | Lab', C1C2C3, UVW' | C1C2C3, UVW', Lab'  | C1C2C3, Lab', UVW'
rv    | Luv', C1C2C3, Lab' | YIQ', UVW', C1C2C3' | Luv', C1C2C3, UVW'
vb    | YCrCb', YUV', Lab' | Luv', I1I2I3', Lab' | YCrCb', YUV', Lab'
v     | XYZ', COPP, C1C2C3 | XYZ', YIQ, C1C2C3   | COPP, XYZ', C1C2C3
yo    | XYZ', C1C2C3, Luv' | C1C2C3, XYZ', CMY'  | C1C2C3, UVW', Luv'
y     | UVW', XYZ', YIQ'   | YIQ', rg, HSI'      | UVW', XYZ', C1C2C3

Figure 10: The overall results of backprojection on normalized real images: (a) Recall, (b) Precision, (c) FMeasure.

C. Active visual search

In this section we put our findings into practice and evaluate the effect of the color space choice on the performance of a mobile robot searching for an object. To perform the search we used the same greedy algorithm introduced in [29], with the difference of omitting the bottom-up saliency to eliminate any bias besides the color values; a simplified sketch of such color-based biasing is given below.

The experiments were conducted in the Gazebo simulation environment. For this purpose we generated a large number of objects (see Figure 13) to create a typical office environment (see Figure 14). The search robot is a simulated Pioneer 3 platform equipped with a Zed camera for visual processing and a Hokuyo lidar for mapping. In addition, communication and navigation are done using the ROS nav package. The dataset is available at http://data.nvision2.eecs.yorku.ca/3DGEMS/.
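To illustrate how a backprojection map can bias which region the robot visits first, here is a minimal sketch. The region representation and the mean-response scoring are illustrative assumptions, not the exact algorithm of [29].

```python
import numpy as np

def rank_regions(bp_map, regions):
    # regions: list of (x0, y0, x1, y1) candidate view regions.
    # Score each region by the mean backprojection response inside it,
    # so regions resembling the sought object's color are searched first.
    def score(region):
        x0, y0, x1, y1 = region
        return float(bp_map[y0:y1, x0:x1].mean())
    return sorted(regions, key=score, reverse=True)
```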
Figure 11: The overall results of backprojection on both original and normalized real images. The color spaces noted by ' are the ones after normalization.

Figure 12: The silhouette scores of clustering using different color spaces: (a) original image, (b) after normalization.

Figure 13: Object samples used in the experiment.

Figure 14: The simulated office environment used for the experiments. The targets and the robot are highlighted by yellow and red circles respectively.

Figure 15: Target objects used in the experiments.

We used 6 target objects with various colors (see Figure 15) placed randomly in the scene. In each iteration the locations of the objects were rotated, so that objects were equally represented in the environment. In each configuration, the robot was placed at four different locations to begin the search. As for the color spaces, we used the top 6 performing spaces found in Section IV-A, namely
C1C2C3, UVW', XYZ', YCrCb', Luv' and HSI'. In total, over 1000 experiments were conducted.

Table IV lists the results of the experiments for each object. The outcomes are consistent with the results in Table II: using the top color spaces resulted in lower search times. There are, however, a few exceptions that show the importance of discriminability. For instance, UVW' is placed in the top 2 positions for detecting 5 color groups, but in search it is at best placed in the 3rd and 4th positions. Given this color space's low silhouette score, the detection algorithm can be distracted by the other colors in the environment, and therefore the efficiency of the color space is reduced. In contrast, better performance is achieved using C1C2C3. This is consistent with its high silhouette score, which means this space is more robust against distraction.

Table IV: The results of the search experiments for each object (color spaces ranked by search time, best first).

Object      | Color spaces
blue cup    | XYZ', C1C2C3, HSI', UVW', YCrCb', Luv'
yellow cup  | C1C2C3, HSI', UVW', XYZ', Luv', YCrCb'
green cup   | HSI', C1C2C3, XYZ', UVW', Luv', YCrCb'
violet book | XYZ', C1C2C3, HSI', Luv', UVW', YCrCb'
red can     | HSI', C1C2C3, UVW', Luv', YCrCb', XYZ'
orange can  | C1C2C3, XYZ', Luv', UVW', YCrCb', HSI'

Figure 16: The average result of the search for all objects.

Figure 16 shows the average results of the search for all objects. Here, the best performance overall is achieved using the color spaces (in descending order) C1C2C3, HSI' and XYZ'.

V. CONCLUSION
In this paper we evaluated a large number of color spaces to measure their suitability for detecting and discriminating differently colored objects. Using empirical evaluations on both synthetic and real images, we showed that there is no single optimal color space for detecting and discriminating all color groups. On average, however, the best performance was achieved using the color spaces C1C2C3, UVW and XYZ.

The color spaces were also put to the test in the context of visual search in a simulated environment. A combination of a high detection rate and robustness to distractors resulted in the lowest time of search using the C1C2C3 and XYZ color spaces.

We only measured the robustness to distraction by a clustering technique. It would be beneficial to also measure the sensitivity of each color space to different color groups. In addition, the visual search experiments were done only in a simulated environment. In the future, we intend to perform a similar study on a practical platform to confirm our evaluation on the real images.

ACKNOWLEDGMENT
We acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the NSERC Canadian Field Robotics Network (NCFRN), and the Canada Research Chairs Program through grants to JKT.

REFERENCES
[1] N. Moroney and M. D. Fairchild, "Color space selection for JPEG image compression," Journal of Electronic Imaging, vol. 4, no. 4, pp. 373-381, 1995.

[2] E. Saber, A. M. Tekalp, R. Eschbach, and K. Knox, "Automatic image annotation using adaptive color classification," Graphical Models and Image Processing, vol. 58, no. 2, pp. 115-126, March 1996.

[3] A. Kumar and S. Malhotra, "Pixel-based skin color classifier: A review," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 7, pp. 283-290, 2015.

[4] M. Danelljan, F. S. Khan, M. Felsberg, and J. van de Weijer, "Adaptive color attributes for real-time visual tracking," in The IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1090-1097.

[5] P. Liang, E. Blasch, and H. Ling, "Encoding color information for visual tracking: algorithms and benchmark," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5630-5644, 2015.

[6] H. Stokman and T. Gevers, "Selection and fusion of color models for feature detection," in CVPR, 2005.

[7] Y.-I. Ohta, T. Kanade, and T. Sakai, "Color information for region segmentation," Computer Graphics and Image Processing, vol. 13, no. 3, pp. 222-241, 1980.

[8] V. Meas-Yedid, E. Glory, E. Morelon, C. Pinset, G. Stamon, and J. C. Olivo-Marin, "Automatic color space selection for biological image segmentation," Proceedings - International Conference on Pattern Recognition, vol. 3, pp. 514-517, 2004.

[9] S. Dev, Y. H. Lee, and S. Winkler, "Systematic study of color spaces and components for the segmentation of sky/cloud images," 2014, pp. 5102-5106.

[10] A. Gupta and A. Chaudhary, "Robust skin segmentation using color space switching," Pattern Recognition and Image Analysis, vol. 26, no. 1, pp. 61-68, 2016.

[11] V. Vezhnevets, "A survey on pixel-based skin color detection techniques," Cybernetics, vol. 85, pp. 85-92, 2003.

[12] C. Benedek and T. Szirányi, "Study on color space selection for detecting cast shadows in video surveillance," International Journal of Imaging Systems and Technology, vol. 17, no. 3, pp. 190-201, 2007.

[13] K. van de Sande, T. Gevers, and C. Snoek, "Evaluating color descriptors for object and scene recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1582-1596, 2010.

[14] J. Scandaliaris, M. Villamizar, J. Andrade-Cetto, and A. Sanfeliu, "Robust color contour object detection invariant to shadows," Proceedings of the 12th Iberoamerican Congress on Pattern Recognition: Progress in Pattern Recognition, Image Analysis and Applications, pp. 301-310, 2007.

[15] G. Paschos, "Perceptually uniform color spaces for color texture analysis: an empirical evaluation," IEEE Transactions on Image Processing, vol. 10, no. 6, pp. 932-937, 2001.

[16] A. Porebski and N. Vandenbroucke, "Iterative feature selection for color texture classification," in Image Processing, 2007. ICIP 2007. IEEE International Conference on, pp. 509-512, 2007.

[17] D.-L. Song, L.-H. Ge, W.-W. Qi, and M. Chen, "Illumination invariant color model selection based on genetic algorithm in robot soccer," Information Science and Engineering (ICISE), 2010 2nd International Conference on, no. 3, pp. 1-4, 2010.

[18] D. Song, W. Sun, Z. Ji, G. Hou, X. Li, and L. Liu, "Color model selection for underwater object recognition," International Conference on Information Science, Electronics and Electrical Engineering, pp. 1339-1342, 2014.

[19] G. Duan, F. Duan, Y. Xu, H. Gong, and X. Qu, "Investigation of optimal segmentation color space of Bayer true color images with multi-objective optimization methods," Journal of the Indian Society of Remote Sensing, vol. 43, no. 3, pp. 487-499, 2015.

[20] A. Rasouli and J. K. Tsotsos, "Attention in autonomous robotic visual search," in i-SAIRAS, Montreal, June 2014.

[21] P. Guo and M. R. Lyu, "A study on color space selection for determining image segmentation region number," in The 2000 International Conference on Artificial Intelligence (IC-AI 2000), Las Vegas, 2000, pp. 1127-1132.

[22] E. Salvador, A. Cavallaro, and T. Ebrahimi, "Cast shadow segmentation using invariant color features," Computer Vision and Image Understanding, vol. 95, no. 2, pp. 238-259, 2004.

[23] M. Tkalcic and J. F. Tasic, "Colour spaces: perceptual, historical and applicational background," in Eurocon, 2003.

[24] A. R. Weeks, C. E. Felix, and H. R. Myler, "Edge detection of color images using the HSL color space," in IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology, 1995, pp. 291-301.

[25] Y. I. Ohta, T. Kanade, and T. Sakai, "Color information for region segmentation," Computer Graphics and Image Processing, vol. 13, no. 3, pp. 222-241, 1980.

[26] L. Lucchese and S. K. Mitra, "Filtering color images in the xyY color space," in ICIP, 2000, pp. 500-503.

[27] M. J. Swain and D. H. Ballard, "Color indexing," International Journal of Computer Vision, vol. 7, no. 1, pp. 11-32, 1991.

[28] P. J. Rousseeuw, "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis," Journal of Computational and Applied Mathematics, vol. 20, pp. 53-65, 1987.

[29] A. Rasouli and J. K. Tsotsos, "Sensor planning for 3D visual search with task constraints," in Conference on Computer and Robot Vision (CRV), 2016.