Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Abdullah Bulbul is active.

Publication


Featured research published by Abdullah Bulbul.


International Conference on Computer Graphics and Interactive Techniques | 2015

Optimal presentation of imagery with focus cues on multi-plane displays

Rahul Narain; Rachel A. Albert; Abdullah Bulbul; Gregory John Ward; Martin S. Banks; James F. O'Brien

We present a technique for displaying three-dimensional imagery of general scenes with nearly correct focus cues on multi-plane displays. These displays present an additive combination of images at a discrete set of optical distances, allowing the viewer to focus at different distances in the simulated scene. Our proposed technique extends the capabilities of multi-plane displays to general scenes with occlusions and non-Lambertian effects by using a model of defocus in the eye of the viewer. Requiring no explicit knowledge of the scene geometry, our technique uses an optimization algorithm to compute the images to be displayed on the presentation planes so that the retinal images when accommodating to different distances match the corresponding retinal images of the input scene as closely as possible. We demonstrate the utility of the technique using imagery acquired from both synthetic and real-world scenes, and analyze the system's characteristics, including bounds on achievable resolution.
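
The optimization described above can be cast as a non-negative linear least-squares problem: each presentation plane's image is mapped to the retina by a defocus blur that depends on how far the eye's focus is from that plane. The following is a minimal 1-D sketch of that formulation, assuming a Gaussian defocus kernel whose width grows with dioptric defocus; the plane and focus distances, kernel, and toy scene are illustrative and not taken from the paper.

```python
# Hypothetical illustration (not the authors' code): solve for additive plane
# images so that the simulated retinal image matches the target at each focus.
import numpy as np
from scipy.optimize import nnls

N = 64                                      # pixels per (1-D) image
plane_d = np.array([0.0, 0.6, 1.2, 1.8])    # presentation planes (diopters)
focus_d = np.array([0.3, 0.9, 1.5])         # accommodation distances tested

def blur_matrix(defocus, n=N, k=4.0):
    """Gaussian blur matrix; sigma grows with the magnitude of dioptric defocus."""
    sigma = max(k * abs(defocus), 1e-3)
    x = np.arange(n)
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    return K / K.sum(axis=1, keepdims=True)

# Toy "scene": a 1-D texture located at 1.0 D; targets are what the real scene
# would look like on the retina when the eye focuses at each tested distance.
scene = np.sin(np.linspace(0.0, 6.0, N)) ** 2
scene_d = 1.0
targets = [blur_matrix(scene_d - f) @ scene for f in focus_d]

# Linear system: retina(focus f) = sum_p Blur(|d_p - d_f|) @ plane_p
A = np.vstack([np.hstack([blur_matrix(p - f) for p in plane_d]) for f in focus_d])
b = np.concatenate(targets)

x, residual = nnls(A, b)                    # displayed intensities must be >= 0
planes = x.reshape(len(plane_d), N)         # one additive image per plane
print("residual:", residual)
```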


Applied Perception in Graphics and Visualization | 2010

A framework for enhancing depth perception in computer graphics

Abdullah Bulbul; Tolga K. Çapin

This paper introduces a solution for enhancing depth perception in a given 3D computer-generated scene. For this purpose, we propose a framework that decides on the suitable depth cues for a given scene and the rendering methods which provide these cues. First, the system calculates the importance of each depth cue using a fuzzy logic based algorithm which considers the target tasks in the application and the spatial layout of the scene. Then, a knapsack model is constructed to keep the balance between the rendering costs of the graphical methods that provide these cues and their contribution to depth perception. This cost-profit analysis step selects the proper rendering methods. In this work, we also present several objective and subjective experiments which show that our automated depth enhancement system is statistically (p < 0.05) better than the other method selection techniques tested.
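
The cost-profit selection step maps naturally onto a 0/1 knapsack: each rendering method has a rendering cost and a depth-perception profit, and the system picks the subset that maximizes profit within a frame-time budget. Below is a hedged sketch of that step only; the method names, costs, profits, and budget are illustrative, not values from the paper.

```python
# Hypothetical sketch of the knapsack (cost-profit) selection of rendering methods.
def select_methods(methods, budget_ms, step=0.5):
    """0/1 knapsack over rendering methods; costs discretized to `step` ms."""
    W = int(budget_ms / step)
    best = [(0.0, []) for _ in range(W + 1)]        # best (profit, chosen) per budget
    for name, cost_ms, profit in methods:
        w = int(round(cost_ms / step))
        for b in range(W, w - 1, -1):               # iterate backwards: each item used once
            cand = best[b - w][0] + profit
            if cand > best[b][0]:
                best[b] = (cand, best[b - w][1] + [name])
    return best[W]

methods = [                       # (method, cost in ms, weighted cue contribution)
    ("shadow mapping",      3.0, 0.9),
    ("ambient occlusion",   5.0, 0.7),
    ("depth of field",      4.0, 0.5),
    ("fog / aerial persp.", 1.0, 0.4),
    ("texture gradients",   0.5, 0.3),
]
profit, chosen = select_methods(methods, budget_ms=8.0)
print(chosen, profit)
```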


IEEE Signal Processing Magazine | 2011

Assessing Visual Quality of 3-D Polygonal Models

Abdullah Bulbul; Tolga K. Çapin; Guillaume Lavoué; Marius Preda

Recent advances in evaluating and measuring the perceived visual quality of three-dimensional (3-D) polygonal models are presented in this article, which analyzes the general process of objective quality assessment metrics and subjective user evaluation methods and presents a taxonomy of existing solutions. Simple geometric error computed directly on the 3-D models does not necessarily reflect the perceived visual quality; therefore, integrating perceptual issues for 3-D quality assessment is of great significance. This article discusses existing metrics, including perceptually based ones, computed either on 3-D data or on two-dimensional (2-D) projections, and evaluates their performance for their correlation with existing subjective studies.
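
To make the contrast concrete, the "simple geometric error" the article refers to can be as plain as a root-mean-square distance between corresponding vertices of a reference mesh and its distorted version. The short sketch below is illustrative only (not a metric proposed in the article) and shows why such a score can miss perceptual differences: two distortions of equal RMS magnitude can look very different.

```python
# Illustrative only: a plain vertex-to-vertex RMS error on meshes with shared connectivity.
import numpy as np

def rms_vertex_error(ref_vertices, dist_vertices):
    """RMS Euclidean distance between corresponding vertices (N x 3 arrays)."""
    d = np.linalg.norm(np.asarray(ref_vertices) - np.asarray(dist_vertices), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Random noise along all axes vs. a smooth global shift of the same RMS magnitude:
# both score ~0.01 here, yet they are perceived very differently on a real surface.
ref = np.random.default_rng(0).random((1000, 3))
noisy = ref + 0.01 * np.random.default_rng(1).standard_normal(ref.shape) / np.sqrt(3)
shifted = ref + np.array([0.01, 0.0, 0.0])
print(rms_vertex_error(ref, noisy), rms_vertex_error(ref, shifted))
```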


Computers & Graphics | 2010

Technical Section: A perceptual approach for stereoscopic rendering optimization

Abdullah Bulbul; Tolga K. Çapin

The traditional way of stereoscopic rendering requires rendering the scene for the left and right eyes separately, which doubles the rendering complexity. In this study, we propose a perceptually based approach for accelerating stereoscopic rendering. This optimization approach is based on binocular suppression theory, which claims that the overall percept of a stereo pair in a region is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be utilized for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. By combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high-quality view has more intensity contrast. For this reason, we performed a subjective experiment in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification applied to a single view is not perceptible if it decreases the intensity contrast, and thus can be exploited for stereoscopic rendering optimization.
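
The practical rule that emerges is: a single-view simplification is a safe candidate when it lowers that view's intensity contrast relative to the full-quality rendering. The snippet below is a hedged sketch of such a check, using a simple RMS-style contrast measure (standard deviation of luminance over mean luminance) as a stand-in; it is not the paper's implementation, and the contrast measure and blur example are assumptions.

```python
# Hypothetical helper: accept a simplified right-eye rendering only if it
# reduces intensity contrast, per the binocular-suppression heuristic above.
import numpy as np
from scipy.ndimage import uniform_filter

def rms_contrast(img):
    """Simple contrast proxy: std of luminance divided by mean luminance (img in [0, 1])."""
    img = np.asarray(img, dtype=np.float64)
    return float(img.std() / max(img.mean(), 1e-6))

def simplification_is_safe(full_view, simplified_view):
    """True if the simplification lowered the view's intensity contrast."""
    return rms_contrast(simplified_view) <= rms_contrast(full_view)

rng = np.random.default_rng(0)
full = rng.random((64, 64))                 # stand-in for a full-quality rendering
simplified = uniform_filter(full, size=5)   # e.g., lower-detail shading smooths intensities
print(simplification_is_safe(full, simplified))   # True: blurring lowers contrast
```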


Applied Perception in Graphics and Visualization | 2010

Saliency for animated meshes with material properties

Abdullah Bulbul; Çetin Koca; Tolga K. Çapin; Uğur Güdükbay

We propose a technique to calculate the saliency of animated meshes with material properties. The saliency computation considers multiple features of 3D meshes, including their geometry, material, and motion. Each feature contributes to the final saliency map, which is view-independent and can therefore be used for both view-dependent and view-independent applications. To verify our saliency calculations, we performed an experiment in which we used an eye tracker to compare the saliency of the regions that viewers look at with that of the other regions of the models. The results confirm that our saliency computation gives promising results. We also present several applications in which the saliency information is used.
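
A minimal, hypothetical sketch of the combination step follows: per-vertex geometry, material, and motion features are normalized and blended into a single view-independent saliency value per vertex. The feature choices and weights are illustrative and not taken from the paper.

```python
# Hypothetical per-vertex saliency combination (illustrative weights and features).
import numpy as np

def normalize(f):
    f = np.asarray(f, dtype=np.float64)
    span = f.max() - f.min()
    return (f - f.min()) / span if span > 0 else np.zeros_like(f)

def vertex_saliency(curvature, specularity, motion_mag, weights=(0.5, 0.2, 0.3)):
    """Weighted sum of normalized geometry, material, and motion features."""
    feats = [normalize(curvature), normalize(specularity), normalize(motion_mag)]
    return sum(w * f for w, f in zip(weights, feats))

# Example: five vertices with made-up feature values.
s = vertex_saliency(curvature=[0.1, 0.9, 0.3, 0.2, 0.7],
                    specularity=[0.0, 0.0, 1.0, 1.0, 0.0],
                    motion_mag=[0.0, 0.2, 0.0, 0.9, 0.1])
print(s)
```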


Cyberworlds | 2009

A Face Tracking Algorithm for User Interaction in Mobile Devices

Abdullah Bulbul; Tolga K. Çapin

A new face tracking algorithm, and a human-computer interaction technique based on this algorithm, are proposed for use on mobile devices. The face tracking algorithm considers the limitations of the mobile use case: constrained computational resources and varying environmental conditions. The solution is based on color comparisons and works on images gathered from the front camera of a device. The face tracking system generates a 2D face position as output that can be used for controlling different applications. Two such applications are also presented in this work: the first example uses the face position to determine the viewpoint, and the second enables an intuitive way of browsing large images.
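
As a rough illustration of what a lightweight, color-comparison-based face locator can look like, the sketch below classifies skin-like pixels with a fixed Cb/Cr threshold and returns the centroid of those pixels as the 2D face position. This is not the paper's algorithm; the color-space choice and thresholds (commonly cited skin ranges) are assumptions made for the example.

```python
# Hypothetical color-based face locator: skin mask in Cb/Cr, centroid as 2D position.
import numpy as np

def face_position(rgb):
    """rgb: H x W x 3 uint8 frame -> (x, y) centroid of skin-colored pixels, or None."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # RGB -> Cb/Cr (BT.601), cheap enough for a mobile CPU
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    skin = (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
    if not skin.any():
        return None
    ys, xs = np.nonzero(skin)
    return float(xs.mean()), float(ys.mean())
```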


Computers & Graphics | 2012

Technical Section: Perceptual 3D rendering based on principles of analytical cubism

Sami Arpa; Abdullah Bulbul; Tolga K. Çapin; Bülent Özgüç

Cubism, pioneered by Pablo Picasso and Georges Braque, was a breakthrough in art, influencing artists to abandon existing traditions. In this paper, we present a novel approach for cubist rendering of 3D synthetic environments. Rather than merely imitating cubist paintings, we apply the main principles of analytical cubism to 3D graphics rendering. In this respect, we develop a new cubist camera providing an extended view, and a perceptually based spatial imprecision technique that keeps the important regions of the scene within a certain area of the output. Additionally, several methods are applied to provide a painterly style. We demonstrate the effectiveness of our extended-view method by comparing the visible face counts in images rendered by the cubist camera model and a traditional perspective camera. Finally, we discuss the overall results and report user tests in which participants found our results comparable to analytical cubist paintings, but not to synthetic cubist paintings.


Journal of Vision | 2014

Correct blur and accommodation information is a reliable cue to depth ordering.

Marina Zannoli; Rachel A. Albert; Abdullah Bulbul; Rahul Narain; James F. O'Brien; Martin S. Banks

Figure captions: Apparatus uses (a) a ferroelectric liquid-crystal polarization switch and (b) a birefringent lens. Bias: with a sharp boundary, the blurred texture is perceived in front. Limitation: blur is correct only if the viewer accommodates to the display screen; otherwise the blur is unrealistic. Stimuli used in Experiment 1: (a) volumetric presentation, two flat textures (chosen from four) displayed on separate focal planes 1.2 D apart, with blur produced by accommodation; (b) single-plane presentation, two flat textures displayed on the same focal plane with artificially rendered blur (1.2 D). Monocular presentation. Task: which texture is in front?


International Symposium on Computer and Information Sciences | 2013

Perceptual Caricaturization of 3D Models

Gokcen Cimen; Abdullah Bulbul; Bülent Özgüç; Tolga K. Çapin

Caricature is an illustration of a person or subject that exaggerates the most distinguishing traits and simplifies common features in order to magnify what is unique about the subject. Automatic caricature generation has recently become a research area owing to its entertainment value in fields such as networked communications, online games, and the animation industry. The aim of this study is to present a perceptual caricaturization approach that applies the concept of exaggeration, common in traditional art and caricature, to 3D mesh models by building on the idea of mesh saliency.
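
One simple way to realize saliency-driven exaggeration on a mesh is to displace each vertex along its normal in proportion to how much its saliency exceeds the average, pushing distinctive regions outward and flattening bland ones. The sketch below illustrates that idea only; it is not the authors' method, and the displacement rule and strength are assumptions.

```python
# Hypothetical saliency-based exaggeration of a mesh (illustrative, not the paper's code).
import numpy as np

def caricaturize(vertices, normals, saliency, strength=0.05):
    """vertices, normals: N x 3 arrays; saliency: N values in [0, 1]."""
    s = np.asarray(saliency, dtype=np.float64)
    offset = strength * (s - s.mean())[:, None]   # push salient vertices out, pull bland ones in
    return np.asarray(vertices) + offset * np.asarray(normals)
```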


Journal of Vision | 2014

The Perception of Surface Material from Disparity and Focus Cues

Martin S. Banks; Abdullah Bulbul; Rachel A. Albert; Rahul Narain; James F. O'Brien; Gregory J. Ward

The visual properties of surfaces reveal many things, including a floor's cleanliness and a car's age. These judgments of material are based on the spread of light reflected from a surface. The bidirectional reflectance distribution function (BRDF) quantifies the pattern of spread and how it depends on the direction of incident light, surface shape, and surface material. Two extremes are Lambertian and mirrored surfaces, which respectively have uniform and delta-function BRDFs. Most surfaces have more complicated BRDFs, and we examined many of them using the Ward model as an approximation for real surfaces. Reflections are generally view dependent. This dependence creates a difference between the binocular disparities of a reflection and the surface itself. It also creates focus differences between the reflection and the physical surface. In simulations we examined how material type affects retinal images. We calculated point-spread functions (PSFs) for reflections off different materials as a function of the eye's focus state. When surface roughness is zero, the reflection PSF changes dramatically with focus state. With greater roughness, the PSF change is reduced until there is no effect of focus state with sufficiently rough surfaces. The reflection PSF also has a dramatic effect on the ability to estimate disparity. We next examined people's ability to distinguish surface markings from reflections and to identify different types of material. We used a unique volumetric display that allows us to present nearly correct focus cues along with more traditional depth cues such as disparity. With binocular viewing, we observed a clear effect of the disparity of reflections on these judgments. We also found that disparity provided less useful information with rougher materials. With monocular viewing, we observed a small but consistent effect of the reflection's focal distance on judgments of markings vs. reflections and on identification of material.
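
For reference, the Ward model mentioned above interpolates between the Lambertian and mirror extremes through a roughness parameter. The sketch below evaluates the isotropic form of the published Ward (1992) model; the parameter values are made up, and as the roughness alpha grows the specular lobe spreads, which is the mechanism behind the PSF behavior discussed in the abstract.

```python
# Isotropic Ward BRDF, written from the published model; parameter values are illustrative.
import numpy as np

def ward_isotropic(wi, wo, n, rho_d=0.2, rho_s=0.1, alpha=0.15):
    """wi, wo: unit vectors toward light and viewer; n: unit surface normal."""
    cos_i, cos_o = np.dot(n, wi), np.dot(n, wo)
    if cos_i <= 0 or cos_o <= 0:
        return 0.0
    h = wi + wo
    h /= np.linalg.norm(h)                                   # half vector
    cos_h = np.clip(np.dot(n, h), 1e-6, 1.0)
    tan2 = (1.0 - cos_h ** 2) / cos_h ** 2                   # tan^2 of half-angle from normal
    spec = rho_s * np.exp(-tan2 / alpha ** 2) / (4.0 * np.pi * alpha ** 2 * np.sqrt(cos_i * cos_o))
    return rho_d / np.pi + spec                              # diffuse + specular lobe
```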

Collaboration


Dive into Abdullah Bulbul's collaborations.

Top Co-Authors

Rahul Narain
University of Minnesota

Sami Arpa
École Polytechnique Fédérale de Lausanne

Gregory J. Ward
Lawrence Berkeley National Laboratory

Marina Zannoli
University of California