Ulisses Braga-Neto
Texas A&M University
Publications
Featured research published by Ulisses Braga-Neto.
IEEE Transactions on Medical Imaging | 2002
Xiao Han; Chenyang Xu; Ulisses Braga-Neto; Jerry L. Prince
Reconstructing an accurate and topologically correct representation of the cortical surface of the brain is an important objective in various neuroscience applications. Most cortical surface reconstruction methods either ignore topology or correct it using manual editing or methods that lead to inaccurate reconstructions. Shattuck and Leahy (1999, 2000) recently reported a fully automatic method that yields a topologically correct representation with little distortion of the underlying segmentation. We provide an alternative approach that has several advantages over theirs, including the use of arbitrary digital connectivities, a flexible morphology-based multiscale approach, and the option of foreground-only or background-only correction. A detailed analysis of the method's performance on 15 magnetic resonance brain images is provided.
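The sketch below is not the topology-correction algorithm of this paper; it only illustrates, in Python with NumPy/SciPy (assumed available), the two morphological ingredients named in the abstract: a selectable digital connectivity on a 3D grid and a multiscale (increasing-size) morphological filter. All names and parameters are illustrative.

import numpy as np
from scipy import ndimage as ndi

def multiscale_open(mask, connectivity=3, max_scale=3):
    # connectivity 1/2/3 on a 3D grid corresponds to the standard
    # 6-/18-/26-connected digital neighborhoods
    struct = ndi.generate_binary_structure(3, connectivity)
    results = []
    for scale in range(1, max_scale + 1):
        # iterating erosion/dilation approximates an opening at a coarser scale
        results.append(ndi.binary_opening(mask, structure=struct, iterations=scale))
    return results

# toy volume standing in for a cortical gray-matter segmentation
z, y, x = np.ogrid[-32:32, -32:32, -32:32]
r = np.sqrt(x**2 + y**2 + z**2)
shell = (r > 20) & (r < 26)
filtered = multiscale_open(shell, connectivity=3, max_scale=2)

Roughly speaking, foreground-only and background-only correction correspond in this vocabulary to filtering the mask or its complement, respectively.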
Journal of Electronic Imaging | 2004
Ulisses Braga-Neto; Manish Choudhary; John Goutsias
We propose a method for automatic target detection and tracking in forward-looking infrared (FLIR) image sequences. We use morphological connected operators to extract and track targets of interest and remove undesirable clutter. The design of these operators is based on general size, connectivity, and motion criteria, using spatial intraframe and temporal interframe information. In a first step, the image sequence is filtered on a frame-by-frame basis to remove background and residual clutter and to enhance the presence of targets. Detections extracted in the first step are passed to a second step for motion-based analysis. This step exploits the spatiotemporal correlation of the data, stated in terms of a connectivity criterion along the time dimension. The proposed method is suitable for pipelined implementation or time-progressive coding/transmission, since only a few frames are considered at a time. Experimental results, obtained with real FLIR image sequences exhibiting a wide variety of target and clutter variability, demonstrate the effectiveness and robustness of the proposed method.
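As a rough illustration only (not the connected operators designed in the paper), the following Python/SciPy sketch mimics the two stages described above: per-frame removal of small components, followed by a temporal-connectivity check that keeps only detections whose spatiotemporal component persists across several frames. All thresholds and the toy data are hypothetical.

import numpy as np
from scipy import ndimage as ndi

def detect_and_track(frames, threshold, min_area=20, min_frames=3):
    # frames: (T, H, W) array of FLIR intensities (assumed input format)
    per_frame = []
    for frame in frames:
        binary = frame > threshold
        # stage 1: remove small connected components (residual clutter) per frame
        labels, n = ndi.label(binary)
        sizes = ndi.sum(binary, labels, index=np.arange(1, n + 1))
        per_frame.append(np.isin(labels, 1 + np.flatnonzero(sizes >= min_area)))
    stack = np.stack(per_frame)  # (T, H, W) boolean spatiotemporal volume
    # stage 2: temporal connectivity -- keep spatiotemporal components
    # that span at least min_frames frames
    st_labels, n = ndi.label(stack)
    tracks = [lbl for lbl in range(1, n + 1)
              if np.unique(np.nonzero(st_labels == lbl)[0]).size >= min_frames]
    return st_labels, tracks

# toy example: a bright moving blob plus random speckle clutter
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.2, size=(8, 64, 64))
for t in range(8):
    frames[t, 20 + t:26 + t, 30:36] += 2.0  # slowly moving "target"
labels, tracks = detect_and_track(frames, threshold=1.0, min_area=20, min_frames=4)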
BMC Bioinformatics | 2011
Ying Wang; Noushin Ghaffari; Charles D. Johnson; Ulisses Braga-Neto; Hui-Hui Wang; Rui-rui Chen; Huaijun Zhou
Background: RNA-Seq is a recently developed high-throughput sequencing technology for profiling the entire transcriptome of any organism. It has several major advantages over current hybridization-based approaches such as microarrays. However, the cost per sample of RNA-Seq is still prohibitive for most laboratories. With continued improvement in sequence output, it would be cost-effective if multiple samples were multiplexed and sequenced in a single lane with sufficient transcriptome coverage. The objective of this analysis is to evaluate what sequencing depth might be sufficient to interrogate gene expression profiling in the chicken by RNA-Seq. Results: Two cDNA libraries from chicken lungs were sequenced initially, generating 4.9 million (M) and 1.6 M (60 bp) reads, respectively. With significant improvements in sequencing technology, two technical-replicate cDNA libraries were re-sequenced, yielding totals of 29.6 M and 28.7 M (75 bp) reads for the two samples. More than 90% of annotated genes were detected in the data sets with 28.7-29.6 M reads, while only 68% of genes were detected in the data set with 1.6 M reads. The correlation coefficients of gene expression between technical replicates within the same sample were 0.9458 and 0.8442. To evaluate the depth needed for mRNA profiling, a random sampling method was used to generate different numbers of reads from each sample. There was a significant increase in correlation coefficients from a sequencing depth of 1.6 M to 10 M reads for all but the most highly abundant genes, and no significant improvement was observed from 10 M to 20 M (75 bp) reads. Conclusion: The analysis from the current study demonstrated that 30 M (75 bp) reads is sufficient to detect all annotated genes in chicken lungs. Ten million (75 bp) reads can detect about 80% of annotated chicken genes, and RNA-Seq at this depth can serve as a replacement for microarray technology. Furthermore, the depth of sequencing has a significant impact on measuring the expression of low-abundance genes. Finally, combining experimental and simulation approaches is a powerful way to address the relationship between sequencing depth and transcriptome coverage.
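A minimal Python sketch of the read-subsampling idea described above, using a synthetic count profile (the gene counts, depths, and negative-binomial parameters are illustrative, not the chicken-lung data). At each depth it reports the fraction of genes detected and the correlation of log counts with the full-depth profile, the two quantities tracked in the study.

import numpy as np

rng = np.random.default_rng(0)

def subsample_counts(counts, n_reads):
    # draw n_reads reads without replacement from a per-gene count vector
    n_reads = min(int(n_reads), int(counts.sum()))
    return rng.multivariate_hypergeometric(counts, n_reads)

def depth_analysis(full_counts, depths):
    full_log = np.log1p(full_counts)
    for depth in depths:
        sub = subsample_counts(full_counts, depth)
        detected = np.mean(sub > 0)  # fraction of genes with at least one read
        r = np.corrcoef(np.log1p(sub), full_log)[0, 1]
        print(f"{depth / 1e6:.1f}M reads: {detected:.1%} genes detected, r = {r:.3f}")

# toy profile: 20,000 genes with a heavy-tailed (negative binomial) distribution
full = rng.negative_binomial(n=0.5, p=0.5 / (0.5 + 1500), size=20_000)
depth_analysis(full, depths=[1.6e6, 10e6, 20e6])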
Journal of Mathematical Imaging and Vision | 2003
Ulisses Braga-Neto; John Goutsias
Connectivity is a concept of great relevance to image processing and analysis. It is extensively used in image filtering and segmentation, image compression and coding, motion analysis, pattern recognition, and other applications. In this paper, we provide a theoretical tour of connectivity, with emphasis on those concepts of connectivity that are relevant to image processing and analysis. We review several notions of connectivity, which include classical topological and graph-theoretic connectivity, fuzzy connectivity, and the theories of connectivity classes and of hyperconnectivity. It becomes clear in this paper that the theories of connectivity classes and of hyperconnectivity unify all relevant notions of connectivity, and provide a solid theoretical foundation for studying classical and fuzzy approaches to connectivity, as well as for constructing new examples of connectivity useful for image processing and analysis applications.
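For reference, the central axiomatic notion surveyed in the paper (a connectivity class in the sense of Serra) can be stated briefly; this is a standard textbook formulation, not a quotation from the paper. A family $\mathcal{C} \subseteq \mathcal{P}(E)$ of subsets of a space $E$ is a connectivity class if
\[
\text{(i) } \emptyset \in \mathcal{C}; \qquad
\text{(ii) } \{x\} \in \mathcal{C} \ \text{for every } x \in E; \qquad
\text{(iii) } \{C_i\} \subseteq \mathcal{C},\ \bigcap_i C_i \neq \emptyset \ \Rightarrow\ \bigcup_i C_i \in \mathcal{C}.
\]
Topological and graph-theoretic connectivity both satisfy these axioms, while hyperconnectivity relaxes axiom (iii), replacing the non-empty-intersection requirement with a more general overlap criterion.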
PLOS ONE | 2009
Eduardo J. M. Nascimento; Ana Maria Silva; Marli Tenório Cordeiro; Carlos Alexandre Antunes de Brito; Laura H.V.G. Gil; Ulisses Braga-Neto; Ernesto T. A. Marques
Background: The complement system, a key component that links the innate and adaptive immune responses, has three pathways: the classical, lectin, and alternative pathways. In the present study, we analyzed the levels of various complement components in blood samples from dengue fever (DF) and dengue hemorrhagic fever (DHF) patients and found that the level of complement activation is associated with disease severity. Methods and Results: Patients with DHF had lower levels of complement factor 3 (C3; p = 0.002) and increased levels of C3a, C4a and C5a (p<0.0001) when compared to those with the less severe form, DF. There were no significant differences between DF and DHF patients in the levels of C1q, immune complexes (CIC-C1q) and CRP. However, small but statistically significant differences were detected in the levels of MBL. In contrast, the levels of two regulatory proteins of the alternative pathway varied widely between DF and DHF patients: DHF patients had higher levels of factor D (p = 0.01), which cleaves factor B to yield the active (C3bBb) C3 convertase, and lower levels of factor H (p = 0.03), which inactivates the (C3bBb) C3 convertase, than did DF patients. When we considered the levels of factors D and H together as an indicator of (C3bBb) C3 convertase regulation, we found that the plasma levels of these regulatory proteins in DHF patients favored the formation of the (C3bBb) C3 convertase, whereas its formation was inhibited in DF patients (p<0.0001). Conclusion: The data suggest that an imbalance in the levels of regulatory factors D and H is associated with abnormal regulation of complement activity in DHF patients.
Computer Vision and Image Understanding | 2002
Ulisses Braga-Neto; John Goutsias
The notion of connectivity is very important in image processing and analysis, and particularly in problems related to image segmentation. It is well understood, however, that classical notions of connectivity, including topological and graph-theoretic notions, are not compatible with each other. This motivated G. Matheron and J. Serra to develop a general framework of connectivity, which unifies most classical notions, circumvents incompatibility issues, and allows the construction of new types of connectivity for binary and grayscale images. In this paper, we enrich this theory of connectivity by providing several new theoretical results and examples that are useful in image processing and analysis. In particular, we provide new results on the semi-continuity behavior of connectivity openings, we study the reconstruction operator in a complete lattice framework, and we extend some known binary results regarding reconstruction to this framework. Moreover, we study connectivities constructed by expanding given connectivities by means of clustering operators and connectivities constructed by restricting given connectivities by means of contraction operators.
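As a concrete anchor for the binary case (standard notation, not taken verbatim from the paper), the connectivity opening and the reconstruction operator discussed above can be written as
\[
\gamma_x(A) \;=\; \bigcup \{\, C \in \mathcal{C} : x \in C \subseteq A \,\},
\qquad
\rho_A(M) \;=\; \bigcup_{x \in M} \gamma_x(A),
\]
so that $\gamma_x(A)$ extracts the connected component of $A$ containing the point $x$ (and equals $\emptyset$ if $x \notin A$), and the reconstruction of a marker $M$ inside $A$ is the union of the components of $A$ that $M$ touches. In the grayscale setting, reconstruction is commonly realized as the limit of iterated geodesic dilations, $\rho_g(f) = \lim_{n\to\infty} \delta_g^{(n)}(f)$ with $\delta_g(f) = \delta(f) \wedge g$ for a marker $f \le g$.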
PLOS ONE | 2009
Eduardo J. M. Nascimento; Ulisses Braga-Neto; Carlos E. Calzavara-Silva; Ana L. Gomes; Frederico Guilherme Coutinho Abath; Carlos Alexandre Antunes de Brito; Marli Tenório Cordeiro; Ana Maria Silva; Cecilia Magalhães; Raoni Andrade; Laura H.V.G. Gil; Ernesto T. A. Marques
Background: We report the detailed development of biomarkers to predict the clinical outcome of dengue infection. Transcriptional signatures from purified peripheral blood mononuclear cells were derived from whole-genome gene-expression microarray data, validated by quantitative PCR, and tested in independent samples. Methodology/Principal Findings: The study was performed on patients from a well-characterized dengue cohort in Recife, Brazil. The samples analyzed were collected prospectively from acute febrile dengue patients who evolved with different degrees of disease severity: classic dengue fever and dengue hemorrhagic fever (DHF) samples were compared with similar samples from other, non-dengue febrile illnesses. The DHF samples were collected 2–3 days before the presentation of plasma-leakage symptoms. Differentially expressed genes were selected by univariate statistical tests as well as multivariate classification techniques. The results showed that, at early stages of dengue infection, genes involved in effector mechanisms of the innate immune response were more weakly activated in patients who later developed hemorrhagic fever, whereas genes involved in apoptosis were expressed at higher levels. Conclusions/Significance: Some of the gene-expression signatures displayed estimated accuracy rates of more than 95%, indicating that expression profiling with these signatures may provide a useful means of DHF prognosis at early stages of infection.
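The following is a schematic Python sketch on synthetic data, not the authors' exact pipeline: it pairs univariate gene selection with a multivariate classifier and reports cross-validated accuracy, as a generic stand-in for the signature-building workflow described above. Sample sizes, gene counts, and effect sizes are assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 5000
X = rng.normal(size=(n_samples, n_genes))
y = np.r_[np.zeros(30, int), np.ones(30, int)]  # 0 = DF-like, 1 = DHF-like labels
X[y == 1, :20] += 1.5                           # a small block of informative genes

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),   # univariate selection inside CV
    ("clf", LinearDiscriminantAnalysis()),      # multivariate classifier
])
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")

Keeping the univariate selection step inside the cross-validation loop (via the pipeline) avoids the optimistic bias that arises when genes are pre-selected on the full data set.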
PLOS Computational Biology | 2006
Ulisses Braga-Neto; Ernesto T. A. Marques
The development of DNA microarray technology a decade ago led to the establishment of functional genomics as one of the most active and successful scientific disciplines today. With the ongoing development of immunomic microarray technology (a spatially addressable, large-scale technology for measuring specific immunological responses), the new challenge of functional immunomics is emerging, which bears similarities to, but is also significantly different from, functional genomics. Immunomic data have been successfully used to identify biological markers involved in autoimmune diseases, allergies, viral infections such as human immunodeficiency virus (HIV) and influenza, diabetes, and responses to cancer vaccines. This review intends to provide a coherent vision of this nascent scientific field and to speculate on future research directions. We discuss at some length issues such as epitope prediction, immunomic microarray technology and its applications, and the computational and statistical challenges related to functional immunomics. Based on the recent discovery of regulation mechanisms in T cell responses, we envision the use of immunomic microarrays as a tool for advances in the systems biology of cellular immune responses, by means of immunomic regulatory network models.
Pattern Recognition | 2005
Ulisses Braga-Neto; Edward R. Dougherty
Discrete classification problems abound in pattern recognition and data mining applications. One of the most common discrete rules is the discrete histogram rule. This paper presents exact formulas for the computation of the bias, variance, and RMS of the resubstitution and leave-one-out error estimators for the discrete histogram rule. We also describe an algorithm to compute the exact probability distribution of resubstitution and leave-one-out, as well as of their deviations from the true error rate. Using a parametric Zipf model, we compute the exact performance of resubstitution and leave-one-out for varying expected true error, number of samples, and classifier complexity (number of bins). We compare this to approximate performance measures, computed by Monte-Carlo sampling, of 10-repeated 4-fold cross-validation and the 0.632 bootstrap error estimator. Our results show that resubstitution is not only low-biased but also much less variable than leave-one-out, and is effectively the superior error estimator of the two, provided classifier complexity is low. Moreover, our results indicate that the overall performance of resubstitution, as measured by the RMS, can be substantially better than that of the 10-repeated 4-fold cross-validation estimator, and even comparable to that of the 0.632 bootstrap estimator, provided that classifier complexity is low and the expected error rates are moderate. In addition to the results discussed in the paper, we provide an extensive set of plots that can be accessed on a companion website at http://ee.tamu.edu/~edward/exact_discrete.
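The paper's exact formulas are not reproduced here; the Python sketch below only runs a small Monte-Carlo comparison of resubstitution and leave-one-out for the discrete histogram rule under a Zipf-like model of my own parameterization (bin probabilities, class separation, and sample size are all illustrative assumptions).

import numpy as np

rng = np.random.default_rng(0)

def make_model(n_bins=8, skew=1.0, sep=0.3):
    p = 1.0 / np.arange(1, n_bins + 1) ** skew   # Zipf-like bin probabilities
    p /= p.sum()
    p0 = p                                        # class-0 conditional distribution
    p1 = (1 - sep) * np.roll(p, 1) + sep * p      # class-1: shifted, partly overlapping
    return p0, p1 / p1.sum()

def histogram_rule(counts0, counts1):
    # majority vote in each bin; empty or tied bins default to class 0
    return (counts1 > counts0).astype(int)

def simulate(n=20, n_bins=8, trials=2000):
    p0, p1 = make_model(n_bins)
    joint = 0.5 * np.vstack([p0, p1])             # equal class priors
    resub_err, loo_err, true_err = [], [], []
    for _ in range(trials):
        y = rng.integers(0, 2, size=n)
        x = np.array([rng.choice(n_bins, p=p1 if yi else p0) for yi in y])
        c0 = np.bincount(x[y == 0], minlength=n_bins)
        c1 = np.bincount(x[y == 1], minlength=n_bins)
        rule = histogram_rule(c0, c1)
        # exact true error of this trained rule under the model
        true_err.append(np.sum(np.where(rule == 1, joint[0], joint[1])))
        resub_err.append(np.mean(rule[x] != y))
        errors = 0
        for i in range(n):                        # leave-one-out
            d0, d1 = c0.copy(), c1.copy()
            (d0 if y[i] == 0 else d1)[x[i]] -= 1
            errors += int(histogram_rule(d0, d1)[x[i]] != y[i])
        loo_err.append(errors / n)
    resub_err, loo_err, true_err = map(np.array, (resub_err, loo_err, true_err))
    for name, est in (("resubstitution", resub_err), ("leave-one-out", loo_err)):
        dev = est - true_err
        print(f"{name}: bias = {dev.mean():+.4f}, RMS = {np.sqrt((dev ** 2).mean()):.4f}")

simulate()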
Journal of Biological Systems | 2006
Edward R. Dougherty; Ulisses Braga-Neto
Knowing the roles of mathematics and computation in experimental science is important for computational biology because these roles determine to a great extent how research in this field should be pursued and how it should relate to biology in general. The present paper examines the epistemology of computational biology from the perspective of modern science, the underlying principle of which is that a scientific theory must have two parts: (1) a structural model, which is a mathematical construct that aims to represent a selected portion of physical reality and (2) a well-defined procedure for relating consequences of the model to quantifiable observations. We also explore the contingency and creative nature of a scientific theory. Among the questions considered are: Can computational biology form the theoretical core of biology? What is the basis, if any, for choosing one particular model over another? And what is the role of computation in science, and in biology in particular? We examine how this broad epistemological framework applies to important statistical methodologies pertaining to computational biology, such as expression-based phenotype classification, gene regulatory networks, and clustering. We consider classification in detail, as the epistemological issues raised by classification are related to all computational-biology topics in which statistical prediction plays a key role. We pay particular attention to classifier-model validity and its relation to estimation rules.