Ana Maria Martinez-Enriquez
CINVESTAV
Publications
Featured research published by Ana Maria Martinez-Enriquez.
Mexican International Conference on Computer Science | 2003
Hiroshi Natsu; Jesús Favela; Alberto L. Morán; Dominique Decouchant; Ana Maria Martinez-Enriquez
Pair programming is an extreme programming practice in which two programmers working side by side on a single computer produce a software artifact. This technique has been shown to produce higher quality code in less time than it would take an individual programmer. We present the COPPER system, a synchronous source code editor that allows two distributed software engineers to write a program using pair programming. COPPER implements characteristics of groupware systems such as communication mechanisms, collaboration awareness, concurrency control, and a radar view of the documents, among others. It also incorporates a document presence module, which extends the functionality of instant messaging systems to allow users to register documents from a Web server and interact with them in a similar fashion as they do with a colleague. We report results from a preliminary evaluation of COPPER, which provide evidence that the system could successfully support distributed pair programming.
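The abstract names concurrency control and awareness among COPPER's groupware features but does not describe the mechanism. As a rough illustration only, the following minimal Python sketch shows a hypothetical token-based floor-control scheme, where only the current "driver" may edit a shared buffer; it is not COPPER's actual design.

    # Hypothetical sketch of token-based floor control for a pair-programming
    # editor; not COPPER's actual implementation.
    class PairSession:
        def __init__(self, driver, navigator):
            self.driver = driver          # participant currently allowed to edit
            self.navigator = navigator    # participant observing
            self.buffer = []              # shared document as a list of lines

        def apply_edit(self, author, line_no, text):
            """Accept an edit only from the current driver, then broadcast it."""
            if author != self.driver:
                raise PermissionError(f"{author} is not the driver")
            self.buffer.insert(line_no, text)
            self.broadcast(("insert", line_no, text))

        def switch_roles(self):
            """Hand the editing token to the navigator (role switch)."""
            self.driver, self.navigator = self.navigator, self.driver

        def broadcast(self, operation):
            # In a real system this would push the operation to the remote peer
            # so both radar views stay consistent.
            print("sync ->", self.navigator, operation)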
Symposium on Applications and the Internet | 2001
Dominique Decouchant; Jesús Favela; Ana Maria Martinez-Enriquez
Currently, the World Wide Web environment provides support mostly for single-user authoring and browsing. Even though initiatives such as WebDAV have been proposed, there is still no adequate support for cooperative authoring of WWW documents that deals with the unreliability of such a distributed environment. We present the PINAS platform, which provides the means for supporting cooperative authoring on the Web. Using cooperative authoring applications built on the services of this platform, several users can create shared Web documents in a consistent and controlled way. PINAS provides several interesting features, such as author identification, document and resource naming, document composition and management, document replication, consistency, and storage. We propose seamless extensions to standard Web services that can be fully integrated within the Web environment. In this way, a shared document can be edited using a distributed cooperative editor and be accessed, at the same time, from standard Web browsers.
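The abstract lists document naming, replication, and consistency among PINAS's features without detailing them. As a rough illustration only, the sketch below shows the kind of per-document metadata such a platform has to track; the field names and locking policy are assumptions, not the platform's actual schema.

    # Hypothetical sketch of cooperative-authoring metadata; illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class SharedDocument:
        name: str                                     # site-independent document name
        authors: dict = field(default_factory=dict)   # author id -> role ("writer", "reader")
        fragments: dict = field(default_factory=dict) # fragment name -> author holding write access
        replicas: list = field(default_factory=list)  # sites holding a copy of the document
        version: int = 0                              # incremented on every committed update

        def commit(self, author, fragment, content):
            """Accept an update only from the author who owns the fragment,
            then bump the version so replicas can converge."""
            if self.fragments.get(fragment) != author:
                raise PermissionError("fragment is locked by another author")
            self.version += 1
            return {"fragment": fragment, "content": content, "version": self.version}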
Mexican International Conference on Artificial Intelligence | 2010
Waqar Mirza Muhammad; Rizwan Muhammad; Aslam Muhammad; Ana Maria Martinez-Enriquez
In the Islamic religion, mistakes in the recitation of the holy Quran (the sacred book of Muslims) are forbidden. Mistakes can be missing words or verses, or misreading Harakat (pronunciations, punctuations, and accents). Thus, a hafiz/reciter who memorizes the holy Quran needs another hafiz/tutor who listens to the recitation and points out oral mistakes. Due to the serious commitment involved, the availability and expertise of a hafiz are also questionable. A listener can also make mistakes while hearing, attributable to environmental interruptions such as noise or lapses of attention. In order to tackle this issue, we designed, developed, and tested the E-hafiz system. E-hafiz is based on the Mel-Frequency Cepstral Coefficient (MFCC) technique to extract voice features from Quranic verse recitation and map them against the data collected during the training phase. Any mismatch is pointed out as a mistake. Testing results on short verses of the Quran using the E-hafiz system are very encouraging.
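As a rough illustration of the MFCC-based matching the abstract describes, the sketch below extracts MFCC features with librosa and compares a recitation against a stored reference. The parameters, the mean-vector comparison, and the threshold are assumptions for illustration, not E-hafiz's actual pipeline.

    # Minimal sketch of MFCC-based matching against a stored reference
    # recitation; parameters and the distance measure are illustrative.
    import numpy as np
    import librosa

    def mfcc_features(wav_path, n_mfcc=13):
        """Load a recording and return its MFCC feature matrix (frames x n_mfcc)."""
        signal, sr = librosa.load(wav_path, sr=None)
        return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T

    def distance_to_reference(test_path, reference_path):
        """Compare mean MFCC vectors; a large distance suggests a recitation mismatch."""
        test = mfcc_features(test_path).mean(axis=0)
        ref = mfcc_features(reference_path).mean(axis=0)
        return float(np.linalg.norm(test - ref))

    # Usage (paths are placeholders; the threshold would be tuned on training data):
    # if distance_to_reference("recitation.wav", "verse_reference.wav") > threshold:
    #     print("possible mistake in recitation")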
Mexican International Conference on Artificial Intelligence | 2010
Afraz Z. Syed; Muhammad Aslam; Ana Maria Martinez-Enriquez
Like websites in other languages, Urdu websites are becoming more popular, because people prefer to share opinions and express sentiments in their own language. Sentiment analyzers developed for other well-studied languages, such as English, are not workable for Urdu, due to script, morphological, and grammatical differences. As a result, this language should be studied as an independent problem domain. Our approach to sentiment analysis is based on the identification and extraction of SentiUnits from the given text using shallow parsing. SentiUnits are the expressions that carry the sentiment information in a sentence. We use a sentiment-annotated, lexicon-based approach. Unfortunately, no such lexicon exists for the Urdu language, so a major part of this research consists of developing one. Hence, this paper is presented as a baseline for this colossal and complex task. Our goal is to highlight the linguistic (grammar and morphology) as well as technical aspects of this multidimensional research problem. The performance of the system is evaluated on multiple texts and the achieved results are quite satisfactory.
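As an illustration of lexicon-based scoring of extracted SentiUnits, the following minimal sketch scores the head adjectives of SentiUnits against a tiny romanized lexicon; the entries and the aggregation rule are placeholders, not the resources developed in the paper.

    # Minimal sketch of lexicon-based SentiUnit scoring; illustrative only.
    SENTI_LEXICON = {
        # word -> polarity (+1 positive, -1 negative); a real Urdu lexicon
        # would store Urdu-script adjectives annotated by hand.
        "acha": +1,     # "good"
        "bura": -1,     # "bad"
    }

    def score_sentiunits(sentiunits):
        """Sum the polarities of the head adjectives of the extracted SentiUnits."""
        return sum(SENTI_LEXICON.get(unit["head"], 0) for unit in sentiunits)

    # Usage: units produced by a shallow parser, e.g.
    # units = [{"head": "acha", "modifiers": []}]
    # polarity = score_sentiunits(units)   # > 0 positive, < 0 negative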
Artificial Intelligence Review | 2014
Afraz Z. Syed; Muhammad Aslam; Ana Maria Martinez-Enriquez
This paper presents a grammatically motivated sentiment classification model applied to a morphologically rich language: Urdu. The morphological complexity and flexibility in the grammatical rules of this language require an improved or altogether different approach. We emphasize the identification of SentiUnits, rather than individual subjective words, in the given text. SentiUnits are the sentiment-carrier expressions, which reveal the inherent sentiments of the sentence for a specific target. The targets are the noun phrases about which an opinion is expressed. The system extracts SentiUnits and the target expressions through shallow-parsing-based chunking. A dependency parsing algorithm creates associations between these extracted expressions. For our system, we develop a sentiment-annotated lexicon of Urdu words. Each entry of the lexicon is marked with its orientation (positive or negative) and an intensity (force of orientation) score. For the evaluation of the system, two corpora of reviews, from the domains of movies and electronic appliances, are collected. The results of the experimentation show that we achieve state-of-the-art performance in sentiment analysis of Urdu text.
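The following sketch illustrates, under assumed field names and values, how lexicon entries carrying both an orientation and an intensity score can be aggregated per target once the dependency parser has associated SentiUnits with their noun-phrase targets; it is not the paper's implementation.

    # Sketch of orientation + intensity lexicon entries and per-target scoring;
    # values and field names are assumptions for illustration.
    LEXICON = {
        "khubsurat": {"orientation": +1, "intensity": 0.9},   # "beautiful"
        "mehnga":    {"orientation": -1, "intensity": 0.6},   # "expensive"
    }

    def classify_target(associations):
        """associations: list of (target_phrase, sentiunit_head) pairs produced
        by dependency parsing. Returns target -> signed sentiment score."""
        scores = {}
        for target, head in associations:
            entry = LEXICON.get(head, {"orientation": 0, "intensity": 0.0})
            scores[target] = scores.get(target, 0.0) + entry["orientation"] * entry["intensity"]
        return scores

    # e.g. classify_target([("camera", "khubsurat"), ("camera", "mehnga")])
    # -> {"camera": about +0.3}, i.e. mildly positive overall for the target "camera".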
Mexican International Conference on Artificial Intelligence | 2008
Shafqat M. Virk; Aslam Muhammad; Ana Maria Martinez-Enriquez
This paper studies different vehicle fault prediction techniques, using artificial neural network and fuzzy logic based models. With increasing demands for efficiency and product quality, as well as the progressing integration of automatic control systems in high-cost mechatronic and safety-critical processes, monitoring is necessary to detect and diagnose faults using symptoms and related data. However, beyond preventive maintenance services, it is viable to integrate fault prediction services. Thus, we studied different parameters to model a fault prediction service. This service not only helps to predict faults but is also useful for taking precautionary measures to avoid tangible and intangible losses.
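To make the idea of a rule-based fault prediction service concrete, the following sketch uses simple fuzzy-style memberships over a few assumed symptoms (engine temperature, oil pressure, vibration). The thresholds, membership shapes, and aggregation are illustrative, not the model studied in the paper.

    # Illustrative fuzzy-style rules mapping sensor symptoms to a fault-risk
    # score; all parameters are assumptions.
    def risk_high(value, low, high):
        """Ramp membership: 0 below `low`, 1 above `high`, linear in between."""
        if value <= low:
            return 0.0
        if value >= high:
            return 1.0
        return (value - low) / (high - low)

    def fault_risk(engine_temp_c, oil_pressure_kpa, vibration_g):
        """Combine per-symptom memberships with a max (fuzzy OR) aggregation."""
        overheating = risk_high(engine_temp_c, 95, 115)
        low_oil     = risk_high(-oil_pressure_kpa, -250, -150)   # risk rises as pressure drops
        vibration   = risk_high(vibration_g, 0.5, 2.0)
        return max(overheating, low_oil, vibration)

    # e.g. fault_risk(110, 180, 0.8) -> 0.75, a raised risk suggesting
    # precautionary maintenance before an actual failure.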
International Workshop on Groupware | 2004
Jesús Favela; Hiroshi Natsu; Cynthia B. Pérez; Omar Robles; Alberto L. Morán; Raul Romero; Ana Maria Martinez-Enriquez; Dominique Decouchant
Pair programming is an Extreme Programming (XP) practice where two programmers work on a single computer to produce an artifact. Empirical evaluations have provided evidence that this technique results in higher quality code in half the time it would take an individual programmer. Distributed pair programming could facilitate opportunistic pair programming sessions with colleagues working at remote sites. In this paper we present the preliminary results of an empirical evaluation of the COPPER collaborative editor, developed explicitly to support pair programming. The evaluation was performed under three different conditions: pairs working collocated on a single computer; distributed pairs working in application-sharing mode; and distributed pairs using collaboration-aware facilities. In all three cases the subjects used the COPPER collaborative editor. The results support our hypothesis that distributed pairs can find the same number of errors as their collocated counterparts. However, no evidence was found that the pairs that used collaboration awareness services had better code comprehension, as we had also hypothesized.
11th Annual High Capacity Optical Networks and Emerging/Enabling Technologies (Photonics for Energy) | 2014
Muhammad Adnan; Muhammad Afzal; Muhammad Aslam; Roohi Jan; Ana Maria Martinez-Enriquez
This paper emphasizes the importance of big data problems and their solution through cloud computing. Knowledge embedded in the big data generated by sensors, personal computers, and mobile devices is compelling many companies to spend millions of dollars on information and knowledge extraction, in order to make intelligent decisions in time for the growth of their businesses. Google BigQuery, Rackspace Big Data Cloud, and Amazon Web Services are some platforms that provide limited solutions and infrastructures for dealing with big data problems. However, our study motivates IT companies to use the open source Hadoop architecture to develop cloud systems for reliable distributed computing, so as to process their large data sets efficiently and effectively. Our main guideline is to solve big data problems using a company's own infrastructure while integrating various other big data infrastructures into its cloud. In addition, the Hadoop MapReduce technique can be implemented on clusters within and across private and public clouds.
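As a concrete illustration of the MapReduce style the abstract recommends running on Hadoop clusters, the sketch below is a minimal word count written in the mapper/reducer form accepted by Hadoop Streaming; cluster setup and job submission are omitted, and the local pipeline at the bottom only simulates the shuffle/sort phase.

    # Minimal word-count sketch in MapReduce form; illustrative only.
    import sys
    from itertools import groupby

    def mapper(lines):
        # Map phase: emit (word, 1) for every word on every input line.
        for line in lines:
            for word in line.split():
                yield word, 1

    def reducer(pairs):
        # Reduce phase: pairs arrive sorted by key, as Hadoop guarantees
        # between phases; sum the counts per word.
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            yield word, sum(count for _, count in group)

    if __name__ == "__main__":
        # Local simulation of the map -> shuffle/sort -> reduce pipeline.
        mapped = sorted(mapper(sys.stdin), key=lambda kv: kv[0])
        for word, total in reducer(mapped):
            print(f"{word}\t{total}")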
Mexican International Conference on Artificial Intelligence | 2011
Afraz Z. Syed; Muhammad Aslam; Ana Maria Martinez-Enriquez
This paper investigates and proposes a treatment of the effect of phrase-level negation on sentiment analysis of Urdu text-based reviews. Negation acts as a valence shifter, flipping or switching the inherent sentiments of the subjective terms in opinionated sentences. The presented approach focuses on the subjective phrases called SentiUnits, which are made up of subjective terms (adjectives), their modifiers, conjunctions, and negation. The final effect of these phrases is computed according to the given model. The analyzer takes one sentence from the given review, extracts the constituent SentiUnits, computes their overall effect (polarity), and then calculates the final sentence polarity. Using this approach, the effect of negation is handled within these subjective phrases. The main contribution of the research is to deal with a morphologically rich and resource-poor language, and despite being a pioneering effort in handling negation for sentiment analysis of Urdu text, the results of the experimentation are quite encouraging.
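A minimal sketch of the phrase-level negation handling described above: a negation marker found inside a SentiUnit flips the polarity of that unit before sentence-level aggregation. The romanized markers and polarity values are illustrative placeholders, not the paper's lexicon.

    # Sketch of phrase-level negation handling inside SentiUnits; illustrative only.
    NEGATION_MARKERS = {"nahi", "na"}        # common Urdu negation particles (romanized)
    POLARITY = {"acha": +1, "bura": -1}      # "good", "bad"

    def sentiunit_polarity(tokens):
        """tokens: the words of one SentiUnit; returns its signed polarity."""
        score = sum(POLARITY.get(tok, 0) for tok in tokens)
        negated = any(tok in NEGATION_MARKERS for tok in tokens)
        return -score if negated else score

    def sentence_polarity(sentiunits):
        """Sum the polarities of all SentiUnits extracted from one sentence."""
        return sum(sentiunit_polarity(unit) for unit in sentiunits)

    # e.g. sentence_polarity([["acha", "nahi"]]) -> -1 ("not good" reads negative)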
Mexican International Conference on Artificial Intelligence | 2010
Ali Zulfiqar; Aslam Muhammad; Ana Maria Martinez-Enriquez; Gonzalo Escalada-Imaz
Not every voice/speech feature extraction and modeling technique is suitable for all types of environments. In many real-life applications, it is not possible to use every feature extraction and modeling technique to design a single classifier for speaker identification, because doing so makes the system complex. So, instead of exploring more techniques or complicating the system, it is more reasonable to build classifiers from existing techniques and then combine them using different combination schemes to enhance the performance of the system. Thus, this paper describes the design and implementation of a VQ-HMM based multiple classifier system using different combination techniques. The results show that the system combined using the confusion matrix significantly improves the identification rate.
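As an illustration of confusion-matrix-based classifier combination, the sketch below weights each classifier's vote by its per-class accuracy estimated from a validation confusion matrix; the weighting rule is an assumption, not necessarily the exact combination scheme used in the paper.

    # Sketch of combining two speaker-identification classifiers (e.g. VQ-based
    # and HMM-based) using confusion-matrix-derived weights; illustrative only.
    import numpy as np

    def per_class_accuracy(confusion):
        """Rows = true speaker, columns = predicted speaker."""
        confusion = np.asarray(confusion, dtype=float)
        return np.diag(confusion) / confusion.sum(axis=1)

    def combine(predictions, confusions, n_speakers):
        """predictions: one predicted speaker index per classifier.
        confusions: the matching confusion matrices from a validation set."""
        votes = np.zeros(n_speakers)
        for pred, conf in zip(predictions, confusions):
            votes[pred] += per_class_accuracy(conf)[pred]   # trust reliable classifiers more
        return int(np.argmax(votes))

    # e.g. two classifiers disagreeing on a test utterance:
    # combine([0, 1], [vq_confusion, hmm_confusion], n_speakers=3)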