Adolfo Guzmán Arenas
Instituto Politécnico Nacional
Publication
Featured research published by Adolfo Guzmán Arenas.
Polibits | 2015
Benina Velázquez Ordoñez; Jesús Manuel Olivares Ceja; Miguel Patiño Ortiz; Julián Patiño Ortiz; Adolfo Guzmán Arenas
Abstract — It has been observed that, in some applications that integrate information from several data sources, inconsistencies can occur in some cases, while in others there is no entity in which to store the data. Some inconsistencies arise because the data are expressed in a language different from the one used in the repository, or because different units of measure are used. In this article, the proposal uses rules during data integration that try to preserve consistency; in other cases the rules imply modifications to the schema. The object-oriented model was selected because its features facilitate the reuse of classes. The example database uses data obtained from heterogeneous Web sources in the domain of computer equipment. The integration involves entities, attributes, values, and units of measure. This proposal focuses on content, as an alternative to the integration of data schemas.
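The abstract describes rules that resolve language and unit-of-measure mismatches while integrating records from heterogeneous Web sources. The sketch below is a minimal illustration of that idea and not the paper's implementation; the rule tables, attribute names, and conversion factors are assumptions chosen for the example.

```python
# A minimal sketch (not the paper's implementation) of rule-based data
# integration: before merging a record from a heterogeneous source into the
# repository, rules normalize units of measure and translate attribute names.
# All rule tables and field names below are illustrative assumptions.

UNIT_RULES = {
    ("weight", "lb"): lambda v: v * 0.453592,   # pounds -> kilograms
    ("screen", "in"): lambda v: v * 2.54,       # inches -> centimeters
}

# Attribute names as they appear in a Spanish-language source, mapped to the
# repository's (English) schema.
NAME_RULES = {"precio": "price", "peso": "weight", "pantalla": "screen"}


def integrate(record: dict) -> dict:
    """Apply name and unit rules so the record is consistent with the repository."""
    merged = {}
    for attr, (value, unit) in record.items():
        attr = NAME_RULES.get(attr, attr)        # resolve language mismatch
        convert = UNIT_RULES.get((attr, unit))   # resolve unit mismatch, if any
        merged[attr] = convert(value) if convert else value
    return merged


if __name__ == "__main__":
    # A record scraped from a Web source about computer equipment.
    source_record = {"peso": (4.2, "lb"), "pantalla": (15.6, "in"), "precio": (799, "usd")}
    print(integrate(source_record))   # weights in kg, screen size in cm, price untouched
```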
Lecture Notes in Computer Science | 2004
Adolfo Guzmán Arenas; Victor-Polo de Gyves
BiblioDigital® is a network of reservoirs (R) of text documents. Each document exists primarily in one R, with possible duplicates in other Rs. Each R sits on its own server. Each document is indexed in three ways: by themes (a vocabulary controlled by that R's librarian); by each word in the document; and by the concepts the document covers (using Clasitex®). Each R contains the global index (of all Rs), so that each R can provide the following services: browsing by themes, by concepts, by words, by metadata, and by Boolean combinations of the above. BiblioDigital also allows subscription to a personal news service through a user interest profile, and it combs the Web for documents that could fall under the themes or topics contained in its indices and indexes them, thus enriching its knowledge content.
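As a rough illustration of the indexing scheme described above (three indices per reservoir, combinable with Boolean operations), here is a minimal sketch. The class name, fields, and example documents are assumptions; it does not reproduce BiblioDigital's actual code, and Clasitex's concept extraction is only simulated by a list passed in by the caller.

```python
# A minimal sketch, under assumed data structures, of one reservoir:
# each document is indexed three ways (themes, words, concepts), and queries
# combine the indices with Boolean set operations.

from collections import defaultdict


class Reservoir:
    def __init__(self):
        # Each index maps a key (theme, word, or concept) to a set of doc ids.
        self.by_theme = defaultdict(set)
        self.by_word = defaultdict(set)
        self.by_concept = defaultdict(set)

    def add(self, doc_id, text, themes, concepts):
        for theme in themes:
            self.by_theme[theme].add(doc_id)
        for word in text.lower().split():
            self.by_word[word].add(doc_id)
        for concept in concepts:            # e.g. concepts a tool like Clasitex would find
            self.by_concept[concept].add(doc_id)

    def query(self, themes=(), words=(), concepts=()):
        """Boolean AND over the requested keys across the three indices."""
        sets = ([self.by_theme[t] for t in themes] +
                [self.by_word[w] for w in words] +
                [self.by_concept[c] for c in concepts])
        return set.intersection(*sets) if sets else set()


r = Reservoir()
r.add("doc1", "digital libraries and indexing", ["libraries"], ["information retrieval"])
r.add("doc2", "indexing medical images", ["vision"], ["pattern recognition"])
print(r.query(words=["indexing"], themes=["libraries"]))   # {'doc1'}
```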
IEEE Latin America Transactions | 2015
Jennifer Lynn Reynoso Munoz; Alma Delia Cuevas Rasgado; Farid Garcia Lamont; Adolfo Guzmán Arenas
This paper focuses on the representation of magnetic resonances of different parts of the human body, such as knees, spinal column, arms, elbows, etc., using ontologies. First, it maps the resonance images into a multimedia database. Then, automatically, using the SIFT pattern-recognition algorithm, descriptors of the images stored in the database are extracted in order to recover data useful to the user; it uses ontologies as an artificial intelligence tool and, in consequence, reduces the generation of useless data. Why do we think this is an interesting task? Because, if the user requires information about any of these topics, or (s)he has some illness or needs to undergo a magnetic resonance, this tool will show him/her images and text that convey a better understanding, helping to obtain useful conclusions. Artificial intelligence techniques are used, such as machine learning, knowledge representation, and pattern recognition. The ontological relations introduced here are based on the common representation of language, using definition dictionaries, Roget's thesaurus, synonym dictionaries, and other resources. The system generates an output in the OM ontological language [1]. This language represents a structure to which our system adds the data scanned by the SIFT algorithm. The tests have been made in Spanish; however, thanks to the portability of our system, it is possible to extend the method to any language.
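A minimal sketch of the descriptor-extraction step the abstract describes, using OpenCV's SIFT implementation as a stand-in; the paper's own pipeline and its OM ontology output are not reproduced, and the file paths, concept names, and matching heuristic are assumptions.

```python
# A minimal sketch (not the authors' system): SIFT descriptors are computed for
# each resonance image and stored under an ontology concept (e.g. "knee"), so a
# query image can be matched against concepts rather than raw files.
# Assumes OpenCV >= 4.4 (cv2.SIFT_create); file paths are illustrative.

import cv2

sift = cv2.SIFT_create()
concept_descriptors = {}   # ontology concept -> list of descriptor arrays


def index_image(path: str, concept: str) -> None:
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(image, None)
    concept_descriptors.setdefault(concept, []).append(descriptors)


def best_concept(query_path: str) -> str:
    """Return the concept whose stored images best match the query image."""
    image = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, query_desc = sift.detectAndCompute(image, None)
    matcher = cv2.BFMatcher()
    scores = {}
    for concept, descriptor_sets in concept_descriptors.items():
        good = 0
        for stored in descriptor_sets:
            # Lowe's ratio test keeps only distinctive matches.
            for m, n in matcher.knnMatch(query_desc, stored, k=2):
                if m.distance < 0.75 * n.distance:
                    good += 1
        scores[concept] = good
    return max(scores, key=scores.get)


index_image("mri_knee_01.png", "knee")
index_image("mri_spine_01.png", "spinal_column")
print(best_concept("mri_unknown.png"))
```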
Lecture Notes in Computer Science | 2004
Adolfo Guzmán Arenas
It is a surprise to see how, as years go by, two activities so germane to our discipline, (1) the creation of quality software, and (2) the quality teaching of software construction, and more generally of Computer Science, are surrounded or covered, little by little, by beliefs, attitudes, "schools of thought," superstitions and fetishes rarely seen in a scientific endeavor. Each day, more people question them less frequently, so that they become "everyday truths" or "standards to observe and demand." I have the feeling that I am in the minority in this wave of believers and beliefs, and that my viewpoints are highly unpopular. I dare to express them because I fail to see enough faults in my reasoning and reasons, and because perhaps there exist other "believers" not so convinced about these viewpoints, so that, perhaps, we will discover that "the emperor had no clothes."

1. Myths and beliefs about the production of quality software

This section lists several "general truths," labeled A, B, ..., G, concerning the quality of software, and tries to ascertain whether they are reasonable assertions ("facts," sustainable opinions) or myths.

1.1 About measuring software quality

A. It is possible to measure the main attributes that characterize good quality software. The idea here is that software quality can be characterized by certain attributes: reliability, flexibility, robustness, comprehension, adaptability, modularity, complexity, portability, usability, reuse, efficiency... and that it is possible to measure each of these and, therefore, to characterize or measure the quality of the software under examination. To ascertain whether point A is a fact or a myth, let us analyze three facets of it.

1) It is possible to measure the above attributes subjectively, by asking the opinion of people who have used the software in question.

Comment 1. Opinions by experienced users are reliable. That is, (1) is not a myth, but something real. It is easy to agree that a program can be characterized by the above attributes (or a similar list). It is also convincing that the opinions of a group of qualified users with respect to the quality, ergonomics, portability... of a given piece of software are reliable and worth taking into account (subjective, but reliable, opinions).

2) Another practice is to try to measure the above attributes objectively, by measuring surrogate attributes when the real attribute is difficult to measure (Myth B below).

Comment 2. Measuring surrogate attributes. Measuring the height of a water tank when one wishes to measure its volume is risky. Objective (accurate) measurements of surrogate attributes may be possible, but to think that these measures are proportional to the real attribute is risky. "If you cannot measure the beauty of a face, measure the length of the nose, the color of the eyes..." If you cannot measure the complexity of a program, measure the degree of nesting in its formulas and equations, and say that the two are directly related. More in my comments on Myth B.

3) Finally, instead of measuring the quality of a piece of software, go ahead and measure the quality of the manufacturing process of that software: if the building process has quality, no doubt the resulting software should have quality, too (discussed below as Myth C).

Comment 3. Measuring the process instead of measuring the product. In old disciplines (manufacturing of steel hinges, leather production, wine production, cooking...), where there are hundreds of years of experience, and which are based on established sciences (Physics, Chemistry...), it is possible to design a process that guarantees the quality of the product: a process to produce good leather, let us say. It is also possible to (objectively) measure the quality of the resulting product, and to adapt the process, modifying it to fix errors (deviations) in the product quality, for instance to obtain a more elastic leather. Our problem is that it is not possible to do that with software. We do not know which processes are good for producing good quality software. We do not know what part of the process to change in order, let us say, to produce software with less complexity or with greater portability. More in my comments on Myth C.

B. There exists a reliable measurement for each attribute. For each attribute to be measured, there exists a reliable, objective measurement that can be carried out. The idea is that, if the original attribute is difficult to measure (or we do not know how to measure it), we measure another attribute, correlated with the first, and report the second measurement as proportional to, or a substitute for, the measure of the original attribute.

1. Reliability (reliable software, few errors): measure instead the number of error messages in the code; the more there are, the fewer errors that software has.
2. Flexibility (malleability to different usage, to different environments) or adaptability: measure instead the number of standards to which that software adheres.
3. Robustness (few drastic failures, the system rarely goes down): measure through tests and long use (subjective measurement).
4. Comprehension (ability to understand what the system does): measure instead the extent of comments in the source code and the size of its manuals.
5. The size of a program is measured in bytes, the space it occupies in memory (this measurement raises no objection; we measure what we want to measure).
6. Speed of execution is measured in seconds (likewise, no objection; we measure what we want to measure).
7. Modularity: count the number of source modules forming the program.
8. Program complexity (how difficult it is to understand the code): measure instead the level of nesting in expressions and commands ("cyclomatic complexity").
9. Portability (how easy it is to port a piece of software to a different operating system): ask users who have done such portings (subjective measurement).
10. Program usability (it is high when the program brings large added value to our work: "it is essential to have it"): measure the percentage of our needs that the program covers (subjective measurement).
11. Program reuse: measure how many times (parts of) the program have been used in other software development projects (objective measurement, but only obtained in hindsight).
12. Ease of use (ergonomics) characterizes programs that are easy to learn, tailored to our intuitive ways of carrying out certain tasks: measure instead the quantity of screens that interact with the user, and their sophistication.

Comment 4. Measuring surrogate attributes. These "surrogate measurements" can produce figures irrelevant to the quality we are really trying to measure. For instance, the complexity of a program will be difficult to measure using point 8 in languages that use no parentheses for nesting. Likewise, it is not clear that software with long manuals is easier to comprehend (point 4). Measuring the temperature of a body when one wants to measure the amount of heat (calories) in it is incorrect and will produce false results: a very hot needle holds less heat than a lukewarm anvil.

Comment 5. It is true that in the production of other goods, say iron hinges, it is easy to list the qualities that a good hinge must possess: hardness, resistance to corrosion... And it is also easy to measure those qualities objectively. Why is it difficult, then, to measure the equivalent quantities for software? Because hinges have been produced since before Pharaonic times, humankind has accumulated experience with them, and their manufacture is based on Physics, a consolidated science more than 2,000 years old. Physics has defined units (mass, hardness, tensile strength...) capable of objective measurement. Moreover, Physics often gives us equations (f = ma) that these measurements must obey. In contrast, Computer Science has existed for only 60 years, and thus almost none of its dimensions (reliability, ease of use...) are (yet) susceptible to objective measurement. Computer Science is not a science yet; it is an art or a craft. (Remember the title of Donald E. Knuth's book, "The Art of Computer Programming." Besides, we should not be afraid that our science begins as an art or a craft: picture Medicine when it was only 60 years old, when the properties of lemon tea were just being discovered and physicians talked for a long time of fluids, effluvia, bad air, and witchcraft. With time, our discipline will become a science.) Nevertheless, it is tempting to apply to the characterization of software (of its quality, say) methods that belong to, and are useful in, these more mature disciplines, but that are not (yet) applicable in our emerging science. We are not aware that methods that work in leather production do not work in software creation. Indeed, it is useful at times to speak of software creation, not of software production, to emphasize the fact that software building is an art, dominated by inspiration, good luck... (see Comment 7).

1.2 Measuring the process instead of measuring the product

An indirect manner of ascertaining the quality of a piece of software is to review the quality of the process producing it.

C. Measuring the quality of the process, not the quality of the product. Instead of measuring the quality of the software product, let us measure the quality of its construction process; having a good process implies producing quality software.

Comment 6. It is tempting to claim that a "good" process produces good quality software, and that, therefore, deviations of programmers with respect to the given process should be measured and corrected. The problem here is that it is not possible to say which process will produce good quality software. For instance, if I want to produce portable software, what process should I introduce, versus a process for software whose ease of use I want to emphasize? Thus, the definition of the process becomes very subjective, an act of faith. Processes are used that sound and look reasonable, or that
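The excerpt above lists surrogate metrics, for example nesting depth standing in for program complexity (point 8). The sketch below exists only to make that idea of a surrogate measurement concrete; the metric and the snippets are illustrative assumptions, not anything proposed by the author.

```python
# A surrogate "complexity" metric: the deepest level of bracket nesting in a
# piece of source text. It is objective and easy to compute, but it measures
# the surrogate, not the quality we actually care about.

def max_nesting_depth(source: str) -> int:
    """Deepest level of (), [] or {} nesting in a piece of source text."""
    depth = current = 0
    for char in source:
        if char in "([{":
            current += 1
            depth = max(depth, current)
        elif char in ")]}":
            current -= 1
    return depth


# Two quite different snippets receive the same score, and code written in a
# style or language with little bracket nesting would score low no matter how
# hard it is to understand, which is the risk the excerpt points out.
print(max_nesting_depth("f(g(h(x)))"))          # 3
print(max_nesting_depth("a = [[0, [1]], 2]"))   # 3
```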
Soluciones avanzadas | 1996
Adolfo Guzmán Arenas
Computación y Sistemas; Vol 8, No 004 (2005) | 2011
Alexander F. Gelbukh; Adolfo Guzmán Arenas; Grigori Sidorov
Computación y Sistemas | 2009
Alexander F. Gelbukh; Adolfo Guzmán Arenas; Grigori Sidorov
Research on computing science | 2006
Alma Delia Cuevas Rasgado; Adolfo Guzmán Arenas
Archive | 2000
Jesús Manuel Olivares Ceja; Araceli Demetrio Aguirre; Adolfo Guzmán Arenas
IPN ciencia, arte: cultura | 2000
Adolfo Guzmán Arenas