
Publication


Featured research published by Juan Pazos.


Machines, Computations, and Universality | 2003

Tissue P systems

Carlos Martín-Vide; Gheorghe Păun; Juan Pazos; Alfonso Rodríguez-Patón

Starting from the way inter-cellular communication takes place by means of protein channels (and also from the standard knowledge about neuron functioning), we propose a computing model called a tissue P system, which processes symbols in a multiset rewriting sense, in a net of cells. Each cell has a finite state memory, processes multisets of symbol-impulses, and can send impulses (“excitations”) to the neighboring cells. Such cell nets are shown to be rather powerful: they can simulate a Turing machine even when using a small number of cells, each of them having a small number of states. Moreover, when each cell works in the maximal manner and can excite all the cells to which it can send impulses, the Hamiltonian Path Problem can be solved in linear time. A new characterization of the Parikh images of ET0L languages is also obtained in this framework. Besides such basic results, the paper provides a series of suggestions for further research.
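The multiset-rewriting mechanism the abstract describes can be illustrated with a toy simulation. The cells, symbols, and rules below are invented for illustration and are not taken from the paper, and the step function applies each rule at most once sequentially rather than in the maximally parallel manner the model actually uses:

```python
from collections import Counter

# Hypothetical toy rules, not the paper's: each rule consumes a multiset
# in one cell, produces symbols in that cell, and sends symbols to
# neighbouring cells. Format: (cell, consumed, produced_here, sent),
# where `sent` maps a neighbour index to a multiset of impulses.
rules = [
    (0, Counter({"a": 1}), Counter({"b": 1}), {1: Counter({"a": 1})}),
    (1, Counter({"a": 2}), Counter(), {0: Counter({"c": 1})}),
]

def step(cells):
    """Apply each applicable rule once (a sequential, non-maximal sketch)."""
    for cell, consumed, produced, sent in rules:
        # A rule fires only if the cell holds the whole consumed multiset.
        if all(cells[cell][s] >= n for s, n in consumed.items()):
            cells[cell] -= consumed
            cells[cell] += produced
            for target, out in sent.items():
                cells[target] += out  # impulses travel along the cell net
    return cells

cells = [Counter({"a": 2}), Counter()]
step(cells)
# Cell 0 rewrote one "a" into "b" and excited cell 1 with an "a".
```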


Computing and Combinatorics Conference | 2002

A New Class of Symbolic Abstract Neural Nets: Tissue P Systems

Carlos Martín-Vide; Juan Pazos; Gheorghe Păun; Alfonso Rodríguez-Patón

Starting from the way inter-cellular communication takes place by means of protein channels and also from the standard knowledge about neuron functioning, we propose a computing model called a tissue P system, which processes symbols in a multiset rewriting sense, in a net of cells similar to a neural net. Each cell has a finite state memory, processes multisets of symbol-impulses, and can send impulses (“excitations”) to the neighboring cells. Such cell nets are shown to be rather powerful: they can simulate a Turing machine even when using a small number of cells, each of them having a small number of states. Moreover, when each cell works in the maximal manner and can excite all the cells to which it can send impulses, the Hamiltonian Path Problem can be solved in linear time. A new characterization of the Parikh images of ET0L languages is also obtained in this framework.


Data & Knowledge Engineering | 2000

Knowledge maps: An essential technique for conceptualisation

A. Gómez; Ana Moreno; Juan Pazos; Almudena Sierra-Alonso

The process of conceptualisation is a fundamental problem-solving activity and, hence, is an essential activity for solving the problem of software systems construction. This paper first analyses the process of conceptualisation generally, that is, not as applied specifically to software systems, and establishes a general-purpose conceptualisation process, composed of three activities: analysis, synthesis and holistic testing. A proposed instantiation of this framework for the process of conceptualisation in knowledge-based systems (KBS) construction is then presented. The paper focuses on an activity that is frequently overlooked in conceptualisation, that is, holistic testing, and on a technique that is proposed to address this phase, known as the knowledge map (KM). This technique integrates the static and dynamic perspectives of the reasoning employed by the expert to solve the problem. This paper discusses the foundations and an application of this technique.


IEEE Transactions on Software Engineering | 2004

A methodological framework for viewpoint-oriented conceptual modeling

Javier Andrade; Juan Ares; Rafael García; Juan Pazos; Santiago Rodríguez; Andrés Silva

To solve any nontrivial problem, it first needs to be conceptualized, taking into account the individual who has the problem. However, a problem is generally associated with more than one individual, as is usually the case in software development. Therefore, this process has to take into account different viewpoints about the problem and any discrepancies that could arise as a result. Traditionally, conceptualization in software engineering has omitted the different viewpoints that the individuals may have of the problem and has inherently enforced consistency in the event of any discrepancies, which are considered as something to be systematically rejected. The paper presents a methodological framework that explicitly drives the conceptualization of different viewpoints and manages the different types of discrepancies that arise between them, which become really important in the process. The definition of this framework is generic, and it is therefore independent of any particular software development paradigm. Its application to software engineering means that viewpoints and their possible discrepancies can be considered in the software process conceptual modeling phase. This application is illustrated by means of what is considered to be a standard problem: the IFIP case.


Computers & Education | 2014

A system for knowledge discovery in e-learning environments within the European Higher Education Area - Application to student data from Open University of Madrid, UDIMA

Juan Alfonso Lara; David Lizcano; María-Aurora Martínez; Juan Pazos; Teresa Riera

In today's open and dynamic learning environment, a significant percentage of students prefer flexible learning systems whereby they can reconcile their academic pursuits with their job responsibilities and family obligations. Non-face-to-face educational models, like e-learning (electronic learning), evolved in order to offer such flexibility. E-learning systems have major strengths but also pose major challenges to the educational community. One such challenge is the large spatial and temporal gap between teacher and student, which is an obstacle to student follow-up by teachers. The information generated by virtual learning systems sometimes overwhelms instructors, who are unable to process the data without the support of special-purpose techniques and tools for analysing large dataflows. Student supervision is essential for detecting student behaviours that can lead to course dropout. The use of time analysis techniques promises to be a good option for evaluating educational data. The proposal that we present is able to identify students that are likely to drop out. The proposed system outperforms all of the analysed proposals. The number of students correctly classified by our system exhibits a logarithmic behaviour.


Information & Software Technology | 2013

An architectural model for software testing lesson learned systems

Javier Andrade; Juan Ares; María-Aurora Martínez; Juan Pazos; Santiago Rodríguez; Julio Romera; Sonia Suárez

Context: Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the use of the experience gained from past projects. Software testing can, then, potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management related to several software engineering processes. Objective: We defend the use of a lesson learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, significant weaknesses have been detected in the current systems of this type. The architectural model proposed here for lesson learned systems is designed to try to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lesson learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: The result is a different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork to overcome the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lesson learned systems.
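The lesson structure and retrieval that the abstract mentions could be sketched roughly as follows; every field name and the matching heuristic are hypothetical illustrations, not the paper's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical record for a software-testing lesson learned; the fields
# are illustrative of what such a system might store (see items (i)-(iii)
# in the abstract), not the architectural model's real structure.
@dataclass
class TestingLesson:
    title: str
    context: str          # project/testing context in which it was learned
    problem: str          # what happened and why it mattered
    recommendation: str   # how to reuse the experience
    keywords: list[str] = field(default_factory=list)

def matches(lesson: TestingLesson, query: str) -> bool:
    """Naive keyword retrieval: does the query hit one of the lesson's tags?"""
    return query.lower() in (k.lower() for k in lesson.keywords)

lesson = TestingLesson(
    title="Regression suite flakiness",
    context="Nightly CI runs",
    problem="Timing-dependent tests failed intermittently",
    recommendation="Isolate timing assumptions behind fake clocks",
    keywords=["flaky", "CI", "timing"],
)
matches(lesson, "flaky")  # True
```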


Journal of Systems and Software | 1996

Software engineering and knowledge engineering: towards a common life cycle

Fernando Alonso; Natalia Juristo; José Luis Maté; Juan Pazos

Software and knowledge engineering are increasingly converging into a single life-cycle, as the two engineering disciplines are studied in more depth, and increasingly larger systems are developed in the two fields. In this article, the authors advocate a conical-spiral type life cycle, arranged in two dimensions (the spiral) for the development of software engineering (SE) systems and three dimensions (the conical-spiral) for the knowledge engineering (KE) life cycle. A conventional example for the overall personnel management of a company is also presented, showing how the two branches of engineering are essential and complementary in solving important problems.


International Journal of Software Engineering and Knowledge Engineering | 1995

Trends in Life-Cycle Models for SE and KE: Proposal for a Spiral-Conical Life-Cycle Approach

Fernando Alonso; Natalia Juristo; Juan Pazos

The ten-year gap between the emergence of Software Engineering (SE) and Knowledge Engineering (KE) has led to the two disciplines developing along different methodological lines. In this paper, we point out that, after having passed through a period during which they ignored each other, followed by a competitive phase, the two disciplines have now reached a meeting point. We see the need for a life-cycle model for systems that integrate traditional and knowledge-based software. Besides, software development in the 21st century will entail open requirements and technological tools that will evolve during the life-cycle. Finally, the paper discusses a proposal for a conical-type spiral life-cycle model that seeks to meet all those needs.


Information & Software Technology | 2004

A methodological framework for generic conceptualisation: problem-sensitivity in software engineering

Javier Andrade; Juan Ares; Rafael García; Juan Pazos; Santiago Rodríguez; Andrés Silva

The first step towards developing quality software is to conceptually model the problem raised in its own context. Software engineering, however, has traditionally focused on implementation concepts, and has paid little or no attention to the problem domain. This paper presents a generic methodological framework to guide conceptual modelling, focusing on the problem within its domain. This framework is defined considering aspects related to a generic conceptualisation, and its application to software engineering, illustrated using the IFIP Case, achieves the called-for problem-sensitivity.


Fuzzy Sets and Systems | 2002

An inference engine based on fuzzy logic for uncertain and imprecise expert reasoning

R. O. D'Aquila; C. Crespo; José Luis Maté; Juan Pazos

This paper addresses the development and computational implementation of an inference engine based on a full fuzzy logic, excluding only imprecise quantifiers, for handling uncertainty and imprecision in rule-based expert systems. The logical model exploits some connectives of Łukasiewicz's infinite multi-valued logic and is mainly founded on the work of L.A. Zadeh and J.F. Baldwin. As it is oriented to expert systems, the inference engine was developed to be as knowledge domain independent as possible, while having satisfactory computational efficiency. This is achieved firstly by using the same linguistic term set in every universe of discourse. Thus, it is possible to add a dictionary to the knowledge base, which translates the usual linguistic values of the domain to those of the term set. Secondly, the logical operations of negation and conjunction and the modus ponens rule of inference are implemented exclusively in the truth space. The approach provides, firstly, a realistic and unambiguous solution to the combination of evidence problem and, secondly, offers two alternative versions of implementation. The full version uses the algorithms of the operations involved. In a more efficient version, which places a small constraint on the use of linguistic modifiers and is confined to knowledge bases whose inference chains are no longer than three links, the above algorithms are replaced by pre-computed tables.
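The Łukasiewicz connectives that the logical model draws on are standard and can be written down directly. The functions below are a textbook sketch of truth-space operations on values in [0, 1], not the authors' implementation, and combining modus ponens premises via the t-norm is an assumption made here for illustration:

```python
# Standard Łukasiewicz connectives on truth degrees in [0, 1].
# This is a generic textbook sketch, not the paper's inference engine.

def lukasiewicz_not(a: float) -> float:
    """Negation: 1 - a."""
    return 1.0 - a

def lukasiewicz_and(a: float, b: float) -> float:
    """Łukasiewicz t-norm (strong conjunction): max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def lukasiewicz_implies(a: float, b: float) -> float:
    """Łukasiewicz implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def modus_ponens(truth_p: float, truth_p_implies_q: float) -> float:
    """Degree of Q obtained from P and P -> Q via the t-norm
    (an illustrative choice of inference combination)."""
    return lukasiewicz_and(truth_p, truth_p_implies_q)

modus_ponens(0.8, 0.9)  # approximately 0.7
```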

Collaboration


Dive into Juan Pazos's collaborations.

Top Co-Authors

Juan Ares, University of A Coruña
Andrés Silva, Technical University of Madrid
David Lizcano, Complutense University of Madrid
Juan Alfonso Lara, Technical University of Madrid
Natalia Juristo, Technical University of Madrid