
Publications


Featured research published by Alejandro Bia.


International Journal of Information Technology and Web Engineering | 2007

Tool Support for Model-driven Development of Web Applications

Jaime Gómez; Alejandro Bia; Antonio Parraga

This article describes the engineering foundations of VisualWADE, a CASE tool to automate the production of Web applications. VisualWADE follows a model-driven approach focusing on requirements analysis, high level design, and rapid prototyping. In this way, an application evolves smoothly from the first prototype to the final product, and its maintenance is a natural consequence of development. The article also discusses the lessons learned in the development of the tool and its application to several case studies in the industrial context.


International Journal of Neural Systems | 2001

ALOPEX-B: A NEW, SIMPLER, BUT YET FASTER VERSION OF THE ALOPEX TRAINING ALGORITHM

Alejandro Bia

Experimenting with some changes and simplifications to the Alopex algorithm, we obtained a new, faster version (Alopex-B) that also shows lower failure rates on training attempts. Like Alopex, our version is network-architecture independent, does not require error or transfer functions to be differentiable, has a high potential for parallelism, and is stochastic (which helps avoid local minima); but unlike Alopex it follows no annealing scheme and uses fewer parameters, which makes it simpler to implement and to use.
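The correlation-guided update at the heart of Alopex-style training can be sketched as follows. This is a deliberately simplified illustration, not the published Alopex-B procedure: the fixed step size `delta` and flip probability `eta` are assumptions, and a real run would update all network weights in parallel against a training-set error.

```python
import random

def alopex_b_step(dw_prev, dE, delta=0.01, eta=0.05):
    """One correlation-guided weight update in the spirit of Alopex.

    The sign of dw_prev * dE estimates the sign of the error gradient:
    if the last weight change correlated with an error increase, reverse
    direction; otherwise keep going. A small random flip (eta) supplies
    the stochasticity that helps escape local minima. No differentiable
    error function is required, only the measured error change dE.
    """
    step = -delta if dw_prev * dE > 0 else delta
    if random.random() < eta:  # occasional random flip
        step = -step
    return step

# Demo: minimize E(w) = w^2 using only observed error changes.
random.seed(0)
w, dw, E_prev = 2.0, 0.01, 4.0
for _ in range(2000):
    w += dw
    E = w * w
    dw = alopex_b_step(dw, E - E_prev)
    E_prev = E
# w should now hover within a few step sizes of the minimum at 0
```

Because the rule only compares the signs of the last weight change and the last error change, it applies unchanged to any network architecture and any (even non-differentiable) error measure.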


web information systems engineering | 2005

Tool support for model-driven development of web applications

Jaime Gómez; Alejandro Bia; Antonio Parraga

This paper describes the engineering foundations of VisualWADE, a CASE tool to automate the production of Web applications. VisualWADE follows a model-driven approach focusing on requirements analysis, high level design, and rapid prototyping. In this way, an application evolves smoothly from the first prototype to the final product, and its maintenance is a natural consequence of development. The paper also discusses the lessons learned in the development of the tool and its application to several case studies in the industrial context.


european conference on research and advanced technology for digital libraries | 2001

Using Copy-Detection and Text Comparison Algorithms for Cross-Referencing Multiple Editions of Literary Works

Arkady B. Zaslavsky; Alejandro Bia; Krisztián Monostori

This article describes joint research between Monash University and the University of Alicante, in which software originally meant for plagiarism and copy detection in academic works is successfully applied to the comparative analysis of different editions of literary works. The experiments were performed on Spanish texts from the Miguel de Cervantes digital library. The results have proved useful for literary and linguistic research, automating part of the tedious task of comparative text analysis. In addition, other interesting uses were identified.
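The core idea, finding passages shared by two editions, can be illustrated with Python's standard difflib; this stands in for the dedicated copy-detection algorithms the paper actually evaluates, and the sample sentences and `min_words` threshold are assumptions for the example:

```python
from difflib import SequenceMatcher

def shared_passages(edition_a, edition_b, min_words=4):
    """Report word runs common to two editions of a text.

    Illustrative only: stdlib difflib replaces the specialized
    copy-detection machinery described in the paper.
    """
    a, b = edition_a.split(), edition_b.split()
    matcher = SequenceMatcher(None, a, b, autojunk=False)
    return [" ".join(a[m.a:m.a + m.size])
            for m in matcher.get_matching_blocks()
            if m.size >= min_words]

# Two editions that differ by inserted words still share long runs:
first = "en un lugar de la mancha vivia un hidalgo de lanza en astillero"
second = "en un lugar de la mancha de cuyo nombre vivia un hidalgo"
passages = shared_passages(first, second)
```

Cross-referencing editions then amounts to recording, for each shared run, its position in both texts, which is exactly the output a plagiarism detector already produces.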


Literary and Linguistic Computing | 2001

The Miguel de Cervantes Digital Library: the Hispanic Voice on the Web

Alejandro Bia; Andrés Pedreño

This paper describes the philosophy behind what represents one of the most ambitious projects of its kind ever to have been undertaken in the Spanish-speaking world: the Miguel de Cervantes Digital Library (http://cervantesvirtual.com/). It explains the reasons behind its creation, the private-public sector alliance that has made it possible, and the new ground being explored by its creators in terms of the new services it offers to its audience worldwide and of innovative application of digital methods. The final section of the paper deals with the technical underpinnings of this project at present and in the future, reporting continuing research and development activities being carried out at the Miguel de Cervantes Digital Library in the field of text markup and derived applications, such as automatic transformation of documents to different formats and complex searches performed upon the small textual objects defined by the markup scheme. A brief survey of works done on Named Entity Recognition that can be applied to automatic markup is also included. Finally, there are some comments on the research lines we intend to follow concerning information retrieval and filtering from structurally marked-up texts. This is a fascinating period in the history of libraries and publishing. For the first time, it is possible to build large-scale services where collections of information are stored in digital formats and retrieved over the networks (Arms, 2000).


european conference on research and advanced technology for digital libraries | 2010

Using mind maps to model semistructured documents

Alejandro Bia; Rafael Muñoz; Jaime Gómez

We often use UML diagrams for our software development projects, and also for modeling XML DTDs and Schemas (1), finding that although UML diagrams can effectively be made to represent DTDs and Schemas (either using Class or Component diagrams), in real practice complex DTDs and Schemas produce unreadable, unmanageable UML diagrams. Recently we started exploring other types of diagrams and unconventional methods that can be useful both for designing and modeling semistructured data and as teaching aids or thinking tools. This experience also served to open our minds to tools and methods other than the recognized mainstream practices. In this paper, we describe how we managed to use Mind Maps and a modified Freemind tool to successfully model, design, modify, import, and export XML DTDs, XML Schemas (XSD and RNG), and also XML document instances, obtaining very manageable, easily comprehensible, folding diagrams. In this way, we converted a general-purpose mind-mapping tool into a very powerful tool for XML vocabulary design and simplification (and also for teaching XML markup, or for presentation purposes).


international conference on web engineering | 2004

Personalizing digital libraries at design time: the Miguel de Cervantes Digital Library case study

Alejandro Bia; Irene Garrigós; Jaime Gómez

In this article we describe our experience in the development of a personalizable dissemination model for the Miguel de Cervantes Digital Library's Web-based newsletter service, which combines adaptive with adaptable personalization techniques, being capable of ranking news according to navigation-inferred preferences and then filtering them according to a user-given profile. We explain how Web engineering design techniques have been applied to make that service evolve into a more adaptive personalization approach, obtaining more effective results. The work is presented in the context of the OO-H [5] web design method.


Literary and Linguistic Computing | 2005

A Repository Management System for TEI Documents

Amit Kumar; Susan Schreibman; Stewart Arneil; Martin Holmes; Alejandro Bia; John A. Walsh

Digital Humanities (DH) and Digital Library (DL) projects are complex systems that require specialized programming skills. Many encoders cannot take their work to the next level by transforming their collections of structured XML texts into a web-searchable and browsable database. Often teams of text encoders are able to encode their texts with a high degree of sophistication, but unless they have funds to hire a programmer, their collections far too often remain on local disk storage away from public access. The system aims to relieve some of this burden by providing the tools to manage an extensible, modular, and configurable XML-based repository which will house, search, browse, and display documents encoded in TEI-Lite on the World Wide Web. It provides an administrative interface that allows DL and DH administrators to upload and delete documents from a web-accessible repository; analyze XML documents to determine elements for searching and browsing; refine ontology development; select inter- and intra-document links; partition the repository into collections; create backups; generate search, browse, and display pages; customize the interface; and associate XSL transformation scripts and CSS stylesheets to obtain different target outputs (HTML, PDF, etc.).


acm/ieee joint conference on digital libraries | 2001

A versatile facsimile and transcription service for manuscripts and rare old books at the Miguel de Cervantes digital library

Alejandro Bia

The purpose of this poster is to describe our approach to providing facsimiles of manuscripts and old books as one of our DL services, publicly available over the Internet.


european conference on research and advanced technology for digital libraries | 2010

Estimating digitization costs in digital libraries using DiCoMo

Alejandro Bia; Rafael Muñoz; Jaime Gómez

Estimating digitization costs is a very difficult task: exact predictions are hard to make due to the great quantity of unknown factors. However, digitization projects need to have a precise idea of the economic costs and the times involved in the development of their contents. The common practice when we start digitizing a new collection is to set a schedule and a firm commitment to fulfill it (both in terms of cost and deadlines), even before the actual digitization work starts. As with software development projects, incorrect estimates produce delays and cause cost overruns. Based on methods used in Software Engineering for software development cost prediction, like COCOMO and Function Points, and using historical data gathered during five years at the Miguel de Cervantes Digital Library, during the digitization of more than 12,000 books, we have developed a method for time and cost estimates named DiCoMo (Digitization Costs Model) for digital content production in general. This method can be adapted to different production processes, like the production of digital XML or HTML texts using scanning and OCR, followed by human proofreading and error correction, or the production of digital facsimiles (scanning without OCR). The accuracy of the estimates improves with time, since the algorithms can be optimized by making adjustments based on historical data gathered from previous tasks.
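The general COCOMO-style shape such an estimate takes can be sketched as follows; every coefficient and cost driver below is an illustrative placeholder, not a calibrated value from the DiCoMo model:

```python
def estimate_hours(pages, a=1.0, b=1.05, drivers=()):
    """COCOMO-style effort estimate for digitizing a document.

        effort = a * pages**b * (product of cost-driver multipliers)

    `a` and `b` would be fitted to historical production data
    (b > 1 models superlinear cost growth with document size), and
    each driver scales the base effort, e.g. poor print quality or
    dense markup. All numbers here are assumptions for illustration.
    """
    effort = a * pages ** b
    for d in drivers:
        effort *= d
    return effort

# e.g. a 300-page book whose poor print quality is assumed to slow
# OCR correction by 30% (driver = 1.3):
hours = estimate_hours(300, drivers=(1.3,))
```

Recalibrating `a`, `b`, and the driver multipliers against completed tasks is what lets the estimates improve over time, as the abstract describes.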

Collaboration


Dive into Alejandro Bia's collaborations.

Top Co-Authors

Arkady B. Zaslavsky
Commonwealth Scientific and Industrial Research Organisation

Federico Botella
Universidad Miguel Hernández de Elche

Ramón P. Ñeco
Universidad Miguel Hernández de Elche