Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Teodor Georgiev is active.

Publications


Featured research published by Teodor Georgiev.


Biodiversity Data Journal | 2013

Eupolybothrus cavernicolus Komerički & Stoev sp. n. (Chilopoda: Lithobiomorpha: Lithobiidae): the first eukaryotic species description combining transcriptomic, DNA barcoding and micro-CT imaging data

Pavel Stoev; Ana Komerički; Nesrine Akkari; Shanlin Liu; Xin Zhou; Alexander M. Weigand; Jeroen Hostens; Christopher I. Hunter; Scott C Edmunds; David Porco; Marzio Zapparoli; Teodor Georgiev; Daniel Mietchen; David Roberts; Sarah Faulwetter; Vincent S. Smith; Lyubomir Penev

Abstract We demonstrate how a classical taxonomic description of a new species can be enhanced by applying new generation molecular methods, and novel computing and imaging technologies. A cave-dwelling centipede, Eupolybothrus cavernicolus Komerički & Stoev sp. n. (Chilopoda: Lithobiomorpha: Lithobiidae), found in a remote karst region in Knin, Croatia, is the first eukaryotic species for which, in addition to the traditional morphological description, we provide a fully sequenced transcriptome, a DNA barcode, detailed anatomical X-ray microtomography (micro-CT) scans, and a movie of the living specimen to document important traits of its ex-situ behaviour. By employing micro-CT scanning in a new species for the first time, we create a high-resolution morphological and anatomical dataset that allows virtual reconstructions of the specimen and subsequent interactive manipulation to test the recently introduced ‘cybertype’ notion. In addition, the transcriptome was recorded with a total of 67,785 scaffolds, having an average length of 812 bp and N50 of 1,448 bp (see GigaDB). Subsequent annotation of 22,866 scaffolds was conducted by tracing homologs against currently available databases, including Nr, SwissProt and COG. This pilot project illustrates a workflow of producing, storing, publishing and disseminating large data sets associated with a description of a new taxon. All data have been deposited in publicly accessible repositories, such as GigaScience GigaDB, NCBI, BOLD, Morphbank and Morphosource, and the respective open licenses used ensure their accessibility and re-usability.
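The scaffold statistics quoted above (average length 812 bp, N50 of 1,448 bp) follow the standard assembly convention: the N50 is the scaffold length at which the longest scaffolds together cover at least half of the total assembly. A minimal sketch of that computation, using toy lengths rather than the published scaffold set:

```python
def n50(lengths):
    """N50: the length L such that scaffolds of length >= L
    cover at least half of the total assembly length."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0  # empty input

# Toy example: total = 250, half = 125; 80 + 70 = 150 >= 125,
# so the N50 of this set is 70.
print(n50([80, 70, 50, 30, 20]))
```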


Biodiversity Data Journal | 2013

Beyond dead trees: integrating the scientific process in the Biodiversity Data Journal

Vincent S. Smith; Teodor Georgiev; Pavel Stoev; Jordan Biserkov; Jeremy Miller; Laurence Livermore; Edward Baker; Daniel Mietchen; Thomas L.P. Couvreur; Gregory M. Mueller; Torsten Dikow; Kristofer M. Helgen; Jiři Frank; Donat Agosti; David Roberts; Lyubomir Penev

Driven by changes to policies of governments and funding agencies, Open Access to content and data is quickly becoming the prevailing model in academic publishing. Open Access benefits scientists with greater dissemination and citation of their work, and provides society as a whole with access to the latest research. Open Access is, however, only one facet of scholarly communication. Core scientific statements or assertions are intertwined and hidden in the scholarly narratives, and the data underlying these statements are often obscured to the point that replication of results is impossible (Nature Editorial 2012). This is in part a result of the way scientific papers are written as narratives, rather than sources of data. An often-cited reason for the lack of published data is the absence of a reward mechanism for the individuals involved in creating and managing information (Smith 2009, Costello 2009, Vision 2010, McDade et al. 2011, Duke and Porter 2013). Preparing data for publication is a time-consuming activity that few scholars will undertake without recognition from their peers. Data papers are a potential solution to this problem (Chavan and Penev 2011, Chavan and Penev 2013). They allow authors to publish data and receive reward through the traditional citation process. Coupling them with tools that rapidly and simply generate publications will incentivise this behaviour and create a culture of data curation and sharing within the biodiversity science community. If we are going to incentivise the mass publication of data, we also need mechanisms to ensure quality. Traditional peer review is one of the bottlenecks in standard publication practice (Hauser and Fehr 2007, Fox and Petchey 2010). A common criticism of peer review is the lack of transparency and accountability on the part of the reviewers. To cope with the additional volume of papers created by data publication and to move to a more transparent system, we need to rethink peer review.
We need both new methods of reviewing and new tools to automate as much of the review process as possible. This requires a new publishing platform, not just a new journal. An abundance of small isolated datasets does not, however, allow us to address the fundamental problems within the biodiversity science community. These islands of data are only of value if connected and interlinked. The task of interlinking is performed by biodiversity data aggregators like the Global Biodiversity Information Facility (GBIF) and Encyclopedia of Life (EOL), which form the backbone of data-driven biodiversity research. By automating the submission of data to these aggregators, we can increase their value to more than the sum of their parts, making small data big. A renewed appreciation of the value of small data will help to reduce the vast amount of research data that exists only on laptops and memory sticks - data that is often lost when people change roles or retire. Works of potentially very limited length can hold intrinsic value to the community, but are almost impossible to publish in traditional journals chasing impact factors. Examples include single species descriptions, local checklists and software descriptions, or ecological surveys and plot data. An infrastructure that treats datasets of any size as important means we can publish them at any time. There is no need to wait for datasets to reach a critical mass suitable for publication in a traditional journal. Today, we are pleased to announce the official release of the first series of papers published in Biodiversity Data Journal (BDJ). After years of hard work in analyzing, planning and programming the Pensoft Writing Tool (PWT), we now have a publishing platform that addresses the key concerns raised above.
This provides the first workflow to support the full life cycle of a manuscript - from writing through submission, community peer-review, publication and dissemination, all within a single online collaborative environment.

Shortening distance between “data” and “narrative” publishing

Most journals nowadays clearly separate data from narrative (text). Moreover, data publishing through data centres and repositories has almost become a separate sector within the scholarly publishing landscape. BDJ is not a conventional journal, nor is it a conventional “data journal”. It aims to integrate data and text in a single publication by converting several kinds of biodiversity data (e.g., species occurrences, checklists, or data tables) into the text for human-readable use, while simultaneously making data units from the same article harvestable and downloadable. The text itself is marked up and presented in a highly structured and machine readable form. BDJ aims to integrate small data into the text whenever possible. Supplementary data files that underpin graphs, hypotheses and results can also be uploaded on the journal’s website and published with the article. Nonetheless, this is usually not possible for large or complex data, for which we recommend deposition in an established open international repository (for details, see Penev et al. 2011): Large primary biodiversity data sets (e.g., institutional collections of species-occurrence records) should be published with the GBIF Integrated Publishing Toolkit (IPT); small data sets of this kind are imported into the article text through an Excel template, available in PWT. Genomic data should be deposited with INSDC (GenBank/EMBL/DDBJ), either directly or via a partnering repository, e.g. Barcode of Life Data Systems (BOLD). Transcriptomics data should be deposited in Gene Expression Omnibus (GEO) or ArrayExpress. Phylogenetic data should be deposited at TreeBASE, either directly or through the Dryad Data Repository.
Biodiversity-related geoscience and environmental data should be deposited in PANGAEA. Morphological images other than those presented in the article should be deposited at Morphbank. Images of a specific kind should be deposited in appropriate repositories if these exist (e.g., Morphosource for MicroCT data). Videos should be uploaded to video sharing sites like YouTube, Vimeo or SciVee and linked back to the article text. Similarly, audio files should go to platforms like FreeSound or SoundCloud, and presentations to Slideshare. In addition, multimedia files can also be uploaded as supplementary files on the journal’s website. 3D and other interactive models can be embedded in the article’s HTML and PDF. Any other large data sets (e.g., ecological observations, environmental data, morphological and other data types) should be deposited in the Dryad Data Repository, either prior to or upon acceptance of the manuscript. Other specialised data repositories can be used if these offer unique identifiers and long-term preservation. All external data used in a BDJ paper must be cited in the reference list, and links to these data (as deposited in external repositories) must be included in a separate data resources section of the article. All datasets, images or multimedia are freely downloadable from the text under the Open Data Commons Attribution License or a Creative Commons CC-Zero waiver / Public Domain Dedication. The article text is available under a Creative Commons (CC-BY) 3.0 license. Primary biodiversity data within an article can be exported in Darwin Core Archive format, which makes them interoperable with biodiversity tools based on the Darwin Core standard. By facilitating open access to the data that underlie every publication, BDJ is setting a new standard in transparency and repeatability in biodiversity science. Perpetual and universal access to primary data stimulates scientific progress by helping authors build upon existing datasets. 
BDJ’s commitment to supporting automated data aggregation and interlinking is happening alongside multiple advances in biodiversity informatics infrastructure that herald the dawning of an era of collaborative, big-data biodiversity science (Page 2008, Patterson et al. 2010, Thessen and Patterson 2011, Parr et al. 2012).


ZooKeys | 2010

Streamlining taxonomic publication: a working example with Scratchpads and ZooKeys

Vladimir Blagoderov; Irina Brake; Teodor Georgiev; Lyubomir Penev; David Roberts; Simon Rycroft; Ben Scott; Donat Agosti; Terrence Catapano; Vincent S. Smith

Abstract We describe a method to publish nomenclatural acts described in taxonomic websites (Scratchpads) that are formally registered through publication in a printed journal (ZooKeys). This method is fully compliant with the zoological nomenclatural code. Our approach supports manuscript creation (via a Scratchpad), electronic act registration (via ZooBank), online and print publication (in the journal ZooKeys) and simultaneous dissemination (ZooKeys and Scratchpads) for nomenclatural acts including new species descriptions. The workflow supports the generation of manuscripts directly from a database and is illustrated by two sample papers published in the present issue.


ZooKeys | 2010

The centipede genus Eupolybothrus Verhoeff, 1907 (Chilopoda: Lithobiomorpha: Lithobiidae) in North Africa, a cybertaxonomic revision, with a key to all species in the genus and the first use of DNA barcoding for the group.

Pavel Stoev; Nesrine Akkari; Marzio Zapparoli; David Porco; Henrik Enghoff; Gregory D. Edgecombe; Teodor Georgiev; Lyubomir Penev

Abstract The centipede genus Eupolybothrus Verhoeff, 1907 in North Africa is revised. A new cavernicolous species, Eupolybothrus kahfi Stoev & Akkari, sp. n., is described from a cave in Jebel Zaghouan, northeast Tunisia. Morphologically, it is most closely related to Eupolybothrus nudicornis (Gervais, 1837) from North Africa and Southwest Europe but can be readily distinguished by the long antennae and leg-pair 15, a conical dorso-median protuberance emerging from the posterior part of prefemur 15, and the shape of the male first genital sternite. Molecular sequence data from the cytochrome c oxidase I gene (mtDNA–5’ COI-barcoding fragment) exhibit 19.19% divergence between Eupolybothrus kahfi and Eupolybothrus nudicornis, an interspecific value comparable to those observed among four other species of Eupolybothrus which, combined with a low intraspecific divergence (0.3–1.14%), supports the morphological diagnosis of Eupolybothrus kahfi as a separate species. This is the first troglomorphic myriapod to be found in Tunisia, and the second troglomorphic lithobiomorph centipede known from North Africa. Eupolybothrus nudicornis is redescribed based on abundant material from Tunisia and its post-embryonic development, distribution and habitat preferences recorded. Eupolybothrus cloudsley-thompsoni Turk, 1955, a nominal species based on Tunisian type material, is placed in synonymy with Eupolybothrus nudicornis. To comply with the latest technological developments in the publishing of biological information, the paper implements new approaches in cybertaxonomy, such as fine-grained XML tagging validated against the NLM DTD TaxPub for PubMedCentral and dissemination in XML to various aggregators (GBIF, EOL, Wikipedia), visualisation of all taxa mentioned in the text via the dynamically created Pensoft Taxon Profile (PTP) page, data publishing, georeferencing of all localities via Google Earth, and ZooBank, GenBank and MorphBank registration of datasets.
An interactive key to all valid species of Eupolybothrus is made with DELTA software.


ZooKeys | 2011

XML schemas and mark-up practices of taxonomic literature

Lyubomir Penev; Christopher H. C. Lyal; Anna L. Weitzman; David R. Morse; Guido Sautter; Teodor Georgiev; Robert A. Morris; Terry Catapano; Donat Agosti

Abstract We review the three most widely used XML schemas used to mark-up taxonomic texts, TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of “taxon treatment” from the viewpoint of taxonomy mark-up into XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and with a focus on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy focussed on layout and recovery and, as such, is best suited for mark-up of new publications and their archiving in PubMedCentral. All three schemas have their advantages and shortcomings and can be used for different purposes.


Biodiversity Data Journal | 2015

Integrating and visualizing primary data from prospective and legacy taxonomic literature

Jeremy Miller; Donat Agosti; Lyubomir Penev; Guido Sautter; Teodor Georgiev; Terry Catapano; David J. Patterson; Serrano Pereira; Rutger A. Vos; Soraya Sierra

Abstract Specimen data in taxonomic literature are among the highest quality primary biodiversity data. Innovative cybertaxonomic journals are using workflows that maintain data structure and disseminate electronic content to aggregators and other users; such structure is lost in traditional taxonomic publishing. Legacy taxonomic literature is a vast repository of knowledge about biodiversity. Currently, access to that resource is cumbersome, especially for non-specialist data consumers. Markup is a mechanism that makes this content more accessible, and is especially suited to machine analysis. Fine-grained XML (Extensible Markup Language) markup was applied to all (37) open-access articles published in the journal Zootaxa containing treatments on spiders (Order: Araneae). The markup approach was optimized to extract primary specimen data from legacy publications. These data were combined with data from articles containing treatments on spiders published in Biodiversity Data Journal, where XML structure is part of the routine publication process. A series of charts was developed to visualize the content of specimen data in XML-tagged taxonomic treatments, either singly or in aggregate. The data can be filtered by several fields (including journal, taxon, institutional collection, collecting country, collector, author, article and treatment) to query particular aspects of the data. We demonstrate here that XML markup using GoldenGATE can address the challenge presented by unstructured legacy data, can extract structured primary biodiversity data which can be aggregated and jointly queried with data from other Darwin Core-compatible sources, and show how visualization of these data can communicate key information contained in biodiversity literature. We complement recent studies on aspects of biodiversity knowledge using XML structured data to explore 1) the time lag between species discovery and description, and 2) the prevalence of rarity in species descriptions.
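The extraction step described above - pulling structured specimen records out of XML-tagged treatments - can be sketched with Python's standard library. The element and attribute names below are simplified stand-ins for illustration, not the actual TaxPub/GoldenGATE tag set:

```python
import xml.etree.ElementTree as ET

# Hypothetical treatment fragment; real markup is far richer.
treatment = """
<treatment taxon="Araneus diadematus">
  <materialsCitation country="Netherlands" collector="J. Miller" year="2012"/>
  <materialsCitation country="Germany" collector="D. Agosti" year="2010"/>
</treatment>
"""

root = ET.fromstring(treatment)

# Flatten each materials citation into a record that carries the
# treatment's taxon name alongside the citation's own attributes.
records = [
    {"taxon": root.get("taxon"), **cit.attrib}
    for cit in root.findall("materialsCitation")
]

for rec in records:
    print(rec)
```

Once flattened like this, the records can be filtered by country, collector or taxon, which is essentially what the charts described in the abstract do in aggregate.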


PhytoKeys | 2012

From text to structured data: Converting a word-processed floristic checklist into Darwin Core Archive format

Sandra Knapp; Teodor Georgiev; Pavel Stoev; Lyubomir Penev

Abstract The paper describes a pilot project to convert a conventional floristic checklist, written in a standard word processing program, into structured data in the Darwin Core Archive format. After peer-review and editorial acceptance, the final revised version of the checklist was converted into Darwin Core Archive by means of regular expressions and published thereafter in both human-readable form as traditional botanical publication and Darwin Core Archive data files. The data were published and indexed through the Global Biodiversity Information Facility (GBIF) Integrated Publishing Toolkit (IPT) and significant portions of the text of the paper were used to describe the metadata on IPT. After publication, the data will become available through the GBIF infrastructure and can be re-used on their own or collated with other data.
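The regular-expression conversion described above can be illustrated with a deliberately simplified sketch. The checklist line, the pattern and the Darwin Core term choices here are hypothetical examples, not the expressions used in the actual project:

```python
import re

# Hypothetical checklist entry in a typical floristic format.
line = "Solanum lycopersicum L. Cultivated; 400-900 m."

# A minimal pattern capturing the leading name components; a real
# conversion would need many more expressions for varieties,
# synonyms, distribution notes, etc.
pattern = re.compile(
    r"^(?P<genus>[A-Z][a-z]+)\s+"       # capitalised genus name
    r"(?P<species>[a-z\-]+)\s+"         # lowercase specific epithet
    r"(?P<authorship>[A-Z][^.]*\.)"     # authorship, ending in a period
)

m = pattern.match(line)
record = {
    "dwc:genus": m.group("genus"),
    "dwc:specificEpithet": m.group("species"),
    "dwc:scientificNameAuthorship": m.group("authorship"),
}
print(record)
```

Rows of such records, written to a CSV with Darwin Core column headers plus a descriptor file, are what a Darwin Core Archive packages for indexing by the GBIF IPT.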


ZooKeys | 2012

Publishing online identification keys in the form of scholarly papers

Lyubomir Penev; Pierfilippo Cerretti; Hans-Peter Tschorsnig; Massimo Lopresti; Filippo Di Giovanni; Teodor Georgiev; Pavel Stoev; Terry L. Erwin

One of the main deficiencies in publishing and dissemination of online interactive identification keys produced through various software packages, such as DELTA, Lucid, MX and others, is the lack of a permanent scientific record and a proper citation mechanism for these keys. In two earlier papers, we have discussed some models for publishing raw data underpinning interactive keys (Penev et al. 2009; Sharkey et al. 2009). Here we propose a method to incentivise authors of online keys to publish them through the already established model of the “Data Paper” (Chavan and Penev 2011; examples: Narwade et al. 2011, Van Landuyt et al. 2012, Schindel et al. 2011, Pierrat et al. 2012; see also Pensoft's Data Publishing Policies and Guidelines). For clarity, we propose a new article type for this format, “Online Identification Key”, to distinguish it from the “Data Paper” in the narrow sense. The model is demonstrated through an exemplar paper of Cerretti et al. (2012) in the current issue of ZooKeys. The paper describes the main features of an interactive key to the Palaearctic genera of the family Tachinidae (Diptera) implemented as an original web application. The authors briefly discuss the advantages of these tools for both taxonomists and general users, and point out the need for shared, standardized protocols for taxon descriptions to keep matrix-based interactive keys easily and promptly updated. The format of the “Online Identification Key” paper largely resembles the structure of Data Papers proposed by Chavan and Penev (2011) on the basis of the Ecological Metadata Language (EML) and developed further in Pensoft's Data Publishing Policies and Guidelines. An “Online Identification Key” paper should focus on a formal description of the technical details and content of an online key, that is, what is often called “metadata”.
For example, an “Online Identification Key” paper has a title, author(s), abstract and keywords like any other scientific paper; it should also include, first and foremost: the URL of an open-access version of the online key and possibly also the data underpinning the key, information on the history of and participants in the project, the software used and its technical advantages and constraints, licenses for use, taxonomic and geographic coverage, lists and descriptions of the morphological characters used, and literature references. In contrast to conventional data papers, “Online Identification Key” papers do not require compulsory publication of the raw data files underpinning a key, although such a practice is highly recommended and encouraged. There may be several obstacles to publishing raw data, for example copyright issues concerning either the data or the source code. It is mandatory, however, for online keys published in this way to be freely available for use to anyone, by simply clicking the URL address published in the paper. The publication of an online key in the form of a scholarly article is a pragmatic compromise between the dynamic structure of the internet and the static character of scientific articles. The author(s) of the key will be able to continuously update the product, to the benefit of its users. At the same time, the users will have available a citation mechanism for the online key, identical to that used for any other scientific article, to properly credit the authors of the key.


Biodiversity Data Journal | 2016

Species Conservation Profiles compliant with the IUCN Red List of Threatened Species

Pedro Cardoso; Pavel Stoev; Teodor Georgiev; Viktor Senderov; Lyubomir Penev

The International Union for Conservation of Nature (IUCN; www.iucn.org) is the world's largest environmental network, with 1,300 member organizations and relying on the input of about 16,000 experts. It provides knowledge and tools that enable and promote sustainable development at a global level. Among its many outputs, the Red List of Threatened Species (www.iucnredlist.org) is the most widely known and used, by researchers, politicians and the general public. The IUCN Red List is arguably the most useful worldwide list of species at risk of extinction (Lamoreux et al. 2003). Its usefulness is based on its reliance on a number of objective criteria (IUCN 2012). Threatened species are assessed as either Critically Endangered (CR), Endangered (EN) or Vulnerable (VU), but extinct and non-threatened species are also assessed and listed. Besides extinction risk assessment, the Red List provides a plethora of useful information on each species assessed, including distribution, trends, threats and conservation actions. The quantity and quality of this information allow the Red List to be used in multiple ways, such as to raise awareness about threatened species, guide conservation efforts and funding, set priorities for protection, measure site irreplaceability and vulnerability, influence environmental policies and legislation, and evaluate and monitor the state of biodiversity (Gärdenfors et al. 2001, Rodrigues et al. 2006, Baillie et al. 2008, Mace et al. 2008, Martín-López et al. 2009).


ZooKeys | 2015

ZooKeys 500: traditions and innovations hand-in-hand servicing our taxonomic community.

Terry L. Erwin; Pavel Stoev; Teodor Georgiev; Lyubomir Penev

On 27 April 2015, ZooKeys published its jubilee issue 500. It has been exactly 28 months since we published our semiquincentennial issue (Penev et al. 2012) and reviewed the journal's progress since its establishment in 2008. Reaching this milestone makes us cast a look back to see what we have achieved in the past two and a third years.

Collaboration


Dive into Teodor Georgiev's collaborations.

Top Co-Authors

Lyubomir Penev
American Museum of Natural History

Pavel Stoev
National Museum of Natural History

Guido Sautter
Karlsruhe Institute of Technology

Donat Agosti
Bulgarian Academy of Sciences

Nesrine Akkari
Naturhistorisches Museum

Viktor Senderov
Bulgarian Academy of Sciences