Dan Tidhar
University of Cambridge
Publications
Featured research published by Dan Tidhar.
Empirical Methods in Natural Language Processing | 2006
Simone Teufel; Advaith Siddharthan; Dan Tidhar
Citation function is defined as the author's reason for citing a given paper (e.g. acknowledgement of the use of the cited method). The automatic recognition of the rhetorical function of citations in scientific text has many applications, from improvement of impact factor calculations to text summarisation and more informative citation indexers. We show that our annotation scheme for citation function is reliable, and present a supervised machine learning framework to automatically classify citation function, using both shallow and linguistically-inspired features. We find, amongst other things, a strong relationship between citation function and sentiment classification.
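The classification setting described above can be illustrated with a minimal sketch. The category labels and cue words below are invented placeholders for the kind of shallow features the paper mentions, not the authors' actual annotation scheme or feature set.

```python
# Toy sketch of cue-phrase-based citation-function classification.
# Labels and cue words are illustrative assumptions, not the paper's.
CUE_WORDS = {
    "Weak": ["fails", "unable", "problem"],       # criticism of cited work
    "PBas": ["use", "follow", "adopt", "based"],  # use of the cited method
    "Neut": [],                                   # neutral mention
}

def classify_citation(sentence: str) -> str:
    """Assign a (toy) citation-function label by counting cue words."""
    tokens = sentence.lower().split()
    scores = {label: sum(tokens.count(w) for w in words)
              for label, words in CUE_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Neut"

print(classify_citation("We use the parser of Smith (2001)."))       # PBas
print(classify_citation("Their approach fails on long sentences."))  # Weak
```

A real system would replace the cue-word counts with a trained classifier over many shallow and syntactic features, but the input/output shape is the same.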
Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2009
Simone Teufel; Advaith Siddharthan; Dan Tidhar
We study the interplay of the discourse structure of a scientific argument with formal citations. One subproblem of this is to classify academic citations in scientific articles according to their rhetorical function, e.g., as a rival approach, as a part of the solution, or as a flawed approach that justifies the current research. Here, we introduce our annotation scheme with 12 categories, and present an agreement study.
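An agreement study of the kind mentioned above is typically reported with a chance-corrected statistic such as Cohen's kappa. The sketch below computes it from scratch; the annotation labels are invented placeholders, not the paper's twelve categories.

```python
# Cohen's kappa for a two-annotator agreement study (illustrative labels).
from collections import Counter

def cohen_kappa(ann1, ann2):
    """Chance-corrected agreement between two label sequences."""
    assert len(ann1) == len(ann2)
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

a = ["Weak", "PBas", "Neut", "Neut", "PBas", "Weak"]
b = ["Weak", "PBas", "Neut", "PBas", "PBas", "Weak"]
print(round(cohen_kappa(a, b), 2))  # 0.75
```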
International Conference on Acoustics, Speech, and Signal Processing | 2010
Dan Tidhar; Matthias Mauch; Simon Dixon
We present a novel music signal processing task of classifying the tuning of a harpsichord from audio recordings of standard musical works. We report the results of a classification experiment involving six different temperaments, using real harpsichord recordings as well as synthesised audio data. We introduce the concept of conservative transcription, and show that existing high-precision pitch estimation techniques are sufficient for our task if combined with conservative transcription. In particular, using the CQIFFT algorithm with conservative transcription and removal of short duration notes, we are able to distinguish between 6 different temperaments of harpsichord recordings with 96% accuracy (100% for synthetic data).
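The final matching step implied above can be sketched as template matching in cents: given per-pitch-class deviations from equal temperament estimated from a recording, pick the closest known temperament, allowing a free offset for the tuning reference. The deviation table is a rough textbook approximation for quarter-comma meantone, not the paper's data, and only two of the six temperaments are shown.

```python
# Hedged sketch: match observed pitch-class deviations (cents from
# equal temperament, C..B) against known temperament templates.
# The meantone values are approximate illustrative figures.
TEMPERAMENTS = {
    "equal_temperament": [0.0] * 12,
    "quarter_meantone": [10.3, -13.7, 3.4, 20.5, -3.4, 13.7, -10.3,
                         6.8, -17.1, 0.0, 17.1, -6.8],
}

def classify_temperament(observed):
    """Least-squares match, absorbing the tuning-reference offset."""
    best, best_err = None, float("inf")
    for name, template in TEMPERAMENTS.items():
        diffs = [o - t for o, t in zip(observed, template)]
        offset = sum(diffs) / len(diffs)   # free reference-pitch shift
        err = sum((d - offset) ** 2 for d in diffs)
        if err < best_err:
            best, best_err = name, err
    return best

observed = [t + 5.0 for t in TEMPERAMENTS["quarter_meantone"]]
print(classify_temperament(observed))  # quarter_meantone
```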
Frontiers in Psychology | 2014
Mats B. Küssner; Dan Tidhar; Helen Prior; Daniel Leech-Wilkinson
Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.
International Conference on Computational Linguistics | 2000
Dan Tidhar; Uwe Küssner
Within the machine translation system Verbmobil, translation is performed simultaneously by four independent translation modules. The four competing translations are combined by a selection module so as to form a single optimal output for each input utterance. The selection module relies on confidence values that are delivered together with each of the alternative translations. Since the confidence values are computed by four independent modules that are fundamentally different from one another, they are not directly comparable and need to be rescaled in order to gain comparative significance. In this paper we describe a machine learning method tailored to overcome this difficulty by using off-line human feedback to determine an appropriate confidence rescaling scheme. Additionally, we describe some other sources of information that are used for selecting between the competing translations, and describe the way in which the selection process relates to quality of service specifications.
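The rescaling idea can be sketched as follows: each engine's raw confidence is mapped through a per-engine linear function fitted against off-line human quality judgments, so that the rescaled values become comparable across engines. The engine names, data, and plain least-squares fit below are illustrative assumptions, not Verbmobil's actual scheme.

```python
# Illustrative confidence rescaling: fit a*x + b per engine against
# human quality scores (0..1), then compare engines on one scale.
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# raw confidences and human judgments for two hypothetical engines;
# note the second engine reports confidences on a completely different scale
feedback = {
    "stat_mt":  ([0.2, 0.5, 0.9], [0.3, 0.5, 0.8]),
    "transfer": ([10, 40, 90],    [0.2, 0.5, 0.9]),
}
scalers = {name: fit_linear(xs, ys) for name, (xs, ys) in feedback.items()}

def rescale(engine, confidence):
    a, b = scalers[engine]
    return a * confidence + b

# after rescaling, both engines' scores live on the same 0..1 scale
```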
International Conference on Machine Learning | 2010
Lesley Mearns; Dan Tidhar; Simon Dixon
We describe a preliminary study looking into the characterisation of composer style. The primary motivation of the work is an exploration of methods to automatically extract high-level, musicologically valid features. Such features facilitate machine-learning-based stylistic classification which, in contrast to previously published results, is more likely to yield musicological insights regarding style characteristics and compositional techniques. We extract features from scores by Renaissance and Baroque composers, capturing their use of contrapuntal voice-leading rules and musical intervals. A composer classification task is performed to test the ability of the feature sets to characterise composer style, yielding an accuracy of 66%. We conclude that although the computation of higher-level musical features is challenging, it can give useful insights into characteristics of style which are not revealed by lower-level features.
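Two of the feature families named above, melodic intervals and voice-leading rule violations, can be sketched directly on note sequences. Pitches here are MIDI note numbers and the excerpts are invented; the paper's actual feature extraction is considerably richer.

```python
# Toy feature extraction for composer-style classification:
# melodic-interval counts and a classic voice-leading check.
from collections import Counter

def interval_histogram(pitches):
    """Count melodic intervals (semitones) between successive notes."""
    return Counter(b - a for a, b in zip(pitches, pitches[1:]))

def parallel_fifths(upper, lower):
    """Count consecutive perfect fifths between two voices
    (a canonical counterpoint violation)."""
    ivs = [(u - l) % 12 for u, l in zip(upper, lower)]
    return sum(a == 7 and b == 7 for a, b in zip(ivs, ivs[1:]))

voice = [60, 62, 64, 62, 60, 67]  # C D E D C G
print(interval_histogram(voice))          # {2: 2, -2: 2, 7: 1}
print(parallel_fifths([67, 69], [60, 62]))  # 1
```

Histograms like these, computed per score, would then feed a standard classifier to predict the composer.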
Journal of the Acoustical Society of America | 2012
Simon Dixon; Matthias Mauch; Dan Tidhar
The inharmonicity of vibrating strings can easily be estimated from recordings of isolated tones. Likewise, the tuning system (temperament) of a keyboard instrument can be ascertained from isolated tones by estimating the fundamental frequencies corresponding to each key of the instrument. This paper addresses a more difficult problem: the automatic estimation of the inharmonicity and temperament of a harpsichord given only a recording of an unknown musical work. An initial conservative transcription is used to generate a list of note candidates, and high-precision frequency estimation techniques and robust statistics are employed to estimate the inharmonicity and fundamental frequency of each note. These estimates are then matched to a set of known keyboard temperaments, allowing for variation in the tuning reference frequency, in order to obtain the temperament used in the recording. Results indicate that it is possible to obtain inharmonicity estimates and to classify keyboard temperament automatically from audio recordings of standard musical works, to the extent of accurately (96%) distinguishing between six different temperaments commonly used in harpsichord recordings. Although there is an interaction between inharmonicity and temperament, this is shown to be minor relative to the tuning accuracy.
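The inharmonicity estimate mentioned above rests on the standard stiff-string model, in which partial k of a string with fundamental f0 and inharmonicity coefficient B falls at f_k = k · f0 · sqrt(1 + B·k²). The sketch below shows the model and its inversion for a single measured partial; the numbers are synthetic, not from the paper, which uses many partials and robust statistics.

```python
# Stiff-string inharmonicity model and single-partial inversion.
# Synthetic example values; the paper estimates over many notes.
import math

def partial_freq(f0, B, k):
    """Frequency of partial k under the stiff-string model."""
    return k * f0 * math.sqrt(1 + B * k * k)

def estimate_B(f0, fk, k):
    """Invert the model given one measured partial frequency fk."""
    return ((fk / (k * f0)) ** 2 - 1) / (k * k)

f0, B = 220.0, 3e-4
f4 = partial_freq(f0, B, 4)          # 4th partial, stretched sharp
print(round(estimate_B(f0, f4, 4), 6))  # 0.0003
```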
International Conference on Peer-to-Peer Computing | 2001
Richard Gold; Dan Tidhar
We present an architecture for content-based aggregation in peer-to-peer filesharing networks. It is designed to significantly reduce the number of nodes that have to be queried, by introducing a distributed index into the system. This index allows content to be located in a peer-to-peer network without using broadcast-style techniques. We also show how this index can be created and maintained in a decentralized, self-organizing fashion.
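The core idea can be sketched as a content index mapping keywords to the set of nodes holding matching files, so a query consults the index instead of flooding every peer. The in-memory dictionary and node IDs below stand in for the paper's distributed, self-organizing index.

```python
# Minimal sketch of index-based lookup replacing broadcast queries.
# A single in-memory dict stands in for the distributed index.
from collections import defaultdict

class ContentIndex:
    def __init__(self):
        self._index = defaultdict(set)

    def publish(self, node_id, keywords):
        """A node advertises the keywords describing its content."""
        for kw in keywords:
            self._index[kw].add(node_id)

    def lookup(self, keyword):
        """Return only the nodes worth querying -- no broadcast."""
        return self._index.get(keyword, set())

idx = ContentIndex()
idx.publish("node-A", ["bach", "harpsichord"])
idx.publish("node-B", ["bach", "organ"])
print(sorted(idx.lookup("bach")))  # ['node-A', 'node-B']
```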
Archive | 2000
Damir Ćavar; Uwe Küssner; Dan Tidhar
In order to meet the challenges set by the innovative multi-engine translation architecture, an additional selection component is necessary. The selection component fulfills the task of integrating the various alternative translations that are produced for each input utterance, and comes up with exactly one optimal translation. At the center of this chapter is a learning method tailored to overcome the problem of incomparable confidence values delivered by the competing translation paths, thus enabling the selection component to rely on confidence values as the main selection criterion. By using off-line human feedback and applying a linear optimization heuristic, we determine a rescaling scheme that enables us to compare confidence values across modules. We also describe some additional information sources that further elaborate the selection procedure, and finally outline some Quality of Service parameters that are supported by the selection module.
International Conference on Natural Language Processing | 2000
Stephan Koch; Uwe Küssner; Manfred Stede; Dan Tidhar
In a speech-to-speech translation system, contextual reasoning for purposes of disambiguation has to respect the specific conditions arising from speech input: on the one hand, it is part of a real-time system; on the other hand, it needs to take errors in the speech recognition phase into account and hence be particularly robust. This paper describes the context evaluation module of the Verbmobil translation system: what are the linguistic phenomena that require contextual reasoning, what does the context representation look like, how is it constructed during utterance interpretation, and how is it used for disambiguation and reasoning?