Featured Research

Multimedia

Are Social Networks Watermarking Us or Are We (Unawarely) Watermarking Ourself?

In the last decade, Social Networks (SNs) have deeply changed many aspects of society, and one of the most widespread behaviours is the sharing of pictures. However, malicious users often exploit shared pictures to create fake profiles, leading to the growth of cybercrime. With this scenario in mind, authorship attribution and verification through image watermarking techniques are becoming more and more important. In this paper, we first investigate how the 13 most popular SNs treat uploaded pictures, in order to identify whether the respective SNs implement any image watermarking technique. Second, on these 13 SNs, we test the robustness of several image watermarking algorithms. Finally, we verify whether a method based on the Photo-Response Non-Uniformity (PRNU) technique can be successfully used as a watermarking approach for authorship attribution and verification of pictures on SNs. The proposed method remains robust even though SNs downgrade pictures during the upload process. The results of our analysis on a real dataset of 8,400 pictures show that the proposed method is more effective than other watermarking techniques and can help to address serious questions about privacy and security on SNs.
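
To make the general PRNU idea concrete, the following is a minimal, illustrative Python sketch (not the paper's pipeline): a noise residual is obtained by subtracting a denoised version of an image, a fingerprint is averaged from several reference pictures, and verification thresholds a normalized correlation. All function names, the Gaussian denoiser, and the threshold value are assumptions made for illustration.

    # Illustrative sketch of PRNU-style verification (not the paper's exact pipeline).
    # Assumes grayscale float images of identical size; the fingerprint would normally
    # be estimated from many pictures taken by the same camera/user.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img):
        """Noise residual = image minus a denoised (low-pass) version of itself."""
        return img - gaussian_filter(img, sigma=1.0)

    def build_fingerprint(images):
        """Average the residuals of several reference pictures (hypothetical helper)."""
        return np.mean([noise_residual(im) for im in images], axis=0)

    def verify(picture, fingerprint, threshold=0.01):
        """Normalized correlation between the picture's residual and the fingerprint."""
        r = noise_residual(picture)
        rho = np.corrcoef(r.ravel(), fingerprint.ravel())[0, 1]
        return rho > threshold, rho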

Read more
Multimedia

Ari: The Automated R Instructor

We present the ari package for automatically generating technology-focused educational videos. The goal of the package is to create reproducible videos whose content can be changed and updated seamlessly. We present several examples of generating videos, using R Markdown slide decks, PowerPoint slides, or simple images as source material. We also discuss how ari can help instructors reach new audiences by programmatically translating materials into other languages.
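
The ari package itself is written in R; purely as a conceptual illustration of the underlying idea (pairing a rendered slide image with a narration audio track and stitching them into a clip), here is a hedged Python sketch that shells out to ffmpeg. The file names are hypothetical placeholders and this is not ari's API.

    # Conceptual illustration only -- ari is an R package; this sketch just shows
    # the idea of turning one slide image plus narration audio into a video clip.
    import subprocess

    def slide_to_clip(slide_png, narration_mp3, out_mp4):
        """Loop a still slide image for the duration of its narration audio."""
        subprocess.run([
            "ffmpeg", "-y",
            "-loop", "1", "-i", slide_png,   # still image, looped
            "-i", narration_mp3,             # narration track
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            "-c:a", "aac",
            "-shortest",                     # stop when the audio ends
            out_mp4,
        ], check=True)

    slide_to_clip("slide_01.png", "slide_01.mp3", "slide_01.mp4")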

Read more
Multimedia

Attention Based Video Summaries of Live Online Zoom Classes

This paper describes a system developed to help University students get more from their online lectures, tutorials, laboratory and other live sessions. We do this by logging their attention levels on their laptops during live Zoom sessions and providing them with personalised video summaries of those live sessions. Using facial attention analysis software, we create personalised video summaries composed of just the parts where a student's attention was below some threshold. We can also factor other criteria into video summary generation, such as parts where the student was not paying attention while others in the class were, and parts of the video that other students have replayed extensively but a given student has not. Attention- and usage-based video summaries of live classes are a form of personalised content: educational video segments recommended to highlight important parts of live sessions, useful both for topic understanding and for exam preparation. The system also allows a Professor to review the aggregated attention levels of those in a class who attended a live session and logged their attention levels. This allows her to see which parts of the live activity students were paying most, and least, attention to. The Help-Me-Watch system is deployed and in use at our University in a way that protects students' personal data, operating in a GDPR-compliant way.
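
The selection step described above (keep only the parts where attention fell below a threshold) can be sketched in a few lines of Python. This is a minimal illustration, not the Help-Me-Watch implementation; the per-second scores, the threshold, and the minimum segment length are assumptions.

    # Minimal sketch of threshold-based summary selection (not the Help-Me-Watch code).
    # Assumes one attention score per second of the recorded session, in [0, 1].

    def low_attention_segments(scores, threshold=0.5, min_len=5):
        """Return (start, end) second intervals where attention stayed below threshold."""
        segments, start = [], None
        for t, s in enumerate(scores):
            if s < threshold and start is None:
                start = t
            elif s >= threshold and start is not None:
                if t - start >= min_len:
                    segments.append((start, t))
                start = None
        if start is not None and len(scores) - start >= min_len:
            segments.append((start, len(scores)))
        return segments

    # Example: the student lost focus between seconds 3 and 9.
    print(low_attention_segments([0.9, 0.8, 0.9, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.9], min_len=3))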

Read more
Multimedia

Audio Watermarking over the Air With Modulated Self-Correlation

We propose a novel audio watermarking system that is robust to the distortion caused by the indoor acoustic propagation channel between the loudspeaker and the receiving microphone. The system utilizes a set of new algorithms that effectively mitigate the impact of room reverberation and interfering sound sources without resorting to dereverberation procedures. The decoder has low latency and operates asynchronously, which alleviates the need for explicit synchronization with the encoder. It is also robust to the standard audio processing operations handled by legacy watermarking systems, e.g., compression and volume changes. The effectiveness of the system is demonstrated with a real-time implementation under general room conditions.
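
The abstract does not spell out the decoding algorithm, so the following is only a heavily hedged sketch of the generic idea behind self-correlation-based detection: compute a normalized autocorrelation of short frames at a fixed lag and threshold it to read one bit per frame. Frame length, lag, and threshold are invented for illustration and this is not the paper's scheme.

    # Heavily hedged sketch: reading one bit per frame from short-lag self-correlation.
    # The paper's actual embedding/decoding scheme is not described in the abstract.
    import numpy as np

    def frame_autocorr(frame, lag):
        """Normalized autocorrelation of one frame at a fixed sample lag."""
        a, b = frame[:-lag], frame[lag:]
        denom = np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12
        return float(np.dot(a, b) / denom)

    def decode_bits(signal, frame_len=4096, lag=64, threshold=0.2):
        """One bit per frame: 1 if the self-correlation at `lag` is boosted, else 0."""
        frames = [signal[i:i + frame_len]
                  for i in range(0, len(signal) - frame_len + 1, frame_len)]
        return [int(frame_autocorr(f, lag) > threshold) for f in frames]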

Read more
Multimedia

Audio-Visual Embedding for Cross-Modal Music Video Retrieval through Supervised Deep CCA

Deep learning has shown excellent performance in learning joint representations between different data modalities. Unfortunately, little research focuses on cross-modal correlation learning where the temporal structures of different data modalities, such as audio and video, should be taken into account. Music video retrieval from a given musical audio query is a natural way to search for and interact with music content. In this work, we study cross-modal music video retrieval in terms of emotion similarity. In particular, audio of arbitrary length is used to retrieve a longer or full-length music video. To this end, we propose a novel audio-visual embedding algorithm based on Supervised Deep Canonical Correlation Analysis (S-DCCA) that projects audio and video into a shared space to bridge the semantic gap between audio and video. This also preserves the similarity between the audio and visual content of different videos with the same class label, as well as the temporal structure. The contribution of our approach is twofold: i) we propose to select the top k audio chunks with an attention-based Long Short-Term Memory (LSTM) model, which provides a good audio summary that preserves local properties; ii) we propose an end-to-end deep model for cross-modal audio-visual learning in which S-DCCA is trained to learn the semantic correlation between the audio and visual modalities. Due to the lack of a suitable music video dataset, we construct a 10K music video dataset from the YouTube-8M dataset. Promising results in terms of MAP and precision-recall show that our proposed model can be applied to music video retrieval.
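
Once both modalities have been projected into the shared space (for example by the learned S-DCCA branches described above), retrieval reduces to ranking videos by similarity to the audio query. The sketch below assumes already-computed embeddings of hypothetical shapes and uses cosine similarity; it is not the authors' code.

    # Minimal sketch of cross-modal retrieval once both modalities live in a shared space.
    # Assumes audio_embedding has shape (d,) and video_embeddings has shape (n_videos, d),
    # produced by some learned projections (e.g., the S-DCCA branches described above).
    import numpy as np

    def rank_videos(audio_embedding, video_embeddings):
        """Return video indices sorted by cosine similarity to the audio query."""
        a = audio_embedding / (np.linalg.norm(audio_embedding) + 1e-12)
        v = video_embeddings / (np.linalg.norm(video_embeddings, axis=1, keepdims=True) + 1e-12)
        scores = v @ a
        return np.argsort(-scores), scores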

Read more
Multimedia

Audio-based Near-Duplicate Video Retrieval with Audio Similarity Learning

In this work, we address the problem of audio-based near-duplicate video retrieval. We propose the Audio Similarity Learning (AuSiL) approach, which effectively captures temporal patterns of audio similarity between video pairs. For robust similarity calculation between two videos, we first extract representative audio-based video descriptors by leveraging transfer learning based on a Convolutional Neural Network (CNN) trained on a large-scale dataset of audio events, and then calculate the similarity matrix derived from the pairwise similarity of these descriptors. The similarity matrix is subsequently fed to a CNN that captures the temporal structures existing within its content. We train our network by following a triplet generation process and optimizing the triplet loss function. To evaluate the effectiveness of the proposed approach, we have manually annotated two publicly available video datasets based on the audio duplication among their videos. The proposed approach achieves very competitive results compared to three state-of-the-art methods and, unlike the competing methods, is very robust to the retrieval of audio duplicates generated with speed transformations.
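
As a rough illustration of the first two steps (pairwise descriptor similarity and the triplet objective), the sketch below builds a cosine-similarity matrix between two videos' audio descriptors and computes a hinge-style triplet loss on video-level scores. Descriptor shapes and the margin are assumptions, and the CNN that scores the matrix is omitted.

    # Sketch of the pairwise similarity matrix between two videos' audio descriptors.
    # Assumes descriptors are numpy arrays of shape (frames, dim); the CNN that scores
    # this matrix is not shown.
    import numpy as np

    def similarity_matrix(desc_a, desc_b):
        """Cosine similarity between every descriptor of video A and every one of video B."""
        a = desc_a / (np.linalg.norm(desc_a, axis=1, keepdims=True) + 1e-12)
        b = desc_b / (np.linalg.norm(desc_b, axis=1, keepdims=True) + 1e-12)
        return a @ b.T          # shape: (frames_a, frames_b)

    def triplet_loss(sim_positive, sim_negative, margin=1.0):
        """Hinge-style triplet objective on video-level similarity scores."""
        return max(0.0, sim_negative - sim_positive + margin)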

Read more
Multimedia

Augmented Informative Cooperative Perception

Connected vehicles, whether equipped with advanced driver-assistance systems or fully autonomous, are currently constrained to the visual information in their lines of sight. A cooperative perception system among vehicles increases their situational awareness by extending their perception range. Existing solutions imply significant network and computation load, as well as a high flow of not-always-relevant data received by vehicles. To address these issues, and thus account for the inherently diverse informativeness of the data, we present Augmented Informative Cooperative Perception (AICP), the first fast-filtering system that optimizes the informativeness of the data shared among vehicles. AICP displays the filtered data to drivers in an augmented-reality head-up display. To this end, an informativeness maximization problem is formulated for vehicles to select a subset of data to display to their drivers. Specifically, we propose (i) a dedicated system design with a custom data structure and a lightweight routing protocol for convenient data encapsulation, fast interpretation and transmission, and (ii) a comprehensive problem formulation and an efficient fitness-based sorting algorithm to select the most valuable data to display at the application layer. We implement a proof-of-concept prototype of AICP with a bandwidth-hungry, latency-constrained real-life augmented reality application. The prototype realizes informativeness-optimized cooperative perception with only 12.6 milliseconds of additional latency. We then test the networking performance of AICP at scale and show that AICP effectively filters out less relevant packets and decreases the channel busy time.
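
To give a feel for fitness-based selection, the sketch below scores each received item with a toy fitness function and greedily keeps the highest-scoring items up to a display budget. The fitness terms, weights, item fields and budget are all illustrative assumptions, not AICP's actual formulation.

    # Hedged sketch of fitness-based selection; fitness terms, weights and the display
    # budget are illustrative assumptions, not AICP's formulation.

    def fitness(item, w_distance=0.5, w_relevance=0.3, w_freshness=0.2):
        """Toy fitness: closer, more relevant and fresher items score higher."""
        return (w_distance * (1.0 / (1.0 + item["distance_m"]))
                + w_relevance * item["relevance"]
                + w_freshness * item["freshness"])

    def select_for_display(items, budget=5):
        """Greedily keep the highest-fitness items up to the display budget."""
        return sorted(items, key=fitness, reverse=True)[:budget]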

Read more
Multimedia

Augmenting reality: On the shared history of perceptual illusion and video projection mapping

Perceptual illusions based on the spatial correspondence between objects and displayed images have been pursued by artists and scientists since the 15th century, who mastered optics to create crucial techniques such as linear perspective and devices such as the Magic Lantern. Contemporary video projection mapping inherits and further extends this drive to produce perceptual illusions in space by incorporating the real-time capabilities required to dynamically superpose the imaginary onto physical objects under fluid real-world conditions. A critical milestone has been reached with the creation of the technical possibilities for all-encompassing, untethered synthetic-reality experiences available to the unaided senses, where every surface may act as a screen and the relation to everyday objects is open to alteration.

Read more
Multimedia

Automated Composition of Picture-Synched Music Soundtracks for Movies

We describe the implementation of, and early results from, a system that automatically composes picture-synched musical soundtracks for videos and movies. We use the phrase "picture-synched" to mean that the structure of the automatically composed music is determined by visual events in the input movie, i.e. the final music is synchronised to visual events and features such as cut transitions or within-shot key-frame events. Our system combines automated video analysis and computer-generated music-composition techniques to create unique soundtracks in response to the video input, and can be thought of as an initial step towards a computerised replacement for a human composer writing music to fit the picture-locked edit of a movie. Working only from the video information in the movie, the system extracts key features from the input video using video analysis techniques and feeds them into a machine-learning-based music generation tool, which composes a piece of music from scratch. The resulting soundtrack is tied to video features, such as scene transition markers and scene-level energy values, and is unique to the input video. Although the system we describe here is only a preliminary proof-of-concept, user evaluations of its output have been positive.
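
One of the video features mentioned above, cut transitions, can be approximated by thresholding frame-to-frame differences. The OpenCV sketch below is a generic illustration of that step only; the threshold is an assumption and the music-generation stage is not shown.

    # Sketch of cut-transition detection by frame differencing (one of the video features
    # mentioned above); the threshold is illustrative and the music stage is omitted.
    import cv2

    def detect_cuts(video_path, diff_threshold=30.0):
        """Return frame indices where the mean absolute frame difference spikes."""
        cap = cv2.VideoCapture(video_path)
        cuts, prev, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None and cv2.absdiff(gray, prev).mean() > diff_threshold:
                cuts.append(idx)
            prev, idx = gray, idx + 1
        cap.release()
        return cuts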

Read more
Multimedia

Automatic Realistic Music Video Generation from Segments of Youtube Videos

A Music Video (MV) is a video that visually illustrates or extends the meaning of its background music. This paper proposes a novel method to automatically generate, from an input music track, a music video made of segments of YouTube music videos that fit this music. The system analyzes the input music to find its genre (pop, rock, ...) and retrieves segmented MVs of the same genre from the database. Then, K-Means clustering is applied to group video segments by color histogram, i.e., segments of MVs having the same global distribution of colors. A few clusters are randomly selected and assembled around music boundaries, which are moments where a significant change in the music occurs (for instance, a transition from verse to chorus). This way, when the music changes, the video color mood changes as well. This work aims at generating high-quality realistic MVs that could be mistaken for man-made MVs. By asking users to identify, in a batch of music videos containing professional MVs, amateur-made MVs and MVs generated by our algorithm, we show that our algorithm gives satisfying results: 45% of generated videos are mistaken for professional MVs and 21.6% are mistaken for amateur-made MVs. More information can be found on the project website: this http URL
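
The color-histogram clustering step can be sketched with OpenCV and scikit-learn as below. The bin count, number of clusters, and the choice of one representative frame per segment are assumptions made for illustration, not the authors' settings.

    # Sketch of grouping video segments by global color histogram with K-Means.
    # Bin counts, the number of clusters and one-frame-per-segment sampling are assumptions.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def color_histogram(frame, bins=8):
        """Flattened, normalized 3-D BGR histogram of one representative frame."""
        hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
        return cv2.normalize(hist, hist).flatten()

    def cluster_segments(representative_frames, n_clusters=8):
        """Cluster segments (one representative frame each) by color distribution."""
        features = np.array([color_histogram(f) for f in representative_frames])
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)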

Read more
