Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mohamed Jemni is active.

Publication


Featured research published by Mohamed Jemni.


International Conference on Computers Helping People with Special Needs | 2008

A System to Make Signs Using Collaborative Approach

Mohamed Jemni; Oussama Elghoul

The generation of gestures is essential for creating and developing applications dedicated to deaf persons, in particular those who are illiterate. In most cases, these applications need to store gesture codifications/descriptions in databases or dictionaries in order to process sign language, translate text from or to sign language, play signs in a 3D scene, and so on. WebSign, the system we have developed in our laboratory over the last years, is a translator from written text to sign language. It is based on a multi-community approach in order to respond to local variations of sign language. To do so, our system allows using a specific dictionary for each community in addition to a common dictionary shared by all communities. In this context, it is fundamental to define an expressive language for describing signs. In order to facilitate the addition of new words without any programming skills, we have developed a Web-based human-software interface which allows the generation of words described by the defined language.
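The two-tier dictionary lookup described in the abstract (a community-specific dictionary taking precedence over a common shared one) can be sketched as below. This is only an illustration of the idea; the function name, dictionary contents, and sign-description strings are hypothetical, not WebSign's actual API.

```python
def lookup_sign(word, community_dict, common_dict):
    """Return the sign description for `word`.

    Community-specific entries override the dictionary shared by all
    communities; None means the word is unknown and would have to be
    added (e.g. through a web-based authoring interface).
    """
    return community_dict.get(word, common_dict.get(word))

# hypothetical sign descriptions in some sign-description language
common = {"hello": "common:hello-sign", "water": "common:water-sign"}
tunisian = {"water": "tsl:water-sign"}  # local variant overrides the common one

lookup_sign("water", tunisian, common)  # -> "tsl:water-sign" (local variant wins)
lookup_sign("hello", tunisian, common)  # -> "common:hello-sign" (fallback)
```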


International Conference on Computers Helping People with Special Needs | 2010

SportSign: a service to make sports news accessible to deaf persons in sign languages

Achraf Othman; Oussama El Ghoul; Mohamed Jemni

Sports are important in the lives of deaf as well as hearing persons, on physical, social, and mental levels. However, although many deaf sports organizations exist in the world, the announcement of sports results and news is still done in written or vocal languages. In this context, we propose a Web-based application to edit and broadcast sports news in sign languages. The system uses avatar technology and is based on the WebSign kernel developed by our Research Laboratory of Technologies of Information and Communication (UTIC). Information is broadcast using MMS messages containing sign language video and is also published on the SportSign Web site.


International Conference on Advanced Learning Technologies | 2010

Personalizing Accessibility to E-Learning Environments

Mohsen Laabidi; Mohamed Jemni

Today's e-learning environments are still far from being accessible to people with disabilities. Furthermore, the availability of accessibility guidelines, the diversity of e-learning platforms, and the evolution of assistive technologies represent only a partial solution. In fact, addressing accessibility from the design phase onward provides a more rational solution, and therefore applying Model Driven Architecture (MDA) with a user-centred approach turns out to be the most appropriate. This paper presents a new approach based both on models at different abstraction levels and on the IMS AccessForAll specification, in order to allow the generation of a learning experience in different accessibility settings.


International Conference on Advanced Learning Technologies | 2010

User Centered Model to Provide Accessible e-Learning Systems

Halima Hebiri; Mohsen Laabidi; Mohamed Jemni

In recent years, two major developments have taken place. On the one hand, e-learning has advanced in a remarkable manner, empowering education and creating very sophisticated environments. On the other hand, more importance has been attached to accessibility, as it allows people with special needs to access the Internet and to navigate and take advantage of Web pages and services more efficiently. However, little attention has been devoted to making people with various disabilities benefit from e-learning platforms. In this context, we are working on modeling the various types of learners with disabilities according to their preferences and needs, so as to ensure their access to the various learning platforms. Our project consists in studying the field of accessible e-learning in order to give disabled learners the opportunity to access e-learning platforms just like any other student. This work is based on the OMG's Model Driven Architecture.


International Conference on Embedded Software and Systems | 2008

Reconfigurable Hardware Implementations for Lifting-Based DWT Image Processing Algorithms

Sami Khanfir; Mohamed Jemni

A novel fast scheme for the Discrete Wavelet Transform (DWT) was recently introduced under the name of lifting scheme. This new scheme presents many advantages over the convolution-based approach; for instance, it is very suitable for parallelization. In this paper we present two new FPGA-based parallel implementations of the lifting-based DWT scheme. The first implementation uses pipelining, parallel processing, and data reuse to increase the speedup of the algorithm. In the second architecture, a controller is introduced to dynamically deploy a suitable number of clones according to the hardware resources available in a targeted environment. These two architectures are capable of processing large incoming images or multi-framed images in real time. Simulations run on a Xilinx Virtex-5 FPGA environment have demonstrated the practical efficiency of our contribution. In fact, the first architecture achieved an operating frequency of 289 MHz, and the second architecture demonstrated the controller's capability of determining the true resources available for a successful deployment of independent clones over a targeted FPGA environment, processing the task in parallel.
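The split/predict/update structure of the lifting scheme, which is what makes the parallel implementations above possible, can be illustrated with the simplest case, the Haar wavelet. This is a minimal Python sketch of the general technique, not the paper's FPGA design:

```python
def haar_lifting_forward(x):
    """One level of the Haar DWT via lifting: split, predict, update."""
    even, odd = x[0::2], x[1::2]                        # split into polyphase parts
    detail = [o - e for o, e in zip(odd, even)]         # predict step (high-pass)
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step (low-pass)
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert by running the lifting steps backwards with opposite signs."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    x = [0.0] * (2 * len(even))
    x[0::2], x[1::2] = even, odd
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_lifting_forward(signal)
assert haar_lifting_inverse(approx, detail) == signal   # perfect reconstruction
```

Because each predict/update step only combines neighbouring even/odd samples in place, the steps can be pipelined and applied to independent signal blocks in parallel, which is the property the FPGA architectures exploit.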


International Conference on Advanced Learning Technologies | 2008

Using ICT to Teach Sign Language

Mohamed Jemni; Oussama Elghoul

Despite the rapid growth of information and communication technologies in education, and the development of very sophisticated environments to improve learning, there are few tools dedicated to deaf education, due to the difficulty of creating content in sign language. In this context, we present in this paper an ICT environment we developed to help deaf people improve their social integration and communication capabilities. Our environment is a specialized LCS that generates multimedia courses to teach and learn sign language. These courses can be used either by deaf pupils to learn (or e-learn) sign language, or by hearing people to become able to communicate with deaf people. This educational environment mainly uses a Web-based interpreter of sign language, developed in our research laboratory and called WebSign: a tool that automatically interprets written texts in visual-gestural-spatial language using avatar technology.


International Conference on Electrical Engineering and Software Applications | 2013

R-wave detection using EMD and bionic wavelet transform

Bochra Khiari; Ezzedine Ben Braiek; Mohamed Jemni

The most striking waveform in an electrocardiogram (ECG) is the QRS complex, and the detection of the R wave is the first step in any automatic analysis of the ECG. In this paper, we propose a new method based on a preprocessing of the ECG signal in order to restore and enhance it properly. For this purpose, we use the intrinsic mode functions obtained from empirical mode decomposition (EMD). To emphasize and extract R waves, we then decompose the ECG signal using the bionic wavelet transform. Finally, time/amplitude thresholding is applied to determine the position of the R wave. The algorithm was tested on signals from the QT database.
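The final time/amplitude thresholding stage can be sketched as below. This is a simplified illustration that assumes the EMD and bionic-wavelet stages have already produced an enhanced signal with prominent R waves; the threshold fractions and refractory period are illustrative defaults, not the paper's values.

```python
def detect_r_peaks(sig, fs, amp_frac=0.6, refractory_s=0.2):
    """Pick local maxima above an amplitude threshold, enforcing a
    minimum time separation (refractory period) between R waves."""
    thr = amp_frac * max(sig)            # amplitude threshold
    refractory = int(refractory_s * fs)  # time threshold, in samples
    peaks, last = [], -refractory
    for i in range(1, len(sig) - 1):
        is_local_max = sig[i - 1] <= sig[i] > sig[i + 1]
        if is_local_max and sig[i] >= thr and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# synthetic "enhanced" ECG: clean R spikes at known sample positions
fs = 250
sig = [0.0] * 1000
for pos in (100, 350, 600, 850):
    sig[pos] = 1.0

detect_r_peaks(sig, fs)  # -> [100, 350, 600, 850]
```

The time threshold prevents double-detecting a single R wave, while the amplitude threshold rejects the smaller P and T waves that survive preprocessing.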


International Conference on Electrical Engineering and Software Applications | 2013

Toward a mobile service for hard of hearing people to make information accessible anywhere

Mehrez Boulares; Mohamed Jemni

Deaf and hard of hearing people can find it difficult to follow the rapid pace of our daily life. This problem is due to the lack of services that increase access to information: regarding hearing impairment, there are no specific solutions to make information accessible anywhere, although this community has very specific needs related to learning and understanding any written language. Hearing impairment is an invisible but quite frequent disability: it is estimated that more than 8% of the world's population suffers from hearing loss. According to many studies, the reading level of hearing-impaired students is lower than that of hearing students. In fact, many deaf people have difficulties with reading and writing; they cannot read and understand all the information found in a newspaper, on a vending machine, in an instruction leaflet, and so on. Most visual textual information is therefore not accessible to this category of people with disabilities, and a number of obstacles still have to be removed to make information truly accessible to all, which is crucial for their personal development and successful integration. In this paper, we propose a solution to this problem: a mobile translation system that exploits the great technological advances in smartphones to improve information accessibility anywhere. We rely on text image processing, virtual reality 3D modeling, and cloud computing to generate a real-time sign language interpretation with a high-quality virtual character.


International Conference on Algorithms and Architectures for Parallel Processing | 2010

A parallel distributed algorithm for the permutation flow shop scheduling problem

Samia Kouki; Talel Ladhari; Mohamed Jemni

This paper describes a new parallel Branch-and-Bound algorithm for solving the classical permutation flow shop scheduling problem, as well as its implementation on a cluster of six computers. The experimental study of our distributed parallel algorithm gives promising results and clearly shows the benefit of the parallel paradigm for solving large-scale instances in moderate CPU time.
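A Branch-and-Bound for the permutation flow shop can be sketched as follows: partial job sequences are extended one job at a time, and a branch is pruned when a lower bound on its makespan already meets the best complete schedule found. This is a toy serial version with a simple machine-based lower bound; the paper's bounds and its distribution over a cluster are of course more elaborate.

```python
def extend(c, job, p):
    """Per-machine completion times after appending `job` to a partial schedule."""
    nc = c[:]
    nc[0] += p[job][0]
    for k in range(1, len(nc)):
        nc[k] = max(nc[k], nc[k - 1]) + p[job][k]
    return nc

def branch_and_bound(p):
    """Minimize makespan over all job permutations; p[j][k] is the
    processing time of job j on machine k."""
    n, m = len(p), len(p[0])
    best = {"cmax": float("inf"), "seq": None}

    def lower_bound(c, remaining):
        # machine-based bound: machine k must still process every remaining
        # job, then the cheapest remaining tail runs on machines k+1..m-1
        lb = 0
        for k in range(m):
            load = c[k] + sum(p[j][k] for j in remaining)
            tail = min(sum(p[j][l] for l in range(k + 1, m)) for j in remaining)
            lb = max(lb, load + tail)
        return lb

    def recurse(seq, c, remaining):
        if not remaining:
            if c[-1] < best["cmax"]:
                best["cmax"], best["seq"] = c[-1], seq
            return
        if lower_bound(c, remaining) >= best["cmax"]:
            return  # prune: this branch cannot beat the incumbent
        for j in sorted(remaining):
            recurse(seq + [j], extend(c, j, p), remaining - {j})

    recurse([], [0] * m, set(range(n)))
    return best["cmax"], best["seq"]

# toy instance: 4 jobs x 3 machines
p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [1, 3, 2]]
cmax, seq = branch_and_bound(p)
```

The search tree decomposes naturally: disjoint subtrees (e.g. all sequences starting with a given job) can be explored by different cluster nodes, with only the incumbent makespan shared between them, which is what makes the distributed variant attractive.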


USAB '09 Proceedings of the 5th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society on HCI and Usability for e-Inclusion | 2009

A Sign Language Screen Reader for Deaf

Oussama El Ghoul; Mohamed Jemni

Screen reader technology first appeared to allow blind people and people with reading difficulties to use computers and access digital information. Until now, this technology has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can also facilitate their manipulation of computers and reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf people, whose output is a visual translation of the text into sign language. The screen reader is composed of two essential modules. The first is designed to capture user activity (mouse and keyboard events); for this purpose, we adopted the Microsoft Active Accessibility (MSAA) application programming interfaces. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine that converts text into sign language animation based on avatar technology.

Collaboration


Dive into Mohamed Jemni's collaborations.

Top Co-Authors

Mehrez Boulares, École Normale Supérieure
Hazem Fkaier, École Normale Supérieure
Oussama El Ghoul, École Normale Supérieure
Oussama Elghoul, École Normale Supérieure
Michel Koskas, University of Picardie Jules Verne
Mohsen Laabidi, École Normale Supérieure
Sami Khanfir, École Normale Supérieure
Samia Kouki, École Normale Supérieure