
New Joint Laboratory LETRA (ANR-14-LAB4-0003-01)

by admin, 18 April 2014

The Joint Laboratory LETRA (Laboratory of Technical Studies and Research on Hearing) is based on the research and development of technological innovations designed to improve the safety and accessibility of public buildings (ERP). These innovations are intended for all groups of people, especially people with disabilities. The research areas of the joint lab LETRA are 1) non-speech sound signaling systems and 2) the improvement of speech understanding in public address systems. The ERP market is well known to ARCHEAN Technologies, our industrial partner. The innovations and patents resulting from this collaboration will allow:

- The company ARCHEAN Technologies to increase its competitiveness and maintain its local production (and its jobs);

- The MSHS-T CNRS (especially the PETRA facilities) to strengthen its scientific influence at the national and international levels, as well as to maintain and develop its scientific expertise.

The Joint Laboratory LETRA comprises the CNRS Unité de Service et de Recherche MSHS-T CNRS USR3414 and the ARCHEAN Technologies company. The MSHS-T is involved in LETRA through the technological platforms it manages: PETRA (part of the big facilities CCU). ARCHEAN Technologies, based in Montauban (82), is a company specialized in public address systems for ERP and at-risk sites (nuclear power plants, Seveso sites...), and in sound information for travelers.

Currently, both entities work together through a project funded by the Midi-Pyrénées region (AGILE IT program) that involves the development of an automatic setting system for hearing aids based on a cognitive model of perception. This first collaboration has allowed us to test the complementarity between the laboratory and the company. To organize a long-term program of research and development, structuring the partnership as a Joint Laboratory now appears necessary.

The use of non-speech sound signals in ERP will be the main focus of our research. These signals are primarily dedicated to people with disabilities in two contexts: (1) risky situations, in order to optimize and secure evacuation, and (2) "normal" situations, to facilitate orientation and wayfinding throughout the site, for example for both visually impaired and sighted people. Non-speech sounds are not language dependent and can therefore be useful to anyone visiting an ERP.

Scientific research will focus on the design of such signals, including a perceptual study on how they are received and understood by listeners, as well as a behavioral study on how they are used. Adapting the behavioral tests to this new context will also be a key part of the development, since the variability of practices related to the functional diversity of ERP will be taken into account.

The other axis will address the development of a device that automatically measures speech understanding in ERP, in order to adapt public address systems efficiently and in real time to changes in the acoustic environment. This device will make it possible to easily ensure a high quality of sound diffusion. It will be based on a cognitive model of speech understanding that first needs to be validated in the laboratory.
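The measure-and-adapt loop described above can be sketched as a simple feedback controller. The intelligibility proxy and gain rule below are illustrative assumptions (a clamped SNR score and a fixed-step gain adjustment), not the laboratory's actual cognitive model of speech understanding:

```python
# Hypothetical sketch of a real-time PA adaptation loop: estimate how
# intelligible speech is in the current acoustic environment, then nudge
# the public-address gain toward a target intelligibility level.
# The metric and thresholds here are toy assumptions for illustration.

def intelligibility_score(signal_level_db: float, noise_level_db: float) -> float:
    """Toy proxy for speech understanding: clamp the SNR into [0, 1]."""
    snr = signal_level_db - noise_level_db
    return max(0.0, min(1.0, snr / 30.0))  # assume 30 dB SNR ~ fully intelligible

def adapt_gain(current_gain_db: float, signal_db: float, noise_db: float,
               target: float = 0.8, step_db: float = 1.0) -> float:
    """Raise or lower the PA gain by one step toward the target score."""
    score = intelligibility_score(signal_db + current_gain_db, noise_db)
    if score < target:
        return current_gain_db + step_db      # too hard to understand: boost
    if score > target + 0.1:
        return current_gain_db - step_db      # louder than needed: back off
    return current_gain_db                    # within the acceptable band

# Example: ambient noise rises (e.g. a crowd enters the hall); the loop
# converges on a gain that restores the target intelligibility.
gain_db = 0.0
for _ in range(20):  # each iteration stands in for one measurement cycle
    gain_db = adapt_gain(gain_db, signal_db=70.0, noise_db=55.0)
```

In a real deployment the score would come from the validated cognitive model applied to in-room measurements, and the actuation would adjust equalization as well as gain; this sketch only shows the closed-loop structure.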