Djeraba C., Lablack A., Benabbas Y. Multi-Modal User Interactions in Controlled Environments
Springer, 2010. — 233 p.
This book presents a vision of the future in which computation will be human-centred and fully disseminated throughout the real environment. Computers will capture and analyze our multimodal behavior within our real environments, and hence they will be able to understand and predict our behavior: whatever, whenever, and however we need, and wherever we might be. Computers will enter the human world, tackling our goals, fulfilling our needs and helping us to achieve more while doing less. We will not have to explain these needs to computers; they will deduce them on the basis of behavior analysis. Such systems will boost our productivity and increase our well-being. They will help us to automate repetitive human tasks, optimize our gestures, find the information we need (when we need it, without forcing our eyes to examine thousands of items), and enable us to work together with other people through space and time. The development of such ambient intelligent systems, which could be seen as a Big Brother, must meet certain legal, social, and ethical criteria in order to be socially acceptable and to conform to what we might call Human Dignity. As a result, any such development and deployment must be accompanied by a proper investigation of the relevant legal, social and ethical issues.
The purpose of this book is to investigate the capture and analysis of the user's multimodal behavior (abnormal event detection, gaze and flow estimation) within a real environment, in order to adapt the response of the computer/environment to the user. Such data is captured using non-intrusive sensors (cameras) installed in the environment. This multimodal behavioral data is analyzed to infer user intention and is used to assist the user in his/her day-to-day tasks by seamlessly adapting the system's (computer/environment) response to his/her requirements. We aim to investigate how captured multimodal behavior is transformed into information that aids the user in dealing with his/her environment. The book considers both real-time and off-line return of information. The information is returned on classical output devices (display screens, media screens) installed in the environment. The user and the environment thus communicate in a closed loop: from the user to the environment (capture of multimodal user behavior) and from the environment to the user (information output in response to that behavior), which in turn influences the user's subsequent behavior, as sketched below.
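To make the user-environment loop concrete, here is a minimal Python sketch of the capture-analyze-respond cycle just described. Every name in it (BehaviorAnalyzer, capture_frame, render_response) is a hypothetical stub introduced for illustration; the book describes the loop conceptually and does not prescribe this API.

import time

class BehaviorAnalyzer:
    """Hypothetical analysis stage: turns raw sensor frames into an
    inferred user intention (e.g. a gaze region or an abnormal event)."""
    def infer_intention(self, frame):
        # A real system would run gaze, flow, or event estimation here.
        return {"gaze_region": None, "abnormal_event": False}

def capture_frame(camera_id=0):
    """Stub for a non-intrusive sensor (camera) installed in the environment."""
    return object()  # placeholder for an image frame

def render_response(intention, screen_id=0):
    """Stub for returning information on an output device (display screen)."""
    if intention["abnormal_event"]:
        print(f"screen {screen_id}: alert operator")
    else:
        print(f"screen {screen_id}: adapt content to {intention['gaze_region']}")

def run_loop(analyzer, cycles=3, period_s=1.0):
    # The two directions of communication, closed in a loop: what the
    # screen shows influences the behavior the camera observes next.
    for _ in range(cycles):
        frame = capture_frame()                  # user -> environment
        intention = analyzer.infer_intention(frame)
        render_response(intention)               # environment -> user
        time.sleep(period_s)

run_loop(BehaviorAnalyzer())

The closed loop is the essential design point: the rendered response is not a terminal output but an input to the next capture step.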
This book is composed of chapters describing the different modalities: abnormal event detection, gaze estimation, and flow estimation. Each chapter describes the technical challenges, state-of-the-art techniques, proposed methods, and applications. To illustrate the technical developments, we examine two applications, in security and in marketing.
The intended audience is researchers (university teachers, PhD and Master's students) and engineers working in research and development.
Abnormal Event Detection
Flow Estimation
Estimation of Visual Gaze
Visual Field Projection and Region of Interest Analysis
Societal Recommendations