A multimodal dataset for authoring and editing multimedia content: The MAMEM project

Spiros Nikolopoulos, Panagiotis C. Petrantonakis, Kostas Georgiadis, Fotis Kalaganis, Georgios Liaros, Ioulietta Lazarou, Katerina Adam, Anastasios Papazoglou-Chalikias, Elisavet Chatzilari, Vangelis P. Oikonomou, Chandan Kumar, Raphael Menges, Steffen Staab, Daniel Müller, Korok Sengupta, Sevasti Bostantjopoulou, Zoe Katsarou, Gabi Zeilig, Meir Plotnik, Amihai Gotlieb, Racheli Kizoni, Sofia Fountoukidou, Jaap Ham, Dimitrios Athanasiou, Agnes Mariakaki, Dario Comanducci, Edoardo Sabatini, Walter Nistico, Markus Plank, Ioannis Kompatsiaris

Research output: Contribution to journal › Article › peer-review

Abstract

We present a dataset that combines multimodal biosignals and eye-tracking information gathered under a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, as well as during imagined-movement tasks. The presented dataset will contribute to the development and evaluation of modern human-computer interaction systems that foster the reintegration of people with severe motor impairments into society.
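For illustration, below is a minimal sketch of how one might load and inspect one subject's multimodal recordings in Python. The file name, variable keys, array layouts, and sampling rates are hypothetical placeholders for this sketch, not the dataset's actual structure.

```python
# Hypothetical sketch: loading one subject's multimodal recordings.
# The file name, keys, array layouts, and sampling rates below are
# illustrative assumptions, not the actual layout of the MAMEM dataset.
from scipy.io import loadmat

recording = loadmat("subject01.mat")   # hypothetical per-subject file

eeg = recording["eeg"]                 # assumed shape: (channels, samples)
gaze = recording["gaze"]               # assumed shape: (samples, 2) -> x, y
gsr = recording["gsr"].ravel()         # galvanic skin response trace
hr = recording["hr"].ravel()           # heart rate trace

EEG_FS = 256                           # assumed EEG sampling rate (Hz)
GAZE_FS = 60                           # assumed eye-tracker rate (Hz)

def duration_s(signal, fs):
    """Duration of a signal in seconds, given its sampling rate."""
    return signal.shape[-1] / fs

print(f"EEG: {eeg.shape[0]} channels, {duration_s(eeg, EEG_FS):.1f} s")
print(f"Gaze: {duration_s(gaze.T, GAZE_FS):.1f} s of (x, y) samples")
```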

Original language: English
Pages (from-to): 1048-1056
Number of pages: 9
Journal: Data in Brief
Volume: 15
DOIs
State: Published - Dec 2017

Bibliographical note

Funding Information:
This work is part of project MAMEM, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644780.

Publisher Copyright:
© 2017 The Authors

ASJC Scopus subject areas

  • General
