D2.1 Requirements and Specification - CORBYS
CORBYS<br />
Cognitive Control Framework for Robotic Systems<br />
(FP7 – 270219)<br />
Deliverable D2.1<br />
Requirements and Specification<br />
Contractual delivery date: Month 6<br />
Actual submission date: 31st July 2011<br />
Start date of project: 01.02.2011 Duration: 48 months<br />
Lead beneficiary: The University of Reading (UR) Responsible person: Prof. Atta Badii<br />
Revision: 1.0<br />
Project co-funded by the European Commission within the Seventh Framework Programme<br />
Dissemination Level<br />
PU Public X<br />
PP Restricted to other programme participants (including the Commission Services)<br />
RE Restricted to a group specified by the consortium (including the Commission Services)<br />
CO Confidential, only for members of the consortium (including the Commission Services)
Document Contributions History<br />
Author(s) | Date | Contributions<br />
Prof. Atta Badii (UR) | 24-02-2011 | Requirements Eng. SoA Section Document Structure<br />
Prof. Atta Badii (UR) | 14-03-2011 | Requirements Eng. Domain Knowledge Framework<br />
Prof. Atta Badii (UR) | 21-03-2011 | Requirements Eng. Deliverable Integrated Structure<br />
Prof. Atta Badii (UR) | 25-03-2011 | Requirements Eng. Progress Verification-Structuring Rationale<br />
Zlatko Matjacic (IRSZR) | 18-02-2011 | Background Information<br />
Zlatko Matjacic (IRSZR) | 25-03-2011 | Gait Rehabilitation Domain Knowledge<br />
Matthias Spranger (NRZ) | 14-03-2011 | Gait Rehabilitation Requirements Document II<br />
Matthias Spranger (NRZ) | 30-03-2011 | Comments on UR’s document<br />
Matthias Spranger (NRZ) | 14-04-2011 | Gait Rehabilitation Requirements and User Types<br />
Michael Vandamme (VUB) | 13-04-2011 | State-of-the-Art (Gait Rehabilitation Systems)<br />
Marco Creatura (BBT) | 15-04-2011 | State-of-the-Art in Non-Invasive Brain Computer Interface Detection of Motor Activities (BBT)<br />
Daniel Polani (UH) | 19-04-2011 | State-of-the-Art in Behaviour Generation, Anticipation and Initiation<br />
Hanne Opsahl Austad (SINTEF) | 28-04-2011 | State-of-the-Art in Sensors and Perception<br />
Sinisa Slavnic (UB) | 09-05-2011 | State-of-the-Art in Architectures for Cognitive Robot Control<br />
Ali Khan, Prof. Atta Badii, Daniel Thiemert (UR) | 11-05-2011 | State-of-the-Art in Situation Assessment; Requirements Eng. Domain Knowledge (Gait rehabilitation meeting with experts 28/04/2011)<br />
Roko Tschakarow | 20-05-2011 | State-of-the-Art in Smart Integrated Actuators<br />
Rajkumar Raval, Ali Khan (UR) | 26-05-2011 | State-of-the-Market – First Demonstrator<br />
Frode Strisland (SINTEF) | 27-05-2011 | Review of Human Sensing requirements; Contribution to CORBYS glossary<br />
Marco Creatura (BBT) | 30-05-2011 | Reviewing BCI requirements (chapters 6.4 and 7.3)<br />
Danijela Ristic-Durrant (UB) | 30-05-2011 | State-of-the-Art in Robotic Autonomous Systems for Examining Hazardous Environments<br />
Sinisa Slavnic (UB) | 30-05-2011 | Revision of Control sub-system requirements<br />
Ali Khan, Daniel Thiemert, Prof. Atta Badii (UR) | 02-06-2011 | Situation Assessment Architecture specific requirements; UI-REF methodology<br />
Rajkumar Raval, Ali Khan (UR) | 02-06-2011 | Task 4.3 specific requirements<br />
Christoph Salge (UH) | 03-06-2011 | Revision of SOIAA specific requirements<br />
Svetlana Grosu (VUB) | 06-06-2011 | Requirements specific to: Development of Low-level control units (Task 7.4); Experimenting and Evaluating simulations (Task 6.4)<br />
Christoph Salge (UH) | 13-06-2011 | Provided SOIAA output requirements<br />
Christoph Salge (UH) | 14-06-2011 | Tasks 6.1 and 6.5 specific requirements<br />
Frode Strisland (SINTEF) | 15-06-2011 | Requirements specific to system integration (WP8)<br />
Sinisa Slavnic (UB) | 17-06-2011 | Revision of requirements specific to Tasks 5.6, 6.2, 6.3, 6.6 and 9.4<br />
Matthias Spranger (NRZ) | 04-07-2011 | Addition of requirements specific to Task 3.3; Requirements specific to Task 9.5<br />
NRZ, IRSZR, UR, UB, VUB, OB, OBMS | 30-06-2011 | Chapter 4 revision – domain knowledge, background, requirements, demonstrator I<br />
OB | 07-07-2011 | Task specific requirements<br />
OBMS | 07-07-2011 | Task specific requirements<br />
SCHUNK | 07-07-2011 | Task specific requirements<br />
Markus Tuttemann (OB) | 08-07-2011 | State-of-the-Market (Demonstrator I) reduced / revised<br />
NRZ, IRSZR | 10-07-2011 | Chapter 4 revision – agreed, finalised requirements for demonstrator I<br />
Marco Creatura (BBT) | 12-07-2011 | Updating BCI requirements: BCISW11, BCI1 and 7.3.2.4<br />
Cornelius Glackin (UH) | 13-07-2011 | Revision of SOIAA section 7.7<br />
UR | 18-07-2011 | Review, corrections in State-of-the-Art sections (Chapter 10 onwards)<br />
Danijela Ristic-Durrant, Adrian Leu (UB) | 16-07-2011 | Requirements and SoM Demonstrator II added (Sections 4.7, 4.8); Section 9.2 requirements added, revised<br />
Matthias Spranger (NRZ) | 23-07-2011 | Chapter 4 revision<br />
Frode Strisland (SINTEF) | | Revision of section 5.2, chapters 8 and 10<br />
BBT | 26-07-2011 | Deliverable 2.1 review<br />
UB | 27-07-2011 | Deliverable 2.1 review, Chapters 1-9<br />
UB | 28-07-2011 | Review of SoA sections<br />
UR | 28-07-2011 | Final integration of Reviewers’ comments, extensive editing and formatting
Glossary<br />
Term | Definition<br />
CORBYS Roles<br />
CORBYS User | Any user interacting with the CORBYS system; for example, in the case of the gait rehabilitation system, users with the following roles: a patient, a practitioner or an engineer. In the case of a mobile robotic system, users with the following role: a hazardous area examination officer.<br />
CORBYS End-user | The companies/entities that use/exploit (aspects of) CORBYS technology in their commercial products or services.<br />
CORBYS Expert / Professional user | A professional dealing with the CORBYS system based on a need to do technical maintenance, repairs or system configuration; for example a Practitioner, a Developer or an Engineer. Covers the Operator/Tester/Assessor/Configurer/Expert/Practitioner user/end-user roles.<br />
CORBYS Patient | The person receiving gait rehabilitation therapy aided by the CORBYS system.<br />
CORBYS Practitioner | The medical professional configuring and assessing rehabilitation therapy aided by the CORBYS system.<br />
CORBYS Domain Knowledge<br />
Sensor Fusion | Method used to combine multiple independent sensors to extract and refine information not available through any single sensor alone.<br />
Situation Assessment | Estimation and prediction of the relations among objects in the context of their environment.<br />
Cognitive Control | Capability to process a variety of stimuli in parallel, to “filter” those that are the most important for a given task to be executed, to create an adequate response in time, and to learn new motor actions with minimum assistance.<br />
Human-Robot Interaction | Ability of a robotic system to mutually communicate with humans.<br />
Neural plasticity | Ability of neural circuits, both in the brain and the spinal cord, to reorganise or change function.<br />
Security Robots | Mobile platforms equipped with different sensors (cameras, laser scanners, etc.) which allow autonomous or teleoperated navigation.<br />
Cognitive Processes | Processes responsible for knowledge and awareness; they include the processing of experience, perception and memory.<br />
BCI | Brain Computer Interface<br />
CORBYS Technology Components<br />
CORBYS User Interface | User interface which suits the needs of the patients/therapists/developers/cognitive modules.<br />
Environmental Sensors | Sensors measuring the (physical) environment of the human-robot system, e.g. collision avoidance sensors.<br />
SAWBB | Situation Awareness Blackboard<br />
SOIAA | Self-Organising Informational Anticipatory Architecture<br />
Smart Actuators | Highly integrated mechatronic units incorporating a motor and the complete motion control electronics in a single unit.<br />
CORBYS Patient Sensors | The sensor systems taking measurements on the patient.<br />
CORBYS Physiological Sensors | Sensors measuring physiological parameters on the patient, for example heart rate, EEG, muscle tension.<br />
CORBYS Patient Mechanical Sensors | Sensors measuring movements, forces, torques, joint angles and similar at any given position on the patient.<br />
Patient Inertial Measurement Units | Sensors providing inertial information for specific parts of the human body.
Table of Contents<br />
1 ABSTRACT<br />
2 EXECUTIVE SUMMARY AND REPORT SCOPE<br />
3 INTRODUCTION<br />
4 REQUIREMENTS ENGINEERING ANALYSIS BASE<br />
4.1 KNOWLEDGE ELICITATION FROM CLINICAL EXPERTS FOR DEMONSTRATOR I<br />
4.2 ENVIRONMENT OF THE DEMONSTRATOR I: GAIT REHABILITATION USER-ROBOT INTERACTION<br />
4.3 REQUIREMENTS FOR DEMONSTRATOR I<br />
4.4 REQUIREMENTS FOR DEMONSTRATOR II<br />
4.5 REQUIREMENTS ENGINEERING METHODOLOGY (UI-REF) SALIENT FEATURES<br />
4.6 STATE-OF-THE-MARKET<br />
5 MECHATRONIC CONTROL SYSTEMS<br />
5.1 CANONICAL SUB-SYSTEMS<br />
5.2 HUMAN SENSING SYSTEMS (TASKS 3.1, 3.2; SINTEF)<br />
5.3 ROBOTIC SYSTEM MODELLING AND INTEGRATION OF MOTOR CONTROL UNITS (TASK 6.2, UB)<br />
5.4 DESIGN AND DEVELOPMENT OF THE MOBILE PLATFORM (TASK 7.1, OBMS)<br />
5.5 DESIGN AND DEVELOPMENT OF THE POWERED ORTHOSIS (TASK 7.2, OB)<br />
5.6 DESIGN AND INTEGRATION OF ACTUATION SYSTEM (TASK 7.3, SCHUNK)<br />
5.7 DEVELOPMENT OF LOW-LEVEL CONTROL UNITS (TASK 7.4, VUB)<br />
5.8 REALIZATION OF THE ROBOTIC SYSTEM (TASK 7.5, SCHUNK)<br />
6 HUMAN CONTROL SYSTEM<br />
6.1 CANONICAL SUB-SYSTEMS<br />
6.2 BCI DETECTION OF COGNITIVE PROCESSES THAT PLAY KEY ROLES IN MOTOR CONTROL AND LEARNING (TASK 3.3, UB)<br />
6.3 BCI SOFTWARE ARCHITECTURE (TASK 3.4, BBT)<br />
6.4 ARCHITECTURE DECOMPOSITION AND DEFINITION (TASK 6.1, UH)<br />
6.5 INTEGRATION OF COGNITIVE CONTROL MODULES (TASK 6.3, UB)<br />
6.6 EXPERIMENTING AND EVALUATING SIMULATIONS (TASK 6.4, VUB)<br />
6.7 ARCHITECTURE REVISION AND IMPROVEMENT (TASK 6.5, UH)<br />
6.8 FINAL ARCHITECTURE INTEGRATION AND FUNCTIONAL TESTING (TASK 6.6, UB)<br />
7 ROBOHUMATIC SYSTEMS (GRACEFUL ROBOT-HUMAN INTERACTIVE-COOPERATIVE SYSTEMS)<br />
7.1 CANONICAL SUB-SYSTEMS<br />
7.2 BCI COGNITIVE INFORMATION (TASK 3.5, BBT)<br />
7.3 DEVICE ONTOLOGY MODELLING (TASK 4.1, UR)<br />
7.4 SELF-AWARENESS REALISATION (TASK 4.2, UR)<br />
7.5 ROBOT RESPONSE TO A SITUATION (TASK 4.3, UR)<br />
7.6 SOIAA: SELF-ORGANIZING INFORMATIONAL ANTICIPATORY ARCHITECTURE (TASKS 4.4, 5.1, 5.2, 5.3, 5.4; UH)<br />
7.7 USER RESPONSIVE LEARNING AND ADAPTATION FRAMEWORK (TASK 5.5, UR)<br />
7.8 COGNITIVE ADAPTATION OF LOW-LEVEL CONTROLLERS (TASK 5.6, UB)<br />
8 SYSTEM INTEGRATION AND FUNCTIONAL TESTING (WP8, SINTEF)<br />
8.1 CONFORMANCE TESTING ON SUB-SYSTEM AND SYSTEM LEVEL<br />
8.2 INTEGRATION OF SUB-SYSTEMS<br />
9 EVALUATION (WP9, IRSZR)<br />
9.1 EVALUATION METHODOLOGY, BENCHMARKING, METRICS, PROCEDURES AND ETHICAL ASSURANCE (TASK 9.1, UR)<br />
9.2 TRAINING ON CORBYS SYSTEM (TASK 9.2, IRSZR)<br />
9.3 CONTINUOUS ASSESSMENT OF THE TECHNOLOGY UNDER DEVELOPMENT (TASK 9.3, IRSZR)<br />
9.4 EVALUATION OF THE RESEARCHED METHODS ON THE SECOND DEMONSTRATOR (TASK 9.4, UB)<br />
9.5 EVALUATION AND FEEDBACK TO DEVELOPMENT (TASK 9.5, NRZ)<br />
10 STATE-OF-THE-ART IN SENSORS AND PERCEPTION (SINTEF)<br />
10.1 INTRODUCTION TO SENSORS AND PERCEPTION<br />
10.2 SENSOR PRINCIPLES FOR PERCEPTION IN HUMAN-ROBOT INTERACTION<br />
10.3 INTERPRETATION OF MULTIPLE SENSOR SIGNALS<br />
10.4 SUMMARY ON TECHNOLOGY GAPS AND PRIORITIES FOR DEVELOPMENT IN CORBYS<br />
11 STATE-OF-THE-ART IN SITUATION ASSESSMENT (UR)<br />
11.1 INTRODUCTION<br />
11.2 SITUATION ASSESSMENT<br />
11.3 RELEVANT APPROACHES<br />
11.4 SUMMARY ON TECHNOLOGY GAPS AND PRIORITIES FOR DEVELOPMENT IN CORBYS<br />
12 STATE-OF-THE-ART IN BEHAVIOUR GENERATION, ANTICIPATION AND INITIATION (UH)<br />
12.1 INTRODUCTORY COMMENTS<br />
12.2 INFORMATION-THEORETIC PRINCIPLES<br />
12.3 SELF-ORGANISED BEHAVIOUR AND GOAL GENERATION<br />
12.4 BEHAVIOUR ANTICIPATION, GENERATION AND INITIATION<br />
12.5 TECHNOLOGICAL GAPS<br />
13 STATE-OF-THE-ART IN ARCHITECTURES FOR COGNITIVE ROBOT CONTROL (UB)<br />
13.1 ARCHITECTURES FOR COGNITIVE CONTROL OF ROBOTIC SYSTEMS<br />
13.2 COGNITIVE ARCHITECTURES USED FOR CONTROLLING DIFFERENT ROBOTIC SYSTEMS<br />
13.3 CORBYS ENABLING POTENTIAL AND CONSTRAINTS (CURRENT GAPS/SHORTCOMINGS)<br />
14 STATE-OF-THE-ART IN SMART INTEGRATED ACTUATORS (SCHUNK)<br />
14.1 INTRODUCTION TO SMART INTEGRATED ACTUATORS<br />
14.2 BASIC ACTUATOR TECHNOLOGIES<br />
14.3 CONTROL TECHNIQUES, INTERFACING, STANDARDISED DRIVE MODULES<br />
14.4 CORBYS ENABLING POTENTIAL AND CONSTRAINTS (CURRENT GAPS/SHORTCOMINGS)<br />
14.5 TECHNOLOGY INNOVATION REQUIREMENTS GAPS FILTER ELEMENTS<br />
15 STATE-OF-THE-ART IN NON-INVASIVE BRAIN COMPUTER INTERFACE (BBT)<br />
15.1 INVASIVE VS. NON-INVASIVE BCI TECHNOLOGY AND ROBOTICS<br />
15.2 HARDWARE FOR NON-INVASIVE BRAIN-COMPUTER INTERFACES<br />
15.3 SOFTWARE FOR NON-INVASIVE BRAIN-COMPUTER INTERFACES<br />
15.4 THE ROLE OF EEG ARTEFACTS IN NON-INVASIVE BCIS<br />
15.5 DECODING THE COGNITIVE PROCESS REQUIRED IN CORBYS<br />
16 STATE-OF-THE-ART IN GAIT REHABILITATION SYSTEMS (VUB)<br />
16.1 GAIT REHABILITATION<br />
16.2 GAIT REHABILITATION ROBOTS<br />
16.3 ROBOT CONTROL STRATEGIES FOR GAIT ASSISTANCE<br />
17 STATE-OF-THE-ART IN ROBOTIC SYSTEMS FOR EXAMINING HAZARDOUS ENVIRONMENTS<br />
17.1 ROBOTIC SYSTEM FOR AUTOMATED SAMPLING<br />
17.2 INTELLIGENT AUTOMATED INVESTIGATION OF A HAZARDOUS ENVIRONMENT<br />
18 CONCLUSION<br />
19 REFERENCES
Table of Figures<br />
FIGURE 1: NORMAL WALKING IN SAGITTAL PLANE (TOP) AND IN FRONTAL PLANE (BOTTOM)<br />
FIGURE 2: TOE WALKING IN SAGITTAL PLANE<br />
FIGURE 3: CROUCH GAIT IN SAGITTAL PLANE<br />
FIGURE 4: STIFF KNEE GAIT IN SAGITTAL PLANE (TOP) AND IN FRONTAL PLANE (BOTTOM)<br />
FIGURE 5: STIFF KNEE GAIT WITH CIRCUMDUCTION IN SAGITTAL PLANE (TOP) AND IN FRONTAL PLANE (BOTTOM)<br />
FIGURE 6: USUAL WORKFLOW IN A REHABILITATION UNIT<br />
FIGURE 7: PRIORITISATION PROCESS IN UI-REF<br />
FIGURE 8: DISTRIBUTION OF TOTAL COST OF MS IN EUROPE (YEAR 2005) BY RESOURCE USE COMPONENTS<br />
FIGURE 9: NON-FATAL INJURIES PER 1000 BY SEX AND AGE GROUP<br />
FIGURE 10: ASENDRO EOD ROBOT<br />
FIGURE 11: DRAGON RUNNER<br />
FIGURE 12: TECHNOROBOT RIOTBOT WITH REMOTE CONTROL<br />
FIGURE 13: IROBOT 510 PACKBOT<br />
FIGURE 14: EOD-ROBOTER MADE BY TELEROB<br />
FIGURE 15: WILLEM EINTHOVEN’S SETUP FOR MEASURING THE ECG SIGNALS (LEFT)<br />
FIGURE 16: ILLUSTRATION OF THE ECG SIGNAL AND THE EFFECT OF AN ABRUPT MECHANICAL DISTURBANCE<br />
FIGURE 17: 12 LEAD ECG<br />
FIGURE 18: ECG ELECTRODES. TO THE LEFT IS AN EXAMPLE OF DISPOSABLE ELECTRODES<br />
FIGURE 19: THE LEFT IMAGES SHOW THE BIOMETRICS LTD EMG MONITORING SYSTEM<br />
FIGURE 20: NONIN ONYX II 9560 AND DISPOSABLE 7000A ADULT SENSOR<br />
FIGURE 21: THE XSENS MTX INERTIAL MONITORING UNIT OFFERING SIMULTANEOUS<br />
FIGURE 22: TEKSCAN F-SCAN® SYSTEM<br />
FIGURE 23: GONIOMETERS FROM PLUX (LEFT) AND BIOMETRICS LTD<br />
FIGURE 24: HIDALGO EQUIVITAL MEASURING UNIT (LEFT) AND BELT (RIGHT)<br />
FIGURE 25: SINTEF ESUMS CHEST UNIT BELT<br />
FIGURE 26: LAMBERT'S APPROACH TO SEMANTIC FUSION<br />
FIGURE 27: CASE-BASED REASONING PROCESS<br />
FIGURE 28: THE ILLUSTRATION OF A TYPICAL THREE-LAYER ARCHITECTURE<br />
FIGURE 29: ARMAR ARCHITECTURE (BURGHART ET AL. 2005)<br />
FIGURE 30: A HUMANOID ROBOT ARMAR (BURGHART ET AL. 2005)<br />
FIGURE 31: IMA MULTIAGENT-BASED COGNITIVE ROBOT ARCHITECTURE (KAWAMURA ET AL. 2004)<br />
FIGURE 32: THE LAYERS OF ICUB ARCHITECTURE (SANDINI ET AL. 2007)<br />
FIGURE 33: ICUB COGNITIVE ARCHITECTURE (SANDINI ET AL. 2007)<br />
FIGURE 34: ARCHITECTURE OF CARE-O-BOT (HANS ET AL. 2001)<br />
FIGURE 35: CARE-O-BOT<br />
FIGURE 36: ROTARY ELASTIC CHAMBERS – ACTUATOR (IAT BREMEN)<br />
FIGURE 37: FLEXIBLE FLUIDIC ACTUATOR (AIA KIT, KARLSRUHE)<br />
FIGURE 38: DYNAMIXEL SMART ACTUATORS INCLUDING REDUCTION GEAR, CONTROLLER, MOTOR AND DRIVER<br />
FIGURE 39: SCHUNK MODULAR SMART ACTUATOR SYSTEM WITH ROTARY AND LINEAR DRIVES<br />
FIGURE 40: EEG DRY DEVICES<br />
FIGURE 41: DRY AND PORTABLE RECORDING EEG SYSTEM PROVIDED FROM G.TECH: DRY ELECTRODES (LEFT)<br />
FIGURE 42: EXAMPLE EEG ARTEFACTS<br />
FIGURE 43: VOLUNTARY HAND MOVEMENT PHASES: MOTION INTENTION,<br />
FIGURE 44: GAIT REHABILITATION ROBOTS: END-EFFECTOR BASED (E.G. HAPTICWALKER)<br />
FIGURE 45: END-EFFECTOR TYPE DEVICES: (FROM LEFT TO RIGHT) HAPTICWALKER®, G-EO®, ARTHUR<br />
FIGURE 46: COMMERCIALLY AVAILABLE EXOSKELETON TYPE DEVICES:<br />
FIGURE 47: BILATERAL PROTOTYPES: (FROM LEFT TO RIGHT) LOPES, PAM/POGO, WALKTRAINER<br />
FIGURE 48: UNILATERAL AND SINGLE JOINT PROTOTYPES:<br />
FIGURE 49: ASSISTIVE EXOSKELETONS: (FROM LEFT TO RIGHT) REWALK, BODY WEIGHT SUPPORT ASSIST, SUBAR<br />
FIGURE 50: POWER AUGMENTING EXOSKELETONS: (FROM LEFT TO RIGHT) BLEEX, SARCOS EXOSKELETON,<br />
FIGURE 51: HIGH-LEVEL CONTROL STRATEGIES IN ROBOT-ASSISTED GAIT REHABILITATION:<br />
FIGURE 52: BASELINE SAMPLE COLLECTION FOR LABORATORY ANALYSIS (BRUEMMER ET AL. 2002)<br />
FIGURE 53: MOBILE SECURITY ROBOTS<br />
FIGURE 54: AUTONOMOUS VS. MANUAL INVESTIGATION OF CONTAMINATION AREA INCLUDING SAMPLING<br />
FIGURE 55: (A) HARDWARE SETUP OF THE RECOROB RECONNAISSANCE ROBOTIC SYSTEM,
Table of Tables<br />
TABLE 1: WHO-MONICA PROJECT 6 EU POPULATION<br />
TABLE 2: PREVALENCE (PER 100 000) OF MS IN EUROPE BY AGE (BEST ESTIMATES)<br />
TABLE 3: INCIDENCE (PER 100 000/YEAR) OF MS IN EUROPE<br />
TABLE 4: MAIN PLATFORM COMPARISON
1 Abstract<br />
Deliverable D2.1 Requirements and Specification formalises the prioritised requirements for the CORBYS Cognitive Control Architecture and demonstrator domains as exemplars of use cases to be evaluated within the CORBYS project. The focus of CORBYS is on robotic systems that have a symbiotic relationship with humans. Such robotic systems have to cope with highly dynamic environments, as humans are demanding, curious and often act unpredictably. CORBYS will design and implement a cognitive robot control architecture that allows the integration of 1) high-level cognitive control modules, 2) a semantically-driven self-awareness module, and 3) a cognitive framework for anticipation of, and synergy with, human behaviour based on biologically-inspired information-theoretic principles. These modules, supported by an advanced multi-sensor system to facilitate dynamic environment perception, will endow the robotic systems with high-level cognitive capabilities such as situation-awareness and attention control. This will enable robot behaviour to be adapted to the user’s variable requirements, directed by cognitively adapted control parameters. CORBYS will provide a flexible and extensible architecture to benefit a wide range of applications, ranging from robotised vehicles and autonomous systems, such as robots performing object manipulation tasks in an unstructured environment, to systems where robots work in synergy with humans. The latter class of systems will be a special focus of CORBYS innovation, as there exist important classes of critical applications where support for humans and robots sharing their cognitive capabilities is a particularly crucial requirement to be met. The CORBYS control architecture will be validated within two challenging demonstrators: i) a novel mobile robot-assisted gait rehabilitation system (CORBYS); ii) an existing autonomous robotic system. The CORBYS demonstrator to be developed during the project will be a self-aware system capable of learning and reasoning, enabling it to optimally match the requirements of the user at different stages of rehabilitation across a wide range of gait disorders.
2 Executive Summary <strong>and</strong> Report Scope<br />
This deliverable document (<strong>D2.1</strong>) reports the activities, effort <strong>and</strong> work performed under Work Package 2<br />
(WP2 <strong>Requirements</strong> <strong>and</strong> <strong>Specification</strong>s) of the <strong>CORBYS</strong> project. This document defines the end-user groups<br />
<strong>and</strong> end-user requirements, elicits requirements with defined priorities, <strong>and</strong> specifies in detail the list of<br />
requirements for all the sub-systems as well as the inter-dependencies. The document also reviews state-ofthe-art<br />
achievements in the Science <strong>and</strong> Technology areas relevant to the project.<br />
An introduction is given in Chapter 3, followed by the <strong>Requirements</strong> Engineering Analysis Base presented in<br />
Chapter 4. This includes sections on requirements engineering methodology UREIRF salient features, <strong>and</strong><br />
knowledge elicitation from clinical partners regarding the first demonstrator, consisting of end-user<br />
demographics <strong>and</strong> gait biomechanics in normal <strong>and</strong> pathological walking. <strong>Requirements</strong> for the first<br />
demonstrator <strong>and</strong> the second demonstrator are also given in Chapter 4. <strong>Requirements</strong> elicitation<br />
methodologies employed, together with the involvement of stakeholders, establish a prioritised hierarchy of<br />
requirements to be fulfilled by the project during its lifetime; these are also reported in this chapter. This<br />
chapter concludes with a state-of-the-market review relevant to <strong>CORBYS</strong> solutions.<br />
Chapter 5 focuses on the mechatronic control systems of <strong>CORBYS</strong> such as human sensing systems, robotic<br />
system motor control units, the mobile platform of the gait rehabilitation system, powered orthosis, actuation<br />
systems etc. Chapter 6 reports the human control system of <strong>CORBYS</strong>, including non-invasive BCI detection<br />
of cognitive processes for motor control <strong>and</strong> learning <strong>and</strong> the cognitive control modules. Chapter 7 deals with<br />
robo-humatic systems, i.e. graceful robot-human interactive cooperation systems. This includes self-aware<br />
realisation, situational response, user-responsive learning <strong>and</strong> adaptation, anticipation etc.<br />
Chapters 8 <strong>and</strong> 9 detail system integration <strong>and</strong> functional testing for the <strong>CORBYS</strong> solutions including<br />
conformance testing at sub-systems <strong>and</strong> system level, formulation of appropriate evaluation methodologies,<br />
identification of relevant benchmarks, metrics <strong>and</strong> procedures.<br />
Chapters 10 – 17 report state-of-the-art in relevant topics such as sensors <strong>and</strong> perception, situation assessment,<br />
anticipation <strong>and</strong> initiation, cognitive robot control architectures, smart integrated actuators, non-invasive BCI,<br />
gait rehabilitation systems <strong>and</strong> hazardous area examining robots. Chapter 18 concludes the report.<br />
3 Introduction<br />
In this introductory chapter we motivate a framework for focusing on the relevant domain knowledge prerequisites<br />
of the <strong>CORBYS</strong> requirements engineering process. Accordingly we set out some domain observations,<br />
thus identifying a set of important perspectives, referred to as facets, of the relevant domain knowledge that<br />
this section has to explicate <strong>and</strong> formalise to prepare the way for the subsequent requirements engineering<br />
phase.<br />
We conclude this chapter by setting out indicative content markers for each of the 9 facets thus identified.<br />
This will serve to characterise the expected structure of the relevant knowledge to be contributed by respective<br />
Partners for each facet as deemed relevant to the design <strong>and</strong> evaluation of the <strong>CORBYS</strong> Cognitive System.<br />
This analysis is aimed at establishing a rational structure for each section of the deliverable document, so as to<br />
avoid gaps that could otherwise emerge in the relevant knowledge map later when we integrate the<br />
constituent parts to be contributed by each Partner per their specialist area of expertise.<br />
Deriving the analysis base for relevant knowledge acquisition<br />
For clarity, it should be stated that in this document we distinguish between the following four subsystems of<br />
the <strong>CORBYS</strong> architecture:<br />
i) The <strong>CORBYS</strong> Mechatronics i.e. the <strong>CORBYS</strong> Control Hardware Embodiment: by which we<br />
refer to that assembly of sensors, actuators, <strong>and</strong> control hardware such as servos, differential<br />
motors, <strong>and</strong> other parts of the embodiment of any integrated framework of mechanical, electronic<br />
<strong>and</strong> control components.<br />
ii) The <strong>CORBYS</strong> Cognitive System i.e. the <strong>CORBYS</strong> Initiative-Taking Intelligent<br />
Controller: by which we refer to the intelligent sub-systems that enable the smart (i.e. the robotic)<br />
capability of an adaptively-assistive architecture such as <strong>CORBYS</strong>.<br />
In particular in usage-contexts whereby such an architecture is capable of seamless integration<br />
with the human body-area systems to deliver close real-time responsive man-machine<br />
cooperativity in support of a person’s well-being <strong>and</strong> life/work-style goals in a highly<br />
personalised manner, we classify such a system as displaying what we refer to as<br />
intimate-assistive or “robo-humatic” capabilities.<br />
This means that although the core design architecture of <strong>CORBYS</strong> cognitive system should be<br />
generally applicable to various usage-contexts, the design of its situation-awareness framework<br />
that integrates directly with the target host platform <strong>and</strong> user environments will have to be<br />
mindful of the degree to which the application domain requires intimate man-machine assistivity.<br />
The more closely the <strong>CORBYS</strong> system has to integrate with human body-area systems in<br />
achieving its man-machine co-working, the more critical it would be to ensure that the design of<br />
its situation-awareness is optimally adaptive (including consideration of additional ethical<br />
compliance criteria including informed consent, ability to withdraw at any time, ethical approval<br />
etc.) whereas <strong>CORBYS</strong> cognitive co-working applications that typically do not involve direct<br />
machine-human-body-area systems integration should pose less exacting requirements regarding<br />
human-factors <strong>and</strong> related technology acceptance criteria. For example, <strong>CORBYS</strong> Demonstrator I, the<br />
robot assisted human gait rehabilitation application domain will involve more “robo-humatic”<br />
user (therapist/patient) design requirements than <strong>CORBYS</strong> Demonstrator II, the hazardous<br />
area examining robotic system.<br />
A <strong>CORBYS</strong> User Interface: for the practitioner/end-user, who would be re-configuring <strong>CORBYS</strong>,<br />
(re)setting its performance goals <strong>and</strong> monitoring the outcomes of such performance <strong>and</strong> his/her<br />
own recommended scenario, e.g. therapy. The <strong>CORBYS</strong> Assist interface is to provide an<br />
appropriate interface for any Operator/Tester/Assessor/Configurer/Expert/Practitioner user/end-user/<br />
man-in-the-loop, who may or may not be the person engaged with the <strong>CORBYS</strong> system<br />
in routine co-working but who at any rate may need to have a simple interface through which to<br />
(re)configure <strong>and</strong>/or (re)set the <strong>CORBYS</strong> system for various operational modes to satisfy specific<br />
performance targets <strong>and</strong> to assess the system performance, e.g. a Gait Therapist or Emergency<br />
Hazardous Area Examining Officer.<br />
iii) <strong>CORBYS</strong> data logging system: to serve off-line (outer-loop) learning, pattern discovery <strong>and</strong><br />
Case Based Reasoning (CBR), so that insights can be harvested by the system for off-line learning<br />
<strong>and</strong> evolutionary performance refinement, as well as becoming a source of experiential<br />
knowledge that can be shared amongst the user community of practice for the most beneficial<br />
socio-technical deployment of the <strong>CORBYS</strong> architecture in a given domain, e.g. in gait therapy<br />
management.<br />
iv) <strong>CORBYS</strong> Evaluation Environment: which must conform to appropriate Testability Framework<br />
Criteria, thus providing a proving ground for safe, sense-ful, non-trivial, realistic, repeatable <strong>and</strong><br />
scalable validation <strong>and</strong> usability evaluation of all the above sub-systems.<br />
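As a concrete illustration of the outer-loop learning role of the data logging system (iii), a minimal case-based retrieval step might look as follows; all field names, parameters and cases are invented for this sketch and are not part of the <strong>CORBYS</strong> specification:<br />

```python
from dataclasses import dataclass

@dataclass
class SessionCase:
    """One logged co-working session (illustrative fields, not the CORBYS schema)."""
    patient_id: str
    gait_speed_m_s: float   # mean overground walking speed
    support_level: float    # 0.0 (no support) .. 1.0 (full antigravity support)
    outcome_score: float    # higher = better session outcome

def most_similar_case(query: SessionCase, case_base: list) -> SessionCase:
    """Naive CBR retrieval: nearest stored case by summed parameter distance."""
    def distance(c: SessionCase) -> float:
        return (abs(c.gait_speed_m_s - query.gait_speed_m_s)
                + abs(c.support_level - query.support_level))
    return min(case_base, key=distance)

cases = [
    SessionCase("p01", 0.4, 0.8, 55.0),
    SessionCase("p02", 0.9, 0.3, 80.0),
]
query = SessionCase("p03", 0.5, 0.7, 0.0)
print(most_similar_case(query, cases).patient_id)  # -> p01
```

A real system would weight the parameters and adapt the retrieved case's settings before reuse; this sketch only shows the retrieval step.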
Accordingly, for the purpose of our requirements engineering analysis, we distinguish between what we refer to<br />
as the <strong>CORBYS</strong> Mechatronics, or the embodiment of the target <strong>CORBYS</strong> system, the specification <strong>and</strong><br />
configuration of which will be expected to be largely application-dependent so that it can integrate with<br />
legacy application-specific host environments,<br />
<strong>and</strong>,<br />
the cognitive system, which is that part of the <strong>CORBYS</strong> architecture that provides the intelligent adaptive<br />
capabilities, e.g. as needed in graceful man-machine teamwork in a given application arena. This technical<br />
objective of man-machine cooperativity support, which is essentially the goal of the <strong>CORBYS</strong> architecture,<br />
requires the semantic integration of several interacting dynamic environments as listed below.<br />
By graceful cooperativity support we mean that the intervention steps taken by the <strong>CORBYS</strong> cognitive system<br />
should fluidly blend in with the human/co-worker activity flow to create a seamless confluence of mutual<br />
effort, i.e. efficient <strong>and</strong> effective orchestration of teamwork towards the shared goal, whilst ensuring that the<br />
limits of the human co-worker’s comfortable performance are neither exceeded nor under-actualised, as deemed<br />
appropriate.<br />
This deliverable has to focus on the parametrics of the environments involved i.e. the states <strong>and</strong> process flows<br />
of the entities in the domain environments with which <strong>CORBYS</strong> has to gracefully interact. As such this<br />
deliverable will have to formally set out all the relevant epistemology (the structure of knowledge) <strong>and</strong><br />
teleology (the theories of sense-ful action <strong>and</strong> purpose) essentially at two levels of abstraction: i) Generic<br />
<strong>CORBYS</strong> Framework Architecture <strong>Requirements</strong>, ii) Domain-Specific <strong>Requirements</strong> for each Proof-of-<br />
Concept Demonstrator Domain; i.e. for the Gait Therapist Assistant, <strong>and</strong>, the Hazardous Area Examining<br />
Assistant applications.<br />
Thus the aim of this deliverable is to focus on a formal explication of the knowledge needed to help specify<br />
the requirements to be fulfilled by the <strong>CORBYS</strong> Architecture during the various modes of its operational life<br />
i.e. during:<br />
i. configuration<br />
ii. usage<br />
iii. goal (re)setting<br />
iv. maintenance<br />
v. validation<br />
vi. refinement<br />
Phases of the <strong>CORBYS</strong> system operation<br />
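For traceability, the six operational-life modes above can be captured as a simple enumeration against which individual requirements are tagged; the names and the example requirement below are illustrative only:<br />

```python
from enum import Enum, auto

class OperationalMode(Enum):
    """The six operational-life modes listed above (names are illustrative)."""
    CONFIGURATION = auto()
    USAGE = auto()
    GOAL_RESETTING = auto()
    MAINTENANCE = auto()
    VALIDATION = auto()
    REFINEMENT = auto()

# A requirement can then be tagged with the modes in which it applies, e.g.:
requirement_modes = {
    "R-001: therapist can reset the gait-speed target": {
        OperationalMode.CONFIGURATION,
        OperationalMode.GOAL_RESETTING,
    },
}
```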
It is also important to clarify that we recognise at least two <strong>CORBYS</strong> user groups:<br />
i) the <strong>CORBYS</strong> Professional User or <strong>CORBYS</strong> Expert User who would prescribe the manner in<br />
which the <strong>CORBYS</strong> system should be (re)configured, <strong>and</strong> have its goals (re)set so as to provide a<br />
sense-ful <strong>and</strong> useful service with maximum benefit to the end-user, <strong>and</strong>,<br />
ii) the <strong>CORBYS</strong> End-User who would be the human engaged in some activity that is intended to be<br />
directly supported through timely <strong>and</strong> gracefully cooperative initiatives taken by the <strong>CORBYS</strong><br />
system to help the human in achieving his goals.<br />
Thus the <strong>CORBYS</strong> Consortium Partners should make contributions to this deliverable, each according to<br />
their expertise-related responsibilities <strong>and</strong> the level of planned effort, to respond to the question of<br />
what the design, development <strong>and</strong> validation of a <strong>CORBYS</strong> system needs to know about:<br />
process-flows, states, spaces, observability, controllability, degrees of freedom, resource constraints,<br />
optimal timing, interfaces etc. relevant to the intervention steps by a cognitive system architecture to support<br />
man-machine interactive-cooperativity for the integrated man-machine system as a whole to achieve a given<br />
target state (goal state) as may be set by the domain experts e.g. i) by a gait therapist, or, ii) by a hazardous<br />
area examination officer – who are the typical practitioner users in the two <strong>CORBYS</strong> Project Demonstrator<br />
domains.<br />
Accordingly the design of the <strong>CORBYS</strong> framework architecture has to be informed by the epistemology i.e.<br />
the structure of knowledge relevant to both the general principles of man-machine mixed initiative-taking<br />
system design, <strong>and</strong>, the ontology of the two application arenas chosen as the Demonstrator proving grounds.<br />
Such application domain ontology includes the semantic parametrics of the entities, states, processes <strong>and</strong><br />
goals, as well as the spatio-temporal <strong>and</strong> resource constraints appertaining to each of the two demonstrator<br />
domains.<br />
From the above observations follows our action plan, which is to formalise the relevant knowledge, structured<br />
within 9 facets of the domain, with the focus of our analysis base as outlined in the indicative contents to be<br />
agreed under each of the 10 relevant sections set out below. For each of the facets, by reference to a uniform<br />
analysis set (of entities, states, processes, constraints <strong>and</strong> requirements), we seek to examine <strong>and</strong> identify all<br />
the requirements relevant to the design <strong>and</strong> validation of <strong>CORBYS</strong> as a framework Cognitive System<br />
architecture <strong>and</strong> for its demonstrator domains.<br />
The 9 facets of our requirements analysis <strong>and</strong> elicitation are as follows:<br />
i) The Human System<br />
ii) The <strong>CORBYS</strong> Mechatronic System<br />
iii) The <strong>CORBYS</strong> Cognitive System<br />
iv) The <strong>CORBYS</strong> Learning (on/offline) Component<br />
v) The <strong>CORBYS</strong> Testability, Validation <strong>and</strong> Usability Evaluation Framework<br />
vi) The Operational Environment Parametrics<br />
vii) The Demonstrator Application I: Robot-assisted Gait Rehabilitation<br />
viii) The Demonstrator Application II: Robotic examination of hazardous/contaminated areas<br />
ix) Socio-technically acceptable <strong>CORBYS</strong> Solution System <strong>and</strong> its Business Model Sustainability<br />
4 <strong>Requirements</strong> Engineering Analysis Base<br />
4.1 Knowledge elicitation from clinical experts for Demonstrator I<br />
This section sets out the requirements of the first <strong>CORBYS</strong> demonstrator resulting from discussions among<br />
relevant partners of the Consortium including NRZ, IRSZR, OB, OBMS, VUB, UB, <strong>and</strong> UR.<br />
4.1.1 End-User Demographics<br />
End-users of <strong>CORBYS</strong> demonstrator I, the gait rehabilitation system, are patients who may have the<br />
following diseases <strong>and</strong> h<strong>and</strong>icaps:<br />
- Lesions or diseases of the central nervous system, such as traumatic brain injury (incidence of severe<br />
TBI in Germany: 30,000), stroke (of the 250,000 patients suffering a stroke yearly in Germany, approx.<br />
30,000 could benefit from a walking aid system), encephalitis, hypoxic brain injury before or during<br />
birth or preterm birth (= infantile cerebral palsy, incidence in Germany: 3,000) or during later life (e.g. due<br />
to cardiac disease or suffocation), degenerative brain diseases such as Parkinson’s disease (36,000),<br />
inflammatory brain diseases such as multiple sclerosis (56,000), or brain tumors. Lesions of the<br />
central nervous system lead to spastic paresis (= increased muscle tone) <strong>and</strong> the need for re-modelling<br />
by repeated <strong>and</strong> actively corrective exercises.<br />
- Diseases <strong>and</strong> trauma of the spinal cord (5,000). Lesions of the spinal cord also lead to a spastic paresis,<br />
but there is little chance for re-modelling.<br />
- Diseases of the peripheral nervous system such as Guillain-Barré syndrome, polyneuropathies, or<br />
neuromuscular diseases (disabling approx. 10,000). Lesions of the peripheral nervous system lead to a<br />
flaccid paresis (= reduced muscle tone) <strong>and</strong> the need for support <strong>and</strong> gaining strength.<br />
- Orthopedic problems such as endoprosthesis of the knee (in Germany: 160,000) or hip (in Germany:<br />
210,000). These patients would benefit from a gait training system during the first 1-2 weeks after the operation.<br />
Depending on the site <strong>and</strong> size of the lesion, patients suffer from pareses of one to four limbs: hemi- (arm <strong>and</strong><br />
leg of one side), para- (both legs), tetra- (all four limbs) or monoparesis (one limb). Additionally, other<br />
movement disorders can occur alone or in combination with paresis: ataxia (uncoordinated movements) or<br />
5
<strong>D2.1</strong> <strong>Requirements</strong> <strong>and</strong> <strong>Specification</strong><br />
extrapyramidal gait disorders with involuntary movements (e.g. tremor, ballistic or other uncontrollable<br />
movements).<br />
However, it is not wise to define groups of patients either by aetiological diagnosis or by gait patterns, since<br />
gait patterns are highly individual. The reasons for this are:<br />
- the aetiology of a disease (e.g. stroke, traumatic brain injury, etc.) may lead to very different symptoms<br />
(e.g. hemiparesis, spasticity, etc.);<br />
- the symptoms can be more or less severe, leading to very different requirements;<br />
- additionally, different symptoms can appear simultaneously <strong>and</strong> influence each other, e.g. a certain<br />
degree of spasticity is necessary to stand <strong>and</strong> walk in the case of severe weakness;<br />
- on top of this, cognitive functions <strong>and</strong> capacities, pain, anxiety etc. are often also involved <strong>and</strong> may<br />
change the gait pattern again.<br />
4.1.2 Gait biomechanics in normal <strong>and</strong> pathological walking<br />
4.1.2.1 Normal walking<br />
Figure 1 displays the positions of the human trunk, pelvis <strong>and</strong> lower extremities at seven instances of the gait<br />
cycle (between two consecutive contacts of the same leg). In general, human walking may be decomposed into<br />
sagittal, frontal <strong>and</strong> transversal movement, with the sagittal <strong>and</strong> frontal movements being the main contributors to<br />
dynamic stability as well as to forward propulsion <strong>and</strong> forward progression. When observing human walking<br />
we notice:<br />
- alternating phases of flexion <strong>and</strong> extension in ankle, knee <strong>and</strong> hip joint in sagittal plane (red arrows)<br />
- alternating phases of pelvic internal <strong>and</strong> external rotation (red box)<br />
- alternating phases of hip abduction <strong>and</strong> adduction combined with ankle, knee <strong>and</strong> hip flexion/extension<br />
during WEIGHT TRANSFER <strong>and</strong> FOOT CLEARANCE in frontal plane (red arrows) - this also<br />
implies the necessity for pelvic sideways movement (red arrows)<br />
- alternating phases of pelvic up <strong>and</strong> down movement/tilt in frontal plane (red box)<br />
- ankle varus/valgus in frontal plane (blue arrows)<br />
Figure 1: Normal walking in sagittal plane (top) <strong>and</strong> in frontal plane (bottom)<br />
in seven instances of gait cycle (GC): initial contact (IC), weight acceptance (WA), mid stance (MSt), terminal stance (TSt),<br />
initial swing (ISw), mid swing (MSw) <strong>and</strong> terminal swing (TSw). Red boxes indicate normal pelvic rotation <strong>and</strong> tilt whereas<br />
arrows indicate normal movements<br />
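For illustration, the seven gait-cycle instances named in Figure 1 can be represented as an ordered phase table; the cycle-fraction timings below are approximate textbook values, not measurements from the project:<br />

```python
# Approximate event timings as a fraction of the gait cycle. These are
# rounded, textbook-style values (assumed for illustration): exact timings
# vary with subject and walking speed.
GAIT_EVENTS = [
    ("IC",  0.00),  # initial contact
    ("WA",  0.10),  # weight acceptance
    ("MSt", 0.30),  # mid stance
    ("TSt", 0.50),  # terminal stance
    ("ISw", 0.60),  # initial swing
    ("MSw", 0.75),  # mid swing
    ("TSw", 0.90),  # terminal swing
]

def current_event(cycle_fraction: float) -> str:
    """Return the most recent gait event for a normalised cycle position in [0, 1)."""
    label = GAIT_EVENTS[0][0]
    for name, start in GAIT_EVENTS:
        if cycle_fraction >= start:
            label = name
    return label

print(current_event(0.55))  # -> TSt
```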
Only a well coordinated <strong>and</strong> synchronised movement in all joints can assure dynamic stability, reliable foot<br />
contact/positioning, proper weight transfer, propulsion <strong>and</strong> sufficient foot clearance. Therefore, normal<br />
bipedal locomotion as imposed by <strong>CORBYS</strong> gait rehabilitation system must satisfy the following four<br />
requirements:<br />
- antigravity support of the body (exoskeleton)<br />
- stepping movements<br />
o reliable foot contact/positioning – flexion/extension in ankle, knee <strong>and</strong> hip (exoskeleton)<br />
o WEIGHT TRANSFER <strong>and</strong> FOOT CLEARANCE – combined <strong>and</strong> synchronised pelvic rotation/tilt<br />
<strong>and</strong> ankle, knee <strong>and</strong> hip flexion/extension as well as hip abduction/adduction <strong>and</strong> internal/external<br />
rotation (exoskeleton)<br />
o forward progression – ankle, knee <strong>and</strong> hip flexion/extension (exoskeleton)<br />
- adequate degree of dynamic equilibrium (moving platform)<br />
- propulsion (exoskeleton)<br />
All four requirements need to be coordinated simultaneously <strong>and</strong> continuously. A viable approach would be<br />
to first have the <strong>CORBYS</strong> system in the »follow me« mode where the powered orthosis <strong>and</strong> moving platform<br />
would minimise the interface forces between the walking subject <strong>and</strong> the <strong>CORBYS</strong> system. Alternatively, the<br />
<strong>CORBYS</strong> system could impose a normal walking pattern, <strong>and</strong> the needed joint moments could be<br />
estimated from the current/torque supplied by the exoskeleton motors.<br />
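The »follow me« mode described above, in which interface forces are minimised, is essentially an admittance-control behaviour; a minimal single-axis sketch is given below. The mass and damping gains are invented for illustration, and a real controller would additionally need force filtering and safety limits:<br />

```python
# One Euler step of the virtual admittance  m*dv/dt + b*v = F_interface.
# The commanded velocity yields in the direction of the measured human-robot
# interface force, driving steady-state interface forces toward zero motion
# resistance (v settles at F/b).

def admittance_step(v_prev: float, interface_force: float,
                    mass: float = 10.0, damping: float = 25.0,
                    dt: float = 0.005) -> float:
    """Return the next commanded velocity for one control step of dt seconds."""
    dv = (interface_force - damping * v_prev) / mass
    return v_prev + dv * dt

v = 0.0
for _ in range(2000):       # 10 s of a constant 20 N push from the subject
    v = admittance_step(v, 20.0)
print(round(v, 3))  # settles near F/b = 20/25 = 0.8 m/s
```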
4.1.2.2 Pathological walking<br />
Pathological walking may be caused by a long list of diseases; however, the abnormalities they impose on the<br />
biomechanics of walking fall into four functional categories:<br />
- deformity (muscle <strong>and</strong> joint contractures not allowing for sufficient passive mobility)<br />
- muscle weakness (insufficient muscle strength)<br />
- impaired control (sensory loss, spasticity: selective control is impaired, primitive locomotor patterns<br />
emerge: mass flexion <strong>and</strong> mass extension in all three joints, muscles change phasing, proprioception<br />
is impaired)<br />
- pain<br />
The <strong>CORBYS</strong> gait rehabilitation system should be concerned only with impaired control <strong>and</strong> muscle<br />
weakness. The other impairment categories should be treated in other ways (physical therapy, casting <strong>and</strong><br />
medical treatment). The general approach in walking rehabilitation should be either to train a “normal”<br />
walking pattern or, if muscle weakness or muscle contracture does not allow a normal pattern, to permit a<br />
compensatory, “optimised” movement pattern (or provide adequate orthotic aids) in order to<br />
achieve functional walking.<br />
The following section addresses the four most frequently occurring gait deviations due to impaired control,<br />
describes which principal walking mechanisms are affected <strong>and</strong> how, <strong>and</strong> demonstrates how the <strong>CORBYS</strong><br />
system should behave to restore a normal walking pattern.<br />
4.1.2.2.1 Toe walking<br />
Figure 2: Toe walking in sagittal plane<br />
Principal characteristics of toe walking are:<br />
- initial contact with the forefoot (metatarsal joints)<br />
- pronounced plantar flexion / dorsiflexion deficit in the ankle throughout the gait cycle<br />
- the heel touches the ground in mid stance or may even remain above the ground throughout the stance<br />
phase; likewise, in terminal stance a premature heel rise is common<br />
- increased knee <strong>and</strong> hip flexion throughout the gait cycle<br />
They impose:<br />
- insecure initial contact<br />
- propulsion is achieved predominantly by rapid hip extension instead of forceful extension in all joints<br />
The <strong>CORBYS</strong> system should restore:<br />
- adequate ankle dorsiflexion prior to contact for proper heel contact <strong>and</strong> weight transfer (this may<br />
normalise the movement in other joints as well)<br />
- forceful extension in ankle, knee <strong>and</strong> hip during push-off to assure normal propulsion<br />
- adequate ankle dorsiflexion <strong>and</strong> knee flexion in swing phase for normal foot clearance<br />
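A rule-of-thumb sketch of how the toe-walking characteristics above could be detected from joint kinematics at initial contact is shown below; the thresholds are illustrative assumptions, not clinically validated values:<br />

```python
# Hypothetical toe-walking detector: initial contact on the forefoot shows up
# as plantar flexion (negative dorsiflexion) at the instant of contact, and
# the pattern is accompanied by increased knee flexion. The -5 and 15 degree
# thresholds are invented for this sketch.

def looks_like_toe_walking(ankle_dorsiflexion_deg: float,
                           knee_flexion_deg: float) -> bool:
    """Angles sampled at initial contact; dorsiflexion positive, plantar flexion negative."""
    plantar_flexed_at_contact = ankle_dorsiflexion_deg < -5.0
    increased_knee_flexion = knee_flexion_deg > 15.0
    return plantar_flexed_at_contact and increased_knee_flexion

print(looks_like_toe_walking(-20.0, 30.0))  # -> True
print(looks_like_toe_walking(5.0, 5.0))     # -> False
```

Analogous rules could be written for the crouch, stiff-knee and circumduction deviations described in the following subsections.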
4.1.2.2.2 Crouch gait<br />
Figure 3: Crouch gait in sagittal plane<br />
Principal characteristics of crouch walking are:<br />
- plantigrade locomotion (podials <strong>and</strong> metatarsals flat on the ground)<br />
- pronounced knee flexion / knee extension deficit throughout the gait cycle<br />
- pronounced ankle dorsiflexion throughout the gait cycle<br />
- increased hip flexion<br />
They impose:<br />
- propulsion is achieved predominantly by rapid hip extension instead of forceful extension in all joints<br />
- a low C of G due to pronounced knee flexion in the stance phase<br />
The <strong>CORBYS</strong> system should restore:<br />
- normal ankle plantar flexion <strong>and</strong> knee extension prior to contact for proper heel contact <strong>and</strong> weight<br />
transfer<br />
- normal knee extension to raise the C of G (combined with the pelvic vertical translational DOF)<br />
- forceful extension in ankle, knee <strong>and</strong> hip during push-off to assure normal propulsion<br />
4.1.2.2.3 Stiff knee<br />
Figure 4: Stiff knee gait in sagittal plane (top) <strong>and</strong> in frontal plane (bottom)<br />
Principal characteristics of stiff knee walking are:<br />
- inadequate knee flexion throughout the gait cycle<br />
- excessive dorsiflexion<br />
- inadequate knee flexion stimulates excessive contralateral trunk tilt <strong>and</strong> pelvic vertical movement to<br />
assure sufficient foot clearance<br />
They impose:<br />
- insecure <strong>and</strong> unstable weight transfer due to constant knee extension<br />
- propulsion is achieved predominantly by rapid hip extension instead of forceful extension in all joints<br />
- excessive pelvic vertical movement in the swing phase compensates for the lacking knee flexion to assure<br />
sufficient foot clearance<br />
The <strong>CORBYS</strong> system should restore:<br />
- normal knee flexion to allow secure <strong>and</strong> stable weight transfer at initial contact <strong>and</strong> midstance<br />
- forceful extension in ankle, knee <strong>and</strong> hip during push-off to assure normal propulsion<br />
- normal knee flexion to assure sufficient foot clearance (excessive pelvic vertical movement will no<br />
longer be necessary)<br />
4.1.2.2.4 Stiff knee with circumduction<br />
Figure 5: Stiff knee gait with circumduction in sagittal plane (top) <strong>and</strong> in frontal plane (bottom)<br />
Principal characteristics of stiff knee walking with circumduction are:<br />
- inadequate knee flexion throughout the gait cycle<br />
- excessive dorsiflexion<br />
- inadequate knee flexion stimulates contralateral trunk tilt <strong>and</strong> pelvic vertical movement to assure<br />
sufficient foot clearance<br />
They impose:<br />
- insecure <strong>and</strong> unstable weight transfer due to constant knee extension<br />
- propulsion is achieved predominantly by rapid hip extension instead of forceful extension in all joints<br />
- combined excessive pelvic vertical movement <strong>and</strong> hip adduction in the swing phase compensates for the<br />
lacking knee flexion to assure sufficient foot clearance<br />
The <strong>CORBYS</strong> system should restore:<br />
- normal knee flexion to allow secure <strong>and</strong> stable weight transfer at initial contact <strong>and</strong> midstance<br />
- forceful extension in ankle, knee <strong>and</strong> hip during push-off to assure normal propulsion<br />
- normal knee flexion to assure sufficient foot clearance (excessive pelvic vertical movement <strong>and</strong> hip<br />
adduction will no longer be necessary)<br />
4.2 Environment of the demonstrator I: gait rehabilitation user-robot<br />
interaction<br />
The <strong>CORBYS</strong> demonstrator should be an institution-based gait rehabilitation training instrument. It<br />
should be an intelligent gait rehabilitation device: a mobile gait trainer. Patients could be treated both as<br />
inpatients <strong>and</strong> outpatients. The system could be integrated into the treatment workflow of a rehabilitation<br />
department, which is illustrated in Figure 6.<br />
The gait rehabilitation robotic system could be integrated in this workflow in the following way:<br />
- The patient’s gait is analysed on admission on a treadmill, with sensors measuring kinematics, forces<br />
<strong>and</strong> resistance in the hip, knee <strong>and</strong> ankle joints, as well as ground reaction forces via foot plates. The <strong>CORBYS</strong><br />
professional user (therapist) optimises the patient’s gait with his/her manual techniques, while the system<br />
“learns” the optimal gait applicable to this individual patient (= “teaching mode”). The data relevant<br />
to the support given by the robotic system are transferred to the system. The sensors used as a diagnostic<br />
tool on the treadmill will not be the same as those in the robotic system.<br />
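The “teaching mode” described above can be sketched as resampling several therapist-assisted gait cycles to a common length and averaging them into a per-patient reference trajectory. This is a single-joint, pure-Python illustration; the helper name and cycle data are invented:<br />

```python
def average_cycles(cycles, n_points=5):
    """Resample each recorded cycle to n_points by nearest-index lookup, then
    average the resampled cycles point-wise into one reference trajectory."""
    resampled = []
    for cycle in cycles:
        idx = [round(i * (len(cycle) - 1) / (n_points - 1)) for i in range(n_points)]
        resampled.append([cycle[j] for j in idx])
    return [sum(col) / len(col) for col in zip(*resampled)]

knee_cycles = [
    [0, 10, 20, 10, 0],   # therapist-assisted cycle 1 (knee flexion, degrees)
    [0, 12, 22, 12, 2],   # therapist-assisted cycle 2
]
print(average_cycles(knee_cycles))  # -> [0.0, 11.0, 21.0, 11.0, 1.0]
```

A real implementation would interpolate rather than use nearest-index lookup, and would time-normalise cycles by detected gait events first.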
[Figure 6 depicts the usual workflow in a rehabilitation unit as a cycle: clinical, informal, formal <strong>and</strong> apparative diagnostic procedures; specific <strong>and</strong> goal-orientated treatment; reassessment of achieved goals <strong>and</strong> setting of new goals; <strong>and</strong> (re)integration into the patient’s environment.]<br />
Figure 6: usual workflow in a rehabilitation unit<br />
- The patient performs overground walking with the system on his own, but under the supervision of a<br />
therapist. The system optimises his gait <strong>and</strong> measures the effort, energy <strong>and</strong> result. This increases<br />
the treatment time without a therapist <strong>and</strong> increases the time for reciprocal repetitive training.<br />
- The therapist evaluates the parameters measured by the system <strong>and</strong> re-evaluates the gait on the<br />
treadmill, again optimising the gait <strong>and</strong> thereby changing the walking parameters of the system.<br />
Alternatively, the system itself corrects the gait with its cognitive structure by approximating the patient’s<br />
gait to a normal gait pattern or to the therapist-assisted, learned gait pattern.<br />
As a result, the gait rehabilitation robot system could allow more independence from the need to have<br />
therapist supervision at all times. The therapist may supervise more than one patient, each with their own robot<br />
system, at the same time. The patients will have more repetitive training time, which leads to a more effective<br />
rehabilitation outcome. However, the role of the therapist cannot be completely replaced by the technology.<br />
The gait rehabilitation robot system should interact with the patient at the following levels:
Level 1: co-operation of the powered orthosis and mobile platform with the user at the level of basic overground walking: walking along a straight line at a constant speed. This level would cover the "gentle" alteration of the existing walking patterns: symmetry, weight-bearing capabilities, "normalisation" of kinematic and kinetic patterns in all three joints of the lower extremity in all three planes of motion, and balance control (COP vs. COM control). This level is not concerned with "cognitive" issues as this should be
automated movement. This level, however, should allow incorporation of "targeted walking" input that can be determined by a CORBYS professional user. For example, first the existing patterns such as toe walking, crouch, stiff-knee, knee hyperextension, Trendelenburg gait, etc. would be identified, and appropriate changes in the desired kinematic and kinetic patterns would be made. The iterative nature of walking should be exploited in the design of a suitable control of the powered orthosis and mobile platform.
An important aspect of the Level 1 operating mode would be the training of isolated components of the skills required in walking, such as proper weight transfer (starting from appropriate placement of the swing leg onto the ground, followed by weight transfer during double stance, and concluded by adequate control of the trunk in the frontal plane during single stance).
Level 2: starting/stopping the walking, changing the speed of walking, turning during walking (change of walking direction). This level would be heavily concerned with cognitive interaction between man and machine. A very motivating treatment modality would be avoiding obstacles on a route that represents a purposeful journey for the patient (going from the therapy session to their room, for example).
Level 3: the pathologic gait pattern of the patient is corrected by the gait rehabilitation robot system itself in an autonomous mode, by comparing the measured joint angles, torques, velocities, etc. with a normal gait pattern or with the gait pattern assisted/corrected by the CORBYS professional user, and correcting false movements of the patient.
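As an illustration only, the Level 3 comparison of a measured gait cycle against a reference pattern might be sketched as follows. The function name, the per-cycle gain and the representation of a cycle as a resampled joint-angle sequence are hypothetical assumptions for this sketch, not part of the CORBYS specification:

```python
def gait_correction(measured, reference, gain=0.3):
    """Compare one measured joint-angle cycle (degrees, resampled to the
    same length as the reference) with a normal or therapist-taught
    reference cycle. Returns a damped correction profile for the next
    cycle and the RMS deviation, so changes are applied inter-cycle
    rather than abruptly within a cycle."""
    errors = [r - m for m, r in zip(measured, reference)]
    rms = (sum(e * e for e in errors) / len(errors)) ** 0.5
    # Apply only a fraction of the error per cycle ("gentle" alteration)
    correction = [gain * e for e in errors]
    return correction, rms

# Toy example: knee angle over one cycle sampled at 5 points
reference = [5.0, 40.0, 60.0, 30.0, 5.0]
measured = [5.0, 30.0, 45.0, 25.0, 5.0]
correction, rms = gait_correction(measured, reference)
```

The damping gain reflects the inter-cycle (rather than intra-cycle) correction philosophy described above; a real controller would of course act on torques and velocities as well as angles.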
4.3 <strong>Requirements</strong> for demonstrator I<br />
REQUIREMENTS FOR THE FIRST DEMONSTRATOR
(Each numbered group lists the mandatory requirements first, followed by the desirable and optional ones.)
1. Inclusion criteria to use the CORBYS gait rehabilitation system
Mandatory:
All symptoms leading to disturbed gait:
- weakness of the legs
- increased tonus (spasticity) of the legs
- lack of coordination of the legs
- involuntary movement of the legs
- sensory disorders of the legs: disturbed afferences
- contractures < 30° in ankle, knee and hip
Desirable, optional:
- weakness of the legs and trunk
- lack of coordination of the legs and trunk
- involuntary movement of the legs and trunk
- in these cases, the system must also support the trunk, which is unstable
2. Exclusion criteria for use of the CORBYS gait rehabilitation system
Mandatory:
- blindness
- cognitive impairments precluding use of the system
- no arm or hand function
- massive, suddenly erupting dynamic spasticity; ballistic movement disorder
- grand mal epilepsy with frequent seizures
3. Informal diagnostic measurements by the robot system
Mandatory:
- degree of passive movement in hip, knee and ankle joint (analysis for fixed contractures)
- resistance to a passive movement (analysis for dynamic contractures)
Desirable, optional:
- analysis of balance problems
- measurement of fatigue/tiredness: measure cycle time; monitoring of co-ordination will find mistakes and abnormalities and thus allow tiredness to be identified, as a tired patient will make more mistakes, which can be detected by co-ordination feedback
Within a gait cycle the following parameters should be measured (mandatory):
- torque (= voluntary muscle strength) in hip, knee and ankle joints, both sides
- velocity of movements in hip (sagittal) and knee, both sides
- changes of joint angles over time within the gait cycle (= curves of the gait cycle: hip sagittal, knee and ankle joints)
- orchestration of coordinated movements in all joints, as measured by the changes of joint angles over time
- "foot landing": which part of the foot makes first contact with the ground (should be the heel)
These measurements must be compared with a normal gait pattern.
Evaluation measurements:
- how much torque/energy was supplied by the system, both to support the weak muscle and to hamper spasticity
- the patient's perspective, expectations, experience and opinion regarding the demonstrator should be documented
4. Standardised diagnostic measurements
Mandatory:
- step length
- timed walk tests: steps per time, 6-minute test, 10 m test
5. General requirements for the system design
Mandatory:
- adaptation to the patient's height: 160-200 cm
- adaptation to the patient's weight: 50-100 kg
- infinitely variable body weight release to decrease ground reaction force, provided through the frame of the machine rather than overhead support (harness). In addition, the powered orthosis can provide body weight support as well. The ability of the mobile platform to hold the patient could contribute to the design of the orthosis.
- max. width: 100 cm
- Vmax: 3 km/h
- walking curves
- degrees of freedom (max. angles):
  - Pelvis: (up, down) translational, actuated; (forward, backward) translational, actuated; (left, right) translational, actuated; frontal, rotational, passive (actuated indirectly through hip motion). As the exoskeleton will be attached to the mobile platform, active DoF on the mobile platform induce pelvis movements. Design considerations for the pelvis are considerably challenging: since the orthosis is attached to the moving platform, the system is redundant, which implies that more than one design solution is possible that would meet the necessary movement requirements. The above describes one design possibility.
  - Hip: 3: sagittal (40°) and frontal (10°) actuated, (desired) horizontal (10°) passive
  - Knee: 1: sagittal (70°) actuated
  - Ankle: 2: sagittal (90°) actuated, horizontal passive
- sensors:
  - measurement of angles in all actuated joints
  - measurement of torques applied by the system in all actuated joints
  - measurement of velocity of movement in all actuated joints
  - measurement of the patient's resistance against a movement generated by the system
- active or passive compliance to deal with spasticity:
  - the system must slow down movement in case of useful but increased movement, and in case of spasticity
Desirable, optional:
- adaptation to the patient's weight: 40-125 kg
- actuator in both ankle joints
- safety: obstacle identification, including stairs up and down; collision avoidance
- the system should identify the patient's intention via EEG, speech, eye tracking or weight shift, etc. for initiation of gait, steering, turning, stopping, etc.
- the system should evaluate whether the identified intention is meaningful
- system utilisation without continuous supervision of a therapist
- turning on the spot
- autonomous mode: the gait rehabilitation robot system optimises the pathologic gait pattern by itself, without the impact of the therapist, by comparing the patient's gait pattern with normalised gait parameters including angle, torque and velocity in all actuated DoF in hip, knee and ankle
- modular configuration of the system: frame and exoskeleton can be separated; the exoskeleton can be split: the less severely affected a patient is, the fewer modules will be needed (frame, actuators, sensors)
- handling: one therapist must set up the system within 10 minutes
- handling: the patient must set up the system by himself
- utilisation in everyday life, at home
Mandatory (continued):
- the system must not give way in case of little spasticity (isometric reaction), but must give way in case of massive spasticity (isotonic reaction)
- the degree of giving way in case of massive spasticity (in % of normal strength) must be adjustable individually, and joint by joint, by the therapist
- safety: the patient must not fall with the system or sink down within the system
- handling: the system must be easily adjustable by the therapist in case of pressure ulcers
6. Treatment goals
Mandatory:
- indoor walking on plane and even surfaces
- outdoor walking
- economic walking
7. Treatment methods
Mandatory:
- the system learns the optimal gait automatically by comparing the analysed gait pattern of the patient with the normal gait pattern via feedback loops
- the system learns the optimal gait for the individual patient during a training session with the therapist on the treadmill (= "teach-in"), for both legs independently
- the system repeats the therapist's treatment manoeuvres
Desirable, optional:
- motivation of the patient is important and hence the use of reward-based positive psychology is beneficial. In the CORBYS demonstrator, envisaged to be a moving platform, visual motivational feedback is foreseen to be distracting and hence not advisable; however, audio motivational feedback may be used. CORBYS professionals (physiotherapists) may encourage the patients by using pep talk and raising the volume of their voice (verbal encouragement).
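The fatigue/tiredness measurement proposed under item 3 above (a tired patient makes more co-ordination mistakes) could be sketched, purely illustratively, as a comparison of the recent mistake rate against a baseline from the start of the session. The window size and the threshold factor are hypothetical assumptions, not values from this specification:

```python
def tiredness_detected(mistakes_per_cycle, window=5, factor=1.5):
    """Illustrative fatigue check: compare the co-ordination mistakes per
    gait cycle in the most recent window against the baseline from the
    first window of the session. A sustained increase beyond the given
    factor is taken as a sign of tiredness."""
    if len(mistakes_per_cycle) < 2 * window:
        return False                      # not enough cycles observed yet
    baseline = sum(mistakes_per_cycle[:window]) / window
    recent = sum(mistakes_per_cycle[-window:]) / window
    return baseline > 0 and recent > factor * baseline
```

A detected increase would then feed the co-ordination feedback loop, e.g. prompting the system to slow down or to alert the therapist.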
4.3.1 Cognitive capability
The CORBYS demonstrator should not just be a rehabilitation system but also a system with cognition. It should learn the areas in the gait cycle which need improvement and present these to the CORBYS professional user (therapist). There are two main aspects in which the CORBYS cognitive capability can contribute/innovate:
• the cyclical nature of gait (propelling force: push-off, pull-off; and efficient weight transfer)
• movement planning (initiation, stopping, turning)
Regarding the former cognitive aspect, CORBYS should support all three planes, i.e. sagittal, frontal and transversal. It should learn over many gait cycles what the problem is, and subsequently introduce corrective measures not intra-cycle but inter-cycle, or inter-session. It can also support the maintenance of balance (dynamic walking). In contrast to state-of-the-art systems where therapists adjust gait parameters such as gait velocity, CORBYS can control this adjustment itself: for example, if the patient is tired, it can adjust the speed to suit the performance. Another relevant example is spasticity: unlike state-of-the-art systems such as the Lokomat, which can be programmed to switch off the motors upon reaching a threshold, the CORBYS demonstrator should be able, due to its cognitive capabilities, to adjust parameters automatically to control or dampen spasticity using mechanical actuation. In general, the CORBYS gait rehabilitation system should be able to identify the support needed by the patient, provide it at that level, and then gradually lessen the support. Dampening of spasticity should also be supported using motorised stimulation (not Functional Electrical Stimulation).
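A minimal sketch of such inter-cycle adaptation, assuming hypothetical fatigue and spasticity scores in [0, 1] and illustrative thresholds and step sizes (none of these names or values come from the CORBYS specification):

```python
def adapt_parameters(speed_kmh, damping, fatigue_score, spasticity_level,
                     fatigue_limit=0.7, spasticity_limit=0.5):
    """Illustrative inter-cycle adaptation rule: slow down gradually when
    the patient tires instead of keeping a fixed velocity, and raise
    mechanical damping to dampen spasticity rather than switching the
    motors off at a threshold. When spasticity subsides, the support is
    gradually lessened."""
    if fatigue_score > fatigue_limit:
        speed_kmh = max(0.5, speed_kmh * 0.9)   # gradual slow-down, not a stop
    if spasticity_level > spasticity_limit:
        damping = min(1.0, damping + 0.1)       # damp the movement mechanically
    else:
        damping = max(0.0, damping - 0.05)      # gradually withdraw support
    return speed_kmh, damping
```

Called once per gait cycle, this keeps parameter changes smooth and inter-cycle, in line with the correction philosophy described above.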
In the second cognitive aspect, CORBYS should assist the patient in movement planning during overground walking with the CORBYS demonstrator I. CORBYS can assist the patient and provide them with a purposeful goal to move around. It can help the patient with tasks such as planning the movement, execution of
the movement, making a turn, planning the route, etc.
Regarding the "cognitive" aspects of the project, one can think of having cognitive control, situation awareness, etc. at both levels of control: a lower level concerned with the automated movement of the lower extremities (here "cognitive" control of the weight-shifting manoeuvre could be very relevant) and a higher level concerned with the purpose of movement: going to a designated place while making all necessary turns and avoiding obstacles. In this way, both categories of patients, more impaired and less impaired, may be considered. With the more impaired, the emphasis will be on the "lower-level" aspects: proper weight bearing and weight-shifting manoeuvres as well as appropriate automated movement of the legs, while with the less impaired patients the movement planning and trajectory execution would be the challenge.
4.3.2 Ethical compliance
- Informed consent will be obtained from all end-users (patients).
- The possibility for the patient to quit experiments at any time without any disadvantage for their further treatment will be ensured.
- Approval of the local ethics committees (in Bremen, Germany and Ljubljana, Slovenia) will be obtained.
4.4 Requirements for demonstrator II
The robotic system which will be used as the second demonstrator has been developed by UB within the German national project RecoRob (Hericks et al., 2011). It consists of a 7-DoF (degrees of freedom) robot arm mounted on a mobile platform. The robot is equipped with sensors for environment perception as well as with sensors for platform navigation and robot arm control. In this project, two experimental setups will be designed in which the robot works in a team with a human to investigate contaminated/hazardous environments. Both experimental scenarios of the second demonstrator will be designed to test different functionalities of the CORBYS cognitive control architecture, with the emphasis on alternating the human-robot lead-taking in exploratory scenarios. In both scenarios, different tests will be carried out: partial testing of some aspects of the cognitive architecture as well as full testing of complete functionalities. An example of partial testing is the demonstration of goal self-generation in autonomous object manipulation. In full testing, the robot is considered as a partner of an external agent (e.g. a firefighter) in a cooperative activity.
In the first scenario, named augmented teleoperation, the robot is teleoperated by the human: it is sent into the contaminated area to collect samples used to determine contamination levels. The robot's primary task is to follow the operator's instructions for navigation and manipulation. However, if communication fails, the robot has to be able to take the initiative in completing the task, in spite of the loss of human command. Also, if an unexpected obstacle is sensed, the robot has to be capable of reasoning to "veto" dangerous human commands to avoid running into obstacles. In this case, in order to complete the task, the robot has to take over the goal-setting initiative and to self-generate a goal. In the second experimental scenario, the robot is a co-worker in the investigation of a hazardous environment. The robot has the role of a transportation robot, as it helps the human in carrying the containers with the collected samples. At the beginning of the investigation mission, the robot follows the human partner, keeping a constant distance between them. Based on sensory information (e.g. from vision sensors), the robot analyses the human's behaviour and deduces the human's goal. If there is an unexpected change in human behaviour, such as a change in the direction of movement so that the human is approaching the robot, the robot has to change its behaviour so as to stop and allow the human to place the containers with the collected samples onto the robot.
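The follow/stop behaviour of the co-worker scenario can be sketched as a simple decision rule. The function name, the distances and the command strings are illustrative assumptions for this sketch, not part of the RecoRob or CORBYS interfaces:

```python
def follower_command(distance_m, approaching, target_m=1.5, stop_m=1.0):
    """Illustrative decision rule for the transportation-robot role:
    keep a constant following distance behind the human, but stop and
    wait when the human turns and approaches the robot (e.g. to load
    sample containers) or when the distance becomes too small."""
    if approaching or distance_m < stop_m:
        return "stop"      # let the human place the containers
    if distance_m > target_m:
        return "follow"    # close the gap to the target distance
    return "hold"          # already at the desired distance
```

In the full system this decision would be driven by the deduced human goal rather than by a raw "approaching" flag, but the state transitions would be of this shape.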
In order to enable the evaluation of the CORBYS cognitive control architecture on the second demonstrator, the following requirements have to be fulfilled.
REQUIREMENTS FOR THE SECOND DEMONSTRATOR
(All of the following are mandatory.)
1. Integration (wrapping) of modules of the existing control architecture (e.g. the robot arm control module) of the second demonstrator into the CORBYS control architecture: high-level CORBYS cognitive control modules are to be evaluated on the existing robotic system for examining hazardous environments. A detailed strategy will be defined later; it depends on the requirements of the CORBYS control (software) architecture.
2. Analysis of the usability of the existing sensors for the CORBYS demonstration: existing sensors (e.g. vision sensors) are to be analysed in the CORBYS context. It is to be tested whether the existing sensors can provide the necessary information for the cognitive modules. Should additional sensors be integrated? Should the functionalities of the vision module be extended?
3. Situation Awareness: Situation Awareness input to semi-autonomous (autonomous) robot control.
4. Identification and anticipation of the human co-worker's purposeful behaviour: cognitive input to semi-autonomous (autonomous) robot control for taking over the initiative when needed.
4.5 Requirements Engineering Methodology (UI-REF) Salient Features
Requirements Elicitation and Prioritisation
UI-REF (Badii 2008) incorporates an integrated set of models, techniques and tools to allow the system designers to negotiate, articulate and prioritise the set of use-contexts and their related requirements, evaluation context criteria and priorities in a way that overcomes the problems of needs articulation and memory bias on the part of the users. To arrive at a prioritised set of requirements, it will be necessary at least to:
i) agree and set out the list of domain prototypical entities or actors as stakeholders;
ii) define the characteristics of the prototypical entities involved in the typical usage arena envisaged for the target system (e.g. actors, devices, system of systems);
iii) establish the generalisation ontology of usage-contexts, each related to their distinct features as needed by the relevant user (sub)groups in their target prototypical scenarios;
iv) define the key differentiators of usage contexts (context switches) and the prototypical actors' needs hierarchies in each of the identified prototypical target context-scenarios;
v) define the prototypical workflows and state diagrams, thus establishing the domain user/practitioner's (sub)-goal and (sub)-task hierarchies;
vi) deduce the user's needs priorities in terms of ICT-enabled features to facilitate the user's task fulfilment in each situated context-scenario of the application domain as identified and demarcated (situated-usage-class) under iii) and iv) above, respectively.
User-intimate approaches often yield a vast amount of raw data and need appropriate abstraction and context layering to reflect the natural partitions within the domain. This is so as to arrive at actionable insight as to the most deeply-valued needs for most users belonging to each of the target usage-context types within the
spectrum of usage-context classes to be addressed by the target system.
Specifically, in identifying all domain objects and actors and delineating the boundaries of roles and responsibility spaces and the rights/privileges for each actor and/or device in the domain, we are essentially negotiating a phenomenological analysis of the domain with the users. By further specifying domain knowledge, taxonomies and a tentative ontology for the domain and negotiating this with the various user communities, it is possible to conclude an appropriately partitioned ontology of the world of the users for whom the system is intended. Such a generalisation ontology serves as a values-expression language of the most deeply valued needs for various usage-contexts.
Building on this increasingly deeper understanding paves the way for formalising the domain knowledge structure, including tacit, causal, process-related and structural knowledge, thus adequately specifying the domain knowledge structure. This comprises a most important element of the experientially-derived tactical/strategic problem-solving knowledge. The domain knowledge is clearly the provenance of the various user-classes (as distinguished by their usage-contexts) who are to use the system in pursuit of their everyday practice. It is these communities of practice who are expected to make their domain knowledge available to the requirements engineers and other stakeholders, so as to promote deeper understanding of their domain requirements, including end-to-end interoperability and meta-operability across all implicated entities, including legacy norms, processes, policies, etc. In eliciting the domain knowledge we can establish the goal-structures knowledge for the application domain.
UI-REF advocates that the requirements are classified into the following descending-order priority categories for implementation; these range from mandatory, as the highest priority class, through desirable, as the medium priority class, to optional, the lowest priority class. Prioritisation of requirements is deduced from careful analysis of user-stated priorities, which can also be aided by consideration of the Purpose-Hurry-Frequency criteria set regarding the degrees of intimacy and immediacy of the required services and the patterns of interactive online support required to facilitate the user's life/work-style patterns.
The above priority levels are formally elaborated as a spectrum of target-platform core affordances, plus additional ones, as follows:
Mandatory – These are the core design features which are perceived by the majority of a user group as offering the most needed added-value(s) and that can be accommodated by the target system. This category is expected to include the selected functional and non-functional requirements (look-and-feel interfaces) that are the common core of all the usage-contexts within the target usage spectrum, including features supporting scalability, modularity and open design, so as to enable the incremental evolution of the system to offer further features satisfying future requirements and customisation as appropriate.
Desirable – These are desirable, but not highest-priority, design features: candidates to be accommodated as far as possible within the resources and technological constraints appertaining to the lifecycle of the project.
Optional – These are features said by some users to be of the lowest priority and/or that are in any case highly contextualised to particular (sub)-sectors of the user group, as such falling into the less common, and/or possibly more controversial and conflict-prone, category.
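A minimal sketch of how aggregated user-stated priorities might be resolved into the three UI-REF categories; the vote encoding ('M', 'D', 'O') and the majority threshold are hypothetical assumptions for illustration, not part of UI-REF itself:

```python
from collections import Counter

def classify(votes, mandatory_share=0.5):
    """Resolve one requirement's user-stated priorities ('M', 'D', 'O')
    into a category: mandatory if valued by the majority of the user
    group, otherwise desirable or optional by plurality of the
    remaining votes."""
    counts = Counter(votes)  # missing keys count as 0
    if counts["M"] / len(votes) > mandatory_share:
        return "mandatory"
    return "desirable" if counts["D"] >= counts["O"] else "optional"
```

In practice UI-REF resolves contested classifications by negotiation with the stakeholders rather than by a mechanical vote; this sketch only models the initial majority-based pass.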
Once the raw user-stated requirements are aggregated through all elicitation instruments, channels and modalities, they have to be transcribed, tabulated and cross-checked to prune duplications and delete clearly out-of-scope requirements. UI-REF promotes a negotiation-based resolution of requirements into the three categories to reflect the priorities of the majority of users.
Next, additional checks have to be done to flag up, for negotiation with the stakeholders, the possible deletions, demotions, promotions and new additions of specific requirements, to be consensually resolved into the set of mandatory, desirable and optional requirements for a first prototype. The need for the following refinement steps arises as a natural consequence of the fact that users, in stating their requirements, cannot be expected either to be exhaustive or to factor in the technology, market and practice constraints (SoA, SoM, SoP), of whose trends they are not necessarily expected to be fully aware. Further, users are expected to articulate their own perceived requirements, which may or may not be complete, and which may be incompatible with other users' requirements or project resources, or in conflict with the technological and/or market imperatives and trends.
Figure 7: Prioritisation process in UI-REF
Accordingly, it will next be necessary to factor in the influence of the push-pull forces and their dynamics over the near to medium term, to ensure that the target system to be delivered will represent the highest rate of return on investment for all stakeholders, and thus have the highest chance of take-up and the widest diffusion, usability, and technology-policy-process interoperability and convergence potential, given the current and emergent technological and practice environment that it will have to integrate with; i.e. it will be as scalable and sustainable as possible. Such pull and push factors, representing constraints and affordances invoked as requirements filters and augmenters, can best be understood by performing a SoA, SoP and SoM analysis respectively, where the State-of-X is represented by the latest update on the state of the current modus operandi, gaps, and available enabling and emergent innovations from the viewpoint of X.
Prioritisation in UI-REF is performed at several levels, including: stakeholder; intimate or non-intimate usage-context type; specific usage-contexts within each type; usage-context implementation sequencing; and usability-sensitive evaluation of system functionalities, whose usability assessment is ranked and weighted so as to reflect their priorities in the order of the most-deeply-valued needs of the user, as prioritised per the UI-REF requirements engineering process. Such prioritisation processes will use a variety of situated techniques as appropriate to best suit particular users and their usage-contexts, for example Nirvana, Ablation and Noah's Arc, virtual user, nested videos, Frequency-Purpose-Hurry (FPH)-based analysis of user-specified needs, and thus the resulting UI-REF Effects-Side-Effects-Affects multi-dimensional impact prediction and evaluation matrices.
Total requirements: 345 (Mandatory: 309, Desirable: 22, Optional: 14)
In total, 345 requirements have been gathered, of which 309 are classed as mandatory, 22 as desirable and 14 as optional. All the requirements are detailed in chapters 5-9, i.e. mechatronic control systems, human control system, Robohumatic systems, system integration and functional testing for the CORBYS solutions, and evaluation.
Using technology filters and specific usage-context filters for demonstrators I and II, 44 main requirements for the CORBYS project have been selected, falling under three main categories: cognitive systems, demonstrator I specific, and demonstrator II specific.
4.5.1 Cognitive systems<br />
1. HSS8: Sensor output related to identification of psycho-physiological states<br />
2. HSS9: Sensor output related to identification of intentional states<br />
3. SAWR16: Current context of SAWBB<br />
4. RRS6: Identification of reflexive capability to be enabled by the FPGA sub-system<br />
5. RRS7: Identification of ‘Navigation’ related reflexive behaviour enabled by the FPGA sub-system<br />
6. RRS8: Identification of ‘Obstacle avoidance’ related reflexive behaviour enabled by the FPGA subsystem<br />
7. RRS9: Identification of ‘Safety’ related reflexive behaviour enabled by the FPGA sub-system<br />
8. RSM8: BCI Architecture<br />
9. FAI12: User interface<br />
10. FAI13: User interface for patients<br />
11. FAI14: User interface for therapist<br />
12. FAI15: User interface for developers<br />
13. DMI2: Motor intention study<br />
14. DMI3: Algorithm for detection of motor intention<br />
15. ADD1: SOIAA Module<br />
16. FAI8: Self-motivated gait and goal generation sub-system<br />
17. SOIAA1: Sensorimotor data from the robot platforms<br />
18. SOIAA6: Models <strong>and</strong> algorithms for the identification <strong>and</strong> anticipation of human purposeful<br />
behaviour<br />
19. FAI7: Robot cognitive control architecture<br />
4.5.2 First Demonstrator: Mobile Robot-assisted Gait Rehabilitation System<br />
20. HSS6: Sensor output related to physical effort assessments<br />
21. HSS7: Sensor output related to gait parameter assessments<br />
22. HSS15: Online access to past rehabilitation sessions<br />
23. RSM11: Modelling of the walking in the rehabilitation system<br />
24. MPD4: Body weight support.<br />
25. ACT4: Actuator models for robotic system modelling <strong>and</strong> integration of motor control units<br />
26. ACT5: The actuator system of the orthosis<br />
27. ACT6: Actuation of the mobile platform<br />
28. LLC1: Mechanical design of the powered orthosis <strong>and</strong> mobile platform<br />
29. LLC2: Electro-mechanical components<br />
30. LLC5: Low level control loop<br />
31. REAL2: Integration of the actuators to the orthosis<br />
32. REAL3: Integration of the actuators for the mobile manipulator<br />
33. REAL4: Sensor integration into the orthosis<br />
34. HSS21: Intended duration of continuous usage of physiological sensors<br />
35. HSS34: Physiological sensor biocompatibility issues<br />
36. HSS36: Physiological sensor hygienic issues<br />
37. MPD10: Patient stability<br />
4.5.3 Second Demonstrator: Robotic Systems for Examining Hazardous Environments<br />
38. EASD1: Detailed specification of the evaluation scenarios.<br />
39. EASD2: Integration (wrapping) of modules of existing control architecture (e.g. robot arm control<br />
module) of the second demonstrator into <strong>CORBYS</strong> control architecture.<br />
40. EASD3: Analysis of usability of existing sensors for <strong>CORBYS</strong> demonstration<br />
41. EASD4: Situation Awareness<br />
42. EASD5: Identification <strong>and</strong> anticipation of human co-worker purposeful behaviour<br />
43. EASD6: Extension of existing sensor modules (e.g. vision)<br />
44. EASD7: Evaluation of <strong>CORBYS</strong> cognitive control architecture on the existing robotic system<br />
4.6 State of the Market<br />
This section presents a state-of-the-market review of systems relevant to CORBYS. The first subsection focuses<br />
on the market for gait rehabilitation systems, and the second on the market for autonomous robotic systems<br />
that assist humans in examining hazardous areas.<br />
4.6.1 First Demonstrator: Mobile Robot-assisted Gait Rehabilitation System<br />
The following subsections report statistics on the neurological conditions identified by the clinical partners in<br />
the CORBYS Consortium as most relevant to the application of the CORBYS technologies to gait<br />
rehabilitation for patients suffering from these conditions.<br />
4.6.1.1 Stroke<br />
Table 1 below shows data from the WHO-MONICA Project (WHO MONICA Project cited in Major and<br />
Chronic Diseases Report 2007), reported for the age range 35-64 years, on mean stroke event rates derived<br />
from the last 3 years of surveillance. Annual change in stroke events and 28-day case fatality are also<br />
reported (Major and Chronic Diseases Report 2007).<br />
Table 1: WHO-MONICA Project 6 EU population.<br />
Age-standardised average attack rate of stroke events (fatal and non-fatal) per 100,000: mean of the last 3 years of the 10-year<br />
surveillance in men and women aged 35-64 years; 28-day case fatality; average annual trend in 10 years of stroke events.<br />
4.6.1.2 Multiple Sclerosis<br />
The cost per MS case in Europe ranges from €10 000 to €54 000, with a mean of €31 000 (Sobocki P 2007<br />
cited in Major <strong>and</strong> Chronic Diseases Report 2007). The distribution of the estimated total cost of MS in<br />
Europe in 2005 by resource use components is reported in Figure 8.<br />
Figure 8: Distribution of total cost of MS in Europe (year 2005) by resource use components<br />
MS prevalence by gender, age <strong>and</strong> European Country (where data were available) <strong>and</strong> MS total annual<br />
incidence rates by European Country are summarised in Table 2 <strong>and</strong> Table 3 (Major <strong>and</strong> Chronic Diseases<br />
Report 2007).<br />
Table 2: Prevalence (per 100 000) of MS in Europe by age (best estimates)<br />
Table 3: Incidence (per 100 000/ year) of MS in Europe<br />
4.6.1.3 Injuries<br />
4.6.1.3.1 Fatal injuries<br />
The injury pyramid for the EU shows at its top a total of 250,000 fatal injuries. The victims are mostly<br />
children, adolescents and young adults. The Netherlands has the lowest rate of fatal injuries in the EU, and it<br />
is estimated that more than 100,000 lives could be saved each year if all countries lowered their injury<br />
mortality rate to the current Netherlands rate (Injuries in the European Union Statistics Summary 2005-2007).<br />
4.6.1.3.2 Non-fatal injuries<br />
An estimated 60 million people are medically treated for an injury every year. Of these, 42 million are treated<br />
in hospital, of whom 7 million are admitted for severe injuries, i.e. about 19,000 people every day. The total<br />
estimated cost of injury-related hospital treatment is €15 billion, since these patients receive more than<br />
50 million hospital days of treatment every year, accounting for 9% of all hospital days (DG Sanco 2004 cited<br />
in Injuries in the European Union Statistics Summary 2005-2007). More than 3 million people in the European<br />
Union are permanently disabled due to these non-fatal injuries (Labour Force Survey 2002 cited in Injuries in<br />
the European Union Statistics Summary 2005-2007). Figure 9 below shows non-fatal injuries per 1,000 by sex<br />
and age group.<br />
Figure 9: Non-fatal injuries per 1000 by sex and age group<br />
4.6.1.3.3 Trends<br />
Fatal home and leisure injuries are growing because of falls among elderly people. Non-fatal injuries are also<br />
on the rise in the home and leisure areas, whereas in traffic and at the workplace they are level. There will be a<br />
rise in the number of disabled people because of the decline in fatal injuries combined with stable or increasing<br />
non-fatal injuries. The injury hotspots identified are children, adolescents, older people, vulnerable road users,<br />
sports, product- and services-related accidents, interpersonal violence, and self-harm. 75% of all injury deaths<br />
are due to accidental injuries and 25% to intentional injuries. Most fatalities occur due to suicide and road<br />
injuries, both in absolute and relative terms (in relation to the number of hospital-treated injuries).<br />
70% of all injuries in the EU are treated in hospitals. Home, leisure and sports form the biggest domain of<br />
hospital-treated injuries, accounting for 74% (Assessing disability 2004 cited in Injuries in the European Union<br />
Statistics Summary 2005-2007). Road injuries account for 10% of all hospital-treated injuries, or a total of 4.3<br />
million victims every year, as estimated by the EU IDB (Injuries in the European Union Statistics Summary<br />
2005-2007).<br />
4.6.1.4 Wheelchair Market Europe (Business Wire 2009)<br />
The top five countries in the wheelchair market are Germany, the United Kingdom, Italy, Spain, and France.<br />
In these countries the market grew by 5.8% in total units sold and 5.2% in value in 2008. The wheelchair<br />
market is expected to grow steadily after 2008, reaching approximately 300,000 units sold in 2011,<br />
amounting to €737.3 million in sales. More people are surviving accidents and people live longer<br />
because of better medical treatment and equipment, which has driven the growth of the wheelchair market.<br />
4.6.2 Second Demonstrator: Robotic Systems for Examining Hazardous Environments<br />
Nowadays, intelligent remotely operated vehicles are often employed to investigate environments that are<br />
toxic or irradiated, or areas that are hard for a person to reach or dangerous to walk through. Many intelligent<br />
vehicles have been developed in recent years, and some of them are already available on the market.<br />
Remotely operated vehicles carry a large variety of sensors for environment perception as well as a robust<br />
platform with wheels or tracks for propulsion, but they do not act independently, being remotely operated by<br />
a person. Some remotely operated vehicles include on-board intelligence to monitor their surroundings and<br />
sense dangerous situations (such as detecting and avoiding obstacles), which makes remote control<br />
significantly easier for the operator. The wireless communication channel is critical for remotely controlled<br />
vehicles, because it must enable real-time transfer of all data required by the user.<br />
Remotely operated vehicles exist in different size and weight categories, depending on the applications they<br />
serve. They are employed in military as well as civilian applications. Many models exist on the market,<br />
adapted to specific applications. The scenarios in which such vehicles are useful range from reconnaissance<br />
and sample collection to Explosive Ordnance Disposal (EOD). Some of these vehicles are presented in the<br />
following paragraphs.<br />
Figure 10: Asendro EOD robot<br />
ASENDRO EOD, developed by Diehl BGT Defence (Modular Robot ASENDRO, 2007), is a robot which<br />
allows rapid and reliable detection of suspicious objects as well as dangerous zones, and is the first small robot<br />
worldwide which can be used for both reconnaissance and EOD tasks. The robot is equipped with a<br />
manipulator arm including a two-jaw gripper which is used for moving and handling objects such as door and<br />
window handles. The arm is controlled through a new type of telepresence technology: the arm is moved<br />
synchronously with the operator's hand or head movement. This feature enables the operator to act as if he<br />
himself were on site. Stereo cameras are attached at the tip of the arm, which transmit their data to a head-up<br />
display. Thus, the operator can see three-dimensionally "with the robot's eyes" and estimate distances from<br />
objects.<br />
The modularity of the ASENDRO EOD allows it to become the reconnaissance robot ASENDRO SCOUT<br />
through replacement of the payload module. Controlled from a safe distance, the robot explores the<br />
operational site. A thermal and a video camera are then mounted on the extendable reconnaissance arm and<br />
transfer images as well as sound to the command centre.<br />
Figure 11: Dragon runner<br />
The Dragon Runner (Dragon Runner Reconnaissance Robot), originally developed by the National Robotics<br />
Engineering Centre (NREC) in 2002-2003, is a rugged, ultra-compact, lightweight and portable<br />
reconnaissance robot developed for urban operations (UO). The prototype model of the Dragon Runner<br />
measures around 23 cm in length, 20 cm in width and 7.5 cm in height, with a weight of about 7 kg. The basic<br />
robot operates as a tough, low-observable ground sensor providing corner views to users, moving at a speed<br />
of about 8 km/h.<br />
In May 2007, Automatika was acquired by Foster-Miller, a QinetiQ North America company, which brought<br />
the Dragon Runner to market. In November 2009, QinetiQ received contracts for around 100 Dragon Runner<br />
robots from the UK Ministry of Defence (MoD) to support its military operations in Afghanistan. The contract<br />
was worth £12m ($19m) and included the provision of technical services and spares.<br />
Figure 12: TechnoRobot RiotBot with remote control<br />
TechnoRobot specialises in the design and manufacture of robots intended for the security and defence<br />
sectors. After two years of development, TechnoRobot has launched RiotBot (TechnoRobot – Rapidly<br />
Deployable Remotely Operated Less-Lethal Support Robots), a robotic platform requested for years by<br />
various international agencies involved in high-risk security and defence operations. To design the RiotBot,<br />
a strong R&D team of more than 20 technicians was formed within the enterprise, working closely alongside<br />
various special-operations tactical experts.<br />
RiotBot was designed to intervene in high-risk missions to remove the threat to human team members. The<br />
principal objective is to minimise injury within both the intervention group and the opposing force. To that<br />
end, the RiotBot is equipped with a non-lethal weapon. It is also equipped with a laser sight for greater weapon<br />
accuracy and a powerful flashlight for night-time operations. RiotBot has six-wheel-drive capability and can<br />
reach speeds exceeding 20 km/h. A lightweight portable control unit allows the RiotBot operator to view<br />
images captured by the robot's camera in real time from over a kilometre away.<br />
Figure 13: iRobot 510 Packbot<br />
510 PackBot (GROUND ROBOTS – 510 PACKBOT®), produced by iRobot, is a tactical mobile robot that<br />
performs multiple missions while keeping warfighters and first responders out of harm's way. Through its<br />
modularity, adaptability and expandability it can be used in various scenarios such as bomb disposal,<br />
surveillance / reconnaissance and hazardous-material detection. 510 PackBot easily climbs stairs, rolls over<br />
rubble and navigates narrow passages with sure-footed efficiency, travelling at speeds of up to 9 km/h.<br />
510 PackBot relays real-time video, audio and other sensor readings while the operator stays at a safe standoff<br />
distance. The operator can view a 2-D or 3-D image of the robot on the control unit, allowing for precise<br />
positioning. PackBot also features a game-style hand controller for faster training and easier operation in the<br />
field. More than 3,000 PackBot robots have been delivered to military and civil defence forces worldwide.<br />
Figure 14: EOD-Roboter made by telerob<br />
tEODor (telerob) stands for "Explosive Ordnance Disposal and observation robot" and is a remote-controlled<br />
EOD robot produced by telerob. Its features include a programmable six-axis manipulator, an additional linear<br />
axis, automatic tool exchange, an integrated diagnostic system and parallel operation of up to five disruptors.<br />
More than 350 units have been sold in 39 countries to military and police units worldwide.<br />
5 Mechatronic Control Systems<br />
5.1 Canonical subsystems<br />
• Sensoring, data acquisition, fusion and interpretation (TASK 3.1, SINTEF)<br />
• Sensor network design (TASK 3.2, SINTEF)<br />
• Robotic system modelling and integration of motor control units (TASK 6.2, UB)<br />
• Design and development of the mobile platform (TASK 7.1, OMS)<br />
• Design and development of the powered orthosis (TASK 7.2, OB)<br />
• Design and integration of actuation system (TASK 7.3, SCHUNK)<br />
• Development of low-level control units (TASK 7.4, VUB)<br />
• Realization of the robotic system (TASK 7.5, SCHUNK)<br />
5.2 Human sensing systems (Tasks 3.1, 3.2; SINTEF)<br />
Relevant tasks:<br />
• Sensoring, data acquisition, fusion and interpretation (TASK 3.1, SINTEF)<br />
• Sensor network design (TASK 3.2, SINTEF)<br />
This section presents the CORBYS overarching requirements on the human sensing systems (the physiological<br />
sensor system, excluding BCI components) and their interfaces to other WPs and partners.<br />
5.2.1 Functional <strong>Requirements</strong><br />
5.2.1.1 Processes<br />
On selecting physiological sensor components for <strong>CORBYS</strong> <strong>and</strong> properties of individual sensor<br />
components<br />
Requirement No.: HSS1<br />
Name: Sensors implemented in the <strong>CORBYS</strong> system<br />
Description: The actual sensors selected will be defined in the detailed specification<br />
Reason / Comments: Comment<br />
Anticipated sensors:<br />
• Heart rate<br />
• EMG (muscular activity)<br />
• EDR and/or humidity/sweat sensors<br />
• Inertial measurement units (3-axis accelerometer, gyroscope and magnetometer)<br />
• The requirements for mechanical sensing (force, torque, angular joint movements,<br />
force/pressure distribution) need to be discussed with the consortium<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS2<br />
Name: Sensor locations<br />
Description: The actual sensor locations will be defined in the detailed specification.<br />
Reason / Comments: Comment:<br />
A complete discussion on the physical/mechanical patient-robot interface configuration is<br />
required here. Some aspects related to sensing:<br />
• Homing positions of inertial sensors<br />
• Positions of robot mechanical support to the patient (such as limb fixation) and<br />
patient movement actuators<br />
• Actuator position movement vs. limb position movement: both the robot and the<br />
body sensors can be used to find e.g. the position and angle of a knee. Which is<br />
best, or is redundancy required?<br />
• Can certain sensing components be integrated in the robot itself, rather than being<br />
mounted on the patient?<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS3<br />
Name: Patient user size<br />
Description: Adult users. Height, weight and circumferences will be discussed with clinical partners<br />
Reason / Comments: Comment:<br />
A complete discussion on the physical/mechanical patient-robot interface<br />
configuration is required here. Some aspects related to sensing:<br />
• Homing positions of inertial sensors<br />
• Positions of robot mechanical support to the patient (such as limb fixation) and<br />
patient movement actuators<br />
• Actuator position movement vs. limb position movement: both the robot and the<br />
body sensors can be used to find e.g. the position and angle of a knee. Which is<br />
best, or is redundancy required?<br />
• Can certain sensing components be integrated in the robot itself, rather than being<br />
mounted on the patient?<br />
Indicative priority M<strong>and</strong>atory<br />
Sensor Output<br />
Requirement No.: HSS4<br />
Name: Sensor output of primary parameter values<br />
Description: Definition on how human sensor parameters are shared with the rest of the <strong>CORBYS</strong><br />
system, as well as in export to log files: Details are to be defined.<br />
Reason / Comments: (Example:<br />
Heart rate will be in [beats/minute], based on integration of values during the past 5 seconds.<br />
The value will be updated every second)<br />
Indicative priority M<strong>and</strong>atory<br />
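The HSS4 example (heart rate in beats/minute, integrated over the past 5 seconds and updated every second) could be realised along these lines; the sliding-window estimator below is an illustrative sketch only, not the CORBYS implementation:<br />

```python
from collections import deque

class HeartRateEstimator:
    """Sliding-window heart-rate estimate as in the HSS4 example:
    beats/minute derived from beat timestamps in the past 5 seconds."""

    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.beats = deque()  # timestamps of detected beats, in seconds

    def add_beat(self, t):
        self.beats.append(t)

    def rate_bpm(self, now):
        # Drop beats that fell out of the 5-second window
        while self.beats and self.beats[0] < now - self.window_s:
            self.beats.popleft()
        # Scale beats-in-window to beats per minute
        return len(self.beats) * 60.0 / self.window_s

hr = HeartRateEstimator()
for t in [0.0, 1.0, 2.0, 3.0, 4.0]:   # one beat per second
    hr.add_beat(t)
print(hr.rate_bpm(now=5.0))            # 5 beats in 5 s -> 60.0 bpm
```

In a deployed system, `rate_bpm` would be polled once per second, matching the one-second update rate in the example above.<br />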
Requirement No.: HSS5<br />
Name: Safety-related sensor output – information derived from sensor fusion of multiple sensors<br />
Description: Status information or flags should be raised if sensor readings indicate a potentially<br />
hazardous situation. To be discussed with partners with stakes in the design of <strong>CORBYS</strong><br />
control system<br />
Reason / Comments: Detailed definition will be developed later, <strong>and</strong> will depend upon the entire <strong>CORBYS</strong><br />
system configuration<br />
Indicative priority M<strong>and</strong>atory<br />
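A minimal sketch of the HSS5 idea of raising status flags from fused sensor readings; the parameter names and thresholds are hypothetical placeholders, to be defined with the partners holding stakes in the control-system design:<br />

```python
def safety_flags(readings):
    """Raise status flags when fused sensor readings indicate a potentially
    hazardous situation (HSS5). Thresholds are hypothetical placeholders."""
    flags = []
    hr = readings.get("heart_rate_bpm")
    effort = readings.get("effort_level")     # e.g. normalised 0..1
    if hr is not None and hr > 180:
        flags.append("HEART_RATE_HIGH")
    # Fusion rule: a moderate heart rate alone is acceptable, but combined
    # with sustained high effort it is flagged as potentially hazardous.
    if hr is not None and effort is not None and hr > 150 and effort > 0.8:
        flags.append("OVEREXERTION_RISK")
    return flags

print(safety_flags({"heart_rate_bpm": 160, "effort_level": 0.9}))
# ['OVEREXERTION_RISK']
```

The point of the fusion rule is that neither reading triggers a flag on its own; only their combination does.<br />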
Requirement No.: HSS6<br />
Name: Sensor output related to physical effort assessments<br />
Description: Details are to be defined.<br />
Reason / Comments: Comments:<br />
• Detailed definition will be developed later, and will depend upon the entire<br />
CORBYS system configuration<br />
• Synergies with BCI measurements and development of artificial muscle actuators<br />
must be explored<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS7<br />
Name: Sensor output related to gait parameter assessments<br />
Description: Details are to be defined.<br />
Reason / Comments: Comments:<br />
• Detailed definition will be developed later, and will depend upon the entire<br />
CORBYS system configuration<br />
• Synergies with BCI measurements and development of artificial muscle actuators<br />
must be explored<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS8<br />
Name: Sensor output related to identification of psycho-physiological states<br />
Description: Details are to be defined.<br />
Reason / Comments: Comment:<br />
• Detailed definition will be developed later, and will depend upon the entire<br />
CORBYS system configuration<br />
• Synergies with BCI measurements must be explored<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS9<br />
Name: Sensor output related to identification of intentional states<br />
Description: Details are to be defined.<br />
Reason / Comments: Comment:<br />
• Detailed definition will be developed later, and will depend upon the entire<br />
CORBYS system configuration<br />
• Synergies with BCI measurements must be explored<br />
Indicative priority M<strong>and</strong>atory<br />
Some overarching design constraints on the physiological sensors <strong>and</strong> the rest of the <strong>CORBYS</strong> system<br />
Requirement No.: HSS10<br />
Name: Data processing <strong>and</strong> signal analysis requirements<br />
Description: Details not yet decided.<br />
Reason / Comments: Generally speaking, the project needs to sum up all the data processing requirements, in<br />
terms of both capacity and platform.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS11<br />
Name: Integrating physiological sensor measurements (SINTEF) with BCI (BBT)<br />
Description: Details not yet decided.<br />
Reason / Comments: Discussions between BBT <strong>and</strong> SINTEF regarding finding a shared platform for integrating<br />
sensor signals.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS12<br />
Name: Interfacing physiological sensor measurement system with the main <strong>CORBYS</strong> cognitive<br />
robot control system<br />
Description:<br />
Reason / Comments: The main alternatives for feeding the sensor data stream into a computer:<br />
• Developing a dedicated data acquisition printed circuit board allowing integration<br />
with the computer using standard cables (parallel, serial RS232 or USB)<br />
• Purchasing a dedicated data acquisition board from, for example, National Instruments<br />
• Integrating sensors using standards-based wireless communication, e.g. Bluetooth or<br />
WiFi.<br />
Indicative priority M<strong>and</strong>atory<br />
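The wired HSS12 alternatives ultimately deliver a byte stream to the computer; the sketch below shows how such a stream might be framed and parsed. The frame layout is entirely hypothetical and not a CORBYS specification:<br />

```python
import struct

# Hypothetical frame layout for one sensor sample arriving over a serial or
# wireless link (HSS12): sensor id (uint16), timestamp in ms (uint32),
# value (float32), little-endian, no padding. Illustrative only.
FRAME = struct.Struct("<HIf")

def pack_sample(sensor_id, t_ms, value):
    """Encode one sample into a fixed-size binary frame."""
    return FRAME.pack(sensor_id, t_ms, value)

def unpack_stream(buf):
    """Split a byte buffer into complete frames. A real driver would also
    handle sync bytes, checksums and partial reads."""
    for off in range(0, len(buf) - FRAME.size + 1, FRAME.size):
        yield FRAME.unpack_from(buf, off)

buf = pack_sample(1, 1000, 72.0) + pack_sample(2, 1000, 0.5)
for sensor_id, t_ms, value in unpack_stream(buf):
    print(sensor_id, t_ms, value)
```

A fixed frame size keeps parsing trivial on both a dedicated acquisition board and a wireless link, at the cost of some bandwidth compared with variable-length frames.<br />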
Requirement No.: HSS13<br />
Name: Mains requirements<br />
Description: It must be anticipated that some of the measurement equipment will require 220V/50Hz<br />
Reason / Comments: The project needs to sum up all the power requirements<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS14<br />
Name: Sensor network architecture requirements<br />
Description:<br />
Reason / Comments: The project needs to compile a summary of all sensors <strong>and</strong> actuators (with detailed operation<br />
characteristics <strong>and</strong> worst-case values) in order to specify the total sensor network<br />
architecture.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS15<br />
Name: Online access to past rehabilitation sessions<br />
Description: Online access to past (<strong>and</strong> possibly ongoing) therapy sessions implies a software<br />
architecture solution, as well as probably a WiFi node on the <strong>CORBYS</strong> system<br />
Reason / Comments: A software architecture incorporating the requirements will have to be implemented.<br />
Indicative priority M<strong>and</strong>atory<br />
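A minimal sketch of the HSS15 software-architecture idea of recording sessions for later online access; it is in-memory only, whereas a real solution would persist the data and serve it, e.g. over the WiFi node mentioned above:<br />

```python
import json
import time

class SessionStore:
    """Minimal sketch of HSS15: record therapy-session data so that past
    (and possibly ongoing) sessions can be retrieved online. In-memory
    here; a real system would persist to disk or a server."""

    def __init__(self):
        self._sessions = {}

    def record(self, session_id, samples):
        # Store the sample list together with a storage timestamp
        self._sessions[session_id] = {
            "stored_at": time.time(),
            "samples": samples,
        }

    def get(self, session_id):
        # Returns None for unknown sessions
        return self._sessions.get(session_id)

    def export_json(self, session_id):
        # Export to a log-file-friendly format (cf. HSS4)
        return json.dumps(self._sessions[session_id]["samples"])

store = SessionStore()
store.record("patient42-2011-07-01", [{"t": 0.0, "heart_rate_bpm": 72}])
print(store.export_json("patient42-2011-07-01"))
```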
Requirement No.: HSS16<br />
Name: <strong>CORBYS</strong> system intermittence, delay <strong>and</strong> synchronisation requirements for sensors <strong>and</strong><br />
actuators<br />
Description:<br />
Reason / Comments: A shared underst<strong>and</strong>ing of signal propagation will have to be reached between the partners.<br />
Indicative priority M<strong>and</strong>atory<br />
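One way to reason about the HSS16 synchronisation requirement is to pair samples from two sensor streams by nearest timestamp and reject pairs whose skew exceeds an agreed bound; the sketch below is illustrative only:<br />

```python
import bisect

def align_nearest(ts_a, ts_b, max_skew):
    """Pair each timestamp in stream A with the nearest timestamp in the
    sorted stream B, rejecting pairs whose skew exceeds max_skew
    (an HSS16-style synchronisation check; illustrative only)."""
    pairs = []
    for ta in ts_a:
        i = bisect.bisect_left(ts_b, ta)
        # The nearest neighbour is either just before or just after ta
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_b)]
        j = min(candidates, key=lambda j: abs(ts_b[j] - ta))
        if abs(ts_b[j] - ta) <= max_skew:
            pairs.append((ta, ts_b[j]))
    return pairs

print(align_nearest([0.0, 0.1, 0.5], [0.01, 0.11, 0.8], max_skew=0.05))
# [(0.0, 0.01), (0.1, 0.11)]
```

The fraction of rejected pairs gives a simple diagnostic of intermittence and delay between two sensor or actuator channels.<br />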
5.2.1.2 Interfaces<br />
User interfaces and hardware/software interfaces, i.e. what the system must connect to.<br />
Requirement No.: HSS17<br />
Name: Number of physiological sensor probes on the patients<br />
Description: The detailed number is TBD. No limitations are stated at this stage.<br />
Reason / Comments: Comment: One important design challenge in order to make the system usable is to select a<br />
minimum sensor configuration that gives all essential information while allowing easy<br />
mounting <strong>and</strong> dismounting of a low number of sensor devices.<br />
Indicative priority M<strong>and</strong>atory<br />
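The HSS17 design challenge of choosing a minimum sensor configuration can be viewed as a set-cover problem; the greedy sketch below, with hypothetical device and parameter names, illustrates the idea, though greedy selection is not guaranteed optimal:<br />

```python
def minimal_sensor_set(sensors, required):
    """Greedy sketch of the HSS17 design challenge: pick a small set of
    sensor devices that together cover all essential parameters. Greedy
    set cover is a common approximation, not an optimal solver."""
    required = set(required)
    chosen = []
    while required:
        # Pick the device covering the most still-uncovered parameters
        best = max(sensors, key=lambda s: len(required & sensors[s]))
        if not required & sensors[best]:
            raise ValueError("required parameters cannot be covered")
        chosen.append(best)
        required -= sensors[best]
    return chosen

# Hypothetical devices and the parameters each one measures
sensors = {
    "chest_band": {"heart_rate", "respiration"},
    "wrist_imu":  {"acceleration", "orientation"},
    "emg_patch":  {"muscle_activity"},
}
print(minimal_sensor_set(sensors, {"heart_rate", "acceleration", "muscle_activity"}))
```

Fewer devices means easier mounting and dismounting, which is exactly the usability trade-off HSS17 describes.<br />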
Requirement No.: HSS18<br />
Name: Number of physiological sensor probes on the patients<br />
Description: For ease-of-use purposes, it will be desirable to combine several sensors into single devices,<br />
thereby reducing the experienced system complexity<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No.: HSS19<br />
Name: Signal connection of physiological sensors to the <strong>CORBYS</strong> system<br />
Description: Sensor data signals will be sent through electrical wires/cables<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS20<br />
Name: Signal connection of physiological sensors to the <strong>CORBYS</strong> system<br />
Description: For ease of use purposes, possibilities to make some sensor units transmit data using<br />
wireless communication protocols will be considered<br />
Reason / Comments:<br />
Indicative priority Optional<br />
5.2.2 Non Functional <strong>Requirements</strong><br />
5.2.2.1 Performance<br />
Requirement No.: HSS21<br />
Name: Intended duration of continuous usage of physiological sensors<br />
Description: Anticipation: A therapy session will last up to 2 hours<br />
Reason / Comments: Usage scenario perspectives must be provided by clinical partners. The intended duration of<br />
usage affects the sensor technologies to choose from<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS22<br />
Name: Intended duration of continuous usage of physiological sensors<br />
Description: Anticipation: 8 hour sessions<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No.: HSS23<br />
Name: Intended duration of continuous usage of physiological sensors<br />
Description: If <strong>CORBYS</strong> becomes a “community walker” gait assistance system, usage sessions could<br />
last from morning to evening.<br />
Reason / Comments:<br />
Indicative priority Optional<br />
5.2.2.2 Safety <strong>and</strong> reliability<br />
Physiological sensors safety <strong>and</strong> comfort to use:<br />
Requirement No.: HSS24<br />
Name: Electrical measurement system safety<br />
Description: Within the Consortium, electronic systems are acceptable as long as they are tested and deemed<br />
safe for the CORBYS users (e.g. complete user shielding from 220V/50Hz).<br />
Reason / Comments: Comments:<br />
It is important to discuss local human-subjects research requirements with respect to<br />
carrying out human-subjects testing using “investigational” sensors and concepts like<br />
CORBYS.<br />
Keep in mind that the system should be safe for both patients and professionals<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS25<br />
Name: Electrical measurement system safety<br />
Description: CE Medical device st<strong>and</strong>ard electrical safety<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No.: HSS26<br />
Name: Electrical measurement system safety<br />
Description: Medical device CE approvals on all sensor components (For a commercial product after<br />
<strong>CORBYS</strong>)<br />
Reason / Comments:<br />
Indicative priority Optional<br />
Requirement No.: HSS27<br />
Name: Sensor systems should not be invasive or excessively obtrusive<br />
Description: In vivo (implanted) sensor systems are not part of the CORBYS physiological<br />
measurement system.<br />
Sensor concepts probing human fluidic samples (blood, urine, saliva, etc.) are excluded.<br />
Sensor concepts probing human body openings (such as rectal core temperature<br />
measurements and breath air gas analysis) are excluded.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS28<br />
Name: Limitations in the range of acceptable users<br />
Description: Based on user safety concerns, the physiological measurement system might not be usable on<br />
patient groups such as:<br />
• Patients with electronic implants<br />
• Patients with certain dermatologic conditions<br />
• Patients with limitations in cognitive capabilities<br />
• Others - to be decided<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS29<br />
Name: Mounting and removal of sensors on the patient<br />
Description: The physiological sensors will be mounted <strong>and</strong> removed by trained clinical rehabilitation<br />
professionals<br />
Reason / Comments: Usage scenario perspectives must be provided by clinical partners<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS30<br />
Name: Mounting of individual sensor components directly on the user’s skin<br />
Description: Certain physiological sensors (for example electrode based) can be placed at optimum<br />
measurement locations, directly on the skin of the patient.<br />
Reason / Comments: Usage scenario perspectives must be provided by clinical partners.<br />
This requirement implies that some sensors will have to be placed under the patient’s layers<br />
of clothing, possibly exposing some body parts during mounting.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS31<br />
Name: Mounting of individual sensor components directly on the user’s skin<br />
Description: Less optimal but more user-friendly locations can be used.<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No.: HSS32<br />
Name: Placement of sensor components on the patient<br />
Description: All sensor components should be clearly marked in order to reduce the risk of placing<br />
sensors in incorrect measurement positions (e.g. mixing up left and right).<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS33<br />
Name: Placement of sensor components on the patient<br />
Description: Preferably, automated detection mechanisms should be provided to avoid the risk of incorrect placement.<br />
Reason / Comments:<br />
Indicative priority Optional<br />
5.2.2.3 Other<br />
Requirement No.: HSS34<br />
Name: Physiological sensor biocompatibility issues<br />
Description: Sensors should not cause irritation, inflammatory responses or pain during the designated<br />
duration of CORBYS rehabilitation sessions.<br />
Sensors can be temporarily attached to the patient using e.g. medical grade adhesive tape.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS35<br />
Name: Physiological sensor biocompatibility issues<br />
Description: EC Medical device st<strong>and</strong>ard biocompatibility testing of all materials interfacing the patient<br />
Reason / Comments:<br />
Indicative priority Optional<br />
Requirement No.: HSS36<br />
Name: Physiological sensor hygienic issues<br />
Description: Physiological sensors interfacing the patient’s skin directly must be cleanable or<br />
replaceable between patients:<br />
• Single-use probes<br />
• Multiple-use probes that have smooth surfaces and that can be cleaned in appropriate<br />
detergents<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: HSS37<br />
Name: Time required to mount or dismount all physiological sensors<br />
Description: For a trained user, it should be possible to mount all sensors within the start-up and<br />
shut-down times targeted for the entire CORBYS system.<br />
The time allocated for the physiological sensor system alone is TBD.<br />
Reason / Comments: The Consortium needs to discuss this issue, considering all the different procedures<br />
(adjustments, configurations, fixing of body and limbs, mounting of sensors, initiation of<br />
protocols) required before or after a therapy session.<br />
Indicative priority M<strong>and</strong>atory<br />
5.3 Robotic system modelling <strong>and</strong> integration of motor control units (TASK 6.2,<br />
UB)<br />
Relevant tasks:<br />
� Robotic system modelling <strong>and</strong> integration of motor control units<br />
Modelling of the robotic systems will be used for evaluating concepts and algorithms and for speeding up the<br />
development process of hardware and software. The following systems will be modelled:<br />
� actuation units,<br />
� sensor units,<br />
� environment (human will also be modelled as an environment subpart),<br />
� robot’s motor control units <strong>and</strong> cognitive modules.<br />
The developed models will be the starting point for research on robot control and cognitive control<br />
architectures. MSC Adams and MATLAB will be used for modelling and simulation of the robotic system, its<br />
components and environment.<br />
5.3.1 Functional <strong>Requirements</strong><br />
5.3.1.1 Processes<br />
Inputs:<br />
Requirement No. RSM1<br />
Name: Model of mechanical construction of the demonstrator<br />
Description: CAD file: STEP or similar<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM2<br />
Name: Actuator models<br />
Description: CAD file: STEP or similar<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM3<br />
Name: Low level control algorithms<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM4<br />
Name: Signal processing algorithms<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM5<br />
Name: Self-awareness modules<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM6<br />
Name: Reflection modules<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM7<br />
Name: SOIAA architecture<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM8<br />
Name: BCI architecture<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM9<br />
Name: Patient motion data<br />
Description: Motion data files<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Outputs:<br />
The output of this task is not a functional sub-system, but rather an appropriate simulation environment in<br />
which the functionality of all other sub-systems will be tested using realistic models of the robotic systems<br />
and their environment.<br />
Requirement No. RSM10<br />
Name: Training data<br />
Description: Output of the modelling <strong>and</strong> simulation process will be training data in the form of<br />
kinematics <strong>and</strong> dynamics data of the modelled robotic system <strong>and</strong> interaction with its<br />
environment.<br />
Reason / Comments: The form of the output will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Processing:<br />
Requirement No. RSM11<br />
Name: Modelling of the walking in the rehabilitation system<br />
Description: The model of the human body will be coupled with the model of the rehabilitation system, and<br />
walking in the system will be modelled. A simple position controller will be used to “guide”<br />
the patient’s walking.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
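To illustrate the kind of position-controlled guidance described in RSM11, the following is a minimal Python sketch: a PD position controller guides a single actuated joint along a reference trajectory while logging kinematics and dynamics data of the kind RSM10 anticipates. The single-link dynamics, gains and reference trajectory are illustrative assumptions only; the actual models will be built in MSC Adams and MATLAB.<br />

```python
import math

# Minimal sketch of RSM11-style position guidance: a PD controller drives a
# single actuated joint (a stand-in for one DOF of the rehabilitation system)
# along a reference "gait-like" sine trajectory, logging kinematics and
# dynamics data (cf. RSM10). All parameters are illustrative assumptions.

I, B = 0.5, 0.1          # joint inertia (kg*m^2) and viscous damping (assumed)
KP, KD = 60.0, 8.0       # PD gains (assumed)
DT = 0.001               # integration step (s)

def reference(t):
    """Hypothetical reference joint angle (rad) at 0.5 Hz."""
    return 0.4 * math.sin(2.0 * math.pi * 0.5 * t)

def simulate(t_end=4.0):
    """Euler-integrate the joint and log (time, angle, velocity, torque)."""
    q, qd, t, log = 0.0, 0.0, 0.0, []
    while t < t_end:
        q_ref = reference(t)
        tau = KP * (q_ref - q) - KD * qd   # PD "guidance" torque
        qdd = (tau - B * qd) / I           # single-link joint dynamics
        qd += qdd * DT
        q += qd * DT
        t += DT
        log.append((t, q, qd, tau))
    return log

log = simulate()
t_f, q_f, _, _ = log[-1]
print(f"tracking error at t={t_f:.2f} s: {abs(reference(t_f) - q_f):.4f} rad")
```

The logged tuples correspond to the training data (kinematics and dynamics of the modelled system) that RSM10 names as the output of the modelling and simulation process.<br />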
Requirement No. RSM12<br />
Name: Modelling of different behaviours of the system<br />
Description: More complex control algorithms will be simulated, and the behaviour of the controllers will be<br />
observed, as well as the interaction forces and torques between the human body model and the<br />
rehabilitation system.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. RSM13<br />
Name: Cognitive control<br />
Description: The higher control structures will be modelled and the behaviour of the system will be<br />
observed when the system is controlled by the cognitive modules.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Data flow:<br />
Requirement No. RSM14<br />
Name: Software inter-module communication<br />
Description: Since several software modules will be used in this task (cognitive modules, sensor modules<br />
and others), it is necessary to agree with the responsible partners on the communication<br />
protocols.<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
5.3.1.2 Interfaces<br />
Requirement No. RSM15<br />
Name: Hardware in the loop simulation<br />
Description: It is possible to include mechatronic sub-systems of the robotic system directly in the<br />
simulation in order to prove concepts and test algorithms.<br />
Reason / Comments: Will be discussed with partners.<br />
Indicative priority Optional<br />
5.3.1.3 Goals <strong>and</strong> expectations<br />
Requirement No. RSM16<br />
Name: Sub-system evaluation<br />
Description: Support the evaluation of the concepts, algorithms and performance of the researched robotic<br />
system. The results will be used to improve the simulated sub-systems.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
5.3.1.4 Operating environment<br />
Requirement No. RSM17<br />
Name: MATLAB <strong>and</strong> MSC Adams<br />
Description: The robotic system <strong>and</strong> all other software modules will be modelled using MATLAB <strong>and</strong><br />
MSC Adams software on Windows platform.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
5.4 Design <strong>and</strong> development of the mobile platform (Task 7.1, OBMS)<br />
In this task the design <strong>and</strong> development of the mobile robotised platform will be carried out.<br />
5.4.1 Functional <strong>Requirements</strong><br />
5.4.1.1 Processes<br />
Inputs<br />
Requirement No. MPD1<br />
Name: <strong>Requirements</strong> <strong>and</strong> mobile platform specifications<br />
Description: The actual mobile platform will be defined in the detailed specification.<br />
Reason / Comments: Comment:<br />
A complete discussion on the number <strong>and</strong> type of DOFs, their position <strong>and</strong> specifications<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. MPD2<br />
Name: Modelling, simulation, optimization<br />
Description: Modelling, simulation and optimization of the robotic system, its components and<br />
environment, as the starting point for research on robot control and cognitive control<br />
architectures, will be performed by UB in Task 6.2.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No. MPD3<br />
Name: Mechanical design of the mobile robotic platform.<br />
Description: Design of the mobile robotic platform based on requirements <strong>and</strong> specifications defined in<br />
deliverables 2.1 <strong>and</strong> 2.2<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. MPD4<br />
Name: Body weight support.<br />
Description: Infinitely variable body weight release to decrease the ground reaction force, provided through<br />
the frame of the machine rather than through an overhead support (harness).<br />
Reason / Comments: The ability of the mobile platform to hold the patient could contribute to the design of the<br />
powered orthosis.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: MPD5<br />
Name: Mechanical development (construction)<br />
Description:<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: MPD6<br />
Name: Actuation of the mobile platform<br />
Description: The actual actuators selected will be defined in the detailed specification.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: MPD7<br />
Name: Sensor integration<br />
Description: The actual sensors selected will be defined in the detailed specification.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Output<br />
Requirement No.: MPD8<br />
Name: CAD model of the mobile platform for robotic system modelling<br />
Description: CAD model for robotic system modelling <strong>and</strong> integration of motor control units in Task 6.2<br />
Reason / Comments: The developed models for simulation of the robotic system, its components and environment<br />
will be the starting point for research on robot control and cognitive control architectures.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: MPD9<br />
Name: Functional testing<br />
Description:<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
5.4.2 Non Functional <strong>Requirements</strong><br />
5.4.2.1 Safety <strong>and</strong> reliability<br />
Requirement No.: MPD10<br />
Name: Patient stability<br />
Description:<br />
Reason / Comments:<br />
The patient must not fall with the system or collapse within it.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: MPD11<br />
Name: Obstacle avoidance<br />
Description: Obstacle identification including stairs up <strong>and</strong> down, collision avoidance<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
5.5 Design <strong>and</strong> development of the powered orthosis (Task 7.2, OB)<br />
In this task the design and development of the exoskeleton/orthotic device, which will later be integrated into<br />
the mobile platform, will be carried out.<br />
5.5.1 Functional <strong>Requirements</strong><br />
5.5.1.1 Processes<br />
Inputs<br />
Requirement No. POD1<br />
Name: <strong>Requirements</strong> <strong>and</strong> orthosis specifications<br />
Description: The actual orthosis will be defined in the detailed specification.<br />
Reason / Comments: Comment:<br />
A complete discussion on the number <strong>and</strong> type of DOFs, their position <strong>and</strong> specifications<br />
(RangeOfMotion, torques <strong>and</strong> forces), cycle numbers<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. POD2<br />
Name: Modelling, simulation, optimisation<br />
Description: Modelling, simulation and optimisation of the powered orthosis system, its components and<br />
environment, as the starting point for research on robot control and cognitive control<br />
architectures, will be performed by UB in Task 6.2.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: POD3<br />
Name: Mechanical design of the powered orthosis<br />
Description: Design of the powered orthosis based on requirements <strong>and</strong> orthosis specifications defined in<br />
deliverables 2.1 <strong>and</strong> 2.2<br />
Reason / Comments: An integrated design of appropriate types of joints and joint mechanisms, or adaptation<br />
interfaces allowing integration of the actuator system (to be developed by SCHUNK in<br />
WP7) and sensorics (to be developed by SINTEF in WP3), is preferred.<br />
A modular design will make the sensor-actuator integration into the joints more convenient,<br />
as well as the later integration of the complete (powered) HKAFO into the mobile platform<br />
(to be developed by OBMS in WP7).<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: POD4<br />
Name: Design according to patient user size<br />
Description: Adaptation to patient’s height <strong>and</strong> weight<br />
Reason / Comments: The orthosis design should be adaptable so that the resulting orthosis can be easily fitted to<br />
various body schemas, i.e. the anatomical structures of end-users (patients of different<br />
age, size and weight). The specification of these data shall be provided in cooperation with<br />
the clinical partners.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: POD5<br />
Name: Mechanical development (construction)<br />
Description: Definition of the adaptation interface to the actuators in cooperation with SCHUNK, and to<br />
the drive frame with OBMS (especially concerning pelvic motion).<br />
Mandatory requirements for clinical application (DOF)<br />
Reason / Comments: To calculate the needed strength of joints and bars, the load values have to be known.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: POD6<br />
Name: Sensor integration<br />
Description: The actual sensors selected will be defined in the detailed specification.<br />
Reason / Comments: The sensors needed for the mechanical values shall be integrated in the orthosis (size and<br />
position to be provided by the partner supplying the sensor); sensors for e.g. vital<br />
parameters have to be applied separately and will not be implemented in the orthosis.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: POD7<br />
Name: Actuation of the orthosis<br />
Description: Appropriate actuators will be defined in the detailed specification.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Output<br />
Requirement No.: POD8<br />
Name: CAD model of the orthosis for robotic system modelling<br />
Description: CAD model for robotic system modelling and integration of motor control units in Task 6.2<br />
Reason / Comments: The developed models for simulation of the robotic system, its components and environment<br />
will be the starting point for research on robot control and cognitive control architectures.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: POD9<br />
Name: Functional testing<br />
Description:<br />
Reason / Comments:<br />
The system shall be tested in conjunction with motors (SCHUNK) <strong>and</strong> driving frame<br />
(OBMS) to ensure that no safety hazards occur <strong>and</strong> that all mechanical components work<br />
together<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: POD10<br />
Name: Functional system to be attached to the mobile platform<br />
Description:<br />
Reason / Comments:<br />
Functional system to be attached to mobile platform in Task 7.5 (SCHUNK)<br />
Indicative priority M<strong>and</strong>atory<br />
5.5.1.2 Interfaces<br />
Requirement No.: POD11<br />
Name: Mobile platform interface module<br />
Description: An interface module on orthosis to be able to attach it to the mobile platform<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: POD12<br />
Name: Patient interface<br />
Description: A patient interface that fits the orthosis to the patient’s body (orthopaedic requirements)<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
5.5.2 Non Functional <strong>Requirements</strong><br />
5.5.2.1 Safety <strong>and</strong> reliability<br />
Requirement No.: POD13<br />
Name: Time required to mount the system<br />
Description: A therapist must be able to set up the system within 10 minutes<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No.: POD14<br />
Name: H<strong>and</strong>ling<br />
Description: The patient must be able to set up the system by himself<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
5.6 Design <strong>and</strong> integration of actuation system (Task 7.3, SCHUNK)<br />
In this task, depending on the established requirements, SCHUNK will provide adequate actuators for<br />
the mobile platform demonstrator and the powered orthosis demonstrator’s joints.<br />
5.6.1 Functional <strong>Requirements</strong><br />
5.6.1.1 Processes<br />
Inputs<br />
Requirement No. ACT1<br />
Name: <strong>Requirements</strong> <strong>and</strong> actuators specifications<br />
Description: The actual actuators will be defined in the detailed specification.<br />
Reason / Comments: Comment:<br />
A complete discussion on the number <strong>and</strong> type of DOFs, their position <strong>and</strong> specifications<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. ACT2<br />
Name: Modelling, simulation, optimization<br />
Description: Modelling, simulation <strong>and</strong> optimization of the robotic system, its components <strong>and</strong><br />
environment.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
<strong>Specification</strong> is needed on operation range, load situations, cycle times <strong>and</strong> load cycles,<br />
voltage range, power supply, cabling, electrical interfacing, fusing, temperature range, IP<br />
protection, brakes <strong>and</strong> emergency situations.<br />
Requirement No.: ACT3<br />
Name: Smart and safe actuators<br />
Description: Smarter <strong>and</strong> safer actuator control by integrating force <strong>and</strong> torque sensors into the control.<br />
Reason / Comments: Research and develop new functions for smarter and safer actuator control by integrating<br />
force and torque sensors into the control. Research on the actuators is important because<br />
of the high miniaturisation and safe operation demands. Conformance with Safe Torque Off<br />
(STO) operation must be ensured.<br />
Indicative priority M<strong>and</strong>atory<br />
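The core idea of ACT3 — feeding a force/torque sensor reading back into the actuator control so that an excessive interaction torque triggers a safe stop — can be sketched as below. The torque limit and the latching behaviour are illustrative assumptions, not SCHUNK’s actual implementation.<br />

```python
# Illustrative sketch of ACT3: a torque-sensor reading is integrated into the
# actuator control so that an excessive interaction torque triggers a latched
# safe stop (cf. Safe Torque Off, STO). The limit value and latching policy
# are assumptions for illustration only.

TORQUE_LIMIT_NM = 40.0   # assumed admissible interaction torque (N*m)

class SafeActuator:
    def __init__(self):
        self.sto_active = False

    def command(self, desired_torque, measured_torque):
        """Pass the command through unless the torque sensor trips the limit."""
        if abs(measured_torque) > TORQUE_LIMIT_NM:
            self.sto_active = True       # latch the safe stop
        if self.sto_active:
            return 0.0                   # STO: remove torque from the motor
        return desired_torque

act = SafeActuator()
print(act.command(10.0, measured_torque=12.0))   # normal operation -> 10.0
print(act.command(10.0, measured_torque=55.0))   # limit exceeded -> 0.0
print(act.command(10.0, measured_torque=5.0))    # latched: stays at 0.0
```

Latching (rather than automatically resuming when the torque drops) reflects the usual safety practice of requiring an explicit reset after a trip; the real conformance behaviour must follow the applicable safety standard.<br />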
Output<br />
Requirement No.: ACT4<br />
Name: Actuator models for robotic system modelling <strong>and</strong> integration of motor control units (Task<br />
6.2)<br />
Description: The developed models will be the starting point of research on robot control <strong>and</strong> cognitive<br />
control architectures, modelling <strong>and</strong> simulation of the robotic system, its components <strong>and</strong><br />
environment.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: ACT5<br />
Name: The actuator system of the orthosis<br />
Description: The actual actuators selected will be defined in the detailed specification. The actuator<br />
system of the orthosis will be developed by SCHUNK: output power, power source, motor<br />
characteristics (voltage, current consumption, torque, velocity) with nominal and absolute<br />
ratings. It will be integrated into the orthosis in Task 7.5.<br />
A risk analysis in conformance with the European machine guidelines is expected.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: ACT6<br />
Name: Actuation of the mobile platform<br />
Description: The actual actuators selected will be defined in the detailed specification. The actuator<br />
system of the mobile platform will be developed by SCHUNK: output power, power<br />
source, motor characteristics (voltage, current consumption, torque, velocity) with nominal<br />
and absolute ratings. It will be integrated into the mobile platform in Task 7.5.<br />
A risk analysis in conformance with the European machine guidelines is expected.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
5.6.2 Non Functional <strong>Requirements</strong><br />
5.6.2.1 Safety <strong>and</strong> reliability<br />
Requirement No.: ACT7<br />
Name: Safety<br />
Description: The safety solutions integrate an intrinsically safe design, as well as the necessary dual-channel<br />
(redundant) sensor integration and observation according to the man-machine cooperation<br />
standard DIN/ISO 10218.<br />
Reason/Comments<br />
Indicative priority M<strong>and</strong>atory<br />
5.7 Development of low-level control units (Task 7.4, VUB)<br />
From the control point of view, the computational and sensing abilities of the robotic system impose<br />
additional limitations on the robot’s performance. The key contribution to the control system will consist of a<br />
distributed set of interacting microcontroller units which will provide robust full monitoring and easy<br />
expandability.<br />
5.7.1 Functional <strong>Requirements</strong><br />
5.7.1.1 Processes<br />
Inputs<br />
Requirement No.: LLC1<br />
Name: Mechanical design of the powered orthosis <strong>and</strong> mobile platform<br />
Description: Number <strong>and</strong> type of DOFs, their position <strong>and</strong> specifications<br />
Reason/comments To define the main characteristics of the LLC, it is necessary to clarify the specifications of<br />
all actuated DOFs.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: LLC2<br />
Name: Electro-mechanical components<br />
Description: Actuators for every DOF of powered orthosis <strong>and</strong> mobile platform; output power, power<br />
source, motor characteristics (Voltage, current consumption, torque, velocity) with nominal<br />
<strong>and</strong> absolute ratings.<br />
Reason/Comments The performance of the CORBYS control architectures and the constraints on the actuators<br />
are of special importance for the functionality of the whole system. The controller manipulates<br />
the system inputs to obtain the desired effect on the output of the system. To achieve more<br />
accurate control, some characteristics of the electro-mechanical components are to be<br />
discussed with partners.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: LLC3<br />
Name: Controller Inputs<br />
Description: Determine the sensors that will be connected to the controller unit to define number <strong>and</strong> type<br />
of the inputs required. Data sheet characteristics for all sensors.<br />
Reason/Comments Once the sensors have been chosen, it will be necessary to classify which ones will be<br />
connected to a data acquisition system and which ones can be read and used by the LLC as<br />
feedback in the low-level control loop.<br />
Indicative priority M<strong>and</strong>atory<br />
Outputs<br />
Requirement No.: LLC4<br />
Name: Controller Outputs<br />
Description: Specific output ports required to drive actuators or provide information to other control<br />
subsystems.<br />
Reason/Comments Besides driving the actuators, it is necessary to provide power, references or data to<br />
other modules of the system, so peripherals and special hardware components should be<br />
considered.<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: LLC5<br />
Name: Low level control loop<br />
Description: Developing a control scheme to accomplish low level control for each DOF of the system.<br />
Reason/Comments Once all input and output parameters have been defined, it will be necessary to develop a<br />
control scheme and implement the low-level source code. The LLC will operate in accordance<br />
with the main processing unit’s real-time cycle; all to be discussed with partners.<br />
Indicative priority M<strong>and</strong>atory<br />
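The per-DOF loop structure described in LLC5 can be sketched as follows. The cycle time, gains and the admissible-voltage limit (cf. LLC9) are placeholder assumptions; the actual LLC will run as embedded real-time code on a microcontroller, not in Python.<br />

```python
# Sketch of one LLC low-level control cycle for a single DOF: read sensor
# feedback, compute a PI command, saturate it to an admissible voltage
# (cf. LLC9), and return the actuator command. Cycle time, gains and the
# voltage limit are illustrative placeholders.

CYCLE_S = 0.002      # assumed 500 Hz control cycle
KP, KI = 5.0, 2.0    # assumed PI gains
V_MAX = 24.0         # assumed admissible motor-driver voltage (cf. LLC9)

class LowLevelController:
    def __init__(self):
        self.integral = 0.0

    def step(self, reference, measured):
        """One control cycle: error -> PI law -> saturated actuator command."""
        error = reference - measured
        self.integral += error * CYCLE_S
        command = KP * error + KI * self.integral
        # Saturate to the admissible voltage range (safety, cf. LLC9).
        return max(-V_MAX, min(V_MAX, command))

llc = LowLevelController()
print(llc.step(reference=1.0, measured=0.0))   # first cycle on a unit error
```

In the real system, `step` would be invoked once per bus-synchronised cycle, with the reference supplied by the higher control layers and the measurement taken from the sensors classified in LLC3.<br />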
Data flow<br />
Requirement No.: LLC6<br />
Name: Communication protocol<br />
Description: Every LLC will be a node in the system network, receiving and providing data; the<br />
communication protocol should be fast, safe and reliable.<br />
Reason/Comments Considering real time operation it is crucial to have a real time communication protocol.<br />
LLC should integrate modules necessary to accomplish this task.<br />
Indicative priority M<strong>and</strong>atory<br />
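The deliverable does not fix the protocol here (REAL1 later mentions CAN-bus communication). Purely as an illustration, the sketch below packs one LLC feedback sample into a CAN-style 8-byte payload; the frame layout, node id and scaling are invented for the example.

```python
import struct

# Hypothetical frame layout (not specified in D2.1): a CAN-style data
# frame carries at most 8 payload bytes, so one LLC feedback message
# packs a node id, a sequence counter and a scaled joint angle.
NODE_ID_LLC_KNEE = 0x21  # illustrative node id on the system network

def encode_feedback(node_id: int, seq: int, angle_deg: float) -> bytes:
    """Pack an LLC feedback sample into an 8-byte payload.

    The angle is scaled to 0.01-degree ticks and stored as a signed
    32-bit integer; the last two bytes are reserved."""
    ticks = int(round(angle_deg * 100))
    return struct.pack("<BBiH", node_id & 0xFF, seq & 0xFF, ticks, 0)

def decode_feedback(payload: bytes):
    """Inverse of encode_feedback; returns (node_id, seq, angle_deg)."""
    node_id, seq, ticks, _reserved = struct.unpack("<BBiH", payload)
    return node_id, seq, ticks / 100.0

frame = encode_feedback(NODE_ID_LLC_KNEE, 7, -12.34)
assert len(frame) == 8                      # fits a single CAN data frame
assert decode_feedback(frame) == (0x21, 7, -12.34)
```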
5.7.1.2 Interfaces<br />
Requirement No.: LLC7<br />
Name: Test interface<br />
Description: Test graphical user interface (GUI)<br />
Reason/Comments A graphical user interface should be designed to test the functionality of the LLC before it<br />
is integrated into the system.<br />
Indicative priority Mandatory<br />
5.7.2 Non-Functional Requirements<br />
5.7.2.1 Performance<br />
Requirement No.: LLC8<br />
Name: Dimensions and placement in system<br />
Description:<br />
Reason/Comments The placement of the LLC will influence the hardware design complexity and the wiring<br />
architecture. To be discussed with the Consortium.<br />
Indicative priority Desirable<br />
5.7.2.2 Safety and reliability<br />
Requirement No.: LLC9<br />
Name: Safety functional parameters<br />
Description: Safety parameters will be chosen to conform to the general requirements for safety in<br />
human-robot interaction (e.g. admissible voltage for the motor drivers)<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
5.8 Realization of the robotic system (Task 7.5, SCHUNK)<br />
In this task, SCHUNK will be responsible for the assembly of the complete robotic system, which should be<br />
fully functional on completion of this task.<br />
5.8.1 Functional Requirements<br />
5.8.1.1 Processes<br />
Inputs<br />
Requirement No. REAL1<br />
Name: Completed components of the robotic system.<br />
Description: The finalised robotic system will include the mobile platform and the powered orthosis, both<br />
with integrated actuators, sensors, batteries and on-board electronics, all based on CAN-bus<br />
communication and battery power supply.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Processing<br />
Requirement No.: REAL2<br />
Name: Integration of the actuators into the orthosis<br />
Description: Manufacturing and assembly tasks for small-scale miniaturised actuators based on electric<br />
motors and motor control circuitry. Create drawings, bills of materials and 3D models,<br />
and discuss them with the other involved partners (OB, OBMS).<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: REAL3<br />
Name: Integration of the actuators for the mobile manipulator<br />
Description: Manufacturing and assembly tasks for small-scale miniaturised actuators based on electric<br />
motors and motor control circuitry. Create drawings, bills of materials and 3D models, and<br />
discuss them with the other involved partners (UB).<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: REAL4<br />
Name: Sensor integration into the orthosis.<br />
Description: Assembly and electrical interfacing tasks. Software integration of the data interface. Develop<br />
diagnostic software together with the partners SINTEF and UB.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Output<br />
Requirement No.: REAL5<br />
Name: Functional test of mobile manipulator and orthosis.<br />
Description: Diagnostic software to control the actuators and fetch sensor data.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
6 Human Control System<br />
6.1 Canonical subsystems<br />
• BCI detection of cognitive processes that play key roles in motor control and learning (Task 3.3,<br />
UB)<br />
• Design of a brain computer software architecture (Task 3.4, BBT)<br />
• Cognitive control architecture decomposition and definition (Task 6.1, UH)<br />
• Integration of cognitive control modules (Task 6.3, UB)<br />
• Experimenting and evaluation (Task 6.4, VUB)<br />
• Architecture revision and improvement (Task 6.5, UH)<br />
• Final architecture integration and functional testing (Task 6.6, UB)<br />
6.2 BCI detection of cognitive processes that play key roles in motor control and<br />
learning (Task 3.3, UB)<br />
Relevant task:<br />
• BCI detection of cognitive processes that play key roles in motor control and learning<br />
The goal of this task is to detect cognitive information related to motor intention (e.g. intention of leg<br />
motion) via a Brain-Computer Interface (BCI). Electroencephalography (EEG) data will be acquired during<br />
motor execution experiments on healthy subjects. The aim is to investigate the feasibility of<br />
distinguishing between intended and unintended movement.<br />
6.2.1 Functional Requirements<br />
6.2.1.1 Processes<br />
Inputs:<br />
Requirement No. DMI1<br />
Name: Experimental protocol design<br />
Description: Documentation that describes the motor intention BCI experiments through three main<br />
elements: synopsis, data collection and analysis.<br />
Reason / Comments: This protocol is necessary to conduct the motor intention studies.<br />
Indicative priority Mandatory<br />
Requirement No. DMI2<br />
Name: Motor intention study<br />
Description: EEG data in .dat format recorded from healthy subjects<br />
Reason / Comments: This data is necessary for offline signal processing<br />
Indicative priority Mandatory<br />
Outputs:<br />
Requirement No. DMI3<br />
Name: Algorithm for detection of motor intention<br />
Description: MATLAB Program Code<br />
Reason / Comments: This algorithm detects the intention of movement from stored EEG data<br />
Indicative priority Mandatory<br />
Processing:<br />
Requirement No. DMI4<br />
Name: Cue-based motor intention detection algorithm<br />
Description: MATLAB Program Code (bci_Calibration.m)<br />
Reason / Comments: This algorithm detects motor intention based on a calibration session that contains EEG data<br />
locked to an event signal (offline).<br />
Indicative priority Mandatory<br />
Requirement No. DMI5<br />
Name: Self-paced motor intention detection algorithm<br />
Description: MATLAB Program Code (bci_Process.m)<br />
Reason / Comments: This algorithm presents the second approach to detect motor intention based on data<br />
recorded online.<br />
Indicative priority Mandatory<br />
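The actual detection algorithms are MATLAB code (bci_Calibration.m, bci_Process.m) to be developed in this task. The toy sketch below only illustrates the calibration/detection split described in DMI4 and DMI5, using a band-power feature and a midway threshold on synthetic single-channel epochs; the feature, the threshold rule and the data are assumptions, not the CORBYS method.

```python
import numpy as np

# Toy stand-in for the cue-based calibration (DMI4, offline) and the
# self-paced detection (DMI5, online); everything here is illustrative.

def band_power(epoch: np.ndarray) -> float:
    """Mean signal power of one single-channel EEG epoch."""
    return float(np.mean(epoch ** 2))

def calibrate(rest_epochs, intent_epochs) -> float:
    """Cue-based calibration: place a threshold midway between the mean
    band power of rest epochs and of motor-intention epochs."""
    rest = np.mean([band_power(e) for e in rest_epochs])
    intent = np.mean([band_power(e) for e in intent_epochs])
    return (rest + intent) / 2.0

def detect(epoch, threshold) -> bool:
    """Self-paced detection: flag motor intention whenever the incoming
    epoch's band power crosses the calibrated threshold."""
    return band_power(epoch) > threshold

# Synthetic epochs: higher-power samples stand in for intention periods.
rng = np.random.default_rng(0)
rest = [rng.normal(0.0, 1.0, 256) for _ in range(20)]
intent = [rng.normal(0.0, 3.0, 256) for _ in range(20)]
thr = calibrate(rest, intent)
assert detect(intent[0], thr)        # intention epoch crosses the threshold
assert not detect(rest[0], thr)      # rest epoch stays below it
```

A real implementation would instead band-pass filter the EEG and exploit event-related (de)synchronisation; the skeleton above only shows where calibration ends and online processing begins.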
6.2.1.2 Interfaces<br />
This task will use hardware and software that is available at UB to conduct the studies. Therefore, no<br />
interfaces to other sub-systems are necessary to conduct the BCI experiments at UB.<br />
6.2.1.3 Roles and responsibilities<br />
To conduct the experiments that will help to investigate the detection of human cognitive information related<br />
to the motor tasks, the design of a detailed experimental protocol is necessary. This protocol will be designed<br />
and discussed in collaboration with the partner BBT. The experiments will be conducted by UB with healthy<br />
users.<br />
UB will provide the results of offline and online EEG analysis for the detection of motor intention to BBT, in<br />
order to integrate the program code into the Brain-Computer Interface software architecture that will be<br />
developed in Task 3.4.<br />
6.2.1.4 Goals and expectations<br />
The goal of this task is the detection of motor intention (e.g., before an actual leg or foot movement is<br />
executed). It is expected that the signal processing cannot differentiate between the intention of moving the<br />
right versus the left leg or foot.<br />
6.2.1.5 Operating environment<br />
Laboratory: University of Bremen<br />
6.2.1.6 Resources<br />
Hardware:<br />
Data will be acquired through a Porti32 amplifier (Twente Medical Systems International, Netherlands).<br />
Software:<br />
The BCI2000 framework software will be used for signal acquisition and stimulus presentation.<br />
Materials:<br />
33 electrodes and 3 EEG caps are required.<br />
6.3 BCI Software Architecture (Task 3.4, BBT)<br />
Relevant task:<br />
• Design of a brain computer software architecture (Task 3.4, BBT)<br />
This task will provide the software architecture for the integration of the BCI-related devices and of the neural<br />
decoding mechanisms required in CORBYS (Task 3.5).<br />
6.3.1 Functional Requirements<br />
6.3.1.1 Interfaces<br />
Requirement No. BCISW1<br />
Name: BCI communication interface<br />
Description: External interface that communicates with the other subsystems using a TCP/IP message<br />
protocol.<br />
Reason / Comments: Easy communication and integration between subsystems.<br />
Indicative priority Mandatory<br />
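BCISW1 specifies only that the interface exchanges messages over TCP/IP. A minimal loopback sketch, assuming newline-delimited JSON framing and an invented motor-intention message (neither is fixed by D2.1), could look like this:

```python
import json
import socket
import threading

# Hedged sketch of BCISW1: newline-delimited JSON over TCP. The framing
# and the message fields ("type", "p") are assumptions for the demo.

def serve_once(server: socket.socket, results: list) -> None:
    """Accept one connection, read one JSON line, acknowledge it."""
    conn, _addr = server.accept()
    with conn, conn.makefile("rwb") as stream:
        msg = json.loads(stream.readline())
        results.append(msg)
        stream.write(json.dumps({"ack": msg["type"]}).encode() + b"\n")
        stream.flush()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port for the demo
server.listen(1)
received = []
t = threading.Thread(target=serve_once, args=(server, received))
t.start()

# Another subsystem sends a (hypothetical) motor-intention event.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
with client, client.makefile("rwb") as stream:
    event = {"type": "motor_intention", "p": 0.91}
    stream.write(json.dumps(event).encode() + b"\n")
    stream.flush()
    reply = json.loads(stream.readline())

t.join()
server.close()
assert received[0]["p"] == 0.91
assert reply == {"ack": "motor_intention"}
```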
Requirement No. BCISW2<br />
Name: Therapist GUI<br />
Description: The graphical user interface (GUI) allows the therapist to interact with the BCI software.<br />
Reason / Comments: Facilitate the use of the BCI software to non-computer experts.<br />
Indicative priority Mandatory<br />
Requirement No. BCISW3<br />
Name: Subject GUI / User GUI<br />
Description: In the training and decoding process, subjects are asked to perform some tasks.<br />
Reason / Comments: The graphical user interface (GUI) displays commands to the subject (e.g. a visual cue<br />
indicating that the subject has to start walking).<br />
Indicative priority Mandatory<br />
6.3.1.2 Goals and expectations<br />
Requirement No. BCISW4<br />
Name: BCI software portability<br />
Description: The BCI software can run on different operating systems (Windows, Linux, etc.)<br />
Reason / Comments: The BCI software can easily be moved from one environment to another.<br />
Indicative priority Desirable<br />
6.3.1.3 Resources<br />
Requirement No. BCISW5<br />
Name: EEG sensor cap size<br />
Description: The cap is available in 3 sizes (small, medium and large); the most appropriate one needs to<br />
be chosen depending on the subject's head circumference. However, the medium-sized cap is<br />
suitable for over 95% of all adult subjects.<br />
Reason / Comments: An appropriate cap allows a better signal quality to be achieved.<br />
Indicative priority Desirable<br />
Requirement No. BCISW6<br />
Name: EEG electrode locations and number<br />
Description: The EEG electrodes are inserted via small holes in the cap. Their position on the scalp,<br />
indicated on the cap according to the extended international 10-20 system, and their number<br />
depend on which brain areas are activated during a specific cognitive task. Ongoing<br />
CORBYS research will identify these (Tasks 3.3 and 3.5).<br />
Reason / Comments: Specific electrode placements and numbers, related to the cognitive processes required in<br />
CORBYS, are needed.<br />
Indicative priority Mandatory<br />
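The montage itself is left to Tasks 3.3 and 3.5. Purely as an illustration of selecting a 10-20 channel subset once those tasks fix the electrode set, the sketch below assumes a hypothetical cap layout and picks the sensorimotor electrodes C3, Cz and C4 (a common choice for motor paradigms, not the CORBYS montage):

```python
# Hypothetical cap layout using standard 10-20 electrode names; the
# actual CORBYS montage will be defined by Tasks 3.3 and 3.5.
CAP_CHANNELS = ["Fp1", "Fp2", "F3", "Fz", "F4", "C3", "Cz", "C4",
                "P3", "Pz", "P4", "O1", "O2"]

# Illustrative subset over the sensorimotor cortex.
MOTOR_SUBSET = {"C3", "Cz", "C4"}

def channel_indices(selected: set) -> list:
    """Indices of the selected electrodes within the cap layout."""
    return [i for i, name in enumerate(CAP_CHANNELS) if name in selected]

assert channel_indices(MOTOR_SUBSET) == [5, 6, 7]
```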
Requirement No. BCISW7<br />
Name: Subject screen / User screen<br />
Description: The graphical user interface (GUI) displays commands to the subject (e.g. a visual cue<br />
indicating that the subject has to start walking).<br />
Reason / Comments: Hardware needed for Requirement No. BCISW3<br />
Indicative priority Mandatory<br />
Requirement No. BCISW8<br />
Name: Therapist screen<br />
Description: The graphical user interface (GUI) allows the therapist to interact with the BCI software.<br />
Reason / Comments: Hardware needed for Requirement No. BCISW2<br />
Indicative priority Mandatory<br />
Requirement No. BCISW9<br />
Name: BCI processing unit<br />
Description: The minimum computing power needed depends on the results of the ongoing CORBYS<br />
research (e.g. laptop, netbook, personal digital assistant, etc.).<br />
Reason / Comments: Hardware required for the BCI software.<br />
Indicative priority Mandatory<br />
6.3.2 Non-Functional Requirements<br />
6.3.2.1 Other<br />
Requirement No. BCISW10<br />
Name: EEG system montage<br />
Description: A fast and easy EEG system montage (cap and electrode placement) is required. The system<br />
that best satisfies this requirement will be used.<br />
Reason / Comments: A rehabilitation scenario in which the subject feels comfortable is desired.<br />
Indicative priority Mandatory<br />
Requirement No. BCISW11<br />
Name: EEG system portability<br />
Description: An EEG system of reduced size and weight is required. The system that best satisfies this<br />
requirement will be used.<br />
Reason / Comments: A rehabilitation scenario in which the subject feels comfortable is desired.<br />
Indicative priority Mandatory<br />
6.4 Architecture decomposition and definition (Task 6.1, UH)<br />
The possible approaches to including the cognitive structures developed in WP4-5 in robot control structures<br />
will be investigated here. In particular, this task will define the coupling of the SOIAA architecture developed<br />
in WP5 to the CORBYS framework. The core SOIAA architecture will obtain information about the current<br />
status of the human-robot system primarily in the form of low-semantics, uninterpreted sensoric<br />
measurements, such as (depending on the scenario) location, positions of human and robot, angles of human<br />
limbs, forces and accelerations, and transform this into movement patterns proposed by SOIAA. SOIAA will<br />
then send a request to the CORBYS architecture to carry out these movements. The interface to the CORBYS<br />
architecture will filter the behaviours (i.e. movement commands) so that they satisfy the known dynamic<br />
constraints regarding fundamental feasibility, stability, directionality and other requirements<br />
determined externally or explicitly imposed and stored in the CORBYS knowledge base (see Task 5.4). This<br />
ensures compliance with the fundamental feasibility and safety constraints imposed on the system.<br />
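The filtering step described above can be sketched as a simple clamp of proposed SOIAA commands against limits held in the knowledge base. The MoveCommand fields, the joint name and the limit values are hypothetical, not taken from D2.1.

```python
from dataclasses import dataclass

# Illustrative sketch: SOIAA proposes a movement command, and the
# CORBYS-side interface clamps it to feasibility/safety limits. All
# field names and numbers here are invented for the example.

@dataclass
class MoveCommand:
    joint: str
    velocity: float   # rad/s, signed
    torque: float     # Nm, signed

# Stand-in for limits stored in the CORBYS knowledge base (Task 5.4).
LIMITS = {"knee": {"max_velocity": 1.5, "max_torque": 40.0}}

def clamp(value: float, bound: float) -> float:
    """Symmetric clamp of a signed value to [-bound, +bound]."""
    return max(-bound, min(bound, value))

def filter_command(cmd: MoveCommand) -> MoveCommand:
    """Constrain a proposed SOIAA behaviour to the stored safety limits."""
    lim = LIMITS[cmd.joint]
    return MoveCommand(cmd.joint,
                       clamp(cmd.velocity, lim["max_velocity"]),
                       clamp(cmd.torque, lim["max_torque"]))

proposed = MoveCommand("knee", velocity=2.4, torque=-55.0)  # infeasible
safe = filter_command(proposed)
assert (safe.velocity, safe.torque) == (1.5, -40.0)
```

A feasible command passes through unchanged, so the filter is transparent whenever SOIAA already respects the constraints.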
In the second phase, the CORBYS-SOIAA interaction will be extended to support a higher-semantics<br />
(interpreted) sensoric data stream from CORBYS to SOIAA. It will combine the semantically richer,<br />
interpreted, sensoric and metasensoric data provided by the CORBYS architecture (e.g. from the BCI interface<br />
or from the higher levels of the knowledge base) with the low-semantics SOIAA core via fusion techniques.<br />
These higher-level data can be beneficial to SOIAA's estimate of human intention, by providing more<br />
refined and informed estimates. However, since they may be “tainted” by misinterpretation or wrong<br />
assumptions in estimating the human-robot take-over/hand-over goal-setting transitions, the two-level<br />
approach will make it possible to separate the influence of the more precise and objective raw sensoric data<br />
streams from that of the less reliable, more ambiguous interpreted sensoric and meta-sensoric data streams.<br />
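One standard way to realise the two-level separation above is inverse-variance weighting, so that the less reliable interpreted stream cannot dominate the raw sensoric one. This is only an illustrative fusion rule (the deliverable does not name one), and the variance figures are invented.

```python
# Minimal sketch of reliability-weighted fusion of two intention
# estimates; the concrete fusion technique used in CORBYS is left
# open by the text, and these numbers are purely illustrative.

def fuse(raw: float, raw_var: float, interp: float, interp_var: float):
    """Inverse-variance weighted fusion of two estimates.

    Returns the fused estimate and its variance; the stream with the
    smaller variance (here the raw sensoric one) gets the larger weight."""
    w_raw, w_interp = 1.0 / raw_var, 1.0 / interp_var
    fused = (w_raw * raw + w_interp * interp) / (w_raw + w_interp)
    return fused, 1.0 / (w_raw + w_interp)

# Raw sensoric estimate: precise. Interpreted (e.g. BCI-derived): noisy.
est, var = fuse(raw=0.2, raw_var=0.01, interp=0.8, interp_var=0.09)
assert abs(est - 0.26) < 1e-9     # pulled only slightly toward 0.8
assert var < 0.01                 # fusion never increases uncertainty
```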
6.4.1 Functional Requirements<br />
6.4.1.1 Processes<br />
Inputs<br />
Requirement No. ADD1<br />
Name: SOIAA Module<br />
Description: Specification / Documentation / Dependent inputs UH<br />
A detailed description of the SOIAA module is given in the WP5 specifications. Task 6.1 does<br />
not necessarily require a finished version of SOIAA, but a detailed understanding of what<br />
SOIAA does, along with the necessary input information detailed in the SOIAA requirements.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. ADD2<br />
Name: External Safety filtering<br />
Description: To safely reintegrate SOIAA into the CORBYS structure, it is necessary to provide some<br />
safety filtering for the outputs of the SOIAA module.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Output<br />
Requirement No. DAA3<br />
Name: <strong>Specification</strong> of SOIAA Integration Interface<br />
Description: A detailed account of how the different outputs of the SOIAA system should be integrated<br />
into the overall CORBYS framework.<br />
Reason / Comments: This requires in-depth discussions with our partners, specifically regarding the level of<br />
cognitive involvement in the low-level tasks of SOIAA.<br />
Indicative priority Mandatory<br />
6.5 Integration of cognitive control modules (Task 6.3, UB)<br />
Relevant task:<br />
• Integration of cognitive control modules (Task 6.3, UB)<br />
The cognitive control modules that will be developed in Work-Packages 4 and 5 should be integrated with the<br />
models of every robotic subsystem. All subsystems will be consolidated into one functional model of the<br />
cognitive robotic system.<br />
6.5.1 Functional Requirements<br />
6.5.1.1 Processes<br />
Inputs<br />
Requirement No. CCM1<br />
Name: Semantically-Driven Self-Awareness Module (SDSA) sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM2<br />
Name: Situation Assessment Architecture<br />
Description: Program Code / Documentation<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM3<br />
Name: Reflection modules<br />
Description: Program Code/ Documentation<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM4<br />
Name: Expectation Modelling Engine<br />
Description: Program Code/ Documentation<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM5<br />
Name: SOIAA sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM6<br />
Name: Identification and anticipation of human purposeful behaviour and information flow<br />
Description: Program Code / Documentation<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM7<br />
Name: Self-motivated gait and goal generation sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM8<br />
Name: Definition of the cognitive sub-system structure<br />
Description: The definition of the structure of the cognitive sub-system is an output of Task 6.1 and<br />
represents the joint work of UR, UH, UB and VUB.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
° The form of the input will be determined in discussion with the partner responsible.<br />
Outputs:<br />
Requirement No. CCM9<br />
Name: Cognitive sub-system<br />
Description: The cognitive sub-system will be responsible for the overall behaviour of the robotic system.<br />
It will be responsible for long-range activities based on high-level goals and anticipation of<br />
the environment. The cognitive control subsystem will also be responsible for re-planning<br />
activities when the situation demands it.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
6.5.1.2 Interfaces<br />
Requirement No. CCM10<br />
Name: Connection to sensing network sub-system.<br />
Description: The sensing network sub-system should provide pre-processed sensor data to the cognitive<br />
modules at an appropriate rate.<br />
Reason / Comments: The form of the sensor data – TBD.<br />
Indicative priority Mandatory<br />
Requirement No. CCM11<br />
Name: Connection to actuation sub-system<br />
Description: The sub-system will provide the following information to the actuation sub-system: requests<br />
for control action, control parameters and/or reference trajectories for the real-time control units.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CCM12<br />
Name: Connection to HRI<br />
Description: Connection interface – TBD.<br />
Reason / Comments:<br />
Indicative priority Optional<br />
6.5.1.3 Operating environment<br />
Requirement No. CCM13<br />
Name: ROS environment<br />
Description: The cognitive sub-system will be one of the entities of the robot control architecture that will<br />
be implemented using the ROS (Robot Operating System) software.<br />
Reason / Comments: ROS is a framework for robot software development. It provides libraries and tools to help<br />
software developers create robot applications: hardware abstraction, device drivers,<br />
visualisers, message-passing, package management, and more.<br />
Indicative priority Mandatory<br />
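CCM13 mandates ROS as the environment but fixes no interfaces. To stay self-contained (no ROS installation), the sketch below mimics the topic-based message-passing pattern that ROS publishers and subscribers provide; the topic name and message fields are invented.

```python
from collections import defaultdict

# Minimal stand-in for ROS-style topics: modules publish messages to
# named topics and registered subscribers receive them. In a real ROS
# node, publisher/subscriber handles from the ROS client library play
# these roles; this bus only illustrates the pattern.

class TopicBus:
    """Minimal publish/subscribe bus in the spirit of ROS topics."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback) -> None:
        """Register a callback to be invoked for each message on topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        """Deliver a message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []

# The cognitive sub-system subscribes to pre-processed sensor data ...
bus.subscribe("/sensing/preprocessed", received.append)
# ... and the sensing network publishes at its own rate.
bus.publish("/sensing/preprocessed", {"joint": "knee", "angle": 12.5})

assert received == [{"joint": "knee", "angle": 12.5}]
```

Decoupling the cognitive sub-system from the sensing network through named topics is exactly the hardware abstraction ROS provides, which is why CCM10-CCM12 can be specified as connections rather than shared code.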
6.6 Experimenting and evaluating simulations (Task 6.4, VUB)<br />
6.6.1 Functional Requirements<br />
6.6.1.1 Processes<br />
Inputs<br />
Requirement No.: EES1<br />
Name: Model of mechanical design for <strong>CORBYS</strong> system<br />
Description: CAD representation of mechanical construction<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Requirement No.: EES2<br />
Name: Design and modelling for actuation system<br />
Description: Information and documentation<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Requirement No.: EES3<br />
Name: Models for motor control units and cognitive sub-system<br />
Description: MSC Adams, MATLAB files<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Requirement No.: EES4<br />
Name: Models of cognitive control sub-system<br />
Description: MSC Adams, MATLAB files<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Processing<br />
Requirement No.: EES5<br />
Name: Simulation scenarios for healthy persons<br />
Description: Development of various experiments for testing all functionalities and possibilities of the<br />
robot, and evaluation of accuracy and performance in execution time. The types of<br />
experiments will be decided during the development of the project.<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Requirement No.: EES6<br />
Name: Simulation scenarios for impaired persons<br />
Description: A number of different simulation tests will be performed according to the typology of<br />
end-users, i.e. their locomotion abnormalities.<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Requirement No.: EES7<br />
Name: Experiments and evaluation for optimal parameter settings<br />
Description: To obtain clearer results, simulation scenarios will be developed for adapting the<br />
optimal settings to diverse situations.<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
6.6.2 Non-Functional Requirements<br />
6.6.2.1 Other<br />
Requirement No.: EES8<br />
Name: Methodology and design of result data<br />
Description: A methodology for data interpretation and sharing will be adopted.<br />
Reason/Comments<br />
Indicative priority Desirable<br />
6.7 Architecture revision and improvement (Task 6.5, UH)<br />
According to the simulation results obtained in the previous task, revision and improvement of the control<br />
architecture, cognitive modules and simulation models will be performed.<br />
6.7.1 Functional Requirements<br />
6.7.1.1 Processes<br />
Inputs<br />
Requirement No.: ARI1<br />
Name: Simulation Results from Task 6.4<br />
Description:<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Requirement No.: ARI2<br />
Name: Measurable Cognitive Success Criteria<br />
Description: To determine whether the status of the project is satisfactory, or where improvement should<br />
be made, it is necessary to define measurable criteria against which the results of Task 6.4<br />
can be compared. The exact criteria are yet to be decided, and should be informed by the<br />
overall CORBYS design documentation and requirements.<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
Outputs<br />
Requirement No.: ARI3<br />
Name: Improvement of Cognitive Architecture<br />
Description: The specific nature of this output depends heavily on the test results, which are not yet<br />
available. If the test results are satisfactory with regard to the system specifications, this<br />
output becomes optional.<br />
Reason/Comments<br />
Indicative priority Mandatory<br />
6.8 Final architecture integration and functional testing (Task 6.6, UB)<br />
Relevant task:<br />
• Final architecture integration and functional testing (Task 6.6, UB)<br />
In this task, all modules and system components that have been researched and developed through WP 3-5<br />
will be combined into the final cognitive control architecture. Thereby, the research and development on the<br />
cognitive control architecture and all modules will be finished and verified, in order to enable real-system<br />
evaluation.<br />
6.8.1 Functional Requirements<br />
6.8.1.1 Processes<br />
Inputs:<br />
Requirement No. FAI1<br />
Name: Sensor network architecture and sensor processing sub-systems<br />
Description: Program Code / Documentation<br />
Reason / Comments: All sensor data shall be displayable in the different kinds of user interfaces.<br />
Indicative priority Mandatory<br />
Requirement No. FAI2<br />
Name: SDSA sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI3<br />
Name: Situation Assessment Architecture<br />
Description: Program Code / Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI4<br />
Name: Reflection modules<br />
Description: Program Code / Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI5<br />
Name: Expectation Modelling Engine<br />
Description: Program Code / Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI6<br />
Name: SOIAA sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI7<br />
Name: Identification and anticipation of human purposeful behaviour and information flow<br />
Description: Program Code / Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI8<br />
Name: Self-motivated gait and goal generation sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI9<br />
Name: BCI sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments: The BCI results shall be displayable for the user in order to be able to verify system<br />
decisions. The form of the input will be determined in discussion with the partner<br />
responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI10<br />
Name: Actuation sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments: Hardware components must be accessible for the architecture in order to execute required<br />
actions. The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI11<br />
Name: Real-time control sub-system<br />
Description: Program Code / Documentation<br />
Reason / Comments: The control sub-system shall also be accessible to the architecture, for giving input to the<br />
controller module if necessary. The form of the input will be determined in discussion with<br />
the partner responsible.<br />
Indicative priority Mandatory<br />
Requirement No. FAI12<br />
Name: User interface<br />
Description: Development of the user interface as necessary for functional testing and evaluation purposes.<br />
Reason / Comments: The user interface is required, since it is indispensable for testing the whole architecture<br />
with all its subsystems.<br />
Indicative priority Mandatory<br />
Requirement No. FAI13<br />
Name: User interface for patients<br />
Description: A minimised user interface presenting only the required <strong>and</strong> necessary information during runtime, in order not to overburden the patient with technical details. This user interface serves mainly to initiate/abort actions of the system.<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. FAI14<br />
Name: User interface for <strong>CORBYS</strong> Professional / Expert User<br />
Description: This user interface is adjusted to suit the needs of the therapists, allowing them to manipulate the desired system reactions <strong>and</strong> change parameters during runtime in order to achieve the best possible therapeutic results.<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. FAI15<br />
Name: User interface for developers<br />
Description: This user interface serves all kinds of developers. It is equipped with several modes that can display/manipulate every detail of every subsystem in order to debug/improve the components of the system.<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. FAI16<br />
Name: Cognitive user interface<br />
Description: This user interface is designed for the specific needs of the cognitive modules, allowing the cognitive modules to be adjusted.<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority M<strong>and</strong>atory<br />
Outputs:<br />
Requirement No. FAI17<br />
Name: Robot cognitive control architecture<br />
Description: Program Code / Documentation<br />
Reason / Comments: Final architecture combining all components into one system<br />
Indicative priority M<strong>and</strong>atory<br />
6.8.1.2 Interfaces<br />
Viewed as a sub-system, the software control architecture is the interface between the robot hardware <strong>and</strong> the robot intelligence. All software sub-systems listed in the table above (the list is not yet finalised) will communicate with each other <strong>and</strong> together form the “brain” of the robotic system. The software framework that “glues” together all other modules will provide basic functionalities such as inter-module communication, scheduling <strong>and</strong> hardware abstraction. For the communication between the modules, the Robot Operating System (ROS) will be used as the framework for robot software development. The communication will be implemented in C++ in an open manner, using platform- <strong>and</strong> programming-language-independent interfaces. All information will be h<strong>and</strong>led in data containers that can be configured for every specific need <strong>and</strong> identified by their recipients for further use. Desired data has to be explicitly requested; a request can target either a single value or a data stream.<br />
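The data-container request pattern described above can be sketched in C++. The names used here (DataContainer, RequestType, makeRequest) are illustrative assumptions for this document only, not part of the CORBYS code base or the ROS API:<br />

```cpp
// Hypothetical sketch of the "data-container" exchange described above.
// Names are illustrative, not part of CORBYS.
#include <cassert>
#include <map>
#include <string>

enum class RequestType { SingleValue, Stream };

// A container that is configured per need and identified by its recipient.
struct DataContainer {
    std::string recipient;                  // module that will consume the data
    RequestType type;                       // single value or continuous stream
    std::map<std::string, double> payload;  // named sensor/actuator values
};

// Data must be explicitly requested: a module builds a request container.
inline DataContainer makeRequest(const std::string& recipient,
                                 RequestType type) {
    return DataContainer{recipient, type, {}};
}
```

A requesting module would fill the payload only after the providing module answers, keeping the request itself lightweight.<br />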
Requirement No. FAI18<br />
Name: Interface to sensor network <strong>and</strong> actuation system (CAN)<br />
Description: In order to control the robotic system <strong>and</strong> to sense the environment <strong>and</strong> the robot’s internal state, it is necessary to communicate sensor data <strong>and</strong> exchange it between the software modules of the robotic system. A CAN interface will be used for communication. The structure of the CAN contents is still to be determined (TBD).<br />
Reason / Comments: The CAN network is a widely used protocol in robotics, automation <strong>and</strong> the automotive industry. It is supported by a large number of manufacturers <strong>and</strong> is a de facto st<strong>and</strong>ard for complex robotic systems.<br />
Indicative priority M<strong>and</strong>atory<br />
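Since the structure of the CAN contents is still TBD (FAI18), the following is only an illustrative sketch of a classic CAN data frame carrying one packed sensor sample; the struct layout and little-endian packing are assumptions, not the agreed CORBYS format:<br />

```cpp
// Illustrative layout of a classic CAN data frame; the actual structure
// of CAN contents in CORBYS is still to be determined (see FAI18).
#include <cassert>
#include <cstdint>
#include <cstring>

struct CanFrame {
    uint32_t id;       // 11-bit (standard) or 29-bit (extended) identifier
    uint8_t  dlc;      // data length code: number of valid payload bytes (0-8)
    uint8_t  data[8];  // payload, e.g. one packed sensor reading
};

// Pack a 32-bit sensor sample into a frame (assumed little-endian layout).
inline CanFrame packSample(uint32_t id, uint32_t sample) {
    CanFrame f{};
    f.id = id;
    f.dlc = 4;
    std::memcpy(f.data, &sample, sizeof(sample));
    return f;
}
```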
Requirement No. FAI19<br />
Name: Interface to user interface<br />
Description: TBD<br />
Reason / Comments: TBD<br />
Indicative priority Optional<br />
6.8.1.3 Operating environment<br />
Requirement No. FAI20<br />
Name: ROS environment<br />
Description: The cognitive sub-system will be one of the entities of the robot control architecture that will<br />
be implemented using the ROS (Robot Operating System) software.<br />
Reason / Comments: ROS is a framework for robot software development. It provides libraries <strong>and</strong> tools to help<br />
software developers create robot applications. It provides hardware abstraction, device<br />
drivers, libraries, visualisers, message-passing, package management, <strong>and</strong> more.<br />
Indicative priority M<strong>and</strong>atory<br />
6.8.2 Non Functional <strong>Requirements</strong><br />
6.8.2.1 Safety <strong>and</strong> reliability<br />
Requirement No. FAI21<br />
Name: Code st<strong>and</strong>ards<br />
Description: Software design based on MISRA C, MISRA C++ or similar software development st<strong>and</strong>ards for safety-critical systems (applicable only to real-time control sub-modules). Usage of analysis software such as PC-Lint, LDRA or similar would be beneficial.<br />
Reason / Comments: In order to make the system safe for potential users.<br />
Indicative priority M<strong>and</strong>atory<br />
7 Robohumatic Systems (Graceful Robot-Human Interactive Cooperative Systems)<br />
7.1 Canonical subsystems<br />
• Human-robot sharing of cognitive information (TASK 3.5, BBT)<br />
• Device Ontology Modelling (TASK 4.1, UR)<br />
• Self-Awareness Realisation (TASK 4.2, UR)<br />
• Robot response to a situation (TASK 4.3, UR)<br />
• Expectation as a specification of an anticipated outcome (TASK 4.4, UH)<br />
• Development of self-motivated gait <strong>and</strong> goal generator for the cognitive architecture (TASK 5.1, UH)<br />
• Models <strong>and</strong> algorithms for the identification <strong>and</strong> anticipation of human purposeful behaviour (TASK 5.2, UH)<br />
• Algorithms for measurement of anticipatory information flow between robot <strong>and</strong> human <strong>and</strong> vice versa (TASK 5.3, UH)<br />
• Development of framework for transitional dynamics between robot-initiated <strong>and</strong> human-initiated behaviours (TASK 5.4, UH)<br />
• User Responsive Learning <strong>and</strong> Adaptation Framework (TASK 5.5, UR)<br />
• Cognitive adaptation of low level controllers (TASK 5.6, UB)<br />
7.2 BCI Cognitive Information (Task 3.5, BBT)<br />
Relevant task:<br />
• Human-robot sharing of cognitive information (TASK 3.5, BBT)<br />
7.2.1 Functional <strong>Requirements</strong><br />
7.2.1.1 Processes<br />
Input<br />
Requirement No. BCI1<br />
Name: Execution Mode<br />
Description: Indicates which operation, training or decoding, is going to be used.<br />
The BCI requires a machine-free training stage before users can operate the technology. The training process modifies internal parameters that the decoding process subsequently uses. This procedure must be carried out for each cognitive-related task that is planned to be used. A training phase is also needed for the artefact-removal processing (Training artefacts).<br />
The possible values of the execution mode input are:<br />
• Training motion *<br />
• Training feedback<br />
• Training attention<br />
• Training artefacts<br />
• Decoding motion<br />
• Decoding feedback<br />
• Decoding attention<br />
• Decoding motion & feedback<br />
• Decoding motion & attention<br />
• Decoding feedback & attention<br />
• Decoding motion & feedback & attention<br />
• Stop<br />
The independence of the cognitive-related tasks allows their simultaneous execution in the decoding process (e.g. Decoding motion & feedback, Decoding feedback & attention, etc.).<br />
A stop input value has also been added to allow the interruption of the running processes.<br />
* Motion, feedback <strong>and</strong> attention are abbreviations for intention of legs motion, feedback error-related potential <strong>and</strong> attention states, respectively.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
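Because the three cognitive-related tasks are independent, the execution-mode values listed in BCI1 could, for illustration, be encoded as a bitmask so that combined decoding modes compose naturally. The enum and helper below are invented for this sketch and do not prescribe the CORBYS interface:<br />

```cpp
// Sketch of the BCI1 execution-mode input as a bitmask. Names are
// illustrative assumptions, not the CORBYS API.
#include <cassert>
#include <cstdint>

enum ExecutionMode : uint8_t {
    Stop      = 0,
    Motion    = 1 << 0,  // intention of legs motion
    Feedback  = 1 << 1,  // feedback error-related potential
    Attention = 1 << 2,  // attention states
    Training  = 1 << 3   // set: training phase; clear: decoding phase
};

// True if the given task is currently being decoded (not training, not stopped).
inline bool isDecoding(uint8_t mode, ExecutionMode task) {
    return mode != Stop && !(mode & Training) && (mode & task);
}
```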
Requirement No. BCI2<br />
Name: Configuration File<br />
Description: Values of the optional parameters; a default setting is provided.<br />
A list of possible optional parameters is given below:<br />
• Number of electrodes to be used.<br />
• Sampling rate of the EEG signal.<br />
• Decoder parameters.<br />
A complete list will be provided depending on the results of the ongoing <strong>CORBYS</strong> research.<br />
Reason / Comments:<br />
Indicative priority Optional<br />
Output<br />
Requirement No. BCI3<br />
Name: Raw EEG<br />
Description: Electroencephalographic signal acquired by the BCI hardware<br />
Reason / Comments:<br />
Indicative priority Optional<br />
Requirement No. BCI4<br />
Name: Filtered EEG<br />
Description: Electroencephalographic signal filtered from occurring artefacts<br />
Reason / Comments:<br />
Indicative priority Optional<br />
The tables above show the inputs <strong>and</strong> outputs of the Brain Computer Interface (BCI) subsystem. Beyond these, each cognitive-related subtask may require <strong>and</strong>/or provide further information (see the following tables).<br />
Subtask 3.5.1 -- Intention of legs motion<br />
Several studies have demonstrated the appearance of EEG activity preceding human voluntary movement. These signals are associated with motor-task preparation <strong>and</strong> are distinct from those recorded during the actual execution. In this subtask we will study the appearance of such signals related to right- <strong>and</strong> left-leg movement.<br />
Output<br />
Requirement No. BCI5<br />
Name: Intention of legs motion decoding flag<br />
Description: Information about which leg the subject is going to move. It also provides a “no movement”<br />
output value<br />
The intention of legs motion decoding flag output can take the following values: right leg,<br />
left leg <strong>and</strong> no movement.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. BCI6<br />
Name: Decoding accuracy<br />
Description: It provides a numerical value (e.g. a percentage) related to the ability of the BCI subsystem to detect the intention of legs motion.<br />
Reason / Comments: Information about the performance in decoding intention of legs motion.<br />
Indicative priority M<strong>and</strong>atory<br />
Subtask 3.5.2 -- Feedback error-related potential<br />
A type of error-related potential is produced when a subject is informed that he or she has committed an error. The brain signal following incorrect feedback differs from the signal following correct feedback. Based on this theoretical background, subtask 3.5.2 will analyse the presence of the feedback error-related potential in a motor task.<br />
Input<br />
Requirement No. BCI7<br />
Name: Error marker<br />
Description: Feedback stimulus that informs the subject about the correctness of their response to a specific task.<br />
The feedback stimulus, presented after the accomplishment of a task, informs the subject about the correctness of his or her response <strong>and</strong> therefore provides the critical information that enables error detection. The feedback stimulus, provided by other subsystems, can be auditory, visual, somatosensory, etc.<br />
The error marker input is a time marker that informs the BCI subsystem when the feedback<br />
is presented to the subject.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Outputs<br />
Requirement No. BCI8<br />
Name: Feedback error-related potential decoding flag<br />
Description: Information about the presence of the feedback error-related potential in the brain signal<br />
The feedback error-related potential decoding flag output can take the values present or absent, depending on the presence of the feedback error-related potential in the brain signal.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. BCI9<br />
Name: Decoding accuracy<br />
Description: It provides a numerical value (e.g. a percentage) related to the ability of the BCI subsystem to detect the feedback error-related potential.<br />
Reason / Comments: Information about the performance in decoding feedback error-related potential.<br />
Indicative priority M<strong>and</strong>atory<br />
Attention states (Deliverable 3.3, BBT)<br />
Several studies have focused on the ability of the subject to maintain a consistent behavioural response during continuous <strong>and</strong> repetitive activity. Since in robotic rehabilitation the user has to face repeated <strong>and</strong> unchallenging stimuli, Deliverable 3.3 will analyse the user’s attention level in such scenarios.<br />
Outputs<br />
Requirement No. BCI10<br />
Name: Attention states decoding flag<br />
Description: Information about the subject’s attention level during a specific task<br />
The attention states decoding flag output provides a numerical value indicating the user’s<br />
level of attention.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. BCI11<br />
Name: Decoding accuracy<br />
Description: It provides a numerical value (e.g. a percentage) related to the ability of the BCI subsystem to detect the attention states.<br />
Reason / Comments: Information about the performance in decoding attention states.<br />
Indicative priority M<strong>and</strong>atory<br />
7.2.1.2 Goals <strong>and</strong> expectations<br />
It is expected that, for some diseases, the decoding accuracy related to cognitive processes will not be as high as in healthy subjects.<br />
7.3 Device Ontology Modelling (Task 4.1, UR)<br />
Ontology modelling for all the devices involved in the <strong>CORBYS</strong> system enables representation of the various sensors <strong>and</strong> actuators that the system uses <strong>and</strong> interacts with. This ontology contains information about the properties <strong>and</strong> constraints of the devices, which allows the robot to use the actuators as well as to accurately interpret the information received from the sensors. The ontological representation of this information allows reasoning to take place in cases where a specific device model is not readily available but relationships can be found.<br />
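The fallback-reasoning idea above can be sketched minimally: when a specific device model is unknown, walk up an "is-a" relationship to the nearest known ancestor. The class, relation names and example devices below are invented for illustration and are not the CORBYS ontology:<br />

```cpp
// Minimal sketch of the device-ontology lookup idea: if a specific device
// model is not available, reason over an "is-a" relationship to find the
// nearest known ancestor. Names are invented examples.
#include <cassert>
#include <map>
#include <string>

struct Ontology {
    std::map<std::string, std::string> parent;  // device -> more general class
    std::map<std::string, std::string> props;   // known property sets

    // Walk up the is-a hierarchy until a known property set is found.
    std::string resolve(std::string device) const {
        while (!props.count(device)) {
            auto it = parent.find(device);
            if (it == parent.end()) return "";  // nothing known
            device = it->second;
        }
        return props.at(device);
    }
};
```

A real ontology would of course carry structured constraints rather than strings; the point is only the relationship-based fallback.<br />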
7.3.1 Functional <strong>Requirements</strong><br />
7.3.1.1 Processes<br />
Inputs<br />
Requirement No. DMO1<br />
Name: Device identification <strong>and</strong> selection<br />
Description: Identification <strong>and</strong> selection of models of appropriate devices which the <strong>CORBYS</strong> systems<br />
should be able to use<br />
Reason / Comments: Selection takes place during specification stage WP2, in discussion with all partners<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: DMO2<br />
Name: Semantics Extraction<br />
Description: The identified <strong>and</strong> selected devices are to be studied <strong>and</strong> their respective semantics extracted for integration into the device ontology, including information on scope, domain, spatio-temporal aspects, Finite State Machine representations, functions, roles, responsibilities <strong>and</strong> attributions.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: DMO3<br />
Name: Formulation of Device Ontology<br />
Description: The identified <strong>and</strong> selected list of devices <strong>and</strong> the extracted semantics are used to formulate the device ontology.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Output<br />
Requirement No.: DMO4<br />
Name: Device ontology<br />
Description: Ontological representation of all selected devices used by the <strong>CORBYS</strong> systems<br />
Reason / Comments: To allow the system to effectively use actuators, accurately interpret sensor information <strong>and</strong> allow reasoning to take place in cases where a specific device model is not readily available but relationships can be found.<br />
Indicative priority M<strong>and</strong>atory<br />
7.3.1.2 Interfaces<br />
Requirement No.: DMO5<br />
Name: Device state integration with Device State Integrator (DSI)<br />
Description: Device ontology to assist device state integration<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
7.3.2 Assumptions <strong>and</strong> Dependencies<br />
• Modelling the device ontology depends on input from partners regarding the selected sensors, actuators <strong>and</strong> other devices to be used in <strong>CORBYS</strong><br />
7.4 Self-Awareness Realisation (Task 4.2, UR)<br />
Self-Awareness supported by Situation Assessment is an essential part of a cognitive system. It allows the<br />
system to interact with the environment in the most efficient <strong>and</strong> sensible way without the interference of an<br />
external guide. The <strong>CORBYS</strong> Situation Assessment Architecture facilitates (i) awareness of available sensors<br />
<strong>and</strong> actuators (via the device ontology) <strong>and</strong> (ii) awareness about the relationship of these devices to the<br />
environment including humans. <strong>CORBYS</strong> Situation Assessment Architecture includes a number of modules<br />
such as semantic integrators for person, device/object/entity, ontological layer for devices, domain specific<br />
(Plug <strong>and</strong> Play) ontologies <strong>and</strong> a blackboard structure acting as a globally accessible facility for situation<br />
assessment.<br />
A Device State Integrator (DSI) acts as the interface or interpreter between the sensors/actuators <strong>and</strong> the core<br />
system, using the information in the device ontology. DSI fuses all the information <strong>and</strong> makes it available to<br />
the core-architecture for further processing via the blackboard.<br />
Awareness about the relationship to the environment entails positioning, interaction possibilities with the<br />
environment, potential workflows, process flows, process maps, etc. This information is provided by an<br />
Object State Integrator (OSI) which analyses the received input information from sensor data <strong>and</strong> creates a<br />
cognitive image of the environment.<br />
A Person State Integrator (PSI) allows for the representation of the interacting person, including information<br />
such as position of the person, profile integration etc.<br />
The integrated Situation-Awareness Blackboard (SAWBB) in <strong>CORBYS</strong> allows access to designated device-agents for relevant updates of profiling knowledge as well as the person’s current states, events, <strong>and</strong> behaviours detected by all the sensor <strong>and</strong> reasoning sub-systems. It serves to facilitate symbolic knowledge integration <strong>and</strong> inferencing at the higher semantic-fusion level, <strong>and</strong> enables appropriate dynamic data/knowledge sharing regarding the semantic parametric values of the situated operational context.<br />
SAWBB is a layered architecture supporting the sharing <strong>and</strong> integration of data at various levels of abstraction, including raw data from sensors etc.; the upper layer takes the form of an event h<strong>and</strong>ler/look-up table <strong>and</strong> the second layer takes the form of a database storing historic <strong>and</strong> profiling information. SAWBB is a part of the high-level cognitive <strong>CORBYS</strong> control architecture; it connects the Situation Assessment Architecture with the module responsible for perceiving, focusing, cognition, learning <strong>and</strong> responding to the environment. An important part of this module is the internal representation of the robot’s self-model <strong>and</strong> self-state awareness, which allows the system to reason <strong>and</strong> act based on its status, the context of the assigned task <strong>and</strong> event anticipation. Two types of memory will be realised, i.e. one representing the current state of<br />
task execution (M1) <strong>and</strong> the second being the representation of the world model (M2).<br />
The Situation Assessment Architecture enables the sharing of raw sensor data from the Blackboard to support low-level sensor data fusion amongst reasoning sub-systems, particularly to serve those modules within the Data Analysis Abstraction <strong>and</strong> Fusion Layer with variously designated Anticipation, Empowerment, Classification <strong>and</strong> Recognition responsibilities. Accordingly, the blackboard can receive data from any sensor configuration on the system, e.g. the body area networks in the first demonstrator as well as the mobile robotic system for examining hazardous areas. The output is the data <strong>and</strong> knowledge that can be made available at various levels of semantic abstraction for access as dem<strong>and</strong>ed by the agents.<br />
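The two-layer SAWBB described above can be sketched as a blackboard with an upper current-state layer (look-up) and a lower historic layer. The class and method names are assumptions for illustration; the real SAWBB additionally handles role-based access, events and profiling:<br />

```cpp
// Sketch of a two-layer blackboard: an upper layer holding the current
// context and a lower layer logging history. Interfaces are assumptions.
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

class Blackboard {
    std::map<std::string, std::string> current;            // upper layer
    std::vector<std::pair<std::string, std::string>> log;  // historic layer
public:
    // Integrators (PSI/DSI/OSI) assert state descriptors onto the board.
    void post(const std::string& key, const std::string& descriptor) {
        current[key] = descriptor;
        log.emplace_back(key, descriptor);
    }
    // Reasoning sub-systems read the current context.
    std::string read(const std::string& key) const {
        auto it = current.find(key);
        return it == current.end() ? "" : it->second;
    }
    std::size_t historySize() const { return log.size(); }
};
```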
7.4.1 Functional <strong>Requirements</strong><br />
7.4.1.1 Processes<br />
Inputs<br />
Requirement No. SAWR1<br />
Name: Sensor input to Situation Awareness Blackboard (SAWBB)<br />
Description: From various sensing <strong>and</strong> data-capture device-agents within the system, e.g. the Person State Integrator etc.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR2<br />
Name: Semantic input to Situation Awareness Blackboard (SAWBB)<br />
Description: For example, periodic profiling-data refresh of the Situation-Awareness Blackboard (SAWBB) with semantically resolved structured profiling knowledge to re-update its operational profiling reference values for use by all relevant sub-systems.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR3<br />
Name: Event h<strong>and</strong>ling request to Situation Awareness Blackboard (SAWBB)<br />
Description: Requests that flag that a module should take action (event h<strong>and</strong>ling)<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR4<br />
Name: Sensor input to Person State Integrator (PSI)<br />
Description: Sensors <strong>and</strong> devices detecting <strong>and</strong> tracking person states provide input to the PSI so that an integrated structured state descriptor of the person at any given time is sent to the SAWBB.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR5<br />
Name: Sensor input to Device State Integrator (DSI)<br />
Description: Device inputs, i.e. from sensors/actuators. All input is asserted onto the SAWBB.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR6<br />
Name: Sensor input to Object State Integrator (OSI)<br />
Description: Sensor inputs are used to create a cognitive image of the environment, including the relationship of an object to the environment, positioning, interaction possibilities with the environment,<br />
potential workflows, process flows, process maps etc.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No. SAWR7<br />
Name: Person ontology modelling<br />
Description: To formulate a relevant ontology deduced from knowledge of person states, including information on scope, domain, spatio-temporal aspects, Finite State Machine representations, functions, roles, responsibilities <strong>and</strong> attributions. This is used to arrive at a structured person-state description update which is provided to the Situation-Awareness Blackboard.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR8<br />
Name: Person state integration<br />
Description: Integration of person states using person ontology. Combine different hypotheses from<br />
various sensor modalities to compute structured person state description.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
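The hypothesis combination in SAWR8 can be illustrated with a minimal fusion sketch: hypotheses from several sensor modalities carry confidence values and are merged into one state descriptor. The fusion rule here (highest summed confidence per state) and all names are assumptions; the actual fusion techniques are selected in SAWR13:<br />

```cpp
// Sketch of SAWR8-style person-state integration: hypotheses from several
// modalities are fused by summing confidences per candidate state and
// picking the best. The rule and names are illustrative assumptions.
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Hypothesis {
    std::string modality;    // e.g. "gait_sensor", "camera"
    std::string state;       // e.g. "walking", "standing"
    double confidence;       // 0.0 .. 1.0
};

inline std::string fuse(const std::vector<Hypothesis>& hs) {
    std::map<std::string, double> score;
    for (const auto& h : hs) score[h.state] += h.confidence;
    std::pair<std::string, double> best{"unknown", 0.0};
    for (const auto& kv : score)
        if (kv.second > best.second) best = kv;
    return best.first;  // "unknown" when no hypothesis was supplied
}
```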
Requirement No. SAWR9<br />
Name: Device state integration<br />
Description: Integration of device states using device ontology.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR10<br />
Name: Object ontology modelling<br />
Description: To formulate a relevant ontology deduced from knowledge of objects <strong>and</strong> their states likely to be encountered by the robotic system.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR11<br />
Name: Object state integration<br />
Description: Integration of object states to create a cognitive image of the environment.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR12<br />
Name: Establish working memory structure for SAWBB<br />
Description: Establish the local <strong>and</strong> global (shared) dynamically update-able working memory structure.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SAWR13<br />
Name: Semantic fusion<br />
Description: Multi-level data fusion to ensure semantic integration to serve situation-assessment (self-awareness) updates. Selection of appropriate fusion techniques used for the various state integrators.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Output<br />
Requirement No.: SAWR14<br />
Name: Sensor output from Situation Awareness Blackboard (SAWBB)<br />
Description: Allow role-based secure read/write access to various sensing <strong>and</strong> data-capture device-agents, e.g. the Person State Integrator, to enable global <strong>and</strong> local data <strong>and</strong> knowledge exchange amongst relevant device-agents regarding the relevant states of the person.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SAWR15<br />
Name: Semantic output from Situation Awareness Blackboard (SAWBB)<br />
Description: For example, periodic uploads to the profiling integration of new profiling-related data published on the Situation-Awareness Blackboard (SAWBB) by sensors <strong>and</strong> detectors during the operational cycle just ended.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SAWR16<br />
Name: Current context of SAWBB<br />
Description: The current context that is represented on the SAWBB<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SAWR17<br />
Name: Person state descriptors by PSI<br />
Description: Semantic parametric descriptors (hypotheses with variable levels of confidence) at various levels of abstraction regarding person states. Integrated structured person-state descriptors are asserted onto the Situation-Awareness Blackboard (SAWBB).<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SAWR18<br />
Name: Device state descriptors by DSI<br />
Description: Semantic parametric descriptors (hypotheses with variable levels of confidence) at various levels of abstraction regarding device states.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SAWR19<br />
Name: Object state descriptors by OSI<br />
Description: Semantic parametric descriptors (hypotheses with variable levels of confidence) at various levels of abstraction regarding the states of objects/entities in the environment.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
7.4.1.2 Interfaces<br />
• Same as Requirement “DMO5”<br />
• All modules interface with SAWBB<br />
7.4.1.3 Operating environment<br />
• SAWR: including SAWBB, PSI, DSI, OSI<br />
o The operation of this module is automated.<br />
o Operating systems required/supported: MS Windows XP/Vista/7<br />
7.4.2 Assumptions <strong>and</strong> Dependencies<br />
Internal<br />
• PSI depends on SAWBB<br />
• DSI depends on SAWBB<br />
• OSI depends on SAWBB<br />
• All other modules depend on SAWBB<br />
7.5 Robot response to a situation (Task 4.3, UR)<br />
For an effective control system, the behaviour is divided into reflexive <strong>and</strong> learned (reflective) behaviour. A hardware <strong>and</strong> software co-design will be used for this purpose. Reflexes, as in humans, are pre-compiled, i.e. they are automatic without the need for elaboration or thinking. In <strong>CORBYS</strong>, this will be realised via hardware-implemented algorithms for obstacle avoidance, navigation, safety issues, etc. Learned reflective behaviour, on the other h<strong>and</strong>, is constantly updated <strong>and</strong> adapted to the situation; this will be implemented in software.<br />
Hardware Reflex Capability will entail the implementation of invariant reflexes of the system, such as avoidance of certain obstacles, using low-level recognition algorithms realised in an FPGA to allow for fast prototyping <strong>and</strong> re-configurability. Software Learning Modules, in turn, will realise the learning behaviour in software, using machine learning for pattern discovery, pattern-directed inference, <strong>and</strong> learning <strong>and</strong> refinement of the cognitive map.<br />
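The reflexive/reflective split above can be illustrated with a small software simulation of the dispatch logic: the fixed reflex (realised in an FPGA in CORBYS) is checked first and always wins, while the learned policy is replaceable at runtime. All names, thresholds and the software-only setting are assumptions for this sketch:<br />

```cpp
// Illustrative split between a pre-compiled reflex (hardware in CORBYS)
// and a learned, software-updated policy. This simulation is an
// assumption, not the project implementation.
#include <cassert>
#include <functional>
#include <string>

struct Controller {
    // Reflex: fixed, checked first, cannot be overridden at runtime.
    bool obstacleTooClose(double distance_m) const { return distance_m < 0.3; }

    // Learned policy: replaceable as the software modules adapt.
    std::function<std::string(double)> learned =
        [](double) { return "continue"; };

    std::string decide(double distance_m) const {
        if (obstacleTooClose(distance_m)) return "stop";  // reflex wins
        return learned(distance_m);                       // reflective path
    }
};
```

Swapping in a new learned policy changes behaviour away from obstacles, but the reflex threshold stays invariant, mirroring the hardware/software co-design.<br />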
7.5.1 Functional <strong>Requirements</strong><br />
7.5.1.1 Processes<br />
Inputs<br />
Requirement No. RRS1<br />
Name: Inputs <strong>and</strong> connections to the FPGA sub-system from the mechatronics sub-system<br />
Description: Any inputs directly feeding into the FPGA sub-system, signal conditioning, Analog to<br />
Digital conversion (ADC), data rate, data bus width, data format etc.<br />
Number <strong>and</strong> type of the sensors attached to the subject<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. RRS2<br />
Name: Inputs to FPGA sub-system from various software modules<br />
Description: Such as the Situational Awareness modules etc.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. RRS3<br />
Name: Scalability to support inputs <strong>and</strong> connections from hardware transducers/devices<br />
Description: Within the scope <strong>and</strong> duration of the project, the FPGA sub-system should have enough I/O<br />
capability available for any future hardware transducers, their data rate, data bus width, data<br />
format, signal conditioning, Analog to Digital conversion etc.<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No. RRS4<br />
Name: Continuous monitoring<br />
Description: Identify any physical quantities or situations that need to be monitored or polled continuously,<br />
together with their criticality, as these may have an impact on the power requirements <strong>and</strong><br />
the architecture of the FPGA hardware sub-system. Special requirements such as signal<br />
conditioning <strong>and</strong> ADC also need to be identified.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. RRS5<br />
Name: Event handling<br />
Description: Identify the events that need to be handled during normal execution, together with their<br />
criticality, <strong>and</strong> any events that may have an impact on the power requirements <strong>and</strong><br />
architecture of the FPGA hardware sub-system, such as priority interrupts.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Processing<br />
Requirement No.: RRS6<br />
Name: Identification of reflexive capability to be enabled by the FPGA sub-system.<br />
Description: To identify the feasibility in terms of implementation in the FPGA Hardware.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS7<br />
Name: Identification of ‘Navigation’ related reflexive behaviour enabled by the FPGA sub-system.<br />
Description: To identify the feasibility in terms of implementation in the FPGA Hardware.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS8<br />
Name: Identification of ‘Obstacle avoidance’ related reflexive behaviour enabled by the FPGA sub-system.<br />
Description: To identify the feasibility in terms of implementation in the FPGA Hardware.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS9<br />
Name: Identification of ‘Safety’ related reflexive behaviour enabled by the FPGA sub-system.<br />
Description: To identify the feasibility in terms of implementation in the FPGA Hardware.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS10<br />
Name: Identification of open <strong>and</strong> closed loop processes for the FPGA sub-system<br />
Description:<br />
Reason / Comments: For example, in a closed-loop process, it may be required to implement a closed-loop<br />
controller algorithm in an on-board processor or in the FPGA, so it is important to know<br />
this beforehand.<br />
Indicative priority Mandatory<br />
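RRS10 anticipates that a closed-loop controller algorithm may have to be implemented in an on-board processor or in the FPGA. As a purely illustrative sketch (the gains and sample time are placeholder assumptions, and an FPGA realisation would typically use fixed-point arithmetic in a hardware description language rather than Python floats), one update step of a discrete PID controller could look like this:

```python
# Illustrative sketch only: one update step of a discrete PID controller of
# the kind RRS10 anticipates for closed-loop processes. Gains and sample
# time are arbitrary placeholders, not CORBYS values.

class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate I term
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = DiscretePID(kp=1.0, ki=0.1, kd=0.05, dt=0.01)
u = pid.step(setpoint=1.0, measurement=0.8)  # control output for one sample
```

Knowing beforehand whether such a loop closes through the FPGA determines how much arithmetic (multipliers, accumulators) the chosen device must provide.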
Requirement No.: RRS11<br />
Name: Selection of off-the-shelf FPGA device<br />
Description: This depends on the above factors <strong>and</strong> the overall functional requirement from the FPGA<br />
sub-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS12<br />
Name: Selection/design of custom or off-the-shelf FPGA board for the selected FPGA device<br />
Description: This depends on the above factors <strong>and</strong> the overall functional requirement from the FPGA<br />
sub-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS13<br />
Name: Additional processing <strong>and</strong> memory devices for the FPGA board<br />
Description: This depends on the above factors <strong>and</strong> the overall functional requirement from the FPGA<br />
sub-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS14<br />
Name: List of states, conditions <strong>and</strong> transitions<br />
Description: This is derived from the functionality of the FPGA sub-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS15<br />
Name: Reconfigurable <strong>and</strong> reprogrammable FPGA sub-system<br />
Description: The implemented hardware sub-system <strong>and</strong> the FPGA board should be reprogrammable,<br />
reconfigurable <strong>and</strong> have enough processing capability to support any future requirements<br />
within the scope <strong>and</strong> duration of the project<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No.: RRS16<br />
Name: Sensor input level 0 processing on the FPGA sub-system<br />
Description: If the FPGA sub-system is situated between the mechatronics <strong>and</strong> software/cognitive sub-systems,<br />
raw sensor data can be pre-processed in real-time before it is presented to the software<br />
modules of <strong>CORBYS</strong>. To be discussed with partners during the specification stage.<br />
Reason / Comments: Fast real-time processing of sensor input<br />
Indicative priority Optional<br />
Output<br />
Requirement No. RRS17<br />
Name: Outputs <strong>and</strong> connections from the FPGA sub-system to the mechatronics sub-system<br />
Description: Any outputs directly feeding into the mechatronics sub-system, signal conditioning, Analog<br />
to Digital conversion (ADC), data rate, data bus width, data format etc.<br />
Number <strong>and</strong> type of the sensors attached to the subject<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. RRS18<br />
Name: Outputs from the FPGA sub-system to various software modules<br />
Description: Such as the Situational Awareness modules etc.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. RRS19<br />
Name: Scalability to support outputs <strong>and</strong> connections from hardware transducers/devices<br />
Description: Within the scope <strong>and</strong> duration of the project, the FPGA sub-system should have enough I/O<br />
capability available for any future hardware transducers, their data rate, data bus width, data<br />
format, signal conditioning, Analog to Digital conversion etc.<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No. RRS20<br />
Name: Continuous stimulation<br />
Description: Identify any physical quantities or situations that need to be stimulated continuously,<br />
together with their criticality, as these may have an impact on the power requirements<br />
<strong>and</strong> the architecture of the FPGA hardware sub-system. Special requirements such as<br />
signal conditioning <strong>and</strong> ADC also need to be identified.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. RRS21<br />
Name: Event handling<br />
Description: Identify the events that need to be handled during normal execution, together with their<br />
criticality, <strong>and</strong> any events that may have an impact on the power requirements <strong>and</strong><br />
architecture of the FPGA hardware sub-system, such as priority interrupts.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Dataflow<br />
Requirement No.: RRS22<br />
Name: Data paths to/from FPGA sub-system<br />
Description: Identify any other data paths that may be required to be implemented in the FPGA sub-system.<br />
Reason / Comments:<br />
Indicative priority Optional<br />
Requirement No.: RRS23<br />
Name: Any data paths required VIA FPGA sub-system<br />
Description: This applies when the FPGA sub-system is also acting as an intermediary between the software<br />
<strong>and</strong> mechatronics sub-systems.<br />
Reason / Comments:<br />
Indicative priority Optional<br />
7.5.1.2 Interfaces<br />
User interfaces, hardware/software interfaces i.e. what the system must connect to.<br />
Requirement No.: RRS24<br />
Name: FPGA sub-system interfacing<br />
Description: Any special requirements regarding interfacing with the rest of the system, as covered in the<br />
inputs <strong>and</strong> outputs sections.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.5.2 Non Functional <strong>Requirements</strong><br />
7.5.2.1 Performance<br />
Requirement No.: RRS25<br />
Name: Latency <strong>and</strong> timing requirements for the input <strong>and</strong> output of the FPGA sub-system<br />
Description: This also depends on the functional requirements of the overall FPGA sub-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS26<br />
Name: Continuous run time<br />
Description: This also depends on the functional requirements, <strong>and</strong> impacts the heat dissipation<br />
<strong>and</strong> power requirements of the FPGA sub-system.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. RRS27<br />
Name: Power consumption of the FPGA sub-system<br />
Description: This depends on the selection of the board <strong>and</strong> the hardware components, <strong>and</strong> on the<br />
outcome of the requirements dealing with I/O, processing <strong>and</strong> device <strong>and</strong> component<br />
selection, such as RRS1, RRS2, RRS3 etc.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.5.2.2 Safety <strong>and</strong> reliability<br />
Requirement No.: RRS28<br />
Name: Heat dissipation of the FPGA <strong>and</strong> other devices on board<br />
Description: This depends on many of the previous factors, such as the functionality required from the FPGA<br />
sub-system. An on-chip fan or heatsink is typically required to dissipate heat from the<br />
system, <strong>and</strong> vents may be needed in the demonstrator enclosures. Inadequate ventilation can<br />
cause thermal shutdown of the system. Furthermore, the user needs to be protected from exposure<br />
to excessive heat.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS29<br />
Name: User safety.<br />
Description: To prevent shock or injury to the user, <strong>and</strong> to prevent any imposed mechanical movement by<br />
the mechatronic system (e.g. via a master stop or reset command). Covers isolation <strong>and</strong><br />
physical connections.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS30<br />
Name: Reliability of the implemented hardware<br />
Description: The implemented digital hardware <strong>and</strong> firmware must be properly verified <strong>and</strong> validated,<br />
using methods such as functional simulation.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS31<br />
Name: Testing of the FPGA sub-system<br />
Description: The implemented hardware <strong>and</strong> firmware together with board testing must be carried out to<br />
make sure the system is safe to run.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS32<br />
Name: Battery safety <strong>and</strong> reliability for the FPGA sub-system<br />
Description: Capacity <strong>and</strong> size of the battery; whether the FPGA sub-system has a separate battery<br />
or shares a single battery; type of battery in light of the scenario context in which the system<br />
is used.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS33<br />
Name: Hardware redundancy within the FPGA sub-system<br />
Description: To make the system fail-safe, functionality may be implemented in more than one hardware<br />
component, on the same board or on a different board.<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No.: RRS34<br />
Name: System State, steps <strong>and</strong> recovery in case of emergency or malfunction<br />
Description: Identify the state the system should be in during an emergency <strong>and</strong>/or malfunction, <strong>and</strong><br />
the steps the FPGA sub-system should perform at its level to aid recovery from the<br />
malfunction.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS35<br />
Name: Alerts for emergencies or malfunctions<br />
Description: The alerts to be generated in case of emergency or malfunction, the format in which to raise<br />
them (signal etc.), <strong>and</strong> the type of alert, i.e. visual, audio or text etc.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS36<br />
Name: Mutually exclusive states, scenarios, unsafe or hazardous conditions/states of the FPGA sub-system<br />
Description: These conditions or states should be avoided <strong>and</strong> should ideally never occur during the<br />
normal functioning of the system.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS37<br />
Name: Synchronous vs. Non-synchronous execution of various functionalities<br />
Description: If there is any special requirement for the above<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS38<br />
Name: Diagnostic <strong>and</strong> self-test functionality<br />
Description: It is important to know whether the system is functioning correctly at start-up, <strong>and</strong> to<br />
identify what is wrong with the system when there is a malfunction or emergency situation.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
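The intent of RRS38 can be sketched as a start-up self-test sequence that not only reports overall health but also localises the fault. This is an illustrative sketch only; the check names are hypothetical, and on the real system each check would exercise actual hardware:

```python
# Illustrative sketch only: a start-up self-test of the kind RRS38 describes,
# reporting which named checks failed so a malfunction can be localised.
# Check names are hypothetical placeholders.

def run_self_test(checks):
    """Run named diagnostic checks; return (all_ok, list of failed names)."""
    failed = [name for name, check in checks if not check()]
    return (len(failed) == 0, failed)

checks = [
    ("power_supply", lambda: True),
    ("sensor_bus", lambda: True),
    ("actuator_link", lambda: False),  # simulated fault for demonstration
]
ok, failed = run_self_test(checks)
print(ok, failed)  # False ['actuator_link']
```

Returning the list of failed checks, rather than a single pass/fail flag, is what makes the result useful for diagnosing a malfunction or emergency situation.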
7.5.2.3 Other<br />
Requirement No.: RRS39<br />
Name: <strong>Requirements</strong> for size/shape/dimensions/weight of the FPGA board<br />
Description: It is important to know if there are any restrictions to these parameters.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS40<br />
Name: <strong>Requirements</strong> for the placement of the FPGA board<br />
Description: The location in the demonstrator where the FPGA sub-system board should be placed<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS41<br />
Name: Cost requirements of the FPGA sub-system<br />
Description: Cost constraints when purchasing components <strong>and</strong> boards for the FPGA sub-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: RRS42<br />
Name: Certification<br />
Description: To get the sub-system certified for safety <strong>and</strong> usability<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
7.5.3 Assumptions <strong>and</strong> Dependencies<br />
Internal<br />
• Depends on SAWBB<br />
• Depends on all modules<br />
7.6 SOIAA: Self-Organizing Informational Anticipatory Architecture (Tasks 4.4,<br />
5.1, 5.2, 5.3, 5.4; UH)<br />
Relevant tasks:<br />
• Expectation as a specification of an anticipated outcome (TASK 4.4, UH)<br />
• Development of self-motivated movement <strong>and</strong> goal generator for the cognitive architecture (TASK<br />
5.1, UH)<br />
• Models <strong>and</strong> algorithms for the identification <strong>and</strong> anticipation of human purposeful behaviour (TASK<br />
5.2, UH)<br />
• Algorithms for measurement of anticipatory information flow between robot <strong>and</strong> human <strong>and</strong> vice<br />
versa (TASK 5.3, UH)<br />
• Development of framework for transitional dynamics between robot-initiated <strong>and</strong> human-initiated<br />
behaviours (TASK 5.4, UH)<br />
7.6.1 Preamble: Interface with the project<br />
7.6.1.1 Research Scenario Considerations<br />
From the perspective of the SOIAA component, whose development is the responsibility of UH, <strong>and</strong><br />
based on UH’s understanding of the documents provided by IRSZR, there appear to be at least two main “axes”<br />
spanning the space of modes of operation to which SOIAA needs to be exposed.<br />
The first is a “follow me” mode, where the controller strives to ensure safe interaction between the human <strong>and</strong><br />
the robot. The robot would then practically “walk along” with the human, <strong>and</strong> should neither hinder nor assist<br />
the human’s behaviour. This could then be used as a zero-hypothesis “status quo” from which the human’s task could<br />
start to be influenced.<br />
In general, the second mode of operation in either demonstrator involves outlining how the robot will interact<br />
<strong>and</strong> anticipate the needs of the human. For example, with regards to the first demonstrator, the second mode of<br />
operation would strive to fully impose a desirable (e.g. from the perspective of therapy) walking pattern on the<br />
patient. The needed movement would in this case be estimated from the current torque reported by the sensors<br />
in the exoskeleton’s motors. This would basically force the human subject to move in the specified way,<br />
where the applied force would be steadily reduced (under the supervision of a therapist) as the human starts to<br />
walk normally on their own.<br />
From the perspective of the development of the SOIAA component, one of the main scientific challenges is<br />
the formalisation of the different modes of operation (such as the “follow me” mode, or the “human-robot<br />
interaction” - mode) in terms of the universal approach of the SOIAA component described in the next<br />
section. The various modes of operation should ideally also allow established human movement<br />
models, such as the minimum-jerk principle or the 2/3 power law (Viviani <strong>and</strong> Flash 1995), to be included in a<br />
form accessible to the SOIAA framework <strong>and</strong> smoothly integrated into the cognitive process.<br />
With respect to the cognitive requirements in regard to the second demonstrator, the exploration scenario, it is<br />
important to establish the extent to which the <strong>CORBYS</strong> architecture should be generalisable to that scenario.<br />
This includes clarifying the extent to which the <strong>CORBYS</strong> algorithms <strong>and</strong> techniques will work without<br />
change for both robots, <strong>and</strong> possibly for other cognitive robots, <strong>and</strong> the extent to which the development will be<br />
aimed specifically at a particular demonstrator. In this context, a discussion of the specific cognitive<br />
requirements for each demonstrator within the <strong>CORBYS</strong> framework is particularly vital.<br />
7.6.1.2 Issues for SOIAA Interface<br />
To clarify the interface of the cognitive UH SOIAA component inside the <strong>CORBYS</strong> architecture, it is<br />
important to determine the level at which the cognitive component provided by UH will interface with the<br />
overall project. With regards to the first demonstrator, this can be summarised in the following key question:<br />
“How much cognition is involved in walking?”<br />
From the perspective relevant to the SOIAA architecture, the cognitive process of walking is not high-level, as<br />
it would be if every action taken <strong>and</strong> every muscle moved involved processing complex semantic<br />
representations. In particular, human movement typically does not involve a conscious process of<br />
contemplation such as that characterised by the following: “To get 10 centimetres ahead I am going to lift my<br />
leg <strong>and</strong> move it slightly forward so as a result my body will then tilt slightly forward <strong>and</strong> gain some forward<br />
momentum”, or some other kind of precise rule-based system. Rather, the approach espoused by SOIAA is<br />
based on optimisation principles which express the balances of cognitive load in terms of informational<br />
quantities. In this view, the control of a quasi-automatic behaviour (such as straight walking) would be<br />
reflected in a limited cognitive processing load, which could furthermore be treated as largely detached from<br />
higher cognitive levels <strong>and</strong> would only be modulated by them to a limited degree. The key insight is that,<br />
according to these principles, we expect that the cognitive information processing can be modelled on these<br />
different levels in the same way. Thus, conceptually, the SOIAA architecture does not have to distinguish<br />
a priori on which level it interacts with the <strong>CORBYS</strong> sensorimotor command loop <strong>and</strong> is not limited to one<br />
particular level of control.<br />
In the spirit of ensuring the timeliness <strong>and</strong> novelty of the cognitive research component of the <strong>CORBYS</strong><br />
project, the present proposal for the interface requirements emphasises that the interface should support the<br />
universality of the proposed cognitive solution, in that it will not limit a priori (with the exception of safety<br />
aspects) the level at which SOIAA interacts with the <strong>CORBYS</strong> sensorimotor command loop. However, this<br />
does not preclude the <strong>CORBYS</strong> Consortium from formulating clear preferences as to which cognitive levels<br />
(e.g. high-level) should be addressed with priority.<br />
7.6.2 Functional <strong>Requirements</strong><br />
7.6.2.1 Processes<br />
Inputs<br />
Requirement No. SOIAA1<br />
Name: Sensorimotor data from the robot platforms<br />
Description: To develop <strong>and</strong> train a first prototype of the SOIAA algorithm, UH requires a large data set<br />
of sensorimotor data from the robot platforms. This data set should comprise timestamped<br />
low-level sensor values regarding various robot components, or other available information<br />
such as from the BCI (in the case of the first demonstrator), <strong>and</strong> would include, but would<br />
not be limited to:<br />
• positions<br />
• orientations<br />
• velocities<br />
• accelerations<br />
• forces<br />
• absolute positions/orientations, if available<br />
• proprioceptive information about actuator activity<br />
• BCI channel information<br />
• (semantic) tags of events, especially to be used for debugging<br />
Reason / Comments: Any available sensor data should be accessible by SOIAA (<strong>and</strong> it should be possible for<br />
SOIAA to select between raw <strong>and</strong>, where applicable, suitably pre-processed data). The<br />
format it is provided in should foreshadow the interface which will later be used by the<br />
<strong>CORBYS</strong> sensorimotor command loop to interact with the SOIAA algorithm in the robot.<br />
Indicative priority Mandatory<br />
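To make the listed items concrete, one possible shape for a single timestamped record is sketched below. This is offered purely for discussion: every field name, unit and type here is an assumption by the editor, not an agreed <strong>CORBYS</strong> data format, which per SOIAA1 would be defined with the partners.

```python
# Illustrative sketch only: a possible shape for one timestamped sensorimotor
# record of the kind SOIAA1 requests. All field names and units are
# hypothetical assumptions, not an agreed CORBYS format.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SensorimotorRecord:
    timestamp_s: float                   # seconds since session start
    positions: dict                      # joint name -> angle (rad)
    orientations: dict                   # body segment -> quaternion (w, x, y, z)
    velocities: dict                     # joint name -> rad/s
    accelerations: dict                  # joint name -> rad/s^2
    forces: dict                         # force/torque sensor name -> N or N*m
    actuator_activity: dict              # motor name -> commanded torque (N*m)
    bci_channels: Optional[list] = None  # raw BCI channel samples, if present
    event_tags: list = field(default_factory=list)  # semantic tags (debugging only)

rec = SensorimotorRecord(
    timestamp_s=12.340,
    positions={"left_knee": 0.42},
    orientations={"pelvis": (1.0, 0.0, 0.0, 0.0)},
    velocities={"left_knee": 0.10},
    accelerations={"left_knee": -0.02},
    forces={"left_ankle_load": 310.0},
    actuator_activity={"left_knee_motor": 4.2},
    event_tags=["heel_strike"],
)
```

Keeping "lab-only" items such as `event_tags` in optional fields would let the same record type serve both training data and the onboard prototype.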
Requirement No. SOIAA2<br />
Name: Documentation of the sensorimotor data from the robot platforms<br />
Description: The sensorimotor data should be accompanied by a document relating parameter values to<br />
the respective sensors <strong>and</strong> actuators, detailing the type of each parameter, its minimal <strong>and</strong><br />
maximal values, <strong>and</strong> other relevant information. Furthermore, it should identify which of the<br />
provided parameters would be available on the onboard robot system prototype, <strong>and</strong> which<br />
of them are supplementary “lab-only” data which will only be available as test <strong>and</strong> training<br />
data (e.g. hand-created semantic tags, absolute positions determined with sensors outside of<br />
the robot, etc.).<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
Requirement No. SOIAA3<br />
Name: Parameter Space <strong>Specification</strong>s<br />
Description: For SOIAA to take full advantage of the information-theoretic methodology it is based on, it<br />
would be highly desirable for SOIAA to have additional information available about all the<br />
parameters <strong>and</strong> data points provided by the <strong>CORBYS</strong> sensorimotor comm<strong>and</strong> loop. This<br />
should address questions such as, but should not be limited to:<br />
• Is the parameter continuous or discrete?<br />
• If discrete, what are the possible state transitions? What is the topology of the<br />
parameter space?<br />
• If continuous, what are the extreme values the variable can assume; what is the<br />
manifold structure of the parameter space?<br />
• What is the resolution of the physical sensor measuring the continuous variable?<br />
• What is the value measured in (metres, degrees, etc.)?<br />
Reason / Comments: To apply the information-theoretic methodology to full effect, it is necessary that<br />
quantitative, low-level data is made available to the SOIAA component.<br />
Indicative priority Desirable<br />
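The questions listed in SOIAA3 could be answered in a machine-readable form. The sketch below is one hypothetical way of doing so; the two example entries and their values are invented for illustration and are not <strong>CORBYS</strong> specifications.

```python
# Illustrative sketch only: machine-readable parameter-space metadata of the
# kind SOIAA3 asks for. Entry names, ranges and units are hypothetical.

PARAMETER_SPECS = {
    "left_knee_angle": {
        "kind": "continuous",
        "unit": "rad",
        "range": (-0.1, 2.3),        # extreme values the variable can assume
        "sensor_resolution": 0.001,  # resolution of the physical sensor
    },
    "gait_phase": {
        "kind": "discrete",
        "states": ["stance", "swing"],
        # topology of the parameter space: possible state transitions
        "transitions": {"stance": ["swing"], "swing": ["stance"]},
    },
}

def is_valid(name, value):
    """Check a value against the declared parameter space."""
    spec = PARAMETER_SPECS[name]
    if spec["kind"] == "continuous":
        lo, hi = spec["range"]
        return lo <= value <= hi
    return value in spec["states"]
```

Having ranges and resolutions in this explicit form is what lets the information-theoretic machinery discretise and bound the state spaces it works over.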
Requirement No. SOIAA4<br />
Name: Structural Information regarding Parameter Space<br />
Description:<br />
- An operational structure formalising the effects of <strong>and</strong> constraints on actions (outputs) on<br />
the values of the overall system, formalised in mathematical terms, such as e.g. semigroup<br />
or Lie group action, invariances, <strong>and</strong> similar;<br />
- An interventional structure indicating which variables are causally subject to change if a<br />
certain action (output) variable is changed. One form in which this could be expressed is as a<br />
Causal Bayesian Network on the variable space.<br />
If such explicit structures are not available a priori, weaker substitutes which can be derived<br />
from empirical data could be used instead, among others e.g.:<br />
• Laplacian models of data (Kondor <strong>and</strong> Lafferty 2002)<br />
• graphlet library (e.g. Kondor, Shervashidze, Borgwardt 2009)<br />
Reason / Comments: It would be an advantage if the SOIAA component had access to further information about<br />
the structure of parameter space.<br />
Indicative priority Desirable<br />
Output<br />
Requirement No. SOIAA5<br />
Name: Self-motivated movement <strong>and</strong> goal generator<br />
Description: Variations of Klyubin’s (2004, 2007) Empowerment methodology are used to identify both<br />
salient, highly empowered states <strong>and</strong> trajectories in the state space of the human-robot system.<br />
Reason / Comments: Discussion with the relevant partners is needed to determine the exact specifications of this<br />
state space.<br />
Indicative priority Mandatory<br />
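As a hedged illustration of the quantity behind SOIAA5, the sketch below computes n-step empowerment for a deterministic, discrete toy system. In the deterministic case, empowerment (the channel capacity from action sequences to resulting states, in Klyubin's sense) reduces to the logarithm of the number of reachable states. The toy transition function is entirely hypothetical and not a <strong>CORBYS</strong> model.

```python
# Illustrative sketch only: n-step empowerment for a deterministic, discrete
# toy system. For deterministic dynamics, empowerment reduces to log2 of the
# number of distinct states reachable by n-step action sequences.

from itertools import product
from math import log2

def empowerment(state, actions, transition, n_steps):
    reachable = set()
    for seq in product(actions, repeat=n_steps):
        s = state
        for a in seq:
            s = transition(s, a)
        reachable.add(s)
    return log2(len(reachable))

# Hypothetical toy 1-D world: position changes by -1, 0 or +1, clamped to [0, 4].
def step(pos, action):
    return max(0, min(4, pos + action))

print(empowerment(2, actions=(-1, 0, 1), transition=step, n_steps=2))
```

States near the clamped boundaries have fewer reachable successors and hence lower empowerment, which is exactly the sense in which "highly empowered" states are salient.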
Requirement No. SOIAA6<br />
Name: Models <strong>and</strong> algorithms for the identification <strong>and</strong> anticipation of human purposeful behaviour<br />
Description: The “Relevant Goal Information” approach, a specific kind of conditional Bayesian<br />
modelling, will be used to identify the most likely candidates for sub-goals from the salient,<br />
highly empowered output states of SOIAA5.<br />
Reason / Comments: This module is dependent on SOIAA5<br />
Indicative priority Mandatory<br />
Requirement No. SOIAA7<br />
Name: Measurement of anticipatory information flow between robot <strong>and</strong> human <strong>and</strong> vice versa<br />
Description: Different formalisms for “information flow” are to be used to obtain a quantitative<br />
measurement indicating the causally dominant partner in the symbiotic human-robot relation.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
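The requirement deliberately leaves the choice of formalism open; one established candidate is transfer entropy (Schreiber 2000). The sketch below is a naive plug-in estimator for short discrete series, shown only to illustrate how comparing the two directions can indicate the causally dominant partner. It is an editorial example under stated assumptions, not the <strong>CORBYS</strong> method.

```python
# Illustrative sketch only: transfer entropy (Schreiber 2000) as one possible
# "information flow" formalism for SOIAA7. Naive plug-in estimator, history
# length 1, suitable only for small discrete toy series.

from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """TE(source -> target): extra predictability of target's next value
    given the source's past, beyond the target's own past."""
    triples = list(zip(target[1:], target[:-1], source[:-1]))  # (x_next, x_prev, y_prev)
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y, z) for _, y, z in triples)
    p_xy = Counter((x, y) for x, y, _ in triples)
    p_y = Counter(y for _, y, _ in triples)
    te = 0.0
    for (x, y, z), c in p_xyz.items():
        # ratio p(x | y, z) / p(x | y), expressed with raw counts
        te += (c / n) * log2((c * p_y[y]) / (p_yz[(y, z)] * p_xy[(x, y)]))
    return te

# Toy example: target copies source with a one-step delay, so information
# flows strongly from source to target but much less the other way round.
src = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
tgt = [0] + src[:-1]
print(transfer_entropy(src, tgt), transfer_entropy(tgt, src))
```

Applied symmetrically to robot and human signals, the larger of the two directional values would indicate the currently dominant partner in the interaction.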
Requirement No. SOIAA8<br />
Name: Development of framework for transitional dynamics between robot-initiated <strong>and</strong> human-initiated<br />
behaviour<br />
Description: The different quantitative outputs of SOIAA7 are to be combined with further data to<br />
construct a regulatory feedback control system that transitions the symbiotic human-robot<br />
relationship into one where the human has maximal causal control.<br />
Reason / Comments: Dependent on SOIAA7<br />
Indicative priority Mandatory<br />
7.6.2.2 Interfaces<br />
Requirement No. SOIAA10<br />
Name: Interactive Model of Robotic System<br />
Description: For such a simulation model, UH sees the following options:<br />
• a simulation model is provided by the robotics/human dynamics partners, e.g. if<br />
used in their own work.<br />
This option is highly desirable: even if the model falls short of a fully<br />
accurate simulation of human motion, it would capture the essential kinematic <strong>and</strong> dynamic<br />
properties of the system, as seen by the partners with the robotic/therapeutic expertise.<br />
• if option 1 is not feasible, UH will, as a fall-back option, have to resort to building<br />
such a model on its own. This will necessarily be only a coarse approximation of<br />
the human/robot system, due to the lack of relevant system expertise on the side of<br />
UH, but is necessary to ensure practicable progress on the SOIAA architecture<br />
development while the prototype of the <strong>CORBYS</strong> sensorimotor command loop is<br />
being developed for the hardware robots.<br />
The simulation model, whether built by the relevant partners (option 1) or by UH<br />
(option 2), needs to have the abilities described in the following section. If it turns<br />
out to be necessary to resort to option 2, it would be mandatory for UH to have the<br />
information required to build the simulation model (see Requirements SOIAA11 <strong>and</strong><br />
SOIAA12).<br />
Reason / Comments: To allow a large number of runs under controlled conditions, and flexible application and<br />
testing of interventions by the SOIAA component, a simulation model of the<br />
human-robot system is strictly necessary. This model needs to provide an input/output<br />
interface of the sensorimotor command cycle of the model to the SOIAA component in<br />
accordance with the above requirements.<br />
Indicative priority Mandatory<br />
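The sensorimotor input/output interface required here can be pictured as a minimal loop in which the cognitive component reads sensors and issues motor commands each cycle. The following Python sketch is purely illustrative — the class name, the single-joint dynamics and all constants are assumptions, not part of this specification:<br />

```python
import math

class SimulationModel:
    """Coarse stand-in for the human-robot simulation model (SOIAA10).

    All names and the single-joint dynamics are illustrative assumptions;
    the real model would cover robot, sensor and motor dynamics (SOIAA11).
    """

    def __init__(self, dt=0.01):
        self.dt = dt          # integration step [s]
        self.angle = 1.0      # joint angle [rad], initial perturbation
        self.velocity = 0.0   # joint velocity [rad/s]

    def read_sensors(self):
        """Sensor side of the sensorimotor command cycle."""
        return {"angle": self.angle, "velocity": self.velocity}

    def apply_command(self, torque):
        """Motor side: one Euler step of a damped single-joint model."""
        damping = 0.5
        acceleration = torque - damping * self.velocity - math.sin(self.angle)
        self.velocity += acceleration * self.dt
        self.angle += self.velocity * self.dt

# Closed loop: the SOIAA component would sit between the two calls below.
model = SimulationModel()
for _ in range(1000):
    state = model.read_sensors()
    command = -2.0 * state["angle"] - 0.5 * state["velocity"]  # toy controller
    model.apply_command(command)
```

Any SOIAA prototype could then be inserted between the `read_sensors` and `apply_command` calls without touching the model itself.<br />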
Requirement No. SOIAA11<br />
Name: Functional Dynamics<br />
Description: The simulation system needs to have the following properties:<br />
• it provides a functional model of the robot-human system<br />
• it includes the dynamics of the robot, sensors and motors<br />
• with and without the safety filter (provided by the relevant partners)<br />
• it includes a dynamic model of the human walker (in the case of the first<br />
demonstrator)<br />
Regarding the human model, a pragmatic approximation is sufficient. It is not expected or<br />
necessary to implement a high-accuracy model of human walking. A pragmatic model that<br />
is available early and can be used effectively is preferable to a high-quality model which is<br />
difficult to tune and use and is available late.<br />
Reason / Comments: This information is only needed if UH has to build the functional model itself.<br />
Indicative priority Mandatory (conditional on SOIAA10)<br />
Requirement No. SOIAA12<br />
Name: Interventional Dynamics<br />
Description: The model should provide an interface that allows the SOIAA component also to<br />
directly influence the low-level dynamics of the robotic system, such as joint controls,<br />
motors, etc., in addition to providing general behaviour cues (all, however, filtered through<br />
the protection of the safety unit). For testing purposes, it would also be highly desirable if<br />
world states of the simulation model could be directly influenced.<br />
Reason / Comments: This information is only needed if UH has to build the functional model itself.<br />
Indicative priority Mandatory (conditional on SOIAA10)<br />
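As a purely hypothetical illustration of the two kinds of hooks asked for here — low-level intervention and direct world-state injection for testing — with every command passing a stand-in safety filter; the names and limits are assumptions, not agreed interfaces:<br />

```python
class InterventionInterface:
    """Hypothetical intervention hooks for the simulated system (SOIAA12).

    Names are illustrative; the real interface will be agreed with the
    safety-unit partners, and every command would pass their safety filter.
    """

    def __init__(self, joint_limits=(-1.5, 1.5)):
        self.joint_limits = joint_limits  # stand-in safety envelope [rad]
        self.joint_targets = {}
        self.world_state = {}

    def _safety_filter(self, value):
        # Clamp to the envelope: a toy stand-in for the real safety unit.
        lo, hi = self.joint_limits
        return max(lo, min(hi, value))

    def set_joint_target(self, joint, angle):
        """Low-level intervention, filtered through the safety stand-in."""
        self.joint_targets[joint] = self._safety_filter(angle)

    def inject_world_state(self, key, value):
        """Testing-only hook: directly set a world state of the simulation."""
        self.world_state[key] = value

iface = InterventionInterface()
iface.set_joint_target("knee", 2.0)          # outside the toy envelope
iface.inject_world_state("floor_slope", 0.05)
```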
Requirement No. SOIAA13<br />
Name: Target Specifications<br />
Description: To work towards the overall project goals, the SOIAA component will require a specification<br />
of the target behaviour of the robot-human system which suits the information-theoretic<br />
SOIAA framework. In particular, with regard to the first demonstrator this requires<br />
clarification of how the therapeutic behaviour is to be integrated, and how teaching hints<br />
that the robot needs to give should be expressed in a form suitable to the SOIAA cognitive<br />
framework. Example behaviours could include:<br />
1. toe walking<br />
a. CORBYS should restore: target angles<br />
2. crouch gait<br />
a. CORBYS should restore: target angles<br />
3. stiff knee<br />
a. insecure weight transfer<br />
b. should restore knee angles<br />
4. stiff knee with circumduction<br />
a. insecure weight transfer<br />
b. excessive pelvic vertical movement<br />
c. should restore angles<br />
In general, it is necessary to communicate to the SOIAA component, in a suitable format,<br />
what exactly should happen to assist the human in their task.<br />
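One conceivable machine-readable form for such a communication — a sketch only; the keys and angle values are illustrative placeholders, not clinically validated targets:<br />

```python
# Hypothetical machine-readable target specification for the gait patterns
# listed above; field names and angle values are illustrative placeholders.
TARGET_SPECS = {
    "toe_walking": {
        "restore": "target_angles",
        "target_angles_deg": {"ankle": 10.0},               # placeholder
    },
    "crouch_gait": {
        "restore": "target_angles",
        "target_angles_deg": {"knee": 5.0, "hip": 10.0},    # placeholders
    },
    "stiff_knee": {
        "observed": ["insecure_weight_transfer"],
        "restore": "knee_angles",
    },
    "stiff_knee_with_circumduction": {
        "observed": ["insecure_weight_transfer",
                     "excessive_pelvic_vertical_movement"],
        "restore": "target_angles",
    },
}

def restore_goal(pattern):
    """Look up what CORBYS should restore for a given gait pattern."""
    return TARGET_SPECS[pattern]["restore"]
```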
To approach and phrase this in a more specific way (in the case of the first demonstrator),<br />
the following questions might be posed:<br />
• Is the insecure weight transfer a measurable effect?<br />
• If so, how could it be measured practically?<br />
• What are the desired target angles and in how far are they universally definable?<br />
More generally, the following questions might be posed:<br />
• How is safe movement specified?<br />
• How is the robot supposed to give teaching hints?<br />
• How is a “normal” fixed move specified?<br />
• Can they be modulated, meaning that we could interpolate between the current and<br />
the target behaviour?<br />
• Intention extraction: What kind of high-level behaviour should SOIAA identify?<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.6.2.3 Resources<br />
Requirement No. SOIAA14<br />
Name: Testing Specifications<br />
Description: In later stages of the project it will also be necessary to integrate and test the SOIAA module<br />
with the actual robotic system, including tests with human subjects. For this, a safety<br />
filtering interface from the partners will be mandatory; it will ensure the safety of the<br />
human participant and the robot under commands initiated by the SOIAA component, as the<br />
SOIAA component, while striving to reproduce “natural behaviour”, will not contain explicit<br />
safety provisions. At that point, UH will also (desirably) require sufficient computational<br />
power to run the algorithm in online mode, to allow the system to actively intervene in<br />
the synergic robot-human movement. As a fall-back option, if this computational<br />
power is not available, UH will consider constructing simplified and computationally<br />
cheaper proxy quantities to replace the full-fledged information-theoretic<br />
concepts in the online setting.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. SOIAA15<br />
Name: Hardware Specifications<br />
Description: In regard to hardware requirements, the actual CPU and memory requirements of SOIAA<br />
will be determined during the project, but in general UH strives to create an algorithm that<br />
has (in decreasing order of priority):<br />
• online capability<br />
• onboard capability<br />
• (desirable) online learning<br />
The information-theoretic algorithms proposed for SOIAA are rather CPU-intensive,<br />
and UH will be looking into methods to create more efficient and possibly more light-weight<br />
versions of the algorithms for this purpose. In general, however, UH expects a need for<br />
substantial computational power, and once UH has acquired more experience with the<br />
specific target domain, the partners will be updated with more specific requirements.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.7 User Responsive Learning and Adaptation Framework (Task 5.5, UR)<br />
The User Responsive Learning and Adaptation Framework allows for a stable, maintained and goal-oriented<br />
performance continuum in the context of human-robot interaction. This framework enables the creation,<br />
capture and storage, and thus the data-intelligence acquisition, of user preferences related to control and<br />
intervention. Using a canonical set of control parametrics in relation to the variable contexts of the integrative<br />
human-robot system (e.g. context elements such as gait descriptors, stability, directionality, admissible and<br />
inadmissible postures, etc.), this framework establishes the semantic parameters necessary for the<br />
representation of the various states of the CORBYS system in dynamically evolving contexts of states and<br />
responsive control and/or interventions from either actor (human or robot).<br />
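A minimal sketch of what one captured preference observation might look like, assuming hypothetical field names (nothing here is prescribed by the framework; the context keys merely mirror the context elements named above):<br />

```python
from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    """Illustrative sketch of one stored user-preference observation.

    All field names and the storage scheme are assumptions,
    not part of the specification.
    """
    user_id: str
    context: dict               # e.g. {"gait": "crouch", "stability": 0.7}
    intervention_level: float   # preferred level of robot intervention, 0..1
    control_mode: str           # e.g. "assistive" / "corrective"

# A plain list stands in for whatever store the framework actually uses.
store: list[PreferenceRecord] = []
store.append(PreferenceRecord(
    user_id="patient-01",
    context={"gait": "stiff_knee", "stability": 0.6},
    intervention_level=0.4,
    control_mode="assistive",
))
```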
7.7.1 Functional <strong>Requirements</strong><br />
7.7.1.1 Processes<br />
Inputs<br />
Requirement No. URLAF1<br />
Name: Identification of control parametrics<br />
Description: A canonical set of control parametrics in relation to the variable contexts of the integrative<br />
human-robot-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. URLAF2<br />
Name: User control and intervention preferences<br />
Description: Creation, capture and storage, and thus the data-intelligence acquisition, of user<br />
preferences related to control and intervention<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Processing<br />
Requirement No.: URLAF3<br />
Name: Offline learning (Reflective capability)<br />
Description: Offline learning whereby the system learns to adapt the level and type of intervention and<br />
control in a particular context, responsive to the cumulative data intelligence relating to user<br />
preferences as acquired during runtime interaction with the user. This constitutes the<br />
reflective learning capability of the system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: URLAF4<br />
Name: Runtime responsive adaptation<br />
Description: The system adjusts the level of intervention and control to the user’s preferences dynamically<br />
in response to user indications. This response will be made available rapidly in a pre-compiled,<br />
reflexive fashion (see URLAF5)<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: URLAF5<br />
Name: Pre-compiled reflexive response<br />
Description: The response is made available rapidly in a pre-compiled, reflexive fashion, i.e. the reflective<br />
learning outcomes at some point become part of the reflexive capability<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
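The reflective/reflexive split described in URLAF3–URLAF5 can be sketched as an offline compilation step feeding a cheap runtime lookup; all names and the averaging rule are illustrative assumptions, not the framework’s actual learning method:<br />

```python
from collections import defaultdict

def compile_reflexive_table(log):
    """Offline (reflective) step: average the preferred intervention level
    observed in each context and store the result pre-compiled."""
    sums = defaultdict(lambda: [0.0, 0])
    for context, level in log:
        sums[context][0] += level
        sums[context][1] += 1
    return {ctx: total / count for ctx, (total, count) in sums.items()}

def reflexive_response(table, context, default=0.5):
    """Runtime (reflexive) step: a constant-time lookup, no learning online."""
    return table.get(context, default)

# Toy preference log: (context, preferred intervention level) pairs.
log = [("stiff_knee", 0.4), ("stiff_knee", 0.6), ("crouch_gait", 0.8)]
table = compile_reflexive_table(log)
```

The offline outcome (`table`) is exactly what URLAF8 would hand over to the reflexive capability.<br />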
Output<br />
Requirement No.: URLAF6<br />
Name: Output of Runtime responsive adaptation<br />
Description: Recommendations for dynamically adjusting the system’s level of intervention and control<br />
to the user’s preferences in response to user indications<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.7.1.2 Interfaces<br />
Requirement No.: URLAF7<br />
Name: Data acquisition from SAWBB<br />
Description: URLAF needs to interface with SAWBB for information about user preferences, profiling<br />
etc.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: URLAF8<br />
Name: Adaptive recommendation to FPGA reflexive capability<br />
Description: URLAF needs to interface with the FPGA to provide the outcomes of the reflective capability,<br />
i.e. recommendations on adaptation as informed by the learning, so that these become part of the<br />
reflexive capability of the system<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
7.7.2 Non Functional <strong>Requirements</strong><br />
7.7.2.1 Performance<br />
Requirement No.: URLAF9<br />
Name: Latency<br />
Description: The adaptive response recommendations enabled by the reflexive capability need to be<br />
communicated by the system in a timely fashion<br />
Reason / Comments:<br />
Indicative priority Desirable<br />
7.8 Cognitive adaptation of low level controllers (Task 5.6, UB)<br />
Relevant task:<br />
• Cognitive adaptation of low level controllers (Task 5.6, UB)<br />
The goal of this research is short-term and long-term adaptation of the control parameters of real-time<br />
controllers based on online measurement of the interaction of the robot with its environment. Adapting control<br />
parameters results in a change in the robot’s behaviour during its interaction with the environment.<br />
7.8.1 Functional <strong>Requirements</strong><br />
7.8.1.1 Processes<br />
Inputs:<br />
Requirement No. CAL1<br />
Name: Self-awareness modules<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority Mandatory<br />
Requirement No. CAL2<br />
Name: SOIAA architecture<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible<br />
Indicative priority Mandatory<br />
Requirement No. CAL3<br />
Name: Low level control algorithms<br />
Description: MATLAB, Program Code, Documentation<br />
Reason / Comments: The form of the input will be determined in discussion with the partner responsible.<br />
Indicative priority Mandatory<br />
Outputs:<br />
Requirement No. CAL4<br />
Name: Adaptation sub-system layer<br />
Description: The layer that will act as the interface between low-level control modules and cognitive<br />
modules. The module should take care of the real-time requirements of the low-level control modules.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Processing:<br />
Requirement No. CAL5<br />
Name: Adaptation of control parameters<br />
Description: In the CORBYS rehabilitation system, adaptation of control parameters has the advantage<br />
that the assistance the system gives to the user can be automatically tuned to the patient’s changing<br />
needs over the short or long term. Moreover, since several different control laws<br />
will be used for controlling the different robotic sub-systems of the CORBYS demonstrator,<br />
transitions between the different control laws, based on anticipation of the environment,<br />
will be researched.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
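A toy sketch of what such short-term parameter adaptation might look like — a single controller gain nudged toward a target tracking error; the update rule and all constants are assumptions, not the CORBYS adaptation law:<br />

```python
def adapt_gain(gain, tracking_error, target_error=0.05,
               rate=0.1, bounds=(0.1, 10.0)):
    """Raise the gain while the observed error exceeds the target,
    lower it otherwise, clamped to a stand-in safe range (see CAL11)."""
    gain += rate * (tracking_error - target_error)
    lo, hi = bounds
    return max(lo, min(hi, gain))

# Hypothetical trial-by-trial errors shrinking as the patient improves.
gain = 1.0
for error in [0.30, 0.20, 0.12, 0.07, 0.04]:
    gain = adapt_gain(gain, error)
```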
Data flow:<br />
Requirement No. CAL6<br />
Name: Adaptation layer data flow<br />
Description: The sub-system is an intermediate layer between the real-time control modules and the higher<br />
levels of cognitive control; the information that the sub-system retrieves comes from both directions.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.8.1.2 Interfaces<br />
Requirement No. CAL7<br />
Name: Adaptive parameters<br />
Description: From the real-time control modules the sub-system will retrieve information about the status of<br />
the control units and their performance, while from the higher-level cognitive control modules it<br />
will retrieve information about the interpreted sensory data, control requests, desired control<br />
behaviour, etc.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.8.1.3 Operating environment<br />
Requirement No. CAL8<br />
Name: Real-time environment<br />
Description: Since the module has to be able to provide real-time updates of the low-level controllers with<br />
control parameters, reference values, actions and so on, it is necessary that the module<br />
operates in real time. A real-time framework such as OROCOS or similar will be used.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.8.1.4 Resources<br />
Requirement No. CAL9<br />
Name: Real time hardware<br />
Description: Real-time capable hardware is required in order to achieve real-time functionality.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.8.2 Non Functional <strong>Requirements</strong><br />
7.8.2.1 Performance<br />
Requirement No. CAL10<br />
Name: Hard real-time operation<br />
Description: Hard real-time operation will be tested first in a laboratory environment and later in the<br />
evaluation phase.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
7.8.2.2 Safety <strong>and</strong> reliability<br />
Requirement No. CAL11<br />
Name: Safety checking of control parameters<br />
Description: The layer will feed control parameters into the low-level control modules and therefore safety<br />
checking of the control values has to be implemented. This will be agreed with the partners<br />
involved in this task.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
8 System Integration and Functional Testing (WP8, SINTEF)<br />
• Sub-System Conformance Testing (TASK 8.1, SCHUNK)<br />
• Integration of Sub-Systems (TASK 8.2, SINTEF)<br />
• Conformance Testing of integrated System (TASK 8.3, SINTEF)<br />
Work Package 8 deals with the integration of CORBYS hardware and software subsystem components into a<br />
complete, operational system that can eventually be evaluated in clinical trials in WP 9. System integration is<br />
the process of bringing together subsystem functionalities and ensuring that they work together as a system.<br />
A prerequisite for obtaining a working system is that the intended functionalities of all<br />
subsystem components have been successfully realised and verified (tested) before the integration with other<br />
subsystems takes place. Without such subsystem testing, many errors in the complete system must be<br />
anticipated: their origin can be hard to identify in a complex system, the debugging can be intricate, and<br />
the subsystem responsibility easily becomes fragmented among the contributing partners.<br />
The Work Package is therefore defined to contain three main activities (tasks). First, it is verified through<br />
testing in Task 8.1 that the sub-system components have been developed according to the target specifications.<br />
Only sub-system components that are deemed to conform to the targets are then cleared for the next stage, the<br />
integration of sub-systems into an integrated system (Task 8.2). Finally, when the system has become as<br />
complete as time and the supply of sub-systems allows, the integrated system will be subject to system-level<br />
conformance testing in Task 8.3 to verify the overall system functionality with respect to the target specifications.<br />
8.1 Conformance testing on subsystem and system level<br />
Relevant tasks:<br />
• Sub-System Conformance Testing (TASK 8.1, SCHUNK)<br />
• Conformance Testing of integrated System (TASK 8.3, SINTEF)<br />
8.1.1 Functional <strong>Requirements</strong><br />
8.1.1.1 Processes<br />
Inputs<br />
Requirement No. CTREF1<br />
Name: Functional target specification of all sub-system components<br />
Description: Functional target specifications will be established, and will define metrics and procedures<br />
to be used in the system integration stage<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CTREF2<br />
Name: Documentation of all sub-system components<br />
Description: The documentation required in order to integrate sub-system components into a complete<br />
system, such as:<br />
• Functional specification<br />
• Mechanical design drawings<br />
• User manual<br />
• Source code<br />
• Interface definitions<br />
• Installation guidelines<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. CTREF3<br />
Name: Hand-over of components from sub-system developers to system integrators<br />
Description: The hardware and software components must be made available to the system integration<br />
partners<br />
Reason / Comments: A component hand-over to the WP 8 partner must be planned. Typically, this can be a visit by the<br />
sub-system partner to perform the installation and to verify that the system works as<br />
expected in the new environment<br />
Indicative priority Mandatory<br />
Requirement No. CONF1<br />
Name: Definition of test parameters and test rig setups<br />
Description: The finalised robotic system, including the mobile platform and the powered orthosis<br />
demonstrator, needs to be tested according to the specifications. A scenario and use-case<br />
profile with a specialised test rig design is to be detailed.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Processing<br />
Requirement No.: CTREF4<br />
Name: Conformance testing requirements must be established<br />
Description: Predefined test methodologies based on predefined test metrics must be established.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CTREF5<br />
Name: Identification of test metrics <strong>and</strong> procedures for conformance testing<br />
Description: Shared metrics for approving that a system or sub-system conforms to target specifications<br />
must be established. Typically, these standards will be defined between the partners<br />
handing over and receiving sub-systems, and will vary and be adapted to the nature of the<br />
system or sub-system in question.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CTREF6<br />
Name: Conformance testing parameters – minimum requirements for all testing<br />
Description: Test plans shall as a minimum contain the following test items:<br />
• Definition of test environment (site) and test personnel (partners)<br />
• Conformance to interface definitions<br />
• Conformance to functional requirements<br />
• Conformance to performance-related requirements (these might preferably be<br />
carried out by the sub-system developer before the integration stages)<br />
• Conformance to safety/hazard requirements<br />
• Conformance to documentation requirements<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CTREF7<br />
Name: Conformance test protocol design<br />
Description: Test plans shall have a unique definition of test objects (physical components shall be<br />
uniquely marked and software shall have correct version numbering). They shall further<br />
contain information about the test site, test date and test personnel.<br />
The test protocol, when feasible, will be designed with the following information for each test<br />
item:<br />
• Unique test item number<br />
• Description of test activity<br />
• Description of expected test result (which should be in accordance with target<br />
specifications)<br />
• Check-box field for entering the test result, with the following alternatives:<br />
Passed/Failed<br />
• Field for entering test observations (in particular observations when the “Failed” box<br />
was checked)<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
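The protocol fields above map naturally onto a small record type. This sketch — with illustrative names only, nothing mandated by the deliverable — also enforces the rule that a “Failed” item carries an observation:<br />

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestItem:
    """Hypothetical record carrying the per-item fields required by CTREF7."""
    item_number: str           # unique test item number
    activity: str              # description of the test activity
    expected_result: str       # expected result per target specification
    result: Optional[str] = None   # "Passed" or "Failed"
    observation: str = ""          # required note when result == "Failed"

    def record(self, passed, observation=""):
        # A failed item without an observation violates the protocol design.
        if not passed and not observation:
            raise ValueError("a Failed item requires an observation")
        self.result = "Passed" if passed else "Failed"
        self.observation = observation

item = TestItem("T-8.1-001", "Apply step load to knee actuator",
                "Torque tracks reference within tolerance")
item.record(passed=True)
```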
Requirement No.: CONF2<br />
Name: Mechanical and electrical design of necessary test rigs<br />
Description: Rigs and setups to test the actuators of the orthosis and the mobile manipulator need a<br />
mechanical design, bill of materials, electrical design and software structure.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CONF3<br />
Name: Production of test rigs<br />
Description: Manufacturing <strong>and</strong> assembly, cabling, functional test, software design for test procedures.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CONF4<br />
Name: Sensor verification<br />
Description: Sensor function will be verified with third party measurement techniques <strong>and</strong> third party<br />
sensors. Options for calibration <strong>and</strong> necessary maintenance intervals will be presented.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Output<br />
Requirement No.: CTREF8<br />
Name: Sub-system test reports from conformance testing<br />
Description: For CORBYS sub-systems, the results of conformance testing will be stored as project-<br />
internal development documentation.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CONF5<br />
Name: Data sheet <strong>and</strong> measurement/test protocols<br />
Description: Conformance with the requirements and specifications is proven with a final data sheet<br />
and test protocols<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
8.1.1.2 Interfaces<br />
Requirement No.: CTREF9<br />
Name: Evaluation of sub-system conformance testing outcomes<br />
Description: For sub-system conformance testing, the outcomes will be determined as a result of<br />
conformance testing involving the partners delivering (technology development work<br />
packages) <strong>and</strong> receiving technology (system integrators)<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CTREF10<br />
Name: Sub-system components not meeting all conformance testing requirements<br />
Description: It must be anticipated that not all sub-systems will meet all conformance test requirements.<br />
In these cases, there will have to be a discussion about the severity of the failed items, the<br />
possibility of correcting these items at a later stage (during the integration stages), and the overall<br />
system consequences of not integrating the component in question.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CTREF11<br />
Name: Evaluation of integrated system conformance testing outcomes<br />
Description: For integrated system conformance testing, the outcomes will be determined as a result of<br />
conformance testing involving WP 8 (system integrators) on one side and WP 9 (clinical<br />
evaluation) partners on the other<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
8.1.1.3 Roles <strong>and</strong> responsibilities<br />
Requirement No.: CTREF12<br />
Name: Sub-system responsibility<br />
Description: The sub-system functionality and performance is the responsibility of the partner(s)<br />
developing the sub-system during all testing and integration stages.<br />
Reason / Comments: After the integration stage, the responsibility will have to be shared among all partners; all<br />
partners have a responsibility to offer and provide guidance, service and other assistance<br />
during the integration and evaluation stages<br />
Indicative priority Mandatory<br />
8.1.1.4 Operating environment<br />
Requirement No.: CTREF13<br />
Name: <strong>CORBYS</strong> gait system target operating environment<br />
Description: The CORBYS gait rehabilitation system will be designed to work in typical health<br />
institution environments.<br />
Reason / Comments: The CORBYS gait rehabilitation system will not be suitable for outdoor all-weather usage.<br />
Indicative priority Mandatory<br />
Requirement No.: CTREF14<br />
Name: <strong>CORBYS</strong> gait system target operating mode (ambulatory/not ambulatory)<br />
Description: The CORBYS gait rehabilitation system will be designed to be ambulatory, e.g. with<br />
wheels, moving around during gait rehabilitation therapy.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
8.1.1.5 Resources<br />
Requirement No.: CTREF15<br />
Name: Sub-system conformance testing equipment<br />
Description: Special, dedicated test equipment, jigs, et cetera, required to perform conformance testing is<br />
supplied by the developing partner of the sub-system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
8.1.2 Non Functional <strong>Requirements</strong><br />
8.1.2.1 Performance<br />
Requirement No.: CTREF16<br />
Name: Sub-system conformance testing of performance<br />
Description: Performance testing is not the main concern of conformance testing, but should have been<br />
carried out by developers of the sub-system prior to integration.<br />
Some limited performance testing should however be included in the conformance testing to<br />
verify functionality (such as to verify that an increase in control system load gives the<br />
expected feedback)<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
8.1.2.2 Safety <strong>and</strong> reliability<br />
Requirement No.: CTREF17<br />
Name: Sub-system evaluation of user hazards and safety<br />
Description: User safety is a prime concern for all medical devices. During conformance testing and in<br />
the integration stages, hazards and safety concerns of all sub-systems shall be evaluated as part<br />
of the sub-system hand-over to the integration partners. Hazard and safety aspects shall be<br />
considered for all CORBYS users - both patients and professionals.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No.: CTREF18<br />
Name: Sub-system evaluation of reliability<br />
Description: The developer of the sub-system is responsible for ensuring that their sub-system<br />
components operate reliably. The developer’s effort to ensure this should be discussed<br />
during conformance testing.<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
8.1.3 Assumptions and Dependencies<br />
Internal<br />
• Depends on CORBYS Functional Specification<br />
• Depends on CORBYS overall output<br />
8.2 Integration of Sub-Systems<br />
Relevant tasks:<br />
• Integration of Sub-Systems (TASK 8.2, SINTEF)<br />
8.2.1 Functional <strong>Requirements</strong><br />
8.2.1.1 Processes<br />
Inputs<br />
Requirement No. SIREF1<br />
Name: Sub-systems to be integrated must successfully have passed conformance testing<br />
Description: Only sub-systems that meet conformance test requirements will be integrated into the<br />
system<br />
Reason / Comments:<br />
Indicative priority Mandatory<br />
Requirement No. SIREF2<br />
Name: Sub-systems to be integrated must be accompanied by sufficient documentation<br />
Description: The documentation required to integrate sub-system components into a complete<br />
system must be provided, including:<br />
• Functional specification<br />
• Mechanical design drawings<br />
• User manual<br />
• Source code<br />
• Interface definitions<br />
• Installation guidelines<br />
Reason / Comments: Same as CTREF6<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. SIREF3 (same as CTREF7)<br />
Name: Hand-over of components from sub-system developers to system integrators<br />
Description: The hardware <strong>and</strong> software components must be made available for the system integration<br />
partners<br />
Reason / Comments: Must plan for a component hand-over to the WP 8 partner. Typically, this can be a visit by the<br />
sub-component partner to perform the installation <strong>and</strong> to verify that the system works as<br />
expected in the new environment<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: SIREF4<br />
Name: System integration site<br />
Description: The physical location where the system integration activity takes place must be decided.<br />
Reason / Comments: Alternatively, at least a plan for moving the system from one site to another must be fixed.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SIREF5<br />
Name: System integration process for installing new sub-components<br />
Description: New sub-system components cleared for system integration will be installed in a joint effort<br />
of the sub-system developer <strong>and</strong> the system integrator.<br />
Reason / Comments: It will have to be decided case by case what is most practical. Examples of joint<br />
efforts can range from remote installation of software to face-to-face meetings to<br />
install physical components.<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SIREF6<br />
Name: System integration functionality testing should be carried out when new components have<br />
been installed<br />
Description: Frequent testing of the system in the integration stages is important in order to verify that<br />
new components do not give rise to problems in or conflicts with other sub-systems<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SIREF7<br />
Name: Fine-tuning <strong>and</strong> optimisation of functionality of the integrated system, <strong>and</strong><br />
sub-system-to-sub-system optimisation<br />
Description: Many system features can only be implemented <strong>and</strong> tested when integrated as a complete<br />
system. This will be a combined effort of the system integrators <strong>and</strong> the developers of the<br />
sub-systems. System integrators must ensure that time <strong>and</strong> access are given for sub-system<br />
developers to meet <strong>and</strong> improve the combined, integrated system<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SIREF8<br />
Name: Physical design process of integrated system<br />
Description: Physical design <strong>and</strong> construction shall be based on dimensionally correct models developed<br />
using 3D CAD tools such as SolidWorks or Pro/ENGINEER.<br />
For moving components, the system shall be evaluated/inspected at all end points to detect<br />
potential physical conflicts<br />
Reason / Comments: Discuss with other partners which CAD system <strong>and</strong> exchange format to use<br />
Indicative priority M<strong>and</strong>atory<br />
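The end-point inspection for physical conflicts described in SIREF8 can be illustrated by the simplest possible check, an axis-aligned bounding-box overlap test. This is a sketch only; the class and the dimensions are invented for illustration and are not taken from the actual CAD models.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of one component at a motion end point (illustrative)."""
    xmin: float
    xmax: float
    ymin: float
    ymax: float
    zmin: float
    zmax: float

def boxes_conflict(a: Box, b: Box) -> bool:
    """True if the two volumes overlap, i.e. a potential physical conflict."""
    return (a.xmin < b.xmax and b.xmin < a.xmax and
            a.ymin < b.ymax and b.ymin < a.ymax and
            a.zmin < b.zmax and b.zmin < a.zmax)

# Invented geometry: an orthosis link checked against the platform frame at one end point.
link = Box(0.0, 0.3, 0.0, 0.1, 0.8, 1.1)
frame = Box(0.25, 0.6, 0.0, 0.5, 0.9, 1.5)
print(boxes_conflict(link, frame))  # True: the volumes overlap
```

In practice the CAD tool performs this interference check on the full geometry; the box test above only conveys the principle of evaluating each end point of a moving component.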
Output<br />
Requirement No.: SIREF9<br />
Name: System integration final output<br />
Description: The System Integration activity shall have as an output an integrated system that passes the<br />
final conformance tests of Task 8.3<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
8.2.1.2 Interfaces<br />
Requirement No.: SIREF10<br />
Name: System integration status <strong>and</strong> issues update<br />
Description: Status <strong>and</strong> issues in integration shall be made available to all project partners by<br />
documentation of plans, timelines <strong>and</strong> issues stored on the <strong>CORBYS</strong> file sharing system<br />
(WebDAV)<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
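WebDAV, as referenced in SIREF10, is plain HTTP, so publishing a status document amounts to a PUT request to the shared folder. A minimal sketch that only composes the request pieces; the server URL and file name are invented, not the project's actual share.

```python
from urllib.parse import quote

def webdav_put(base_url: str, relpath: str, body: bytes):
    """Compose the pieces of a WebDAV upload: a plain HTTP PUT to the resource URL.
    Returns (method, url, headers) without sending anything."""
    url = base_url.rstrip("/") + "/" + quote(relpath)
    headers = {
        "Content-Type": "text/plain; charset=utf-8",
        "Content-Length": str(len(body)),
    }
    return "PUT", url, headers

# Hypothetical share and file name, not the project's actual server.
method, url, headers = webdav_put(
    "https://dav.example.org/corbys",
    "integration/status 2011-07.txt",
    b"SIREF10 status: sub-system X installed, performance test pending",
)
print(method, url)
```

Any WebDAV client or a standard HTTP library can then send the composed request with the partner's credentials.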
8.2.1.3 Roles <strong>and</strong> responsibilities<br />
Requirement No.: SIREF11 (confer CTREF12)<br />
Name: Sub-system responsibility<br />
Description: The sub-system functionality <strong>and</strong> performance are the responsibility of the partner(s)<br />
developing the sub-system during all testing <strong>and</strong> integration stages.<br />
Reason / Comments: After the integration stage, the responsibility will have to be shared with all partners, but all<br />
partners have responsibility to offer <strong>and</strong> provide guidance, service <strong>and</strong> other assistance<br />
during the integration <strong>and</strong> evaluation stages<br />
Indicative priority M<strong>and</strong>atory<br />
8.2.1.4 Operating environment<br />
Operating environment requirements correspond 1:1 to the requirements for conformance testing: CTREF13<br />
<strong>and</strong> CTREF14<br />
Requirement No.: SIREF12 (confer CTREF13)<br />
Name: Integrated system power resources<br />
Description: The Consortium needs to decide whether a battery-powered or a mains-powered system will be<br />
targeted<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
8.2.1.5 Resources<br />
Requirement No.: SIREF13<br />
Name: Resource limitations encountered in the system integration stage<br />
Description: Resource limitations emerging in the integration stage (for example power or processing<br />
capacity) will have to be dealt with in a process involving the system integrators <strong>and</strong> the<br />
partners sharing the limited resource<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
8.2.2 Non Functional <strong>Requirements</strong><br />
8.2.2.1 Performance<br />
Requirement No.: SIREF12 (confer CTREF12)<br />
Name: Integrated system performance testing<br />
Description: The integrated system will be subject to performance testing. These tests will partly be<br />
done to verify that sub-components perform as planned, <strong>and</strong> partly to verify that the<br />
features of the integrated system perform as specified.<br />
Performance test parameters will be derived/extracted from target system specifications<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
8.2.2.2 Safety <strong>and</strong> reliability<br />
Requirement No.: SIREF13<br />
Name: System evaluation of user hazards <strong>and</strong> safety<br />
Description: User safety is a prime concern for all medical devices. During integration stages, hazards <strong>and</strong><br />
safety concerns of all sub-systems shall be evaluated on a continuous basis, as well as in<br />
dedicated risk <strong>and</strong> hazard meetings. The purpose of these meetings is to identify potential risks<br />
<strong>and</strong> find strategies to mitigate them. Risk <strong>and</strong> hazard evaluation shall consider all <strong>CORBYS</strong><br />
users – both patients <strong>and</strong> professionals<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SIREF14<br />
Name: System integration reliability assessment<br />
Description: The integrated system will not be subject to extensive reliability tests (such as accelerated<br />
wear tests, climate chamber testing, <strong>and</strong> repeatability testing of sensors/actuators). The system<br />
will, however, not be released until a situation deemed stable, with a low frequency of<br />
failures, has been reached.<br />
Reason / Comments: An acceptable failure rate has to be decided, but as a minimum, failures should not be<br />
expected during the time span of a typical rehabilitation session.<br />
Indicative priority M<strong>and</strong>atory<br />
8.2.2.3 Other<br />
Requirement No.: SIREF15<br />
Name: The integrated system must be transportable within Europe<br />
Description: The integrated system will have to be moved between various locations in Europe. The<br />
assembly quality of all components must therefore be sufficient to allow safe transport using<br />
commercial freight services<br />
Reason / Comments: The integrated system will have to be moved for display at one or more conferences, as well<br />
as when it is deployed in the evaluation effort in clinical environments<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SIREF16<br />
Name: The integrated system must be properly insured <strong>and</strong> declared at customs when transported<br />
across borders<br />
Description: The substantial accumulated effort <strong>and</strong> component value bring the need for insurance.<br />
The component value also brings the need to avoid taxation at each customs border.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: SIREF17<br />
Name: Number of gait rehabilitation devices to be developed<br />
Description: At least one complete integrated system will be developed<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
8.2.3 Assumptions <strong>and</strong> Dependencies<br />
Internal<br />
• Depends on <strong>CORBYS</strong> Functional <strong>Specification</strong><br />
• Depends on <strong>CORBYS</strong> overall output<br />
9 Evaluation (WP9, IRSZR)<br />
• <strong>Specification</strong> of evaluation methodology, benchmarking, metrics, procedures <strong>and</strong> ethical assurance<br />
(TASK 9.1, UR)<br />
• Training on <strong>CORBYS</strong> system (TASK 9.2, IRSZR)<br />
• Continuous assessment of the technology under development (TASK 9.3, IRSZR)<br />
• Evaluation of the researched methods on autonomous robotic system (TASK 9.4, UB)<br />
• Evaluation <strong>and</strong> feedback to development (TASK 9.5, NRZ)<br />
9.1 Evaluation methodology, benchmarking, metrics, procedures <strong>and</strong> ethical<br />
assurance (Task 9.1, UR)<br />
Relevant task:<br />
• <strong>Specification</strong> of evaluation methodology, benchmarking, metrics, procedures <strong>and</strong> ethical assurance<br />
(Task 9.1)<br />
9.1.1 Functional <strong>Requirements</strong><br />
9.1.1.1 Processes<br />
Inputs<br />
Requirement No. UIREF1<br />
Name: Functional <strong>Specification</strong> of <strong>CORBYS</strong><br />
Description: This will inform the identification of metrics <strong>and</strong> procedures to be used in the evaluation<br />
methodology<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. UIREF2<br />
Name: Overall output of <strong>CORBYS</strong> project<br />
Description: The outcomes of the project will be used in the identification of appropriate existing systems<br />
to be used as benchmarks<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: UIREF3<br />
Name: Formulation of evaluation methodology<br />
Description: To follow established methodologies (e.g. UI-REF, Badii 2008) for the identification of<br />
evaluation metrics <strong>and</strong> procedures, i.e. requirements UIREF4, UIREF5 <strong>and</strong> UIREF6<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: UIREF4<br />
Name: Identification of benchmarks<br />
Description: For the purpose of the benchmarking exercise, the <strong>CORBYS</strong> system, viewed generically as a<br />
meta-controller controlling semi-autonomous sub-systems, can be specified with respect to the<br />
identified terms of reference, enabling comparative studies with existing<br />
meta-controllers identified as comparable under those terms of<br />
reference. The work thus entails establishing meta-controller comparators so that the<br />
performance of the <strong>CORBYS</strong> controller can be evaluated <strong>and</strong> compared with other<br />
comparable meta-controllers within a formal benchmarking framework.<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: UIREF5<br />
Name: Identification of evaluation metrics <strong>and</strong> procedures<br />
Description: To identify the evaluation metrics <strong>and</strong> specify the evaluation procedures to be followed in<br />
<strong>CORBYS</strong>.<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: UIREF6<br />
Name: Ethical assurance<br />
Description: To identify the evaluation metrics <strong>and</strong> specify the evaluation procedures to be followed in<br />
<strong>CORBYS</strong> in the context of ethical assurance <strong>and</strong> user involvement.<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Output<br />
Requirement No.: UIREF7<br />
Name: Set of metrics, procedures, benchmarks, ethical assurance <strong>and</strong> evaluation methodology<br />
Description: Comprehensive list of identified evaluation metrics <strong>and</strong> procedures to be followed in<br />
<strong>CORBYS</strong>. For benchmarking, the terms of reference are established by referring to generic properties of the<br />
required control performance:<br />
1. Dynamicity: typical real-time response (e.g. in ms)<br />
2. Complexity of Control: Number of states of each sub-system <strong>and</strong> number of states of<br />
control transition dynamics between the two systems<br />
3. Required Control Envelope: Number of spatio-temporal dimensions of required<br />
control, e.g. linear motion, rotational motion, 2D, 3D, etc<br />
4. Delivered Performance Envelope: Dynamicity, complexity <strong>and</strong> dimensionality levels<br />
of the controller<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
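The four terms of reference listed above can be recorded in a small data structure so that candidate meta-controllers can be compared side by side. An illustrative sketch; the field names and the numeric figures are assumptions, not project specifications.

```python
from dataclasses import dataclass

@dataclass
class ControllerBenchmark:
    """One meta-controller described by the four generic terms of reference."""
    name: str
    response_ms: float   # 1. Dynamicity: typical real-time response
    n_states: int        # 2. Complexity of control: number of states
    control_dims: int    # 3. Required control envelope: spatio-temporal dimensions
    delivered_dims: int  # 4. Delivered performance envelope: dimensions achieved

def meets_envelope(c: ControllerBenchmark, required_ms: float, required_dims: int) -> bool:
    """A controller is comparable if it responds fast enough and covers the required dimensions."""
    return c.response_ms <= required_ms and c.delivered_dims >= required_dims

# Hypothetical figures, for illustration only.
corbys = ControllerBenchmark("CORBYS", response_ms=10.0, n_states=24,
                             control_dims=3, delivered_dims=3)
print(meets_envelope(corbys, required_ms=20.0, required_dims=3))  # True
```

Filling such records for each candidate meta-controller gives the formal benchmarking framework a common basis for comparison.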
9.1.1.2 Interfaces<br />
Requirement No.: UIREF8<br />
Name: Evaluation of <strong>CORBYS</strong> outcomes<br />
Description: The formulated evaluation methodology, identified benchmarks, metrics <strong>and</strong> procedures will<br />
be used in the Evaluation WP<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
9.1.2 Assumptions <strong>and</strong> Dependencies<br />
Internal<br />
• Depends on <strong>CORBYS</strong> Functional <strong>Specification</strong><br />
• Depends on <strong>CORBYS</strong> overall output<br />
9.2 Training on <strong>CORBYS</strong> system (Task 9.2, IRSZR)<br />
The <strong>CORBYS</strong> demonstrator I prototype will be used <strong>and</strong> tested in real application environments by industrial<br />
partners. This process will feed the refinement <strong>and</strong> improvement of the <strong>CORBYS</strong> system. Conclusions regarding<br />
further improvements will be derived by measuring all benefits achieved by the implemented solution.<br />
Training for staff, end-users, <strong>and</strong> evaluators will be carried out.<br />
9.2.1 Functional <strong>Requirements</strong><br />
9.2.1.1 Processes<br />
Inputs<br />
Requirement No. TRAINING1<br />
Name: Overall output of <strong>CORBYS</strong> project<br />
Description: The outcomes of the project will be used for training purposes<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: TRAINING2<br />
Name: Formulation of training strategies<br />
Description: Appropriate training strategies to be devised in collaboration with experts <strong>and</strong> the whole<br />
consortium, so that <strong>CORBYS</strong> solutions are used optimally<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: TRAINING3<br />
Name: Training process<br />
Description: Training of personnel, staff, evaluators, end-users of industrial partners to use relevant<br />
<strong>CORBYS</strong> solutions in their application domains<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Output<br />
Requirement No.: TRAINING4<br />
Name: Conclusions regarding refinement / further improvement of <strong>CORBYS</strong> solutions<br />
Description: Results of training <strong>and</strong> testing will be documented<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
9.3 Continuous assessment of the technology under development (Task 9.3,<br />
IRSZR)<br />
Developed technologies <strong>and</strong> algorithms will be tested for their feasibility, interaction with users <strong>and</strong> expected<br />
performance. This will be done by assessing various sub-components of the system with selected patients in<br />
the form of clinical testing. Different pathologies (e.g., stroke), as selected in the requirements elicitation stage<br />
specific to the first demonstrator, will be considered. The results will provide feedback on ergonomic issues,<br />
the feasibility of sensory <strong>and</strong> actuation subsystems, as well as the overall feasibility of the intelligence<br />
platform that makes decisions on the current stage <strong>and</strong> future directions of rehabilitation.<br />
9.3.1 Functional <strong>Requirements</strong><br />
9.3.1.1 Processes<br />
Inputs<br />
Requirement No. CATD1<br />
Name: Overall output of <strong>CORBYS</strong><br />
Description: Including subsystems <strong>and</strong> the overall system prototype<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: CATD2<br />
Name: Phase I trials<br />
Description: Each specific aspect of the overall system, e.g., powered orthosis, mobile platform, sensory<br />
system <strong>and</strong> cognitive control subsystems, will be tested with able-bodied individuals<br />
simulating specific gait conditions related to various identified pathologies. This is expected<br />
to form the core load of system evaluation; hence it would not be appropriate to overburden<br />
patients with a task that cannot help in their rehabilitation.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: CATD3<br />
Name: Phase II trials<br />
Description: Once the subsystems function adequately, patients falling under the identified pathological<br />
categories will evaluate <strong>and</strong> experimentally validate functioning of the technology under<br />
development. Here the goal will not be to draw any general conclusions on the therapeutic<br />
efficiency of the proposed system but rather to provide sufficient proof-of-concept that may<br />
facilitate randomised controlled trials. Inclusion <strong>and</strong> exclusion criteria will be determined,<br />
<strong>and</strong> a set of reliable clinical outcome measures related to balance <strong>and</strong> walking (10-metre<br />
walk test, 6-minute walk test, etc.) will be selected to determine the possible therapeutic<br />
effects.<br />
In addition, patients will also undergo instrumented gait analysis in the kinesiology<br />
laboratory at IRSZR, helping to determine not only functional improvement but also any<br />
qualitative changes in the kinematic <strong>and</strong> kinetic features of a particular gait.<br />
Patients will negotiate various objects or obstacles within their path; this will allow testing<br />
of the cognitive capability in the system, which aids the patients with obstacle negotiation or<br />
avoidance.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
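The clinical walking measures mentioned in CATD3 reduce to simple speed calculations over fixed distances or times; a minimal sketch with invented session values (the test names are standard clinical measures, the numbers are made up):

```python
def ten_metre_walk_speed(time_s: float) -> float:
    """Gait speed in m/s from the 10-metre walk test (distance fixed at 10 m)."""
    return 10.0 / time_s

def six_minute_walk_speed(distance_m: float) -> float:
    """Average speed in m/s over the 6-minute walk test (360 s)."""
    return distance_m / 360.0

# Invented values for a single session.
print(ten_metre_walk_speed(12.5))    # 0.8 m/s
print(six_minute_walk_speed(270.0))  # 0.75 m/s
```

Tracking such derived speeds across sessions is one simple way the possible therapeutic effects mentioned above can be quantified.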
Output<br />
Requirement No.: CATD4<br />
Name: Feedback on ergonomics <strong>and</strong> feasibility<br />
Description: Testing <strong>and</strong> validation will result in feedback for engineers to optimise the developed<br />
solutions in terms of feasibility <strong>and</strong> ergonomics<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
9.4 Evaluation of the researched methods on the second demonstrator (Task<br />
9.4, UB)<br />
Task: The generalisability of the developed <strong>CORBYS</strong> cognitive techniques will be evaluated on<br />
the second demonstrator, a robotic system for the investigation of contaminated / hazardous<br />
environments.<br />
9.4.1 Functional <strong>Requirements</strong><br />
9.4.1.1 Processes<br />
Inputs<br />
Requirement No. EASD1<br />
Name: Detailed specification of the evaluation scenarios<br />
Description: The robot working scenarios appropriate for evaluation of the <strong>CORBYS</strong> cognitive modules have<br />
to be designed. It must be specified in detail which aspects of the cognitive modules will be<br />
evaluated with the second demonstrator.<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. EASD2<br />
Name: Integration (wrapping) of modules of existing control architecture (e.g. robot arm control<br />
module) of the second demonstrator into <strong>CORBYS</strong> control architecture.<br />
Description: High level <strong>CORBYS</strong> cognitive control modules to be evaluated on existing robotic system for<br />
examining hazardous environments.<br />
Reason / Comments: A detailed strategy will be defined later. It depends on the requirements of the <strong>CORBYS</strong><br />
control (software) architecture. To be discussed in WP6 Cognitive Control<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. EASD3<br />
Name: Analysis of usability of existing sensors for <strong>CORBYS</strong> demonstration<br />
Description: Existing sensors (e.g. vision sensors) are to be analysed in the <strong>CORBYS</strong> context<br />
Reason / Comments: It is to be tested whether the existing sensors can provide the necessary information for the<br />
cognitive modules.<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: EASD4<br />
Name: Situation Awareness<br />
Description: Situation Awareness input to semi-autonomous (autonomous) robot control<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: EASD5<br />
Name: Identification <strong>and</strong> anticipation of human co-worker purposeful behaviour<br />
Description: SOIAA input to semi-autonomous (autonomous) robot control for taking over the initiative<br />
when needed<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: EASD6<br />
Name: Extension of existing sensor modules (e.g. vision)<br />
Description: Extension of the functionality of existing sensor modules, such as the vision module, with<br />
human tracking <strong>and</strong> action recognition, to be done by UB if needed. To be<br />
specified in detail with partners<br />
Reason /<br />
Comments:<br />
Indicative priority Desirable<br />
Outputs<br />
Requirement No.: EASD7<br />
Name: Evaluation of <strong>CORBYS</strong> cognitive control architecture on the existing robotic system<br />
Description: Demonstration of generality of <strong>CORBYS</strong> cognitive control architecture<br />
Reason / Comments: Partial <strong>and</strong> full testing TBD<br />
Indicative priority M<strong>and</strong>atory<br />
9.5 Evaluation <strong>and</strong> feedback to development (Task 9.5, NRZ)<br />
This task will undertake evaluation of the developed solutions <strong>and</strong> present conclusions <strong>and</strong> findings that feed<br />
back into the process of developmental refinement, improvement <strong>and</strong> re-adjustment where necessary, following<br />
an evolutionary approach.<br />
9.5.1 Functional <strong>Requirements</strong><br />
9.5.1.1 Processes<br />
Inputs<br />
Requirement No. EVAL1<br />
Name: Evaluation methodologies <strong>and</strong> strategies<br />
Description: Evaluation methodologies <strong>and</strong> strategies as set out in requirement UIREF3<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. EVAL2<br />
Name: Evaluation results of demonstrators I <strong>and</strong> II<br />
Description: Evaluation results of demonstrators I <strong>and</strong> II as output of tasks 9.3 <strong>and</strong> 9.4.<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No. EVAL3<br />
Name: Evaluation of demonstrator I<br />
Description: Training <strong>and</strong> evaluation of <strong>CORBYS</strong> demonstrator I (trials with patients in NRZ).<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Processing<br />
Requirement No.: EVAL4<br />
Name: Evaluation of <strong>CORBYS</strong> solutions<br />
Description: Evaluation of technologies, algorithms, subsystems, components developed as part of<br />
<strong>CORBYS</strong> project<br />
Reason / Comments: Evaluation of <strong>CORBYS</strong> solutions<br />
Indicative priority M<strong>and</strong>atory<br />
Requirement No.: EVAL5<br />
Name: Evaluation of demonstrator I<br />
Description: Continuous assessment of the technology under development at UB <strong>and</strong> NRZ<br />
Reason / Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
Output<br />
Requirement No.: EVAL6<br />
Name: Results of evaluation process<br />
Description: The outcomes of the evaluation process will be documented, <strong>and</strong> used for refinement <strong>and</strong><br />
improvement of technologies.<br />
Reason /<br />
Comments:<br />
Indicative priority M<strong>and</strong>atory<br />
State-of-the-Art in Enabling Technologies<br />
From a <strong>CORBYS</strong>-specific technology needs viewpoint:<br />
10 State-of-the-Art in Sensors <strong>and</strong> Perception (SINTEF)<br />
10.1 Introduction to sensors <strong>and</strong> perception<br />
10.1.1 Document Scope <strong>and</strong> Purpose<br />
The purpose of this section is to describe the current state-of-the-art with respect to sensing in order to understand<br />
the human in human-robot interfaces. The section does not take into account sensor principles directed<br />
towards interpreting signals from the brain. The brain-computer interface state-of-the-art is presented in<br />
section 15.<br />
10.2 Sensor principles for perception in human-robot interaction<br />
This section reviews the state-of-the-art of sensor principles underlying the understanding of the human in<br />
human-robot interactions. The section has been grouped as follows according to the type of information<br />
extracted:<br />
• Physiological sensors, which provide information about human physiology<br />
• Inertial measurement units, which provide inertial information for specific parts of the human body<br />
• Mechanical sensors, which provide information about contact forces, force distribution (pressure)<br />
<strong>and</strong> information about absolute <strong>and</strong> angular positions of specific body parts.<br />
• Environmental sensors, which measure the (physical) environment of the human-robot system -<br />
such as collision avoidance sensors<br />
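The four sensor groups above can be sketched as a small taxonomy; the concrete sensor names in the mapping below are illustrative assumptions, not the project's sensor inventory:

```python
from enum import Enum

class SensorCategory(Enum):
    """The four sensor groups used in this review."""
    PHYSIOLOGICAL = "information about human physiology"
    INERTIAL = "inertia of specific body parts"
    MECHANICAL = "contact forces, pressure, body-part positions"
    ENVIRONMENTAL = "physical environment of the human-robot system"

# Hypothetical mapping of concrete sensor types onto the taxonomy.
CATEGORY_OF = {
    "heart_rate": SensorCategory.PHYSIOLOGICAL,
    "imu": SensorCategory.INERTIAL,
    "force_plate": SensorCategory.MECHANICAL,
    "proximity": SensorCategory.ENVIRONMENTAL,
}
print(CATEGORY_OF["imu"].name)  # INERTIAL
```

Such a taxonomy is convenient when routing sensor readings to the appropriate processing modules of a control architecture.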
The human-robot interface in a gait rehabilitation system has a number of characteristics that need to be<br />
considered when analysing the state-of-the-art in human sensing concepts <strong>and</strong> their relevance in the <strong>CORBYS</strong> setting:<br />
• Sensors should be safe to use. The sensor readouts will feed into a robotic control system. It is<br />
therefore crucial that the sensor principles are capable of reliably delivering correct values, or,<br />
alternatively, detecting failure situations so that erroneous sensor readings do not cause harmful robotic<br />
actions. Further, the sensors should not cause other health or safety concerns during the time the<br />
patient uses the <strong>CORBYS</strong> system. In this respect, it is anticipated that the <strong>CORBYS</strong> system will be<br />
used in rehabilitation training sequences lasting from several minutes to a few hours, <strong>and</strong> that the<br />
sensor principles should be compatible with safe <strong>and</strong> harmless operation during this time frame.<br />
• <strong>CORBYS</strong> patients are monitored while doing physical activities of variable intensities. Sensor<br />
concepts that have the potential of being used while the patient is carrying out physical activities are<br />
therefore necessary. Many clinical physiological measurement systems are based on evaluating the<br />
patient while they are resting, for example while sitting down or lying in a bed. Physical activity<br />
easily introduces movement artefacts <strong>and</strong> noise from, for example, muscle activation, which lower the<br />
accuracy of measurements.<br />
• The benefits of using the <strong>CORBYS</strong> system should not be outweighed by the hassle of using it. The<br />
two main users of the <strong>CORBYS</strong> system are the rehabilitation patient <strong>and</strong> the rehabilitation therapist.<br />
From these two user perspectives it should be easy <strong>and</strong> quick to set up the system<br />
<strong>and</strong> configure the patient in the gait rehabilitation system. It is therefore of interest to<br />
lower the number of interfaces (such as patches <strong>and</strong> cables) that must be attached to the patient, in<br />
particular components that must interface with the patient’s body directly. From the therapist’s perspective,<br />
it should also be pointed out that the need for a considerable number of sensors attached at multiple<br />
locations on the body adds to the risk of placing sensors in wrong locations, such as mixing the left<br />
<strong>and</strong> right side of the body. In the case of <strong>CORBYS</strong>, it is therefore of interest to consider ways of i)<br />
effectively combining multiple sensors in a limited number of unobtrusive devices, ii) reducing the<br />
number of cables required, or iii) introducing solutions allowing wireless communication of sensor<br />
readings. Due to user comfort, safety <strong>and</strong> unobtrusiveness requirements, this review does not<br />
consider invasive or implanted sensors (sensors puncturing the human skin), breath gas analysis<br />
sensor principles (which require sensors or a mask in front of the mouth <strong>and</strong> nose) or body fluid<br />
analysis concepts.<br />
10.2.1 Physiological sensor concepts<br />
Physiological sensors aim to measure aspects of the human physiology.<br />
10.2.1.1 Heart rate<br />
The heart rate is the rate at which the heart beats. As a result of the heart contractions, blood flows through the<br />
arteries, causing a measurable pulse throughout the body. In normal situations the heart rate and the pulse are<br />
the same, but the distinction between pulse and heart rate is made because there are situations in which the heart<br />
is beating but is not able to pump blood. The heart rate increases with increasing physical effort, and it<br />
can also increase due to psychological stress.<br />
The electrical signals caused by nerve signals in muscles, and in particular the signals associated with<br />
periodic contractions of the heart, have been known for more than a hundred years. Willem Einthoven was<br />
awarded a Nobel Prize in 1924 for his contributions to the field of electrocardiography, see for example<br />
Rivera-Ruiz et al (2008), see Figure 15.<br />
Figure 15: Willem Einthoven's setup for measuring the ECG signals (left)<br />
and the resulting observed schematic PQRST heart signature.<br />
It is interesting to note that, apart from some changes in electrode positions, the measurement is essentially the<br />
same today. While Einthoven used buckets filled with salt solutions as electrodes, the main technology<br />
development has taken place within the field of electrode technology. Skin electrode measurements are,<br />
however, associated with substantial problems due to physical movements, which cause large DC offsets<br />
in the measured signals due to polarisation effects, as illustrated in Figure 16. Further, noise is<br />
introduced by any muscular activity, making electrode measurements more challenging for<br />
patients who are physically active.<br />
Figure 16: Illustration of the ECG signal <strong>and</strong> the effect of an abrupt mechanical disturbance.<br />
A full diagnostic (12 lead) ECG investigation, aiming at identifying heart disease conditions, employs 10<br />
electrodes placed on the body as shown in Figure 17. Based on the mapping of the resulting curves, it is<br />
possible to identify malfunctions of the heart, for example tissue damaged by infarction. In addition to 12<br />
lead ECG, which aims at geometrical mapping of the heart's electrical signals, other diagnostic ECG measurements<br />
are done to characterise various heart rate variability conditions.<br />
Figure 17: 12 lead ECG<br />
Within CORBYS, it is not an objective to identify diagnostic heart condition information. Rather, CORBYS<br />
will employ the heart rate as an ingredient in the assessment of physical effort, and possibly as a co-indicator<br />
of intended actions or psychophysiological states. As CORBYS deals with users engaged in activity, it is<br />
essential to select electrodes and measurement configurations that allow accurate monitoring while the user is<br />
active. The most common, and also potentially most accurate, method used on active people is based on<br />
simplified ECG measurements using only two or three electrodes to extract the heart rate.<br />
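As an illustration of how a heart rate could be derived from such a simplified ECG, the sketch below applies naive threshold-based R-peak detection to a sampled signal. This is a hypothetical example, not a CORBYS algorithm; real detectors (e.g. Pan-Tompkins) add filtering to cope with the movement artefacts discussed above.

```python
def detect_heart_rate(ecg, fs, threshold_ratio=0.6, refractory_s=0.25):
    """Estimate heart rate (bpm) from a single-lead ECG by simple
    threshold-based R-peak detection (illustrative only)."""
    threshold = threshold_ratio * max(ecg)
    refractory = int(refractory_s * fs)   # ignore peaks closer than this
    peaks = []
    last = -refractory
    for i, v in enumerate(ecg):
        if v >= threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return None
    # mean R-R interval in seconds -> beats per minute
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# synthetic ECG: an R "spike" every 0.8 s (75 bpm) sampled at 250 Hz
fs = 250
ecg = [0.0] * (10 * fs)
for k in range(0, len(ecg), int(0.8 * fs)):
    ecg[k] = 1.0
print(round(detect_heart_rate(ecg, fs)))  # 75
```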
In clinical applications, AgCl terminated electrodes are frequently used. These are glued, attached or patched<br />
to the patient's skin. Silver chloride terminated electrodes can cause skin irritation with prolonged use.<br />
Further, they rely on adhesive patches, which can be uncomfortable to remove after use.<br />
Commercial heart rate monitors used in sports and fitness usually employ electrodes manufactured from<br />
elastic, conductive textile, or conductive rubber or polyurethane. The textile electrodes are backed by soft<br />
rubber or similar to increase pressure against the skin, improving electrical contact. A disadvantage of all<br />
these electrode versions is that direct mechanical and electrical contact with the body is required at all times.<br />
The use of capacitive, "non-contact" electrodes has also been demonstrated, but these cannot compete with<br />
the performance of contact electrodes at the present time.<br />
Figure 18: ECG electrodes. To the left is an example of disposable electrodes<br />
(Ambu Centre Snap ECG Electrode 1 ), in the middle a Garmin sports belt for heart rate monitoring, <strong>and</strong> to the right capacitive<br />
electrodes from Quasar 2 .<br />
10.2.1.2 Electromyography (EMG)<br />
Electromyography is a technique for evaluating and recording the electrical activity produced by skeletal<br />
muscles. Skin surface electrodes can be used to record signals from the muscles. The distance from the skin<br />
to the muscles of interest in CORBYS is short and the muscle groups are large. The muscle activity is related<br />
to the root mean square (rms) value of the signal. There are commercially available systems, e.g. from Plux 3<br />
and Biometrics Ltd 4 (Figure 19). In these systems several sensors are connected to a connection system that<br />
can send the recorded data to a computer. Both wired and wireless systems exist. Sensors and a system<br />
from Biometrics Ltd are shown in Figure 19. Capacitive, "non-contact" electrodes have also been<br />
demonstrated for EMG, e.g. by Quasar 2 .<br />
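The rms value mentioned above is typically computed over short windows of the raw signal; the following is a minimal sketch (the window length and names are illustrative, not taken from any specific EMG system):

```python
import math

def emg_rms_envelope(signal, fs, window_s=0.1):
    """Non-overlapping sliding-window RMS of a raw EMG signal; the RMS
    over each short window summarises the muscle activity level."""
    n = max(1, int(window_s * fs))
    return [
        math.sqrt(sum(v * v for v in signal[i:i + n]) / n)
        for i in range(0, len(signal) - n + 1, n)
    ]

# a burst of activity (amplitude 1) between two quiet segments
fs = 1000
emg = [0.0] * fs + [1.0, -1.0] * (fs // 2) + [0.0] * fs
envelope = emg_rms_envelope(emg, fs)
print(envelope[0], envelope[15])  # quiet window -> 0.0, active window -> 1.0
```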
Figure 19: The left images show the Biometrics Ltd EMG monitoring system.<br />
The right graph shows extensor EMG <strong>and</strong> 3 repetitions of maximum grip strength of a normal subject. The EMG trace is the<br />
raw data.<br />
10.2.1.3 Electrodermal response (EDR)<br />
Electrodermal response (EDR) is a method of measuring the electrical conductance of the skin, which varies<br />
with its moisture level. EDR measurements have for many years been based on DC voltage or current, and<br />
the method has therefore also been termed galvanic skin response (GSR). EDR is of interest because the<br />
sweat glands are controlled by the sympathetic nervous system, so skin conductance is used as an indication<br />
of psychological or physiological arousal. Skin conductance levels do not correlate directly with sweat<br />
production or evaporation from the skin, but rather with sweat activity, i.e. the filling of the sweat ducts and<br />
the reabsorption process.<br />
1 http://63.134.192.29/cart/Results.cfm?category=16<br />
2 http://www.quasarusa.com/<br />
3 www.plux.info/EMG<br />
4 www.biometricsltd.com/<br />
Prototypes for measuring electrodermal activity (sweat) have been made and reported [Tronstad et al,<br />
2008] [Poh et al, 2010]. There are also some commercial systems available, e.g. from Plux 5 .<br />
10.2.1.4 Humidity/sweat sensors<br />
In addition to the information obtained from EDR type measurements, it is also possible to apply humidity<br />
sensors that do not have to be in direct contact with the skin of the patient. Humidity sensors can be used to<br />
measure relative humidity inside the clothing as an indication of sweating, and methods using two such sensors<br />
to measure sweat rate have also been reported [Salvo et al, 2010]. Humidity sensors are often combined<br />
with temperature sensors to measure relative humidity (e.g. SHT21 6 ). Relative humidity is often used instead of<br />
absolute humidity in situations where the rate of water evaporation is important. SINTEF has carried out<br />
climate chamber testing of a jacket with integrated relative humidity sensors. Changes in relative humidity<br />
correlate with changes in heart rate and tie in well with subjectively reported sweat activity [Seeberg et<br />
al, 2011 (submitted)]. A humidity sensor does not need to be glued to the skin.<br />
10.2.1.5 Respiration<br />
The respiration rate is the number of breaths within a certain amount of time, often breaths per minute. The<br />
ventilation rate is a related parameter, specifically addressing the flow of gas entering or leaving the lungs,<br />
with measurement units of volume per time. While respiration can be measured outside the patient's chest,<br />
measurement of ventilation requires the use of a breathing tube or spirometer. Ventilation measurements<br />
are therefore considered too obtrusive for the CORBYS patient.<br />
A change in respiratory rate or volume might be an indication of physical or psychological stress, and hence<br />
might be a useful input to the CORBYS system. An individual can consciously adjust his/her respiration<br />
rate, and the rate is also affected by physical or mental activities, talking and so forth; it is therefore difficult to<br />
use the respiration rate as a single indicator of physiological state. In CORBYS the motion of the thoracic<br />
cage can be measured by impedance plethysmography (with electrodes on both sides), by inductive<br />
plethysmography (an inductive sensor whose inductance changes when stretched across the chest or<br />
abdomen), by piezoresistive sensors (whose resistance changes when stretched across the chest or abdomen),<br />
or by capacitive displacement sensors (measuring the change in area between two capacitor plates that slide parallel<br />
to one another). There are commercially available systems that also report the respiratory rate (like the<br />
Hidalgo Equivital system 7 ), and more stand-alone sensors like the respPlux 8 .<br />
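Whichever chest-motion sensor is used, the respiratory rate can be extracted from the quasi-periodic expansion signal, for example by counting rising zero crossings of the mean-removed signal. A minimal illustrative sketch (a real implementation would band-pass filter the signal first):

```python
import math

def breaths_per_minute(chest_signal, fs):
    """Respiratory rate from a chest-expansion signal: remove the mean,
    then count rising zero crossings (one per breath cycle)."""
    mean = sum(chest_signal) / len(chest_signal)
    s = [v - mean for v in chest_signal]
    crossings = sum(1 for a, b in zip(s, s[1:]) if a < 0 <= b)
    duration_min = len(chest_signal) / fs / 60.0
    return crossings / duration_min

# synthetic chest signal: 0.25 Hz breathing (15 breaths/min) for 60 s
fs = 10
chest = [math.sin(2 * math.pi * 0.25 * t / fs + 0.3) for t in range(60 * fs)]
print(breaths_per_minute(chest, fs))  # ~15 breaths per minute
```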
10.2.1.6 Pulse oxygen saturation<br />
The oxygen content of arterial blood can be measured by optical techniques. The peripheral oxygen<br />
saturation (SpO2) is a measure of the percentage of haemoglobin binding sites in the bloodstream occupied by<br />
oxygen. Since it probes the arterial blood, the pulse can also be extracted from the measurements. SpO2<br />
measurements are easily affected by movements, and measurements can only be carried out on certain areas<br />
like the earlobe and finger tip (using transmission photo-plethysmography). In normal circumstances the<br />
oxygen level is between 95% and 99%, and it is not correlated with psycho-physiological states. SpO2 is<br />
continuously monitored whenever a patient's oxygenation may be unstable, as in intensive care, critical care<br />
and emergency department areas of a hospital. SpO2 is also checked periodically in the case of congestive<br />
heart failure (CHF) and chronic obstructive pulmonary disease (COPD) patients. For CORBYS patients,<br />
oxygenation is usually not an issue. It might also be a problem to do accurate measurements during gait<br />
5 www.plux.info/EDA<br />
6 www.sensirion.no<br />
7 www.equivital.co.uk<br />
8 www.plux.info/resp<br />
rehabilitation.<br />
Most SpO2 sensors are disposables that are glued to the patient. Validated products sending SpO2 and<br />
heart rate data via the Continua standard Bluetooth protocol are also available (e.g. Nonin Onyx II 9560 9 ).<br />
Examples of both disposable and reusable sensors are shown in Figure 20.<br />
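For reference, pulse oximeters derive SpO2 from the pulsatile (AC) and baseline (DC) components of light absorbed at red and infrared wavelengths; a commonly cited textbook linearisation is SpO2 ≈ 110 − 25·R, where R is the "ratio of ratios". The sketch below uses that approximation purely for illustration; real devices rely on device-specific empirical calibration curves.

```python
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """SpO2 (%) from the ratio-of-ratios R, using the common textbook
    linear approximation SpO2 = 110 - 25*R (illustrative only)."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r

print(spo2_estimate(0.02, 1.0, 0.04, 1.0))  # R = 0.5 -> 97.5 %
```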
Figure 20: Nonin Onyx II 9560 and the disposable 7000A Adult sensor<br />
10.2.1.7 Blood pressure<br />
Blood pressure (BP) is the pressure exerted by circulating blood upon the walls of blood vessels. A person's<br />
BP is usually expressed as the systolic pressure over the diastolic pressure (mmHg). The term blood<br />
pressure usually refers to the pressure measured at a person's upper arm (at the level of the heart). It is<br />
measured on the inside of an elbow at the brachial artery, the upper arm's major blood vessel, which<br />
carries blood away from the heart. The clinical gold standard blood pressure measurement is, however, done<br />
invasively.<br />
Blood pressure is influenced by many factors, such as diet, activity level, exercise, disease, drugs or alcohol,<br />
stress, obesity <strong>and</strong> so-forth. Further, it is a parameter that varies throughout the body depending upon gravity,<br />
blood viscosity, artery flexibility <strong>and</strong> several other parameters.<br />
Blood pressure measurement is usually done as a point measurement during a consultation with a medical<br />
professional. It is based on either palpation, auscultatory measurements or oscillometric measurements. In each<br />
case, the measurement involves determining the external pressure at which the blood flow in an artery is<br />
significantly constrained. These measurements are therefore not suitable for<br />
continuous, long term monitoring.<br />
Simple, continuous and comfortable monitoring of blood pressure is currently not possible using<br />
commercially available products [Kirstein et al, 2005] [Sola et al, 2011]. One promising research direction is<br />
the analysis of the arterial pulse wave velocity, which can be measured without physically constraining an<br />
artery or vein. This research is however at an early stage, and challenged by movement artefacts, so it is not<br />
seen as a realistic alternative for BP measurements in CORBYS.<br />
10.2.1.8 Skin/Core temperature<br />
The clinical interest in core temperature measurements is related to illness or hypo- or hyperthermia. The<br />
skin temperature varies widely and is strongly affected by the thermal environment. Both skin and core<br />
temperature will increase during physical effort, but the human thermal regulation system will effectively limit<br />
the temperature changes. Both skin and core temperature are slowly varying parameters. Skin temperature can<br />
be measured with different types of temperature sensors, such as thermocouple, resistance, or infrared<br />
emission sensors placed in contact with the patient's skin. The core temperature is typically measured using a rectal<br />
9 www.nonin.com<br />
probe or by infrared emission measurements in the ear.<br />
10.2.2 Inertial measurement units (IMU)<br />
An inertial measurement unit (IMU) uses inertial sensors such as movement sensors (accelerometers) and<br />
rotation sensors (gyroscopes), possibly in combination with other movement sensors. One frequent example<br />
is magnetometers, which help orient the tracked object with respect to the earth's<br />
magnetic field. In order to be able to track objects, an IMU must have accelerometer and gyroscope sensing<br />
in at least as many degrees of freedom as the object has; typically three axes of acceleration and three axes of<br />
rotation.<br />
Accelerometers and gyroscopes measure changes of inertia during movements and rotations. For<br />
accelerometers, the measured values are a direct measure of force via Newton's second law, F = m·a.<br />
The equation also shows that if the acceleration measurement is integrated once with respect to time, one gets the<br />
velocity, and by integrating a second time, one gets the position. The same can be done for gyroscope<br />
measurements, where the measured angular rate (degrees/second) can be integrated to yield<br />
the absolute angle.<br />
It is however important to be aware that there are several challenges in obtaining accurate measurements.<br />
First, as these are integrated measurements, any measurement error, such as an offset, a drift or a scaling error,<br />
will be integrated and can become substantial. In order to perform correct integration of the accelerometer<br />
data it is important to know the direction of gravity at all times. The magnetometer is valuable for<br />
providing this orientation information. Further, it is crucial to know the initial position from which the<br />
integration process is started. In the CORBYS setting, it will therefore be important to establish "home"<br />
positions where the locations of the various IMUs are known.<br />
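The integration problem can be illustrated numerically: doubly integrating even a tiny constant accelerometer bias b produces a position error of roughly ½·b·t², which quickly becomes large. A hypothetical worked example:

```python
def position_error_from_bias(bias_mps2, duration_s, dt=0.01):
    """Doubly integrate a constant accelerometer bias (Euler steps) to
    show how a small offset error grows quadratically with time."""
    v = x = 0.0
    for _ in range(round(duration_s / dt)):
        v += bias_mps2 * dt   # first integration: velocity error
        x += v * dt           # second integration: position error
    return x

# a bias of only 0.01 m/s^2 (about 1 milli-g) integrated for one minute
print(position_error_from_bias(0.01, 60.0))  # ~18 m of position drift
```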
IMUs are essential components in robotics and are also widely used in navigation systems. For human<br />
measurements, it is probably most convenient to select a compact, integrated IMU suitable for attachment to<br />
the human body. An example is the xSens MTx 10 device, which offers 3D acceleration, 3D rotation and 3D<br />
magnetometer measurements in an integrated unit weighing 30 g, intended for biomechanics applications.<br />
10 http://www.xsens.com/en/general/mtx<br />
Figure 21: The xSens MTx inertial monitoring unit offering simultaneous<br />
3 axis measurements of movements, rotations <strong>and</strong> magnetic field.<br />
10.2.3 Mechanical sensors<br />
10.2.3.1 Pressure/Force sensors<br />
Force and force distribution (pressure) sensors are widely used in biomechanics, and in gait analysis in<br />
particular. There are several flexible, thin force and pressure sensors on the market, e.g. from<br />
Tekscan 11 . Pressure sensors comprise numerous individual sensing elements, and a<br />
corresponding sensor map and complex software are necessary to decode the signal. Tekscan offers,<br />
for example, in-shoe systems, as shown in Figure 22. The complex pressure sensors might be difficult to use<br />
for real-time evaluation and feedback in the CORBYS gait rehabilitation system. Force sensors can detect<br />
and measure a relative change in force or applied load on a surface, detect and measure the rate of<br />
change in force, identify force thresholds and trigger appropriate action, and detect contact and/or<br />
touch. Tekscan standard force sensors are shown in the far right image in Figure 22.<br />
Figure 22. Tekscan F-Scan® System,<br />
an in-shoe plantar pressure analysis system (left and middle-left), FlexiForce A201 sensors, standard lengths from 19.7 cm to 5.1<br />
cm, with a sensing area diameter of 0.95 cm (middle-right), and the FlexiForce A401 sensor with a sensing area diameter of 1 cm.<br />
10.2.3.2 Joint angular sensing<br />
Goniometers are devices that transform an angular position into a proportional electrical signal.<br />
Commercial sensors for biomechanics applications are available, like the Plux 12 goniometer angelPlus, and the<br />
twin or single axis goniometers from Biometrics Ltd 13 (Figure 23). A torsiometer detects rotation along a shaft<br />
by measuring the twist over a given length of the shaft. Single axis torsiometers are designed for measurement<br />
of rotations in one plane, e.g. forearm pronation/supination or neck axial rotation. Torsiometers are available<br />
from e.g. Biometrics Ltd (Figure 23). All these sensors must be attached to the skin using adhesive.<br />
11 http://www.tekscan.com/<br />
12 www.plux.info/angle<br />
13 www.biometricsltd.com/<br />
Figure 23: Goniometers from Plux (left) <strong>and</strong> Biometrics Ltd<br />
(twin axis in middle left, single axis in middle right, single axis torsiometers to the right).<br />
10.2.4 Environmental sensing<br />
Beyond the human-robot interface, the CORBYS gait rehabilitation system will not need extensive<br />
environment sensing. The main needs are related to collision avoidance and to ensuring a safety<br />
zone for the system users. This would most likely be accomplished by using simple optical (laser beam) or<br />
mechanical switching systems to prevent the system from colliding with obstacles or falling down stairs.<br />
10.2.5 Physiological multi sensor devices<br />
Several combined sensor systems measuring multiple parameters exist. Validated chest belts sending<br />
heart rate data via a documented Bluetooth protocol are also commercially available, e.g. the Hidalgo Equivital<br />
system 14 , shown in Figure 24. This system measures heart rate, respiratory rate, chest skin temperature,<br />
activity and posture, and also has a fall detection algorithm as well as a Physiological Welfare Indication based on<br />
heart rate and respiratory rate.<br />
Figure 24: Hidalgo Equivital measuring unit (left) <strong>and</strong> belt (right)<br />
SINTEF has also developed a sensor belt sending data over Bluetooth to a cell phone (which in turn sends<br />
data to a server so that a nurse may follow the patient). The system is to be tested in home monitoring of<br />
patients with congestive heart failure in the US in autumn 2011. The chest unit is shown in Figure 25, and<br />
includes a simple ECG (heart rate), a 3D accelerometer, a 3D gyroscope and an IR skin temperature sensor.<br />
Figure 25: SINTEF ESUMS chest unit belt<br />
10.3 Interpretation of multiple sensor signals<br />
The development of sensor fusion algorithms has for several years been viewed as an important perceptual<br />
activity in mobile robotics [Murphy, 1996]. Sensor fusion is a term used to describe how multiple<br />
independent sensors are combined to extract and refine information not available from single sensors alone.<br />
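One of the simplest examples of such fusion, relevant to the IMUs discussed in Section 10.2.2, is a complementary filter that blends a drifting-but-smooth integrated gyroscope angle with a noisy-but-drift-free accelerometer tilt estimate. A minimal sketch (the weighting constant and names are illustrative):

```python
def complementary_filter(accel_angles, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer tilt estimates (noisy, no drift) with gyroscope
    rates (smooth, but drifting when integrated) into one angle track."""
    angle = accel_angles[0]
    fused = [angle]
    for acc, rate in zip(accel_angles[1:], gyro_rates[1:]):
        # integrate the gyro, but keep anchoring to the accelerometer
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        fused.append(angle)
    return fused

# true tilt is 10 degrees; the gyro reports a pure 0.5 deg/s bias
dt, steps = 0.01, 2000
accel = [10.0] * (steps + 1)
gyro = [0.5] * (steps + 1)
fused = complementary_filter(accel, gyro, dt)
drift_only = 10.0 + gyro[-1] * dt * steps   # naive gyro integration: 20 deg
print(round(fused[-1], 2), drift_only)      # fused estimate stays near 10
```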
14 www.equivital.co.uk<br />
In the CORBYS context, perception sensing will be used for the following purposes:<br />
• User safety<br />
• Physical effort<br />
• Characterisation of gait parameters<br />
• Psycho-physiological states<br />
• User intention<br />
With few exceptions, the extraction of meaningful parameters in these cases requires a multi sensor approach and<br />
cannot be achieved with single sensors alone. Compared to other gait rehabilitation research, CORBYS is<br />
in a unique position in being able to combine physiological sensors with EEG information. Below, we<br />
outline what type of sensor information might be of interest within CORBYS to assess the gait rehabilitation<br />
session and to provide sensory input to the CORBYS cognitive robotic control algorithms.<br />
Ensuring user safety:<br />
For user safety it is important to ensure that the positions of the extremities and the joint angles stay within<br />
reasonable limits. The CORBYS gait rehabilitation system will therefore monitor:<br />
• Mechanical switches<br />
• Consistency between measured body positions and robot actuator positions<br />
Assessing physical effort:<br />
Physical effort will tell the therapist about the workload of the patient, and may also be input to the cognitive<br />
control system. For assessing physical effort the following parameters are judged most valuable:<br />
• Heart rate<br />
• Respiratory rate<br />
• Activity information<br />
• Force sensors<br />
• Temperature<br />
• Sweat/humidity<br />
• EMG<br />
• EEG<br />
• Input from robot actuators<br />
Characterisation of gait<br />
Characterisation of movements is also of interest. Characterisation of biomechanics is already a large field, with<br />
applications ranging from gait rehabilitation [Avor and Sarkodie-Gyan, 2009] to technical analysis within the<br />
field of sport [Myklebust et al, 2011].<br />
Within the CORBYS setting we consider the combination of these parameters most promising in the<br />
characterisation of gait:<br />
• Inertial measurement unit information<br />
• Force sensors<br />
• Angular/torque measurements<br />
• EMG<br />
• Input from robot actuators<br />
Identifying psycho-physiological states<br />
Extensive research effort has been put into the field of physiological computing, the term used to<br />
describe any computing system that uses real-time physiological data as an input stream to control the user<br />
interface 15 . The review article Fundamentals of Physiological Computing [Fairclough, 2009] summarises<br />
some of the challenges and the complexity in developing a physiological computing system that employs a<br />
real-time measure of psycho-physiology to communicate the physiological state of the user to an adaptive<br />
system.<br />
As summarised in this article, the physiological state of the user has been represented, e.g., as a one-dimensional<br />
continuum of frustration, anxiety, task engagement or mental workload, or as a two-dimensional space of activation<br />
and valence. The detection of negative emotions may be particularly relevant for computing applications<br />
designed to aid learning [Picard et al, 2004]. Heart rate is one of the most common parameters used to detect<br />
stress or the affective state [Rani et al, 2002], together with EMG, EDR and facial expression [Rani et al,<br />
2004] [Kulic and Croft, 2007].<br />
Within the CORBYS setting we consider the combination of these parameters most promising in order to<br />
identify psycho-physiological states:<br />
• Heart rate<br />
• EMG<br />
• EDR<br />
• EEG<br />
Identifying intention<br />
CORBYS has a vision of being able to identify and help the user carry out his/her intentions. This is<br />
however very challenging, since there are few or no clear manifestations of intention in physiological<br />
measurements, and the sensor information is blurred by all the other factors impacting the measurements. It is<br />
therefore probably wise to limit the ambitions to identifying whether the patient wants to carry on<br />
making a cyclic movement, such as walking, or whether the patient wants to stop. For this purpose the<br />
following CORBYS components can be considered:<br />
1. EEG<br />
2. EDR<br />
3. EMG<br />
4. Heart rate<br />
10.4 Summary on technology gaps <strong>and</strong> priorities for development in <strong>CORBYS</strong><br />
The SOA analysis shows that many sensor concepts are developed and relatively mature. There is therefore<br />
only a limited need to develop entirely new sensor concepts. The best path for innovation in the field is<br />
a clever combination of existing sensor concepts. The combination can either be a physical<br />
integration of different sensor concepts, or a combination of sensor data in order to come up with<br />
15 http://www.physiologicalcomputing.net/<br />
innovative <strong>and</strong> informative new knowledge about the state <strong>and</strong> condition of the monitored subject. The<br />
technology gaps can therefore be separated into a hardware part <strong>and</strong> a software part.<br />
With respect to hardware, the challenge is to build the most relevant sensors into a system that:<br />
• Is simple and safe to use<br />
• Effectively combines and integrates multiple sensors in integrated devices<br />
• Provides a shared sensor interface that enables accurate time stamping and simultaneous analysis of<br />
data<br />
• Handles a suitable mix of wired and wireless sensors<br />
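As a sketch of what such a shared, time-stamped interface could provide, the snippet below merges independently sampled sensor streams into one globally time-ordered stream for simultaneous analysis. All names and fields are hypothetical, not a CORBYS specification:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class SensorSample:
    """One time-stamped reading on the shared interface; ordering is by
    timestamp only (channel and value are excluded from comparison)."""
    timestamp_ms: int
    channel: str = field(compare=False)
    value: float = field(compare=False)

def merge_streams(*streams):
    """Merge per-sensor streams (each already time-ordered) into a single
    time-ordered stream, enabling simultaneous multi-sensor analysis."""
    return list(heapq.merge(*streams))

heart = [SensorSample(0, "heart_rate", 72.0), SensorSample(800, "heart_rate", 73.0)]
force = [SensorSample(100, "heel_force", 310.0), SensorSample(500, "heel_force", 120.0)]
merged = merge_streams(heart, force)
print([s.timestamp_ms for s in merged])  # [0, 100, 500, 800]
```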
In software the challenge is to develop relevant sensor fusion algorithms for the <strong>CORBYS</strong> gait rehabilitation<br />
platform, <strong>and</strong> to verify <strong>and</strong> validate these.<br />
Not all sensors <strong>and</strong> algorithms described in the sections above can be included; hence a prioritisation will be<br />
required.<br />
11 State-of-the-Art in Situation Assessment (UR)<br />
This section describes the current state-of-the-art in the approaches, methods and tools for automatic situation<br />
assessment to enhance the decisions made by the system in a mixed-initiative, intimate man-machine<br />
interactivity context.<br />
11.1 Introduction<br />
The widespread commercial availability of technology has shifted the human-computer interaction paradigm from a<br />
conventional keyboard-mouse combination to more flexible modes of interactivity. Such systems comprise a<br />
number of modalities by which they may acquire user input to perform a function explicitly or implicitly<br />
requested by the user. The collection of input from various modalities enables the development of intelligent<br />
systems that are context-aware and aid in the shift from traditional systems to a more natural approach to<br />
interaction and usability (Hurtig and Jokinen, 2006).<br />
Context-aware interactive systems aim to adapt to the needs and behavioural patterns of users and offer a way<br />
forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The<br />
various modalities that contribute to such systems each provide a specific uni-modal response that is<br />
integratively presented as a multi-modal interface capable of interpreting multi-modal user input and<br />
responding appropriately through dynamically adapted multi-modal interactive flow management.<br />
Multimodal systems provide increased accuracy <strong>and</strong> precision (<strong>and</strong> in turn improved reliability) in terms of<br />
context-awareness <strong>and</strong> situation assessment by incorporating information from a number of input modalities.<br />
This approach offers a fault-tolerant way of managing modalities: if one of the<br />
modalities fails or contains noisy data, information from the other modalities can be used to minimise the ambiguity<br />
in situation assessment arising from the failed or noisy modality. More reliable<br />
context-sensing, and thus more appropriately responsive behaviour by the interactive system, is said to be a<br />
likely outcome of multimodal fusion (Corradini et al. 2005).<br />
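The fallback behaviour described above can be sketched as a simple confidence-weighted combination; the function name, noise threshold and confidence figures below are illustrative assumptions, not part of any cited system:<br />

```python
def fuse_estimates(estimates, noise_threshold=0.5):
    """Combine per-modality estimates of the same quantity, weighting
    each by its confidence and ignoring modalities flagged as noisy."""
    usable = [(v, c) for v, c in estimates.values() if c >= noise_threshold]
    if not usable:
        raise ValueError("all modalities failed or are too noisy")
    total = sum(c for _, c in usable)
    return sum(v * c for v, c in usable) / total

estimates = {
    "vision": (1.0, 0.9),   # (estimated value, confidence)
    "lidar":  (1.2, 0.6),
    "audio":  (5.0, 0.1),   # noisy modality, dropped by the threshold
}
fused = fuse_estimates(estimates)
```

Even with the audio channel failed, the remaining modalities still yield a usable estimate, which is the fault-tolerance argument made above.<br />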
Various application areas exist for such multimodal systems, which can make use of feasible hardware devices to<br />
acquire input not only at a neuro-motor but also at a physiological level, for instance a patient monitoring<br />
system that monitors heart rate, perspiration, blood pressure, etc. Further examples of multimodal systems<br />
include anomaly detection, assisted living, PDA with pen-gestures, speech <strong>and</strong> graphics input capability etc.<br />
11.2 Situation Assessment<br />
Endsley (2000) defines situation assessment as “the perception of elements in the environment within a<br />
volume of time <strong>and</strong> space, the comprehension of their meaning <strong>and</strong> the projection of their status in the near<br />
future”. According to Lambert (1999, 2001, 2003, 2006), situation assessment involves assessment of<br />
situations where situations are “fragments of the world represented by a set of assertions”. This differs from<br />
object recognition in the sense that it requires a shift in the procedure from numeric to symbolic<br />
representations, a problem coined as the semantic challenge for information fusion (Lambert, 2003). Lambert<br />
(2006) proposes an interesting approach to semantic fusion using a formal theory by following a development<br />
path that involves sequential construction of the problem in terms of philosophy, mathematics <strong>and</strong><br />
computation. This approach is illustrated using a formal theory for existence in Lambert’s work (Lambert,<br />
2006).<br />
Figure 26: Lambert's approach to semantic Fusion (situation awareness Levels 1-3: perception of elements<br />
in the current situation, comprehension of the current situation, projection of future status; with goals and<br />
objectives, preconceptions, feedback, environment, decision and action)<br />
There exist two main stages for the integration and fusion of multimodal input according to Corradini et al. (2005),<br />
namely:<br />
• Integration of signal at the feature level,<br />
• Integration of information at the semantic level.<br />
Feature-level signal fusion, also referred to as lower-level fusion, relates to “closely coupled and<br />
synchronised” modalities such as speech and lip movements. It does not scale with ease, requires extensive<br />
training data sets and incurs high computational costs (Corradini et al. 2005). Higher-level symbolic or<br />
semantic fusion, on the other hand, relates to modalities that “differ in time scale characteristics of their<br />
features”. This entails time-stamping of all modalities to aid the fusion process. Semantic fusion offers a<br />
number of benefits such as off-the-shelf usage, reusability, simplicity, etc. (Corradini et al. 2005). Semantic<br />
fusion is a process that unifies input at a “meaning level” from the various modalities that are part of a multimodal<br />
system (Gibbon et al. 2000). It is said to occur in two steps: a) input events for a user’s command are taken<br />
from various modalities and fused at a low level to form a single multimodal input event that signifies the<br />
user’s command; b) this multimodal input event is then semantically interpreted at a higher level to deduce the<br />
intended meaning of the user’s action by “extracting and combining information chunks” (Gibbon et al. 2000).<br />
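The two-step process described by Gibbon et al. can be sketched as follows; the class names, the fixed fusion window and the dictionary-merge interpretation are illustrative assumptions:<br />

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str      # e.g. "speech", "gesture"
    timestamp: float   # seconds
    content: dict      # partial information chunk from this modality

def fuse_low_level(events, window=1.0):
    """Step (a): group time-stamped events from different modalities
    into single multimodal input events, one per user command."""
    events = sorted(events, key=lambda e: e.timestamp)
    fused, current = [], [events[0]]
    for e in events[1:]:
        if e.timestamp - current[-1].timestamp <= window:
            current.append(e)
        else:
            fused.append(current)
            current = [e]
    fused.append(current)
    return fused

def interpret(multimodal_event):
    """Step (b): combine the information chunks of one fused event
    into a single interpretation of the user's intended meaning."""
    meaning = {}
    for e in multimodal_event:
        meaning.update(e.content)   # later chunks refine earlier ones
    return meaning

events = [
    InputEvent("speech", 0.1, {"action": "move"}),
    InputEvent("gesture", 0.4, {"target": (3, 5)}),
    InputEvent("speech", 5.0, {"action": "stop"}),
]
commands = [interpret(g) for g in fuse_low_level(events)]
```

The first two events fall within the fusion window and merge into one command; the later speech event forms a command of its own.<br />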
The main aim of multimodal fusion is to make sense of the user’s intended meaning by fusing the partial<br />
streams of input that constitute a user’s comm<strong>and</strong> from various input modalities of the system. Hurtig <strong>and</strong><br />
Jokinen (2006) cite a “classical example of coordinated multimodal input” as the Put-that-there system<br />
proposed by Bolt (1980). This system presented a scenario where the user of a system specified input<br />
comm<strong>and</strong>s in a natural manner by way of speech <strong>and</strong> h<strong>and</strong> gestures. According to Hurtig <strong>and</strong> Jokinen (2006),<br />
multimodal input fusion can be categorised as a process that occurs over three levels: i) signal-level,<br />
ii) feature-level <strong>and</strong> iii) semantic, where semantic fusion integrates the underst<strong>and</strong>ing acquired from inputs of<br />
various individual modalities into a single comprehension set that signifies the intended meaning of the user’s<br />
input.<br />
Hurtig and Jokinen (2006) propose a three-level semantic fusion component which involves temporal fusion<br />
that creates combinations of input data, statistically motivated weighting of these combinations, and a<br />
discourse-level phase that selects the best candidate. The weighting process uses three parameters, namely the<br />
overlap, proximity and concept type of the multimodal input events (Hurtig and Jokinen, 2006). Overlap<br />
and proximity relate to the temporal placement of the input events, where proximity is said to play a very<br />
important role, especially in modalities such as speech and tactile data (Hurtig and Jokinen, 2006). Concept<br />
type refers to user commands which would be incomplete if the system were to consider only one input<br />
modality, for example a user pointing to a location and speaking a phrase that only incompletely describes the<br />
location (Hurtig and Jokinen, 2006). Once the weighted ranking has been performed, a list of these candidates<br />
is passed on to the discourse-level phase, which selects the best-ranked final candidate for system response<br />
construction. If no candidate fits, the system requests the user to repeat his/her command (Hurtig and Jokinen,<br />
2006).<br />
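A minimal sketch of the overlap/proximity/concept-type weighting idea follows; the scoring formula, weights and example intervals are invented for illustration and are not those of Hurtig and Jokinen:<br />

```python
def temporal_overlap(a, b):
    """Length of the overlap between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def proximity(a, b):
    """Temporal gap between the two intervals (0 when they overlap)."""
    return max(0.0, max(a[0], b[0]) - min(a[1], b[1]))

def concept_bonus(speech_concept, pointing_concept):
    """Reward pairs where one modality completes the other, e.g. a
    deictic 'there' in speech plus a pointed location."""
    return 1.0 if (speech_concept, pointing_concept) == ("deictic", "location") else 0.0

def score(speech, pointing):
    s = temporal_overlap(speech["span"], pointing["span"])
    s -= 0.5 * proximity(speech["span"], pointing["span"])   # penalise distance
    s += concept_bonus(speech["concept"], pointing["concept"])
    return s

speech = {"span": (0.0, 1.2), "concept": "deictic"}    # "put that there"
near = {"span": (0.9, 1.5), "concept": "location"}     # pointing overlaps speech
far = {"span": (4.0, 4.5), "concept": "location"}      # pointing much later
candidates = sorted([near, far], key=lambda p: score(speech, p), reverse=True)
```

The ranked candidate list would then be handed to the discourse-level phase, which picks the top candidate or asks the user to repeat the command.<br />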
Multimodal fusion encapsulates the union of data from a number of input channels in a<br />
multimodal interactive system (Landragin, 2007), where each channel may represent a distinct modality<br />
through which a user may interact actively or passively with the system. Active input can be categorised as direct<br />
usage of the system on the part of the user, by way of speech or gestures etc., whereas passive input may be<br />
understood as input acquired from the user as a result of a monitoring activity on the part of the system.<br />
According to Landragin (2007), multimodal fusion is distinguished in terms of several sub-processes, namely:<br />
a) multimodal coordination, b) content fusion and c) event fusion. Multimodal coordination associates two<br />
activities acquired by different modalities for the formation of a “complete utterance” (Landragin, 2007). The<br />
output of this sub-process contains a set of paired “hypotheses”, each with an associated confidence level, which<br />
are ingested by the multimodal content fusion sub-process to develop an improved comprehension of<br />
otherwise partial information (Landragin, 2007). The last sub-process, multimodal event fusion, then unifies<br />
the “pragmatic forces of mono-modal acts” to create a complete understanding of the user’s input. Landragin<br />
(2007) uses the general communication categories as considered in the study of classical natural language, and<br />
lists them as: 1) inform, 2) demand, 3) question and 4) act. Inform, demand and question<br />
are fairly easy to understand, as scenarios where the user provides information to the system,<br />
requires the system to do something, or queries the system to provide him/her with some information. Act<br />
is the general category which comes into play when the system is not able to assign the user’s input act to any<br />
of the aforementioned three categories (Landragin, 2007).<br />
The fusion process is divided into five levels by the Joint Directors of Laboratories / Data Fusion Group revised model<br />
(JDL/DFG), namely: pre-processing (level 0), object refinement (level 1), situation refinement (level 2),<br />
impact assessment or threat refinement (level 3), and lastly process refinement (level 4). This model,<br />
however, does not incorporate a human-in-the-loop, and another level, called user refinement (level 5), has been<br />
proposed to “delineate the human from the machine in the process refinement”, allowing the human to play<br />
an important role in the fusion process (Blasch and Plano, 2002):<br />
• Level 0: Sub-object assessment<br />
• Level 1: Object assessment<br />
• Level 2: Situation assessment<br />
• Level 3: Impact assessment<br />
• Level 4: Process refinement<br />
• Level 5: User refinement<br />
Levels 0 and 1 deal with sub-object and object assessment respectively, making use of information from<br />
multiple sources to arrive at a representation of the objects of interest in the environment. In level 2, the<br />
relationships between the identified objects are established. At the end of this level, once the situation<br />
assessment process is complete, the system has achieved situation awareness, as the objects detected in the<br />
environment and the various ways in which they are related or connected with each other are known to the<br />
system. Level 3 allows the system to predict the effects of actions or situations on the environment. Level 4<br />
attempts to refine the outcomes of levels 1, 2 and 3.<br />
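The JDL/DFG levels can be read as a staged pipeline; the sketch below is illustrative, with placeholder stage implementations for levels 0-2 only:<br />

```python
from enum import IntEnum

class JDLLevel(IntEnum):
    SUB_OBJECT = 0   # level 0: pre-processing of raw signals
    OBJECT = 1       # level 1: object refinement
    SITUATION = 2    # level 2: relations between identified objects
    IMPACT = 3       # level 3: impact / threat assessment
    PROCESS = 4      # level 4: refinement of levels 1-3
    USER = 5         # level 5: human-in-the-loop refinement (Blasch and Plano)

def preprocess(raw):
    return [v for v in raw if v is not None]      # drop failed readings

def refine_objects(values):
    return {"objects": values}

def assess_situation(state):
    objs = state["objects"]
    # establish pairwise relations between the identified objects
    state["relations"] = [(a, b) for a in objs for b in objs if a != b]
    return state

def run_pipeline(raw, stages):
    data = raw
    for _, stage in sorted(stages.items()):       # levels in ascending order
        data = stage(data)
    return data

stages = {JDLLevel.SUB_OBJECT: preprocess,
          JDLLevel.OBJECT: refine_objects,
          JDLLevel.SITUATION: assess_situation}
result = run_pipeline(["person", None, "walker"], stages)
```

After level 2 the system holds both the detected objects and their relations, i.e. the situation awareness described above.<br />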
Corradini et al. (2005) list a number of approaches and architectures for multimodal fusion in multimodal<br />
systems, such as carrying out multimodal fusion in a maximum-likelihood estimation framework; using<br />
distributed agent architectures (e.g. the Open Agent Architecture, OAA (Cheyer and Martin, 2001)) with intra-agent<br />
communication taking place through a blackboard; and identification of individuals via “physiological<br />
and/or behavioural characteristics”, e.g. biometric security systems using fingerprints, iris, face, voice, hand<br />
shape etc. (Corradini et al. 2005). Corradini et al. (2005) state that modality fusion in such systems<br />
involves less complicated processing, as it falls largely under a “pattern recognition framework”, and that this<br />
process may use techniques for integrating “biometric traits” (Corradini et al. 2005) such as the weighted-sum<br />
rule as in Wang et al. (2003), Fisher discriminant analysis (Wang et al. 2003), decision trees (Ross and Jain,<br />
2003), a decision fusion scheme (Jain et al. 1999), etc.<br />
Corradini et al. (2005) also list a number of systems fusing speech and lip movements, using for example<br />
histograms and multivariate Gaussians (Nock et al. 2002), artificial neural networks (Wolff et al. 1994; Meier<br />
et al. 2000) and hidden Markov models (Nock et al. 2002).<br />
Some systems use independent individual modality-processing modules, such as a speech recognition module,<br />
a gesture recognition module, gaze localisation etc. Each module carries out mono-modal processing and<br />
presents its output to the multimodal processing module, which handles the semantic fusion. These systems<br />
are ideal for introducing an off-the-shelf framework where various showcases may be developed for different<br />
application domains, applying re-usable off-the-shelf components each handling a single modality in full.<br />
Other systems include QuickSet (Landragin, 2007), which offers the user the freedom to interact with a map-based<br />
application using a pen-and-speech cross-modal input capability. The system presented in Elting (2002)<br />
enables the user to specify a command by way of speech, pointing gesture and input from a graphical user<br />
interface in a “pipelined architecture”. The system put forward by Wahlster et al. (2001) is a multimodal<br />
dialogue system which fuses speech, facial expression <strong>and</strong> gesture for “both input <strong>and</strong> output via an<br />
anthropomorphic <strong>and</strong> affective user interface”, as an Embodied Conversational Agent (ECA) or Avatar<br />
(Corradini et al. 2005). In all such systems, input events are assigned a weight or “confidence score” that is<br />
considered during high level fusion for creating a set of situation interpretations that are ranked by their<br />
combined score (Corradini et al. 2005).<br />
11.3 Relevant approaches<br />
We review different approaches and techniques related to situation assessment for solving the problem of the<br />
identification of objects, scenarios and situations of interest in an operational environment.<br />
11.3.1 Rule-based Expert Systems<br />
These are knowledge-based systems consisting of a rule base and a fact base. Rules encode domain<br />
knowledge whereas facts represent the state of the environment. The outcome of a rule is deduced from the<br />
preceding condition part of the rule using logic, where facts act as the input to the condition. When rules are<br />
executed, new facts corresponding to their conclusions are added to the fact base. The reasoning of rule-based<br />
systems is controlled by an inference engine. To achieve situation awareness, this type of system can<br />
be used with ontological mappings of the environment in terms of the objects, relations, situations etc. within<br />
this environment. While such systems are easily deployed and various expert tools are available that require<br />
only the rule definitions to be specified by the user, the major drawback of these systems is that they are<br />
largely deterministic in nature. The lack of provision for uncertainty values on rules therefore introduces<br />
limitations and demands careful consideration from the user in developing the rules (Russell<br />
and Norvig, 2003). Rule-based systems also lack the ability to perform temporal reasoning, and learning<br />
new rules in an automatic fashion is not straightforward either. This necessitates manual human input,<br />
which can be a tedious task.<br />
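The execution cycle of such a system can be sketched as a simple forward-chaining loop; the rules and facts below are invented, rehabilitation-flavoured examples:<br />

```python
def forward_chain(facts, rules):
    """Fire every rule whose condition part holds against the fact base,
    adding its conclusion as a new fact, until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"gait_speed_low", "heart_rate_high"}, "patient_fatigued"),
    ({"patient_fatigued"}, "reduce_support_level"),
]
facts = forward_chain({"gait_speed_low", "heart_rate_high"}, rules)
```

Note the chaining: the conclusion of the first rule becomes a fact that satisfies the condition of the second, which is exactly how new facts propagate through the fact base.<br />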
11.3.2 Fuzzy Logic<br />
Fuzzy logic deals with reasoning that is approximate rather than exact. Fuzzy logic variables can have a truth<br />
value ranging between 0 and 1, as opposed to two-valued logic (true or false), hence allowing the handling of<br />
partial truth, where a truth value can range between completely true and completely false (Novak et al. 1999).<br />
In classical set theory, elements are introduced to a set in binary terms, overseen by a hard condition, whereby<br />
an element either belongs to or does not belong to the set. In fuzzy set theory, elements can be assessed<br />
gradually in terms of their candidature, using a membership function that defines the degree to which an<br />
element can be considered to be inside or outside the fuzzy set. The use of fuzzy logic allows for complex<br />
real-world modelling. Data fuzzification converts data from numerical to fuzzy format, followed by an<br />
evaluation, or fuzzy inferencing, of the fuzzy conditions. The output is then de-fuzzified, i.e. converted back to<br />
numerical format. There exist several fuzzy inference engines, such as the Java-based Jess (Orchard, 2001) and the<br />
Prolog-based Flint (Shalfield, 2005), among others.<br />
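The fuzzification, inference and de-fuzzification steps can be sketched as follows; the membership functions, rule consequents and gait-speed example are illustrative assumptions:<br />

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to a peak at b,
    then falling to c; returns a degree of membership in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_support(speed):
    # Fuzzification: degrees of membership of the gait speed (m/s)
    slow = tri(speed, 0.0, 0.0, 0.8)
    normal = tri(speed, 0.4, 1.0, 1.6)
    # Inference: IF slow THEN support = 0.9; IF normal THEN support = 0.3
    # De-fuzzification: weighted average of the rule consequents
    if slow + normal == 0:
        return 0.0
    return (slow * 0.9 + normal * 0.3) / (slow + normal)
```

A speed of 0.2 m/s is fully “slow” and yields the high support level, while 1.0 m/s is fully “normal”; speeds in between blend the two consequents gradually, which is the partial-truth behaviour described above.<br />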
11.3.3 Case-based Reasoning<br />
Case Based Reasoning (CBR) is a method used in computing to allow systems to solve problems by recalling<br />
how similar problems were dealt with previously. The system is given a set of basic cases <strong>and</strong> a set of rules<br />
by which it can alter these cases. When given a problem, the system finds the most relevant case (or multiple<br />
cases) <strong>and</strong> if required, modifies it to solve the problem. The basic premise for CBR is that similar cases can<br />
be satisfied by similar solutions. Hence, if a similar case to the received input case is found present in the<br />
CBR repository, the solution can be adapted to the current case <strong>and</strong> the problem can be solved. If the<br />
modified solution is satisfactory, the new solution is retained in the repository together with the problem.<br />
116
<strong>D2.1</strong> <strong>Requirements</strong> <strong>and</strong> <strong>Specification</strong><br />
Figure 27: Case-Based Reasoning process (1. retrieval of a case from the case-based repository,<br />
2. selection, 3. presentation of the solution, 4. retention)<br />
1. The problem is entered into the system <strong>and</strong> analysed.<br />
2. The case which most closely matches the details of the problem is selected.<br />
3. This case is modified to better fit the problem using predefined rules. This becomes the solution<br />
which is presented.<br />
4. The solution is analysed for its effect and is stored for future use along with an indication of its<br />
success. This allows the system to learn.<br />
The system should be able to learn not only from its successful solutions, but also from its failed solutions<br />
(success driven <strong>and</strong> failure driven learning). Successful solutions can be stored so that the solution can be<br />
reused without regenerating it. Solutions which failed to solve the problem can be used to generate better<br />
solutions for use at a later stage. CBR can be used to identify situations presented as cases from a repository<br />
of known situations; however, the mapping of a situation as a case <strong>and</strong> subsequent measurement of similarity<br />
between two cases poses a problem. Furthermore, the existence of an associated solution becomes irrelevant<br />
as soon as the situation is matched to a template or a case from the repository.<br />
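The four CBR steps can be sketched as follows; the feature distance, the proportional adaptation rule and the example cases are illustrative assumptions:<br />

```python
def retrieve(repository, problem):
    """Steps 1-2: select the stored case whose features are closest."""
    def distance(case):
        return sum(abs(case["features"][k] - problem[k]) for k in problem)
    return min(repository, key=distance)

def adapt(case, problem):
    """Step 3: modify the retrieved solution with a predefined rule,
    here a simple proportional adjustment."""
    scale = problem["severity"] / case["features"]["severity"]
    return {"support": case["solution"]["support"] * scale}

def retain(repository, problem, solution, success):
    """Step 4: store the new case with an indication of its success,
    so the system learns from both successes and failures."""
    repository.append({"features": dict(problem), "solution": solution,
                       "success": success})

repository = [
    {"features": {"severity": 2, "age": 40}, "solution": {"support": 0.4}},
    {"features": {"severity": 8, "age": 70}, "solution": {"support": 0.9}},
]
problem = {"severity": 4, "age": 45}
case = retrieve(repository, problem)
solution = adapt(case, problem)
retain(repository, problem, solution, success=True)
```

The similarity measure is the crux noted above: a naive feature distance like this one only works when situations map cleanly onto numeric features.<br />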
11.3.4 Bayesian Networks<br />
Bayesian networks are probabilistic graphical models that represent a set of random variables and their<br />
conditional dependencies via directed acyclic graphs. Bayesian networks are commonly used for probabilistic<br />
reasoning in the context of situation assessment (Das et al. 2002; Bladon et al. 2002; Higgins 2005). Bayesian<br />
networks are based on Bayes’ theorem, which computes the posterior or inverse probability of a proposition, i.e.,<br />
given the prior or unconditional probabilities of A and B, and knowing the conditional probability of B given<br />
A, what is the conditional probability of A given B? Nodes in the networks represent propositions,<br />
random variables, unknown parameters or hypotheses. Nodes are connected by edges that represent<br />
conditional dependency, and unconnected nodes therefore represent variables that are conditionally<br />
independent. Each node has an associated probability function that gives the probability of the variable represented<br />
by the node upon receiving a set of values as input. Prior probabilities and conditional probabilities have to<br />
be specified for each node in the network. Inverse probabilities for each node can then be computed using<br />
Bayes’ rule as new input is received into the network.<br />
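The inverse-probability computation at a single node reduces to Bayes’ rule; a minimal sketch with invented numbers:<br />

```python
def bayes(prior_a, prob_b_given_a, prob_b_given_not_a):
    """Posterior P(A|B) from the prior P(A) and the likelihoods
    P(B|A) and P(B|~A), via P(B) = P(B|A)P(A) + P(B|~A)P(~A)."""
    prob_b = prob_b_given_a * prior_a + prob_b_given_not_a * (1 - prior_a)
    return prob_b_given_a * prior_a / prob_b

# Illustrative numbers: A = "patient is fatigued", B = "gait speed drops"
posterior = bayes(prior_a=0.1, prob_b_given_a=0.8, prob_b_given_not_a=0.2)
```

Even with strong evidence (a drop is four times likelier under fatigue), the low prior keeps the posterior modest, illustrating why the priors specified at each node matter.<br />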
In the context of situation assessment, Das et al. (2002) list two important points that must be observed when<br />
using Bayesian networks in real-time environments, namely rapid modelling of complex situations via Bayesian<br />
networks, and efficient Bayesian network inference based on incoming evidence. For rapid modelling, each<br />
network can be constructed in real time from smaller Bayesian networks to assess a specific situation,<br />
whereas for efficient inference, a network can be broken up into sub-networks and distributed over a physical<br />
network of computers, employing parallel processing technologies (Das et al. 2002).<br />
While Bayesian networks satisfactorily handle uncertainty and causal relationships and are fairly<br />
straightforward in terms of implementation, the set of variables they support is finite and each variable has a<br />
fixed domain of possible values (Russell and Norvig, 2003). Regular Bayesian networks lack the concepts of<br />
objects and relations and hence are unable to fully benefit from the structure of the domain (Howard and<br />
Stumptner, 2005). In order to enable the application of Bayesian networks in complex domains, relational<br />
probabilistic models attempt to rectify these limitations (Howard and Stumptner, 2005; Russell and Norvig,<br />
2003) by including generic objects and relations. Bayesian networks also lack support for temporal reasoning,<br />
and dynamic Bayesian networks have been proposed in this regard (Russell and Norvig, 2003). Also, since<br />
Bayesian methods are based on probability theory, a large amount of training data is required to approximate the<br />
probability distributions.<br />
Sutton et al. (2004) present a Bayesian blackboard (called AIID) for information fusion: a conventional,<br />
knowledge-based blackboard system in which knowledge sources modify Bayesian networks on the<br />
blackboard. It is proposed to carry several advantages, as temporal reasoning can range from “data-driven<br />
statistical algorithms up to domain-specific, knowledge-based inference”, and “the control of intelligence-gathering<br />
in the world and inference on the blackboard can be rational … grounded in probability and utility<br />
theory” (Sutton et al. 2004).<br />
11.3.5 Blackboard systems<br />
A blackboard system is an Artificial Intelligence application based on the blackboard architectural model. In<br />
such systems, the blackboard acts as a Common Knowledge Base (KB) which is continuously updated by<br />
several specialised Knowledge Sources (KS) to reach a viable solution to a problem. Each Knowledge Source<br />
updates the blackboard with a partial solution <strong>and</strong> iteratively, a complete solution is worked towards<br />
collaboratively by all subscribers (KSs) of the blackboard. The blackboard i.e. the common KB retains the<br />
state of the problem solution, while the KSs make changes to the blackboard.<br />
The blackboard model is suitable to tackle problems that are complex <strong>and</strong> not well-defined, <strong>and</strong> especially<br />
where the solution is a sum of its parts. The main features of blackboard architecture are multiple sources,<br />
multiple competing hypotheses, multiple levels of abstraction, feedback to the sources <strong>and</strong> the blackboard<br />
acting as a common knowledge base or an associative memory. Blackboard systems were created to deal with<br />
complex problems that are best addressed by an approach of incremental solution development. They have<br />
been used extensively in AI problems, <strong>and</strong> there exist several examples in the literature <strong>and</strong> market e.g.<br />
Hearsay I (Reddy 1973), Hearsay II (Erman, 1980), blackboards addressing signal <strong>and</strong> image underst<strong>and</strong>ing<br />
(Carver, 1991), planning <strong>and</strong> scheduling (Sadeh, 1998; Smith, 1985), machine translation (Nirenburg, 1989)<br />
workflows (Stegemann et al, 2007), <strong>and</strong> spatial-temporal geographic information analysis (Riadh, 2006).<br />
In essence, a blackboard system is a task independent architecture, implying that it addresses a range of<br />
problems such as design, classification, diagnosis etc. Owing to its integrated design whereby multiple<br />
knowledge sources work towards a solution incrementally, more than one technique may be employed, each<br />
as a separate knowledge source or reasoning system, e.g. KS1 can be case-based while KS2 can be heuristic, i.e.<br />
rule-based (Hunt, 2002).<br />
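A minimal sketch of the blackboard cycle follows: knowledge sources contribute partial solutions opportunistically until none can add anything new. The KS classes and the control loop are illustrative assumptions, not any cited implementation:<br />

```python
class Blackboard:
    """Common knowledge base: KSs read the shared state and post
    partial solutions onto it."""
    def __init__(self):
        self.state = {}

class SensorKS:
    """KS posting the objects detected in the environment."""
    def can_contribute(self, bb):
        return "objects" not in bb.state
    def contribute(self, bb):
        bb.state["objects"] = ["person", "walker"]

class SituationKS:
    """KS building on SensorKS's partial solution (incremental development)."""
    def can_contribute(self, bb):
        return "objects" in bb.state and "situation" not in bb.state
    def contribute(self, bb):
        bb.state["situation"] = "person_using_walker"

def run(blackboard, knowledge_sources, max_cycles=10):
    """Control loop: each cycle, every KS with an opportunity to
    contribute updates the blackboard; stop when nothing changes."""
    for _ in range(max_cycles):
        contributed = False
        for ks in knowledge_sources:
            if ks.can_contribute(blackboard):
                ks.contribute(blackboard)
                contributed = True
        if not contributed:
            break
    return blackboard

bb = run(Blackboard(), [SituationKS(), SensorKS()])
```

Note that the KS list order does not matter: SituationKS simply waits until SensorKS has posted its partial solution, which is the loose coupling claimed for the architecture.<br />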
Blackboard architectures offer several benefits, such as configuration flexibility: since the KSs are not rigidly<br />
connected but rather cooperate via the blackboard, KSs can be removed from or added to the system at any<br />
point in time without explicit declaration to all subscribers. Furthermore, the user is presented with a selection<br />
of choices, as the set of KSs may contain more than one source carrying out the same function to arrive at a<br />
partial solution, in which case the most efficient may be chosen. The blackboard system manages multiple<br />
levels of abstraction using a hierarchy supported by the KSs, where the KSs may operate at different levels of<br />
abstraction and/or may operate between two abstraction levels for information transfer. The main<br />
disadvantages of these systems include a lack of support for task-specific strategies, i.e. a lack of support for<br />
domain-specific problem-solving strategies, which have to be added explicitly if required. Also, as the<br />
blackboard is a common knowledge base, it is directly changed by the knowledge sources and therefore<br />
consideration must be taken to oversee the responsible behaviour of the knowledge sources. The scope of the<br />
system as a whole is not maintained, making it difficult to identify problems (Hunt, 2002).<br />
Salient architectures, tools <strong>and</strong> technologies<br />
Hearsay (or Hearsay I) was developed to solve complex speech recognition problems with a hierarchical<br />
abstraction space of complete and partial interpretations. It was domain-restricted and not a general-purpose<br />
problem-solving system; however, it led to an extension in the form of a sequel. Hearsay II<br />
was the first to propose the blackboard architecture as it is known today. It responded to questions and<br />
retrieved documents from a collection, and was not restricted to any specific domain. In Hearsay II, the<br />
knowledge sources work in an opportunistic fashion, i.e., each KS monitors the blackboard for an opportunity<br />
to contribute to the development of the solution. If such an opportunity is detected, the KS posts an activation<br />
entry on the agenda of the scheduler indicating its intention to contribute. The scheduler determines the<br />
suitability of any KS activation using information in a common knowledge base and the potential impact of<br />
the knowledge source.<br />
HASP/SIAP, a signal interpretation system based on the blackboard model of Hearsay II, used AI techniques<br />
to address the problem of monitoring the activities of submarines and ships in a surveillance region.<br />
Instead of the scheduler agenda approach, HASP employed a control mechanism based on predefined events,<br />
which triggered a sequence of KSs relevant to an occurring event. Another system, TRICERO,<br />
also addressed a similar application area, employing the blackboard architecture in the domain of distributed<br />
computing to monitor a region of airspace for various aircraft activities and understand the signals to assess<br />
the situation. The problem is partitioned into independent sub-problems and solved using blackboard systems.<br />
BB1 is a task-independent blackboard control architecture that offers two blackboards, namely a domain<br />
blackboard, which is essentially the blackboard as we know it, and a control blackboard, which deals with the<br />
generation of a solution to the control problem. The control blackboard in BB1 works the same as the standard one,<br />
the only difference being that its KSs are control problem solvers and the solution on the blackboard determines<br />
which domain knowledge source is to be activated next. In this way, BB1 offers increased flexibility in its<br />
control mechanism, albeit with additional computational complexity, as compared to Hearsay II. A well-known<br />
example implementation using BB1 is a mission-planning system for an autonomous vehicle (Hayes-Roth<br />
1985).<br />
Hearsay III provides a completely generic architecture which can be used to develop a specialised problem<br />
solver for a specific domain. The primary goal of Hearsay III is to provide representation <strong>and</strong> control<br />
facilities in order to enable the user to create a customised expert system.<br />
Van Brussel et al. (1998) used a blackboard model in a behaviour-based mobile robot system enabling task<br />
execution in unstructured real-world environments. The behaviour model consists of a number of motion<br />
behaviours including reflexes, voluntary motion behaviours, <strong>and</strong> knowledge acquisition modules providing<br />
task-oriented information about the robot environment. Zhang & Zhu (2004) present a multi-agent based<br />
architecture using a distributed communication infrastructure <strong>and</strong> an event-driven situation evaluation agent<br />
for outdoor mobile robot navigation where event-driven control is used to h<strong>and</strong>le the dynamic change in the<br />
environment. Smartfire (Petridis <strong>and</strong> Knight, 2001) uses the blackboard architecture in a hybrid CBR system<br />
for the automatic set-up of fire field models, which enables fire safety practitioners who are not experts in<br />
modelling techniques to use a fire modelling tool. This hybrid system demonstrates the power of the blackboard<br />
with a common knowledge representation <strong>and</strong> mechanisms for adaptation <strong>and</strong> conflict resolution.<br />
Psyclone is a generic framework for AI, which includes support for multimodal input (vision, audio, speech<br />
<strong>and</strong> other sensor input) <strong>and</strong> modular cognitive processing <strong>and</strong> multimodal output generation (speech <strong>and</strong><br />
animated output). It is a message-based middleware for developing large distributed systems (Thórisson<br />
2005). It introduces the concept of a whiteboard, an extension of the blackboard model which allows<br />
heterogeneous systems to be connected together. The Psyclone framework consists of a number of<br />
whiteboards, each functioning as a publisher/subscriber of information. Data is retained by whiteboards for a<br />
time period, in a searchable database for later retrieval by modules needing historical information. Psyclone<br />
acts as a central connection point for modules using whiteboards. It also allows supervision of the system at<br />
runtime using a built-in monitoring system. Time-stamped information on whiteboards can be viewed <strong>and</strong> the<br />
content of individual messages can be read.<br />
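The whiteboard mechanics described above can be sketched minimally as follows; the class <strong>and</strong> method names are illustrative assumptions <strong>and</strong> do not correspond to Psyclone's actual API.<br />

```python
import time
from collections import defaultdict

class Whiteboard:
    """Minimal sketch of a whiteboard: a blackboard variant that retains
    time-stamped messages in a searchable history and notifies subscribers
    by message type (names are illustrative, not Psyclone's API)."""

    def __init__(self):
        self._messages = []                    # retained for later retrieval
        self._subscribers = defaultdict(list)  # message type -> callbacks

    def subscribe(self, msg_type, callback):
        self._subscribers[msg_type].append(callback)

    def publish(self, msg_type, payload):
        entry = {"type": msg_type, "payload": payload, "time": time.time()}
        self._messages.append(entry)           # keep the time-stamped history
        for callback in self._subscribers[msg_type]:
            callback(entry)

    def history(self, msg_type=None):
        """Time-stamped messages, optionally filtered by type."""
        return [m for m in self._messages
                if msg_type is None or m["type"] == msg_type]

# Hypothetical usage: a sensor module publishes, a fusion module subscribes.
board = Whiteboard()
received = []
board.subscribe("gait.phase", received.append)
board.publish("gait.phase", {"phase": "stance", "leg": "left"})
```

A runtime monitoring component, as in Psyclone, could then be realised as one more subscriber, or by reading `history()`.<br />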
11.4 Summary on technology gaps <strong>and</strong> priorities for development in <strong>CORBYS</strong><br />
The state-of-the-art review highlights various existing concepts that have already been developed <strong>and</strong> are<br />
fairly well-established in the literature. It is foreseen that an optimal combination of existing concepts<br />
in light of requirements for <strong>CORBYS</strong> will provide solid foundations for the <strong>CORBYS</strong> situation assessment<br />
<strong>and</strong> awareness cognitive architecture. The following aspects have, in particular, been identified as points of<br />
interest where the <strong>CORBYS</strong> project can contribute to research <strong>and</strong> development of context- <strong>and</strong> situation-aware<br />
cognitive architecture: i) observation (<strong>and</strong> learning) of cyclic gait patterns, ii) identification of<br />
deviations from the established normal gait pattern, iii) rectifications suggested to the human-in-the-loop,<br />
iv) facilitation of user refinement (level 5 in the revised JDL/DFG model). In order to present a generic<br />
architecture for situation awareness, the blackboard solution is seen as the most suitable for several reasons.<br />
Components developed in <strong>CORBYS</strong> can be used as reusable off-the-shelf publishers <strong>and</strong> subscribers of the<br />
blackboard system <strong>and</strong> this enables ease of deployment in other application domains, where <strong>CORBYS</strong><br />
solutions may be used. Moreover, through multiple publisher <strong>and</strong> subscriber components, the use of the<br />
blackboard will enable isolation of the <strong>CORBYS</strong> cognitive architecture from the application domains, e.g. the<br />
two <strong>CORBYS</strong> demonstrators. Domain specific requirements can be addressed by specialist publisher<br />
components developed to work with the blackboard. A major technology gap/challenge is to develop suitable<br />
fusion algorithms for the <strong>CORBYS</strong> gait rehabilitation platform, <strong>and</strong> for the <strong>CORBYS</strong> mobile robot for<br />
examination of hazardous areas.<br />
12 State of the Art in Behaviour Generation, Anticipation <strong>and</strong> Initiation<br />
(UH)<br />
12.1 Introductory Comments<br />
The state of the art of various forms of cognitive mechanisms concerning behaviour anticipation, generation<br />
<strong>and</strong> initiation will be considered.<br />
First, the general outline will be established. In particular “bio-inspired” or “biologically motivated”<br />
mechanisms, <strong>and</strong> behaviour anticipation, generation <strong>and</strong> initiation are conceptually interconnected.<br />
Concerning the notion of bio-inspired mechanisms, one needs to distinguish the notions that are implied with<br />
this tag. Bio-inspired algorithms are generally understood to be algorithms that are imitating or mimicking in<br />
an abstracted form mechanisms that are prevalent in nature. Well-known examples are Neural Networks<br />
which mimic the networks of computation in the brain or Genetic Algorithms which mimic Darwinian<br />
evolution, or Ant Colony Algorithms which mimic the foraging process of biological ants. All these<br />
algorithms have in common that the analogies to nature are largely on the level of mechanisms, <strong>and</strong> it is<br />
common to all that, in the context of most engineering-oriented applications, they usually do not claim to be a<br />
sufficiently accurate model of biological processes. Indeed, it is increasingly established that biologically<br />
accurate processes are often not captured by these simplified mechanisms, although they prove useful <strong>and</strong><br />
effective in a wide selection of problem-solving scenarios.<br />
The second view is that of trying to accurately model actual processes taking place in biology, as, for<br />
example, in computational neuroscience, population biology <strong>and</strong> others. Here, the goal is typically the<br />
modelling of phenomena actually occurring in biology <strong>and</strong> the purpose is indeed the (at least qualitatively)<br />
accurate modelling of salient biological observations. The models are often computationally relatively<br />
expensive, but they aim for significantly increased faithfulness to biological phenomena. Nevertheless, in a<br />
considerable number of cases, even an accurate modelling of phenomena may contribute only to a limited<br />
degree to understanding what the achieved biological purpose is. In addition, these models are not easy to adapt<br />
for technical purposes, although there has been some successful adaptation of spiking neural networks (Maass,<br />
2003).<br />
The third view, which will dominate the subsequent discussion, adopts a stance between<br />
these two extremes. Instead of copying mechanisms “naively” or creating detailed models, a recent family of<br />
approaches considers not mechanisms, but rather principles governing biologically relevant cognitive<br />
processing. The idea behind this is to not be concerned with the mechanisms that are implemented in the<br />
biological substrate, but rather with the principles that these mechanisms have evolved or adapted to<br />
implement. Examples for such principles proposed in the past are the minimum jerk or the two thirds power<br />
law (Viviani <strong>and</strong> Flash, 1995) for limb movement. Here, no specific neural or cognitive mechanisms are<br />
postulated, but rather that the outcome of the behaviour generation in biological organisms fulfils certain<br />
mathematical conditions (such as the Fermat Principle-like minimum jerk variation principle).<br />
The advantages of using principles to determine behaviours are manifold:<br />
• Principles detach the form of the behaviour generated or modelled from its "implementation". If a<br />
principle is found to be valid for a behaviour, one needs not implement a detailed, physiologically<br />
accurate neuronal control model for the behaviour generation, but rather a high-level computation of<br />
the principle which may or may not be an accurate model of the biological substrate.<br />
• Such principles are usually simpler to formulate <strong>and</strong> contain significantly fewer assumptions <strong>and</strong><br />
parameters than the usual detailed biological models. They are less arbitrary than bio-mimicking<br />
computational models such as Neural Networks or Genetic Algorithms which, while having fewer<br />
parameters than the full-fledged models, have deficits in actually capturing the essence of the<br />
behaviours of biological organisms.<br />
• At the same time, such principles can be quantitatively formulated <strong>and</strong> allow one to create precise<br />
quantitative <strong>and</strong> often algorithmic implementations of the principles; as said before, the detailed<br />
algorithms may not have anything to do with the microscopic dynamics of the biological substrate,<br />
but they aim to capture the mesoscopic behaviour of the organism.<br />
• Principles allow a transparent view into current forms <strong>and</strong> possible variants of behaviours. Since the<br />
microdynamics is not modelled, they encompass only comparatively few degrees of freedom <strong>and</strong><br />
it is easier to identify which changes affect which part of the dynamics.<br />
• Principle-based behaviour analysis <strong>and</strong> generation makes it possible to hypothesise about evolutionary<br />
incentives for the emergence of these behaviours. In this way, a principle under investigation often<br />
turns out to be accompanied by a number of further principles related to it, derived from it, or which<br />
are in force in parallel to it. This provides additional natural pathways for refinements of the<br />
methodology.<br />
For these reasons, the focus will be on principle-based methodologies; however, where relevant <strong>and</strong> in the<br />
interest of the practicality of the methods, other, traditional methodologies will also be referred to, always<br />
keeping in mind that they will serve to implement principles.<br />
Amongst important principles that have been investigated in the past (e.g. the two thirds power law or the<br />
minimum jerk variation principle), one class of principles has attracted significant attention in the last few years.<br />
This class of principles is based on Shannon information theory. It is characterised by versatility, universality,<br />
transparency, mathematical guarantees (e.g. for robustness) in some circumstances. Most importantly, there<br />
are strong indications that the behaviour of biological organisms may be governed to a not insignificant<br />
degree by informational principles (Laughlin et al., 1998, Laughlin, 2001, Polani, 2009). The <strong>CORBYS</strong><br />
SOIAA component will be centrally based on these principles whose current state-of-the-art will be described<br />
in the following section.<br />
12.2 Information-Theoretic Principles<br />
12.2.1 Biological Context<br />
Information-theoretic behaviour principles form a special case of "bio-inspired" or "biologically<br />
motivated" behaviour anticipation, generation <strong>and</strong> initiation mechanisms, as the mathematical principles on which they<br />
rely can be formulated on a quite abstract level, but still have high biological relevance. The core hypothesis<br />
of the SOIAA component in the <strong>CORBYS</strong> project rests on this assumption.<br />
Key is the hypothesis that the robot need not accurately mimic human (or biological) behaviour<br />
for smooth interaction; rather, it is sufficient to have a simplified interaction model. This model, however,<br />
needs to capture the characteristic properties, the essence, of biological behaviour without attempting to be<br />
over-accurate.<br />
The present approach is based on mounting evidence that biological information processing might be<br />
following informational optimality principles (typically minimisation or maximisation, depending on context)<br />
with respect to information processed by the organism or agent. Here information is taken into consideration<br />
strictly in the sense of Shannon (this will always be assumed in the following).<br />
Shannon information acts as a natural measure of unperturbed sensorimotor data flow throughput as well as of<br />
redundancy, <strong>and</strong> its relevance for organismic behaviour was hypothesised already in the 1950s <strong>and</strong> then<br />
re-emphasised beginning in the 1990s (Barlow, 1959, Barlow, 2001, Attneave, 1954, Atick, 1992). From a<br />
different angle, namely the perspective of cybernetics, especially Ashby’s Law of Requisite Variety should be<br />
mentioned (Ashby, 1956). Its recent extensions in (Touchette <strong>and</strong> Lloyd, 2000, Touchette <strong>and</strong> Lloyd, 2004)<br />
demonstrate quantitatively the role of information in the sensorimotor loop: the difference in entropy<br />
reduction in the environment that an agent achieves in open-loop <strong>and</strong> closed-loop mode is limited by the<br />
information that the agent takes in. This is a fundamental limit that holds for any system that can be<br />
probabilistically modelled. It does not depend on any other particulars of the model. As such, it exemplifies<br />
the universality character of information-theoretic descriptions of agents which holds independently of their<br />
computational model. This relevance is universal in that it applies to the informational interaction (<strong>and</strong> all<br />
physical interactions are at the same time informational) of an agent with its environment.<br />
However, the relevance even seems to translate into how biological organisms process information internally<br />
or how information is generally processed in an evolutionary context. For instance, evolutionary fitness models<br />
that are suitable to be cast into Kelly Gambling scenarios (Kelly, 1956) can be combined with Howard’s value<br />
of information (Howard, 1966) in a coherent framework (Donaldson-Matasci et al., 2010).<br />
There are indications that the internal processing of sensoric stimuli also seems to be governed by<br />
informational optimality principles. This seems to hold particularly true for neural signals (Rieke et al., 1999,<br />
Laughlin et al., 1998, Parush et al., 2011). The basis for that assumption is partly due to metabolic reasons,<br />
since (Shannon) information processing is metabolically expensive. It is expected that evolutionary pressure<br />
will act as to optimise the information processing channels of an organism (Brenner et al., 2000, Laughlin,<br />
2001, Polani, 2009). Under this hypothesis, an under-exploited information channel will either be evolved<br />
away, or else be increasingly used until it is optimally exploited, improving the utility that can be gained through<br />
its existence. It follows that, in the adaptive equilibrium case, one expects that an existing information<br />
channel of an adapted organism (i.e. an organism whose "informational ecology" is, in a suitable sense, in<br />
balance with its environment) will be exploited to its fullest by this organism. This will in general not hold true<br />
for an organism out of balance, i.e. an organism that only recently entered a new ecological niche <strong>and</strong> did not<br />
yet (on individual as well as on population level) have sufficient time to adapt to that niche.<br />
These assumptions are also consistent with the hypothesis that the sensorimotor abilities of animals <strong>and</strong> humans<br />
operate on the basis of a Bayesian model (Schrater <strong>and</strong> Kersten, 2002, Körding <strong>and</strong> Wolpert, 2004). Evidence<br />
for the Bayesian character of organismic decision making is not limited to higher organisms. In fact, it turns<br />
out that even insect behaviour demonstrates some consistency with Bayesian modelling. An example for that<br />
is the infotaxis model that was introduced by (Vergassola et al., 2007) to model the search behaviour of a male<br />
moth for its female mate through a very sparse, event-based olfactory signal. In this model, isolated<br />
pheromone detection events, processed through an inverse Bayesian model <strong>and</strong> a seeking behaviour that is<br />
consistent with maximising information about the location of the mate (named infotaxis by the authors), are<br />
sufficient to reconstruct a search-<strong>and</strong>-home dynamics that is surprisingly consistent with what is observed as<br />
behaviour of actual moths. It is, of course, not assumed that the male moth is indeed “implementing” a full<br />
Bayesian model <strong>and</strong> an infotaxis dynamics; in fact, it is more plausible to assume that the brain of the<br />
organism will most likely implement a proxy or surrogate dynamics that, in the scenarios of relevance,<br />
will exhibit infotaxis-analogue behaviour. Nevertheless, the close similarity demonstrates that the assumption<br />
of near Bayes- <strong>and</strong> information-optimal behaviour provides a powerful model, not necessarily of<br />
mechanisms, but of the general character of organismic behaviour generation <strong>and</strong> the incentives that drive it.<br />
12.2.2 Cognitive Modelling Context<br />
The identification of information-theoretic concepts as governing the behaviour of organisms provides<br />
quantitative pathways towards a systematic modelling of cognitive architectures based on these principles.<br />
However, although information theory is an old <strong>and</strong> long-established field, its successful use for cognitive<br />
modelling has exp<strong>and</strong>ed very significantly only in the last decade, boosted by a series of advances.<br />
12.2.2.1 The Information Bottleneck Principle<br />
The information bottleneck principle was introduced in (Tishby et al., 1999). It implements an information-theoretic<br />
version of sufficient statistics. As it uses information-theoretic measures to quantify informational<br />
sufficiency, the information bottleneck principle provides a constructive way to obtain optimal<br />
approximations for sufficient statistics. In the case that the distributions are modelled by exponential families,<br />
sufficient statistics can be achieved perfectly; where not, they are optimally approximated.<br />
Another interpretation of the information bottleneck is its ability to tag relevant information, selected from a<br />
total body of information. Unlike in the classical interpretation of Shannon information theory, where<br />
information is completely devoid of semantics, in the information bottleneck, one is able to “colour”<br />
information according to its relevance, distinguishing between relevant <strong>and</strong> irrelevant information. This is<br />
achieved by introducing a relevance variable Y which denotes the feature or quantity of interest. This variable<br />
is now available to the system (or r<strong>and</strong>om) variable X only via another variable Z, which may contain a<br />
significant amount of total information which is not relevant in the sense that it does not convey any<br />
information about the relevance variable Y. The system (or r<strong>and</strong>om) variable X has only direct access to the<br />
latter variable Z which screens the relevance variable Y from it.<br />
Formally, one considers the Markovian sequence Y → Z → X, where Y is the relevance variable. One then<br />
maximises the mutual information I(X;Y) while keeping I(X;Z) constant. Thereby, one aims to limit the<br />
information that X takes in about Z which is not relevant to Y, the optimisation running over all probabilistic<br />
projections p(x|z). This constrained optimisation can be transformed into an unconstrained one by<br />
maximising I(X;Y) − βI(X;Z) for various values of β, which achieve various levels of information intake of X<br />
from Z.<br />
This process defines an efficient frontier of solutions which denote the optimal trade-offs between the<br />
information intake from Z <strong>and</strong> the relevant information about Y. The parameter β allows one to control<br />
whether one rather puts emphasis on capturing the relevant information as completely as possible or one is<br />
rather interested in keeping the information bottleneck around Z as tight as possible.<br />
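The resulting self-consistent equations, p(x|z) ∝ p(x)·exp(−β·D_KL[p(y|z)‖p(y|x)]), iterated together with the induced p(x) <strong>and</strong> p(y|x), can be sketched as follows; the example distribution is hypothetical <strong>and</strong> the code is a minimal illustration, not an optimised implementation.<br />

```python
import numpy as np

def information_bottleneck(p_zy, n_x, beta, iters=200, seed=0):
    """Sketch of the self-consistent IB iteration (after Tishby et al., 1999).
    p_zy[z, y]: joint distribution of observed Z and relevance Y;
    n_x: cardinality of the bottleneck variable X."""
    rng = np.random.default_rng(seed)
    p_z = p_zy.sum(axis=1)                      # marginal p(z)
    p_y_given_z = p_zy / p_z[:, None]           # conditional p(y|z)
    # random initial stochastic map p(x|z)
    p_x_given_z = rng.random((len(p_z), n_x))
    p_x_given_z /= p_x_given_z.sum(axis=1, keepdims=True)
    for _ in range(iters):
        p_x = p_z @ p_x_given_z                 # p(x) = sum_z p(z) p(x|z)
        # p(y|x) = sum_z p(z|x) p(y|z), with p(z|x) obtained by Bayes' rule
        p_zx = p_x_given_z * p_z[:, None]
        p_y_given_x = (p_zx / np.maximum(p_x, 1e-12)).T @ p_y_given_z
        # KL divergence D(p(y|z) || p(y|x)) for every pair (z, x)
        log_ratio = (np.log(np.maximum(p_y_given_z, 1e-12))[:, None, :]
                     - np.log(np.maximum(p_y_given_x, 1e-12))[None, :, :])
        kl = (p_y_given_z[:, None, :] * log_ratio).sum(axis=2)
        # update: p(x|z) proportional to p(x) * exp(-beta * KL)
        p_x_given_z = p_x[None, :] * np.exp(-beta * kl)
        p_x_given_z /= p_x_given_z.sum(axis=1, keepdims=True)
    return p_x_given_z

# Hypothetical example: Z is a noisy binary observation of the relevance Y.
p_zy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
encoder = information_bottleneck(p_zy, n_x=2, beta=5.0)
```

The parameter β plays exactly the trade-off role described above: small β collapses X to a trivial summary, while larger β preserves more of the relevant information.<br />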
12.2.2.2 Utility-Relevant Information<br />
The relevant information concept is not limited to identifying prescribed information in the relevance<br />
variable Y. For a setting where actions taken by an agent can be formulated, it is possible to reformulate the<br />
relevant information setting to take utility into account. The (unconstrained) optimisation (here a<br />
minimisation over the policy π(a|s)) is now carried out over the functional I(S;A) − βE[Q(S;A)], where E<br />
denotes the expectation <strong>and</strong> Q denotes the utility of action A taken in state S (Polani et al., 2006). This<br />
optimisation turns out to be formally equivalent to the well-known rate-distortion problem from information<br />
theory (Blahut, 1972), with Q replacing the distortion function. An important distinction in the interpretation<br />
is that the distortion function typically measures distortions between signals which, nominally, live in the<br />
same state space, whereas Q does not have the character of a distortion, as actions A <strong>and</strong> states S are concepts coming<br />
from entirely separate spaces. Nevertheless, the rate-distortion formalism transfers seamlessly to the new<br />
situation.<br />
However, in the presence of delayed reward structures, the utilities Q cannot be directly computed, but depend<br />
on the complete process that the system undergoes when proceeding according to policy π(a|s). Thus, the<br />
utility of a concrete state s under a concrete action a (the lowercase characters denote a concrete value rather<br />
than a r<strong>and</strong>om variable) <strong>and</strong> following policy π is given by<br />
Q(s,a|π) = Σ_{s'} P(s,s',a) [R(s,s',a) + Σ_{a'} π(a'|s') Q(s',a'|π)],<br />
where s' is the state following s, P(s,s',a) is the transition matrix <strong>and</strong> R(s,s',a) is the reward structure.<br />
In this case, the optimisation of the functional I(S;A) − βE[Q(S;A|π)] requires a more intricate procedure for its<br />
computation since now not only I(S;A), but also Q(·;·|π), depends on the policy π. This computation can be<br />
carried out by a double fixed point iteration (Polani et al., 2006), <strong>and</strong> can be extended also to consider<br />
cumulated processed information, such as the information-to-go of the whole agent-environment system<br />
(Tishby <strong>and</strong> Polani, 2011) or the lookahead information (van Dijk <strong>and</strong> Polani, 2011b). See also (Saerens<br />
et al., 2009) for a related concept.<br />
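A minimal sketch of such a double fixed-point iteration is given below: a Blahut-Arimoto-style policy update, π(a|s) ∝ p(a)·exp(βQ(s,a)), is alternated with policy evaluation of Q under the current π. The discount factor, the toy MDP <strong>and</strong> all names are illustrative assumptions, not the exact construction of (Polani et al., 2006).<br />

```python
import numpy as np

def info_regularised_policy(P, R, p_s, beta, gamma=0.9, iters=100):
    """Double fixed-point sketch: alternate a Blahut-Arimoto-style policy
    update, pi(a|s) ~ p(a) * exp(beta * Q(s, a)), with policy evaluation
    of Q under the current pi.  P[s, a, s2] are transition probabilities,
    R[s, a, s2] rewards, p_s a fixed state distribution for the action
    marginal, and gamma a discount factor assumed for convergence."""
    n_s, n_a, _ = P.shape
    pi = np.full((n_s, n_a), 1.0 / n_a)          # uniform initial policy
    Q = np.zeros((n_s, n_a))
    for _ in range(iters):
        # policy evaluation: Bellman backups of Q under the current pi
        for _ in range(50):
            V = (pi * Q).sum(axis=1)             # V(s) = sum_a pi(a|s) Q(s,a)
            Q = (P * (R + gamma * V[None, None, :])).sum(axis=2)
        # Blahut-Arimoto-style update of the policy
        p_a = p_s @ pi                           # action marginal p(a)
        pi = p_a[None, :] * np.exp(beta * Q)
        pi /= pi.sum(axis=1, keepdims=True)
    return pi, Q

# Hypothetical two-state chain: action 1 moves to the rewarding state 1,
# action 0 stays put.
P = np.zeros((2, 2, 2))
P[:, 0, :] = np.eye(2)
P[:, 1, :] = np.array([[0.0, 1.0], [0.0, 1.0]])
R = np.zeros((2, 2, 2))
R[:, :, 1] = 1.0                                 # reward for landing in state 1
pi, Q = info_regularised_policy(P, R, p_s=np.array([0.5, 0.5]), beta=10.0)
```

With high β the utility term dominates <strong>and</strong> the policy becomes near-greedy; lowering β trades utility for a smaller informational load I(S;A).<br />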
These computations provide an informatory interpretation of goal-directed, or, more precisely, utility-driven<br />
behaviour. Conceptually, this corresponds to a regularisation of the usual Markovian Decision Process<br />
optimisation via Shannon information quantities. This approach offers two important advantages. First, it<br />
connects the informational interpretation of biological cognitive processes with the formal framework of<br />
utility-based decision making. Second, on the mathematical level, the informational regularisation provides a<br />
tie break in the case of undetermined solutions of same performance <strong>and</strong>, in addition, it offers<br />
stability/robustness guarantees under a PAC-Bayesian perspective (McAllester, 1999). These guarantees can<br />
be interpreted as implementing a “least committed” future of an agent’s behaviour (Tishby <strong>and</strong> Polani, 2011).<br />
This is advantageous from a number of perspectives. Least commitment implies that the agent implements the<br />
principle of information parsimony which assumes that minimal informational load is a natural way to model<br />
minimal cognitive load which, in turn, formed the basis of the biologically plausible information processing<br />
hypotheses expounded in the earlier sections.<br />
The second important aspect, however, is that a least committed future also corresponds to the least biased<br />
assumption about what an agent is going to do in the future. Not only is this attractive from a purely<br />
information-theoretical typicality perspective (Cover <strong>and</strong> Thomas, 1991), more importantly, it corresponds to<br />
transferring the Jaynesian maximum entropy principle from a simple Bayesian setting to a full-fledged<br />
Markovian Decision Process model. In the context of <strong>CORBYS</strong>, where intentions <strong>and</strong> goals need to be<br />
inferred, it provides a natural approach to make minimally committed assumptions about the future behaviour<br />
during generation or during analysis of behaviour.<br />
12.3 Self-Organised Behaviour <strong>and</strong> Goal Generation<br />
As said above, explicit goals <strong>and</strong> tasks can be incorporated by modelling the reward structure R(s,s',a)<br />
explicitly. However, in many cases, goals, tasks <strong>and</strong> intentions of a biological agent are unclear or ill-defined.<br />
It is therefore essential that one has a model for goal or task generation in the cases of unspecified or<br />
unknown reward structure.<br />
12.3.1 Generic Behaviour Generation<br />
The problem of constructing self-motivated behaviours has been identified already in pioneering work by<br />
Schmidhuber (1991). Under the framework of artificial curiosity, he identified possible incentives for<br />
intrinsically driven behaviours without an externally structured reward concept.<br />
Steels (2004) proposed the autotelic principle for learning systems. This principle is directly motivated by<br />
the psychological concept of flow (Csíkszentmihályi, 1978) which aims at achieving an optimal balance<br />
between effort <strong>and</strong> ease during the process of engaging in an activity (physical or mental). The idea behind<br />
the psychological concept is that too little effort creates boredom <strong>and</strong> frustration, but too much effort will<br />
create stress <strong>and</strong> thereby again reduce the effectiveness of the activity. This again leads to frustration,<br />
however, this time via overload. In the flow state, a subject will experience an optimal match between<br />
engagement, progress, <strong>and</strong> feedback. The autotelic principle was an attempt to formalise this principle to<br />
provide quantitative characteristics that could be implemented in an AI system.<br />
Another approach is based on learning progress (Kaplan <strong>and</strong> Oudeyer, 2004) which considers the speed at<br />
which an agent succeeds in reducing a given error function. The idea is that it is not the performance in<br />
reaching a goal that the agent aims at optimising, but the speed by which the agent improves. When a<br />
saturation process begins as an agent becomes better at achieving a goal, this will cause the agent at the same<br />
time to improve less <strong>and</strong> less due to the law of diminishing returns, <strong>and</strong> the agent will then switch to learn a<br />
different goal. Thus, the agent will keep looking for goals which have an increased level of novelty in them.<br />
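This goal-switching mechanism can be sketched as follows: the agent tracks a window of recent prediction errors per goal <strong>and</strong> pursues the goal on which the error is currently falling fastest. The class, the window heuristic <strong>and</strong> the goal names are illustrative assumptions, not the exact scheme of (Kaplan <strong>and</strong> Oudeyer, 2004).<br />

```python
from collections import deque

class LearningProgressSelector:
    """Sketch of learning-progress-based goal selection: pursue the goal
    the agent is improving on fastest, not the goal it is best at."""

    def __init__(self, goals, window=10):
        self.errors = {g: deque(maxlen=window) for g in goals}

    def record(self, goal, error):
        self.errors[goal].append(error)

    def progress(self, goal):
        hist = list(self.errors[goal])
        if len(hist) < 2:
            return float("inf")        # unexplored goals look maximally novel
        half = len(hist) // 2
        older = sum(hist[:half]) / half
        recent = sum(hist[half:]) / (len(hist) - half)
        return older - recent          # positive when the error is decreasing

    def select(self):
        return max(self.errors, key=self.progress)

# Hypothetical goals: error on "reach" is falling, "grasp" has plateaued.
sel = LearningProgressSelector(["reach", "grasp"], window=4)
for e in [1.0, 0.8, 0.6, 0.4]:
    sel.record("reach", e)
for e in [0.5, 0.5, 0.5, 0.5]:
    sel.record("grasp", e)
```

Once the error on a goal saturates, its progress measure drops towards zero <strong>and</strong> the selector switches away, which is exactly the novelty-seeking behaviour described above.<br />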
This philosophy is also pursued in Schmidhuber (2002) where learning is modelled ab initio as a compression<br />
scheme. Learning progress then directly <strong>and</strong> universally expresses itself as code growth rate during<br />
compression of the learning process. The actual improvement is measured by the increase of the effectiveness<br />
of the compression, not just the length of the compressed output. As an illustrative example, one can consider<br />
the discovery of new laws of motion which allow laws learnt earlier to be compressed much more effectively. Such<br />
compression gradients form the incentive structure of this ab initio model. The problem with this model is<br />
that it considers only universal Kolmogorov-type compression schemes. On the one h<strong>and</strong>, they offer various<br />
optimality guarantees, but, on the other h<strong>and</strong>, they are only of theoretical relevance due to their strongly<br />
asymptotic character, i.e. the universal guarantees can only be established for sufficiently long learning runs<br />
which are typically orders of magnitude outside the range that is available to an artificial or biological agent.<br />
12.3.2 Principle-Based Self-Motivated Models<br />
The approaches discussed in the previous section define generic concepts which can be implemented in<br />
manifold ways <strong>and</strong> depend on the particular instantiation of the learning models or compression schemes.<br />
Principle-based models, in contrast, are those where the intrinsic motivation principle is directly embedded<br />
into <strong>and</strong> "implemented" by the formalism.<br />
Strictly speaking, Schmidhuber’s universal compression gradient also belongs in the class of principle-based<br />
models. However, since the particular compression scheme is not canonically defined, <strong>and</strong> the formalism<br />
becomes insensitive to the scheme only in the asymptotic case (which is typically not realisable), we have<br />
grouped it above together with the approaches characterised by generic concepts rather than concrete<br />
principles.<br />
One learning concept is ISO-learning which is based on modelling low-level anticipatory feedback loops (Porr<br />
et al., 2003). Another important concept for implementing intrinsic self-motivation was the homeokinesis<br />
concept introduced in (Der et al., 1999, Der, 2000, Der, 2001). Given the embodiment of a concrete agent,<br />
homeokinetic control is defined by constructing behaviour of an agent in such a way that it maximises<br />
predictability of its sensoric stimuli in the future. This is achieved by an internal model of the agent that is<br />
using a learning rule to minimise the prediction error for future stimuli encountered by the agent.<br />
Importantly, this approach encapsulates the embodiment as a core component of the model. It is only defined<br />
in the context of the complete sensorimotor loop <strong>and</strong> elevates the body into a central part of the cognitive<br />
process, in opposition to many approaches from traditional AI; this perspective thus provides a quantitative<br />
grounding of the embodied intelligence perspective (Brooks, 1991, Paul, 2006, Pfeifer <strong>and</strong> Bongard, 2007).<br />
The early “naive” homeokinesis approach had the problem that it tended to favour situations where the<br />
prediction for the agent is simple. However, since this has a propensity to send the agent into steady states,<br />
the model was extended by a mechanism to ensure a rich sensorimotor stimulus spectrum (Der et al., 2006).<br />
For this, the estimated sensorimotor dynamics of the system is considered as a dynamical system whose<br />
Lyapunov exponents are estimated. The agent then moves towards states which have the most negative time-<br />
reversed Lyapunov exponents in sensoric stimulus space, while maintaining a high level of predictability;<br />
these are the maximally unstable (thus structure-rich) points of the sensorimotor dynamics, but the<br />
predictability still maintains structure in the agent dynamics. With the success <strong>and</strong> transparent interpretation<br />
of information theory-based methods (discussed below), the homeokinesis approach was generalised to use<br />
the information-theoretic language: the “unstable-yet-predictable” concept was translated into the concept of<br />
predictive information in (Ay et al., 2008). Predictive information (Bialek et al., 2001, Shalizi, 2001) is the<br />
mutual information that the past of a time series contains about its future. This approach makes it possible to<br />
convert the original dynamical-systems approach into an information-theoretic setting. A high value indicates<br />
good predictability but, at the same time, is more expressive than the criterion of minimum future prediction<br />
error, because only sensorily rich pasts can achieve high mutual information values. An impoverished sensory<br />
past/future relation, even if predictable, does not achieve high predictive information since there is not much<br />
to predict in this case. The approach can be generalised to incorporate the learning process itself (Still, 2009).<br />
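As a concrete illustration of this quantity, the sketch below is a generic plug-in estimator of the mutual information between length-k past and future windows of a discrete time series; it illustrates the definition above under our own simplifying assumptions (finite alphabet, window length k) and is not code from any of the cited works.<br />

```python
import math
from collections import Counter

def predictive_information(series, k=1, base=2.0):
    """Plug-in estimate of I(past; future) for a discrete time series,
    using length-k past and future windows (an illustrative sketch)."""
    pairs = []
    for i in range(k, len(series) - k + 1):
        past = tuple(series[i - k:i])
        future = tuple(series[i:i + k])
        pairs.append((past, future))
    n = len(pairs)
    joint = Counter(pairs)                    # empirical p(past, future)
    p_past = Counter(p for p, _ in pairs)     # empirical p(past)
    p_fut = Counter(f for _, f in pairs)      # empirical p(future)
    mi = 0.0
    for (p, f), c in joint.items():
        # p(past,future) * log [ p(past,future) / (p(past) p(future)) ]
        mi += (c / n) * math.log(c * n / (p_past[p] * p_fut[f]), base)
    return mi
```

As the text notes, a perfectly predictable but impoverished series (e.g. a constant signal) yields zero predictive information, while a rich yet deterministic one (e.g. an alternating signal) yields a high value.<br />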
An alternative approach has exploited predictive information in an evolutionary context,<br />
where it has been used to evolve locomotor patterns in simulated snakes with limited<br />
sensorics (Prokopenko et al., 2006a, Prokopenko et al., 2006b). Here, the dynamics is not adapted online;<br />
rather, the sensorimotor dynamics are optimised via an evolutionary algorithm to maximise the predictive<br />
information statistics (using a computationally cheaper Rényi variant of the predictive information).<br />
The information-theoretic approach is highly versatile. It has been extended to encompass major classes of<br />
information processing in sensorimotor loops, ranging from simple minimal models of control to various<br />
methods for the formulation of behaviour generation and adaptation on the basis of information theory<br />
(Lungarella and Sporns, 2005, Lungarella and Sporns, 2006, Polani et al., 2007, Klyubin et al., 2004,<br />
Klyubin et al., 2007). The methodology makes it possible to model many aspects of agent-environment<br />
interaction, such as autonomy itself (Bertschinger et al., 2008), or the formation of joint concepts<br />
in a group of agents (Möller and Polani, 2008).<br />
Central to CORBYS is the construction of intrinsically self-motivated behaviour and thus a degree of<br />
autonomy for the agents, which should enable them to propose, initiate or estimate behaviours. As a basis for<br />
this, the universal utility empowerment will be used. Empowerment is the external channel capacity of an<br />
organism (Klyubin et al., 2005a, Klyubin et al., 2008). It is typically defined as a function over the states (or,<br />
in more general situations, as a context function; see Capdepuy et al., 2007a). It is not, as is the case with<br />
predictive information, a function of the trajectory, but a function of the state, and thus it acts as a universal<br />
utility. This, in particular, means that empowerment is always defined universally, i.e. in the same generic<br />
way, depending only on the particular embodiment and the dynamics of the environment; and, since it acts as<br />
a utility, the agent aims to maximise it in a greedy fashion by hillclimbing through the states, guided by the<br />
local empowerment gradient.<br />
Empowerment is an expression of the least commitment idea in the absence of a concrete goal or reward. It<br />
rewards being in states which are least committed with respect to future perturbations or goals (Klyubin et al.,<br />
2008). Empowerment provides intrinsic, embodiment-based saliency criteria for desirable states of the world.<br />
This includes the identification of states affording novel manipulative degrees of freedom (Klyubin et al.,<br />
2005a) or corresponding to states of maximum centrality in state-action graphs (Anthony et al., 2008), as well<br />
as gradients for sensory feature adaptation (Klyubin et al., 2005b) <strong>and</strong> natural points of stability (Klyubin et<br />
al., 2008).<br />
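Since empowerment is a channel capacity, it can be computed for a discrete state with the standard Blahut-Arimoto algorithm. The sketch below assumes a small finite action/next-state model; the transition matrix `p_next` is a hypothetical input for illustration, not an interface from the cited implementations.<br />

```python
import numpy as np

def empowerment_bits(p_next, n_iter=200, tol=1e-12):
    """Empowerment of a state: capacity (in bits) of the action -> next-state
    channel, via Blahut-Arimoto. p_next[a, s] = P(next state s | action a)
    from the current state (illustrative sketch)."""
    A, S = p_next.shape
    q = np.full(A, 1.0 / A)              # candidate capacity-achieving action distribution
    for _ in range(n_iter):
        marg = q @ p_next                # resulting next-state distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_next > 0, np.log(p_next / marg), 0.0)
        d = (p_next * log_ratio).sum(axis=1)   # per-action KL divergence, in nats
        q_new = q * np.exp(d)
        q_new /= q_new.sum()
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    marg = q @ p_next
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(p_next > 0, np.log(p_next / marg), 0.0)
    cap_nats = float(q @ (p_next * log_ratio).sum(axis=1))
    return cap_nats / np.log(2.0)
```

For example, a state from which two actions lead deterministically to two distinct successor states has an empowerment of one bit, while a state from which all actions lead to the same successor has zero empowerment, matching the "least commitment" reading above.<br />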
Computation of empowerment in continuous scenarios provides an alternative to optimal control models for<br />
the control of dynamical systems; it can avoid the backup of value data required in the dynamic<br />
programming framework and can allow an online traversal by the agent of sensorimotor space towards<br />
“meaningful” salient states without the necessity of a full exploration (Jung et al., 2011). Furthermore, the<br />
Lagrange conjugate of empowerment can be taken into consideration, namely the computation of the action<br />
sequences that achieve the maximum channel capacity. An information bottleneck constraint can then be<br />
imposed on these action sequences. This procedure highlights dominant proto-behaviours in a self-organised<br />
way, creating preferred “eigenbehaviour” structures (Anthony et al., 2009).<br />
12.4 Behaviour Anticipation, Generation <strong>and</strong> Initiation<br />
12.4.1 Anticipation<br />
Anticipation as a concept appears in widely disparate scenarios. It implicitly encapsulates more than mere<br />
time-series prediction, inasmuch as it implies preparation for an action that needs to be set up<br />
before an event in order to be most effective.<br />
As a traditional anticipation approach, Bosman and Poutré (2007) use a Genetic Algorithm to<br />
carry out stochastic dynamic optimisation in an online fashion, incorporating anticipation of<br />
future events. Here, the authors essentially formulate the anticipation problem as an optimisation problem<br />
solved online by a Genetic Algorithm.<br />
More explicitly, the preparation aspect becomes clear in an antagonistic anticipation scenario. In Kott <strong>and</strong><br />
Ownby (2005) predictive methodologies are offered to anticipate probable enemy actions. Such a scenario is<br />
of particular complexity, as one needs to incorporate reasoning about deception; the authors identify this as<br />
rendering anticipation particularly difficult, especially in conjunction with the inevitable emotional and<br />
psychological backdrop against which antagonistic dynamics take place. In particular, such a scenario<br />
requires reasoning about likely “disinformation” attempting to mislead an observer about possible goals: thus,<br />
belief <strong>and</strong> intent recognition, deception discovery as well as planning become part of the scenario. For that<br />
purpose, the authors combine emotion modelling (Gratch <strong>and</strong> Marsella, 2004) <strong>and</strong> pheromone-based<br />
multiagent (Parunak et al., 2004) approaches to explore trajectory spaces to model the complex factors<br />
involving battlefield operations and the relevant anticipatory tasks. A simpler, MUD-oriented rule-based<br />
anticipation model for antagonistic scenarios is shown in Darken (2005). Note, however, that in the context<br />
of <strong>CORBYS</strong>, the setting is inherently cooperative, so that there is no necessity to take the specific<br />
complications of antagonistic scenarios into consideration.<br />
A further distinction is that between weak and strong anticipation (Dubois, 2003, Stepp and Turvey, 2010),<br />
which refers to the level at which the future of the system is anticipated by an agent: in weak anticipation, the<br />
agent contains a predictive model of what the system will be doing in the future, based purely on past<br />
observation. In a strongly anticipatory system, the agent is for all purposes part of the system, in that it can,<br />
implicitly, use the future states of the system for the anticipatory action. This can be seen as closely related to<br />
the relation<br />
between excess entropy and statistical complexity (Shalizi, 2001), where the former encompasses all (also<br />
implicit) relations between past and future, while the latter must necessarily force them through the<br />
Markovian bottleneck of the present time slice (Shalizi and Crutchfield, 2002) and might therefore be much<br />
less efficient. In Nery and Ventura (2010) this distinction is used in the framework of their Event<br />
Segmentation Theory to segment streams into event-separated continuous segments, thereby bridging the gap<br />
between continuous state space representation <strong>and</strong> discrete temporal events when a continuous stream makes a<br />
transition to another qualitatively different continuous stream. An informational perspective on how to extract<br />
anticipatory events from an event sequence consisting of contingent events with given delays is given in<br />
Capdepuy et al. (2007b) and coupled with an action selection mechanism in Capdepuy et al. (2007c).<br />
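Weak anticipation in the above sense can be sketched minimally: the agent fits a predictive model purely on past observations and uses it to anticipate the next observation ahead of time. The second-order autoregression below, fitted by least squares, is our own illustrative choice of predictive model, not one taken from the cited works.<br />

```python
import numpy as np

def fit_ar2(signal):
    """Least-squares fit of x[t] ~ a*x[t-1] + b*x[t-2] on past observations
    (a minimal 'weak anticipation' predictive model)."""
    X = np.column_stack([signal[1:-1], signal[:-2]])   # rows (x[t-1], x[t-2])
    y = signal[2:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(signal, coeffs):
    """Anticipate the next observation from the fitted model."""
    a, b = coeffs
    return a * signal[-1] + b * signal[-2]
```

On a pure sinusoid, which satisfies an exact AR(2) recurrence, this predictor recovers the dynamics and anticipates the next sample essentially exactly; strong anticipation, by contrast, would require access to the system's actual future states rather than a model fitted to its past.<br />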
A notorious example of anticipatory failure is the anticipation of extreme events (Nadin,<br />
2005). Examples are catastrophic events such as earthquakes or financial collapses, for which<br />
power-law-type post-hoc models can often be given, but which violate the principle suggested in Nadin (2005) that<br />
“a model unfolding in faster than real time [would appear] to the observer as an informational future”, in that<br />
they screen the future from the present observer. A particular challenge is, as the author argues, that for these<br />
models it is difficult to decide when predictions are appropriate: rare but large events are<br />
distinguished by the fact that not only is their occurrence not easily predicted by models valid in<br />
more “stable” phases of temporal development, but the models do not even provide sufficient<br />
introspection to detect this situation.<br />
Seizures can be considered “extreme” events in the brain. In Mormann et al. (2006), a review is given of<br />
existing seizure anticipation models based on EEG data. The methods reported to be most successful are<br />
based on dynamical-systems measures such as Lyapunov exponents, accumulated signal energy, simulated<br />
neuronal cell models or phase synchronisation; however, later analyses appear to cast these results into doubt,<br />
and the current state of knowledge is considered inconclusive.<br />
On the level of intentional actions, however, it has been demonstrated that EMG registers the influence of the<br />
anticipation of future events; in particular, it appears in preparatory postures anticipating voluntary movement<br />
(Brown and Frank, 1987). The anticipatory movement takes place in response to an expected task, e.g. to<br />
preserve balance in anticipation of a push or pull action. The combination of anticipation with attentional<br />
mechanisms is believed to be controlled by the prefrontal cortex, by modulating sensory pathways in a<br />
“top-down” fashion (Liang and Wang, 2003).<br />
This biological evidence indicates the fundamental relevance of anticipation, not only for preparing active<br />
behaviour, which requires suitable alignment activities, but also for preparing cognitive processing resources.<br />
This is corroborated by studies using principled information-theoretic<br />
methods (van Dijk et al., 2010, van Dijk <strong>and</strong> Polani, 2011a). In these, it is assumed that limited informational<br />
resources are allocated for the working memory keeping track of current goals (i.e. one places constraints on<br />
goal-relevant information). This minimal assumption gives rise to salient decision transition points for<br />
behaviour strategies, such as intermediate goals. This model acts as a kind of proto-attentional mechanism.<br />
In addition, it highlights the link between attention, anticipation, <strong>and</strong> their emergence from constraints in the<br />
available informational resources.<br />
An agent’s actions carried out in the context of goal-directed behaviour must necessarily reveal information<br />
about its goals <strong>and</strong>/or purposes. This information, called digested information, can be identified by other<br />
agents that have the same goal as the first agent (Salge <strong>and</strong> Polani, 2011).<br />
For the CORBYS project, the properties discussed in the last two paragraphs are of relevance. The digested<br />
information principle, under which actions reveal information about the agent’s intentions, indicates that human<br />
actions should provide the SOIAA architecture with clues to the intentions of the human. To handle this<br />
possibly sparse information, it is necessary to impose additional regularisation constraints on its estimation.<br />
For this purpose, the studies of task structuring by constrained goal-relevant information (van Dijk et al.,<br />
2010, van Dijk <strong>and</strong> Polani, 2011a) offer natural approaches to regularisation.<br />
Generally, anticipatory behaviour requires the combination of abilities to predict causally driven as well as<br />
goal-directed dynamics. The first component aims at predicting a dynamics according to e.g. a “passive”<br />
physical law, the second addresses the fact that agents, such as humans, are not passively following laws but<br />
initiate behaviours with certain purposes. The “least commitment” philosophy described earlier provides a<br />
natural framework to extract information about possible future purposeful trajectories from information about<br />
past observations.<br />
The emphasis on self-motivated behaviour generation in the section above is grounded in the necessity of<br />
having a picture of possible candidates for the behaviour of the robot as well as of the human, and of how to<br />
organise the goal/initiative transition between them.<br />
12.4.2 Causality <strong>and</strong> Information Flow<br />
Detecting the initiation of behaviour requires the detection of causality and of information flow. For causality<br />
detection, it is necessary to employ suitable causality models, namely Causal Bayesian Networks (CBNs;<br />
Pearl, 2000). These are Bayesian Networks endowed with additional interventional semantics. With this<br />
interventional model, it is possible to structurally model causal influence. Given observational variables only,<br />
established algorithms such as IC* can be used for the systematic reconstruction of the causal structure for<br />
parts of the networks under certain, relatively strong conditions (Pearl, 2000).<br />
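The interventional semantics can be made concrete on a three-node example: with a confounder Z influencing both X and Y, the observational P(Y|X) and the interventional P(Y|do(X)) differ, because do(X) removes the dependence of X on Z while conditioning on X does not. All probabilities below are hypothetical illustrative numbers, not from any cited model.<br />

```python
# Hypothetical CBN: Z -> X, Z -> Y, X -> Y, all variables binary.
P_Z1 = 0.5                       # P(Z=1)
P_X1_given_Z = {0: 0.1, 1: 0.9}  # P(X=1 | Z=z): Z confounds X
P_Y1_given_XZ = {(x, z): 0.1 + 0.3 * x + 0.4 * z
                 for x in (0, 1) for z in (0, 1)}

def p_y1_given_x1_obs():
    """Observational P(Y=1 | X=1): conditioning on X re-weights Z."""
    num = sum((P_Z1 if z else 1 - P_Z1) * P_X1_given_Z[z] * P_Y1_given_XZ[(1, z)]
              for z in (0, 1))
    den = sum((P_Z1 if z else 1 - P_Z1) * P_X1_given_Z[z] for z in (0, 1))
    return num / den

def p_y1_do_x1():
    """Interventional P(Y=1 | do(X=1)): cut the Z -> X edge, keep P(Z)."""
    return sum((P_Z1 if z else 1 - P_Z1) * P_Y1_given_XZ[(1, z)] for z in (0, 1))
```

With these numbers the observational quantity (0.76) overstates the interventional one (0.6), because observing X=1 also makes the confounder Z=1 more likely; this gap is exactly what the interventional semantics of a CBN captures.<br />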
Very recently, a class of methods has been introduced in which the requirements that are necessary for a<br />
reconstruction of the causal structure of the network have been considerably relaxed (Peters et al., 2009,<br />
Hoyer et al., 2009, Peters et al., 2010). These methods introduce an additional, weak, assumption, namely<br />
the conceptual independence of mechanisms <strong>and</strong> initial conditions, i.e. they consider the CBNs as having node<br />
distributions which are separately specified from the conditional distributions assigned to the CBN edges.<br />
This seemingly minor assumption allows an approximate causal reconstruction in many cases where<br />
traditional methods such as IC* would fail.<br />
Another class of methodologies for causal reconstruction is based on recent information-theoretic results<br />
(Steudel and Ay, 2011). The use of information theory to characterise causality was recognised as early as<br />
Lloyd (1991); however, until recently it has mostly been implemented as an essentially predictive model,<br />
such as Schreiber’s transfer entropy (Schreiber, 2000), or as minimal non-redundant information-transport<br />
requirements, as determined by directed information (Massey, 1990). A genuinely causal picture of<br />
information flow had already been sketched in Lloyd (1991) and Tononi and Sporns (2003), and was fully<br />
developed in Ay and Polani (2008) and Lizier et al. (2007), where the latter assumes the existence of a<br />
time-consistent compositionality structure (see also Ay and Wennekers, 2003).<br />
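Transfer entropy is directly computable for discrete series. The sketch below is a plug-in estimator of Schreiber's T(X→Y) = I(Y_{t+1}; X_t | Y_t) restricted to history length 1; this restriction is our simplification of the general definition.<br />

```python
import math
from collections import Counter

def transfer_entropy(x, y, base=2.0):
    """Plug-in estimate of Schreiber's transfer entropy T(X->Y)
    = I(Y_{t+1}; X_t | Y_t), history length 1, for discrete series."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((yn, yc) for yn, yc, _ in triples)
    c_z = Counter(yc for _, yc, _ in triples)
    c_zx = Counter((yc, xc) for _, yc, xc in triples)
    te = 0.0
    for (yn, yc, xc), c in c_xyz.items():
        # p(y_next | y_now, x_now) / p(y_next | y_now)
        te += (c / n) * math.log((c / c_zx[(yc, xc)]) / (c_yz[(yn, yc)] / c_z[yc]), base)
    return te
```

On a toy pair of series where Y simply copies X with a one-step delay, the estimator reports close to one bit flowing from X to Y and essentially none in the reverse direction, illustrating the asymmetry that makes transfer entropy a (predictive, not yet fully causal) directionality measure.<br />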
12.5 Technological Gaps<br />
Technological gaps exist for the following set of tasks:<br />
• goal/behaviour generation<br />
• goal/intention extraction<br />
• causal information flow estimation<br />
• transitional dynamics<br />
On the general level, the gaps that need to be addressed are the transfer to continuous <strong>and</strong> high-dimensional<br />
state spaces, the combination of the various levels of the process, as well as the adaptation of the formalisms<br />
for the particular task of human-robot intentional transfer interactions.<br />
12.5.1 Goal/Behaviour Generation<br />
For the goal <strong>and</strong> behaviour generation, various problems in conjunction with the high-dimensional <strong>and</strong><br />
complex structure of the deployment scenario need to be solved. Methods for high-dimensional<br />
empowerment computation need to be developed and extended towards online performance.<br />
It is furthermore necessary to extend the empowerment methodology used in SOIAA with a mechanism to<br />
introduce externally known saliency points and goals into the empowerment framework.<br />
12.5.2 Goal/Intention Extraction<br />
In this context, the main gaps concern suitable regularisation formalisms that make it possible to combine<br />
observed behaviour with prospective/hypothetical tasks/goals generated in the above step. In<br />
addition, the formalism needs to work in continuous environments.<br />
12.5.3 Causal Information Flow Estimation<br />
For the identification of the initiative-taker, it is necessary to identify the causal information flow in<br />
agent-human interaction. Here, a number of gaps exist. It is necessary to develop methods to determine the<br />
causal flow from observational continuous high-dimensional data. Online capability is also an existing gap,<br />
but it may be mitigated by the ability of an online algorithm in SOIAA to probe the causal structure of the<br />
system, thus turning a purely observational system into a partly interventional one and simplifying the flow<br />
detection.<br />
12.5.4 Transitional Dynamics<br />
A gap exists in defining the transitional dynamics shifting between robot initiative and human initiative,<br />
which can be seen as a regularisation task on the level of active behaviour. The gap to be addressed here is<br />
the fact that on the action level, intermediate actions may not exist, or may lead to undesired states; thus<br />
transitions in behaviours may need to exhibit phase transitions from one category of admissible behaviours to<br />
another. This transition therefore needs to be aligned with the goals/purposes of the human-robot system as<br />
well as with feasibility/admissibility criteria.<br />
13 State of the Art in Architectures for Cognitive Robot Control (UB)<br />
For a long time engineers have been using control systems which use models and feedback loops in order to<br />
control real-world systems. Limitations of model-based controllers led to the research and development of<br />
intelligent control techniques such as adaptive, fuzzy, neural and genetic control (Aström et al., 1995; Brown<br />
et al., 1994; Passino, 1998). Intelligent control tries to emulate biological intelligence to cope with uncertainty.<br />
It either seeks ideas from humans who perform control tasks or borrows ideas from how biological systems<br />
solve control problems, and applies them in order to solve control problems. However, artificial systems are far<br />
behind the biological systems concerning response generation (motor control). As the need to control<br />
complex systems increases, it is important to look beyond engineering <strong>and</strong> computer science as the challenges<br />
cannot be met by merely improving the software engineering <strong>and</strong> programming techniques. Rather the<br />
systems need built-in capabilities to deal with these challenges. Looking at natural intelligent systems, the<br />
most promising approach for handling these challenges is to equip the systems with more powerful cognitive<br />
mechanisms. For example, humans are able to process a variety of stimuli in parallel, to “filter” those that are<br />
the most important for a given task to be executed, to create an adequate response in time <strong>and</strong> to learn new<br />
motor actions with minimum assistance. This process is called cognitive control in psychology, <strong>and</strong> it is<br />
unique to humans <strong>and</strong> some higher-class animals (Botvinick et al., 2001). In recent years it has been a<br />
challenge for control engineers to find ways to realise such cognitive control functionality reflecting human’s<br />
robust sensori-motor mechanisms in robots (Kawamura, 2004). Although a lot of effort has been invested in<br />
the research of learning algorithms and the processing and perception of sensory data, how to respond<br />
efficiently (generate motor actions) to changes in a dynamic environment is still one of the unsolved<br />
problems. This is the<br />
problem that has been emphasised by the European Commission (European robots, 2009): “Continued<br />
research is necessary to improve control systems <strong>and</strong> versatile hardware, particularly for robots designed to<br />
move around different environments.”<br />
An unanswered question is “to what extent should cognitively controlled robotic systems copy the control<br />
structures of the human motor-sensory system?” It is obvious that in some respects the human body is much<br />
more advanced in comparison to robotic systems. However, at the same time technical systems have some<br />
advantages over the human motor-sensory system, such as the high resolution of sensors, the versatility of<br />
sensor types and small feedback delays (Kawato, 1999). As robots are not limited to human sensors or<br />
effectors (robots can, for example, have lasers, wheels and non-human grippers), robotics researchers<br />
generalise some of the structures found in cognitive architectures, as well as relaxing the adherence to human<br />
timing data for the performance of non-human sensors or effectors (Benjamin, 2004).<br />
13.1 Architectures for cognitive control of robotic systems<br />
Cognitive architectures represent attempts to create unified theories of cognition (Newell, 1990), i.e. theories<br />
that cover a broad range of cognitive issues, such as attention, memory, problem solving, decision making,<br />
learning from several aspects including psychology, neuroscience, <strong>and</strong> computer science. These architectures<br />
strive to explain, implement <strong>and</strong> measure a range of human cognitive activities. The specification of a<br />
cognitive architecture consists of its representational assumptions, the characteristics of its memories, <strong>and</strong> the<br />
processes that operate on those memories (Vernon, 2006). There are a number of cognitive architectures. The<br />
three pre-eminent architectures are EPIC (Kieras <strong>and</strong> Meyer, 1997), Soar (Lehmann et al., 2006) <strong>and</strong> ACT-R<br />
(Anderson et al. 2004). Each of these architectures has achieved a degree of success <strong>and</strong> is used in one or<br />
more applications. However, not all existing cognitive architectures can be employed in controlling complex<br />
robotic systems. For example, Benjamin et al. (2004) used Soar to develop the ADAPT cognitive architecture<br />
for robotics, claiming that existing cognitive architectures such as Soar, ACT-R and EPIC do not easily<br />
support certain mainstream robotics paradigms such as adaptive dynamics. Thus, the ADAPT cognitive<br />
architecture is representative of architectures whose development is motivated by robotics, where researchers<br />
want their robots to exhibit sophisticated behaviours, including the use of natural language, speech<br />
recognition, visual understanding, problem solving and learning, in complex analogue environments and in<br />
real-time. A growing number of robotics researchers have realised that programming robots one task at a time<br />
is not likely to lead to a robot with such general capabilities, so interest has turned to cognitive robotic<br />
architectures as a natural way to try to achieve this goal.<br />
For the purposes of this review the term cognitive architecture will be taken in the general <strong>and</strong> non-specific<br />
sense when considering an adequate cognitive architecture which includes elementary building blocks for<br />
technical cognition <strong>and</strong> intelligence of the robot. By this we mean the minimal configuration of a robotic<br />
system that is necessary for the system to exhibit cognitive capabilities <strong>and</strong> behaviours: the specification of<br />
the components in a system, their function, <strong>and</strong> their organisation as a whole. The focus is on cognitive<br />
architectures designed for controlling complex robotic systems that are supposed to function in dynamic<br />
real-world environments that include humans. This is because the two robotic systems to be used as<br />
demonstrators in the CORBYS project are a mobile gait rehabilitation system to be developed during the<br />
project lifetime and an existing mobile robot used for contaminated/hazardous environment investigation. The<br />
first is tightly coupled with a human, physically and psychologically, as the robot gives physical support to<br />
the human while the human should accept the robot as a part of their own body or physical abilities. Though<br />
the second demonstrator is not in direct physical contact with a human, it, like the first demonstrator, has<br />
to “work” synergistically with a human in real time. Therefore, the main focus in this review is on<br />
architectures for control of cognitive real-time robotic systems.<br />
While it is clearly impossible to cover all architectures that have been developed, the aim is to present an<br />
exploratory overview of the pre-eminent architectures that have been successfully utilised for control of<br />
robotic systems in recent years. As the main objective of the CORBYS project is to design, develop and<br />
validate an integrated generic cognitive robot control architecture, the review will cover several architectures<br />
that have been implemented in different cognitive robotic systems. The architectures will be analysed with<br />
respect to the technologies which have been identified as needed to meet the challenges of current and future<br />
robotics. The focus will be on the following technologies that will be built on in CORBYS:<br />
Sensing and Perception: Sensing is the transformation of physical entities such as contact, force and sound<br />
into internal digital representations. Perception is the extraction of key properties from these digital<br />
representations and the integration of sensory data over time.<br />
Human-Robot Interaction: Human-robot interaction is the ability of a robotic system to communicate<br />
mutually with humans. That communication can be multi-modal: voice, physical contact, gestures and<br />
different types of user interfaces.<br />
Real-time control: A control system that is able to operate in real time, where real-time operation is defined<br />
as: “The operating mode of a computer system in which the programs for the processing of data arriving from<br />
the outside are permanently ready, so that their results will be available within the predetermined periods of<br />
time; the arrival times of the data can be randomly distributed or be already a priori determined, depending<br />
on the different applications” [DIN 44300].<br />
Planning: Planning is the calculation and selection of actions, motions, paths and missions.<br />
Learning: Learning is the change of robot behaviour based on practice, experience or teaching.<br />
Communication: This property is concerned with hardware and software communication. Hardware<br />
communication is usually based on industrial communication interfaces (CAN, Profibus, LIN, RS232, USB<br />
and others), while software communication between the software modules of an architecture (often called the<br />
middleware) is based on different protocols, like RPC (remote procedure call) or CORBA (common object<br />
request broker architecture). There are two basic approaches to communication between software modules:<br />
publish-subscribe and client-server.<br />
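The publish-subscribe approach can be sketched with a minimal in-process event bus; this is a toy illustration of the pattern, not any specific middleware such as CORBA.<br />

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal in-process publish-subscribe bus: publishers and subscribers are
    decoupled through named topics, in contrast to the client-server pattern,
    where the caller addresses a specific server directly."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver to every handler registered for the topic; topics with no
        # subscribers silently drop the message.
        for handler in self._subscribers[topic]:
            handler(message)
```

For example, a sensing module could publish force readings on a "force" topic while a controller subscribes to it, with neither module referencing the other; this decoupling is the main reason robotic middleware commonly favours publish-subscribe over client-server for sensor data.<br />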
System (software) architecture: Architecture defines the structure of system components, their<br />
interrelationships, <strong>and</strong> the principles governing their design <strong>and</strong> evaluation over time.<br />
13.1.1 System architectures<br />
Although an “all win” architecture has not yet been developed, recent research has been mainly dedicated to<br />
the use of hybrid open architectures. These architectures enable sophisticated control in complex<br />
environments. Hybrid architectures replaced purely reactive or purely deliberative approach to the design of<br />
robot control architecture as many researchers have argued that neither a completely deliberative nor<br />
completely reactive approach is suitable for building robotic systems. The purely deliberative architectures<br />
are known as Sense-Plan-Act (SPA) architectures as they are characterised by sensing or gathering data,<br />
planning to take new actions based on this data, <strong>and</strong> acting out these plans. One of the most famous of the<br />
SPA robots was Shakey, developed at the Stanford Research Institute (Nilsson, 1984). However, in real-world<br />
applications Shakey’s planning system, like the planning systems of other SPA-based robots, was<br />
unable to perform the job in a timely fashion. The system would produce a plan, but before this plan could be<br />
executed in full, it often became invalidated by changes in the real-world. Because of the inability of SPA<br />
architectures to perform real-world operations, researchers searched for a robotic control method that did not<br />
rely on high-level reasoning, and so the school of reactive and behavioural robotics emerged. This approach<br />
tries to produce “intelligent behaviour” driven by sensory data: the robot reacts intelligently to what<br />
it senses (Brooks, 1990). Architectures from the reactive and behavioural robotics school were typified by their<br />
rejection of a symbolic representation of the outside world, along with a bottom-up approach to achieving<br />
complex behaviours (Ross, 2004). Thus, on the one h<strong>and</strong> SPA robotics failed to produce good experimental<br />
results because the level of planning <strong>and</strong> other cognitive tasks attempted was too complex to cope with in a<br />
real-world environment. On the other hand, reactive robotics suffered in a different way: only immediate<br />
reactive actions could be performed, but not complex tasks which needed to be ‘thought about’ ahead of their<br />
execution, as the cognitive functions were not supported. The natural progression in the design of a robot<br />
control architecture was therefore the development of architectures comprising both reactive and<br />
deliberative components. This overview therefore only concerns architectures which are hybrid in nature.<br />
The hybrid cognitive architectures are mainly characterised by a layering of capabilities where low-level<br />
layers provide reactive capabilities, supporting fast perception, control <strong>and</strong> task execution on a low-level <strong>and</strong><br />
high-level layers provide the more computationally intensive deliberative capabilities including symbolic<br />
reasoning as a necessity for recognition <strong>and</strong> interpretation of complex contexts, planning of intrinsic tasks, <strong>and</strong><br />
learning of behaviours. In such a layered architecture, a robot’s control subsystems are arranged into a<br />
hierarchy with higher layers dealing with information at increasing levels of abstraction. A key problem in<br />
such architectures is what kind of control framework to embed to manage the interactions between the various<br />
layers. Two layering schemes exist:<br />
Horizontal layering: each layer is directly connected to the sensory input and action output.<br />
In effect, each layer itself acts like an agent, producing suggestions as to what action to perform.<br />
Vertical layering: sensory input and action output are each dealt with by at most one layer.<br />
A typical three-layer architecture based on vertical decomposition of components that supports cognitive<br />
functions is illustrated in Figure 28.<br />
Figure 28: Illustration of a typical three-layer architecture<br />
A classic example of a three-layered architecture is ATLANTIS (Gat, 1997), where the three layers illustrated<br />
above have the following functionalities:<br />
The Controller: a collection of reactive “behaviours”, where each behaviour is fast and has minimal<br />
internal state<br />
The Sequencer: decides which primitive behaviour to run next; does not do anything that takes a long<br />
time to compute, because the next behaviour must be specified soon<br />
The Deliberator: slow but smart; can either produce plans for the sequencer, or respond to queries<br />
from it<br />
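The division of labour between the three ATLANTIS layers can be sketched as follows; the behaviour names, thresholds and plan contents are invented for illustration and are not part of Gat's system:<br />

```python
import collections

class Controller:
    """Reactive layer: fast behaviours with minimal internal state."""
    def step(self, sensor_value, behaviour):
        if behaviour == "avoid":
            return -sensor_value      # reflex: steer away from the obstacle
        return 0.0                    # "cruise": hold course

class Sequencer:
    """Middle layer: picks the next primitive behaviour; never computes long."""
    def __init__(self, plan):
        self.plan = collections.deque(plan)
    def next_behaviour(self, sensor_value):
        if sensor_value > 0.5:        # obstacle close: override the plan
            return "avoid"
        return self.plan[0] if self.plan else "cruise"

class Deliberator:
    """Top layer: slow but smart; produces plans for the sequencer."""
    def make_plan(self, goal):
        return ["cruise"] * goal      # trivially, 'goal' steps of cruising

deliberator = Deliberator()
sequencer = Sequencer(deliberator.make_plan(goal=3))
controller = Controller()

# The second sensor reading triggers the reactive override.
commands = [controller.step(s, sequencer.next_behaviour(s))
            for s in (0.1, 0.9, 0.2)]
```

Note how the deliberator never touches the actuators directly: its output only reaches them through the sequencer's choice of behaviour and the controller's fast step.<br />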
The majority of architectures following Gat’s adopt the basic idea of a layered architecture but adapt<br />
it to meet the challenges of complex robotic systems. It should also be noted that not all architectures are<br />
equally hybrid, with many hybrid architectures choosing to omit an explicit planning layer in favour of using<br />
plan libraries or a behaviour sequencer. These will be illustrated in the next section, where several pre-eminent<br />
examples are analysed with the aim of identifying the core elements essential to an intelligent<br />
robot control architecture.<br />
13.2 Cognitive architectures used for controlling different robotic systems<br />
13.2.1 Armar – a cognitive architecture for a humanoid robot<br />
In order to develop an architecture that supports fast perception, control and task execution at a low level as well<br />
as recognition and interpretation of complex contexts, planning of task execution, and learning of behaviours<br />
at a high level, Burghart et al. (2005) chose a three-layered architecture, Armar, adapted to the requirements of a<br />
humanoid robot. It is a mixture of a hierarchical three-layered form on the one hand and a composition of<br />
behaviour-specific modules on the other hand, as described in (Vernon et al., 2006).<br />
System Architecture:<br />
The layout of the Armar architecture is given in Figure 29. It is based on parallel behaviour-based components<br />
interacting with each other, comprising a three-level hierarchical perception sub-system, a three-level<br />
hierarchical task h<strong>and</strong>ling system, a long-term memory sub-system based on a global knowledge database<br />
(utilising a variety of representational schemas, including object ontologies <strong>and</strong> geometric models, Hidden<br />
Markov Models, <strong>and</strong> kinematic models), a dialogue manager which mediates between perception <strong>and</strong> task<br />
planning, an execution supervisor, <strong>and</strong> an ‘active models’ short-term memory sub-system to which all levels<br />
of perception <strong>and</strong> task management have access. These active models play a central role in the cognitive<br />
architecture: they are initialised by the global knowledge database <strong>and</strong> updated by the perceptual sub-system<br />
and can be autonomously actualised and reorganised. As such, Armar is representative of the classical<br />
three-layer architectures, which share a common database across all layers.<br />
Figure 29: Armar architecture (Burghart et al., 2005)<br />
Robotic platform:<br />
The robot prototype described in Burghart et al. (2005) is a humanoid robot with 23 degrees of freedom<br />
consisting of five subsystems: head, left arm, right arm, torso and a mobile platform. The upper body of the<br />
robot is modular and light-weight while retaining a size and proportion similar to those of an average person.<br />
The control system of the robot is divided into separate modules. Each arm, as well as the torso, head and mobile<br />
platform, has its own software and hardware controller module. The head has two DOFs arranged as pan and<br />
tilt and is equipped with a stereo camera system and a stereo microphone system. Each of the arms has 7<br />
DOFs and is equipped with a 6-DOF force-torque sensor at the wrist. The arms are equipped with<br />
anthropomorphic five-fingered h<strong>and</strong>s driven by fluidic actuators. The mobile platform of the robot consists of<br />
a differential wheel pair <strong>and</strong> two passive supporting wheels. It is equipped with front <strong>and</strong> rear laser scanners<br />
<strong>and</strong> it hosts the power supply <strong>and</strong> the main part of the computer network.<br />
Figure 30: The humanoid robot Armar (Burghart et al., 2005)<br />
Sensing <strong>and</strong> Perception:<br />
The perception sub-system consists of low-, mid-, <strong>and</strong> high-level perception modules. The low-level<br />
perception module provides fast interpretation of sensor data without accessing the system knowledge<br />
database. It typically provides reflex-like low-level robot control. Within this module, data coming from<br />
sensors such as joint position sensors, the force torque sensors located in the robot’s wrists, tactile sensor<br />
arrays used as artificial sensitive skin, <strong>and</strong> acoustic data for sound <strong>and</strong> speech activity detection are processed.<br />
The low-level perception module communicates with both the mid-level perception module <strong>and</strong> the task<br />
execution module via the active models. The mid-level perception module provides a variety of recognition<br />
components <strong>and</strong> communicates with both the system knowledge database (long-term memory) as well as the<br />
active models (short term memory). The high-level perception module provides more sophisticated<br />
interpretation facilities such as situation recognition, gesture interpretation, movement interpretation, <strong>and</strong><br />
intention prediction.<br />
Planning:<br />
The task h<strong>and</strong>ling sub-system comprises a three-level hierarchy with task planning, task coordination, <strong>and</strong><br />
task execution levels. Robot tasks are planned on the top symbolic level using task knowledge. A symbolic<br />
plan consists of a set of actions, represented either by XML-files or Petri nets, <strong>and</strong> acquired either by learning<br />
(e.g. through demonstration) or by programming. The task planner interacts with the high-level perception<br />
module, the (long-term memory) system knowledge database, the task coordination level, <strong>and</strong> an execution<br />
supervisor. This execution supervisor is responsible for the final scheduling of the tasks <strong>and</strong> resource<br />
management in the robot using Petri nets.<br />
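The way a Petri net serialises access to a shared resource, as used by the execution supervisor for scheduling and resource management, can be sketched as follows; the net structure, place names and task names are invented for illustration:<br />

```python
class PetriNet:
    """Minimal place-transition net (illustrative only)."""
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1          # consume tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# A shared resource (e.g. the arm) serialises two tasks: a task can only
# start while the resource token is free, which enforces mutual exclusion.
net = PetriNet({"arm_free": 1, "task_a_ready": 1, "task_b_ready": 1})
net.add_transition("start_a", ["arm_free", "task_a_ready"], ["a_running"])
net.add_transition("start_b", ["arm_free", "task_b_ready"], ["b_running"])
net.add_transition("finish_a", ["a_running"], ["arm_free", "a_done"])

started_a = net.fire("start_a")   # takes the resource token
blocked_b = net.fire("start_b")   # cannot fire: the arm is busy
net.fire("finish_a")              # releases the token
started_b = net.fire("start_b")   # now succeeds
```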
Control (Execution):<br />
The generated sequence of actions is passed down to the task coordination level, which then coordinates<br />
(deadlock-free) tasks to be run at the lowest task execution (control) level. In general, during the execution of<br />
any given task, the task coordination level works independently of the task planning level.<br />
Communication:<br />
A dialogue manager, which coordinates communication with users <strong>and</strong> interpretation of communication<br />
events, provides a bridge between the perception sub-system <strong>and</strong> the task sub-system. Its operation is<br />
effectively cognitive in the sense that it provides the functionality to recognise the intentions <strong>and</strong> behaviours<br />
of users.<br />
Learning:<br />
A learning sub-system is also incorporated, with the early generations of the Armar robot learning tasks and action<br />
sequences off-line by programming by demonstration or tele-operation. On-line learning has been work in<br />
progress in newer generations of Armar (Dietsch, 2011). For instance, Armar-III actively investigates its<br />
environment. In fact, entities therein only become semantically useful objects through the actions the robot<br />
performs on them.<br />
Real-time control:<br />
Fast reactive components (low-level) and building blocks for recognition or task coordination (mid-level) act<br />
in real-time. Active models used for real-time perceptions and tasks are retrieved from the global knowledge<br />
database and stored in a cache memory. In addition, they have a certain degree of autonomy for adaptation.<br />
The execution supervisor is responsible for the priority management of tasks <strong>and</strong> perceptual components.<br />
13.2.2 ISAC: IMA (Intelligent Machine Architecture)<br />
The IMA (Intelligent Machine Architecture) was specifically developed to ease the integration of<br />
heterogeneous software components. The general premise is that all software components can be modelled as<br />
basic agents which are loosely coupled over a DCOM (Distributed Component Object Model) connection.<br />
Here the application of the multi-agent hybrid cognitive architecture IMA in the humanoid robot ISAC,<br />
where all aspects of the robot’s control system are modelled around agents, is presented. It is implemented with<br />
symbolic components (software agents) which embed different connectionist algorithms. The layout of the IMA<br />
cognitive architecture is depicted in Figure 31.<br />
Figure 31: IMA multi-agent-based cognitive robot architecture (Kawamura et al., 2004)<br />
Robotic platform (description, actuation <strong>and</strong> sensing):<br />
ISAC is an upper-body humanoid robot with six-DOF arms, pneumatically actuated by<br />
agonist/antagonist McKibben muscles. Each hand has four fingers and a thumb. ISAC is equipped with an active<br />
stereovision system, laser motion detectors, stereo microphones to localise the position of a sound source,<br />
touch sensors on the fingers, proximity sensors on the palms and two force sensors between the hands and arms.<br />
System (software) architecture:<br />
The IMA architecture is a collection of software modules (agents). Each IMA agent encapsulates all aspects of<br />
a single element (logical or physical) of a hardware component, computational task or data item. Software agents<br />
communicate through asynchronous message passing. The agents are implemented as atomic or compound<br />
agents. Atomic agents encapsulate a single resource and do not depend on other agents. Compound<br />
agents contain or depend on other agents for their primary function.<br />
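The distinction between atomic and compound agents, together with asynchronous message passing, can be sketched as follows; the class and agent names are illustrative, not IMA's actual interfaces:<br />

```python
class AtomicAgent:
    """Encapsulates a single resource; depends on no other agent."""
    def __init__(self, name):
        self.name = name
        self.inbox = []                    # asynchronous message queue

    def send(self, message):
        self.inbox.append(message)         # the sender never blocks on a reply

    def process(self):
        handled, self.inbox = list(self.inbox), []
        return [f"{self.name} handled {m}" for m in handled]

class CompoundAgent:
    """Depends on contained agents for its primary function."""
    def __init__(self, name, parts):
        self.name = name
        self.parts = parts

    def send(self, message):
        for part in self.parts:            # fan the message out to the parts
            part.send(message)

    def process(self):
        return [r for part in self.parts for r in part.process()]

camera = AtomicAgent("camera")
gripper = AtomicAgent("gripper")
grasp = CompoundAgent("grasp", [camera, gripper])

grasp.send("locate cup")
results = grasp.process()
```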
Sensing <strong>and</strong> Perception<br />
In the IMA architecture each stimulus is processed by a different perception agent (different processing<br />
techniques are used for different sensors). Each sensor is assigned an IMA agent that processes its<br />
information and stores the result in short-term memory. For example, there are separate visual<br />
agents that perform object recognition, object localisation, face recognition, etc. The modular<br />
approach makes it possible to add additional sensors.<br />
Human-robot Interaction:<br />
Two software agents are responsible for HRI (Kawamura et al., 2000): the human agent, which<br />
encapsulates information the robot has determined about the human, and the self agent, which addresses the<br />
humanoid’s cognitive aspects.<br />
Real-time control:<br />
The initial usage of the IMA architecture in ISAC caused problems with real-time control (Kawamura et al.,<br />
2004). Asynchronous message passing caused latency in passing control inputs to real-time controllers.<br />
Therefore, separate encapsulation of multi-agent tasks (such as visual servoing) has been carried out. Head,<br />
arm and hand agents are responsible for controlling the head, arm and hand actuators. They accept commands<br />
from one or more clients and carry out command arbitration. The agents also provide the clients with<br />
information about their current state (for example, joint positions). The actuation agents pass referent joint angles<br />
<strong>and</strong> velocities to servo controllers. The servo control loops have been realised using a QNX real-time<br />
operating system.<br />
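Command arbitration among competing clients can be sketched as follows; the priority scheme and tie-breaking rule are assumptions made for illustration, since the source does not specify ISAC's actual arbitration policy:<br />

```python
def arbitrate(commands):
    """Select one referent command among competing clients.

    commands: list of (priority, joint_angles) tuples in arrival order;
    the highest priority wins, and ties go to the most recent arrival.
    Purely illustrative, not ISAC's documented scheme.
    """
    if not commands:
        return None
    # Rank by (priority, arrival index): later arrivals break priority ties.
    best = max(enumerate(commands), key=lambda ic: (ic[1][0], ic[0]))
    return best[1][1]

# A reflex command (priority 10) overrides two planner requests (priority 1).
selected = arbitrate([(1, [0.3, 0.1]), (10, [0.0, 0.0]), (1, [0.4, 0.2])])
```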
Planning:<br />
The self-agent is responsible for planning actions of the robot. The most important modules of the Self-agent<br />
are Central-Executive-Agent (CEA) <strong>and</strong> First-Order-Response (FOR). CEA module makes decisions <strong>and</strong><br />
invokes skills necessary for performing the given task that has been selected by Intention-Agent based on<br />
perceived sensory information. The FOR module is responsible for creating the reactive responses of the<br />
robot.<br />
Learning:<br />
The system is equipped with different types of memories in order to support learning. The memory structure is<br />
modular <strong>and</strong> divided into three components: short-term memory, long-term memory <strong>and</strong> working memory.<br />
Short-term memory holds the most recent sensory information on the current environment in which the ISAC robot<br />
operates. Long-term memory holds learned behaviours, semantic knowledge and past experience, and it has<br />
three structures: semantic, procedural and episodic. Working memory holds task-specific information that the<br />
authors call “chunks”. The working memory uses temporal-difference learning algorithms and a neural network<br />
to provide learning in the IMA architecture.<br />
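The temporal-difference mechanism underlying such working-memory learning can be sketched as a tabular TD(0) value update; the state names and reward below are hypothetical, chosen only to show the update rule:<br />

```python
def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) update of a state-value table: V(s) += a*(r + g*V(s') - V(s))."""
    td_error = reward + gamma * values.get(next_state, 0.0) - values.get(state, 0.0)
    values[state] = values.get(state, 0.0) + alpha * td_error
    return values

# Hypothetical scenario: learn that holding the "target object" chunk in
# working memory just before a successful grasp is valuable.
values = {}
for _ in range(100):
    td_update(values, "hold_target_chunk", "grasp_succeeded", reward=1.0)

# The learned value converges towards the reward (here, towards 1.0).
prefers_chunk = values["hold_target_chunk"] > 0.5
```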
Communication:<br />
The communication between agents is based on asynchronous message passing. Any agent can be accessed<br />
by any other agent. As the number of agents increased, the system faced communication “lock-ups”.<br />
13.2.3 iCub<br />
The cognitive architecture of the iCub robot is depicted in Figure 33. The architecture “…comprises a network of<br />
competing <strong>and</strong> cooperating distributed multi-functional perceptuo-motor circuits, a modulation circuit which<br />
effects homeostatic action selection by disinhibition of the perceptuo-motor circuits, <strong>and</strong> a system to effect<br />
anticipation through perception-action simulation.” (S<strong>and</strong>ini et al., 2007).<br />
Robotic platform (description, actuation <strong>and</strong> sensing):<br />
The iCub is a child-sized humanoid robot with 53 degrees of freedom, designed to crawl and sit. Its hands<br />
and arms are designed for dexterous manipulation, and it also has visual, vestibular, auditory and haptic sensing<br />
capabilities. The sensory system consists of vision, touch, audio and inertial sensors.<br />
Figure 32: The layers of iCub architecture (S<strong>and</strong>ini et al. 2007)<br />
Figure 33: iCub cognitive architecture (S<strong>and</strong>ini et al. 2007)<br />
System (software) architecture:<br />
The iCub architecture is based on YARP (Yet Another Robot Platform) (Metta, 2006), an open-source<br />
framework that supports distributed computing. Figure 32 shows the layers of the iCub architecture. The<br />
lowest level, API-0, of the system architecture is used for accessing hardware components by formatting and<br />
unformatting IP packets into appropriate classes and data structures. IP packets are sent to the robot over a Gbit<br />
Ethernet connection. High-level cognitive processes are implemented as YARP processes.<br />
Real time control:<br />
In order to achieve real-time control of the robotic system, control of the joint actuators is carried out by digital<br />
signal processing (DSP) units that are mounted on the robotic platform. The iCub software runs in parallel on a<br />
distributed system of computers that are connected via Gbit Ethernet with the hub unit (PC/104), which is also<br />
located on the robotic platform.<br />
Planning:<br />
In the brain, the basal ganglia are responsible for action selection; accordingly, the modulation circuit<br />
contains an action-selection circuit that corresponds to the basal ganglia.<br />
Communication:<br />
A multiple-CAN-bus structure is used to communicate with the control/driver and AD cards on the robotic system,<br />
while Gbit Ethernet is used to communicate data between the robotic system and the distributed system of<br />
computers. The YARP architecture supports building a robot control system as a collection of programs<br />
communicating in a peer-to-peer way, with a broad range of connection types (tcp, udp, multicast, XML/RPC,<br />
tcpros, etc.).<br />
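The idea of named, peer-to-peer connectable ports can be mimicked with a toy in-process stand-in; the class below only models the naming and one-to-many fan-out, and deliberately does not reproduce YARP's actual API or network carriers:<br />

```python
class Port:
    """In-process stand-in for a named, peer-to-peer robot data port.

    Real YARP ports are network endpoints addressed by name
    (e.g. "/icub/cam/left") over carriers such as tcp or udp;
    this toy version keeps only the naming and fan-out idea.
    """
    registry = {}

    def __init__(self, name):
        self.name = name
        self.peers = []
        self.received = []
        Port.registry[name] = self

    def connect(self, peer_name):
        # Any port can be wired to any other port by name.
        self.peers.append(Port.registry[peer_name])

    def write(self, data):
        for peer in self.peers:
            peer.received.append(data)

camera = Port("/robot/camera:o")
tracker = Port("/tracker/image:i")
logger = Port("/logger/image:i")
camera.connect("/tracker/image:i")    # peer-to-peer, no central broker
camera.connect("/logger/image:i")
camera.write("frame-0001")
```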
13.2.4 Care-O-bot<br />
System architecture:<br />
The Care-O-bot System is controlled by a heterogeneous hybrid layered software architecture, which<br />
combines reactive <strong>and</strong> deliberative control. It consists of several basic modules, as displayed in the following<br />
figure.<br />
Figure 34: Architecture of Care-O-bot (Hans et al., 2001)<br />
The communication between the user <strong>and</strong> the robot is realised via the man-machine interface <strong>and</strong> drives the<br />
task execution. Corresponding orders are sent to the symbolic planner, which serves as a task planning<br />
component. It converts orders from the user interface into a list of possible actions, which leads to a global<br />
goal. Once the list is generated, the execution module chooses a suitable action from the list <strong>and</strong> sends it to<br />
the robot control module in order to execute it. Information is shared via a world-model in the database<br />
module.<br />
Robotic platform:<br />
The Care-O-bot 3 system is a highly integrated <strong>and</strong> compact service robot. Its main components are a mobile<br />
base, a torso, a manipulator, a tray <strong>and</strong> a sensor carrier. The system can be seen in the following figure:<br />
Figure 35: Care-O-bot<br />
It has altogether 28 DOF, which includes a 7 DOF light-weight arm, equipped with a 7 DOF gripper, <strong>and</strong> a 5<br />
DOF sensor head with a laser scanner, stereo-vision cameras <strong>and</strong> a 3D-TOF-camera. The mobile platform is<br />
capable of omnidirectional movement <strong>and</strong> the torso is flexible such that simple gestures can be performed.<br />
For user-interaction, there are acoustic devices <strong>and</strong> a touch screen is integrated into the tray.<br />
Planning:<br />
The task planning is h<strong>and</strong>led by the symbolic planner at the high-level, which generates a list of actions to<br />
reach the goals specified by the user, by receiving input from the man-machine-interface <strong>and</strong> the database.<br />
The planner for the Care-O-bot system is based on the Action Description Language (ADL). It uses the Fast-<br />
Forward-planner (FF-planner), working with the Planning Domain Description Language (PDDL). Thus it<br />
uses a world model and a list of the abilities of the robot, both saved in the system’s database, in order to<br />
create a list of possible actions that can be accomplished next.<br />
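The forward-search idea behind such planners can be sketched with STRIPS-style actions; the fetch-and-carry domain below is invented for illustration, and the real FF planner additionally guides the same forward search with a relaxed-plan heuristic:<br />

```python
def forward_plan(initial, goal, actions):
    """Breadth-first forward state-space search over STRIPS-style actions.

    actions: dict name -> (preconditions, add_list, delete_list),
    each a set of ground facts. Returns the first plan found.
    """
    frontier = [(frozenset(initial), [])]
    visited = {frozenset(initial)}
    while frontier:
        state, plan = frontier.pop(0)
        if goal <= state:                         # all goal facts hold
            return plan
        for name, (pre, add, delete) in actions.items():
            if pre <= state:                      # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# Invented service-robot domain: fetch a cup and hand it to the user.
actions = {
    "pick_cup": ({"at_table", "hand_empty"}, {"holding_cup"}, {"hand_empty"}),
    "move_to_user": ({"at_table"}, {"at_user"}, {"at_table"}),
    "give_cup": ({"at_user", "holding_cup"}, {"user_has_cup"}, {"holding_cup"}),
}
plan = forward_plan({"at_table", "hand_empty"}, {"user_has_cup"}, actions)
```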
Real-time Control (Execution)<br />
In order to realise the possible actions from the list of the symbolic planner, the execution module selects a<br />
suitable action and executes it. For that purpose, the low-level control of the robot is designed using a real-time<br />
framework. Depending on the context, open- and closed-loop controllers are realised.<br />
Reasoning:<br />
The actions chosen from the list of the symbolic planner are executed by the execution module. This is<br />
realised as a BDI (Belief-Desire-Intention)-theoretic agent architecture. The agent is able to determine<br />
intentions dynamically at runtime, based on known facts, current goals and available plans. Here both top-down<br />
goal-based reasoning and bottom-up data-driven reasoning are possible.<br />
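A single deliberation cycle of such a BDI agent can be sketched as follows; the plan library, beliefs and goal names are invented for illustration and do not reflect Care-O-bot's actual implementation:<br />

```python
def bdi_step(beliefs, goals, plan_library):
    """One cycle of a simplified Belief-Desire-Intention agent.

    plan_library: goal -> (context_condition, plan). The agent commits
    to the first applicable plan (its intention) given current beliefs.
    """
    for goal in goals:
        if goal in beliefs:                 # desire already satisfied
            continue
        context, plan = plan_library.get(goal, (set(), None))
        if plan is not None and context <= beliefs:
            return goal, plan               # adopted intention and its plan
    return None, None

plan_library = {
    "drink_served": ({"cup_located", "gripper_free"},
                     ["grasp_cup", "move_to_user", "hand_over"]),
}
beliefs = {"cup_located", "gripper_free"}   # known facts at runtime
intention, plan = bdi_step(beliefs, ["drink_served"], plan_library)
```

Updating `beliefs` from new sensor data between cycles gives the bottom-up, data-driven half of the reasoning; the goal list gives the top-down half.<br />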
Sensing <strong>and</strong> perception<br />
Considering the environment perception, sensory data is gathered continuously. This data is further processed<br />
<strong>and</strong> interpreted in order to acquire information about the environment. The results are then given to the<br />
system <strong>and</strong> possibly displayed to the user through the man-machine-interface.<br />
13.3 <strong>CORBYS</strong> enabling potential <strong>and</strong> constraints (current gaps/shortcomings)<br />
The following aspects of cognitive robot controlled systems have been identified as points of interest where<br />
the <strong>CORBYS</strong> project will contribute to research <strong>and</strong> development of cognitive controlled robotic systems:<br />
a) Cognitive control adaptation<br />
b) Take-over/hand-over of goal-setting initiative between robot and external agent<br />
Cognitive control adaptation<br />
In the reviewed cognitive architectures, the control of actuators is carried out by general control techniques<br />
usually used in robotics (position servo controllers and different forms of force controllers), where set-points<br />
for real-time (RT) controllers are provided by cognitive modules which initiate the action of the RT<br />
controllers. However, one important missing aspect is the adaptation of real-time controllers by cognitive<br />
modules. This will be highly influenced by the time latency of cognitive loops and the communication paths between<br />
RT controllers and cognitive modules. In <strong>CORBYS</strong>, both long-term and short-term adaptation of RT<br />
controllers will be investigated.<br />
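Such adaptation can be pictured as a slow cognitive loop adjusting the parameters of a fast RT loop; the adaptation rule, thresholds and numbers below are invented for illustration only:<br />

```python
class PIDController:
    """Low-level real-time controller; its gains are mutable parameters."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def cognitive_adaptation(controller, tracking_errors, threshold=0.2, factor=1.5):
    """Slow outer loop: if the RT controller tracked poorly over a window,
    raise its proportional gain. A deliberately naive, illustrative rule."""
    mean_abs_error = sum(abs(e) for e in tracking_errors) / len(tracking_errors)
    if mean_abs_error > threshold:
        controller.kp *= factor
    return controller.kp

rt = PIDController(kp=1.0, ki=0.1)
# The cognitive layer inspects a window of tracking errors and adapts kp.
kp_after = cognitive_adaptation(rt, tracking_errors=[0.3, 0.4, 0.25])
```

The latency issue noted above appears here as the window length: the cognitive loop only sees the RT loop's performance after the fact, so its corrections necessarily lag the fast dynamics.<br />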
14 State-of-the-Art in Smart Integrated Actuators (SCHUNK)<br />
14.1 Introduction to Smart Integrated Actuators<br />
Smart actuators are considered to be highly integrated mechatronic units incorporating a motor and the<br />
complete motion control electronics in one single unit. To minimise interfacing, a smart actuator<br />
provides a bus communication interface and runs all necessary motion control tasks internally.<br />
Smart actuators were introduced in the early 1990s as microcontrollers reached the necessary computing<br />
performance for motor control. Progressing microcontroller technology, integrating a microprocessor,<br />
capture/compare and timer units enabling PWM control, as well as analogue, digital and communication ports,<br />
started the ongoing process of miniaturisation of sensors and motor control electronics. At the same time,<br />
power electronics developed rapidly by increasing the drain current capacity of field-effect transistors (especially<br />
MOSFET technology) at lower internal resistance, as well as by introducing new semiconductor<br />
technologies such as the IGBT (insulated gate bipolar transistor).<br />
Depending on the application and component sizing, a smart actuator can also incorporate a speed-reducing<br />
gear head, thus enabling the device to provide a higher torque output.<br />
Smart actuators can be distinguished by these criteria: motor technique <strong>and</strong> control system.<br />
14.2 Basic Actuator Technologies<br />
State-of-the-art smart actuators have been presented with fluidic drives as well as with electromagnetic<br />
motors.<br />
An example of a smart fluidic actuator is the Rotary Elastic Chambers – Actuator developed at IAT in<br />
Bremen. The modules comprise very few fully integrated components with matched mechanical <strong>and</strong><br />
electrical interfaces: fluidic vane motor, sensors, control elements, as well as an electronic unit <strong>and</strong> control<br />
algorithms.<br />
Figure 36: Rotary Elastic Chambers – Actuator with integrated control unit (IAT Bremen)<br />
Figure 37: Flexible Fluidic actuator, showing the working principle of the vane motor (AIA KIT, Karlsruhe)<br />
Another fluidic actuator principle has been presented by Festo AG. The “fluidic muscle” provides a linear<br />
type of actuation, but does not include a control unit within the actuator.<br />
Other smart actuator solutions are mostly based on electromagnetic motors. The motor techniques used here<br />
are stepper motors, brushed DC motors <strong>and</strong> permanent magnet synchronous motors.<br />
Dimensioning an appropriate electromagnetic motor for a given application influences the choice of the motor<br />
technique. The most important criteria are housing size, weight, scalability, price and delivery standards, but also the<br />
costs resulting from motor control electronics and software. Depending on the motor choice, the expenditure<br />
on drive control can be very different. Therefore, some advantages and disadvantages of the named motor<br />
techniques are explained. It must be considered that smart actuators in most cases are also used for<br />
measurement of actual data such as position, speed, temperature and motor current.<br />
14.2.1 DC motor<br />
Until a few years ago, mechanically commutated DC motors were used for highly dynamic servo drives.<br />
The negative features of this motor technique stem from the principle design, in which the rotating coil is<br />
centred inside the motor housing and heat is unable to dissipate appropriately. In addition, the mechanical<br />
commutation limits the stall current, and also the current at higher rotation speeds, because of brush sparking.<br />
Low-cost DC motors have a limited lifetime because of brush wear. Brush sparking can cause high-frequency noise,<br />
causing trouble in the power supply of other connected components.<br />
14.2.2 Stepper motor<br />
Stepper motors are an inexpensive alternative for small power drives (
motor for the application planned.<br />
14.2.3 Permanent magnet synchronous motor<br />
In applications with small and medium power drives, permanent magnet synchronous motors (PMSM) have<br />
replaced standard DC motors today. This is due to the advantages PMSM provide, e.g. service-free operation and high<br />
torque and overload capacity owing to the absence of mechanical commutation. In addition, PMSM are better<br />
able to dissipate heat, since the stator windings are oriented towards the housing. This results in a high ratio of<br />
torque to housing size. On the other hand, motor control for PMSM is highly complex and expensive.<br />
14.3 Control techniques, Interfacing, st<strong>and</strong>ardised drive modules<br />
The control techniques for electromagnetic motors have been widely discussed in the literature. Vector modulation is the state of the art for PMSMs, making precise motor current control possible. The available feedback systems are based on incremental encoders or resolvers. Absolute encoders have been widely introduced, but are not yet available as standard components for small-sized drives.<br />
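Vector modulation (field-oriented control) rests on transforming the three measured phase currents into a rotor-fixed d/q frame via the Clarke and Park transforms, so that the torque-producing current component can be regulated directly. A minimal sketch of these two transforms follows; it is illustrative only and not tied to any specific drive or vendor API:<br />

```python
import math

def clarke(ia, ib, ic):
    """Clarke transform: three phase currents -> stationary alpha/beta frame
    (amplitude-invariant form)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (ib - ic) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: rotate alpha/beta into the rotor-fixed d/q frame,
    where theta is the electrical rotor angle from the encoder/resolver."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# A balanced phase-current set 90 degrees ahead of the rotor: after the
# transforms, all current appears on the q (torque-producing) axis.
theta = 0.5
ia = math.cos(theta + math.pi / 2)
ib = math.cos(theta + math.pi / 2 - 2 * math.pi / 3)
ic = math.cos(theta + math.pi / 2 + 2 * math.pi / 3)
i_alpha, i_beta = clarke(ia, ib, ic)
i_d, i_q = park(i_alpha, i_beta, theta)   # i_d ~ 0, i_q ~ 1
```

In a real drive, i_d and i_q would each feed a PI current controller whose outputs are transformed back and applied via PWM; the sketch stops at the transforms themselves.<br />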
Most smart actuator solutions are based on simple two-wire bus communication such as the CAN bus, Profibus or other fieldbus systems. Recently, Ethernet-based smart actuators have been presented. In order to provide real-time interfacing, new transport-layer protocols have been introduced according to the OSI/ISO layer model, e.g. EtherCAT, EtherNet/IP, PROFINET or Sercos III.<br />
Figure 38: Dynamixel smart actuators including reduction gear, controller, motor <strong>and</strong> driver<br />
The power supply in most cases is DC (typically 24 VDC), because most smart actuators have been optimised for small dimensions that do not allow for voltage conversion. Some smart actuator products follow a modular approach by standardising flange sizes, torque capacity, electrical interfacing and controls. A modular system is scalable and makes it easier to extend the application for further use or to reuse the actuators and sensors in new applications.<br />
Figure 39: SCHUNK modular smart actuator system with rotary <strong>and</strong> linear drives<br />
as well as gripper modules<br />
14.4 <strong>CORBYS</strong> enabling potential <strong>and</strong> constraints (current gaps/shortcomings)<br />
With regard to the <strong>CORBYS</strong> project, several criteria concerning the dimensioning of the motors and the control subsystem need to be discussed.<br />
Miniaturisation is a major issue, because many drives must be incorporated into a relatively small demonstrator (gait rehabilitation). In order to move human joints in an exoskeleton, high-torque motors with minimised dimensions are needed; the resulting requirement is an actuator providing high torque at low weight, i.e. the highest possible power density. Another aspect for dimensioning the drive is heat: the drives will be placed near the human skin and must therefore be limited in heat emission. Both demonstrators are mobile systems powered by batteries. This leads to several requirements with regard to power stability and tolerance of a wide voltage range (typically 19–30 VDC). Safety and reliability are clear issues, because users expect the highest robustness during their training sessions. The drives must provide a high short-time overload capacity, because this allows smaller motors with lower power consumption and smaller dimensions to be used. Minimising friction in the motor and gearheads is another important issue, helping to significantly improve the sensitivity of the drive.<br />
The cycle-time aspect concerns the control system coordinating the actuator motion independently of the training programme and the sensor feedback system. Ideally, an open control architecture is foreseen, allowing the control system to access the drive data and parameters at all times and on different levels. Finally, minimal noise is desired in order to gain better acceptance by the human user. Price issues need to be discussed from a dissemination point of view.<br />
14.5 Technology innovation requirements – gaps/filter elements<br />
Analysing the state of the art in actuator technology revealed several gaps. The first is the integration of actuators meeting the requirements described above into a dimensionally limited, battery-powered mobile device. In order to keep battery lifetime high, the complete system must be optimised for low power consumption and high drive efficiency. The high power density of the system may lead to sensitivity in data transfer and bus communication, or to malfunction due to power loss. The actuators need to provide variable stiffness, depending on the current control and training scheme. At the same time, safety is important to reduce the risk of injuries. For easy service, a modular approach with simplified interfacing via electrical connectors is preferred.<br />
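At control level, the variable stiffness required above can be expressed as a joint impedance law whose stiffness gain is set by the training scheme: the commanded torque pulls the joint toward a desired angle with adjustable stiffness and damping, clamped to a safety limit. The following is an illustrative sketch only; the gains, limits and function name are made-up values, not CORBYS parameters:<br />

```python
def impedance_torque(q, q_dot, q_des, stiffness, damping, tau_max):
    """Simple joint impedance law: commanded torque pulls the joint toward
    q_des with an adjustable stiffness gain, plus damping and a torque limit."""
    tau = stiffness * (q_des - q) - damping * q_dot
    # Clamp to the actuator's short-time overload limit (safety).
    return max(-tau_max, min(tau_max, tau))

# High stiffness: firm guidance of the limb; low stiffness: the patient
# dominates the motion and the exoskeleton only assists.
tau_firm = impedance_torque(q=0.2, q_dot=0.0, q_des=0.5,
                            stiffness=100.0, damping=5.0, tau_max=20.0)
tau_soft = impedance_torque(q=0.2, q_dot=0.0, q_des=0.5,
                            stiffness=10.0, damping=5.0, tau_max=20.0)
```

The torque clamp is the design choice that links variable stiffness to safety: however stiff the virtual spring is made, the commanded torque can never exceed the configured overload limit.<br />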
15 State of the Art in Non-Invasive Brain Computer Interfaces (BBT)<br />
The objective of the <strong>CORBYS</strong> project is to develop the underlying design principles of a cognitive robotic system, with the focus on augmenting human locomotion capabilities. One of the key points of these principles is the way the human and the robot interact and share their cognitive capabilities. In this area, the large majority of research has concentrated on a unidirectional exchange, where the human delivers orders to the machine responsible for executing the commands. The focus has been on exploring and improving the way robots understand natural human expressions (e.g. natural language, gestures, etc.). However, very little research has concentrated on a mutual exchange of cognitive information, because this information mainly resides in the human brain and progress was still required in brain computer interfacing. In recent years there has been much research into decoding this information from brain imaging techniques, leading to one of the objectives of <strong>CORBYS</strong>: to use brain computer interface technology to decode human cognitive information online. This information will be used to build a natural way of communication between the human and the robot, and to guide the design and operation principles of the cognitive robot.<br />
In this context, brain-computer interfaces (BCI) emerge as systems that translate the electrical activity of the brain in real time into commands to control devices. They do not rely on muscular activity and can therefore provide communication and control for people with devastating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke, cerebral palsy and spinal cord injury. It has been shown that these patients are able to achieve EEG-controlled cursor, limb movement and prosthesis control, and have even successfully communicated by means of a BCI (Birbaumer et al, 1999; Hochberg et al, 2006; Buch et al, 2008).<br />
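The translation chain just described (brain activity → features → command → device) can be sketched as a minimal closed loop. Everything below is an illustrative stand-in, not an actual BCI implementation: the acquisition function returns random data, and the feature and decoding rules are toys:<br />

```python
import random

def acquire_epoch(n_channels=8, n_samples=128):
    """Stand-in for an EEG acquisition call; returns one buffered epoch
    (channels x samples). A real system would read from the amplifier."""
    return [[random.gauss(0.0, 1.0) for _ in range(n_samples)]
            for _ in range(n_channels)]

def extract_features(epoch):
    """Toy feature: mean signal power per channel (real systems use band
    power, spatial filters, event-related potentials, etc.)."""
    return [sum(x * x for x in ch) / len(ch) for ch in epoch]

def decode(features, threshold=1.5):
    """Toy decoder: threshold the features into a discrete command."""
    return "move" if max(features) > threshold else "rest"

# One iteration of the closed loop: EEG -> features -> command -> device.
command = decode(extract_features(acquire_epoch()))
```

In a deployed BCI this loop runs continuously, and the command drives the wheelchair, orthosis or robot in place of muscular output.<br />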
The remainder of this chapter reviews current non-invasive BCI technology applied to the control of robots, focussing on the <strong>CORBYS</strong> technology gaps and innovations. Subsequently, the main components of a BCI system (hardware, software, basic signal processing and decoding) will be described in the following sections, showing the state of the art, the constraints and the respective technology innovations.<br />
15.1 Invasive vs. NonInvasive BCI Technology <strong>and</strong> Robotics<br />
Recently there has been a great surge in the research and development of brain-controlled devices for rehabilitation. The most significant characteristic of these systems is the use of invasive or non-invasive methods to record brain activity. Invasive techniques require a clinical intervention to implant the electrodes on the cortex, while non-invasive methods place the sensors outside the skull (e.g. electroencephalography, EEG).<br />
The research arena is dominated by the US for invasive techniques <strong>and</strong> research in animals, while the EU<br />
leads in the development of non-invasive techniques with direct application to humans. On the one hand, US<br />
teams have achieved significant results using invasive recording methods to control artificial devices. A rat,<br />
for example, was able to move a robotic arm in one dimension to get water (Chapin et al, 1994), monkeys<br />
were trained to move a cursor on a screen <strong>and</strong> adjust its size (Carmena et al, 2003) <strong>and</strong> to self-feed using a<br />
robotic arm with a gripper, to grab food in three dimensions (Velliste et al, 2008), <strong>and</strong> a patient had sensors<br />
implanted directly in the cortex <strong>and</strong> learned to open <strong>and</strong> close a prosthetic h<strong>and</strong> <strong>and</strong> control a simple robotic<br />
arm in two dimensions (Hochberg et al, 2006). These achievements show that direct <strong>and</strong> online control of a<br />
robotic device is possible with invasive brain recordings. However, these settings involve technical <strong>and</strong><br />
clinical difficulties for humans, such as the maintenance of the electrodes in the cortex, infection risks, <strong>and</strong><br />
damage to the brain (McFarland & Wolpaw, 2008).<br />
On the other h<strong>and</strong>, in line with <strong>CORBYS</strong>, much research in BCI has focused on non-invasive recording<br />
methods to be accessible for a wide range of users. Currently, Electroencephalography (EEG) is the most<br />
widely used technology. Following the human non-invasive brain-actuated robot control demonstrated in<br />
2004 (Millán et al, 2004), research has addressed control of other rehabilitation devices such as wheelchairs,<br />
robotic arms, small-size humanoids, <strong>and</strong> even teleoperation robots for telepresence applications (some of them<br />
developed by members of the Consortium). The majority of research in brain computer interfaces for robot control uses non-invasive methods to record brain activity, and has explored the way that humans can deliver control orders to the machine using brain waves. For example, wheelchairs (Iturrate et al, 2009; Luth et al, 2007; Rebsamen et al, 2007) were driven using a BCI that detects steady-state potentials or P300 evoked potentials, or by detection of mental tasks (Vanacker et al, 2007). Other results focused on the<br />
motion of a robotic arm in two dimensions using motor imagery (McFarland & Wolpaw, 2008), controlling<br />
the opening <strong>and</strong> closing of a h<strong>and</strong> orthosis (Pfurtscheller et al, 2000) with sensory motor rhythms, using motor<br />
imagery to move a neuroprosthesis (Muller-Putz et al, 2005), using P300 potentials to control a humanoid<br />
robot (Bell et al, 2008), and the teleoperation of a remotely located mobile robot for navigation and exploration, also with P300 potentials (Escolano, 2009). However, none of these projects has explored the<br />
type of human cognitive information that could be extracted in this process <strong>and</strong> how it could be used at all the<br />
autonomy levels of the robotic system.<br />
Technological Gaps in Merging Non-Invasive BCI and Robotics. Nevertheless, we are still far from a successful deployment of non-invasive brain-controlled devices, because the devices mentioned above share common shortcomings that require substantial scientific advances:<br />
1. The mental protocol for robot control is not natural for the user, i.e., the user’s intention is not explicit<br />
in the control. For example, in one of the wheelchairs, the user had to concentrate on rotating a 3D<br />
figure to turn right or on complex arithmetic operations to turn left.<br />
2. There is no mutual self-adaptation between the human and the controlled device. Usually, adaptation is considered only at the BCI level, to take into account variability in brain activity across subjects and over time, but there is no mutual adaptation of the human to the robot and vice versa.<br />
3. There is a lack of a general and modular software architecture that successfully integrates the different BCI technologies and that also complies with the requirements of robotic hardware and software architectures. This is important for large-scale or integration projects.<br />
4. The working scenarios are very simple, controlled situations, given the current state of the art in autonomous robotics. For instance, the most complex tasks achieved are opening or closing a robotic hand or robot navigation in a two-dimensional world.<br />
Innovation in <strong>CORBYS</strong><br />
In robot-related rehabilitation programmes and in many other robot contexts it has been suggested that human cognitive processes, such as motor intention, attention and higher-level motivational states, are important factors with the potential to build a natural and richer cognitive interaction with the robot (Tee et al, 2008). The possible innovations of <strong>CORBYS</strong> are to build a BCI that decodes and detects motor intentions in real time, together with rich cognitive information to be used in the subsequent levels of the robot hierarchy. This general objective requires innovation in several areas of brain computer interfacing, such as the development of signal processing and machine learning techniques to detect, in real time, neural processes preceding movement and cognitive processes related to the human execution of the task. In addition, a brain computer software architecture will have to be developed as an integration tool for the BCI system, with interconnections with all the modules of the project. These aspects, plus others more related to the<br />
deployment of the application (human locomotion rehabilitation), will be addressed in the next sections.<br />
15.2 Hardware for Non-invasive Brain-Computer Interfaces<br />
Neural activity produces not only electrical activity but also other measurable signals, such as magnetic fields and metabolic changes, which can likewise be measured non-invasively. However, these measurements require magnetoencephalography (MEG) for the magnetic fields, and positron emission tomography (PET), functional magnetic resonance imaging (fMRI) or optical imaging for the metabolic activity. These techniques require sophisticated devices and facilities beyond the scope of a technology intended for a wide range of people. In addition, these methods are still technically demanding and costly, and most of them also have a low temporal resolution, which is a strong limitation for rapid communication (i.e. the information transfer rate, one of the most important features of communication devices).<br />
Focusing on electrical brain activity, the electroencephalogram (EEG) is a sensor system that places several electrodes on the surface of the scalp. These electrodes allow scalp recordings of neural activity in the brain by measuring the potential changes over time between a signal electrode and a reference electrode. An extra third electrode (ground) is used to obtain a differential voltage; a minimal EEG configuration thus consists of one active electrode, one reference electrode and one ground electrode (Teplan, 2002). These measurements are amplified and digitised, and subsequently transferred to the computer. The EEG is the most accepted brain recording technique for brain-computer interfacing, since it is relatively cheap, portable, easy to use and has a low set-up cost. For these reasons, in line with the BCI community, the EEG is the recommended recording technology for <strong>CORBYS</strong>. However, two main dimensions affect the selection of the type of EEG technology: the sensor technology and the sensor acquisition system.<br />
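The differential measurement described above (signal electrode minus reference, with a ground electrode) is often complemented in software by re-referencing the recorded channels. A minimal sketch follows; the function names are illustrative, not from any EEG toolbox:<br />

```python
def rereference(signals, ref):
    """Differential recording: each channel minus the reference electrode,
    sample by sample. signals is channels x samples, ref is one channel."""
    return [[s - r for s, r in zip(ch, ref)] for ch in signals]

def common_average_reference(signals):
    """Re-reference each sample to the average over all channels (CAR),
    a common software re-referencing scheme in BCI."""
    n = len(signals)
    per_sample = []
    for t in range(len(signals[0])):
        avg = sum(ch[t] for ch in signals) / n
        per_sample.append([signals[i][t] - avg for i in range(n)])
    # transpose back to channels x samples
    return [list(col) for col in zip(*per_sample)]

raw = [[1.0, 2.0], [3.0, 4.0]]   # 2 channels, 2 samples (toy data)
ref = [0.5, 0.5]
diff = rereference(raw, ref)                 # [[0.5, 1.5], [2.5, 3.5]]
car = common_average_reference(raw)          # [[-1.0, -1.0], [1.0, 1.0]]
```

Which reference is used (linked mastoids, a single electrode, or the common average) changes what each channel "sees", which is why the reference montage is part of any EEG acquisition specification.<br />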
On the one hand, the sensors are the bottleneck of today's non-invasive BCIs, since they require a long preparation time due to the use of conductive gel. Standard gel-based EEG electrodes are passive electrodes, which require the skin to be cleaned, brushed and scraped in order to reduce their impedance and achieve sufficient signal quality. This abrasive skin treatment makes the subject uncomfortable and, when applied repeatedly, exposes them to the risk of irritation, pain and infection (Teplan, 2002). Moreover, the electrolytic gel may cause an electrical short between two electrodes in close proximity when the montage is not correctly carried out. To reduce preparation time, active electrodes were developed, which reduce the artefacts and signal noise resulting from high impedance between the electrodes and the skin. The system set-up becomes faster, but conductive gel is still used. For these reasons, wet sensors are not suitable for daily use in normal living environments, and in the last few years there has been increasing interest in developing dry electrodes, which eliminate the need for conductive gel. These dry electrodes are based on a completely different sensor technology. The main features of wet and dry electrodes are listed below:<br />
1. Contact impedance: wet electrodes show a contact impedance about half that of dry electrodes (Searle & Kirkup, 2010).<br />
2. Static interference: the influence of an electrically charged object moving near the recording electrodes (non-stationary electric fields) was analysed. Dry electrodes were tested in unshielded and shielded conditions, showing in both cases smaller interference levels than wet types. In particular, dry electrodes with shielding showed an interference level 40 dB lower than that experienced by wet electrodes (Searle & Kirkup, 2010).<br />
3. Motion artefact: changes in potential at the electrode/skin interface due to skin movement cause unwanted signals. Movement tests showed for dry electrodes an artefact value 20 dB higher than for wet types at the commencement of the trials, whereas at the end of the trial period,<br />
due to a reduction in skin/electrode interface effects for dry electrodes over the length of the trial, RMS artefact values for dry electrodes were 8.2 dB lower than for wet electrodes. This outcome may be attributed to the geometry of the dry electrode housing (Searle & Kirkup, 2010).<br />
4. Usability: wet electrodes, with their lengthy application time, subject discomfort and irritation, and the need for extensive caregiver support and training, have made the technology impractical. In contrast, dry electrodes, with their easy and reliable application in a home environment and their use without any skin preparation or gel application, may help the widespread adoption of BCI technology (Mason, 2005).<br />
5. Stability: the wet-electrode signal tends to become unstable over time because the conductive gel dries during prolonged measurements, reducing signal quality and performance. Dry electrodes, on the other hand, behave like wet types at the beginning but become more stable as time goes on (Matteucci et al, 2007).<br />
6. Portability: typical EEG equipment using wet electrodes is not mobile; it has a high power consumption and a mass of wires connecting the cap to a PC that processes, stores, displays and analyses the signals. The requirements for power, control and read-out are reduced with dry electrodes, making a wireless solution more feasible (Sullivan et al, 2008).<br />
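As a rough check on the figures above: level differences quoted in dB convert to amplitude (voltage) ratios via the standard 20·log10 relation, so 40 dB lower interference means a factor of 100 in amplitude, and 8.2 dB means roughly a factor of 2.6. A small worked example (illustrative only):<br />

```python
def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to an amplitude (voltage) ratio,
    using the 20*log10 convention for field quantities."""
    return 10 ** (db / 20.0)

# 40 dB lower interference (shielded dry vs wet) = 100x smaller amplitude.
ratio_40 = db_to_amplitude_ratio(40)    # 100.0
# 8.2 dB lower RMS motion artefact at the end of the trials ~ 2.6x smaller.
ratio_8_2 = db_to_amplitude_ratio(8.2)
```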
Some of these features have been corroborated by research teams that have developed working prototypes of dry EEG sensors and demonstrated that the signal obtained can be largely comparable to that of wet electrodes (Popescu, 2007). They have also been tested in BCI paradigms such as alpha/mu rhythms and flash visual evoked potentials (FVEP), showing a high correlation with signals recorded in parallel with standard wet electrodes in the alpha/mu rhythm experiment and a negligible difference in the FVEP experiment (Gargiulo et al, 2010). In addition, the N100 auditory evoked potential, the auditory evoked P300 event-related potential and the sensorimotor rhythm (SMR) for 2-class motor imagery were also tested with bristle sensors: the dry electrodes showed a signal close to the one recorded with gel-based electrodes between at least 7 Hz and 44 Hz (Grozea et al, 2011).<br />
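The band-limited similarity reported in these studies is typically quantified by correlating the two parallel recordings after restricting them to the band of interest. A minimal sketch with synthetic signals follows; the Pearson correlation here stands in for the full band-pass analysis, and both "wet" and "dry" signals are artificial:<br />

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equally long signals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic "wet" recording and a noisier "dry" recording of the same rhythm.
t = [i / 250.0 for i in range(500)]                 # 2 s at 250 Hz
wet = [math.sin(2 * math.pi * 10 * ti) for ti in t] # 10 Hz alpha-band rhythm
dry = [w + 0.1 * math.sin(2 * math.pi * 50 * ti)    # small mains-like noise
       for w, ti in zip(wet, t)]
r = pearson(wet, dry)   # close to 1 for highly similar signals
```

In a real comparison the two channels would first be band-pass filtered (e.g. to 7–44 Hz) before correlating, so that out-of-band noise does not dominate the coefficient.<br />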
Figure 40: EEG Dry Devices<br />
On the other hand, clinical-grade acquisition systems for dry devices are not yet available on the market; instead, several commercial offerings exist that could fulfil BCI requirements. A key feature is that these systems are focused on everyday use of EEG equipment over a long-term period. For that reason, particular attention is paid to the headset design, which has to be lightweight and comfortable while at the same time ensuring coverage of the area of interest. One of the main strengths of these systems is that they are portable, since they incorporate a wireless link to a computer. For instance, the Enobio system (Figure 40(b)), developed by Starlab, is a device with 4 dry active digital electrodes placed on the forehead; it shows responses similar to the ActiveTwo system from Biosemi (a gel-based device) (Cester et al, 2008; Riera et al, 2008; Starlab, n.d). The Neural Impulse Actuator (NIA) (Figure 40(e)), an OCZ Technology product, uses three carbon-fibre sensors placed on the forehead (OCZ Technology, n.d); NeuroSky, in turn, provides with the MindSet (Figure 40(d)) a sensor system with just one single dry sensor at EEG position FP1 (International 10-20 system) (NeuroSky, n.d). These are wearable wireless devices; however, all of them are constrained in their EEG research possibilities, since the sensor locations do not cover the areas of interest for the BCI community (such as the sensorimotor cortex) and they sense activity in the prefrontal areas, focusing on mental states such as attention and meditation. So far, the EPOC (Figure 40(a)), commercialised by Emotiv, goes furthest in fulfilling BCI requirements: it collects data from 14 saline sensors (they eliminate the electrolytic gel of wet electrodes, but a saline solution has to be used) placed in a wearable wireless neuroheadset, located at AF3, AF4, F3, F4, F7, F8, FC5, FC6, T7, T8, P7, P8, O1 and O2. P300 experiments have already been conducted with it (Campbell et al, 2010). Unfortunately, the EPOC neuroheadset does not support experiments involving the motor cortex (a brain area widely used in BCI) (Emotiv, n.d).<br />
The g.SAHARA (Figure 41(a)) electrode system, a g.tec product, consists of 8-pin electrodes made of a special golden alloy. The golden alloy and the 8 pins reduce the electrode-skin impedance. EEG recordings are performed at the frontal, central, parietal and occipital regions of the head (a mechanical system is required that holds the electrode to the skin with constant pressure at every possible recording location) (g.tec Technologies, n.d). The g.SAHARA can be used in combination with the g.MOBIlab+ (Figure 41(b)), a portable biosignal acquisition and analysis system.<br />
Figure 41: Dry and portable EEG recording system provided by g.tec: dry electrodes (left)<br />
and a portable biosignal acquisition and analysis system (right)<br />
15.3 Software for Non-invasive Brain-Computer Interfaces<br />
Although the field of brain-computer interfaces is relatively young, significant research has been carried out on the design of models and components to control interfaces and devices (see (Berger, 2008) for a review). Although this is a key point for the development and deployment of BCI-controlled devices, very little work has been devoted to designing general, modular and flexible software architectures for BCI (Mason et al, 2007).<br />
On the one hand, the major existing BCI frameworks are very constrained in terms of usability and scalability. Some of these frameworks have been developed by private companies as small demonstrators of their technologies, serving basically as showcases for the hardware they provide (g.tec Technologies, n.d; BioSig Project, n.d). Very specific open-source systems exist for biomedical signal processing and biosignal analysis, such as Matlab toolboxes (Delorme & Makeig, 2004) or software libraries (BioSig Project, n.d).<br />
OpenEEG (OpenEEG, n.d) is an open-source project that promotes the creation of amateur software for neurofeedback using OpenEEG hardware designs. The OpenEEG software is very specific and simple, but lacks a common architecture, has incomplete documentation and has not been extensively tested (OpenEEG, n.d). In addition, some other companies have developed proprietary software architectures for<br />
neurofeedback (BioEra, n.d.; BioExplorer, n.d.; Soft-dynamics, n.d.; Zengar Institute Inc., n.d.). Recently, three other software platforms have been proposed: BCI++ (Maggi et al, 2008), BF++ (Brainterface, n.d.) and OpenViBE (Renard, 2010). The first is a C/C++ framework for designing a BCI system that also includes some 2D/3D features for BCI feedback. The second is another C++ framework, originally oriented towards neurofeedback applications, although later versions can support any kind of BCI system. The latter has emerged rapidly in the last few years; it offers a set of software modules that can be easily and efficiently integrated to design BCIs for both real and virtual environments. It is highly modular and operates independently of the different software targets and hardware devices. However, its graphical interface is still not intuitive or usable for non-programmer users. On the other hand, BCI2000 (Schalk et al, 2004) is the result of a joint effort of several laboratories and is currently the most widely used BCI software platform worldwide.<br />
High-level Tools: Usability is a crucial aspect of a BCI software platform, and its acceptance and spread rely on how easy the platform is to use for both non-programmers and developers. As a result, such a platform should provide its users with high-level tools offering the flexibility to customise the software and build different applications, as well as usability for non-programmers. As illustrated in Table 4, some examples exist. However, many of them require programming skills (e.g. code skeleton generators, Matlab analysis tools), while those aimed at non-programmers are too complex and limited (e.g. scene editors). The innovation at this point would be to develop graphical tools following a user-centred approach and to minimise third-party dependencies. By addressing the use cases of both programmers and non-programmers, the major requirements would be captured. Those requirements must guide the development of such tools in order to obtain powerful as well as usable software applications that support flexible BCI designs.<br />
Technological Gaps in Merging Non-Invasive BCI and Robotics: A software analysis of this platform against standard requirements for BCI systems (Mason et al, 2007) reveals, however, some limitations when it is used as a general software architecture, and in particular in the context of the robotic project <strong>CORBYS</strong>:<br />
1. Recording software: only one recording technique can be used at a time for one application (e.g. EEG or MEG). This makes it complex to test or validate a complementary approach to controlling devices.<br />
2. Only one type of neural paradigm can be processed at a time: As a result, it is not possible to use multiple<br />
control channels simultaneously (e.g., simultaneous robot control <strong>and</strong> online robot error recognition).<br />
3. It is a single-processing-technique platform: only one signal-processing module can run at a time, which is a limitation in applications with redundant processing modules (for a given neural paradigm) or with parallel processing (of several paradigms, as in the previous example).<br />
4. It is single application software: only one device can be controlled at a time, which prevents the<br />
development of applications where humans control several devices simultaneously (e.g., two arms for<br />
manipulation).<br />
5. It does not have interaction functionalities with other software architectures. This type of interface will<br />
open the possibility of interaction with other systems <strong>and</strong> re-using many existing algorithms already<br />
present in robotics architectures such as Stage/Player (Gerkey et al, 2003), OROCOS (Bruyninckx, 2011),<br />
CARMEN (Montemerlo et al, 2003), ROS (ROS.org, n.d.) or CoolBOT (CoolBOT Project, n.d.) .<br />
6. Non-portable <strong>and</strong> single-platform: it relies on third-party components for compilation <strong>and</strong> execution (it can<br />
only be compiled with Borl<strong>and</strong> under the Windows operating system).<br />
7. Lack of high-level tools: it does not provide graphical tools or software components aimed at non-programming<br />
users in order to facilitate the construction <strong>and</strong> customisation of BCI applications.<br />
Table 4: Main platform comparison<br />
153
<strong>D2.1</strong> <strong>Requirements</strong> <strong>and</strong> <strong>Specification</strong><br />
Table 4 shows a brief comparison between the main BCI frameworks to date. It illustrates some drawbacks,<br />
such as a lack of portability (in some cases due to third-party dependencies, e.g. Matlab or development environments)<br />
or a lack of usable high-level tools aimed at non-programmers (those offered generate weakly customisable<br />
BCIs or are too complex). These drawbacks are a limitation for future developments in BCI research <strong>and</strong> its<br />
applications. Furthermore, to the best of our knowledge, there are no programming abstractions <strong>and</strong> support<br />
tools to facilitate modularity <strong>and</strong> flexible integration in this development. This is another limitation for large-scale<br />
projects <strong>and</strong> for non-programming experts.<br />
Innovation in <strong>CORBYS</strong>: In this context, the innovation required in <strong>CORBYS</strong> would be to design a software<br />
architecture for BCI research <strong>and</strong> development that overcomes these constraints. This platform will be<br />
software that supports multiple neural paradigms, multiple signal-processing modules, <strong>and</strong> multiple applications<br />
(covering some of the limitations of existing BCI platforms). In addition, the new software architecture would be<br />
portable <strong>and</strong> multi-platform (supporting embedded systems for distributed intelligence in general-purpose<br />
micro-controllers), scalable <strong>and</strong> distributed (to balance workload <strong>and</strong> hardware capabilities), with real-time<br />
functionalities <strong>and</strong> with inter-connectivity to support collaborative applications for robotic platforms. These<br />
features are important for <strong>CORBYS</strong>, since the key point is to provide a framework that decodes several mental<br />
processes simultaneously while offering a portable real-time integration framework with connectivity to the<br />
robotic system.<br />
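The multi-paradigm, multi-application requirement above can be illustrated with a minimal dispatcher sketch. All class <strong>and</strong> function names below are hypothetical, purely for illustration (the actual <strong>CORBYS</strong> architecture is specified elsewhere in this document): several paradigm decoders run over the same acquisition window, <strong>and</strong> each decoder can feed several device sinks.<br />

```python
from typing import Callable, Dict, List

class BCIPipelineRegistry:
    """Illustrative sketch of a multi-paradigm BCI dispatcher.

    Several decoders (one per neural paradigm) run on the same sample
    window, and each decoder's output can drive several device sinks,
    so e.g. motor-intention decoding and error recognition coexist.
    """

    def __init__(self) -> None:
        self._decoders: Dict[str, Callable[[List[float]], float]] = {}
        self._sinks: Dict[str, List[Callable[[str, float], None]]] = {}

    def add_decoder(self, paradigm: str, fn: Callable[[List[float]], float]) -> None:
        self._decoders[paradigm] = fn
        self._sinks.setdefault(paradigm, [])

    def add_sink(self, paradigm: str, sink: Callable[[str, float], None]) -> None:
        self._sinks[paradigm].append(sink)

    def process(self, window: List[float]) -> Dict[str, float]:
        # Every paradigm decoder sees the same acquisition window.
        out: Dict[str, float] = {}
        for name, fn in self._decoders.items():
            score = fn(window)
            out[name] = score
            for sink in self._sinks[name]:
                sink(name, score)
        return out

# Usage: two paradigms decoded from one window, one feeding a device sink.
reg = BCIPipelineRegistry()
reg.add_decoder("motor_intention", lambda w: sum(w) / len(w))
reg.add_decoder("error_potential", lambda w: max(w))
log = []
reg.add_sink("motor_intention", lambda p, s: log.append((p, s)))
result = reg.process([0.0, 1.0, 2.0])
```

The design point is the decoupling: adding a paradigm or a device is a registration call, not a change to the processing loop.<br />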
15.4 The Role of EEG Artefacts in Non-Invasive BCIs<br />
Figure 42: Example EEG artefacts<br />
One of the most important aspects in biomedical signal processing is to acquire knowledge about noise <strong>and</strong><br />
artefacts in order to minimise them. Artefacts are undesirable signals that can occur in the signal acquisition<br />
process <strong>and</strong> interfere with neurological phenomena. They may change the characteristics of the EEG signal,<br />
limiting the accurate evaluation of the ongoing brain processes, <strong>and</strong> may even be incorrectly used as the<br />
control source in BCI systems (Fatourechi et al, 2007). This is why it is of the utmost importance to develop<br />
automatic computer-based methods that h<strong>and</strong>le artefacts; this technology is a fundamental<br />
requirement for a BCI system (Sörnmo & Laguna, 2005). In EEG recordings (<strong>and</strong> as a consequence in EEG-<br />
BCI systems) a wide range of artefacts can occur. While some of these artefacts can be easily identified,<br />
others may have characteristics similar to the neural activity <strong>and</strong> are extremely difficult to<br />
recognise. One possible categorisation of artefacts is based on their origin: technical (originating from outside<br />
the human body, such as 50/60 Hz power-line noise, changes in electrode impedances, etc.); <strong>and</strong><br />
physiological (arising from a variety of bodily activities, such as potentials introduced by eye or body<br />
movements, muscular activity, cardiac activity, etc.). Some examples of artefacts are shown in Figure 42.<br />
As mentioned before, artefacts may change the characteristics of the EEG signal, limiting the accurate<br />
evaluation of the ongoing brain processes, <strong>and</strong> may even be incorrectly used as the control source in BCI systems.<br />
This is the reason why several automatic methods for artefact processing have been developed in the BCI<br />
literature.<br />
On the one h<strong>and</strong> there is artefact rejection, which discards contaminated trials <strong>and</strong> thus implies the loss of<br />
valuable data (Ramoser et al, 2000; Millán et al, 2002). Due to the very large number of undesirable signals in<br />
BCI systems, not all contaminated trials can be rejected; usually only the epochs most affected by artefacts<br />
are excluded from the analysis. Therefore, the “cleaned” data are not completely free of artefacts. This<br />
methodology is only usable for offline analysis: in online real-time applications of a BCI system it is not<br />
possible to have time periods in which artefact-contaminated signals are rejected, as the BCI system could<br />
then not be used to control the device.<br />
On the other h<strong>and</strong> there is artefact removal, where the objective is to remove the<br />
artefacts as far as possible while keeping the related neurological phenomena intact. There are several<br />
types of artefact removal techniques:<br />
1. Linear filtering: used to remove artefacts located in frequency b<strong>and</strong>s that are not useful for the<br />
application of interest (Barlow, 1984; Ives & Schomer, 1988). Low-pass <strong>and</strong> high-pass filtering can be<br />
used to remove EMG <strong>and</strong> EOG artefacts respectively. The main advantage of this method is its<br />
simplicity; however, it fails when the neurological phenomena <strong>and</strong> the artefacts overlap or lie in the same<br />
frequency b<strong>and</strong> (Geetha & Geethalakshmi, 2011).<br />
2. Linear combination <strong>and</strong> regression: a common technique for removing ocular artefacts from EEG signals<br />
(Croft & Barry, 2000), it uses a linear combination of the EOG-contaminated EEG signal <strong>and</strong> the EOG<br />
signal. One problem of this approach is that subtracting the EOG signal from the EEG signal may also<br />
remove part of the EEG itself. Regression techniques can also be used to remove head-movement, jaw-clenching<br />
<strong>and</strong> saliva-swallowing artefacts (Geetha & Geethalakshmi, 2011).<br />
3. Principal component analysis: closely related to the mathematical technique of singular value<br />
decomposition (SVD), it requires the artefacts <strong>and</strong> the EEG signal to be uncorrelated. This method<br />
has been reported to be not completely effective on EOG, EMG <strong>and</strong> ECG artefacts, especially when<br />
they have amplitudes comparable to the EEG signal (Lagerlund et al, 1997; Geetha & Geethalakshmi,<br />
2011).<br />
4. Blind source separation: a family of techniques generally based on unsupervised learning<br />
algorithms, which identify the components attributed to artefacts <strong>and</strong> reconstruct the EEG signal<br />
without their contribution. Independent component analysis (ICA) is the most widely utilised (Choi et al, 2005).<br />
This method has been used extensively to remove ocular artefacts, <strong>and</strong> also EMG <strong>and</strong> ECG artefacts in<br />
clinical studies. Its main advantage is that it does not rely on the availability of reference artefacts;<br />
however, it usually needs prior visual inspection to identify the artefact components (Geetha &<br />
Geethalakshmi, 2011).<br />
5. Others: wavelet transform (Browne & Cutmore, 2002), nonlinear adaptive filtering (He et al, 2004) <strong>and</strong><br />
source dipole analysis (SDA) (Berg & Scherg, 1994), although these have so far seen only limited application in<br />
BCI systems.<br />
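The regression approach in item 2 can be sketched in a few lines (a didactic numpy sketch, not any of the cited implementations): the propagation of the EOG reference channel into a contaminated EEG channel is estimated by least squares <strong>and</strong> subtracted.<br />

```python
import numpy as np

def remove_eog_by_regression(eeg, eog):
    """Illustrative regression-based artefact removal (technique 2 above).

    Estimates the propagation coefficient b of the EOG channel into the
    contaminated EEG channel by least squares, then subtracts b * eog.
    Note the caveat from the text: if the EOG channel itself contains
    brain activity, part of the EEG is removed along with the artefact.
    """
    eog = np.asarray(eog, dtype=float)
    eeg = np.asarray(eeg, dtype=float)
    b = np.dot(eog, eeg) / np.dot(eog, eog)  # least-squares slope
    return eeg - b * eog

# Usage on synthetic data with a known propagation factor of 0.8.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
brain = np.sin(2 * np.pi * 10 * t)       # 10 Hz "neural" rhythm
eog = rng.normal(size=500)               # ocular reference channel
contaminated = brain + 0.8 * eog
cleaned = remove_eog_by_regression(contaminated, eog)
```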
Technological Gaps in Artefact H<strong>and</strong>ling for Non-Invasive BCI <strong>and</strong> Robotics: Historically, EEG signals have<br />
been considered noise-prone, <strong>and</strong> even though human cognition often occurs during dynamic motor action, most<br />
EEG studies examine subjects in static conditions (e.g. seated). However, the <strong>CORBYS</strong> project requires the<br />
patient to be walking while using the BCI system. During locomotion a large number of mechanical<br />
artefacts arise, associated for instance with head movements, which can have amplitudes an<br />
order of magnitude greater than the corresponding brain-related EEG signal. These kinds of artefacts are non-stationary,<br />
since the kinematics <strong>and</strong> kinetics of human walking exhibit both short-term (step-to-step) <strong>and</strong> long-term<br />
(over many steps) variability (Gwin et al, 2010).<br />
Very few studies have been carried out on EEG during human locomotion. These include recording brain<br />
activity while pedalling a stationary bicycle (Jain, 2009), <strong>and</strong> while walking <strong>and</strong> running on a treadmill<br />
(Gwin et al, 2010).<br />
In the latter study, the EEG signals of eight healthy volunteers were recorded from 248 active electrodes during a<br />
visual oddball discrimination task while they simultaneously walked or ran on a treadmill. A two-stage<br />
approach was used to remove locomotion artefacts: first a channel-based template regression procedure was<br />
applied, followed by an Infomax independent component analysis (ICA) <strong>and</strong> a component-based<br />
template regression. The results showed that during walking the locomotion artefacts only slightly<br />
contaminate the EEG signals in an event-related paradigm; during running, however, the EEG<br />
signals are strongly affected by movement artefacts. The artefact removal technique implemented in the study<br />
successfully separated brain EEG signals from gait-related noise. This is believed to be the first<br />
study of EEG <strong>and</strong> event-related potentials from a cognitive task recorded during human locomotion, <strong>and</strong> its<br />
results show the feasibility of removing gait-related movement artefacts from EEG signals. Note, however,<br />
that the type of EEG signals that this method separates are event-related responses, which are of a different<br />
nature than the processes that the <strong>CORBYS</strong> project will address (spontaneous activity). As is well known in the<br />
BCI community, EEG event-related responses are much more robust features to detect <strong>and</strong> classify (since they<br />
are more stationary) than those of spontaneous EEG. It is therefore expected that the<br />
impact of EEG artefacts during locomotion will be much stronger in the <strong>CORBYS</strong> context than in the<br />
study above.<br />
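A heavily simplified sketch of the channel-based template-regression stage described above may clarify the idea (illustrative only; the published two-stage procedure is considerably more elaborate): the step-locked average is taken as the artefact template <strong>and</strong> regressed out of each gait epoch.<br />

```python
import numpy as np

def subtract_gait_template(epochs):
    """Simplified channel-based template regression (didactic sketch).

    epochs: array (n_steps, n_samples) of one EEG channel segmented on
    gait events. The step-locked average is used as the artefact
    template; a per-step regression gain accommodates the step-to-step
    amplitude variability mentioned in the text.
    """
    epochs = np.asarray(epochs, dtype=float)
    template = epochs.mean(axis=0)
    cleaned = np.empty_like(epochs)
    for i, ep in enumerate(epochs):
        g = np.dot(template, ep) / np.dot(template, template)
        cleaned[i] = ep - g * template
    return cleaned

# Usage: a large gait-locked artefact with varying per-step gain
# superimposed on small background activity.
rng = np.random.default_rng(0)
tmpl = 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
gains = rng.uniform(0.8, 1.2, size=30)
epochs = gains[:, None] * tmpl + 0.1 * rng.normal(size=(30, 200))
cleaned = subtract_gait_template(epochs)
```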
Required innovation in <strong>CORBYS</strong>: The innovation that <strong>CORBYS</strong> will require is to develop signal-processing<br />
techniques to address <strong>and</strong> correctly filter EEG artefacts acquired during human locomotion. Note that these<br />
techniques will have to accommodate the non-stationary nature of the spontaneous process that will be used in<br />
<strong>CORBYS</strong>. In addition, these movement-artefact removal techniques will need to support online<br />
analysis with a low calibration time, work with a low number of electrodes (a maximum of 16),<br />
<strong>and</strong> require low computational resources. This innovation step will be challenging<br />
<strong>and</strong> crucial for the deployment of the BCI in the project, since the overall performance of the BCI will<br />
strongly depend on the precision <strong>and</strong> robustness achieved.<br />
15.5 Decoding the Cognitive Process Required in <strong>CORBYS</strong><br />
As mentioned in the introduction of this document, the general objective of <strong>CORBYS</strong> requires innovation in<br />
several areas of brain-computer interfacing, such as the development of signal-processing <strong>and</strong> machine-learning<br />
techniques to detect, in real time, neural processes preceding movement <strong>and</strong> cognitive<br />
processes (such as feedback potentials <strong>and</strong> attention) related to the human execution of the task.<br />
15.5.1 EEG Decoding of Motor Intentions<br />
Several studies have demonstrated the appearance of EEG activity preceding human voluntary movement.<br />
These signals are associated with motor-task preparation <strong>and</strong> are dissimilar to those during the actual execution;<br />
Figure 43 shows EEG activities related to preparation <strong>and</strong> execution. The intention to move is associated<br />
with at least three cortical activities, specifically the readiness potential (or Bereitschaftspotential, BP), the<br />
contingent negative variation (CNV) <strong>and</strong> the event-related (de)synchronisation (ERD/ERS).<br />
Figure 43: Voluntary h<strong>and</strong> movement phases: motion intention,<br />
motion execution <strong>and</strong> motion end<br />
The readiness potential starts approximately 2.0 s before the movement onset <strong>and</strong>, as pointed out in the first<br />
reports (Kornhuber & Deecke, 1964, 1965) <strong>and</strong> in subsequent studies (Deecke et al, 1969, 1976; Kutas &<br />
Donchin, 1980), rapidly increases its gradient about 400 ms before the muscular activity. It is characterised by<br />
maximal activity in the midline centro-parietal area <strong>and</strong> by a symmetrical <strong>and</strong> wide scalp distribution<br />
independent of the site of movement. The BP occurrence, with respect to the movement onset, differs among<br />
subjects <strong>and</strong> movement conditions. The early <strong>and</strong> the late segments of the BP are called “early BP” <strong>and</strong> “late<br />
BP” respectively, owing to their different scalp distributions. The late BP, given its asymmetric distribution<br />
associated with unilateral h<strong>and</strong> movement, has been studied as the lateralised readiness potential (LRP) (Coles<br />
et al, 1988).<br />
The contingent negative variation, a similar slow negativity preceding movements, occurs during the interval<br />
between a warning stimulus (S1) <strong>and</strong> a subsequent imperative stimulus (S2) before the movement onset<br />
(Walter et al, 1974; Tecce, 1972). The earlier part of the CNV can be distinguished from the later part (or<br />
terminal CNV, tCNV) based on their different scalp distributions. The former is generated in response to S1 <strong>and</strong><br />
it is maximal over the frontal cortex; the latter can start up to 1.5 s before S2 <strong>and</strong> it is characterised by a<br />
maximal activity over the frontal <strong>and</strong> prefrontal cortices indicating cognitive preparation, <strong>and</strong> over the primary<br />
motor cortex (M1) <strong>and</strong> supplementary motor area (SMA) reflecting motor preparation (Rohrbaugh et al, 1976;<br />
Rosahl & Knight, 1995; Ikeda et al, 1996).<br />
The event-related desynchronisation/synchronisation (ERD/ERS) reflects power changes of the EEG<br />
oscillatory activity in various frequency b<strong>and</strong>s <strong>and</strong> is related to various tasks including voluntary<br />
movement. In the case of motor-task preparation, alpha (8-13 Hz) <strong>and</strong> beta (14-30 Hz) are the frequency b<strong>and</strong>s<br />
to be analysed (Shibasaki & Hallett, 2006). The ERD, corresponding to a power decrease, is related to increased<br />
neural activity of the associated cortical area, while the ERS, corresponding to a power increase, is associated<br />
with decreased activation (Pfurtscheller et al, 1997). The ERD starts about 1.5 s before the movement onset<br />
<strong>and</strong> is localised over the contralateral precentral <strong>and</strong> postcentral areas of the scalp. At approximately 750-<br />
500 ms before the muscular activity, desynchronisation of relevant magnitude can be measured over the<br />
ipsilateral scalp as well. The ERS, instead, starts about 1 s prior to the movement over the occipital cortex<br />
(Bastiaansen et al, 1999).<br />
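The ERD described above is conventionally quantified as a relative b<strong>and</strong>-power change with respect to a rest baseline. The following is a minimal numpy sketch (illustrative only; b<strong>and</strong> edges as in the text, synthetic signals in place of real EEG):<br />

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x in [f_lo, f_hi] Hz via the FFT periodogram."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[mask].sum()

def erd_percent(baseline, pre_movement, fs, band=(8.0, 13.0)):
    """Classical ERD measure: relative alpha-band power change of a
    pre-movement window with respect to a rest baseline. Negative
    values indicate desynchronisation. Didactic sketch only.
    """
    p_base = band_power(baseline, fs, *band)
    p_pre = band_power(pre_movement, fs, *band)
    return 100.0 * (p_pre - p_base) / p_base

# Usage: a 10 Hz rhythm that is attenuated to half amplitude before
# movement, i.e. its band power drops to a quarter (ERD of -75%).
fs = 256
t = np.arange(fs) / fs
baseline = np.sin(2 * np.pi * 10 * t)
pre_move = 0.5 * np.sin(2 * np.pi * 10 * t)
erd = erd_percent(baseline, pre_move, fs)
```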
Several research labs have analysed these EEG signals. Most of them have focused on upper-extremity<br />
pre-movement signals, mainly on the discrimination between the right <strong>and</strong> left h<strong>and</strong>; only a few<br />
have investigated the lower extremities (e.g. legs), <strong>and</strong> so far there are no studies discriminating between the<br />
right <strong>and</strong> left leg. The following list provides an insight into a representative variety of work on pre-motor<br />
potentials currently being pursued in research groups:<br />
• Morash <strong>and</strong> his group utilised the CNV <strong>and</strong> ERD/ERS to predict which of four movements (right-h<strong>and</strong><br />
squeeze, left-h<strong>and</strong> squeeze, tongue press on the roof of the mouth, <strong>and</strong> right-foot toe curl) was about to<br />
occur (Morash et al, 2008). EEG signals were recorded from eight right-h<strong>and</strong>ed, healthy, BCI-naive<br />
subjects using 29 channels over the sensorimotor areas. The movement was instructed with a specific<br />
stimulus (S1) <strong>and</strong> performed at a second stimulus (S2). A spatial filter (ICA) <strong>and</strong> a temporal filter<br />
(DWT) were used in the pre-processing, while an offline evaluation was done using a naive Bayesian<br />
(BSC) classifier. An average classification accuracy of 40% was reached. The results of this study<br />
suggest that the ERD/ERS is the neural signal preceding movements that is most specific to the<br />
particular movement.<br />
• Pfurtscheller <strong>and</strong> his group analysed similar movement intentions (left index finger, right index<br />
finger <strong>and</strong> right foot) to show the feasibility of automatic methods for selecting the optimal<br />
electrode positions <strong>and</strong> the optimal number of channels for an EEG-based brain-state classifier (Peters et<br />
al, 2001). Data collected from three healthy subjects recorded with 56 electrodes, developed <strong>and</strong><br />
presented in a previous study (Pfurtscheller et al, 1994), were used. Spatial <strong>and</strong> frequency filtering<br />
were applied; for the former, several techniques were compared, showing common average<br />
reference (CAR) <strong>and</strong> the Laplace filter to be the best. Classification was performed using an artificial neural<br />
network (ANN) with three perceptrons for each channel, whose output is an “opinion” in the<br />
majority-voting process for the automatic selection. High classification accuracy was obtained for<br />
left/right index finger discrimination (93%) <strong>and</strong> for left/right index finger <strong>and</strong> right foot<br />
discrimination (89%). However, the data were collected from subjects who were prompted to perform<br />
specific tasks by a computer.<br />
• The Berlin Brain-Computer Interface group has made an important contribution with various studies.<br />
Blankertz utilised the LRP to predict single-trial EEG potentials preceding voluntary finger movement<br />
(Blankertz et al, 2003). Eight healthy subjects were instructed to press keyboard keys in a self-chosen<br />
order <strong>and</strong> timing. The signal was first filtered using a Fourier-transform filtering technique <strong>and</strong> then<br />
classified with a linear classifier (LDA). They managed to predict the laterality of imminent h<strong>and</strong>/finger<br />
movements (right vs. left), <strong>and</strong> demonstrated the possibility of achieving good accuracy even at<br />
fast motor-comm<strong>and</strong> rates (2 taps per second) with a single-trial EEG paradigm. The latter result is<br />
very relevant, since it takes into account fast motor-sequence conditions <strong>and</strong> starts to analyse how the<br />
after-effects of one movement superimpose on the preparation of a consecutive movement (offline).<br />
In a similar experimental setting Blankertz compared different kinds of classifiers, Support Vector<br />
Machines (SVMs) <strong>and</strong> variants of the Fisher Discriminant, reaching high accuracy levels (>96%); beyond<br />
this classifier a second one was trained to distinguish movement events from rest<br />
(Blankertz et al, 2002). Krauledat showed the possibility of using the LRP for motor-task classification<br />
even in time-critical contexts (Krauledat et al, 2004). The EEG was recorded with 27 up to 120<br />
electrodes while the subject had to respond as quickly as possible with finger movements to different<br />
stimuli. After b<strong>and</strong>-pass filtering relying on the fast Fourier transform, an LDA classification to<br />
discriminate the pre-movement potentials of left vs. right h<strong>and</strong>/finger movements was performed. The<br />
LRP curves showed a similar shape in spontaneous motor activity <strong>and</strong> in critical situations.<br />
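The LDA classification used in the studies above can be sketched as follows. This is a generic two-class Fisher/LDA sketch on synthetic "left vs. right" features, not the Berlin group's implementation (which includes regularisation <strong>and</strong> feature extraction not shown here):<br />

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher/LDA sketch, as commonly used for single-trial
    LRP classification. Returns weight vector w and bias b so that
    sign(x @ w + b) separates the classes (+1 for class 1).
    Shrinkage regularisation is reduced to a tiny ridge term.
    """
    X0, X1 = np.asarray(X0, float), np.asarray(X1, float)
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class covariance
    S = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    S /= (len(X0) + len(X1) - 2)
    w = np.linalg.solve(S + 1e-6 * np.eye(S.shape[0]), m1 - m0)
    b = -0.5 * np.dot(w, m0 + m1)
    return w, b

# Usage: synthetic 2-D features standing in for pre-movement potentials.
rng = np.random.default_rng(1)
left = rng.normal(loc=[-1.0, 0.0], scale=0.3, size=(50, 2))
right = rng.normal(loc=[1.0, 0.0], scale=0.3, size=(50, 2))
w, b = fit_lda(left, right)
acc = 0.5 * (np.mean(right @ w + b > 0) + np.mean(left @ w + b < 0))
```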
Technological Gaps <strong>and</strong> related <strong>CORBYS</strong> innovation: As mentioned before, most existing work has<br />
focused on upper-extremity pre-movement signals, mainly on the discrimination between the right <strong>and</strong> left<br />
h<strong>and</strong>; only a few studies have investigated the lower extremities (e.g. legs), <strong>and</strong> so far there are none<br />
discriminating between the right <strong>and</strong> left leg. <strong>CORBYS</strong> will progress the state of the art by building a real-time<br />
anticipatory BCI based on the intention of motion of the human feet, distinguishing between the movement of<br />
the left <strong>and</strong> right legs. A fully integrated <strong>and</strong> online anticipatory BCI will be developed that improves current<br />
recognition rates <strong>and</strong> widens the scope of known neural protocols that can be used in these BCIs. It<br />
is noted that the use of intention/execution of motor imagery as a protocol for BCIs is attracting interest in<br />
robot-mediated therapies, since it has been demonstrated that during motor execution, attention to task-related<br />
features of the movement has an effect on motor performance (Tee et al, 2008). This cognitive information<br />
has an impact on the robotic scenario of <strong>CORBYS</strong>, since the information about the intention of feet movements<br />
will be used by the high-level perception modules of the cognitive control architecture. Though the development<br />
of the <strong>CORBYS</strong> anticipatory BCI will be driven by the <strong>CORBYS</strong> demonstrator, this BCI technique is not<br />
limited to robot-assisted rehabilitation but can be applied to any other robot-control application.<br />
15.5.2 EEG Decoding of Feedback Potentials<br />
In neuroscience <strong>and</strong> neuropsychophysiology it is well known that event-related brain potentials<br />
(ERPs), evoked responses elicited by the presence of an internal or external event (Wolpaw et al, 2002), can be used to<br />
study the underlying mechanisms of human error processing <strong>and</strong> monitoring (Ferrez & Millán, 2004).<br />
Different types of errors have been described such as when a subject performs a choice reaction task under<br />
time pressure <strong>and</strong> realises that he has committed an error (Christ et al, 2000) (response ErrPs); when the<br />
subject is given feedback indicating that he has committed an error (Nieuwenhuis et al, 2004) (feedback or<br />
reinforcement ErrPs); when the subject perceives an error committed by another person (observation ErrPs)<br />
(Coles et al, 2004); or when the subject delivers an order <strong>and</strong> the machine executes another one (Ferrez &<br />
Millán, 2004) (interaction ErrP). Recently it has been demonstrated that these potentials are also elicited when the<br />
subject perceives an error in the robot's operation, in both simulated <strong>and</strong> real scenarios (Iturrate et al, 2010). In<br />
addition, several works have shown that it is possible to use signal processing <strong>and</strong> machine learning<br />
techniques to perform automatic single trial classification of these ErrPs (Ferrez & Millán, 2004; Iturrate et al,<br />
2010; Dal Seno, 2009).<br />
Feedback is usually an event perceived by a person or an animal as a return on an executed task, indicating<br />
whether the conduct was appropriate or not. Human learning mainly depends on the ability to distinguish<br />
between positive <strong>and</strong> negative feedback, <strong>and</strong> this discrimination is detectable in the brain activity (Nieuwenhuis et<br />
al, 2004). Furthermore, it is known that some skills do not develop properly if feedback inputs are absent. In<br />
the last few years, therapists have used positive/negative feedback to improve their practice <strong>and</strong> the<br />
motivation of patients whose progress is slow (Tee et al, 2008). Recently, in the field of Brain-Computer<br />
Interfaces (BCI), there has been increasing interest in the online detection of these feedback potentials, because<br />
they carry information for measuring indirect parameters of the human learning process that could be used to<br />
maximise the performance of the therapeutic strategy (Wolpaw et al, 2002).<br />
Technological Gaps <strong>and</strong> related <strong>CORBYS</strong> innovation: There are very few works on the design of a BCI<br />
system for the online classification of feedback ERPs (Lopez et al, 2010). The experimental protocol used was the<br />
same as that proposed in (Miltner et al, 1997), where the subjects were required to estimate a certain amount of<br />
time (1 s), pushing a button when they believed that the time had elapsed. After each response a feedback<br />
stimulus was provided indicating the accuracy of the previous estimation. Each trial started with a warning cue.<br />
The brain activity of five healthy subjects was measured using 32 electrodes placed at FP1, FP2, F7, F8, F3, F4,<br />
T7, T8, C3, C4, P7, P8, P3, P4, O1, O2, AF3, AF4, FC5, FC6, FC1, FC2, CP5, CP6, CP1, CP2, Fz, FCz, Cz,<br />
CPz, Pz <strong>and</strong> Oz. The signals were classified employing a Support Vector Machine (SVM) with a radial basis<br />
function kernel. This study analysed the requirements of the classifier in terms of the amount of training data, its<br />
performance across sessions <strong>and</strong> the possibility of fast re-training in order to achieve good performance<br />
using data from previous sessions. An online analysis of the data was also performed, showing an average<br />
recognition rate of 78%. <strong>CORBYS</strong> will progress the state of the art by designing a real-time detection<br />
system for feedback errors in robotic applications, analysing the different feedback modalities (e.g. visual,<br />
auditory, vibrotactile, etc.) that best suit the defined robotic rehabilitation scenario.<br />
15.5.3 EEG Decoding of Attentional States<br />
Cognitive processes are produced <strong>and</strong> controlled within the central nervous system (CNS); accordingly, the brain<br />
<strong>and</strong> physiological activity of the body reflect these states. Cognitive states change the patterns of<br />
physiological signals (e.g. heart rate, skin temperature, respiration, etc.), <strong>and</strong> several biosensors have been used to<br />
identify them; stress, relaxation <strong>and</strong> exhaustion conditions have been analysed most often (Shi et al, 2007; Zhai<br />
& Barreto, 2006; Kulic & Croft, 2007). Over the last few decades several studies have highlighted the<br />
relation between attention, or other relevant mental conditions, <strong>and</strong> EEG spectral features. For instance, in<br />
Jung et al, (1997) a power spectrum estimation was combined with principal component analysis (PCA) <strong>and</strong><br />
artificial neural networks to estimate a local error rate in a sustained attention task. Others, instead, have<br />
focused on specific EEG rhythms. In Kelly et al, (2003) <strong>and</strong> Huang et al, (2007) alpha b<strong>and</strong> power was<br />
examined to investigate the attentional dem<strong>and</strong>s <strong>and</strong> the brain dynamics following vehicle deviation in<br />
sustained attention tasks, respectively. In addition to alpha, the gamma b<strong>and</strong>, with frequencies greater than 30<br />
Hz, was analysed to determine its enhancement during a visual spatial attention task (i.e. a moving-bar-like<br />
paradigm) (Gruber et al, 1999). In Haufler et al, (2000), log-transformed EEG power spectral estimates for<br />
various frequency b<strong>and</strong>s were compared during a self-paced visuospatial task between skilled marksmen <strong>and</strong><br />
novice gunmen. Beyond attention, increases in alpha (Foster, 1990; Lindsley, 1952; Brown, 1970) <strong>and</strong> theta<br />
power have been interpreted as signs of relaxation (Teplan et al, 2009); in Teplan et al, (2006) this was<br />
shown during long-term audio-visual stimulation. Furthermore, in relation to attention, clinical studies have been<br />
conducted on Attention Deficit Hyperactivity Disorder (ADHD), suggesting that theta/beta self-regulation<br />
reduces its symptoms (Barry et al, 2003; Monastra et al, 2005), quantifying the deficit (Clarke et al, 2001;<br />
Koehler et al, 2009), <strong>and</strong> showing that it represents the basis of the neurofeedback treatment (Lubar, 1991; Linden et al, 1996;<br />
Friel, 2007). Recently, there is evidence that the states of attention <strong>and</strong> non attention can be discriminated<br />
achieving up to 89% classification accuracy rate in an online environment using a novel approach to extract,<br />
select <strong>and</strong> learn EEG spectral-spatial patterns. This new approach combines advanced signal processing <strong>and</strong><br />
machine learning: the filtering pre-processing step consists of two stage, a filter-bank (FB) <strong>and</strong> common<br />
spatial patterns (CSP) filters, while a mutual information technique selecting best features with a linear<br />
classifier were applied to measure the attention level (Hamadicharef et al, 2009). Aside from visual attention,<br />
attentional modulation of auditory event-related potentials was reported. Listening to two concurrent auditory<br />
stimuli, the event-related EEG is modulated by the user selective attention to one or the other (Hillyard et al,<br />
1973; Ntnen, 1982, 1990). These results were exploited to develop a BCI paradigm, in which the subject<br />
could make a binary choice by focusing its attention (Hill et al, 2004).<br />
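The two-stage spectral-spatial filtering described above can be illustrated by a minimal sketch of the CSP stage. This is an illustrative NumPy implementation on synthetic data, not the pipeline of Hamadicharef et al. (2009); all function names, shapes and parameters are assumptions made for the example.<br />

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns (CSP): spatial filters that maximise the
    variance of one class of band-passed EEG trials while minimising the
    other's.  trials_*: shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        # normalised spatial covariance, averaged over trials
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    d, u = np.linalg.eigh(ca + cb)
    whiten = u @ np.diag(d ** -0.5) @ u.T            # whitening transform
    d2, b = np.linalg.eigh(whiten @ ca @ whiten.T)   # eigenvalues ascending
    w = b.T @ whiten                                 # rows = spatial filters
    # keep the filters with the most extreme eigenvalues (most discriminative)
    keep = np.r_[:n_pairs, len(d2) - n_pairs:len(d2)]
    return w[keep]

def csp_features(trial, w):
    """Log-variance features of a spatially filtered trial."""
    z = w @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())
```

In the full filter-bank variant, `csp_filters` would be run once per frequency band, the resulting log-variance features pooled, and a mutual-information criterion used to select the best features before linear classification.<br />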
Technological Gaps and related <strong>CORBYS</strong> innovation: During motor execution, attention to movement-<br />
task-related features plays a fundamental role in motor performance (Ingram et al, 200; Zachry et al,<br />
2005). Furthermore, several studies have investigated the attention function during the learning of<br />
novel sensorimotor transformations and the adaptation to novel force perturbations, reporting its<br />
importance in learning a novel motor task (Lang & Bastian, 2002). These results encouraged a focus on the<br />
role of attention in motor pattern recovery for impaired subjects (e.g. post-stroke motor rehabilitation (Blanc-Garin,<br />
1994)). In a robotic rehabilitation scenario, repeated and unchallenging stimuli may decrease attention to<br />
sensory information in the motor control loop (Tee et al, 2008). These outcomes suggest that motor relearning<br />
in robotic rehabilitation can be optimised by measuring the patient's attention during the tasks and<br />
modulating the feedback accordingly, increasing attention to task-relevant features. <strong>CORBYS</strong> will make<br />
progress in real-time recognition of the degree of attention and stress/relaxation from EEG data while the user<br />
is interacting with and operating the robot, using existing models from the clinical and psycho-physiological<br />
literature to find new correlations with the biosensor data.<br />
State-of-the-Art Relevant to the two Demonstrator Application Domains<br />
16 State-of-the-Art in Gait Rehabilitation Systems (VUB)<br />
Gait training, over ground or on a treadmill, has become an essential part of rehabilitation therapy in patients<br />
suffering from gait impairment caused by disorders such as stroke, spinal cord injury, multiple sclerosis <strong>and</strong><br />
Parkinson's disease. Its effectiveness is increasingly evidenced by clinical trials <strong>and</strong> advancements in<br />
neuroscience. Though seemingly trivial, the notion of “(re)learning to walk by walking” hides some of the key<br />
research questions that puzzle the field of rehabilitation science and beyond.<br />
Similar to the neurological principles underlying human walking itself, the principles underlying motor<br />
learning <strong>and</strong> neural recovery are not yet fully understood <strong>and</strong> are the subject of ongoing research. As a<br />
consequence, research efforts in the field are focused on quantifying the rehabilitation process <strong>and</strong> identifying<br />
rehabilitation practice that maximises outcome. In one of the existing practices, body-weight supported<br />
treadmill training (BWSTT), the patient's body weight is partially supported by an overhead harness while<br />
his/her lower limb movements are assisted by one to three physiotherapists. The strenuous physical effort<br />
encumbering the therapists and the resulting short training sessions were among the main reasons for<br />
introducing robotics into gait rehabilitation. Although this introduction was envisaged by therapists as well, it<br />
was mainly driven by engineering, strengthened by technological advancements in robotics <strong>and</strong> prior research<br />
into powered exoskeletons for humans. The advantages that were initially aimed at by automating therapy,<br />
namely enhancing intensity, repeatability, accuracy <strong>and</strong> quantification of therapy, are indeed easily associated<br />
with robotics. However, a robot operating in close physical contact with an impaired human requires an<br />
approach to robot performance that differs significantly from the viewpoint of industrial robotics. Accurate<br />
repeated motion imposed by a position-controlled robot is considered counterproductive for several reasons: it<br />
lacks adaptable and function-specific assistance, limits the learning environment, and reduces the patient's<br />
motivation and effort. Nowadays, the field of rehabilitation robotics is increasingly convinced of<br />
a human-centred approach in which robot performance is focused on how the robot physically interacts<br />
with the patient.<br />
A focus on the human in the robot puts emphasis on the adaptability <strong>and</strong> task specificity of robotic assistance<br />
required to achieve “assistance-as-needed”. At the same time, safety of interaction, preventing harm and<br />
discomfort, is m<strong>and</strong>atory. Variable stiffness or variable impedance is a promising concept in robot design <strong>and</strong><br />
control that addresses both safety <strong>and</strong> adaptability of physical human-robot interaction (pHRI). It implies that<br />
the robot gives way to human interaction torques to a desired <strong>and</strong> adjustable extent. This adds to the high<br />
requirements that were already imposed by the application, for instance with regard to wearability (compact<br />
<strong>and</strong> light weight design, adjustable to the individual) <strong>and</strong> actuator performance (high torque output, high<br />
power-to-weight ratio). Hence, in the development of novel prototypes rehabilitation roboticists are faced<br />
with the challenge of combining suitable design concepts, high performance actuator technologies <strong>and</strong><br />
dedicated control strategies with a view to improved physical human-robot interaction. Such improvement should<br />
lead to better insight into the effects and effectiveness of robot-assisted rehabilitation and, ultimately, to<br />
better therapies.<br />
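The variable-impedance idea above can be sketched as a joint-level spring-damper law whose gains set how much the robot "gives way". The following is a generic illustrative sketch; the control law form is the standard impedance law, but the gains, names and the single-joint simulation are assumptions for the example and are not taken from any device discussed here.<br />

```python
def impedance_torque(q, qd, q_ref, qd_ref, stiffness, damping):
    """Joint-space impedance law: the commanded torque pulls the joint
    toward a reference trajectory with an adjustable 'give'.  Low
    stiffness lets the wearer deviate (compliant behaviour); high
    stiffness enforces the trajectory (stiff, position-control-like)."""
    return stiffness * (q_ref - q) + damping * (qd_ref - qd)

def simulate_step(q0, q_ref, stiffness, damping, inertia=1.0,
                  dt=0.001, duration=5.0):
    """Forward-simulate one joint (no human torque) under the law above,
    using semi-implicit Euler integration."""
    q, qd = q0, 0.0
    for _ in range(int(duration / dt)):
        qdd = impedance_torque(q, qd, q_ref, 0.0, stiffness, damping) / inertia
        qd += qdd * dt
        q += qd * dt
    return q
```

With the stiffness lowered towards zero the same law approaches transparent (near zero-torque) behaviour; raising it recovers trajectory-enforcing control, which is the adjustable extent of "giving way" described above.<br />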
16.1 Gait rehabilitation<br />
In persons with damage to the central nervous system, for instance due to stroke (brain damage) or incomplete<br />
SCI (spinal cord damage), task-specific <strong>and</strong> intensive gait training leads to (partial) recovery of motor<br />
function <strong>and</strong> improved gait (Barbeau <strong>and</strong> Rossignol, 1994; Dietz et al.,1994; Hesse et al., 1995). In addition,<br />
there are several secondary positive effects on the physical <strong>and</strong> mental condition of these patients (Hidler et<br />
al., 2008). Gait training prevents many of the secondary complications that often result from neurological<br />
injury <strong>and</strong> gait impairment (e.g. joint stiffening, muscle atrophy, cardiovascular deterioration, pneumonia,<br />
deep venous thrombosis). Therefore, treadmill training is a well-established practice nowadays in rehabilitation<br />
centres for the neurologically impaired.<br />
The driving force behind neural recovery is neural plasticity, the ability of neural circuits, both in the brain<br />
<strong>and</strong> the spinal cord, to reorganise or change function (Elbert et al., 1994). This process was clearly evidenced<br />
in prior animal research, revealing the existence of so-called “central pattern generators” at the level of the<br />
spinal cord that allow gait to be reinstated in animals through training following a spinal cord lesion. However, neural<br />
recovery has proven to be much more complex in humans, as human walking involves both spinal control <strong>and</strong><br />
brain control (Yang <strong>and</strong> Gorassini, 2006). For rehabilitation to be successful neural plasticity should thus be<br />
maximally promoted. Although the mechanisms underlying neural recovery are not yet fully understood,<br />
there is a growing consensus about the major enabling principles. Sensory input from the muscles <strong>and</strong> joints<br />
to the central nervous system (afferent input) is crucial (Ridding <strong>and</strong> Rothwell, 1999; Harkema, 2001). Also,<br />
these sensory cues should match as closely as possible with those normally involved in the task to be<br />
relearned. Some critical cues of human locomotion have been established, but they remain the subject of ongoing research<br />
(Behrman <strong>and</strong> Harkema, 2000). Another important requirement for recovery is that training should be<br />
intensive <strong>and</strong> that it should be started as early as possible after the injury to maximise outcome (Sinkjaer <strong>and</strong><br />
Popovic, 2005). The need for intensive training <strong>and</strong> the aim of relieving therapists from the physical strain<br />
induced by manually assisted gait training, triggered the application of robotic assistance to gait rehabilitation.<br />
The development <strong>and</strong> use of gait rehabilitation robots both in rehabilitation practice <strong>and</strong> in research labs has<br />
strengthened the validity of some concepts that are believed to underlie gait retraining in general <strong>and</strong> also to<br />
increase the effectiveness of robot-assisted training itself. A key finding is that assisting movements (too<br />
much) may result in reduced effort <strong>and</strong> decreased motor learning (Marchal-Crespo <strong>and</strong> Reinkensmeyer, 2008).<br />
This is evidenced by motor learning studies in unimpaired subjects involving robotic assistance to learn a<br />
movement task, <strong>and</strong> appears to apply to neural recovery as well. Some studies explored the benefits of<br />
amplifying movement errors instead of correcting them, which was found to improve short term motor<br />
learning, as reported for instance in Reisman et al., 2007. It was also shown that the training should be<br />
adapted to the skills of the subject: similar to providing too much assistance, providing too little is<br />
counterproductive (Emken et al., 2007). Another important aspect to training is active participation, which is<br />
promoted by motivation. The robotic training environment should trigger the subject to self-initiate <strong>and</strong><br />
actively contribute to the movements <strong>and</strong> also to sustain efforts (Lotze et al., 2003). The suggestion that effort<br />
may be more important than (robotic) assistance questions the rationale behind the use of robots in movement<br />
therapy (Reinkensmeyer et al., 2007). Nonetheless, many rationales for using robotic assistance in gait<br />
rehabilitation can be found in literature (Guglielmelli et al. 2009; Marchal-Crespo <strong>and</strong> Reinkensmeyer, 2009).<br />
Previously unexplored movements provide novel sensory information to the patient, assistance makes gait<br />
training more safe <strong>and</strong> intense, <strong>and</strong> helping the patient accomplish desired movements is an important<br />
motivating factor (an extensive overview can be found in Marchal-Crespo <strong>and</strong> Reinkensmeyer (2009)).<br />
The aforementioned concepts are encompassed by a best practice in assistance-based robotic therapy<br />
commonly referred to as “assistance-as-needed”, implying that the robot should assist only as much as needed<br />
<strong>and</strong> only where needed. Hence, the level of assistance should be adaptable <strong>and</strong> task (or function) specific.<br />
Newly developed robot technology for gait rehabilitation is increasingly focused on this paradigm. The<br />
following section puts emphasis on general concepts <strong>and</strong> hardware.<br />
Figure 44: Gait rehabilitation robots: end-effector based (e.g. HapticWalker)<br />
<strong>and</strong> exoskeleton based (e.g. Lokomat®). Related applications supporting the development of exoskeleton based gait<br />
rehabilitation robots: human performance augmenting exoskeletons (e.g. HAL), assistive exoskeletons (e.g. ReWalk®),<br />
powered prosthetics (e.g. C-leg®)<br />
16.2 Gait rehabilitation robots<br />
In the course of merely ten years, the number of rehabilitation devices, <strong>and</strong> from a broader perspective, the<br />
advancements in assistive technology, have grown remarkably. Although common challenges are faced in the<br />
development of robots for the upper limbs, this overview is limited to devices for the lower limbs. Similarly<br />
to rehabilitation robots for the upper limb, gait rehabilitation robots can be categorised according to their<br />
underlying kinematic concept into end-effector based <strong>and</strong> exoskeleton based robots (Guglielmelli et al., 2009).<br />
End-effector based robots interact with the human body in a single point (through their end-effector), whereas<br />
exoskeleton based robots interact with the human body in different points across human joints. The latter<br />
typically have an anthropomorphic, serial linkage type structure that acts in parallel with the lower limbs.<br />
A few gait training devices belong to neither of these two categories; String-man, a device<br />
consisting of tensioned wires attached to the body, is an example (Surdilovic et al., 2007).<br />
16.2.1 End-effector type devices<br />
Commercially available end-effector type devices are the GT1 Gait Trainer <strong>and</strong> its successor G-EO<br />
(Rehastim, Germany). These are based on a doubled crank-and-rocker gear system driving two<br />
programmable footplates that generate gait-like movements of the lower limbs (Hesse and Uhlenbrock, 2000).<br />
Figure 45: End-effector type devices: (from left to right) HapticWalker®, G-EO®, ARTHuR<br />
The HapticWalker is based on the same concept of permanent foot-machine contact<br />
(Schmidt et al., 2005a). This concept is also typically found in parallel type rehabilitation robots with a<br />
platform for rehabilitation of the ankle/foot <strong>and</strong> for balance training as for instance in Saglia et al. (2010);<br />
Yoon and Ryu (2005). ARTHuR is a unilateral 2-DOF device using a backdrivable two-coil linear motor<br />
<strong>and</strong> a pair of lightweight linkages to drive a footplate (Emken et al., 2006). It has been used primarily to study<br />
motor learning principles <strong>and</strong> to evaluate a teach-<strong>and</strong>-replay procedure with impedance adaptation (Emken et<br />
al., 2008).<br />
16.2.2 Exoskeleton type devices<br />
Most gait rehabilitation robots are exoskeleton based <strong>and</strong> prototype development in this type of device is often<br />
supported <strong>and</strong> stimulated by advancements in related applications: assistive exoskeletons, human performance<br />
augmenting exoskeletons <strong>and</strong> powered prosthetics (Figure 46). Rehabilitation exoskeletons, assistive<br />
exoskeletons <strong>and</strong> human performance augmenting exoskeletons are the three main types of powered<br />
exoskeletons for humans <strong>and</strong> the past decade has seen a multitude of research prototypes <strong>and</strong> devices of this<br />
sort. As their common rationale is the assistance of human gait, these exoskeletons are not easily categorised<br />
<strong>and</strong> sometimes assistive exoskeletons <strong>and</strong> human performance augmenting exoskeletons find their way to a<br />
rehabilitation setting. Also, from an engineering viewpoint, these applications have common requirements<br />
with respect to the performance of the actuators, the weight <strong>and</strong> compactness of the structure <strong>and</strong> the<br />
wearability of the design. There remains however a clear distinction of basic functionality. Rehabilitation<br />
exoskeletons are aimed at recovery of impaired function, whereas assistive exoskeletons assist impaired<br />
function <strong>and</strong> human performance augmenting exoskeletons augment sound function. Powered prosthetics<br />
replace lost function. Although acting in series with the human body instead of in parallel, powered lower<br />
limb prostheses also inspire the development of gait rehabilitation exoskeletons, as they require high<br />
performance actuators, gait phase detection <strong>and</strong> user-oriented control. An overview of the state-of-the-art in<br />
powered lower limb prosthetics can be found in Martin et al., 2010; Versluys et al., 2009.<br />
16.2.2.1 Commercially available devices<br />
To date there are two commercially available exoskeleton-based gait rehabilitation robots: the AutoAmbulator<br />
(Healthsouth, US) <strong>and</strong> the Lokomat (Hocoma, Switzerl<strong>and</strong>). Both devices consist of a treadmill, an overhead<br />
suspension system with a harness <strong>and</strong> a robotic orthosis attached to the patient's lower limbs, assisting the hip<br />
<strong>and</strong> the knee bilaterally. The Lokomat, in particular, has undergone substantial testing with patients <strong>and</strong>, as<br />
opposed to the AutoAmbulator, is extensively reported in the literature. The Lokomat, originally purely position<br />
controlled, uses ball screw actuators <strong>and</strong> joint-space impedance control to achieve naturalistic joint trajectories<br />
at the hip <strong>and</strong> knee. Various patient-cooperative control strategies have been investigated (Jezernik et al.,<br />
2004; Duschau-Wicke et al., 2008) as well as a hardware extension of the system with additional actuated<br />
DOF (ab/adduction of the hip <strong>and</strong> lateral <strong>and</strong> vertical pelvic displacement, see Bernhardt et al., 2005a), but<br />
most functionalities were not transferred to the device that is currently on the market <strong>and</strong> in use in<br />
rehabilitation centres. The KineAssist (Kinea Design, US), having no lower body structure, is primarily<br />
intended for adaptable body weight support <strong>and</strong> walking balance training of stroke patients.<br />
Figure 46: Commercially available exoskeleton type devices:<br />
(from left to right) Lokomat®, AutoAmbulator®, KineAssist®<br />
16.2.2.2 Research prototypes<br />
Several research groups recognised the need for assistance-as-needed control strategies <strong>and</strong> for more<br />
physiological gait movements, both considered essential to increase the effectiveness of robot-assisted gait<br />
training. Most research efforts are focused on introducing adaptable compliance (or variable impedance) into<br />
the hardware and/or the control of the system and on extending the number of DOF of the exoskeleton, i.e.<br />
active DOF (actuated) and/or passive DOF (passive elements or none).<br />
Bilateral prototypes. In LOPES, besides flexion/extension of the knee and hip, lateral and forward/backward<br />
displacement of the pelvis and abduction/adduction of the hip are assisted (Veneman et al., 2007). Bowden-cable<br />
based series elastic actuators are used to power the exoskeleton's joints for reasons of inherent safety and<br />
force tracking performance (Veneman et al., 2005). The device is intended for use in stroke patients <strong>and</strong> focus<br />
is on task-specificity of assistance by means of virtual model control (Ekkelenkamp et al., 2007). PAM <strong>and</strong><br />
POGO use pneumatic cylinders to compliantly assist five out of six DOF of the pelvis <strong>and</strong> flexion/extension<br />
of the knee <strong>and</strong> hip. Zero-force control <strong>and</strong> impedance control are used consecutively in a teach-<strong>and</strong>-replay<br />
procedure (Aoyagi et al., 2007). Both LOPES <strong>and</strong> PAM/POGO are treadmill based devices. The<br />
WalkTrainer (Stauffer et al., 2009) is a mobile overground walking device, that consists of a mobile base with<br />
an active body weight supporting harness, a pelvic orthosis (6 actuated DOF) <strong>and</strong> two leg orthoses (3 actuated<br />
DOF each) (Allemand et al., 2009). It combines task-space impedance control of the orthoses with closed-loop<br />
functional electrical stimulation (FES) of the paraplegic patient's leg muscles.<br />
Unilateral and single-joint prototypes. In addition to bilateral prototypes, several unilateral rehabilitation exoskeletons, comprising one or<br />
more powered joints, have been developed. ALEX is a leg exoskeleton whose hip and knee joints are<br />
actuated by linear drives (Banala et al., 2009). A force field controller is implemented in task-space that<br />
displays a position-dependent force field acting on the foot. In Sawicki et al. (2005) ankle-foot and knee-ankle-foot<br />
orthoses powered by McKibben-type pneumatic muscles are investigated for task-specific<br />
rehabilitation. This actuator type has also been implemented in, amongst others, a bilateral prototype reported<br />
in Costa <strong>and</strong> Caldwell (2006) <strong>and</strong> in an ankle rehabilitation device for stroke patients in combination with<br />
springs (spring over muscle actuator) reported in Bharadwaj <strong>and</strong> Sugar (2006). A different type of pneumatic<br />
muscle, the pleated pneumatic artificial muscle, is used in KNEXO, a powered knee exoskeleton controlled by<br />
an interaction-oriented trajectory controller (Beyl et al., 2009).<br />
Figure 47: Bilateral prototypes: (from left to right) LOPES, PAM/POGO, WalkTrainer<br />
Figure 48: Unilateral <strong>and</strong> single joint prototypes:<br />
(from left to right) ALEX, Ankle foot orthoses of University of Michigan, KNEXO, SUE<br />
SERKA is an active knee orthosis for gait training focusing on stiff knee gait in stroke patients (Sulzer et al.,<br />
2009). It is driven by a rotational series elastic actuator, capable of providing nearly zero to high assistive<br />
torque, while keeping the added mass low by means of remote actuation through Bowden cables. AKROD is<br />
a knee orthosis with an electro-rheological fluid (ERF) variable damper component to correct hyperextension<br />
of the knee <strong>and</strong> stiff knee gait in stroke patients (Weinberg et al., 2007). Entirely passive systems have been<br />
developed as well. The gravity balancing orthosis (GBO) compensates the gravitational torques acting at the<br />
hip <strong>and</strong> the knee of the combined system (orthosis <strong>and</strong> leg) during swing by means of a dedicated spring<br />
mechanism (Banala et al., 2006). SUE is a passive bilateral exoskeleton with torsion springs in the hip <strong>and</strong><br />
knee joints optimised for propulsion of the legs during swing in treadmill walking (Mankala et al., 2007).<br />
16.2.2.3 Assistive exoskeletons<br />
Also in the field of assistive exoskeletons, a multitude of devices <strong>and</strong> prototypes have been developed, some<br />
of them also envisaged for use in rehabilitation or performance augmentation. The ReWalk (Argo Medical<br />
Technologies, Israel) is a bilateral robotic suit for the mobility impaired that is near to being released to the<br />
market. Some exoskeletons are specifically aimed at assisting the elderly, such as the walker based<br />
exoskeleton EXPOS reported in Kong <strong>and</strong> Jeon (2006) <strong>and</strong> its successor SUBAR (Kong et al., 2009), others<br />
focus entirely on body weight support, such as the Moonwalker (Krut et al. (2010)) <strong>and</strong> the Bodyweight<br />
Support Assist by Honda. A combination of a quasi-passive exoskeleton with functional electrical stimulation<br />
(FES) is proposed in Farris et al. (2009). Many single joint exoskeletons have been developed. The DCO<br />
(Hitt et al., 2007) <strong>and</strong> the AAFO (Blaya <strong>and</strong> Herr, 2004) are examples of active ankle foot orthoses making<br />
use of series-elastic actuators to assist in push-off or to correct dropped foot gait.<br />
Figure 49: Assistive exoskeletons: (from left to right) ReWalk, Body Weight Support Assist, SUBAR<br />
Figure 50: Power augmenting exoskeletons: (from left to right) BLEEX, Sarcos Exoskeleton,<br />
MIT's Quasi-passive Leg Exoskeleton, HAL<br />
16.2.2.4 Human performance augmenting exoskeletons<br />
The majority of human performance augmenting exoskeletons for the lower limbs has been designed for load<br />
carrying augmentation (e.g. carrying a backpack) with military applications in mind, such as BLEEX<br />
(Kazerooni <strong>and</strong> Steger, 2006), the Sarcos exoskeleton (Sarcos, US) <strong>and</strong> NTU exoskeleton (Low et al., 2005).<br />
Their control strategies are dedicated to unimpaired users. A quasi-passive leg exoskeleton, using a fraction<br />
of the power consumed by the aforementioned devices, is reported in Walsh et al. (2007). The robot suit HAL<br />
(Cyberdyne, Japan), whose control relies on muscle EMG measurements, is currently being evaluated as<br />
an assistive exoskeleton for the mobility impaired. For an extensive overview of powered lower limb<br />
exoskeletons the reader is referred to Dollar <strong>and</strong> Herr (2008).<br />
16.3 Robot control strategies for gait assistance<br />
In the previous section a general overview of the state-of-the-art in robot-assisted gait rehabilitation was given<br />
with emphasis on hardware. As mentioned in the introduction, various control strategies are being explored<br />
for robot-assisted rehabilitation of gait. This section deals with the state-of-the-art in robot control for gait<br />
assistance. The principles of motor learning <strong>and</strong> recovery in robot-assisted neuro-rehabilitation are not yet<br />
fully understood <strong>and</strong> difficult to translate into controller design guidelines (Reinkensmeyer et al., 2007).<br />
Control strategy developments therefore tend to be based on general concepts in rehabilitation, neuroscience<br />
<strong>and</strong> motor learning <strong>and</strong> many advances are, for the time being, engineering-driven, seeking improvements in<br />
robot control to realise those general concepts.<br />
Figure 51: High-level control strategies in robot-assisted gait rehabilitation:<br />
(from left to right) assistance based (LOPES), challenge based (ARTHuR), virtual-reality based (Lokomat), non-contact<br />
coaching based (USC's socially assistive robot for stroke patients)<br />
16.3.1 Assistance based control<br />
According to Marchal-Crespo and Reinkensmeyer (2009), the existing high-level controllers can be divided into<br />
four categories, depending on the underlying approach to achieve recovery:<br />
• Assistance based: the robotic device provides functional assistance, i.e. assistance of a functional task<br />
such as supporting the body weight, advancing the limbs, etc. (e.g. Duschau-Wicke et al., 2008;<br />
Ekkelenkamp et al., 2007; Aoyagi et al., 2007; Agrawal et al., 2007).<br />
• Challenge based: in which a functional task is made more difficult or challenging, as opposed to<br />
assistance based strategies envisaged to facilitate a task (e.g. Yoon <strong>and</strong> Ryu, 2005; Lam et al., 2008;<br />
Emken <strong>and</strong> Reinkensmeyer, 2005).<br />
• Virtual-reality based: a haptic interface is used to simulate specific environments and/or activities<br />
(e.g. Schmidt et al., 2005b; Zimmerli et al., 2009).<br />
• Non-contact coaching based: a mobile robotic coach, making no physical contact, encourages the<br />
patient (e.g. Mataric et al., 2007).<br />
For a robotic device the terms assistance <strong>and</strong> challenge both relate to the way in which the device physically<br />
interacts with the patient in order to accomplish a certain task. As such they define a spectrum of possible<br />
control strategies that could apply to different patients with different impairment levels or to a single patient<br />
throughout his/her recovery. Challenge based strategies in robotic therapy are often based on the device<br />
providing resistance against movements or amplifying movement errors with respect to a reference trajectory<br />
(Marchal-Crespo <strong>and</strong> Reinkensmeyer, 2009). These challenge based strategies are likely to be more effective<br />
in mildly affected patients <strong>and</strong>, especially in gait recovery, these would require a combination with an<br />
assistance based strategy to safely support patients with insufficient strength and/or motor control. The<br />
following overview focuses on assistance based strategies.<br />
A suitable way of differentiating between assistive control strategies is in terms of the activity level of the<br />
robot and the human-in-the-robot (Veneman et al., 2007). At one side of the spectrum of possible assistive<br />
control strategies one can envisage a system that is controlled to minimally interact with the human (patient-in-charge);<br />
at the other, the system is controlled to, at least partly, take over human function (robot-in-charge).<br />
The effectiveness of either strategy will likely depend on the residual strength <strong>and</strong> motor control of the<br />
patient: severely impaired patients would require a robot in a robot-in-charge mode to ensure safety <strong>and</strong><br />
continuity of movements, whereas patients with sufficient walking ability would benefit more from a device<br />
in a patient-in-charge mode that only intervenes when required during movements initiated <strong>and</strong> largely<br />
controlled by the patient. The ideal limits of the assistance level scale, namely full assistance (100% robot-in-charge)<br />
and zero assistance (100% patient-in-charge), respectively correspond with the robot providing the<br />
required power to generate its own and the patient's motion, and with the robot perfectly compensating its own<br />
dynamics, such that the presence of the device is not felt. Non-ideal solutions, such as the uncompensated<br />
dynamics of the robot, yield negative assistance and cause a shift towards a challenge based situation.<br />
16.3.2 Assistance-as-needed<br />
As discussed previously some properties of assistance based therapy can negatively affect motor learning <strong>and</strong><br />
neural recovery. The assistance influences <strong>and</strong> eases the task to be relearned <strong>and</strong> as such it may induce a<br />
decrease of the patient's effort (Israel et al., 2006) <strong>and</strong> reduced or altered motor learning (Marchal-Crespo <strong>and</strong><br />
Reinkensmeyer, 2008). This observation has led to a paradigm shift in robot-assisted rehabilitation of gait<br />
towards “assistance-as-needed”. In order to maximise the outcome of assistive therapy, the assistive controller<br />
should assist as much or as little as needed for the specific task, while triggering the patient to maximise his/her<br />
own efforts. Initially, assistive controllers were conceived as (high feedback gain) position-based controllers<br />
with a fixed target trajectory (Colombo et al., 2001). Such an assistive environment neither promotes nor<br />
allows patient-induced variability of the gait pattern and can therefore be considered almost equivalent<br />
to a pure robot-in-charge mode. With the assistance-as-needed paradigm the focus has shifted towards adaptivity<br />
of assistance, both in the targeted function/task (task-specificity) <strong>and</strong> in the assistance level. The adaptivity of<br />
assistance is envisaged between different individuals, between different phases of therapy, and also online, within an<br />
individual training session. From the state-of-the-art overview in the introductory chapter it is clear how the<br />
paradigm shift has influenced robot design on the hardware level: compliance as a means to provide<br />
variability has found its way to actuator design, either in the form of an intrinsically compliant actuator, a<br />
passive compliant element in series with a stiff actuator, or a passive compliant element alone. The following<br />
non-exhaustive overview is aimed at listing some of the different ways of implementing the assistance-as-needed<br />
concept on the control level.<br />
Variability in assistance is generally achieved by means of one or a combination of the following concepts:<br />
• task/function specific assistance<br />
• adaptivity of the assistance level<br />
• adaptivity of timing<br />
• adaptivity in space<br />
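The early fixed-trajectory approach, and the gain-scaling hook that the adaptivity concepts above exploit, can be sketched as follows. The sinusoidal target is an illustrative stand-in for a recorded gait trajectory; all names and gains are assumptions, not values from any cited device.

```python
import math


def pd_assist_torque(t, q, qdot, kp, kd, cycle_period=2.0, amplitude=0.5):
    """High-gain PD tracking of a fixed, time-indexed target trajectory
    (the Colombo-style assistive controller described in the text).

    t      -- time within the gait cycle [s]
    q/qdot -- actual joint angle [rad] and velocity [rad/s]
    kp/kd  -- PD gains; scaling them down is the simplest hook for
              assistance-level adaptivity, since lower gains permit
              larger patient-induced deviations from the target.
    """
    omega = 2.0 * math.pi / cycle_period
    q_des = amplitude * math.sin(omega * t)           # fixed target angle
    qdot_des = amplitude * omega * math.cos(omega * t)  # its time derivative
    return kp * (q_des - q) + kd * (qdot_des - qdot)
```

With very high gains the patient is dragged along the target regardless of effort, which is why this mode is described as almost equivalent to pure robot-in-charge.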
16.3.2.1 Task/function specific assistance<br />
Task/function specific assistance implies that the assistance is tailored to the gait function(s), joint(s) or<br />
limb(s) that need(s) it, while assistance is reduced where it is useless or adverse. Hybrid force-position<br />
control has been used to promote free motion during swing (force control) while using position<br />
control during stance (Bernhardt et al., 2005b, Lokomat). A similar approach has been conceived for training<br />
of hemiparetic patients: position control (Bernhardt et al., 2005b, Lokomat) or impedance control (Vallery et al.,<br />
2009, LOPES) of the impaired side <strong>and</strong> force control of the unaffected side. Another example of task-specific<br />
assistance is the use of Virtual Model Control (Ekkelenkamp et al., 2007, LOPES). A virtual model simulates<br />
a specific action that needs to be performed on the patient by means of the exoskeleton (e.g., foot lift during<br />
swing, body weight support, ...).<br />
The virtual model control is implemented by means of impedance control (in joint space or task space)<br />
defining the interaction between the actual joint/limb motion <strong>and</strong> a moving or fixed target position. For<br />
several reasons, different research groups have used force control to apply the zero-assistance (patient-in-charge)<br />
mode to the entire device: to record unassisted gait for use as target trajectories in a<br />
position/impedance control scheme (Aoyagi et al., 2007, PAM, van Asseldonk et al., 2007, LOPES) <strong>and</strong> as a<br />
reference or baseline for the assisted mode (van Asseldonk et al., 2008, LOPES). In order to approximate the<br />
ideal zero assistance level the robot's dynamics are modelled <strong>and</strong> partly compensated. In that case, instead of<br />
controlling the actuator output towards zero force/torque, the interaction forces/torques between the robot <strong>and</strong><br />
the human are minimised.<br />
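The impedance-based virtual model idea (e.g. a foot-lift model acting during swing) reduces, in one dimension, to a virtual spring-damper between the actual limb position and a moving or fixed target. This is a minimal sketch; the function name and gains are illustrative assumptions.

```python
def virtual_model_force(x, xdot, x_target, stiffness, damping):
    """One-dimensional virtual spring-damper between the actual limb
    position and a target, as in impedance-based virtual model control.

    x, xdot  -- actual position [m] and velocity [m/s]
    x_target -- moving or fixed target position [m]
    Returns the force the exoskeleton applies to the patient [N];
    setting stiffness and damping to zero yields the zero-force
    (patient-in-charge) limit discussed in the text.
    """
    return stiffness * (x_target - x) - damping * xdot
```

A stiffness of 200 N/m and a 0.1 m foot-lift error, for instance, produce a 20 N upward force that vanishes as the foot reaches the target.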
16.3.2.2 Adaptivity of the assistance level<br />
Adaptivity of the assistance level is often accomplished by using a measure of the patient's effort or a measure<br />
of how well the patient performs a task either directly as a feedback control signal or indirectly as a means to<br />
scale one or more control parameter(s). Patient-driven motion reinforcement (Bernhardt et al., 2005b,<br />
Lokomat) belongs to the first category, since a support torque is calculated as the product of a scale factor <strong>and</strong><br />
the (modelled) active torque exerted by the patient. In Duschau-Wicke et al., 2008 (Lokomat) the support<br />
torque, proportional to the error between the actual <strong>and</strong> target trajectory, is recalculated at every gait cycle by<br />
an iterative learning controller. The second category groups several variations on the parameter scaling<br />
approach depending on the underlying control scheme. A forgetting factor is used for instance on the PD<br />
gains of a position control scheme (Emken et al., 2008, ARTHuR) <strong>and</strong> on an error-based learning controller<br />
(Emken et al., 2005, ARTHuR) reducing the assistance over time <strong>and</strong> ensuring the patient is sufficiently<br />
challenged. In Riener et al., 2005 (Lokomat) the impedance control parameters are scaled with the patient's<br />
effort, such that a larger contribution of the patient allows for larger trajectory deviations.<br />
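The forgetting-factor idea for the parameter-scaling category can be sketched as a per-cycle gain update: assistance decays unless persistent tracking errors justify re-growing it. The update rule, class name and constants are assumptions for illustration, not the published ARTHuR controller.

```python
class ForgettingPD:
    """PD gains decayed by a forgetting factor at the end of each gait
    cycle, so assistance fades over time and the patient stays
    sufficiently challenged; sustained errors re-grow the stiffness."""

    def __init__(self, kp, kd, forget=0.9, error_gain=5.0):
        self.kp, self.kd = kp, kd
        self.forget = forget          # < 1: assistance decays each cycle
        self.error_gain = error_gain  # re-grow kp when errors persist

    def end_of_cycle(self, mean_abs_error):
        """Update gains once per gait cycle from the mean tracking error."""
        self.kp = self.forget * self.kp + self.error_gain * mean_abs_error
        self.kd = self.forget * self.kd
```

With perfect tracking the gains shrink by 10% per cycle; a mean error of 2 rad would hold kp steady at 100 in this parameterisation.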
16.3.2.3 Adaptivity of timing<br />
Adaptivity of timing is always related to adaptivity in space <strong>and</strong> vice versa, since a gait pattern is defined both<br />
in space <strong>and</strong> time. Moreover, for position based control (e.g. position control, impedance control) the timing<br />
<strong>and</strong> the amplitude of the target trajectory affect the assistance level as well. Imposing a target trajectory<br />
without imposing a related timing has been done by using set point control or path control. In Banala et al.,<br />
2007 (ALEX) PD set point control is used in which the target position is only switched to the next set point if<br />
the actual position is close enough to the current set point. In Aoyagi et al., 2007 (PAM) the target trajectory<br />
of the PD controller is determined on the basis of the actual state of the robot <strong>and</strong> the target trajectory of the<br />
robot defined in state-space. A moving window limits candidate target trajectory points to points close to the<br />
actual state. The difference in timing between the two points is fed to a synchronisation algorithm that selects<br />
the appropriate target trajectory point. The approach in Duschau-Wicke et al., 2008 (Lokomat) combines<br />
impedance control with the aforementioned set point generation method <strong>and</strong> the use of a moving window. In<br />
addition, an automatic treadmill speed adaptation algorithm changes the treadmill speed according to an<br />
admittance control scheme, using the measured interaction force between the human <strong>and</strong> an external fixed<br />
reference as an input. Instead of altering time dependent trajectories, some methods generate a position<br />
dependent force field or velocity field. The force field controller proposed in Banala et al., 2009 (ALEX)<br />
displays a force tangential to a required foot trajectory inside a “virtual tunnel”, a normal force towards the<br />
target trajectory outside that tunnel <strong>and</strong> a damping force to limit foot velocity. In Cai et al., 2006 two velocity<br />
field controllers are proposed. One has a virtual tunnel with tangential velocity inside <strong>and</strong> inward spiralling<br />
velocity outside, whereas the other has a small moving window with tangential velocity inside and a radial<br />
velocity field outside pointed towards the window's centre.<br />
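The set-point advance logic used to decouple the spatial path from any imposed timing can be sketched in a few lines. The tolerance value and function name are assumptions; the idea follows the description of PD set point control above.

```python
def advance_set_point(index, trajectory, q_actual, tol=0.05):
    """Advance to the next set point of a target trajectory only once the
    actual position is within `tol` of the current one, so the patient
    controls the timing while the spatial path is preserved.

    index      -- index of the current set point
    trajectory -- list of target joint angles [rad]
    q_actual   -- measured joint angle [rad]
    Wraps around, since a gait pattern is cyclic.
    """
    if abs(trajectory[index] - q_actual) <= tol:
        return (index + 1) % len(trajectory)
    return index
```

If the patient stalls, the target simply waits at the current set point instead of running ahead on a fixed clock.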
16.3.2.4 Adaptivity in space<br />
Adaptivity in space is either achieved by altering the target trajectory of a position-based controller or it is<br />
intrinsic to a non-position-based controller (e.g. a force controller). The aforementioned force field and<br />
velocity field controllers are examples of the latter: the actual position can vary almost freely within the<br />
boundaries of the virtual tunnel. The force controllers discussed in the paragraph about task/function specific<br />
assistance also allow for patient-induced gait pattern adaptation. The same goes for the impedance controllers<br />
used in Lokomat (Riener et al., 2005), LOPES (Veneman et al., 2007) <strong>and</strong> WalkTrainer (Stauffer et al., 2009)<br />
and for the position controllers implemented in devices with intrinsically compliant or backdriveable<br />
actuators (ARTHuR, PAM, POGO (Reinkensmeyer et al., 2006), KNEXO (Beyl et al., 2009)). The trajectory<br />
tracking controller implemented in KNEXO provides a tunable limitation of the assistive torque allowing for<br />
large deviations from the target trajectory. Different target trajectory adaptation algorithms for position-based<br />
control have been proposed in Jezernik et al., 2004. These algorithms calculate a target gait pattern adaptation<br />
that minimises the active patient torque. In Vallery et al., 2009 the target trajectory of the impedance<br />
controller for the impaired leg is based on the recorded motion of the unimpaired leg fed to a so-called<br />
“complementary limb motion estimation” algorithm. This allows for a patient-induced adaptation of gait<br />
both in space and in timing. The use of surface EMG sensors in so-called proportional myoelectric control<br />
should be mentioned as well (Gordon <strong>and</strong> Ferris, 2007; Lee <strong>and</strong> Sankai, 2002). Providing an output torque<br />
proportional to processed EMG signals indirectly puts the human in control of the timing <strong>and</strong> the level of<br />
assistance. EMG-based torque control thus fits in the previous categories as well.<br />
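Proportional myoelectric control can be sketched as rectifying and averaging a raw EMG window into an activation envelope and scaling it to a torque. The gain, threshold and processing pipeline here are illustrative assumptions, not the processing used by Gordon and Ferris or by HAL.

```python
def myoelectric_torque(emg_window, gain, threshold=0.05):
    """Proportional myoelectric control sketch.

    emg_window -- recent raw EMG samples (arbitrary units)
    gain       -- scaling from activation envelope to torque [Nm/unit]
    threshold  -- resting activation subtracted to suppress noise

    Rectify-and-average forms a crude envelope; because the output torque
    follows the patient's own muscle activity, the patient implicitly
    controls both the timing and the level of assistance.
    """
    envelope = sum(abs(s) for s in emg_window) / len(emg_window)
    return gain * max(0.0, envelope - threshold)
```

Below the resting threshold the device stays passive; stronger voluntary activation yields proportionally more support.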
17 State of the Art in Robotic Systems for Examining Hazardous<br />
Environments<br />
One of the two demonstrators that will be used in the CORBYS project to demonstrate and evaluate the<br />
technologies developed is an existing autonomous robotic system, consisting of a robot arm mounted on a<br />
mobile platform, that can be used for the inspection of contaminated/hazardous environments in teleoperation<br />
or as a co-worker of a human.<br />
Disaster management aims to reduce or avoid potential damage from hazards, to assure prompt and<br />
appropriate assistance to victims of a disaster, and to achieve rapid and effective recovery. The main<br />
prerequisite of effective disaster management is fast and clear verification of possible contamination.<br />
Depending on the scenario at hand, first-response teams are equipped with different techniques. For example,<br />
the auxiliary fire brigades in Germany deploy NBC (nuclear-biological-chemical) Reconnaissance Vehicles<br />
(NBC RVs). The NBC RVs are mainly used for measuring, detecting and reporting radioactive and/or<br />
chemical contamination, as well as for recognising and reporting biological contamination (Meissner et al., 2005). However,<br />
the NBC RV is not permitted to enter the contaminated area. When measurements are required inside the<br />
contaminated zone, the crew members, dressed in special overalls, have to leave the vehicle and continue the<br />
measurements. One of the common techniques for the verification of contamination is collection of samples<br />
in affected zones. Bruemmer (Bruemmer et al. 2002) describes baseline sample collection for laboratory<br />
analysis as a three-phase process where, after an initial radiation survey performed by a radiation control<br />
technician and after collection of video coverage by a video technician, in the last phase a team of sampling<br />
technicians is sent into the facility to collect samples used to determine contamination levels (Figure 52).<br />
Typically, this data is then used to aid decontamination <strong>and</strong> decommissioning planning activities.<br />
Figure 52: Baseline sample collection for laboratory analysis (Bruemmer et al. 2002)<br />
In order to reduce humans’ exposure to danger during the exploration of contaminated/hazardous<br />
environments, significant effort has been made in recent years in developing and deploying mobile<br />
robotic systems. Existing Reconnaissance Robots, also known as Security Robots, mainly enable visual<br />
investigation of the affected areas and/or chemical measurements which, however, do not require collection of<br />
samples. Security Robots consist of mobile platforms equipped with different sensors (cameras, laser<br />
scanners, etc.) which allow autonomous or teleoperated navigation. They can be used for indoor or outdoor<br />
applications. For example, the OFRO robot of Robowatch Industries Ltd., a robotics company based in Berlin<br />
(http://www.robowatch.de), is the first mobile security robot worldwide for outdoor surveillance able to<br />
determine the actual cause of an alarm, evaluate the situation and take countermeasures. Mobile security<br />
robots are also often equipped with microphones and are used as rescue robots, such as the CRASAR robots used,<br />
among other incidents, at the World Trade Center (http://www.crasar.org), or the KOHGA3 ground robot (Figure 53)<br />
used by Japanese roboticists to assist with rescue and recovery operations after the earthquake that struck Japan in<br />
March 2011 (Guizzo, 2011; http://spectrum.ieee.org/static/japans-earthquake-and-nuclear-emergency).<br />
Robotic systems for clearing of explosive devices represent an important group<br />
of mobile security robots (http://www.rheinmetall-detec.com). Bomb disposal robotic systems consist of a<br />
robot arm with a gripper mounted on a mobile platform (Figure 53) and different sensors enabling remote robot<br />
control. The mobile platforms enable applications on staircases and on flat or mountainous terrain. However, even when equipped<br />
with robot arms, robots which are mainly used for clearing of explosive devices cannot provide the satisfactory<br />
sample quality required for reliable analysis.<br />
Figure 53: Mobile security robots: the KOHGA3 robot, an EOD robot for clearing of explosive devices (Rheinmetall robotic system), and the mobile security robot OFRO (RoboWatch)<br />
17.1 Robotic system for automated sampling<br />
The University of Bielefeld designed a robotic system which handles samples in a laboratory environment<br />
(Poggendorf, 2004). Further significant systems belonging to the group of mobile security robots for<br />
sampling are the PackBot developed by iRobot Corporation (Yamauchi, 2004), Telerob’s Safety Guard and<br />
tEODor (Saffiotti, 2004) and the NAT-II and T.S.R. EOD robots from Elektroland<br />
(http://www.elektroland.com.tr). However, most of those systems focus on teleoperation, where the robot arm<br />
serves as an elongation of the human arm, leading to a low level of automation. In contrast, the mobile robotic<br />
system for safe automated sampling, which will be used as the second CORBYS demonstrator, includes a<br />
redundant robot arm for dexterous manipulation in unstructured environments. RecoRob is navigated by the<br />
user but performs the collection of samples autonomously. It has been developed within the German<br />
national project RecoRob (Kuzmitcheva et al., 2009). The RecoRob objective is that the mobile investigation<br />
and robotic sampling platform replace the human investigation team and transfer the sampling/investigation<br />
cycle from the conventional manual structure (Figure 54(a)) to the structure shown in Figure 54(b).<br />
Figure 54: Autonomous vs. manual investigation of contamination area including sampling<br />
To meet the project objective, RecoRob has to satisfy the following basic requirements:<br />
• the system must allow the user to guide the mobile unit to acquire information concerning the<br />
relevant air/water/soil parameters (such as contamination) and to gather samples for further investigations,<br />
• the sample gathering must be performed free of cross-contamination,<br />
• the commands for guiding the unit, as well as the data acquired, must be sent instantly to the base using<br />
wireless communication,<br />
• the transferred measurement data, as well as information regarding the stage of task execution, must be<br />
visualised in an ergonomic way,<br />
• the movement of the unit must be realised manually, using a control joystick, or automatically,<br />
• a vision system is needed on the mobile unit to provide an impression of tele-presence,<br />
• the mobile unit must also report GPS data, for localisation and for data map generation,<br />
• in the case of an emergency, the mobile unit must be able to return to base by executing a track-back of<br />
the route,<br />
• the construction of the mobile unit and/or its components must allow decontamination in order to enable<br />
multiple uses of the system.<br />
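The emergency track-back requirement above amounts to replaying the recorded outbound route in reverse. The sketch below illustrates the idea under assumed data structures; it is not taken from the RecoRob implementation.

```python
def track_back_route(recorded_waypoints):
    """Emergency-return sketch: replay the recorded outbound waypoints in
    reverse order so the mobile unit retraces its own, already-traversed
    route back to base.

    recorded_waypoints -- list of (x, y) positions logged during the
                          outbound run (e.g. from GPS reports)
    Returns the waypoint sequence for the return trip.
    """
    if not recorded_waypoints:
        raise ValueError("no route recorded; cannot track back")
    return list(reversed(recorded_waypoints))
```

Retracing the logged route keeps the unit on terrain it has already proven passable, which matters in a contaminated or partially collapsed environment.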
To satisfy the functional requirements, the system is endowed with various hardware devices, as can be seen in<br />
Figure 55. For actuation, a SCHUNK lightweight robot arm with multiple degrees of freedom (DoF) is mounted on the<br />
mobile platform. The sensor system consists of a force-torque sensor in the manipulator’s wrist, a stereo<br />
camera system for 3D reconstruction, a camera for workspace observation and a camera for thermal<br />
inspection of the work area.<br />
Figure 55: (a) Hardware setup of the RecoRob reconnaissance robotic system,<br />
(b) Mapped Virtual Reality (MVR) used as simulation environment in the RecoRob system<br />
17.2 Intelligent automated investigation of a hazardous environment<br />
To overcome the difficulties and limitations of teleoperation, it is necessary to develop robot<br />
intelligence that can be interleaved with human intelligence. The Idaho<br />
National Engineering <strong>and</strong> Environmental Laboratory (INEEL) has developed a mixed-initiative robotic<br />
system which can shift modes of autonomy on the fly, relying on its own intrinsic intelligence to protect itself<br />
and the environment as it works with human(s) to accomplish critical tasks (Bruemmer et al. 2002). With this<br />
system, communication dropouts no longer result in the robot stopping dead in its tracks or, worse,<br />
continuing uncontrolled until it recognises that communications have failed. Instead, the robot may simply<br />
shift into a fully autonomous mode. Moreover, in a remote situation, the robot is usually in a much better position<br />
than the human to react to the local environment, <strong>and</strong> consequently the robot may take the leadership role<br />
regarding navigation. As leader, the robot can then “veto” dangerous human comm<strong>and</strong>s to avoid running into<br />
obstacles or tipping itself over. For this the robot has to provide the operator with the situational awareness<br />
required for controlling a robot (Valero et al., 2011).<br />
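The mixed-initiative behaviour described above can be condensed into a small arbitration rule. The mode names, command vocabulary and decision logic are illustrative assumptions, not the INEEL system's actual implementation.

```python
def arbitrate(comms_ok, operator_cmd, obstacle_ahead):
    """Mixed-initiative arbitration sketch.

    On a communication dropout the robot shifts to full autonomy instead
    of stopping dead or continuing blindly; while comms are up it may
    still veto an operator command that would drive it into an obstacle.

    Returns (mode, command_to_execute).
    """
    if not comms_ok:
        return ("autonomous", None)          # self-reliant navigation
    if operator_cmd == "forward" and obstacle_ahead:
        return ("shared", "stop")            # robot vetoes the dangerous command
    return ("teleoperation", operator_cmd)   # defer to the human operator
```

Even in this toy form, the veto branch shows why the robot must also feed situational awareness back to the operator: an unexplained "stop" would otherwise look like a fault.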
CORBYS will build on state-of-the-art robotic systems for the examination of hazardous environments,<br />
endowing them with cognitive capabilities to support autonomous sampling in two experimental scenarios.<br />
One of the most fascinating areas of future work is the need for the robot to be imbued with an<br />
ability to understand and predict human behaviour. The robot’s theory of human behaviour may be a rule set<br />
at a very simple level, or it may be a learned expectation developed through practiced evolutions with its<br />
human counterpart. Interaction between the robot <strong>and</strong> human may be through direct communications (verbal,<br />
gesture, touch, radio communications link) or indirect observation (physically struggling, erratic behaviour,<br />
unexpected procedural deviation). Interaction may also be triggered by the observation of environmental<br />
factors (rising radiation levels, the approach of additional humans, etc.).<br />
18 Conclusion<br />
In this document, the domain of robotic cognitive systems and their application in two demonstrators are<br />
introduced. The knowledge and requirements elicitation process, the interaction with clinical and robotic experts (for<br />
gait rehabilitation systems and autonomous robotic systems respectively), and the prioritised collection of<br />
requirements for the CORBYS systems are presented in detail. The requirements engineering analysis base<br />
reports the requirements engineering methodology (UI-REF), followed by the procedure and findings of<br />
knowledge elicitation from clinical partners regarding the first demonstrator, entailing important discussions<br />
on end-user demographics and gait biomechanics in normal and pathological walking.<br />
The finalised requirements for the first and second demonstrators are then presented, followed by the state-of-the-market<br />
in light of the CORBYS solutions. In total, 345 requirements have been gathered, of which 309 are classed as<br />
mandatory, 22 as desirable, and 14 as optional. All the requirements are detailed under the<br />
mechatronic control systems of CORBYS, the human control system of CORBYS, and the Robohumatic systems.<br />
<strong>Requirements</strong> for system integration <strong>and</strong> functional testing for the <strong>CORBYS</strong> solutions as well as evaluation<br />
are also reported in this document. The requirements prioritisation process has used a number of prioritisation<br />
filters, as reported in chapter 4, leading to a final subset of 44 main requirements for the CORBYS project<br />
falling under three main categories: cognitive systems, Demonstrator I specific, and Demonstrator II specific.<br />
Lastly, detailed state-of-the-art reviews are presented for the various relevant areas applicable to and within the<br />
scope of the project. These include sensors and perception, situation assessment, anticipation and initiation,<br />
cognitive robot control architectures, smart integrated actuators, non-invasive BCI, gait rehabilitation systems<br />
<strong>and</strong> hazardous area examining robots.<br />
19 References<br />
A surveillance based model to calculate the direct medical costs in Europe – Final Report. DG Sanco<br />
Public Health / Consumer Safety Institute, Amsterdam, 2004:<br />
http://www.eurosafe.eu.com/csi/eurosafe.nsf/projects.<br />
Agrawal, S., Banala, S., Mankala, K., Sangwan, V., Scholz, J., Krishnamoorthy, V., Hsu, W. (2007)<br />
Exoskeletons for gait assistance <strong>and</strong> training of the motor impaired, in Proceedings of the IEEE<br />
International Conference on Rehabilitation Robotics, pp. 1108-1113.<br />
Allem<strong>and</strong>, Y., Stauffer, Y., Clavel, R., <strong>and</strong> Brodard, R. (2009). Design of a new lower extremity orthosis<br />
for overground gait training with the WalkTrainer, in Proceedings of the 2009 IEEE International<br />
Conference on Rehabilitation Robotics, pp. 550-555.<br />
Almeida e Costa, F., Rocha, L. M., Costa, E., Harvey, I., <strong>and</strong> Coutinho, A., editors (2007). Advances in<br />
Artificial Life (Proc. ECAL 2007, Lisbon), volume 4648 of LNCS, Berlin. Springer.<br />
Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., Qin, Y. (2004). An integrated theory of the<br />
mind. Psychol Rev 111:1036–1060<br />
Anthony, T., Polani, D., <strong>and</strong> Nehaniv, C. (2009). Impoverished empowerment: ‘meaningful’ action<br />
sequence generation through bandwidth limitation. In Kampis, G. and Szathmáry, E., editors, Proc.<br />
European Conference on Artificial Life 2009, Budapest. Springer. In Press.<br />
Anthony, T., Polani, D., <strong>and</strong> Nehaniv, C. L. (2008). On preferred states of agents: how global structure is<br />
reflected in local structure. In (Bullock et al., 2008), pages 25–32.<br />
Aoyagi, D., Ichinose, W., Harkema, S., Reinkensmeyer, D., <strong>and</strong> Bobrow, J. (2007). A robot <strong>and</strong> control<br />
algorithm that can synchronously assist in naturalistic motion during body-weight-supported gait training<br />
following neurologic injury, IEEE Transactions on Neural Systems <strong>and</strong> Rehabilitation Engineering, vol.<br />
15, no. 3, pp. 387-400.<br />
Ashby, W. R. (1956). An Introduction to Cybernetics. Chapman & Hall Ltd.<br />
Assessing disability. Münchener Rückversicherungs-Gesellschaft, 2004. München:<br />
www.munichre.com/publications/302-04093_en.pdf.<br />
Aström, K.J., Wittenmark, B., (1995). Adaptive Control. Reading: Addison-Wesley.<br />
Atick, J. J. (1992). Could information theory provide an ecological theory of sensory processing? Network:<br />
Computation in Neural Systems, 3(2):213–251.<br />
Attneave, F. (1954). Informational aspects of visual perception. Psychol. Rev., 61:183–193.<br />
Avor, J.K. <strong>and</strong> Sarkodie-Gyan, T. (2009). An approach to sensor fusion in medical robots, IEEE 11th<br />
International Conference on Rehabilitation Robotics, Kyoto International Conference Center, Japan.<br />
Ay, N. <strong>and</strong> Polani, D. (2008). Information flows in causal networks. Advances in Complex Systems,<br />
11(1):17–41.<br />
Ay, N. <strong>and</strong> Wennekers, T. (2003). Dynamical properties of strongly interacting markov chains. Neural<br />
Networks, 16(10):1483–1497.<br />
Ay, N., Bertschinger, N., Der, R., Güttler, F., <strong>and</strong> Olbrich, E. (2008). Predictive information <strong>and</strong><br />
explorative behavior of autonomous robots. European Journal of Physics B, 63:329–339.<br />
Badii A, User-Intimate <strong>Requirements</strong> Hierarchy Resolution Framework (UI-REF): Methodology for<br />
Capturing Ambient Assisted Living Needs, Proceedings of the Research Workshop, Int. Ambient<br />
Intelligence Systems Conference (AmI’08), Nuremberg, Germany November 2008.<br />
Banala, S. K., Agrawal, S. K., Fattah, A., Krishnamoorthy, V., Hsu, W., Scholz, J., <strong>and</strong> Rudolph, K.<br />
(2006). Gravity-Balancing Leg Orthosis <strong>and</strong> Its Performance Evaluation, IEEE Transactions on Robotics,<br />
vol. 22, no. 6, pp. 1228-1239.<br />
Banala, S. K., Kim, S. H., Agrawal, S. K., <strong>and</strong> Scholz, J. P. (2009). Robot Assisted Gait Training With<br />
Active Leg Exoskeleton (ALEX), IEEE transactions on neural systems <strong>and</strong> rehabilitation engineering, vol.<br />
17, no. 1, pp. 2-8.<br />
Banala, S. K., Kulpe, A., <strong>and</strong> Agrawal, S. K. (2007). A Powered Leg Orthosis for Gait Rehabilitation of<br />
Motor-Impaired Patients, in Proceedings of IEEE International Conference on Robotics <strong>and</strong> Automation,<br />
pp. 4140-4145.<br />
Barbeau, H. <strong>and</strong> Rossignol, S. (1994). Enhancement of locomotor recovery following spinal cord injury,<br />
Current Opinion in Neurology, vol. 7, pp. 517-524.<br />
Barlow, H. B. (1959). Possible principles underlying the transformations of sensory messages. In<br />
Rosenblith, W. A., editor, Sensory Communication: Contributions to the Symposium on Principles of<br />
Sensory Communication, pages 217–234. The M.I.T. Press.<br />
Barlow, H. B. (2001). Redundancy reduction revisited. Network: Computation in Neural Systems,<br />
12(3):241–253.<br />
Barlow, J. S. (1984). EMG artifact minimization during clinical EEG recordings by special analog filtering.<br />
Electroencephalogr. Clin. Neurophysiol., 58(2):161–174.<br />
Barry, R.J., Clarke, A.R. <strong>and</strong> Johnstone, S.J. (2003). A review of electrophysiology in<br />
attention-deficit/hyperactivity disorder: I. Qualitative and quantitative electroencephalography. Clinical<br />
Neurophysiology, 114(2):171–183.<br />
Bastiaansen, M.C.M., Bocker, K.B.E., Cluitmans, P.J.M. <strong>and</strong> Brunia, C.H.M. (1999). Event-related<br />
desynchronization related to the anticipation of a stimulus providing knowledge of results. Clinical<br />
Neurophysiology, 110(2):250–260.<br />
BCC Research (2005). Prosthetics, Orthotics and Cosmetic Enhancement Products (Healthcare), Report<br />
Code HLC045A, October 2005. http://www.bccresearch.com/report/HLC045A.html<br />
Behrman, A. <strong>and</strong> Harkema S. (2000). Locomotor training after human spinal cord injury: a series of case<br />
studies, Physical Therapy, vol. 80, no. 7, pp. 688-700.<br />
Bell, C.J. et al. (2008) Control of a humanoid robot by a noninvasive brain-computer interface in humans.<br />
J. Neural Eng., 5(2):214–220.<br />
Benjamin, D., Lyons, D., Lonsdale, D., (2004). Adapt: A cognitive architecture for robotics, in 2004<br />
International Conference on Cognitive Modeling, A. R. Hanson <strong>and</strong> E. M. Riseman, Eds., Pittsburgh, PA.<br />
Berg, P. <strong>and</strong> Scherg, M. (1994). A multiple source approach to the correction of eye artifacts.<br />
Electroencephalogr. Clin. Neurophysiol., 90(3):229–241.<br />
Berger, W. (2008). International assessment of research and development in brain-computer interfaces.<br />
World Technology Evaluation Center.<br />
Bernhardt, M., Frey, M., Colombo, G., <strong>and</strong> Riener. R. (2005). Hybrid force-position control yields<br />
cooperative behaviour of the rehabilitation robot Lokomat, in 9th International Conference on<br />
Rehabilitation Robotics (ICORR 2005), pp. 536-539.<br />
Bernhardt, M., Lutz, P., <strong>and</strong> Frey, M. (2005). Physiological Treadmill Training with the 8-DOF<br />
Rehabilitation Robot LOKOMAT, in BMT 2005, Jahrestagung der deutschen Gesellschaft für<br />
biomedizinische Technik.<br />
Bertschinger, N., Olbrich, E., Ay, N., <strong>and</strong> Jost, J. (2008). Autonomy: an information-theoretic perspective.<br />
Biosystems, 91:331–345.<br />
Beyl, P., Van Damme, M., Van Ham, R., V<strong>and</strong>erborght, B., <strong>and</strong> Lefeber, D. (2009). Design <strong>and</strong> control of<br />
a lower limb exoskeleton for robot-assisted gait training, Applied Bionics <strong>and</strong> Biomechanics, vol. 6, no. 2,<br />
pp. 229-243.<br />
Bharadwaj, K. <strong>and</strong> Sugar, T. G. (2006). Kinematics of a Robotic Gait Trainer for Stroke Rehabilitation, in<br />
Proceedings of the 2006 IEEE International Conference on Robotics <strong>and</strong> Automation, pp. 3492-3497.<br />
Bialek, W., Nemenman, I., <strong>and</strong> Tishby, N. (2001). Predictability, complexity <strong>and</strong> learning. Neural<br />
Computation, 13:2409–2463.<br />
179
<strong>D2.1</strong> <strong>Requirements</strong> <strong>and</strong> <strong>Specification</strong><br />
BioEra, n.d. BioEra - visual designer for biofeedback. [online] Available at<br />
[Accessed 13 May 2011].<br />
BioExplorer, n.d. BioExplorer. [online] Available at [Accessed 13<br />
May 2011].<br />
BioSig Project, n.d. The BioSig Project. [online] Available at http://biosig.sourceforge.net/ [Accessed 13<br />
May 2011].<br />
Birbaumer, N. et al. (1999). A spelling device for the paralysed. Nature, 398:297–298.<br />
Bladon, P., Hall, R. J., Wright, W. A. (2002). Situation Assessment using Graphical Models, Proceedings<br />
of the Fifth International Conference on Information Fusion, Vol. 2, Pages 886-893.<br />
Blahut, R. (1972). Computation of channel capacity <strong>and</strong> rate distortion functions. IEEE Transactions on<br />
Information Theory, 18(4):460–473.<br />
Blanc-Garin, J. (1994). Patterns of recovery from hemiplegia following stroke. Neuropsychological<br />
Rehabilitation, 4(4):359–385.<br />
Blankertz, B., Curio, G. <strong>and</strong> Müller, K.-R. (2002). Classifying single trial eeg: Towards brain computer<br />
interfacing. In Advances in Neural Inf. Proc. Systems (NIPS 01), 14:157–164.<br />
Blankertz, B., Dornhege, G., Schäfer, C., Krepki, R., Kohlmorgen, J., Müller, K.R., Kunzmann, V., Losch,<br />
F., <strong>and</strong> Curio, G. (2003). Boosting bit rates <strong>and</strong> error detection for the classification of fast-paced motor<br />
comm<strong>and</strong>s based on single-trial eeg analysis. IEEE Trans. Neural Syst. Rehabil. Eng., 11:127–131.<br />
Blasch, E. <strong>and</strong> Plano, S. (2002). JDL Level 5 fusion model: user refinement issues <strong>and</strong> applications in group<br />
tracking, SPIE Vol. 4729, Aerosense, pp. 270–279.<br />
Blaya, J. A. <strong>and</strong> Herr, H. (2004). Adaptive control of a variable-impedance ankle-foot orthosis to assist<br />
drop-foot gait, IEEE Transactions on Neural Systems <strong>and</strong> Rehabilitation Engineering, vol. 12, no. 1, pp.<br />
24-31.<br />
Bolt, R.A. (1980). Put-that-there: Voice <strong>and</strong> gesture at the graphic interface. Computer Graphics,<br />
14(3):262-270.<br />
Bosman, P. A. N. <strong>and</strong> Poutré, H. L. (2007). Learning <strong>and</strong> anticipation in online dynamic optimization with<br />
evolutionary algorithms: The stochastic case. In Proc. GECCO 2007, pages 1165–1172, New York. ACM<br />
Press.<br />
Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., Cohen, J. D., (2001). Conflict monitoring <strong>and</strong><br />
cognitive control. Psychological Review, 108:624–652.<br />
Brainterface, n.d. BF++ 2.0: The Body Language Framework. [online] Available<br />
at [Accessed 13 May 2011].<br />
Brenner, N., Bialek, W., <strong>and</strong> de Ruyter van Steveninck, R. (2000). Adaptive rescaling optimizes<br />
information transmission. Neuron, 26:695–702.<br />
Brooks, R. A., (1986). A Robust Layered Control System for a Mobile Robot, IEEE Journal of Robotics<br />
<strong>and</strong> Automation, Vol. 2, No. 1, pp. 14–23.<br />
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47(1–3):139–159.<br />
Brown, B. (1970). Recognition of aspects of consciousness through association with eeg alpha activity<br />
represented by a light signal. Psychophysiology, 6:442–452.<br />
Brown, J. <strong>and</strong> Frank, J. (1987). Influence of event anticipation on postural actions accompanying<br />
voluntary movement. Exp. Brain Res., 67:645–650.<br />
Brown, M., Harris, C., (1994). Neurofuzzy Adaptive Modeling <strong>and</strong> Control, Prentice-Hall: Englewood<br />
Cliffs.<br />
Browne, M. <strong>and</strong> Cutmore, T. R. (2002). Low-probability event-detection <strong>and</strong> separation via statistical<br />
wavelet thresholding: an application to psychophysiological denoising. Clin. Neurophysiol., 113(9):1403–1411.<br />
Bruemmer, D. J., Marble, J. L., Dudenhoeffer, D. D., Anderson, M. O., McKay, M. D. (2002).<br />
Intelligent Robots for Use in Hazardous DOE Environments, in Robotics <strong>and</strong> Intelligent Machines in the<br />
U.S. Department of Energy: A Critical Technology Roadmap.<br />
Bruyninckx, H. (2001). Open robot control software: the OROCOS project. In IEEE International Conference<br />
on Robotics <strong>and</strong> Automation, volume 3.<br />
Buch, E. et al. (2008). Think to move: a neuromagnetic brain-computer interface (BCI) system for chronic<br />
stroke. Stroke, 39(910).<br />
Bullock, S., Noble, J., Watson, R., <strong>and</strong> Bedau, M. A., editors (2008). Artificial Life XI: Proceedings of the<br />
Eleventh International Conference on the Simulation <strong>and</strong> Synthesis of Living Systems, Winchester, 5–8<br />
Aug. MIT Press, Cambridge, MA.<br />
Burghart, C., Mikut, R., Stiefelhagen, R., Asfour, T., Holzapfel, H., Steinhaus, P., <strong>and</strong> Dillmann, R. (2005). A<br />
cognitive architecture for a humanoid robot: A first approach, IEEE-RAS International Conference on<br />
Humanoid Robots (Humanoids 2005), pp. 357–362.<br />
Business Wire (2009). Research <strong>and</strong> Markets: Market Tracking: Wheelchairs in Europe Top 5 2009.<br />
Published Thursday, 12 February 2009. [online] Available at http://www.allbusiness.com/health-care/medical-devices-equipment-prosthetic/11781497-1.html<br />
Cai, L. L., Fong, A. J., Otoshi, C. K., Liang, Y., Burdick, J. W., Roy, R. R., <strong>and</strong> Edgerton, V. R. (2006).<br />
Implications of assist-as-needed robotic step training after a complete spinal cord injury on intrinsic<br />
strategies of motor learning, Journal of Neuroscience, vol. 26, no. 41, pp. 10564-8.<br />
Campbell, A.T., Choudhury, T., Hu, S., Lu, H., Mukerjee, M.K., Rabbi, M. <strong>and</strong> Raizada, R.D.S. (2010).<br />
Neurophone: Brain-mobile phone interface using a wireless eeg headset.<br />
Capdepuy, P., Polani, D., <strong>and</strong> Nehaniv, C. L. (2007a). Constructing the basic umwelt of artificial agents:<br />
An information-theoretic approach. In (Almeida e Costa et al., 2007), pages 375–383.<br />
Capdepuy, P., Polani, D., <strong>and</strong> Nehaniv, C. L. (2007b). Construction of an internal predictive model by<br />
event anticipation. In Butz, M., Sigaud, O., Pezzulo, G., <strong>and</strong> Baldassarre, G., editors, Proceedings of the<br />
Third Workshop on Anticipatory Behavior in Adaptive Learning Systems, LNCS/LNAI, pages 218–232.<br />
Springer.<br />
Capdepuy, P., Polani, D., <strong>and</strong> Nehaniv, C. L. (2007c). Grounding action-selection in event-based<br />
anticipation. In (Almeida e Costa et al., 2007), pages 253–262.<br />
Carmena, J.M. et al. (2003). Learning to control a brain-machine interface for reaching <strong>and</strong> grasping by<br />
primates. PLoS Biology, 1(2):193–208.<br />
Carver, N., Lesser, V. (1991). A New Framework for Sensor Interpretation: Planning to Resolve Sources<br />
of Uncertainty, Proceedings of AAAI-91, 724-731.<br />
Cester, I., Dunne, S., Riera, A., Ruffini, G. (2008). Enobio: Wearable, wireless, 4-channel<br />
electrophysiology recording system optimized for dry electrodes.<br />
Chapin, J.K. et al. (1999). Real-time control of a robot arm using simultaneously recorded neurons in the<br />
motor cortex. Nature Neuroscience, 2(7):664–670.<br />
Cheyer, A. <strong>and</strong> Martin, D. (2001). The Open Agent Architecture. Journal of Autonomous Agents <strong>and</strong> Multi-<br />
Agent Systems, vol. 4, no. 1, pp. 143-148, March 2001.<br />
Choi, S., Cichocki, A., Park, H. M. <strong>and</strong> Lee, S. Y. (2005). Blind source separation <strong>and</strong> independent<br />
component analysis: A review. Neural Information Processing-Letters <strong>and</strong> Review, 6(1):1–57.<br />
Christ, S., Hohnsbein, J., Falkenstein, M., Hoormann, J. (2000). ERP components on reaction errors <strong>and</strong><br />
their functional significance: A tutorial. Biological Psychology, 51:87–107.<br />
Clarke, A. R., Barry, R. J., McCarthy, R., <strong>and</strong> Selikowitz, M. (2001). EEG-defined subtypes of children with<br />
attention-deficit/hyperactivity disorder. Clinical Neurophysiology, 112(11):2098–2105.<br />
Coles, M., Bekkering, H., Van Schie, H., Mars, R. (2004). Modulation of activity in medial frontal <strong>and</strong><br />
motor cortices during error observation. Nature Neuroscience, 7:549–554.<br />
Coles, M.G.H., Gratton, G. <strong>and</strong> Donchin, E. (1988). Detecting early communication: using measures of<br />
movement-related potentials to illuminate human information processing. Biol Psychol, 26:69–89.<br />
Colombo, G., Wirz, M., <strong>and</strong> Dietz, V. (2001). Driven gait orthosis for improvement of locomotor training<br />
in paraplegic patients, Spinal Cord, vol. 39, pp. 252-253.<br />
CoolBOT Project, n.d. CoolBOT Project’s site. [online] Available at: http://www.coolbotproject.org<br />
[Accessed 13 May 2011].<br />
Corradini, A., Mehta, M., Bernsen, N.O., Martin, J.-C., Abrilian, S. (2005). Multimodal Input Fusion in<br />
Human-Computer Interaction: On the Example of the NICE Project. Data Fusion for Situation<br />
Monitoring, Incident Detection, Alert <strong>and</strong> Response Management, E. Shahbazian et al. Ed. IOS Press.<br />
Costa, N. <strong>and</strong> Caldwell, D. (2006). Control of a biomimetic "soft-actuated" 10 DoF lower body<br />
exoskeleton, in Proceedings of the IEEE/RAS-EMBS International Conference on Biomedical Robotics<br />
<strong>and</strong> Biomechatronics, pp. 495-501.<br />
Cover, T. M. <strong>and</strong> Thomas, J. A. (1991). Elements of Information Theory. Wiley, New York.<br />
CRASAR-Center for Robot-Assisted Search <strong>and</strong> Rescue at Texas A&M University [online] Available at:<br />
<br />
Croft, R. J. <strong>and</strong> Barry, R. J. (2000). Removal of ocular artifact from the eeg: a review. Neurophysiol. Clin.,<br />
30(1):5–19.<br />
Csíkszentmihályi, M. (1978). Beyond Boredom <strong>and</strong> Anxiety: Experiencing Flow in Work <strong>and</strong> Play.<br />
Cambridge University Press, Cambridge.<br />
Dal Seno, B. (2009). Toward an Integrated P300- <strong>and</strong> ErrP-Based Brain-Computer Interface. PhD thesis,<br />
Politecnico di Milano.<br />
Darken, C. J. (2005). Towards learned anticipation in complex stochastic environments. In Proc. Artificial<br />
Intelligence for Interactive Digital Entertainment Conference (AIIDE).<br />
Das, S., Grey, R., Gonsalves, P. (2002). Situation Assessment via Bayesian Belief Networks, Proceedings<br />
of the Fifth International Conference on Information Fusion, Annapolis, Maryl<strong>and</strong>.<br />
Deecke, L., Grozinger, B. <strong>and</strong> Kornhuber, H.H. (1976). Voluntary finger movement in man: cerebral<br />
potentials <strong>and</strong> theory. Biol Cybernet, 23:99–119.<br />
Deecke, L., Scheid, P. <strong>and</strong> Kornhuber, H.H. (1969). Distribution of readiness potential, pre-motion<br />
positivity <strong>and</strong> motor potential of the human cerebral cortex preceding voluntary finger movement. Exp<br />
Brain Res, 7:158–168.<br />
Delorme, A. <strong>and</strong> Makeig, S. (2004). EEGLAB: an open source toolbox for analysis of single-trial eeg<br />
dynamics. Journal of Neuroscience Methods, 134:9–21.<br />
Der, R. (2000). Selforganized robot behavior from the principle of homeokinesis. In Groß, H.-M., Debes,<br />
K., <strong>and</strong> Böhme, H.-J., editors, Proc. Workhop SOAVE ’2000 (Selbstorganisation von adaptivem<br />
Verhalten), volume 643 of Fortschritt-Berichte VDI, Reihe 10, pages 39–46, Ilmenau. VDI Verlag.<br />
Der, R. (2001). Self-organized acquisition of situated behavior. Theory Biosci., 120:1–9.<br />
Der, R., Hesse, F., <strong>and</strong> Martius, G. (2006). Rocking stumper <strong>and</strong> jumping snake from a dynamical system<br />
approach to artificial life. J. Adaptive Behavior, 14(2):105–115.<br />
Der, R., Steinmetz, U., <strong>and</strong> Pasemann, F. (1999). Homeokinesis – a new principle to back up evolution<br />
with learning. In Mohammadian, M., editor, Computational Intelligence for Modelling, Control, <strong>and</strong><br />
Automation, volume 55 of Concurrent Systems Engineering Series, pages 43–47. IOS Press.<br />
Deutsche Industrienorm für Begriffe der Informationsverarbeitung, Teil 5: Begriffe, Aufbau digitaler<br />
Rechensysteme (German Industrial Standard on information-processing terminology, Part 5: terms <strong>and</strong> structure of digital computing systems).<br />
Dietsch, J. (2011). Imitating Ourselves in Silicon, IEEE Robotics & Automation Magazine.<br />
Dietz, V., Colombo, G., <strong>and</strong> Jensen, L. (1994). Locomotor activity in spinal man, Lancet, vol. 344, pp.<br />
1260-1263.<br />
Dollar, A. <strong>and</strong> Herr, H. (2008). Lower extremity exoskeletons <strong>and</strong> active orthoses: challenges <strong>and</strong> state-of-the-art,<br />
IEEE Transactions on Robotics, vol. 24, no. 1, pp. 144-158.<br />
Donaldson-Matasci, M. C., Bergstrom, C. T., <strong>and</strong> Lachmann, M. (2010). The fitness value of information.<br />
Oikos, 119:219–230.<br />
Dragon Runner Reconnaissance Robot. [online] Available at: <br />
[Accessed 15 July 2011]<br />
Dreinhöfer, K.E., Merx, H., Puhl, W. (2003). In Schwerpunktbericht (focus report) Decade of Bone <strong>and</strong> Joint Diseases.<br />
Ed. Statistisches Bundesamt (German Federal Statistical Office).<br />
Dubois, D. M. (2003). Mathematical foundations of discrete <strong>and</strong> functional systems with strong <strong>and</strong> weak<br />
anticipations. In Anticipatory Behavior in Adaptive Learning Systems, Lecture Notes in Computer<br />
Science, pages 107–125. Springer.<br />
Duschau-Wicke, A., Zitzewitz, J. v., Lünenburger, L., <strong>and</strong> Riener, R. (2008). Patient-driven cooperative<br />
gait training with the rehabilitation robot Lokomat, in ECIFMBE-IFMBE Proceedings 22, pp. 1616-1619.<br />
EC Directorate-General Health & Consumer Protection, Public Health. Musculoskeletal Problems And<br />
Functional Limitation, The Great Public Health Challenge for the 21st Century (Grant agreement<br />
S12.297217), University of Oslo, Department of General Practice <strong>and</strong> Community Medicine, The Bone &<br />
Joint Decade 2000 – 2010, Oslo, October 2003<br />
Edelman, G. M., (1987). Neural Darwinism: The Theory of Neuronal Group Selection, New York: Basic<br />
Books.<br />
Ekkelenkamp, R., Veltink, P., Stramigioli, S., <strong>and</strong> Van der Kooij, H. (2007). Evaluation of a Virtual Model<br />
Control for the selective support of gait functions using an exoskeleton, in Proceedings of the IEEE 10th<br />
International Conference on Rehabilitation Robotics, pp. 693-699.<br />
Elbert, T., Flor, H., Birbaumer, N., Knecht, S., Hampson, S., Larbig, W., <strong>and</strong> Taub, E. (1994). Extensive<br />
reorganization of the somatosensory cortex in adult humans after nervous system injury, Neuroreport, vol.<br />
5, no. 18, pp. 2593-2597.<br />
Elektrol<strong>and</strong> Company, Ankara, Turkey, http://www.elektrol<strong>and</strong>.com.tr/<br />
Elting, C., Strube, M., Moehler, G., Rapp, S., <strong>and</strong> Williams, J. (2002). The Use of Multimodality within the<br />
EMBASSI System, M&C2002 - Usability Engineering Multimodaler Interaktionsformen Workshop,<br />
Hamburg, Germany.<br />
Emken, J. L. <strong>and</strong> Reinkensmeyer, D. J. (2005). Robot-Enhanced motor learning: accelerating internal<br />
model formation during locomotion by transient dynamic amplification, IEEE Transactions on Neural<br />
Systems <strong>and</strong> Rehabilitation Engineering, vol. 13, pp. 33-39.<br />
Emken, J. L., Benitez, R., <strong>and</strong> Reinkensmeyer, D. J. (2007). Human-robot cooperative movement training:<br />
Learning a novel sensory motor transformation during walking with robotic assistance-as-needed, Journal<br />
of NeuroEngineering <strong>and</strong> Rehabilitation, vol. 4, no. 8.<br />
Emken, J. L., Harkema, S. J., Beres-Jones, J. A., Ferreira, C. K., <strong>and</strong> Reinkensmeyer, D. J. (2008).<br />
Feasibility of manual teach-<strong>and</strong>-replay <strong>and</strong> continuous impedance shaping for robotic locomotor training<br />
following spinal cord injury, IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 322-334.<br />
Emken, J. L., Wynne, J. H., Harkema, S. J., <strong>and</strong> Reinkensmeyer, D. J. (2006). A Robotic Device for Manipulating<br />
Human Stepping, IEEE Transactions on Robotics, vol. 22, no. 1, pp. 185-189.<br />
Emken, J., Bobrow, J., <strong>and</strong> Reinkensmeyer, D. (2005). Robotic movement training as an optimization<br />
problem: designing a controller that assists only as needed, in Proceedings of 9th International Conference<br />
on Rehabilitation Robotics, pp. 307-312.<br />
Emotiv, n.d. EPOC neuroheadset. [online] Available at http://www.emotiv.com/store/hardware/epocbci/epoc-neuroheadset/<br />
[Accessed 13 May 2011].<br />
Endsley, M. (2000). Theoretical underpinnings of Situation Awareness: a critical review, Situation<br />
Awareness Analysis <strong>and</strong> Measurement, Mahwah.<br />
Erman, L. D., Hayes-Roth, F., Lesser, V. R., Reddy, D. R. (1980). The Hearsay-II speech-underst<strong>and</strong>ing<br />
system: Integrating knowledge to resolve uncertainty, In ACM Computing Surveys, volume 12 (2), pages<br />
213–253.<br />
Escolano, C., Antelis, J., <strong>and</strong> Minguez, J. (2009). Human Brain-Teleoperated Robot between Remote<br />
Places. IEEE International Conference on Robotics <strong>and</strong> Automation (ICRA).<br />
European Bone <strong>and</strong> Joint Health Strategies Project, A Public Health Strategy to Reduce the Burden of<br />
Musculoskeletal Conditions, (Grant Agreement : SI2.304 598), The Bone & Joint Decade, Department of<br />
Orthopedics, University Hospital, SE-221 85 LUND, Sweden, ISBN 91-975284-0-4<br />
European L<strong>and</strong> Robot trials, 2006. [online] Available at: < http://www.rheinmetall-detec.com/> [Accessed<br />
15 July 2011].<br />
European Opinion Research Group EEIG. Health, Food <strong>and</strong> Alcohol <strong>and</strong> Safety. Special Eurobarometer<br />
186. 2003. European Commission.<br />
Fairclough, S.H. (2009). Fundamentals of physiological computing, Interacting with Computers 21, pp.<br />
133-145.<br />
Farris, R. J., Quintero, H. A., Withrow, T. J., <strong>and</strong> Goldfarb, M. (2009). Design of a Joint-Coupled Orthosis<br />
for FES-Aided Gait, in 2009 IEEE International Conference on Rehabilitation Robotics, pp. 246-252.<br />
Fatourechi, M., Bashashati, A., Ward, R.K., Birchi, G.E. (2007). Emg <strong>and</strong> eog artifacts in brain computer<br />
interface systems: A survey. Clinical Neurophysiology, 118:480–494.<br />
Ferrez, P.W. <strong>and</strong> del R. Millán, J. (2008). Error-related eeg potentials generated during simulated<br />
brain-computer interaction. IEEE Transactions on Biomedical Engineering, 55(3):923–929.<br />
Foster, D. (1990). Eeg <strong>and</strong> subjective correlates of alpha-frequency binaural-beat stimulation combined<br />
with alpha biofeedback.<br />
Fowler, F.J., Gill, H.S. (1990). The industrial orthopedic rehabilitation market: a niche opportunity, Hosp<br />
Technol Ser. May;9(13):1-13.<br />
Friel, P.N. (2007). Eeg biofeedback in the treatment of attention deficit/hyperactivity disorder. Alternative<br />
Medicine Review, 12(2):146–151.<br />
g.tec Technologies, n.d. g.SAHARA, active dry electrode system. [online] Available at<br />
[Accessed 13 May<br />
2011].<br />
g.tec Technologies, n.d. g.tec medical engineering. [online] Available at <http://www.gtec.at/> [Accessed<br />
13 May 2011].<br />
Gargiulo, G., Calvo, R.A., Bifulco, P., Cesarelli, M., Jin, C., Mohamed, A. <strong>and</strong> Van Schaik, A. (2010). A<br />
new eeg recording system for passive dry electrodes. Clinical Neurophysiology, 121(5):686–693.<br />
Gat, E., (1997). On three-layer architectures. In Artificial Intelligence <strong>and</strong> Mobile Robots. MIT/AAAI<br />
Press.<br />
Geetha, G. <strong>and</strong> Geethalakshmi, S.N. (2011). Scrutinizing different techniques for artifact removal from<br />
eeg signals. International Journal of Engineering Science <strong>and</strong> Technology, 3(2).<br />
Gerkey, B. P., Vaughan, R. T. <strong>and</strong> Howard, A. (2003). The player/stage project: Tools for multi-robot <strong>and</strong><br />
distributed sensor systems. In Proceedings of the International Conference on Advanced Robotics, pages<br />
317–323, Coimbra, Portugal.<br />
Gibbon, D., Mertins, I., Moore, R. (2000). H<strong>and</strong>book of Multimodal <strong>and</strong> Spoken Dialogue Systems:<br />
Resources, Terminology <strong>and</strong> Product Evaluation, Springer.<br />
Gordon, K. E. <strong>and</strong> Ferris, D. P. (2007). Learning to walk with a robotic ankle exoskeleton, Journal of<br />
Biomechanics, vol. 40, pp. 2636-2644.<br />
Gratch, J. <strong>and</strong> Marsella, S. (2004). A domain-independent framework for modeling emotion. Journal of<br />
Cognitive Systems Research, 5(4):269–306.<br />
GROUND ROBOTS – 510 PACKBOT®. [online] Available at:<br />
[Accessed 10 July 2011].<br />
Grozea, C., Voinescu, C.D. <strong>and</strong> Fazli, S. (2011). Bristle-sensors-low-cost flexible passive dry eeg<br />
electrodes for neurofeedback <strong>and</strong> bci applications. Journal of Neural Engineering, 8(2).<br />
Gruber, T., Müller, M., <strong>and</strong> Elbert, T. (1999). Selective visual-spatial attention alters induced gamma-b<strong>and</strong><br />
responses in the human eeg. Clinical Neurophysiology, 110:2074.<br />
Guglielmelli, E., Johnson, M. J., <strong>and</strong> Shibata, T. (2009). Guest Editorial Special Issue on Rehabilitation<br />
Robotics, IEEE Transactions on Robotics, vol. 25, no. 3, pp. 477-480.<br />
Guizzo, E. (2011), Japan Earthquake: More Robots to the Rescue, IEEE Spectrum Magazine, May 2011<br />
issue<br />
Gwin, J.T., Gramann, K., Makeig, S. <strong>and</strong> Ferris, D. P. (2010). Removal of movement artifact from<br />
high-density eeg recorded during walking <strong>and</strong> running. J Neurophysiol., 103:3526–3534.<br />
Hamadicharef, B., Zhang, H.H., Guan, C., Wang, C.C., Phua, K.S., Tee, K.P., <strong>and</strong> Ang, K.K. (2009).<br />
Learning eeg-based spectral-spatial patterns for attention level measurement. In IEEE International<br />
Symposium on Circuits <strong>and</strong> Systems (ISCAS).<br />
Hans, M. <strong>and</strong> Baum, W. (2001). Concept of a hybrid architecture for Care-O-bot. In Proc. IEEE<br />
International Workshop on Robot <strong>and</strong> Human Interactive Communication (RoMan), pp. 407- 411.<br />
Harkema, S. J. (2001). Neural plasticity after human spinal cord injury: application of locomotor training<br />
to the rehabilitation of walking, The Neuroscientist, vol. 7, no. 5, pp. 455-468.<br />
Haufler, A.J., Spalding, T.W., Santa Maria, D.L., <strong>and</strong> Hatfield, B.D. (2000). Neuro-cognitive activity<br />
during a self-paced visuospatial task: comparative eeg profiles in marksmen <strong>and</strong> novice shooters.<br />
Biological Psychology, 53:131–160.<br />
Hayes-Roth, B., Buchanan, B., Lichtarge, O., Hewett, M., Altman, R., Brinkley, J., Cornelius, C., Duncan,<br />
B., <strong>and</strong> Jardetzky, O. (1986). PROTEAN: Deriving Protein Structure from Constraints, Proceedings of<br />
AAAI-86, 904-909.<br />
He, P., Wilson, G. <strong>and</strong> Russell, C. (2004). Removal of ocular artifacts from electro-encephalogram by<br />
adaptive filtering. Med. Biol. Eng. Comput., 42(3):407–412.<br />
Hericks, M., Krebs, U., Kuzmicheva, O., (2011). A Mobile Reconnaissance Robot for Investigation of<br />
Dangerous Sites, the 2011 IEEE/RSJ International Conference on Intelligent Robots <strong>and</strong> Systems (IROS<br />
2011), San Francisco, California, 2011 (accepted)<br />
Hesse S. <strong>and</strong> Uhlenbrock, D. (2000). A mechanized gait trainer for restoration of gait, Journal of<br />
Rehabilitation Research <strong>and</strong> Development, vol. 37, no. 6, pp. 701-708.<br />
Hesse, S., Bertelt, C., Jahnke, M. T., Schaffrin, A., Baake, P., <strong>and</strong> Malezic, M. (1995). Treadmill Training<br />
with partial body weight support compared with physiotherapy in nonambulatory hemiparetic patients,<br />
Stroke, vol. 26, pp. 976-981.<br />
Hidler, J., Hamm, L. F., Lichy, A., <strong>and</strong> Groah, S. L. (2008). Automating activity-based interventions: The<br />
role of robotics, Journal of Rehabilitation Research & Development, vol. 45, no. 2, pp. 337-344.<br />
Higgins, R.P. (2005). Automatic Event Recognition for Enhanced Situational Awareness in UAV Video,<br />
SIMA 2005, Atlantic City, NJ.<br />
Hill, N.J., Lal, T.N., Bierig, K., Birbaumer, N. <strong>and</strong> Schölkopf, B. (2004). Attentional modulation of<br />
auditory event-related potentials in a brain-computer interface. Biomedical Circuits <strong>and</strong> Systems.<br />
Hillyard, S.A., Hink, R.F., Schwent, V.L. <strong>and</strong> Picton, T.W. (1973). Electrical signs of selective attention in<br />
the human brain. Science, 182:177–180.<br />
Hitt, J., Oymagil, A. M., <strong>and</strong> Sugar, T. (2007). Dynamically controlled ankle-foot orthosis (DCO) with<br />
regenerative kinetics: incrementally attaining user portability, 2007 Proceedings of the IEEE International<br />
Conference on Robotics <strong>and</strong> Automation Roma, Italy, pp. 1541-1546.<br />
Howard, C., Stumptner, M. (2005). Situation Assessments Using Object Oriented Probabilistic Relational<br />
Models, Proceedings of the 7th International Conference on Information Fusion, Vol. 2<br />
Howard, R. A. (1966). Information value theory. IEEE Transactions on Systems Science <strong>and</strong> Cybernetics,<br />
SSC-2:22–26.<br />
Hoyer, P. O., Janzing, D., Mooij, J. M., Peters, J., <strong>and</strong> Schölkopf, B. (2009). Nonlinear causal discovery<br />
with additive noise models. In Koller, D., Schuurmans, D., Bengio, Y., <strong>and</strong> Bottou, L., editors, Advances<br />
in Neural Information Processing Systems 21: Proceedings of the 2008 Conference, pages 689–696, Red<br />
Hook, NY. Curran.<br />
Huang, R.-S., Jung, T.-P., <strong>and</strong> Makeig, S. (2007). Multi-scale eeg brain dynamics during sustained<br />
attention tasks. Proceedings of the 2007 IEEE International Conference on Acoustics, 4:1173–1176.<br />
Hunt, J. (2002). Blackboard Architectures. JayDee Technology Ltd.<br />
Hurtig, T., Jokinen, K. (2006). Modality Fusion in a Route Navigation System, Workshop on Effective<br />
Multimodal Dialogue Interface in International Conference on Intelligent User Interfaces, Sydney,<br />
Australia.<br />
Ikeda, A., Lüders, H.O., Collura, T.F., Burgess, R.C., Morris, H.H., Hamano, T., et al. (1996). Subdural<br />
potentials at orbitofrontal <strong>and</strong> mesial prefrontal areas accompanying anticipation <strong>and</strong> decision making in<br />
humans: a comparison with bereitschaftspotential. Electroenceph Clin Neurophysiol, 98:206–12.<br />
Ingram, H.A., van Donkelaar, P., Cole, J., Vercher, J.L., Gauthier, G.M. <strong>and</strong> Miall, R.C. (2000). The role<br />
of proprioception <strong>and</strong> attention in a visuomotor adaptation task. Exp Brain Res, 132:114–126.<br />
Injuries in the European Union: Statistics Summary 2005–2007. Robert Bauer <strong>and</strong> Monica Steiner (KfV),<br />
European Commission, Health <strong>and</strong> Consumers (DG Sanco), Vienna, November 2009.<br />
Israel, J., Campbell, D., Kahn, J., <strong>and</strong> Hornby, T. (2006). Metabolic costs <strong>and</strong> muscle activity patterns<br />
during robotic- <strong>and</strong> therapist-assisted treadmill walking in individuals with incomplete spinal cord injury,<br />
Physical Therapy, vol. 86, no. 11, pp. 1466-78.<br />
Iturrate, I., Antelis, J., Kübler, A., <strong>and</strong> Minguez, J. Non-Invasive Brain-Actuated Wheelchair based on a<br />
P300 Neurophysiological Protocol <strong>and</strong> Automated Navigation. IEEE Transactions on Robotics (in press).<br />
Iturrate, I., Montesano, L., <strong>and</strong> Minguez, J. (2010). Elicitation <strong>and</strong> Online Recognition of Error-Related<br />
Potentials During Observation of Robot Operation. In Annual Conference of IEEE Engineering in<br />
Medicine <strong>and</strong> Biology Society (EMBS).<br />
Iturrate, I., Montesano, L., <strong>and</strong> Minguez, J. (2010). Robot Reinforcement Learning using EEG-based<br />
reward signals. In IEEE International Conference on Robotics <strong>and</strong> Automation (ICRA), 2010.<br />
Ivansson, J. (2002). Situation Assessment in a Stochastic Environment using Bayesian Networks, Master's<br />
Thesis, Department of Electrical Engineering, Linköping University.<br />
Ives, J. R. <strong>and</strong> Schomer, D. L. (1988). A 6-pole filter for improving the readability of muscle contaminated<br />
eegs. Electroencephalogr. Clin. Neurophysiol., 69(5):486–490.<br />
Jacko, J.A., Sears, A. (2002). The Human-computer Interaction H<strong>and</strong>book: Fundamentals, Evolving<br />
Technologies <strong>and</strong> Emerging Applications, 2nd ed. CRC.<br />
Jain, A., Hong, L., <strong>and</strong> Kulkarni, Y. (1999). A Multimodal Biometric System Using Fingerprint, Face <strong>and</strong><br />
Speech, Proceedings of 2nd Int'l Conference on Audio- <strong>and</strong> Video-based Biometric Person Authentication,<br />
Washington D.C., pp. 182-187<br />
Jezernik, S., Colombo, G., <strong>and</strong> Morari, M. (2004). Automatic gait-pattern adaptation algorithms for<br />
rehabilitation with a 4-DOF robotic orthosis, IEEE Transactions on Robotics <strong>and</strong> Automation, vol. 20, no.<br />
3, pp. 574-582.<br />
Jung, T., Polani, D., <strong>and</strong> Stone, P. (2011). Empowerment for continuous agent-environment systems.<br />
Adaptive Behaviour. Published online 13 January 2011.<br />
Jung, T.P., Makeig, S., Stensmo, M. <strong>and</strong> Sejnowski, T.J. (1997). Estimating alertness from the eeg power<br />
spectrum. IEEE Transactions on Biomedical Engineering, 44(1):60–69.<br />
Kaplan, F. <strong>and</strong> Oudeyer, P.-Y. (2004). Maximizing learning progress: an internal reward system for<br />
development. In Iida, F., Pfeifer, R., Steels, L., <strong>and</strong> Kuniyoshi, Y., editors, Embodied Artificial<br />
Intelligence, volume 3139 of LNAI, pages 259–270. Springer.<br />
Kawamura, K., Dodd, W., Ratanaswasd, P. (2004). Robotic body-mind integration: next gr<strong>and</strong> challenge<br />
in robotics, in Proceedings of the 2004 IEEE International Workshop on Robot <strong>and</strong> Human Interactive<br />
Communication (Kurashiki, Okayama, Japan), pp. 23–28.<br />
Kawamura, K., Peters, A.R., Wilkes, M.D., Alford, A.W, Rogers, T.E. (2000). ISAC: Foundations in<br />
human-humanoid interaction. IEEE Intelligent Systems 15 (4): 38–45.<br />
Kawamura, K., Peters, R.A., Bodenheimer, R.E., Sarkar, N., Park, J., Clifton, C.A., Spratley, A.W.,<br />
Hambuchen, K.A. (2004). A parallel distributed cognitive control system for a humanoid robot.<br />
International Journal of Humanoid Robotics 1 (1): 65–93.<br />
Kawato, M., (1999). Internal models for motor control <strong>and</strong> trajectory planning. Current Opinion in<br />
Neurobiology, 9:718-727.<br />
Kazerooni, H. <strong>and</strong> Steger, R. (2006). The Berkeley Lower Extremity Exoskeleton, Journal of Dynamic<br />
Systems, Measurement <strong>and</strong> Control, vol. 128, pp. 14-25.<br />
Kelly, J. L. (1956). A new interpretation of information rate. Bell System Technical Journal, 35:917–926.<br />
Kelly, S.P., Dockree, P., Reilly, R.B., <strong>and</strong> Robertson, I.H. (2003). EEG alpha power <strong>and</strong> coherence time<br />
courses in a sustained attention task. Proceedings of the International Conference on Neural Engineering,<br />
pages 83–86.<br />
Kieras, D. E., Meyer, D. E., (1997). An overview of the EPIC architecture for cognition <strong>and</strong> performance<br />
with application to human–computer interaction. Human-Computer Interaction, 12(4): 391–438.<br />
Kirstein, K.-U., Sedivy, J., Salo, T., Hagleitner, C., Vancura, T., <strong>and</strong> Hierlemann, A. (2005). A<br />
CMOS-based Tactile Sensor for Continuous Blood Pressure Monitoring, Proceedings IEEE Conference,<br />
Design, Automation <strong>and</strong> Test in Europe.<br />
Klyubin, A. S., Polani, D., <strong>and</strong> Nehaniv, C. L. (2004). Organization of the information flow in the<br />
perception-action loop of evolved agents. In Proceedings of 2004 NASA/DoD Conference on Evolvable<br />
Hardware, pages 177–180. IEEE Computer Society.<br />
Klyubin, A. S., Polani, D., <strong>and</strong> Nehaniv, C. L. (2005a). All else being equal be empowered. In Advances<br />
in Artificial Life, European Conference on Artificial Life (ECAL 2005), volume 3630 of LNAI, pages<br />
744–753. Springer.<br />
Klyubin, A. S., Polani, D., <strong>and</strong> Nehaniv, C. L. (2005b). Empowerment: A universal agent-centric measure<br />
of control. In Proc. IEEE Congress on Evolutionary Computation, 2-5 September 2005, Edinburgh,<br />
Scotl<strong>and</strong> (CEC 2005), pages 128–135. IEEE.<br />
Klyubin, A. S., Polani, D., <strong>and</strong> Nehaniv, C. L. (2008). Keep your options open: An information-based<br />
driving principle for sensorimotor systems. PLoS ONE, 3(12):e4018.<br />
Klyubin, A., Polani, D., <strong>and</strong> Nehaniv, C. (2007). Representations of space <strong>and</strong> time in the maximization of<br />
information flow in the perception-action loop. Neural Computation, 19(9):2387–2432.<br />
Koehler, S., Lauer, P., Schreppel, T., Jacob, C., Heine, M., Boreatti-Hummer, A., Fallgatter, A. J., <strong>and</strong><br />
Herrmann, M. J. (2009). Increased EEG power density in alpha <strong>and</strong> theta bands in adult ADHD patients.<br />
Journal of Neural Transmission, 116(1):97–104.<br />
Kong, K. <strong>and</strong> Jeon, D. (2006). Design <strong>and</strong> Control of an Exoskeleton for the Elderly <strong>and</strong> Patients,<br />
IEEE/ASME Transactions on Mechatronics, vol. 11, no. 4, pp. 428-432.<br />
Kong, K., Moon, H., Hwang, B., Jeon, D., <strong>and</strong> Tomizuka, M. (2009). Impedance Compensation of<br />
SUBAR for Back-Drivable Force-Mode Actuation, IEEE Transactions on Robotics, vol. 25, pp. 512-521.<br />
Körding, K. P. <strong>and</strong> Wolpert, D. M. (2004). Bayesian integration in sensorimotor learning. Nature,<br />
427:244–247.<br />
Kornhuber, H.H., Deecke, L. (1964). Hirnpotentialänderungen beim Menschen vor und nach<br />
Willkürbewegungen, dargestellt mit Magnetband-Speicherung und Rückwärtsanalyse. Pflügers Arch,<br />
281(52).<br />
Kornhuber, H.H., Deecke, L. (1965). Hirnpotentialänderungen bei Willkürbewegungen und passiven<br />
Bewegungen des Menschen: Bereitschaftspotential und reafferente Potentiale. Pflügers Arch, 284:1–17.<br />
Kott, A. <strong>and</strong> Ownby, M. (2005). Tools for real-time anticipation of enemy actions in tactical ground<br />
operations. In 10th International Comm<strong>and</strong> <strong>and</strong> Control Research <strong>and</strong> Technology Symposium, McLean,<br />
VA.<br />
Krauledat, M., Dornhege, G., Blankertz, B., Losch, F., Curio, G. <strong>and</strong> Müller, K.-R. (2004). Improving speed<br />
<strong>and</strong> accuracy of brain-computer interfaces using readiness potential features. Engineering in Medicine <strong>and</strong><br />
Biology Society, pages 4511–4515.<br />
Krichmar, J. L., Edelman, G. M., (2003). Brain-based devices: Intelligent systems based on principles of<br />
the nervous system. In IEEE/RSJ International Conference on Intelligent Robotics <strong>and</strong> Systems, Las<br />
Vegas, Nevada.<br />
Krut, S., Benoit, M., Dombre, E., <strong>and</strong> Pierrot, F. (2010). MoonWalker, a Lower Limb Exoskeleton able to<br />
Sustain Bodyweight using a Passive Force Balancer, in 2010 IEEE International Conference on Robotics<br />
<strong>and</strong> Automation, pp.2215-2220.<br />
Kulic, D. <strong>and</strong> Croft, E.A. (2007). Affective State Estimation for Human-Robot Interaction, IEEE<br />
Transactions on Robotics, Vol 23, No 5, pp. 991-999.<br />
Kutas, M. <strong>and</strong> Donchin, E. (1980). Preparation to respond as manifested by movement-related brain<br />
potentials. Brain Res, 202:95–115.<br />
Kuzmitcheva, O., Gräser, A., (2009), A Concept of a Robotic System for Safe Sampling Procedure during<br />
Reconnaissance of CBRN-Disaster, 3rd Int. Advanced Robotics Programme IARP-RISE’2009 - Brussels,<br />
Belgium.<br />
Lagerlund, T.D., Sharbrough, F.W., <strong>and</strong> Busacker, N.E. (1997). Spatial filtering of multichannel<br />
electroencephalographic recordings through principal component analysis by singular value<br />
decomposition. Clin. Neurophysiol., 14(1):73–82.<br />
Lam, T., Wirz, M., Lünenburger, L., <strong>and</strong> Dietz, V. (2008). Swing phase resistance enhances flexor muscle<br />
activity during treadmill locomotion in incomplete spinal cord injury, Neurorehabil Neural Repair, vol. 22,<br />
no. 5, pp. 438-446.<br />
Lambert, D. A. (1999). Assessing Situations, Proceedings of 1999 Information, Decision <strong>and</strong> Control, pp.<br />
503 - 508. IEEE.<br />
Lambert, D. A. (2001). Situations for Situation Awareness, Proceedings of the Fourth International<br />
Conference on International Fusion, Montreal<br />
Lambert, D. A. (2003). Gr<strong>and</strong> Challenges of Information Fusion, Proceedings of the Sixth International<br />
Conference on Information Fusion. Cairns, Australia. pp. 213 -219.<br />
Lambert, D.A. (2006). Formal Theories for Semantic Fusion, 9th International Conference on<br />
Information Fusion, pp. 1–8.<br />
L<strong>and</strong>ragin, F. (2007). Physical, semantic <strong>and</strong> pragmatic levels for multimodal fusion <strong>and</strong> fission, Seventh<br />
International Workshop on Computational Semantics (IWCS-7), Tilburg, The Netherl<strong>and</strong>s<br />
Lang, C.E. <strong>and</strong> Bastian, A.J. (2002). Cerebellar damage impairs automaticity of a recently practiced<br />
movement. J Neurophysiol, 87:1336–1347.<br />
Laughlin, S. B. (2001). Energy as a constraint on the coding <strong>and</strong> processing of sensory information.<br />
Current Opinion in Neurobiology, 11:475–480.<br />
Laughlin, S. B., de Ruyter van Steveninck, R. R., <strong>and</strong> Anderson, J. C. (1998). The metabolic cost of neural<br />
information. Nature Neuroscience, 1(1):36–41.<br />
Lawrence, R.C., Helmick, C.G., Arnett, F.C., Deyo, R.A., Felson, D.T., Giannini, E.H. et al. (1999).<br />
Estimates of the prevalence of arthritis <strong>and</strong> selected musculoskeletal disorders in the United States.<br />
Arthritis Rheum, 41:778-799.<br />
Lee, S. <strong>and</strong> Sankai, Y. (2002). Power assist control for leg with HAL-3 based on virtual torque <strong>and</strong><br />
impedance adjustment, in IEEE International Conference on Systems, Man <strong>and</strong> Cybernetics, vol. 4, pp. 6-<br />
9.<br />
Lehmann, J., Laird, J., Rosenbloom, P. (2006). A Gentle Introduction to SOAR, an Architecture for Human<br />
Cognition: 2006 Update, Web: http://ai.eecs.umich.edu/soar/sitemaker/docs/misc/GentleIntroduction-<br />
2006.pdf<br />
Hochberg, L. R., et al. (2006). Neuronal ensemble control of prosthetic devices by a human with<br />
tetraplegia. Nature, 442:164–171.<br />
Liang, H. <strong>and</strong> Wang, H. (2003). Top-down anticipatory control in prefrontal cortex. Theory in<br />
Biosciences, 122(1):70–86.<br />
Linden, M., Habib, T. <strong>and</strong> Radojevic, V. (1996). A controlled study of the effects of EEG biofeedback on<br />
cognition <strong>and</strong> behavior of children with attention deficit disorder <strong>and</strong> learning disabilities. Applied<br />
Psychophysiology <strong>and</strong> Biofeedback, 21(1):35–49.<br />
Lindsley, D. (1952). Psychological phenomena <strong>and</strong> the electroencephalogram. Electroenceph. Clin.<br />
Neurophysiol.<br />
Lizier, J., Prokopenko, M., <strong>and</strong> Zomaya, A. (2007). Detecting non-trivial computation in complex<br />
dynamics. In (Almeida e Costa et al., 2007), pages 895–904.<br />
Lloyd, S. (1991). Causality <strong>and</strong> information flow. In Atmanspacher, H. <strong>and</strong> Scheingraber, H., editors,<br />
Information Dynamics, pages 131–142. Plenum Press.<br />
Lopez, E., Iturrate, I., Montesano, L., <strong>and</strong> Minguez, J. (2010). Real-time recognition of feedback<br />
errorrelated potentials during a time-estimation task. In Annual Conference of IEEE Engineering in<br />
Medicine <strong>and</strong> Biology Society (EMBS).<br />
Lotze, M., Braun, C., Birbaumer, N., Anders, S., <strong>and</strong> Cohen, L. G. (2003). Motor learning elicited<br />
by voluntary drive, Brain, vol. 126, no. 4, pp. 866-872.<br />
Low, K. H., Liu, X., <strong>and</strong> Yu, H. (2005). Development of NTU Wearable Exoskeleton System for Assistive<br />
Technologies, in Proceedings of the 2005 IEEE International Conference on Mechatronics & Automation,<br />
pp. 1099-1106.<br />
Lubar, J.F. (1991). Discourse on the development of EEG diagnostics <strong>and</strong> biofeedback for<br />
attention-deficit/hyperactivity disorders. Applied Psychophysiology <strong>and</strong> Biofeedback, 16(3):201–225.<br />
Lungarella, M. <strong>and</strong> Sporns, O. (2005). Information self-structuring: Key principle for learning <strong>and</strong><br />
development. In Proceedings of 4th IEEE International Conference on Development <strong>and</strong> Learning, pages<br />
25–30. IEEE.<br />
Lungarella, M. <strong>and</strong> Sporns, O. (2006). Mapping information flow in sensorimotor networks. PLoS<br />
Computational Biology, 2(10).<br />
Lüth, T., Ojdanic, D., Friman, O., Prenzel, O., <strong>and</strong> Gräser, A. (2007). Low level control in a<br />
semi-autonomous rehabilitation robotic system via a Brain-Computer Interface.<br />
Maass, W. (2003). Computation with spiking neurons. In Arbib, M. A., editor, H<strong>and</strong>book of Brain Theory<br />
<strong>and</strong> Neural Networks, pages 1080–1083. MIT Press, Cambridge, 2 edition.<br />
Macfarlane, G.J., Croft, P.R., Schollum, J., Silman, A.J. (1996). Widespread pain: is an improved<br />
classification possible? Journal of Rheumatology, 23(9):1628-1632.<br />
Maggi, L., Parini, S., Perego, P., <strong>and</strong> Andreoni, G. (2008). BCI++: an object-oriented BCI prototyping<br />
framework. In 4th International Brain-Computer Interface Workshop.<br />
Major <strong>and</strong> Chronic Diseases Report 2007, Directorate-General for Health & consumers, European<br />
Commission, April 2008<br />
Mankala, K. K., Banala, S. K., <strong>and</strong> Agrawal, S. K. (2007). Passive Swing Assistive Exoskeletons for<br />
Motor-Incomplete Spinal Cord Injury Patients, in Proceedings of the 2007 IEEE International Conference<br />
on Robotics <strong>and</strong> Automation, pp. 3761-3766.<br />
Marchal-Crespo, L. <strong>and</strong> Reinkensmeyer, D (2008). Haptic guidance can enhance motor learning of a<br />
steering task, Journal of Motor Behavior, vol. 40, no. 6, pp. 545-557.<br />
Marchal-Crespo, L. <strong>and</strong> Reinkensmeyer, D. J. (2009). Review of control strategies for robotic movement<br />
training after neurologic injury, Journal of NeuroEngineering <strong>and</strong> Rehabilitation, vol. 6, no. 20.<br />
Martin, J., Pollock, A., <strong>and</strong> Hettinger, J. (2010). Microprocessor Lower Limb Prosthetics: Review of<br />
Current State of the Art, Journal of Prosthetics & Orthotics, vol. 22, no. 3, pp. 183-193.<br />
Mason, S. (2005). Dry electrode technology: What exists <strong>and</strong> what is under development?<br />
Mason, S., et al. (2007). A comprehensive survey of brain interface technology designs. Annals of<br />
Biomedical Engineering, 35(2).<br />
Massey, J. (1990). Causality, feedback <strong>and</strong> directed information. In Proc. Int. Symp. Inf. Theory Applic.<br />
(ISITA-90), pages 303–305.<br />
Mataric, M. J., Eriksson, J., Feil-Seifer, D. J., <strong>and</strong> Winstein, C. J. (2007). Socially assistive robotics for<br />
post-stroke rehabilitation, Journal of NeuroEngineering <strong>and</strong> Rehabilitation, vol. 4, no. 5.<br />
Matteucci, M., Carabalona, R., Casella, M., Di Fabrizio, E., Gramatica, F., Di Rienzo, M., Snidero, E.,<br />
Gavioli, L. <strong>and</strong> Sancrotti, M. (2007). Micropatterned dry electrodes for brain-computer interface.<br />
Microelectronic Engineering, 84(2).<br />
McFarland, D.J. <strong>and</strong> Wolpaw, J.R. (2008). Brain-computer interface operation of robotic <strong>and</strong> prosthetic<br />
devices. IEEE Computer Society, pages 52–56.<br />
McAllester, D. A. (1999). Pac-bayesian model averaging. In Proceedings of the Twelfth Annual<br />
Conference on Computational Learning Theory, Santa Cruz, CA, pages 164–170, New York. ACM.<br />
McCarney, R., Croft, P.R. (1999). Knee pain. In: Crombie, I.K., Croft, P.R., Linton, S.J., LeResche, L.,<br />
Von Korff, M., editors. Epidemiology of Pain. Seattle: IASP Press, pp. 299-313.<br />
Meier, U., Stiefelhagen, R., Yang, J., <strong>and</strong> Waibel, A. (2000). Towards Unrestricted Lip Reading,<br />
International Journal of Pattern Recognition <strong>and</strong> Artificial Intelligence, vol. 14, no. 5, pp. 571-585.<br />
Meissner, A., Schönfeld, W. (2005). Data Communication Between the German NBC Reconnaissance<br />
Vehicle <strong>and</strong> Its Control Center Unit, From Integrated Publication <strong>and</strong> Information Systems to<br />
Information <strong>and</strong> Knowledge Environments, Lecture Notes in Computer Science, 2005, Volume 3379/2005<br />
Merx, H., Dreinhöfer, K.E., Schrader, P., Sturmer, T., Puhl, W., Gunther, K.P., Brenner, H. (2003).<br />
International variation in hip replacement rates. Ann Rheum Dis, 62-3:222-6.<br />
Metta, G., Fitzpatrick, P., Natale, L. (2006). YARP: Yet Another Robot Platform. International Journal on<br />
Advanced Robotic Systems, 3(1):43–48.<br />
Millan, J., Franze, M., Mourino, J., Cincotti, F. <strong>and</strong> Babiloni, F. (2002). Relevant EEG features for the<br />
classification of spontaneous motor-related tasks. Biol. Cybern., 86(2):89–95.<br />
Millan, J. d. R., Renkens, F., Mourino, J., <strong>and</strong> Gerstner, W. (2004). Noninvasive Brain-Actuated Control of<br />
a Mobile Robot by Human EEG. IEEE Transactions on Biomedical Engineering, 51(6).<br />
Miltner, W.H.R., Braun, C.H. <strong>and</strong> Coles, M.G.H. (1997). Event-related brain potentials following<br />
incorrect feedback in a time-estimation task: Evidence for a generic neural system for error detection.<br />
Journal of Cognitive Neuroscience, 9(6):788–798.<br />
Möller, M. <strong>and</strong> Polani, D. (2008). Common concepts in agent groups, symmetries <strong>and</strong> conformity in a<br />
simple environment. In (Bullock et al., 2008), pages 420–427.<br />
Modular Reconnaissance <strong>and</strong> EOD Robot ASENDRO, 2007. [online] [Accessed 15 July 2011].<br />
Monastra, V.J., Lynn, S., Linden, M., Lubar, J.F., Gruzelier, J., <strong>and</strong> LaVaque, T.J. (2005).<br />
Electroencephalographic biofeedback in the treatment of attention-deficit/hyperactivity disorder. Applied<br />
Psychophysiology <strong>and</strong> Biofeedback, 30(2):95–114.<br />
Montemerlo, M., Roy, N., <strong>and</strong> Thrun, S. (2003). Perspectives on st<strong>and</strong>ardization in mobile robot<br />
programming: the carnegie mellon navigation (carmen) toolkit. In IEEE/RSJ International Conference on<br />
Intelligent Robots <strong>and</strong> Systems, volume 3, pages 2436–2441.<br />
Morash, V., Bai, O., Furlani, S., Lin, P., <strong>and</strong> Hallett, M. (2008). Classifying EEG signals preceding right<br />
h<strong>and</strong>, left h<strong>and</strong>, tongue, <strong>and</strong> right foot movements <strong>and</strong> motor imageries. Clinical Neurophysiology,<br />
119(11):2570 – 2578.<br />
Mormann, F., Elger, C. E., <strong>and</strong> Lehnertz, K. (2006). Seizure anticipation: from algorithms to clinical<br />
practice. Curr. Opin. Neurol., 19:187–193.<br />
Müller-Putz, G.R. et al. (2005). EEG-based neuroprosthesis control: A step towards clinical practice.<br />
Neuroscience Letters, 382(1-2):169–174.<br />
Murphy, R.R. (1996). Biological <strong>and</strong> Cognitive Foundations of Intelligent Sensor Fusion, IEEE<br />
Transactions on Systems, Man <strong>and</strong> Cybernetics - Part A: Systems <strong>and</strong> Humans, Vol. 26, No. 1, pp. 42-51.<br />
Myklebust, H., Nunes, N., Hallén, J., <strong>and</strong> Gamboa, H. (2011). Morphological analysis of acceleration<br />
signals in cross-country skiing, Proceedings of Biosignals - International Conference on Bio-inspired<br />
Systems <strong>and</strong> Signal Processing (BIOSTEC 2011), Rome, Italy.<br />
Nadin, M. (2005). Anticipating extreme events — the need for faster-than-real-time models. In Extreme<br />
Events in Nature <strong>and</strong> Society, Frontiers Collection, pages 21–45. Springer, New York/Berlin.<br />
Natvig, B., Nessiøy, I., Bruusgaard, D., Rutle, O. (1995). Musculoskeletal symptoms in a local community.<br />
Eur J Gen Pract, 1:25-28.<br />
Natvig, B., Bruusgaard, D., Eriksen, W. (2001). Localised low back pain <strong>and</strong> low back pain as part of<br />
widespread musculoskeletal pain, two different disorders? Sc<strong>and</strong> J Rehabil Med, 33:21-25<br />
Nery, B. <strong>and</strong> Ventura, R. (2010). Online event segmentation in active perception using adaptive strong<br />
anticipation. Technical Report RT-701-1, Instituto de Sistemas e Robótica, Lisboa, Portugal.<br />
Newell, A., (1990), Unified Theories of Cognition. Cambridge MA: Harvard University Press.<br />
Nieuwenhuis, S., Holroyd, C.B., Mola, N., <strong>and</strong> Coles, M.G.H. (2004). Reinforcement-related brain<br />
potentials from medial frontal cortex: origins <strong>and</strong> functional significance. Neuroscience <strong>and</strong> Biobehavioral<br />
Reviews, 28:441–448.<br />
Nilsson, N. J., (1984). Shakey the Robot. Technical note 323 AI Center, SRI International Menlo Park,<br />
CA.<br />
Nirenburg, S., Lesser, V., Nyburg, E. (1989). Controlling a Language Generation Planner, Proceedings of<br />
IJCAI-89, 1524-1530.<br />
Nock, H. J., Iyengar, G., <strong>and</strong> Neti, C. (2002). Assessing Face <strong>and</strong> Speech Consistency for Monologue<br />
Detection in Video, Proceedings of ACM Multimedia, Juan-les-Pins, France.<br />
Novák, V., Perfilieva, I. <strong>and</strong> Močkoř, J. (1999). Mathematical Principles of Fuzzy Logic. Dordrecht:<br />
Kluwer Academic. ISBN 0-7923-8595-0.<br />
Näätänen, R. (1982). Processing negativity: An evoked-potential reflection of selective attention.<br />
Psychological Bulletin, 92(3):605–640.<br />
Näätänen, R. (1990). The role of attention in auditory information processing as revealed by event-related<br />
potentials <strong>and</strong> other brain measures of cognitive function. Behavioral <strong>and</strong> Brain Sciences, 13:201–288.<br />
OCZ Technology, n.d. NIA Game Controller. [online] [Accessed 13 May 2011].<br />
NeuroSky, n.d. What We Do. [online] Available at http://www.neurosky.com/People/WhatWeDo.aspx<br />
[Accessed 13 May 2011].<br />
OFRO – Mobile outdoor surveillance at the highest level. [online] [Accessed 20 July 2011].<br />
OpenEEG, n.d. BrainBay – an OpenSource Biosignal project. [online] [Accessed 13 May 2011].<br />
OpenEEG, n.d. BWView – recorded brain-wave viewing application. [online] [Accessed 13 May 2011].<br />
OpenEEG, n.d. Welcome to the OpenEEG project. [online] Available at http://openeeg.sourceforge.net/doc/<br />
[Accessed 13 May 2011].<br />
Orchard, R. (2001). Fuzzy Reasoning in Jess: The FuzzyJ Toolkit <strong>and</strong> FuzzyJess, Proceedings of the Third<br />
International Conference on Enterprise Information Systems (ICEIS), pp. 533-542.<br />
Palmer, K.T., Walsh, K., Bendall, H., Cooper, C., Coggon, D. (2000). Back pain in Britain: comparison of<br />
two prevalence surveys at an interval of 10 years. BMJ; 320:1577-1578.<br />
Parunak, H. V. D., Brueckner, S., <strong>and</strong> Savit, R. (2004). Universality in multi-agent systems. In<br />
Proceedings of Third International Joint Conference on Autonomous Agents <strong>and</strong> Multi-Agent Systems<br />
(AAMAS 2004), pages 930–937. IEEE.<br />
Parush, N., Tishby, N., <strong>and</strong> Bergman, H. (2011). Dopaminergic balance between reward maximization <strong>and</strong><br />
policy complexity. Frontiers in Systems Neuroscience.<br />
Passino, K. M., Yurkovich, S., (1998). Fuzzy control. Menlo Park: Addison Wesley Longman.<br />
Paul, C. (2006). Morphological computation: A basis for the analysis of morphology <strong>and</strong> control<br />
requirements. Robotics <strong>and</strong> Autonomous Systems, 54(8):619–630.<br />
Pearl, J. (2000). Causality: Models, Reasoning <strong>and</strong> Inference. Cambridge University Press, Cambridge,<br />
UK.<br />
Peters, B.O., Pfurtscheller, G. <strong>and</strong> Edlinger, G. (2001). Automatic differentiation of multichannel EEG<br />
signals. IEEE Transactions on Biomedical Engineering, 48(1).<br />
Peters, J., Janzing, D., <strong>and</strong> Schölkopf, B. (2010). Identifying cause <strong>and</strong> effect on discrete data using<br />
additive noise models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence<br />
<strong>and</strong> Statistics (AISTATS 2010), pages 1–8.<br />
Peters, J., Janzing, D., Gretton, A., <strong>and</strong> Schölkopf, B. (2009). Detecting the direction of causal time<br />
series. In Danyluk, A., Bottou, L., <strong>and</strong> Littman, M. L., editors, Proceedings of the 26th International<br />
Conference on Machine Learning (ICML 2009), pages 801–808, New York, NY. ACM Press.<br />
Petridis, M., Knight, B. (2001). A blackboard architecture for a hybrid CBR system for scientific software.<br />
Pfeifer, R. <strong>and</strong> Bongard, J. (2007). How the Body Shapes the Way We think: A New View of Intelligence.<br />
Bradford Books.<br />
Pfurtscheller, G. et al. (2000). Brain oscillations control hand orthosis in a tetraplegic. Neuroscience<br />
Letters, 292(3):211–214.<br />
Pfurtscheller, G., Flotzinger, D. <strong>and</strong> Neuper, C. (1994). Differentiation between finger, toe <strong>and</strong> tongue<br />
movement in man based on 40 Hz EEG. Electroenceph. Clin. Neurophysiol., 90:456–460.<br />
Pfurtscheller, G., Neuper, C., Andrew, C. <strong>and</strong> Edlinger, G. (1997). Foot <strong>and</strong> h<strong>and</strong> area mu rhythms. Int J<br />
Psychophysiol, 26:121–35.<br />
Picard, R.W., Papert, S., Bender, W., Blumberg, B., Breazeal, C., Cavallo, D., Machover, T., Resnick, M.,<br />
Roy, D. <strong>and</strong> Strohecker, C. (2004). Affective learning – a manifesto, BT Technology Journal 22 (4), pp.<br />
253-269<br />
Poggendorf, I. (2004). Einsatz eines Serviceroboters zur Automatisierung der Probenentnahme und des<br />
Probenmanagements während Kultivierung tierischer Zellen in einer Technikumsumgebung, Universität<br />
Bielefeld.<br />
Poh, M-Z, Swenson, N.C. <strong>and</strong> Picard, R.W. (2010). A Wearable Sensor for Unobtrusive, Long-Term<br />
Assessment of Electrodermal Activity, IEEE Transactions on Biomedical Engineering, vol. 57, no. 5,<br />
pp.1243-52.<br />
Polani, D. (2009). Information: Currency of life? HFSP Journal, 3(5):307–316.<br />
Polani, D., Nehaniv, C., Martinetz, T., <strong>and</strong> Kim, J. T. (2006). Relevant information in optimized<br />
persistence vs. progeny strategies. In (Rocha et al., 2006), pages 337–343.<br />
Polani, D., Sporns, O., <strong>and</strong> Lungarella, M. (2007). How information <strong>and</strong> embodiment shape intelligent<br />
information processing. In Lungarella, M., Iida, F., Bongard, J., <strong>and</strong> Pfeifer, R., editors, Proc. 50th<br />
Anniversary Summit of Artificial Intelligence, pages 99–111. Springer-Verlag: Berlin, Heidelberg, New<br />
York.<br />
Popescu, F., Fazli, S., Badower, Y., Blankertz, B. <strong>and</strong> Müller, K.-R. (2007). Single trial classification of<br />
motor imagination using 6 dry EEG electrodes. PLoS ONE, 2(2).<br />
Porr, B., von Ferber, C., <strong>and</strong> Wörgötter, F. (2003). ISO learning approximates a solution to the<br />
inverse-controller problem in an unsupervised behavioral paradigm. Neural Computation, 15(4):865–884.<br />
Prevalence of disability <strong>and</strong> long-st<strong>and</strong>ing health problems (unintentional injuries only, population aged 15<br />
to 64). Labour Force Survey. Eurostat, 2002.<br />
Prokopenko, M., Gerasimov, V., <strong>and</strong> Tanev, I. (2006a). Evolving spatiotemporal coordination in a<br />
modular robotic system. In Nolfi, S., Baldassarre, G., Calabretta, R., Hallam, J. C. T., Marocco, D., Meyer,<br />
J.-A., Miglino, O., <strong>and</strong> Parisi, D., editors, From Animals to Animats 9: 9th International Conference on the<br />
Simulation of Adaptive Behavior (SAB 2006), Rome, Italy, volume 4095 of Lecture Notes in Computer<br />
Science, pages 558–569, Berlin, Heidelberg. Springer.<br />
Prokopenko, M., Gerasimov, V., <strong>and</strong> Tanev, I. (2006b). Measuring spatiotemporal coordination in a<br />
modular robotic system. In (Rocha et al., 2006), pages 185–191.<br />
PRWeb, January 19, 2011. Global Orthopedic Prosthetics Market to Reach US$19.4 Billion by 2015,<br />
According to a New Report by Global Industry Analysts, Inc. San Jose, CA (Vocus/PRWEB). [online]<br />
Available at http://www.prweb.com/releases/orthopedic_prosthetics/knee_prosthesis/prweb8072141.htm<br />
[Accessed 20 May 2011].<br />
PRWeb, March 6, 2008. San Jose, CA (PRWEB). [online] Available at<br />
http://www.wheelchairbilling.com/a262996-world-wheelchairs-market-to-exceed-4-2.cfm<br />
Ramoser, H., Muller-Gerking, J. <strong>and</strong> Pfurtscheller, G. (2000). Optimal spatial filtering of single trial EEG<br />
during imagined hand movement. IEEE Trans. Rehab. Eng., 8(4):441–446.<br />
Rani, P., Sarkar, N., Smith, C.A., <strong>and</strong> Kirby, L.D. (2004). Anxiety Detecting Robotic System – Towards<br />
Implicit Human-Robot Collaboration, Robotica, Volume 22, pp. 85-95.<br />
Rani, P., Sims, J., Brackin, R., <strong>and</strong> Sarkar, N. (2002). Online Stress Detection using Psychophysiological Signals<br />
for Implicit Human-Robot Cooperation, Robotica, Volume 20 (6) pp. 673-686.<br />
Rebsamen, B., Teo, C.L., Zeng, Q., Ang, M.H., Burdet, E., Guan, C., Zhang, H., <strong>and</strong> Laugier, C. (2007).<br />
Controlling a Wheelchair Indoors Using Thought. IEEE Intelligent Systems, 22(2):18–24.<br />
Reddy, D. R., Erman, L., Neely, R. (1973). A model <strong>and</strong> a system for machine recognition of speech,<br />
IEEE Transactions on Audio <strong>and</strong> Electroacoustics, vol. AU-21, 229-238.<br />
Reinkensmeyer, D., Aoyagi, D., Emken, J., Galvez, J., Ichinose, W., Kerdanyan, G., Maneekobkunwong,<br />
S., Minakata, K., Nessler, J., Weber, R., Roy, R., de Leon, R., Bobrow, J., Harkema, S., <strong>and</strong> Edgerton, V.<br />
(2006). Tools for underst<strong>and</strong>ing <strong>and</strong> optimizing robotic gait training, Journal of Rehabilitation Research &<br />
Development, vol. 43, no. 5, pp. 657-670.<br />
Reinkensmeyer, D., Galvez, J., Marchal, L., Wolbrecht, E., <strong>and</strong> Bobrow, J. (2007). Some key problems for<br />
robot-assisted movement therapy research: a perspective from the University of California at Irvine, in<br />
Proceedings of the IEEE 10th International Conference on Rehabilitation Robotics, pp. 1009-1015.<br />
Reisman, D. S., Wityk, R., Silver, K., <strong>and</strong> Bastian, A. J. (2007). Locomotor adaptation on a split-belt<br />
treadmill can improve walking symmetry post-stroke, Brain, vol. 130, no. 7, pp. 1861-1872.<br />
Renard, Y., Lotte, F., Gibert, G., Congedo, M., Maby, E., Delannoy, V., Bertr<strong>and</strong>, O. <strong>and</strong> Lecuyer, A.<br />
(2010). Openvibe: An open-source software platform to design, test <strong>and</strong> use brain-computer interfaces in<br />
real <strong>and</strong> virtual environments. Presence: teleoperators <strong>and</strong> virtual environments, 19:35–53.<br />
Tunstall-Pedoe, H., Kuulasmaa, K., Mahonen, M., et al., for the WHO MONICA Project (1999).<br />
Contribution of trends in survival <strong>and</strong> coronary-event rates to changes in coronary heart<br />
disease mortality: 10-year results from 37 WHO MONICA Project populations. The Lancet, 353, 1547-57.<br />
Rheinmetall robotic system (http://www.rheinmetall-detec.com)<br />
Riadh, F. (2006). Analysing spatial-temporal geographic information based on blackboard architecture <strong>and</strong><br />
multi-agent systems, IJCSNS International Journal of Computer Science <strong>and</strong> Network Security, vol. 6,<br />
no. 8A.<br />
Ridding, M. C. <strong>and</strong> Rothwell, J. C. (1999). Afferent input <strong>and</strong> cortical organisation: a study with magnetic<br />
stimulation, Experimental Brain Research, vol. 126, no. 4, pp. 536-544.<br />
Rieke, F., Warl<strong>and</strong>, D., de Ruyter van Steveninck, R., <strong>and</strong> Bialek, W. (1999). Spikes: Exploring the<br />
Neural Code. A Bradford Book. MIT Press.<br />
Riener, R., Lunenburger, L., Jezernik, S., Anderschitz, M., Colombo, G., <strong>and</strong> Dietz, V. (2005). Patient-cooperative<br />
strategies for robot-aided treadmill training: first experimental results, IEEE Transactions on<br />
Neural Systems <strong>and</strong> Rehabilitation Engineering, vol. 13, no. 3, pp. 380-394.<br />
Riera, A., Soria-Frisch, A., Caparrini, M., Grau, C. <strong>and</strong> Ruffini, G. (2008). Unobtrusive biometric system<br />
based on electroencephalogram analysis. EURASIP Journal on Advances in Signal Processing.<br />
Rivera-Ruiz, M., Cajavilca, C., <strong>and</strong> Varon, J. (2008). Einthoven’s String Galvanometer - The First<br />
Electrocardiograph, Tex Heart Inst J, 35(2):174-8<br />
Robowatch Industries Ltd. (http://www.robowatch.de)<br />
Rocha, L. M., Bedau, M., Floreano, D., Goldstone, R., Vespignani, A., <strong>and</strong> Yaeger, L., editors (2006).<br />
Proc. Artificial Life X.<br />
Rohrbaugh, J.W., Syndulko, K. <strong>and</strong> Lindsley, D.B. (1976). Brain wave components of the contingent<br />
negative variation in humans. Science, 191:1055–7.<br />
ROS.org, n.d. ROS. [online] Available at: [Accessed 13 May 2011].<br />
Rosahl, S.K. <strong>and</strong> Knight, R.T. (1995). Role of prefrontal cortex in generation of the contingent negative<br />
variation. Cereb Cortex, 5:123–34.<br />
Ross, R. (2004). The SharC Cognitive Control Architecture, Technical Report<br />
Ross, A. <strong>and</strong> Jain, A. (2003). Information Fusion in Biometrics, Pattern Recognition Letters, vol. 24, no. 13, pp. 2115-<br />
2121.<br />
Rumelhart, D. E., McClell<strong>and</strong>, J. L., <strong>and</strong> The PDP Research Group, Eds. (1986). Parallel Distributed<br />
Processing: Explorations in the Microstructure of Cognition. Cambridge: The MIT Press.<br />
Russell, S., Norvig, P. (2003). Artificial Intelligence - A Modern Approach, Second Edition, Prentice Hall,<br />
New Jersey.<br />
Sadeh, N. M. (1998). A Blackboard Architecture for Integrating Process Planning <strong>and</strong> Production<br />
Scheduling, Concurrent Eng.: Res. <strong>and</strong> Apps, vol. 6, no. 2.<br />
Saerens, M., Achbany, Y., Fuss, F., <strong>and</strong> Yen, L. (2009). R<strong>and</strong>omized shortest-path problems: Two related<br />
models. Neural Computation, 21:2363–2404.<br />
Saffiotti, A., (2004) Platforms for Rescue Operations, AASS Mobile Robotics Laboratory, Örebro<br />
University, Örebro, Sweden.<br />
Saglia, J. A., Tsagarakis, N. G., Dai, J. S., <strong>and</strong> Caldwell, D. G. (2010). Control Strategies for Ankle<br />
Rehabilitation using a High Performance Ankle Exerciser, in Proceedings of the 2010 IEEE International<br />
Conference on Robotics <strong>and</strong> Automation, pp. 2221-2227.<br />
Salge, C. <strong>and</strong> Polani, D. (2011). Digested information as an information theoretic motivation for social<br />
interaction. Journal of Artificial Societies <strong>and</strong> Social Simulation (JASSS), 14(1):5.<br />
Salvo, P., Di Francesco, F., Costanzo, D., Ferrari, C., Trivella, M.G. <strong>and</strong> De Rossi, D. (2010). A Wearable<br />
Sensor for Measuring Sweat Rate, IEEE Sensors Journal, vol. 10, no. 10, pp. 1557-58.<br />
S<strong>and</strong>ini, G., Metta, G., <strong>and</strong> Vernon, D. (2007). The iCub Cognitive Humanoid Robot: An Open-System<br />
Research Platform for Enactive Cognition, in 50 Years of AI, M. Lungarella et al. (Eds.), Festschrift,<br />
LNAI 4850, Springer-Verlag, Heidelberg, pp. 359-370.<br />
Sanket, G. J. (2009). EEG during pedaling: Brain activity during a locomotion-like task in humans. Master's<br />
Thesis.<br />
Sawicki, G., Gordon, K., <strong>and</strong> Ferris, D. (2005). Powered lower limb orthoses: applications in motor<br />
adaptation <strong>and</strong> rehabilitation, in 9th International Conference on Rehabilitation Robotics (ICORR 2005),<br />
June-July, pp. 206-211.<br />
Schalk, G., McFarl<strong>and</strong>, D.J., Hinterberger, T., Birbaumer, N., <strong>and</strong> Wolpaw, J.R. (2004). BCI2000: a<br />
general-purpose brain-computer interface (BCI) system. IEEE Transactions on Biomedical Engineering,<br />
51(6):1034–1043.<br />
Schmidhuber, J. (1991). A possibility for implementing curiosity <strong>and</strong> boredom in model-building neural<br />
controllers. In Meyer, J. A. <strong>and</strong> Wilson, S. W., editors, Proc. of the International Conference on Simulation<br />
of Adaptive Behavior: From Animals to Animats, pages 222–227. MIT Press/Bradford Books.<br />
Schmidhuber, J. (2002). Exploring the predictable. In Ghosh, A. <strong>and</strong> Tsutsui, S., editors, Advances in<br />
Evolutionary Computing, pages 579–612. Springer.<br />
Schmidt, D.C. (1998). The design <strong>and</strong> performance of real-time object request brokers. Computer<br />
Communications, 21(4), 294-324.<br />
Schmidt, H., Hesse, S., <strong>and</strong> Bernhardt, R. (2005). Haptic Walker - A novel haptic foot device, ACM<br />
Transactions on Applied Perception, vol. 2, pp. 166-180.<br />
Schmidt, H., Piorko, F., Bernhardt, R., Kruger, J., <strong>and</strong> Hesse, S. (2005). Synthesis of perturbations for gait<br />
rehabilitation robots, in Proceedings of the 2005 IEEE International Conference on Rehabilitation<br />
Robotics, pp. 74-77.<br />
Schneider, W. (1999). Working memory in a multilevel hybrid connectionist control architecture (CAP2).<br />
In Miyake, A., Shah, P., editors, Models of Working Memory: Mechanisms of Active Maintenance <strong>and</strong><br />
Executive Control, New York:Cambridge University Press.<br />
Schrater, P. <strong>and</strong> Kersten, D. (2002). Vision, psychophysics <strong>and</strong> bayes. In Rao, R. P. N., Olshausen, B. A.,<br />
<strong>and</strong> Lewicki, M., editors, Probabilistic Models of the Brain: Perception <strong>and</strong> Neural Function, Neural<br />
Information Processing Series, pages 37–60. A Bradford Book. The MIT Press.<br />
Schreiber, T. (2000). Measuring information transfer. Phys. Rev. Lett., 85:461–464.<br />
Searle, A. <strong>and</strong> Kirkup, L. (2000). A direct comparison of wet, dry <strong>and</strong> insulating bioelectric recording<br />
electrodes. Physiological Measurement, 21(2):271–283.<br />
Seeberg, T., Hjelstuen, M., Austad, H.O., Færevik, A.L.H., Tjønnås, M.S., Storholmen, T.C.B. (2011).<br />
Smart Textiles - Safety for Workers in Cold Climate, Submitted to Ambience 2011<br />
Shalfield, R. (2005). Flint Reference, Logic Programming Associates, http://www.lpa.co.uk, London<br />
Shalizi, C. R. (2001). Causal Architecture, Complexity <strong>and</strong> Self-Organization in Time Series <strong>and</strong> Cellular<br />
Automata. PhD thesis, University of Wisconsin-Madison.<br />
Shalizi, C. R. <strong>and</strong> Crutchfield, J. P. (2002). Information bottlenecks, causal states, <strong>and</strong> statistical relevance<br />
bases: How to represent relevant information in memoryless transduction. Advances in Complex Systems,<br />
5:1–5.<br />
Shi, Y., Choi, E.H.C., Ruiz, N., Chen, F., <strong>and</strong> Taib, R. (2007). Galvanic skin response (GSR) as an index of<br />
cognitive load. In Conference on Computer <strong>and</strong> Human Interaction, pages 2651–2656.<br />
Shibasaki, H. <strong>and</strong> Hallett, M. (2006). What is the Bereitschaftspotential? Clinical Neurophysiology,<br />
117(11):2341–56.<br />
Sinkjaer, T., <strong>and</strong> Popovic, D.B. (2005). Trends in the Rehabilitation of Hemiplegic Subjects, Journal of<br />
Automatic Control, vol. 15, pp. 1-10.<br />
Smith, S., Ow, P. S. (1985). The Use of Multiple Problem Decompositions in Time Constrained Planning<br />
Tasks, Proceedings of IJCAI-85, 1013-1015.<br />
Sobocki, P., Pugliatti, M., Lauer, L., Kobelt, G. (2007). Estimation of the cost of MS in Europe:<br />
Extrapolations from a multinational cost study. Multiple Sclerosis, Jul 10 [Epub ahead of print].<br />
Soft-dynamics, n.d. Soft Skill <strong>and</strong> dynamical Systems. [online] Available at<br />
[Accessed 13 May 2011].<br />
Sola, J., Chetelat, O., Sartori, C., Allemann, Y., <strong>and</strong> Rimoldi, S.F. (2011). Parametric Chest Pulse Wave<br />
Velocity: A Novel Approach to Assess Arterial Stiffness, IEEE Trans. Biomed. Eng. 58(1), pp. 215-223.<br />
Sörnmo, L., Laguna, P. (2005). Bioelectrical Signal Processing in Cardiac <strong>and</strong> Neurological Applications. Elsevier Academic Press.<br />
Starlab, n.d. Enobio, Wireless brain monitoring. [online] Available at: <br />
[Accessed 13 May 2011].<br />
Stauffer, Y., Allemand, Y., Bouri, M., Fournier, J., Clavel, R., Metrailler, P., Brodard, R., <strong>and</strong> Reynard, F. The<br />
WalkTrainer - A new generation of walking reeducation device combining orthoses <strong>and</strong> muscle<br />
stimulation, IEEE Transactions on Neural Systems <strong>and</strong> Rehabilitation Engineering.<br />
Steels, L. (2004). The autotelic principle. In Iida, F., Pfeifer, R., Steels, L., <strong>and</strong> Kuniyoshi, Y., editors,<br />
Embodied Artificial Intelligence: Dagstuhl Castle, Germany, July 7-11, 2003, volume 3139 of Lecture<br />
Notes in AI, pages 231–242. Springer Verlag, Berlin.<br />
Stegemann, S.K., Funk, B., <strong>and</strong> Slotos, T. (2007). A blackboard architecture for workflows. In Johann<br />
Eder, Stein L. Tomassen, Andreas L. Opdahl, <strong>and</strong> Guttorm Sindre, editors, CAiSE Forum, volume 247 of<br />
CEUR Workshop Proceedings. CEUR-WS.org.<br />
Stepp, N. <strong>and</strong> Turvey, M. T. (2010). On strong anticipation. Cognitive Systems Research, 11(2):148–164.<br />
Steudel, B. <strong>and</strong> Ay, N. (2011). Private communication. Submitted.<br />
Still, S. (2009). Information-theoretic approach to interactive learning. EPL (Europhysics Letters),<br />
85(2):28005–28010.<br />
Sullivan, T.J., Deiss, S.R., Jung, T.P., <strong>and</strong> Cauwenberghs, G. (2008). A brain-machine interface using dry-contact,<br />
low-noise EEG sensors. Circuits <strong>and</strong> Systems 2008. ISCAS 2008. IEEE International Symposium<br />
on, pages 1986–1989.<br />
Sulzer, J., Roiz, R., Peshkin, M., <strong>and</strong> Patton, J. (2009). A Highly Backdrivable, Lightweight Knee<br />
Actuator for Investigating Gait in Stroke, IEEE Transactions on Robotics, vol. 25, no. 3, pp. 539-548.<br />
Surdilovic, D., Zhang, J., <strong>and</strong> Bernhardt, R. (2007). STRING-MAN: Wire-robot technology for safe,<br />
flexible <strong>and</strong> human-friendly gait rehabilitation, in Proceedings of the 2007 IEEE 10th International<br />
Conference on Rehabilitation Robotics, pp. 446-453.<br />
Sutton, C., Morrison, C., Cohen, P.R., Moody, J. <strong>and</strong> Adibi, J. (2004). A Bayesian blackboard for<br />
information fusion. In: P. Svensson <strong>and</strong> J. Schubert, Editors, Proceedings of the Seventh International<br />
Conference on Information Fusion, June 2004 vol. II, International Society of Information Fusion,<br />
Mountain View, CA, USA<br />
Symmons, D., Turner, G., Webb, R., Asten, P., Barrett, E., Lunt, M., et al. (2002). The prevalence of<br />
rheumatoid arthritis in the United Kingdom: new estimates for a new century. Rheumatology; 41:793-800.<br />
Tecce, J.J. (1972). Contingent negative variation (CNV) <strong>and</strong> psychological processes in man. Psychol Bull,<br />
77:73–108.<br />
TechnoRobot – Rapidly Deployable Remotely Operated Less-Lethal Support Robots. [online] Available<br />
at:< http://www.army-technology.com/contractors/unmanned_vehicles/technorobot/> [Accessed 15 July<br />
2011].<br />
Tee, K. P. et al. (2008). Augmenting cognitive processes in robot-assisted motor rehabilitation. In<br />
Proceedings of the 2nd Biennial IEEE/RAS-EMBS International Conference on Biomedical Robotics <strong>and</strong><br />
Biomechatronics, Scottsdale, USA.<br />
Telerob. [online] Available at: < http://www.army-technology.com/contractors/mines/telerob> [Accessed<br />
16 July 2011].<br />
Teplan, M. (2002). Fundamentals of EEG measurement. Measurement Science Review, 2(2).<br />
Teplan, M. et al. (2006). EEG responses to long-term audio-visual stimulation. Int. J. Psychophysiol,<br />
59:81–90.<br />
Teplan, M., Krakovská, A., <strong>and</strong> Štolc, S. (2009). EEG characterization of psycho-physiological rest <strong>and</strong><br />
relaxation. In Measurement 2009: Proceedings of the 7th International Conference.<br />
The O&P EDGE, http://www.oandp.com/articles/news_2007-09-28_01.asp, September 28, 2007<br />
Thórisson, K. R., List, T., Di Pirro, J., Pennock, C. (2005). A Framework for A.I. Integration, Reykjavik<br />
University Department of Computer Science Technical Report, RUTR-CS05001.<br />
Thórisson, K.R., List, T., Pennock, C., DiPirro, J. (2005). Whiteboards: Scheduling Blackboards for<br />
Semantic Routing of Messages & Streams, AAAI-05 Workshop on Modular Construction of Human-Like<br />
Intelligences, Twentieth Annual Conference on Artificial Intelligence, Pittsburgh, PA, July 10, 16-23.<br />
Tishby, N. <strong>and</strong> Polani, D. (2011). Information theory of decisions <strong>and</strong> actions. In Cutsuridis, V., Hussain,<br />
A., <strong>and</strong> Taylor, J., editors, Perception-Action Cycle: Models, Architecture <strong>and</strong> Hardware, pages 601–636.<br />
Springer.<br />
Tishby, N., Pereira, F. C., <strong>and</strong> Bialek, W. (1999). The information bottleneck method. In Proc. 37th<br />
Annual Allerton Conference on Communication, Control <strong>and</strong> Computing, Illinois, Urbana-Champaign.<br />
Tononi, G. <strong>and</strong> Sporns, O. (2003). Measuring information integration. BMC Neuroscience, 4:31.<br />
Touchette, H. <strong>and</strong> Lloyd, S. (2000). Information-theoretic limits of control. Phys. Rev. Lett., 84:1156.<br />
Touchette, H. <strong>and</strong> Lloyd, S. (2004). Information-theoretic approach to the study of control systems.<br />
Physica A, 331:140–172.<br />
Tronstad, C., Gjein, G.E., Grimnes, S., Martinsen, Ø.G., Krogstad, A-L, <strong>and</strong> Fosse, E. (2008). Electrical<br />
measurement of sweat activity, Physiological Measurement, 29, pp. 407–415.<br />
Valero, A., R<strong>and</strong>elli, G., Botta, F., Hern<strong>and</strong>o, M., Losada, D. R., (2011), Operator Performance in<br />
Exploration Robotics - A Comparison Between Stationary <strong>and</strong> Mobile Operators, J Intell Robot Syst,<br />
Springer<br />
Vallery, H., van Asseldonk, E., Buss, M., <strong>and</strong> van der Kooij, H. (2009). Reference trajectory generation<br />
for rehabilitation robots: complementary limb motion estimation, IEEE Transactions on Neural Systems<br />
<strong>and</strong> Rehabilitation Engineering, vol. 17, no. 1, pp. 23-30.<br />
Van Asseldonk, E. H. F., Veneman, J. F., Ekkelenkamp, R., Buurke, J. H., Van der Helm, F. C. T., <strong>and</strong><br />
Van der Kooij, H. (2008). The Effects on Kinematics <strong>and</strong> Muscle Activity of Walking in a Robotic Gait<br />
Trainer During Zero-Force Control, IEEE Transactions on Neural Systems <strong>and</strong> Rehabilitation<br />
Engineering, vol. 16, no. 4, pp. 360-370, August.<br />
Van Asseldonk, E., Ekkelenkamp, R., Veneman, J., Van der Helm, F., <strong>and</strong> Van der Kooij, H. (2007).<br />
Selective control of a subtask of walking in a robotic gait trainer(LOPES), in IEEE 10th International<br />
Conference on Rehabilitation Robotics, pp. 841-848.<br />
Van Brussel, H., Moreas, R., Zaatri, A., Nuttin, M. (1998). A behaviour-based blackboard architecture for<br />
mobile robots, in Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society<br />
(IECON '98), vol. 4, pp. 2162-2167.<br />
Van Dijk, S. <strong>and</strong> Polani, D. (2011a). Grounding subgoals in information transitions. In Proc. IEEE<br />
Symposium Series in Computational Intelligence 2011 — Symposium on Adaptive Dynamic<br />
Programming <strong>and</strong> Reinforcement Learning. Accepted.<br />
Van Dijk, S. <strong>and</strong> Polani, D. (2011b). Look-ahead relevant information: Reducing cognitive burden over<br />
prolonged tasks. In Proc. IEEE Symposium Series in Computational Intelligence 2011 — Symposium on<br />
Artificial Life. Accepted.<br />
Van Dijk, S. G., Polani, D., <strong>and</strong> Nehaniv, C. L. (2010). What do you want to do today? Relevant-information<br />
bookkeeping in goal-oriented behaviour. In Proc. Artificial Life, Odense, Denmark, pages<br />
176–183.<br />
Vanacker, G., Millan, J.d.R., Lew, E., Ferrez, P. W., Moles, F.G., Philips, J., Brussel, H. V., <strong>and</strong> Nuttin,<br />
M. (2007). Context-Based Filtering for Assisted Brain-Actuated Wheelchair Driving. Computational<br />
Intelligence <strong>and</strong> Neuroscience.<br />
Varshney, P. K. (1997). Multi-sensor data fusion. Electronics <strong>and</strong> Communication Engineering Journal,<br />
9(6): 245-25.<br />
Velliste, M. et al. (2008). Cortical control of a prosthetic arm for self-feeding. Nature, 453:1098–1101.<br />
Veneman, J., Ekkelenkamp, R., Kruidhof, R., Van der Helm, F., <strong>and</strong> Van der Kooij, H. (2005). Design of a<br />
series elastic- <strong>and</strong> Bowden cable-based actuation system for use as torque-actuator in exoskeleton-type<br />
training, in 9th International Conference on Rehabilitation Robotics (ICORR 2005), pp. 496-499.<br />
Veneman, J., Kruidhof, R., Hekman, E., Ekkelenkamp, R., Van Asseldonk, E., <strong>and</strong> Van der Kooij, H.<br />
(2007). Design <strong>and</strong> evaluation of the LOPES exoskeleton robot for interactive gait rehabilitation, IEEE<br />
Transactions on Neural Systems <strong>and</strong> Rehabilitation Engineering, vol. 15, no. 3, pp. 379-386.<br />
Vergassola, M., Villermaux, E., <strong>and</strong> Shraiman, B. I. (2007). 'Infotaxis' as a strategy for searching without<br />
gradients. Nature, 445:406–409.<br />
Vernon, D., Metta, G., <strong>and</strong> S<strong>and</strong>ini, G., (2006). A Survey of Artificial Cognitive Systems: Implications for<br />
the Autonomous Development of Mental Capabilities in Computational Agents, IEEE Transactions on<br />
Evolutionary Computation, Special Issue on Autonomous Mental Development.<br />
Versluys, R., Beyl, P., Van Damme, M., Desomer, A., Van Ham, R., <strong>and</strong> Lefeber, D. (2009). Prosthetic<br />
Feet: State-of-the-art Review <strong>and</strong> the Importance of Mimicking Human Ankle-Foot Biomechanics,<br />
Disability <strong>and</strong> Rehabilitation: Assistive Technology, vol. 4, no. 2, pp. 65-75.<br />
Viviani, P. <strong>and</strong> Flash, T. (1995). Minimum-jerk, two-thirds power law, <strong>and</strong> isochrony: converging<br />
approaches to movement planning. Journal of experimental psychology. Human perception <strong>and</strong><br />
performance, 21(1):32–53.<br />
Waddell, G. (1998). The back pain revolution. Edinburgh: Churchill Livingstone.<br />
Wahlster, W., Reithinger, N., <strong>and</strong> Blocher, A. (2001). SmartKom: Multimodal Communication with a<br />
Life-Like Character, Proceedings of Eurospeech, Aalborg, Denmark.<br />
Walsh, C. J., Endo, K., <strong>and</strong> Herr, H. (2007). A Quasi-passive Leg Exoskeleton for load-carrying<br />
augmentation, International Journal of Humanoid Robotics, vol. 4, no. 3, pp. 487-506.<br />
Walter, W.G., Cooper, R., Aldridge, V.J., McCallum, W.C. <strong>and</strong> Winter, A.L. (1964). Contingent negative<br />
variation: an electric sign of sensorimotor association <strong>and</strong> expectancy in the human brain. Nature,<br />
203:380–4.<br />
Wang, Y., Tan, T., Jain, A.K. (2003). Combining Face <strong>and</strong> Iris Biometrics for Identity Verification,<br />
Proceedings of 4th International Conference on Audio- <strong>and</strong> Video-Based Biometric Person Authentication,<br />
Guildford, UK.<br />
Weinberg, B., Nikitczuk, J., Patel, S., Patritti, B., Mavroidis, C., Bonato, P., <strong>and</strong> Canavan, P. (2007). Design,<br />
Control <strong>and</strong> Human Testing of an Active Knee Rehabilitation Orthotic Device, in Proceedings of 2007<br />
IEEE International Conference on Robotics <strong>and</strong> Automation, pp. 4126-4133.<br />
WHO MONICA Project [http://www.ktl.fi/monica/] (accessed on 12.07.07).<br />
Wolff, G.J., Prasad, K.V., Stork, D.G., <strong>and</strong> Hennecke, M. (1994). Lipreading by neural networks: visual<br />
processing, learning <strong>and</strong> sensory integration, Proc. of Neural Information Proc. Sys. NIPS-6, Cowan, J.,<br />
Tesauro, G., <strong>and</strong> Alspector, J., eds., pp. 1027-1034.<br />
Wolpaw, J.R., Birbaumer, N., McFarl<strong>and</strong>, D.J., Pfurtscheller, G., <strong>and</strong> Vaughan, T.M. (2002).<br />
Brain-computer interfaces for communication <strong>and</strong> control. Clinical Neurophysiology, 113(6):767–91.<br />
Yamauchi, B. M. (2004). PackBot: A Versatile Platform for Military Robotics, Proceedings of 2004 SPIE,<br />
Orl<strong>and</strong>o, USA, pp. 228-237<br />
Yang, J. F. <strong>and</strong> Gorassini, M. (2006). Spinal <strong>and</strong> Brain Control of Human Walking: Implications for<br />
Retraining of Walking, Neuroscientist, vol. 12, no. 5, pp. 379-389.<br />
Yoon, J. <strong>and</strong> Ryu, J. (2006). A Novel Reconfigurable Ankle/Foot Rehabilitation Robot, in Proceedings of<br />
the 2005 IEEE International Conference on Robotics <strong>and</strong> Automation, pp. 2301-2306.<br />
Zachry, T., Wulf, G., Mercer, J. <strong>and</strong> Bezodis, N. (2005). Increased movement accuracy <strong>and</strong> reduced EMG<br />
activity as the result of adopting an external focus of attention. Brain Research Bulletin, 67:304–309.<br />
Zengar Institute Inc., n.d. NeurOptimal® Neurofeedback System. [online] Available<br />
at [Accessed 13 May 2011].<br />
Zhai, J. <strong>and</strong> Barreto, A. (2006). Stress detection in computer users based on digital signal processing of<br />
noninvasive physiological variables. In Engineering in Medicine <strong>and</strong> Biology Society, 28th Annual<br />
International Conference of the IEEE, pages 1355–1358.<br />
Zhang, H-C., Zhu, M-L. (2004). Self-organized architecture for outdoor mobile robot navigation. Journal<br />
of Zhejiang University Science<br />
Zimmerli, L., Duschau-Wicke, A., Mayr, A., Riener, R., <strong>and</strong> Lunenburger, L. (2009). Virtual reality <strong>and</strong><br />
gait rehabilitation: augmented feedback for the Lokomat, in Proceedings of the 2009 IEEE Virtual<br />
Rehabilitation International Conference, pp. 150-153.<br />