Title: Context Learning Can Improve User Interaction
1. Context Learning Can Improve User Interaction
- Sushil J. Louis, Anil K. Shankar
- Evolutionary Computing Systems Lab (ECSL)
- Department of Computer Science and Engineering
- University of Nevada, Reno
- http://www.cs.unr.edu/anilk
- anilk@cs.unr.edu
- sushil@cs.unr.edu
2. Current UIs can be improved
- Hardware
- Keyboard, mouse, clock
- Software
- GUI
- Little personalization, no long-term memory
- Little use of context
- Advances in speech, vision, and text analysis
have not been well integrated
3. Can extended context improve UI?
- What sensors should we use?
- How do we use extended context to improve user interaction?
- Can we personalize interaction?
- Personalized, transportable UI: the PC is a stationary robot
4. Simple sensors provide context
- Good vision, speech recognition, and image or speech understanding are hard AI problems
- What can we do with simple sensors?
- Object recognition versus motion detection
- Speech recognition versus speech detection
- Keyboard activity
- Mouse activity
- Selected processes
5. Simple context allows richer user interaction
But every user has different answers!
- If there is no one in the room, should I pop up a scheduled appointment?
- If there is someone in the room, should I remind Jane?
- Should I turn down my music player when the telephone rings?
- Should I pause the current song when Jane leaves the room?
6. Sycophant uses ML techniques to learn context-to-action mappings
- Sycophant is a calendaring application that learns to predict preferred reminder actions
- Sycophant stores user interaction and context
- Sycophant learns to predict reminder type
7. Related Work
- Reba (Kulkarni, 1992): the PC is a stationary robot
- Bailey and Adamczyk (2004): interruptions disrupt users' emotional state and task performance
- Hudson, Fogarty, et al. (2003): predict interruptibility from context; a Wizard of Oz study (simulated sensors) achieved 82.4% accuracy
- Sycophant learns whether or not to interrupt the user, as well as how to interrupt the user
- Sycophant uses real sensors
8. Sycophant uses simple context to predict actions
- Sensors for context
- Keyboard, mouse
- Motion: http://motion.sourceforge.net and a cheap Logitech webcam
- Speech: http://www.speech.cs.cmu.edu, the Sphinx speech recognition engine. We only DETECT speech
- Five processes: java, bash, terminal, xscreensaver, mozilla
- Sycophant reminder actions (four classes)
- Visual (popup), Speech (TTS), Neither, Both
The user has to provide feedback on action suitability
10. Sycophant stores sensor data
- For each sensor and process we store whether the sensor was activated (15-second intervals):
- Any5: active at any point in the last 5 minutes
- All5: active for all of the last 5 minutes
- Any1: active at any point in the last 1 minute
- All1: active for all of the last 1 minute
- Immed: active in the last 15 seconds
- Count: number of times the sensor was active in the last 5 minutes
- User
((4 sensors + 5 processes) × 6 derived values + 1 user) = 55 total features
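The six derived values above can be computed directly from a per-sensor activation log. A minimal sketch in Python (function and variable names are hypothetical, not from Sycophant), assuming one boolean per 15-second interval with the newest sample last:

```python
def derive_features(ticks):
    """Derive Sycophant-style features from a per-sensor activation log.

    ticks: list of booleans, one per 15-second interval, newest last.
    """
    last5 = ticks[-20:]  # 5 minutes = 20 intervals of 15 s
    last1 = ticks[-4:]   # 1 minute  = 4 intervals of 15 s
    return {
        "Any5": any(last5),
        "All5": all(last5) and len(last5) == 20,
        "Any1": any(last1),
        "All1": all(last1) and len(last1) == 4,
        "Immed": bool(ticks[-1]) if ticks else False,
        "Count": sum(last5),  # activations in the last 5 minutes
    }
```

Computing these six values for each of the 4 sensors and 5 processes, plus the user identifier, yields the 55 features above.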
11. Sycophant uses WEKA ML tools
- Zero-R: predicts the majority class
- One-R: one-level decision tree testing one attribute
- J48: decision tree, like C4.5
- Bagging: voting over N decision trees
- LogitBoost: numerical model
- Naïve Bayes
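To make the two simplest baselines concrete, here are Zero-R and One-R sketched in plain Python (hypothetical implementations, not WEKA's code); J48, Bagging, LogitBoost, and Naïve Bayes would be run through WEKA itself:

```python
from collections import Counter, defaultdict

def zero_r(labels):
    """Zero-R: always predict the training majority class."""
    majority = Counter(labels).most_common(1)[0][0]
    return lambda row: majority

def one_r(rows, labels):
    """One-R: a one-level decision 'tree' over the single best attribute.

    For each attribute, map each observed value to its majority label;
    keep the attribute whose rule makes the fewest training errors.
    """
    best_attr, best_rule, best_errs = None, None, len(labels) + 1
    for a in range(len(rows[0])):
        counts = defaultdict(Counter)
        for row, y in zip(rows, labels):
            counts[row[a]][y] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        errs = sum(y != rule[row[a]] for row, y in zip(rows, labels))
        if errs < best_errs:
            best_attr, best_rule, best_errs = a, rule, errs
    fallback = Counter(labels).most_common(1)[0][0]
    return lambda row: best_rule.get(row[best_attr], fallback)
```

Zero-R gives the floor any learned classifier must beat; One-R shows how much a single context feature alone can predict.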
12. Results
- Performance of the decision tree inducer with different numbers of features
- Run J48 on all features, then choose the N most significant features
- Show performance on the N features with J48
Not much difference in performance with fewer features
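The select-then-retrain procedure amounts to ranking features and keeping the top N. A hypothetical simplification in Python, scoring each feature by its One-R-style single-attribute accuracy as a stand-in for J48's significance ranking:

```python
from collections import Counter, defaultdict

def top_n_features(rows, labels, n):
    """Rank features by single-attribute predictiveness and keep the best n.

    Score = training examples correctly classified by mapping each value of
    that one feature to its majority label (a One-R-style proxy).
    """
    scores = []
    for a in range(len(rows[0])):
        counts = defaultdict(Counter)
        for row, y in zip(rows, labels):
            counts[row[a]][y] += 1
        correct = sum(c.most_common(1)[0][1] for c in counts.values())
        scores.append((correct, a))
    keep = [a for _, a in sorted(scores, reverse=True)[:n]]
    return sorted(keep)  # indices of the n most predictive features
```

The classifier is then retrained on only the kept columns; the slide's finding is that shrinking N costs little accuracy.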
13. Results: Predicting user action
- Performance of different ML algorithms on the 25-feature data set on the four-class problem
Small differences in performance
14. Results: Two-class problem (Class 1: Remind, Class 2: No reminder)
- Significant increase in performance
- From 65% to 80%
15. Results
- Sycophant performs at 65% on the four-class problem
- Sycophant performs at 80% on the two-class problem
- Removing the motion and speech detectors results in a statistically significant decrease in performance
- Sample rules
- IF Keyboard Any5 AND speech count > 2 AND no motion in the last 1 minute AND appointment time > 1220 THEN generate Speech AND Popup reminders
- IF Keyboard Any5 AND speech count > 2 AND Keyboard Any1 THEN generate Speech only
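For illustration, the two sample rules can be written out as an explicit decision function (the dictionary keys and the fallback branch are hypothetical; the learned J48 tree has many more branches than these two):

```python
def reminder_action(ctx):
    """Encode the two sample rules from the slide as a decision function.

    ctx keys mirror the slide's feature names: keyboard_any5, speech_count,
    motion_any1, appt_time (appointment time), keyboard_any1.
    """
    # Rule 1: keyboard active in last 5 min, speech heard more than twice,
    # no motion in the last minute, late appointment -> both reminder types.
    if (ctx["keyboard_any5"] and ctx["speech_count"] > 2
            and not ctx["motion_any1"] and ctx["appt_time"] > 1220):
        return "speech+popup"
    # Rule 2: keyboard active recently and speech heard -> speech only.
    if (ctx["keyboard_any5"] and ctx["speech_count"] > 2
            and ctx["keyboard_any1"]):
        return "speech"
    return "unspecified"  # remaining branches of the learned tree
```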
16. Summary
- Sycophant uses machine learning tools to learn a mapping from user context to user actions
- Simple context provides good features
- Motion and speech sensors lead to a statistically significant performance improvement
- 65% accuracy on the four-class problem
- 80% accuracy on the two-class problem
17. Future Work
- We are developing a general architectural framework for a context-learning layer for all applications
- Improve performance
- We need more studies with other users and different types of users
- Feature subset selection
- Classifier systems
18. Acknowledgements
- Office of Naval Research, Contract Number N00014030104
- Evolutionary Computing Systems Lab (ECSL)
- Chris Miles
- Kai Xu
- Ryan Leigh
- http://ecsl.cs.unr.edu
- Anil K. Shankar
- http://www.cs.unr.edu/anilk
- Code, other papers