Clean Slate Ubicomp Device System Architecture

Transcript and Presenter's Notes

1
Clean Slate Ubicomp Device System Architecture
  • Jon Crowcroft,
  • http://www.cl.cam.ac.uk/jac22

2
(Scenario: speech bubbles from devices opportunistically carrying data for each other)
Thank you, but you are going in the opposite direction!
I can also carry for you!
I have 100M bytes of data; who can carry it for me?
Give it to me, I have 1G bytes of phone flash.
Don't give it to me! I am running out of storage.
Reach an access point.
There is one in my pocket
Internet
Search "La Boheme.mp3" for me
Finally, it arrives
Search "La Boheme.mp3" for me
Search "La Boheme.mp3" for me
3
System Architecture
  • Radical Device Stuff
  • Cloud Stuff
  • Research: split/migrate/cache/proxy/dynamics

4
Motivation 1
  • Mobile users currently have a very bad experience
    with networking
  • Applications do not work without networking
    infrastructure such as 802.11 access points or
    cell phone data coverage
  • Local connectivity is plentiful (WiFi, Bluetooth,
    etc), but very hard for end users to configure
    and use
  • Example: train/plane on the way to London
  • How to send a colleague sitting opposite some
    slides to review?
  • How to get information on restaurants in London?
    (Clue: someone else is bound to have it cached on
    their device)

5
Underlying Problem
  • Applications tied to network details and
    operations via use of the IP-based sockets interface
  • What interface to use
  • How to route to destination
  • When to connect
  • Apps survive by using directory services
  • Address book maps names to email addresses
  • Google maps search keywords to URLs
  • DNS maps domain names to IP addresses
  • Directory services mean infrastructure

6
Phase transitions and networks
  • Solid networks: wired, or fixed wireless mesh
  • Long-lived end-to-end routes
  • Liquid networks: Mobile Ad-Hoc Networking (MANET)
  • Short-lived end-to-gateway routes
  • Gaseous networks: Delay Tolerant Networking
    (DTN), Pocket Switched Networking (PSN)
  • No routes at all!
  • Opportunistic, store and forward networking
  • One way paths, asymmetry, node mobility carries
    data
  • Haggle targets all three, so must work in most
    general case, i.e. gaseous

7
Haggle Overview
  • Clean-slate redesign of mobile node
  • Spans MAC to application layers (inclusive), but
    is itself layerless: it uses six managers
  • Key Features:
  • Store-and-forward architecture with data
    persisting inside Haggle, not a separate file system
  • App-layer protocols (SMTP, HTTP, etc) moved into
    Haggle rather than apps themselves
  • Forwarding decisions made on name graphs
    allowing just-in-time binding
  • Resource management integrated
  • API is asynchronous and data-centric

8
Overview of D3N Architecture
// Registering a proximity event
listenerEvent.register (Event.OnEncounter, fun (d : device) ->
    if d.nID = B && distance (self, d) < 3 then dispatch (NodeEncountered d))
  • Each node is responsible for storing, indexing,
    searching, and delivering data
  • Primitive functions associated with core D3N
    calculus syntax are part of the runtime system
  • Prototype on MS Mobile .NET with the Haggle runtime

9
Data-Driven Declarative Networking (D3N)
  • How to program distributed computation?
  • Declarative is a new idea in networking
  • Ex. P2: building overlays; network properties
    specified declaratively
  • Use of Functional Programming: functions are
    first-class values and can be both the input and
    the result of other functions
  • FP: simple/clean semantics, expressive, inherent
    parallelism
  • Queries/filters etc. can be expressed as
    higher-order functions that are applied in a
    distributed setting
  • Runtime system provides the necessary native
    library functions that are specific to each
    device
  • Prototype: F# + .NET for mobile devices
  • Similar approach to the LINQ project
  • Extends .NET Framework with language-integrated
    operations for querying, storing and transforming
    data (targets .NET)

10
  • Example: Query to Networks
  • Queries are part of source level syntax
  • Distributed execution (single node programmer
    model)
  • Familiar syntax

D3N:
select name from poll() where institute = "Computer Laboratory"

F#:
poll() |> filter (fun r -> r.institute = "Computer Laboratory")
       |> map (fun r -> r.name)

(Figure: the query is disseminated across nodes A-F)
Message
  • (code, nodeid, TTL, data)
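
A minimal, self-contained F# sketch of the desugared pipeline above, run over a local list. The Person record and the hard-coded poll() are illustrative stand-ins, not the D3N runtime; in D3N the filter/map would execute across nodes.

// Self-contained F# sketch of the pipeline above (illustrative only).
type Person = { name : string; institute : string }

// Stand-in for poll(): in D3N this would gather records from nearby nodes.
let poll () =
    [ { name = "Alice"; institute = "Computer Laboratory" }
      { name = "Bob";   institute = "Engineering" } ]

// select name from poll() where institute = "Computer Laboratory"
let result =
    poll ()
    |> List.filter (fun r -> r.institute = "Computer Laboratory")
    |> List.map (fun r -> r.name)

printfn "%A" result   // ["Alice"]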
11
Trust, the Cloud Society-Cloud Atlas
  • Anil Madhavapeddy (Cambridge/Imperial) and
  • Daniele Quercia (UCL/MIT)

12
Who do you trust?
  • Your phone
  • lost/stolen/broken/hacked
  • The cloud
  • unreachable, goes broke, spies on you
  • Solution spaces
  • Migrate cloud state on/off the phone
  • Need encapsulation of this (a social VM)?
  • P2P solution too (nearby devices)
  • Resource (sensor) pooling
  • Social sign-on: scales? usability?

13
Empirical Stuff
14
Why measure human mobility?
  • Mobility increases capacity of dense mobile
    networks [Tse/Grossglauser]
  • Also creates dis-connectivities
  • Human mobility patterns determine communication
    opportunities
  • And discover social groupings - see later for
    resource allocation (e.g. spectrum)

15
Experimental setup
  • iMotes
  • ARM processor
  • Bluetooth radio
  • 64k flash memory
  • Bluetooth Inquiries
  • 5 seconds every 2 minutes
  • Log MAC address, start time, end time tuple of
    each contact

16
Experimental devices
17
Contact and Inter-contact time
  • Inter-contact is important
  • Affects the feasibility of opportunistic networks
  • Nature of distribution affects choice of
    forwarding algorithm
  • Rarely studied

18
Infocom 2005 experiment
  • 54 iMotes distributed
  • Experiment duration: 3 days
  • 41 yielded useful data
  • 11 with battery or packaging problems
  • 2 not returned
  • Data is on CRAWDAD; we have several other datasets
    from Hong Kong, Barcelona, Cambridge and Lisbon, and
    of course other people have done this too

19
Brief summary of data
  • 41 iMotes
  • 182 external devices
  • 22459 contacts between iMotes
  • 5791 contacts between iMote/external device
  • External devices are non-iMote devices in the
    environment, e.g. BT mobile phone, Laptop.

20
Contacts seen by an iMote
(Figure: contacts seen, split into iMotes and external devices)
21
Analysis of Conference Mobility Patterns
22
Contact and Inter-contact Distribution
(Figure: contact and inter-contact time distributions)
23
What do we see?
  • Power law distribution for contact and
    Inter-contact time
  • Both iMotes and external nodes
  • Does not agree with currently used mobility
    models, e.g. random waypoint
  • Power law coefficient < 1

24
K-clique Communities in Cambridge Dataset
25
K-clique Communities in Infocom06 Dataset
(Figure: k = 4 clique communities)
26
Backup Architecture
27
Data Objects (DOs)
  • DO = a set of attributes (type, value pairs)
  • Exposing metadata facilitates search
  • Can link to other DOs
  • To structure data that should be kept together
  • To allow apps to categorise/organise
  • Apps/Haggle managers can claim DOs to assert
    ownership

(Figure: a Message DO linked to an Attachment DO)
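
A minimal F# sketch of a Data Object as described above: an attribute bag, links to other DOs, and an ownership claim. The field names are assumptions for illustration, not Haggle's actual types.

// Illustrative Data Object: (type, value) attribute pairs, links, ownership.
type Attribute = { attrType : string; value : string }

type DataObject =
    { id         : System.Guid
      attributes : Attribute list
      links      : System.Guid list     // other DOs this one is linked to
      claimedBy  : string option }      // app/manager asserting ownership

// Example: a message DO linked to an attachment DO.
let attachment =
    { id = System.Guid.NewGuid ()
      attributes = [ { attrType = "content-type"; value = "image/jpeg" } ]
      links = []
      claimedBy = None }

let message =
    { id = System.Guid.NewGuid ()
      attributes = [ { attrType = "content-type"; value = "text/plain" }
                     { attrType = "keywords";     value = "slides review" } ]
      links = [ attachment.id ]
      claimedBy = Some "email-app" }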
28
DO Filters
  • Queries on fields of data objects
  • E.g. content-type EQUALS "text/html" AND
    keywords INCLUDES "news" AND timestamp >
    (now() - 1 hour)
  • DO filters are also a special case of DOs
  • Haggle itself can match DOFilters to DOs; apps
    don't have to be involved
  • Can be persistent or be sent remotely
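
A minimal F# sketch of the example filter above as a predicate over the DataObject sketched on the previous slide; the attribute names and the attr helper are assumptions, not Haggle's API.

// Illustrative DO filter: content-type EQUALS "text/html" AND
// keywords INCLUDES "news" AND timestamp > now() - 1 hour.
open System

let attr (d : DataObject) (name : string) =
    d.attributes
    |> List.tryFind (fun a -> a.attrType = name)
    |> Option.map (fun a -> a.value)

let exampleFilter (now : DateTime) (d : DataObject) =
    attr d "content-type" = Some "text/html"
    && (match attr d "keywords" with
        | Some ks -> ks.Contains "news"
        | None -> false)
    && (match attr d "timestamp" with
        | Some ts -> DateTime.Parse ts > now.AddHours(-1.0)
        | None -> false)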

29
DO Filter is a powerful mechanism
30
Layerless Naming
  • Haggle needs just-in-time binding of user level
    names to destinations
  • Q: when messaging a user, should you send to
    their email server or look in the neighbourhood
    for their laptop's MAC address?
  • A: Both, even if you already reached one. E.g.
    you can send email to a server and later pass
    them in the corridor, or you could see their
    laptop directly, but they aren't carrying it
    today so you'd better email it too
  • Current layered model requires ahead-of-time
    resolution by the user themselves in the choice
    of application (e.g. email vs SMS)

31
Name Graphs comprised of Name Objects
  • Name Graph represents full variety of ways to
    reach a user-level name
  • NO = a special class of DO
  • Used as destinations for data in transit
  • Names and links between names obtained from:
  • Applications
  • Network interfaces
  • Neighbours
  • Data passing through
  • Directories
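
A minimal F# sketch of a name graph: one user-level name with the concrete names it can resolve to. The constructors are illustrative assumptions, not Haggle's types; the email address is the one given on the final slide.

// Illustrative name graph: a user-level name refined into concrete addresses.
type NameObject =
    | Person  of string                          // user-level name
    | Email   of string                          // SMTP address
    | MacAddr of string                          // Bluetooth/WiFi MAC
    | Refines of NameObject * NameObject list    // parent and its refinements

// "James" is reachable via an email server or, when nearby, via a laptop MAC.
let james =
    Refines (Person "James",
             [ Email "jamesscott@acm.org"
               MacAddr "00:11:22:33:44:55" ])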

32
Forwarding Objects
  • Special class of DO used for storing metadata
    about forwarding
  • TTL, expiry, etc.
  • Since full structure of naming and data is sent,
    intermediate nodes are empowered to:
  • Use data as they see fit
  • Use up-to-date state and whole name graph to make
    best forwarding decision

(Figure: a Forwarding Object linking several DOs to a name graph of NOs)
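
Continuing the sketches above, a Forwarding Object might bundle the forwarding metadata with the DOs and the whole name graph; the field names are assumptions.

// Illustrative Forwarding Object: metadata plus data plus the name graph,
// so intermediate nodes can make their own forwarding decisions.
type ForwardingObject =
    { ttl         : int               // remaining hops before the FO is dropped
      expiry      : System.DateTime   // discard after this time
      data        : DataObject list   // the DOs being carried
      destination : NameObject }      // full name graph, bound just in time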
33
Connectivities and Protocols
  • Connectivities (network interfaces) say which
    neighbours are available (including Internet)
  • Protocols use this to determine which NOs they
    can deliver to, on a per-FO basis
  • P2P protocol says it can deliver any FO to
    neighbour-derived NOs if corresponding neighbour
    is visible
  • HTTP protocol can deliver FOs which contain a
    DOFilter asking for a URL, if Internet
    neighbour is present
  • Protocols can also perform tasks directly
  • POP protocol creates EmailReceiveTask when
    Internet neighbour is visible

34
Forwarding Algorithms
(Figure: a matrix of FOs against (Protocol, Name, Neighbour) triples; each
forwarding algorithm marks entries x with the scalar benefit of a forwarding
task)
  • Forwarding algorithms create Forwarding Tasks to
    send data to suitable next-hops
  • Can also create Tasks to perform signalling
  • Many forwarding algs can run simultaneously
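
A minimal F# sketch of a forwarding algorithm in this style: score each (protocol, name, neighbour) option for each FO and emit forwarding tasks. The scoring rule is a simplistic stand-in, not one of Haggle's actual algorithms.

// Illustrative forwarding algorithm: assign a scalar benefit x to delivering
// each FO via each currently usable (protocol, name, neighbour) triple.
type NextHop = { protocol : string; nameObject : string; neighbour : string }
type FwdTask = { foId : System.Guid; hop : NextHop; benefit : float }

// Simplistic scorer: a hop whose name object already matches the FO's
// destination gets full benefit, any other visible neighbour a small one.
let score (destination : string) (hop : NextHop) : float =
    if hop.nameObject = destination then 1.0 else 0.2

let makeTasks (foId : System.Guid) (destination : string) (hops : NextHop list) =
    hops |> List.map (fun h -> { foId = foId; hop = h; benefit = score destination h })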

35
Resource Management Tasks and Cost/Benefit
  • Task( Benefit getBenefit(), Cost getCost() )
  • Cost: energy, time on network X, money
  • Benefit: app, user, forwarding
  • Resource manager does cost/benefit comparison
    using some utility function (see the sketch below)
  • Owner's preferences must also be applied
  • E.g. don't spend my money on others' traffic
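
A minimal F# sketch of the task interface above and a resource manager's utility test; the weights and the owner-preference rule are assumptions.

// Illustrative Task( getBenefit(), getCost() ) and a cost/benefit utility.
type Cost    = { energyJ : float; networkSec : float; money : float }
type Benefit = { app : float; user : float; forwarding : float }

type Task =
    { name       : string
      getBenefit : unit -> Benefit
      getCost    : unit -> Cost }

// Owner preference applied: never spend money on other people's traffic.
let utility (isOthersTraffic : bool) (t : Task) : float =
    let b = t.getBenefit ()
    let c = t.getCost ()
    if isOthersTraffic && c.money > 0.0 then
        System.Double.NegativeInfinity
    else
        let totalBenefit = b.app + b.user + b.forwarding
        let totalCost    = 0.01 * c.energyJ + 0.1 * c.networkSec + 1.0 * c.money
        totalBenefit - totalCost

let worthRunning isOthersTraffic task = utility isOthersTraffic task > 0.0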

36
Implications of using Tasks
  • Tasks can come from all managers: HTTP fetch,
    email receive, neighbour discovery, etc.
  • Key illustration of the layerless architecture
  • Tasks are executed at dynamic times/intervals (or
    not done at all) based on:
  • Current resource costs
  • Other tasks
  • User priorities/preferences
  • Too many tasks is fine, even encouraged,
    unlike with IP where network operations are
    queued
  • E.g. it is hard for a current web client to
    express "predictively download these pages, if
    energy and bandwidth are plentiful and free"

37
Neighbour Discovery
Applications (messaging, web, etc)
Haggle Application Interface
Protocol
Resource
Data
Name
Connectivity
  • Connectivities have responsibility for neighbour
    discovery
  • Protocols use neighbours to mark NOs nearby
  • Resource management controls frequency of
    neighbour discovery

Forwarding
1. Set task
2. Execute
5. Insert new names
4. Neighbour list
3. Discovery
Connectivities (WiFi, BT, GPRS, etc)
38
Sending Data
Applications (messaging, web, etc)
3. Call send
1. Insert data
2. Insert names
Haggle Application Interface
Protocol
Resource
Data
Name
Connectivity
5. Set task send via X
  • API for send split into three (sets of) calls
  • FO can be sent to many nodes using many protocols
  • Asynchronous
  • Benefits of send change with time and context

4. Decide next hop X
6. Execute (when worth it)
Forwarding
8. Get and encode data
7. Send!
10. Connect and transmit
9. Raw data
Connectivities (WiFi, BT, GPRS, etc)
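
A minimal F# sketch of the three application-side calls (insert data, insert names, send). The class and method names are illustrative stand-ins, not the real Haggle interface.

// Illustrative application-side send: three asynchronous calls.
type HaggleHandle () =
    let mutable pendingData  : string list = []
    let mutable pendingNames : string list = []
    member this.InsertData (d : string) = pendingData  <- d :: pendingData
    member this.InsertName (n : string) = pendingNames <- n :: pendingNames
    member this.Send () =
        // Returns immediately; the forwarding and resource managers decide
        // later when, and over which connectivity, transmission is worth it.
        printfn "queued %d data objects for %d names"
            pendingData.Length pendingNames.Length

let h = HaggleHandle ()
h.InsertData "slides.ppt"          // 1. insert data
h.InsertName "colleague's laptop"  // 2. insert names (a name graph in reality)
h.Send ()                          // 3. send, asynchronously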
39
Receiving Data
Applications (messaging, web, etc)
7. Notify interested apps
Haggle Application Interface
8. Mine names
Protocol
Resource
Data
Name
Connectivity
  • Incoming data is still processed using tasks
  • Eventually inserted into Data Manager
  • Apps listen by registering interests (DOFilters)

5. Incoming data
6. Insert data objects
Forwarding
2. Incoming connection
1. Bind to networks
4. Query resource use
3. Connection
Connectivities (WiFi, BT, GPRS, etc)
40
Aside on security etc
  • Security was left out for version 1 in this
    4-year EU project, but threats were considered
  • Data security can reuse existing solutions of
    authentication/encryption
  • With proviso that it is not possible to rely on a
    synchronously available trusted third party
  • Some new threats to privacy
  • Neighbourhood visibility means trackability
  • Name graphs could include quite private
    information
  • Incentives to cooperate are an issue
  • Why should I spend any bandwidth/energy on your
    stuff?

41
Implementation Status
  • Implemented in Java for XP and Linux
  • Ported to Windows Mobile using C (Java also
    runs)
  • Code at http://haggle.cvs.sourceforge.net/
  • Connectivity: WiFi, GPRS, (Bluetooth, SMS)
  • Protocol: P2P, SMTP, POP, HTTP, (Search)
  • Data: SQL, in-memory, (SQLite)
  • Name, Resource, Forwarding: good defaults
  • Forwarding algorithms: Direct, Epidemic, ()
  • Apps: Email, Web (both via proxies)

42
D3N: Programming Distributed Computation in
Pocket Switched Networks (Extra slides)
  • Eiko Yoneki, Ioannis Baltopoulos and Jon
    Crowcroft
  • University of Cambridge Computer Laboratory
  • Systems Research Group

Data Driven Declarative Networking
43
Declarative Networking
  • Declarative is a new idea in networking
  • e.g. Search: what to look for rather than how
    to look for it
  • Abstract complexity in networking/data processing
  • P2: building overlays using Overlog
  • Network properties specified declaratively
  • LINQ: extends .NET with language-integrated
    operations to query/store/transform data
  • DryadLINQ: extends LINQ, similar to Google's
    MapReduce
  • Automatic parallelization from sequential
    declarative code
  • Opis: functional-reactive approach in OCaml

44
D3N: Data-Driven Declarative Networking
  • How to program distributed computation?
  • Use Declarative Networking
  • Use of Functional Programming
  • Simple/clean semantics, expressive, inherent
    parallelism
  • Queries/filters etc. can be expressed as
    higher-order functions that are applied in a
    distributed setting
  • Runtime system provides the necessary native
    library functions that are specific to each
    device
  • Prototype: F# + .NET for mobile devices

45
D3N and Functional Programming I
  • Functions are first-class values
  • They can be both input and output of other
    functions
  • They can be shared between different nodes (code
    mobility)
  • Not only data but also functions flow
  • Language syntax does not have state
  • Variables are only ever assigned once, hence
    reasoning about programs becomes easier
  • (of course message passing and threads encode
    state)
  • Strongly typed
  • Static assurance that the program does not go
    wrong at runtime, unlike scripting languages
  • Type inference
  • Types are not declared explicitly, hence programs
    are less verbose

46
D3N and Functional Programming II
  • Integrated features from query languages
  • Assurance as in logic programming
  • Appropriate level of abstraction
  • Imperative languages closely specify the
    implementation details (how); declarative
    languages abstract too much (what)
  • Imperative: predictable performance
  • Declarative languages abstract away many
    implementation issues

47
Overview of D3N Architecture
  • Each node is responsible for storing, indexing,
    searching, and delivering data
  • Primitive functions associated with core D3N
    calculus syntax are part of the runtime system
  • Prototype on MS Mobile .NET

48
D3N Syntax and Semantics I
  • Very few primitives
  • Integers, strings, lists, floating-point numbers
    and other primitives are recovered through
    constructor application
  • Standard FP features
  • Declaring and naming functions through
    let-bindings
  • Calling primitive and user-defined functions
    (function application)
  • Pattern matching (similar to a switch statement)
  • Standard features as in ordinary programming
    languages (e.g. ML or Haskell)

49
D3N Syntax and Semantics II
  • Advanced features
  • Concurrency (fork)
  • Communication (send/receive primitives)
  • Query expressions (local and distributed select)

50
D3N Language (Core Calculus Syntax)
// Registering a proximity event
listenerEvent.register (Event.OnEncounter, fun (d : device) ->
    if d.nID = B && distance (self, d) < 3 then dispatch (NodeEncountered d))
51
Runtime System
  • Language relies on a small runtime system
  • Operations implemented in the runtime system are
    written in F#
  • Each node is responsible for data:
  • Storing
  • Indexing
  • Searching
  • Delivering
  • Data has Time-To-Live (TTL)
  • Each node propagates data to the other nodes.
  • A search query w/TTL travels within the network
    until it expires
  • When the node has the matching data, it forwards
    the data
  • Each node gossips its own metadata when it meets
    other nodes
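
A minimal F# sketch of the TTL-scoped search just described: a node answers a query from its local store if it can, otherwise it re-forwards the query while the TTL lasts. The store and the two callbacks are stand-ins for the runtime's primitives.

// Illustrative TTL-scoped search: answer locally or re-forward with TTL - 1.
type Query = { keyword : string; ttl : int; origin : string }

let handleQuery (localStore : Map<string, string>)
                (forwardTo : Query -> unit)
                (replyTo : string -> string -> unit)
                (q : Query) =
    match Map.tryFind q.keyword localStore with
    | Some data -> replyTo q.origin data                 // we have it: send it back
    | None when q.ttl > 1 -> forwardTo { q with ttl = q.ttl - 1 }
    | None -> ()                                         // TTL expired: drop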

52
Kernel Event Handler
  • Kernel maintains
  • An event queue (queue)
  • A list of functions for each event (fenc, fdep)
  • Kernel processes
  • It removes an event from the front of the queue
    (e)
  • Pattern matches against the event type
  • Calls all the registered functions for the
    particular event
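
A minimal F# sketch of that loop, reading fenc/fdep as the handler lists for encounter and departure events (an assumption; the real kernel's event types differ).

// Illustrative kernel loop: dequeue an event, pattern-match on its type,
// call every handler registered for that event type.
type Event =
    | NodeEncountered of string    // a device came into range
    | NodeDeparted    of string    // a device went out of range

let rec kernelLoop (queue : Event list)
                   (fenc : (string -> unit) list)
                   (fdep : (string -> unit) list) =
    match queue with
    | [] -> ()                     // queue empty: nothing to process
    | e :: rest ->
        (match e with
         | NodeEncountered d -> fenc |> List.iter (fun f -> f d)
         | NodeDeparted d    -> fdep |> List.iter (fun f -> f d))
        kernelLoop rest fenc fdep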

53
  • Example: Query to Networks
  • Queries are part of source level syntax
  • Distributed execution (single node programmer
    model)
  • Familiar syntax

D3N:
select name from poll() where institute = "Computer Laboratory"

F#:
poll() |> filter (fun r -> r.institute = "Computer Laboratory")
       |> map (fun r -> r.name)

(Figure: the query is disseminated across nodes A-F)
Message
(code, nodeid, TTL, data)
54
Example Vote among Nodes
  • Voting application implements a distributed
    voting protocol for choosing a location for dinner
  • Rules:
  • Each node votes once
  • A single node initiates the application
  • Ballots should not be counted twice
  • No infrastructure-based communication is available,
    or it is too expensive
  • Top-level expression
  • Node A sends the code to all nodes
  • Nodes map in parallel (pmap) the function
    voteOfNode to their local data, and send back the
    result to A
  • Node A aggregates (reduce) the results from all
    nodes and produces a final tally

55
Sequential Map function (smap)
  • Inner working
  • It sends the code to execute on the remote node
  • It blocks waiting for a response from the node
  • Continues mapping the function to the rest of the
    nodes in a sequential fashion
  • An unavailable node blocks the entire computation

56
Parallel Map Function (pmap)
  • Inner working
  • Similar to the sequential case
  • The send/receive for each node happen in a
    separate thread
  • An unavailable node does not block the entire
    computation

(Figure: node A applies pmap to nodes B-G in parallel)
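
A minimal F# sketch of pmap as described: one async per node, so a node that never answers simply yields None instead of blocking the others. sendAndReceive stands in for the runtime's code-shipping send/receive primitives and is assumed to time out internally.

// Illustrative pmap: apply f "at" every node in parallel.
let pmap (sendAndReceive : string -> ('a -> 'b) -> 'b option)
         (f : 'a -> 'b)
         (nodes : string list) : 'b option list =
    nodes
    |> List.map (fun node -> async { return sendAndReceive node f })
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Array.toList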
57
Reduce Function
  • Inner working
  • The reduce function aggregates the results from a
    map
  • The reduce gets executed on the initiator node
  • All results must have been received before the
    reduce can proceed

58
Voting Application Code
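
The original slide showed the voting code as an image. Here is a minimal F# sketch consistent with the description on slide 54; voteOfNode, the ballot format and the tally are illustrative, and pmap is the sketch from slide 56.

// Illustrative voting application: each node casts one ballot, the
// initiator tallies them.
let voteOfNode (localPreferences : string list) : string =
    List.head localPreferences          // this node's single ballot

let tally (ballots : string list) =
    ballots |> List.countBy id |> List.sortByDescending snd

// At the initiator A (commented, since pmap/sendAndReceive are sketches):
//   let ballots = pmap sendAndReceive voteOfNode nodes |> List.choose id
//   let result  = tally ballots       // e.g. [("curry house", 3); ("pizza", 1)]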
59
Cascaded Map Function
  • The social graph can be exploited for the map function
  • Logical topology extracted from social networks
  • Construct a minimum spanning tree from node A
  • Use the tree to navigate the task

(Figures: (a) social graph over nodes A-G; (b) nodes for map at A; (c) nodes
for map at B)
60
Outlook and Future Work
  • Current reference implementation
  • F# targeting the .NET platform, taking advantage of a
    vast collection of .NET libraries for
    implementing D3N primitives
  • Future work
  • Security issues are currently out of the scope of
    this paper (executable code migrating from node
    to node)
  • Validate and verify the correctness of the design
    by implementing a compiler targeting various
    mobile devices
  • Release the code in the public domain
  • http://www.cl.cam.ac.uk/ey204
  • Email: eiko.yoneki@cl.cam.ac.uk

61
Prototype Application Email
  • Standard email client
  • Haggle provides localhost SMTP and POP servers
  • Emails become FO/DO/NOs
  • Protocols
  • SMTP marks email addrs nearby when Internet is
    available
  • P2P marks MAC addrs nearby when visible
  • POP creates task to receive email when Internet
    visible (allows dynamic email check interval)

62
Prototype Application Web
  • HTTP request becomes DO filter for URL
  • Standard web clients used
  • Haggle acts as web proxy
  • Handled via
  • Local DOs (including data being routed for
    others)
  • HTTP protocol to Internet
  • P2P protocol to neighbours
  • Extension: search
  • Search page translated into DO filter on content
    not URL
  • Search protocol indirects via search engine and
    gets top N links

(Figure: the web client sends an object request over HTTP to the local
Haggle, which holds an internal object cache and fetches over HTTP from the
Internet web server or from a neighbouring Haggle's cache)
63
Email Performance
  • Haggle/infrastructure and without-Haggle are
    comparable (i.e. proxying etc. adds acceptably low
    overhead)
  • Haggle/ad hoc wins not just in elapsed time but
    also because the email account would not accept >5MB
    attachments
  • Haggle/both is in between, with more variability
  • Due to neighbour sensing, email checking, etc.
    causing 802.11 to switch between AP and ad-hoc
    mode often
  • Being addressed with the MultiNet approach
    [Chandra, Bahl]
  • Also exploring >1 network type (e.g. add GPRS)

64
Future work idea: Resource-aware media sharing
  • Devices could do a lot of media sharing
    proactively
  • Downloading public content predictively (e.g.
    webpages)
  • Backing up/keeping multiple caches of personal
    content (e.g. photos, media)
  • Sharing media with friends/family/public
  • But resource management is crucial
  • Contrast laptop in docking station to laptop on
    plane
  • Proactive tasks can exhaust battery life easily
  • Current architecture does not suffice
  • IP stack is difficult to configure with user
    priorities (e.g. when should smart phone use GPRS
    and when WiFi?)
  • Current file system makes it hard for apps to
    fill disk with predictive content

65
Using Haggle for resource-aware media sharing
  • Haggle can store application data predictively
    and evict it if more important data comes along
  • Keep your photo albums in empty laptop space
  • Keep public media in TiVo-like fashion
  • Haggle makes forwarding decisions with knowledge
    of resource consumption and application
    priorities
  • Doesn't ship holiday snaps over GPRS from
    Australia
  • Haggle can share data with neighbours,
    facilitating use of free fast local connectivity
    wherever possible
  • Syncing without ActiveSync

66
Backup Empirical Stuff
67
Implication of Power Law Coefficient
  • Larger coefficient => smaller delay
  • Consider 2-hop relaying [Tse/Grossglauser
    analysis, Tech. Report]
  • Denote the power law coefficient as α
  • For α > 2:
  • Any stateless algorithm achieves a finite
    expected delay.
  • For α > (m+1)/m and number of nodes >= 2m:
  • There exists a forwarding algorithm with m copies
    and a finite expected delay.
  • For α < 1:
  • No stateless algorithm (even flooding) achieves a
    bounded delay (Orey's theorem).
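
A one-line justification of the α <= 1 case, added here for clarity and assuming the inter-contact time T has a pure power-law tail:

P[T > t] \sim c\, t^{-\alpha}
\;\Rightarrow\;
\mathbb{E}[T] = \int_0^{\infty} P[T > t]\, dt = \infty
\quad \text{for } \alpha \le 1

so the expected wait until the next forwarding opportunity is already unbounded.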

68
Frequency of sightings and pairwise contact
69
What do we see?
  • Nodes are not equal: some are active and some are not
  • Does not agree with current mobility models, in
    which nodes are equally distributed
  • iMotes are seen more often than external addresses
  • More iMote pair contacts
  • Identify sharing communities to improve the
    forwarding algorithm

70
Influence of time of day
71
What do we see?
  • Still a power law distribution for any three-hour
    period of the day
  • Different power law coefficients at different
    times
  • Maybe use different forwarding algorithms at
    different times of the day

72
Consequences for mobile networking
  • Mobility models need to be redesigned
  • Exponential decay of inter-contact times is wrong
  • Mechanisms tested with that model need to be
    analyzed with new mobility assumptions
  • Stateless forwarding does not work
  • Can we benefit from heterogeneity to forward by
    communities?
  • Should we consider different algorithms for
    different times of the day?

73
Human Hubs Popularity
(Figure: hub popularity in the Reality, Cambridge, Infocom06 and HK datasets)
74
Forwarding Scheme Design Space
(Figure: design space spanning a human dimension, from explicit social
structure (Label) to structure in cohesive groups (Clique Label), and a
network plane with structure in degree (Rank, Degree); Bubble sits across
both)
75
Social Structure Based Forwarding in DTNs /
MANETs
  • BUBBLE RAP: use of social hubs (e.g. celebrities
    and postmen), identified via betweenness centrality,
    combined with community structure for improved
    forwarding efficiency.
  • Implemented as a forwarding component of the
    Haggle (http://www.haggleproject.org) framework.
    Haggle provides data-centric opportunistic
    networking with persistence of search, delay
    tolerance and opportunism.
  • Metrics and Forwarding
  • Preferred neighbours can be:
  • clique-based, using distributed detection or an
    explicit label (e.g. same squad);
  • based on centrality in the detected graph.
  • Bubble combines both (see the sketch below).
  • Majority of DTN routing is based on epidemic,
    optimized flooding algorithms
  • Limited number of copies
  • Context-based: PROPHET
  • Use of mobility: FERRY

Social Structure Based Forwarding (BUBBLE)
RANK: centrality-based forwarding; each node has a
global and a local ranking of popularity
LABEL: community-based forwarding
BUBBLE RAP: combination of centrality and community
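
A minimal F# sketch of the BUBBLE RAP hand-over rule implied above: climb the global popularity ranking until the carrier is inside the destination's community, then climb the local ranking. Field names and the exact rule details are assumptions.

// Illustrative BUBBLE RAP decision: should the carrier hand the message to
// an encountered node, given the destination's community?
type NodeInfo =
    { id         : string
      community  : string      // detected community or explicit label
      globalRank : float       // global centrality / popularity
      localRank  : float }     // centrality within its own community

let shouldForward (destCommunity : string) (carrier : NodeInfo) (encountered : NodeInfo) =
    if encountered.community = destCommunity then
        // hand into the destination community, or climb its local hierarchy
        carrier.community <> destCommunity || encountered.localRank > carrier.localRank
    else
        // still outside the destination community: climb the global hierarchy
        carrier.community <> destCommunity && encountered.globalRank > carrier.globalRank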
76
Social Structure Based Forwarding in DTNs /
MANETs (Cont.)
BUBBLE performance versus flooding, wait, or pure
LABEL: the cost reduction with BUBBLE is significant.
BUBBLE should be effective for hierarchical
community structure; to be evaluated. Scaled
Haggle experiments (up to 200 devices) on the
university campus are planned. This will
demonstrate BUBBLE/LABEL performance in the real
world.
Social-based forwarding (e.g. use of community
structure and centrality) shows significant
improvement in forwarding efficiency
77
Use affiliation/hubs to forward inter/intra cliques
78
The End!
  • Some NOs for you
  • jamesscott@acm.org
  • http://www.haggleproject.org/
  • Thanks for listening!
  • Any questions?