Title: "Toward a PeertoPeer Shared Virtual Reality" J. KELLER,G. SIMON IRISA INRIARennes, France Telecom R
1"Toward a Peer-to-Peer Shared Virtual Reality"
J. KELLER,G. SIMON IRISA / INRIA-Rennes, France
Telecom RD, IEEE Workshop on Resource Sharing in
Massively Distributed Systems (RESH02), 2002
- Louis Launoy
- January 29, 2007
2 The SOLIPSIS project
- Unlimited number of users
- Real-time interaction and co-presence
- Different end devices and connections
- P2P collaborative design
Why P2P?
- Except in omniscient games, only locality matters
- Scalability vs. multicast
- Scalability vs. client-server design
- Heterogeneous devices vs. multicast
3 SOLIPSIS dynamic peer network
- Each node (also called an entity) is a user or an object
- Each entity e can communicate
- Coordinates (xe, ye)
- Awareness radius r
Consistency holds as long as the awareness principle
is true everywhere
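The awareness principle can be sketched in a few lines of Python: an entity must know every other entity inside its awareness radius. The class layout and names below are illustrative, not the paper's.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Sketch of a SOLIPSIS entity: coordinates plus an
    awareness radius (field names are our own)."""
    x: float
    y: float
    radius: float                              # awareness radius r
    known: set = field(default_factory=set)    # ids of entities this one knows

def distance(a: Entity, b: Entity) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def awareness_holds(eid: int, world: dict) -> bool:
    """Awareness principle: the entity must know every other
    entity that lies within its awareness radius."""
    e = world[eid]
    return all(other_id in e.known
               for other_id, other in world.items()
               if other_id != eid and distance(e, other) <= e.radius)
```

Consistency then means running this check for every entity in the world.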
4 Maintaining the topology
- Geometrical approach to global connectivity
- Have neighbors in every 180° sector
- If not, recursively ask neighbors for some
- World without any edge (torus, sphere)
- Dynamic neighbors
- Rule: for all e1 in k(e),
- if e0 enters k(e1),
- send e0's data to e1
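The 180° sector rule can be tested by looking at angular gaps between neighbor directions: if any gap reaches 180°, some half-plane around the entity is empty. A minimal sketch (entities as (x, y) tuples; this is our own check, not the paper's algorithm):

```python
import math

def has_global_connectivity(e, neighbors):
    """True if e has neighbors in every 180-degree sector, i.e.
    no angular gap of pi or more between consecutive neighbor
    directions. e and neighbors are (x, y) tuples."""
    if len(neighbors) < 2:
        return False
    angles = sorted(math.atan2(ny - e[1], nx - e[0])
                    for nx, ny in neighbors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(angles[0] + 2 * math.pi - angles[-1])  # wrap-around gap
    return max(gaps) < math.pi
```

When the check fails, the recursive "ask neighbors for some" step is triggered.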
5 Dynamic resources
- Connection drop: if no neighbor is found, increase the
resource radius to maintain the awareness principle.
- Connection recovery / teleportation:
- Find the nearest node Nj to the destination.
- From Nj, get the other neighbors of that node.
- Then the node's awareness is achieved.
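The teleportation steps amount to a greedy walk toward the destination coordinates, then adopting the nearest node's neighbor list. A toy sketch under our own data layout (the `pos` and `neighbors` dicts are assumptions):

```python
def teleport(start, dest, pos, neighbors):
    """Greedy walk toward dest: repeatedly hop to the neighbor
    closest to the destination, stopping at the nearest node Nj.
    pos maps node -> (x, y); neighbors maps node -> list of nodes."""
    def dist2(n):
        x, y = pos[n]
        return (x - dest[0]) ** 2 + (y - dest[1]) ** 2
    current = start
    while True:
        closer = min(neighbors[current], key=dist2, default=current)
        if dist2(closer) >= dist2(current):
            break                      # current is the nearest node Nj
        current = closer
    # adopting Nj's neighbor set restores the awareness principle
    return current, neighbors[current]
```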
6 Entity Interactions and Resource Sharing
- They contrast
- Coarse-grain partitioning vs. a perception-based
approach.
- They distinguish between
- User-related flows (media exchanged between entities)
- System flows (heartbeat rate, movement
notification threshold, awareness radius)
7 Dynamic field computation
- Expected features:
- Negotiation in one round.
- Restriction by media.
- Discontinuous fields.
- Each node optimizes its own resources.
- But some are less accurate than others.
8 Conclusion and future work
- P2P networks: unlimited number of users.
- Consistency via the awareness principle.
- Design the application and a simulator.
- Problems:
- User movement: delay before awareness.
- Different accuracy depending on the machine.
- SOLIPSIS intends to be a place to meet and
communicate; users are expected to stay steadily
chatting together and not to move all around all
the time.
9"Peer-to-Peer Support for Massively Multiplayer
Games" B Knutsson, H Lu, W Xu, B Hopkins, IEEE
Infocom, Dec 2003
10 Objectives
- P2P for MMGs (e.g. EverQuest, Ultima Online)
- Performance
- Availability
- Security
11 Why P2P?
- Centralized server clusters vs. scalability.
- Locality of interest.
- Players contribute CPU and memory.
- Resources naturally scale with the number of players.
12 Categorization of Shared States
- Player profiles and game invariants:
- Transactional consistency.
- Centralized on one node (server).
- Common shared objects (e.g. food):
- Managed by peer-servers.
- Interest management (e.g. a chest).
- But game-dependent.
- Single-point updates (e.g. positions):
- Dissemination only.
13 Mapping game states onto peers
- Regions and players randomly get unique IDs
- (Distributed Hash Table)
- Region n managed by the player m minimizing |IDm - n|
- No semantic closeness, BUT:
- reduces cheating opportunities
- improves robustness
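The Min(|ID - n|) rule is just a nearest-ID lookup over the set of node IDs; a one-function sketch:

```python
def region_coordinator(region_id, node_ids):
    """Region n is managed by the node whose ID minimizes
    |ID - n| (nearest ID in the key space). A sketch of the
    mapping rule only, not of DHT routing."""
    return min(node_ids, key=lambda nid: abs(nid - region_id))
```

Because IDs are assigned randomly, the coordinator of a region is unrelated to the players inside it, which is exactly what removes semantic closeness while making targeted cheating harder.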
14 Game state distribution
[Diagram: regions A, B, C mapped onto nodes 3, 9, 14 in the key space]
- Locate the peer-server with DHT routing.
- Regions and player machines are mapped to the key space.
- Each region is managed by its successor machine.
15 Node join
[Diagram: node 7 joins the ring of nodes 3, 9, 14; region ownership is remapped]
- Player D on node 7 joins.
- Rely on the DHT to relocate the peer-server.
16 Peer-to-Peer Infrastructure
- P2P overlay: Pastry
- Ring of 2^m nodes, m = 128.
- Each node has a finger table of size m:
finger(n)[k] = (n + 2^(k-1)) mod 2^m.
- To contact node g: if g is not in the table, contact
the nearest known node and let the overlay find the
best route.
(The ring and finger-table scheme described here is Chord's.)
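The finger-table formula finger(n)[k] = (n + 2^(k-1)) mod 2^m can be made concrete. This sketch only computes the target key each entry points at, not a full routing node:

```python
def finger_table(n, m):
    """Chord-style finger targets for node n on a ring of 2^m IDs:
    entry k (k = 1..m) points toward (n + 2^(k-1)) mod 2^m."""
    return [(n + 2 ** (k - 1)) % (2 ** m) for k in range(1, m + 1)]
```

Because successive entries double the distance covered, any key can be reached in O(log n) hops, which is the routing-delay bound quoted later in the experiments.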
17 Peer-to-Peer Infrastructure
- Scalable application-level multicast
infrastructure: Scribe (built on top of Pastry).
- Sub-tree communication via multicast.
- Subscription is done according to ID.
- Positions multicast at a fixed interval,
- AND dead reckoning (or other extrapolation
algorithms) in case of message failure,
- or multicast only changed positions.
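Dead reckoning simply extrapolates the last known state when an update is missed. A minimal linear version (real games additionally clamp and smooth the correction when the next real update arrives):

```python
def dead_reckon(last_pos, last_vel, dt):
    """Extrapolate a 2D position linearly from the last known
    position and velocity, dt seconds after that update."""
    return (last_pos[0] + last_vel[0] * dt,
            last_pos[1] + last_vel[1] * dt)
```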
18 State replication for fault tolerance
[Diagram: each region's state is replicated on a second node of the ring]
- States are replicated (at least 1 replica).
- Rely on DHT routing to locate the replica.
19 Fault detection and takeover
- Layer routing rule: a message for node j is sent to j
if it is connected, otherwise to the next connected
machine.
- If node 14 receives a message (not a replica update)
whose destination was node 9, then 14 takes over for 9.
- If a new node is to become coordinator, it forwards to
the current coordinator while downloading the whole
replica, then takes over.
- If both coordinator and replica are lost, only the
caches of the machines involved remain, but consistency
is lost.
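The takeover rule follows directly from ring routing: deliver to node j if it is alive, otherwise to the next live ID on the ring, which then notices it received someone else's key and takes over. A sketch with node IDs as plain integers:

```python
def route(key, live_nodes):
    """Deliver key to the first live node with ID >= key,
    wrapping around the ring if no such node exists."""
    for n in sorted(live_nodes):
        if n >= key:
            return n
    return min(live_nodes)  # wrap around the ring
```

With nodes {3, 7, 14} alive and node 9 dead, a message for key 9 lands on node 14, matching the takeover example above.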
20 Large-scale outage
- Parallel worlds can exist within the game:
paradoxes when merging.
- Consistency prevails over availability.
- Coordinator blessed by the node server.
- Flexibility and scalability of P2P; availability
no worse than in client-server networks.
21 Implementation
- Zone: array of spaces; space: array of objects.
- Zones mapped to the Pastry key space.
- UDP for inter-user communication.
- Fairness of user-user events via symmetric
computation, with the coordinator as referee.
- Objects are timestamped if cyclic behavior is
prohibited.
22 Experimentation
- Done on the FreePastry network emulator.
- Simulates up to 4000 nodes.
- Players eat and fight every 20 s and stay in a
given region for 40 s.
- Multicast of position updates every 150 ms
- (vs. 50 ms in Quake 2)
- Region data: 120 kb.
23 Test criteria
- Population density.
- Total population.
- Network dynamics.
- Improvement due to their message aggregation
methods.
24 Update delay
- Average 200 ms, but for 4000 players / 400 regions,
one trial took 50 hops.
25 Population growth and Message Aggregation
- Routing delay in O(log n) hops.
- Multicast: the 1-2 worst trials are unacceptable.
- Aggregation at the root before multicast.
26 Effects of network dynamics
- Per-node failure rate: 6 per minute.
27 Possibility of Catastrophic Failure
- 1 coordinator per 10 players, and only one
replica.
- Vulnerability window: 10 s.
- 1000 players playing 2.3 h on average, with
end-node failures of 7 per minute:
- catastrophic failure every 20 hours
- catastrophic failure every 121 days
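A back-of-the-envelope model of this argument (our own arithmetic and parameter names, not the paper's exact computation): a state is lost when a coordinator fails and every one of its replicas also fails within the vulnerability window.

```python
def mean_time_to_catastrophe_min(nodes, fails_per_node_per_min,
                                 window_s, replicas=1):
    """Rough sketch: expected minutes until some coordinator
    and all of its replicas fail inside the vulnerability window.
    Assumes independent failures, which real deployments violate."""
    # probability that a given replica node fails during the window
    p_window = fails_per_node_per_min * window_s / 60.0
    # coordinator failures per minute across all nodes
    coord_fail_rate = nodes * fails_per_node_per_min
    catastrophe_rate = coord_fail_rate * p_window ** replicas
    return 1.0 / catastrophe_rate
```

The point of the model is the scaling: adding one replica multiplies the mean time to catastrophe by 1/p_window, which is why the paper's two scenarios differ by orders of magnitude (hours vs. days).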
28 Conclusion and Future Work
- P2P for MMGs is possible.
- Bottleneck: node capacities (CPU, memory) limit
how complex a game can be developed.
- Problem of the leaf nodes of multicast trees.
- Improve cheating detection.
- Optimize state transfer and replication
management.