ScaleNet: A Platform for Scalable Network Emulation - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
ScaleNet: A Platform for Scalable Network Emulation
  • by
  • Sridhar Kumar Kotturu
  • Under the Guidance of
  • Dr. Bhaskaran Raman

2
Outline
  • Introduction
  • Related Work
  • Design and Implementation of ScaleNet
  • Experimental Results
  • Conclusions
  • Future Work

3
Introduction
  • Why protocol development environments?
  • Rapid growth of the Internet and evolution of
    network protocols
  • Types of environments
  • Simulation, real deployed networks, and emulation
  • Why emulation?
  • In simulation, we may not be able to model the
    desired setting exactly.
  • Real deployed networks are difficult to
    reconfigure, and their behaviour is not easily
    reproducible.
  • Existing emulation platforms
  • Dummynet, NIST Net, Netbed, etc.
  • ScaleNet
  • An emulation platform for creating large-scale
    networks using limited resources
  • Creates several virtual hosts on a physical
    machine

4
Why Emulate?
  Real deployed network                          Emulation
  Requires a real deployed network               Only needs a software model
  Hard to reconfigure the real network           Easy to vary the emulated network configuration
  Real network behavior not easily or            Emulated network behavior easily
  reliably reproducible                          reproduced at will

5
Challenges in building ScaleNet
  • Creating multiple virtual hosts, assigning a
    routing table to each virtual host, and
    associating applications with virtual hosts
  • Routing between different IP aliases

6
Challenges in building ScaleNet
  • Routing is not possible between IP aliases.
  • Suppose there are three IP aliases, say 1, 2 and
    3, and we want to route packets from alias 1 to
    alias 3 through 2.
  • We add a routing table entry for this:
        route add 3 gw 2 dev eth0
    which means "send packets destined for 3 to
    gateway 2".
  • But no routing is done. Since the IP addresses
    are local aliases, the routing table is not
    consulted for packets destined to local
    addresses.

7
Challenges in building ScaleNet (Contd..)
Loopback of packets between Ethernet Device Aliases
8
Outline
  • Introduction
  • Related Work
  • Design and Implementation of ScaleNet
  • Experimental Results
  • Conclusions
  • Future Work

9
Related Work
  • Dummynet
  • Built by modifying the FreeBSD network stack
  • Emulates the effects of finite queues, bandwidth
    limitations and delays
  • Cannot emulate complex topologies
  • Implemented only between TCP and IP
  • Cannot apply the effects to selected data flows
  • Cannot apply effects such as packet duplication
    or delay variation
  • Exists only for FreeBSD

10
Related Work (Cont..)
  • NIST Net
  • Emulates the behavior of any network at a
    particular router, applying that network
    behavior to the packets passing through it
  • Does not create virtual hosts and does not do
    any routing
  • Designed to be scalable in terms of the number
    of emulation entries and the amount of bandwidth
    it can support

11
Related Work (Cont..)
  • FreeBSD Jail Hosts
  • Creates several virtual hosts on a physical
    machine (PM)
  • Routing is not possible
  • Theoretical upper limit of 24 jail hosts per PM
  • Netbed
  • Extension of Emulab
  • Automatically maps virtual resources onto
    available physical resources
  • Uses FreeBSD jail functionality for creating
    several virtual hosts
  • Scales up to 20 virtual hosts per physical machine

12
Related Work (Cont..)
  • User Mode Linux
  • A Linux kernel that can be run as a normal
    user process
  • Useful for kernel development and debugging
  • Can create arbitrary network topologies
  • Runs applications at about a 20% slowdown
    compared to the host system
  • A lot of extra overhead in creating a virtual
    host, since an entire kernel image is used for
    each virtual host

13
Related Work (Cont..)
  • Alpine
  • Moves the unmodified FreeBSD network stack into
    a user-level library
  • Uses libpcap for receiving packets and a raw
    socket for sending outgoing packets, and uses a
    firewall to prevent the kernel from processing
    packets destined for Alpine
  • If the network is too busy or the machine is
    slow, this won't work: the kernel allocates a
    fixed buffer for queueing received packets, and
    if the application is not fast enough in
    processing them, the queue overflows and
    subsequent packets are dropped

14
Comparison of Emulation Platforms
  System               Performance  Many VMs per PM  Hardware Resources  Scalable  OS
  Dummynet             High         No               Low                 -         FreeBSD
  NIST Net             High         No               Low                 -         Linux
  FreeBSD jail hosts   Low          Yes              High                No        FreeBSD
  Netbed               High         Yes              High                Partly    FreeBSD
  User Mode Linux      Low          Yes              High                No        Linux
  Alpine               Low          No               Low                 No        FreeBSD
  ScaleNet             High         Yes              Low                 Yes       Linux
15
Outline
  • Introduction
  • Related Work
  • Design and Implementation of ScaleNet
  • Experimental Results
  • Conclusions
  • Future Work

16
Netfilter Hooks
Available hooks:

  Hook                   Called...
  NF_IP_PRE_ROUTING      After sanity checks, before routing decisions
  NF_IP_LOCAL_IN         After routing decisions, if the packet is for this host
  NF_IP_FORWARD          If the packet is destined for another interface
  NF_IP_LOCAL_OUT        For packets coming from local processes, on their way out
  NF_IP_POST_ROUTING     Just before outbound packets "hit the wire"
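As a minimal sketch of how a kernel module attaches to one of these hooks (assuming the 2.4/2.6-era netfilter API this platform was built against; the function and variable names are illustrative, not ScaleNet's actual code):

    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>
    #include <linux/skbuff.h>

    /* Called for every IPv4 packet leaving a local process. */
    static unsigned int scalenet_out_hook(unsigned int hooknum,
                                          struct sk_buff **pskb,
                                          const struct net_device *in,
                                          const struct net_device *out,
                                          int (*okfn)(struct sk_buff *))
    {
            /* Inspect or rewrite *pskb here. */
            return NF_ACCEPT;    /* let the packet continue through the stack */
    }

    static struct nf_hook_ops scalenet_out_ops = {
            .hook     = scalenet_out_hook,
            .pf       = PF_INET,
            .hooknum  = NF_IP_LOCAL_OUT,
            .priority = NF_IP_PRI_FIRST,
    };

    /* Call nf_register_hook(&scalenet_out_ops) at module load and
       nf_unregister_hook(&scalenet_out_ops) at unload. */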
17
Netfilter Hooks (Cont..)
18
Design and Implementation of ScaleNet
  • NIST Net
  • Applies bandwidth limitation, delay, etc.
  • Designed to be scalable in terms of the number
    of emulation entries and the amount of bandwidth
    it can support
  • Linux
  • NIST Net exists only for Linux
  • Linux is popular and good documentation is
    available
  • Kernel Modules
  • Modules can be loaded and unloaded dynamically
  • No need to rebuild and reboot the kernel (a
    minimal skeleton follows)
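A minimal module skeleton illustrating this load/unload cycle (2.6-era API; the init and exit bodies are placeholders, not ScaleNet's actual code):

    #include <linux/module.h>
    #include <linux/init.h>

    static int __init scalenet_init(void)
    {
            /* e.g. register netfilter hooks, set up the character device */
            return 0;
    }

    static void __exit scalenet_exit(void)
    {
            /* unregister hooks and free resources */
    }

    module_init(scalenet_init);
    module_exit(scalenet_exit);
    MODULE_LICENSE("GPL");

Such a module is loaded with insmod and removed with rmmod, with no kernel rebuild or reboot.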

19
ScaleNet Architecture
[Architecture diagram. User level: the route command and bind calls enter through user-level programs (pidip_ioctl.c, rt_init.c, ioctl.c). Kernel level: the pid_ip and syscall_hack modules, reached through a character device; the IP-IP-out and IP-IP-in modules; per-virtual-host routing tables; NIST Net; and dst_entry_export. Kernel data: PID-IP values, routing tables, and dst_entry objects.]
20
Illustration of packet passing between virtual
hosts
A packet travels from virtual host 1 to virtual host 3 via virtual host 2:

  • Original packet at host 1: [Src IP 1 | Dst IP 3 | Data]
  • IP-IP-out adds an outer header: [Src IP 1 | Dst IP 2][Src IP 1 | Dst IP 3 | Data],
    and the packet passes through NIST Net
  • IP-IP-in at host 2 rewrites the outer header: [Src IP 2 | Dst IP 3][Src IP 1 | Dst IP 3 | Data],
    and the packet passes through NIST Net again
  • IP-IP-in at host 3 (the final destination) removes the outer header,
    delivering the original packet [Src IP 1 | Dst IP 3 | Data]
21
Processing of Outgoing Packets
  • Capture the packet at netfilter hook NF_IP_LOCAL_OUT.
  • If the packet's source IP does not belong to a local virtual host, return NF_ACCEPT.
  • If no nexthop is available, return NF_DROP.
  • Otherwise, create an extra IP header: Dst IP <- nexthop, Src IP <- current virtual host.
  • If the nexthop is not on the same machine, set Dst MAC <- nexthop MAC.
22
Processing of Outgoing Packets (Cont..)
  • If space is available at the beginning of the sk_buff, add the extra IP header there.
  • Otherwise, if space is available at the end of the sk_buff, copy the original IP header
    to the end of the sk_buff and place the extra IP header at the beginning.
  • Otherwise, create a new sk_buff with extra space and add the extra IP header followed
    by the rest of the packet.
  • Return NF_ACCEPT (see the sketch below).
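A sketch of this headroom logic (using 2.4/2.6-era sk_buff helpers; simplified to the reallocation path, with the header field values left as placeholders):

    #include <linux/in.h>
    #include <linux/ip.h>
    #include <linux/skbuff.h>

    /* Prepend a 20-byte outer IP header for IP-in-IP encapsulation. */
    static struct sk_buff *add_outer_header(struct sk_buff *skb)
    {
            struct iphdr *outer;

            if (skb_headroom(skb) < sizeof(struct iphdr)) {
                    /* No room at the front: allocate a copy with headroom. */
                    struct sk_buff *nskb =
                            skb_realloc_headroom(skb, sizeof(struct iphdr));
                    if (!nskb)
                            return NULL;
                    kfree_skb(skb);
                    skb = nskb;
            }
            outer = (struct iphdr *)skb_push(skb, sizeof(struct iphdr));
            outer->protocol = IPPROTO_IPIP;
            /* outer->saddr <- current virtual host,
               outer->daddr <- nexthop; then recompute the checksum. */
            return skb;
    }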
23
Processing of Incoming Packets
  • Capture the packet at netfilter hook NF_IP_PRE_ROUTING.
  • If the packet's destination does not belong to a local virtual host, return NF_ACCEPT.
  • Remove the NIST Net marking.
  • If the packet has reached its final destination, remove the outer IP header.
  • Otherwise, if no nexthop is available, return NF_DROP; if a nexthop is available,
    continue as shown on the next slide.
24
Processing of Incoming Packets (Cont ..)
  • Change the fields of the extra IP header: Dst IP <- nexthop, Src IP <- current virtual host.
  • If the nexthop is not on the same machine, set Dst MAC <- nexthop MAC.
  • Call dev_queue_xmit() and return NF_STOLEN (see the sketch below).
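In code, the hand-off at the end of this path would look roughly like this (a sketch, not the module's actual function):

    #include <linux/netdevice.h>
    #include <linux/netfilter.h>

    /* Re-inject the rewritten packet and take it away from the stack. */
    static unsigned int forward_packet(struct sk_buff *skb)
    {
            /* Outer header already rewritten:
               daddr = nexthop, saddr = current virtual host. */
            dev_queue_xmit(skb);    /* queue directly on the output device */
            return NF_STOLEN;       /* tell netfilter we now own this packet */
    }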
25
Virtual Hosts
  • Creating multiple virtual hosts:
  • Assign different IP aliases to the Ethernet card
    and treat each IP alias as a virtual host:
  • ifconfig eth0:1 10.0.0.1
  • Assign a routing table to each virtual host
    (according to the topology of the network)

26
Association between Applications and Virtual Hosts
  • A wrapper program is associated with each
    virtual host. It acts just like a shell.
  • All application programs belonging to a virtual
    host are executed in the corresponding wrapper
    shell.
  • To find the virtual host that a process belongs
    to, we traverse its parent, grandparent, etc.
    until we reach a wrapper program, which
    corresponds to a virtual host (see the sketch
    below).
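A sketch of that traversal (assuming a 2.6-era task_struct; lookup_wrapper_ip() is a hypothetical table, kept by the pid_ip module, mapping wrapper PIDs to virtual-host IPs):

    #include <linux/sched.h>
    #include <linux/types.h>

    /* Hypothetical: returns the virtual-host IP if pid is a wrapper, else 0. */
    static u32 lookup_wrapper_ip(pid_t pid);

    /* Walk up the process tree until we hit a registered wrapper shell. */
    static u32 find_virtual_host(struct task_struct *task)
    {
            while (task && task->pid > 1) {
                    u32 vh_ip = lookup_wrapper_ip(task->pid);
                    if (vh_ip)
                            return vh_ip;     /* found the wrapper shell */
                    task = task->parent;      /* p_pptr on 2.4 kernels */
            }
            return 0;    /* process does not belong to any virtual host */
    }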

27
System Call Redirection
  • The bind and routing-table system calls are
    hooked.
  • Whenever a process tries to access or modify a
    routing table, we first find the process's
    virtual host as explained in the previous
    section, and the system call is redirected to
    act on that virtual host's routing table instead
    of the system's routing table (see the sketch
    below).
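A sketch of the classic sys_call_table interposition this relies on (2.4/2.6-era x86, where bind() arrives via sys_socketcall(); sys_call_table is not exported on modern kernels, and the names here are illustrative):

    #include <linux/linkage.h>
    #include <linux/net.h>       /* SYS_BIND */
    #include <linux/unistd.h>    /* __NR_socketcall */

    extern void *sys_call_table[];    /* reachable on 2.4-era kernels */

    static asmlinkage long (*orig_socketcall)(int call,
                                              unsigned long __user *args);

    static asmlinkage long scalenet_socketcall(int call,
                                               unsigned long __user *args)
    {
            if (call == SYS_BIND) {
                    u32 vh_ip = find_virtual_host(current);  /* earlier sketch */
                    /* Rewrite a wildcard bind address to vh_ip, or verify the
                       requested address belongs to this virtual host. */
            }
            return orig_socketcall(call, args);
    }

    /* Module init:  orig_socketcall = sys_call_table[__NR_socketcall];
                     sys_call_table[__NR_socketcall] = scalenet_socketcall;
       Module exit:  sys_call_table[__NR_socketcall] = orig_socketcall;  */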

28
Outline
  • Introduction
  • Related Work
  • Design and Implementation of ScaleNet
  • Experimental Results
  • Conclusions
  • Future Work

29
Experimental Results
  • NIST Net bandwidth limitation tests
  • Tests on the emulation platform consisting of 20
    virtual hosts per physical machine
  • Tests on the emulation platform consisting of 50
    virtual hosts per physical machine.

30
NIST Net bandwidth limitation tests
  • Tests are performed using both TCP and UDP
    packets
  • Tests are performed in two cases:
  • Client and server running on the same machine
  • Client and server running on different machines

31
TCP packets. Client and Server on the same machine
  Bandwidth applied      Throughput,          Throughput,          Throughput, sending
  with NIST Net (B/s)    50 packets (B/s)     100 packets (B/s)    continuously (B/s)
  1000                   1468                 1001                 -
  2000                   2935                 2001                 -
  4000                   5868                 4000                 3276
  5000                   7335                 5001                 -
  6000                   8802                 6001                 -
  8000                   11903                8000                 (6553, 9830)
  10000                  14931                10002                9830
  20000                  29000                20136                19661
  40000                  (50000, 60000)       40536                49152
  100000                 (139000, 174000)     105312               -
  131072                 249036               146800               133693
  262144                 (655360, 724828)     (314572, 340787)     271319
  524288                 (5767168, 6946816)   (655360, 1966080)    737935
  1048576                (5636096, 7864320)   (5242880, 6553600)   4949278
  • The measured throughput exceeds the applied
    bandwidth limit

32
TCP Packets. Client and Server on the same
machine. MTU of loopback packets changed to 1480
bytes.
  Bandwidth applied      Throughput,          Throughput, sending
  with NIST Net (B/s)    100 packets (B/s)    continuously (B/s)
  1000                   866                  925
  2000                   1915                 1856
  4000                   3818                 3855
  8000                   7637                 7568
  16000                  15275                15200
  32000                  30551                30416
  64000                  60784                60900
  131072                 131334               130809
  262144                 261488               261619
  524288                 507248               522977
  1048576                983040               1045954
  2097152                1867776              2093219
  4194304                3377725              4182507
  • Results are as expected

33
UDP Packets
  Packets   Bandwidth applied with NIST Net (B/s)   Throughput (B/s)
  50        1000                                    972
  50        2000                                    1945
  50        4000                                    3831
  50        8000                                    7786
  50        16000                                   15562
  50        20000                                   19461
  17300     20000                                   19650
  • Results are as expected

34
UDP Packets (Contd..)
  • Sending 17400 packets, each packet of size 1000
    bytes.

  Bandwidth applied with NIST Net (B/s)   Throughput (B/s, per 100 packets received)
  10000                                   22109
  10000                                   9824
  10000                                   9825
  10000                                   9825
  • The measured throughput initially exceeds the
    applied bandwidth limit

35
Creating 20 Virtual Hosts per System
A network topology consisting of 20 nodes per
system
36
Creating 20 Virtual Hosts per System (Cont..)
  • Sending 40000 TCP packets from 10.0.1.1 to
    10.0.4.10. Each link has 10ms delay.

  Bandwidth applied with NIST Net (B/s)   Throughput (B/s)
  4096                                    3892
  8192                                    7784
  16384                                   15572
  32768                                   31136
  65536                                   62222
  131072                                  123933
  262144                                  154125
  393216                                  154361
37
Creating 20 Virtual Hosts per System (Cont..)
  • The TCP window size is 65535 bytes.
  • There are 39 links between 10.0.1.1 and
    10.0.4.10, each with a 10 ms delay in the
    forward direction and no delay in the backward
    direction. For a 100 Mbps link, the transmit
    time for 65535 bytes is around 5 ms, so the RTT
    is 395 ms.
  • The maximum possible data transfer rate is
    therefore 65535 bytes per 395 ms, i.e. 165911
    bytes/sec (worked below).
  • We measure a throughput of around 154000
    bytes/sec; adding the header overhead (9240
    bytes) gives 163240 bytes/sec, close to the
    bound. So the emulation platform scales well to
    20 virtual hosts per physical machine
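As a worked check of that window-limited bound:

    \mathrm{RTT} = 39 \times 10\,\mathrm{ms} + 5\,\mathrm{ms} = 395\,\mathrm{ms},
    \qquad
    T_{\max} = \frac{W}{\mathrm{RTT}}
             = \frac{65535\ \mathrm{bytes}}{0.395\ \mathrm{s}}
             \approx 165911\ \mathrm{bytes/sec}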

38
Creating 20 Virtual Hosts per System (Cont..)
  • Sending 40000 UDP packets from 10.0.1.1 to
    10.0.4.10. Each link has 10ms delay.

  Bandwidth applied      Throughput (B/s)   Client send rate (B/s)
  with NIST Net (B/s)
  2048                   1954               2500
  4096                   3908               4500
  8192                   7816               8500
  16384                  15634              16500
  32768                  31268              33000
  65536                  62537              66000
  131072                 125069             132000
  262144                 250146             262500
  393216                 375278             393500
  524288                 500321             524500
  655360                 625504             655500
  786432                 750347             786500
  1048576                1001276            1048600
  1179648                11264645           1180000
  • Results are as expected

39
Creating 20 Virtual Hosts per System (Cont..)
  • Sending 40000 TCP packets from 10.0.1.1 to
    10.0.4.10. Each link has 5ms delay.

  Bandwidth applied with NIST Net (B/s)   Throughput (B/s)
  4096                                    3892
  8192                                    7784
  16384                                   15572
  32768                                   31141
  65536                                   62262
  131072                                  124395
  262144                                  248082
  393216                                  305150
  524288                                  305366
  • Results are as expected

40
Creating 20 Virtual Hosts per System (Cont..)
  • Sending 40000 UDP packets from 10.0.1.1 to
    10.0.4.10. Each link has 5ms delay.

  Bandwidth applied      Throughput (B/s)                      Client send rate (B/s)
  with NIST Net (B/s)
  2048                   1954                                  2500
  4096                   3908                                  4500
  8192                   7816                                  8500
  16384                  15633                                 16500
  32768                  31268                                 33000
  65536                  62534                                 66000
  131072                 125064                                132000
  262144                 250140                                262500
  393216                 375260                                393500
  524288                 500293                                524500
  655360                 625454                                655500
  786432                 750276                                786500
  1048576                1001152                               1048600
  1179648                not all packets reached destination   1180000
  • The receive buffer at the destination drops
    some packets at an applied bandwidth of 1179648
    bytes/sec

41
Creating 50 Virtual Hosts per System
A network topology consisting of 50 nodes per
system
42
Creating 50 Virtual Hosts per System (Cont.. )
  • Sending 40000 TCP packets from 10.0.1.1 to
    10.0.4.25. Each link has 10ms delay.

  Bandwidth applied with NIST Net (B/s)   Throughput (B/s)
  4096                                    3892
  8192                                    7782
  16384                                   15561
  32768                                   31079
  65536                                   61024
  131072                                  61921
  262144                                  61995
  393216                                  62011
  524288                                  62013
  • Results are as expected (throughput saturates
    at the TCP window-limited bound for higher
    applied bandwidths)

43
Creating 50 Virtual Hosts per System (Cont..)
  • Sending 40000 UDP packets from 10.0.1.1 to
    10.0.4.25. Each link has 10ms delay.

  Bandwidth applied      Throughput (B/s)                      Client send rate (B/s)
  with NIST Net (B/s)
  4096                   3908                                  4500
  8192                   7816                                  8500
  16384                  15634                                 16500
  32768                  31268                                 33000
  65536                  62537                                 66000
  131072                 125069                                131500
  262144                 250146                                262500
  393216                 375279                                393500
  524288                 500325                                524500
  786432                 750346                                786500
  917504                 875913                                918000
  1048576                not all packets reached destination   1049000
  • The receive buffer at the destination drops
    some packets at an applied bandwidth of 1048576
    bytes/sec

44
Creating 50 Virtual Hosts per System (Cont.. )
  • Sending 40000 TCP packets from 10.0.1.1 to
    10.0.4.25. Each link has 5ms delay.

  Bandwidth applied with NIST Net (B/s)   Throughput (B/s)
  4096                                    3892
  8192                                    7784
  16384                                   15569
  32768                                   31122
  65536                                   62158
  131072                                  120066
  262144                                  122024
  393216                                  122155
  524288                                  122270
  • Results are as expected

45
Creating 50 Virtual Hosts per System (Cont..)
  • Sending 40000 UDP packets from 10.0.1.1 to
    10.0.4.25. Each link has 5ms delay.

  Bandwidth applied      Throughput (B/s)                      Client send rate (B/s)
  with NIST Net (B/s)
  4096                   3908                                  4500
  8192                   7816                                  8500
  16384                  15633                                 16500
  32768                  31267                                 33000
  65536                  62537                                 66000
  131072                 125066                                131500
  262144                 250138                                262500
  393216                 375259                                393500
  524288                 500293                                524500
  786432                 750277                                786500
  917504                 875775                                918000
  1048576                not all packets reached destination   1049000
  • The receive buffer at the destination drops
    some packets at an applied bandwidth of 1048576
    bytes/sec

46
Outline
  • Introduction
  • Related Work
  • Design and Implementation of ScaleNet
  • Experimental Results
  • Conclusions
  • Future Work

47
Conclusions
  • Created an emulation platform that emulates
    large-scale networks using limited physical
    resources
  • Several virtual hosts are created on each
    physical machine, and applications are
    associated with virtual hosts
  • Routing tables are set up for each virtual host
  • With this emulation platform, any kind of
    network protocol may be tested
  • Performance analysis and debugging can be done
  • In F. Hao et al. (2003), BGP simulation is done
    using 11806 AS nodes. At 50 virtual hosts per
    physical machine, ScaleNet could do this with
    about 240 systems. Similarly, the OSPF protocol
    and peer-to-peer networks can be studied.

48
Outline
  • Introduction
  • Related Work
  • Design and Implementation of ScaleNet
  • Experimental Results
  • Conclusions
  • Future Work

49
Future Work
  • Automatic mapping of a user-specified topology
    onto the physical resources
  • Identifying and redirecting other system calls
  • Locking of shared data structures on SMP
    machines
  • Avoiding the changes to the MAC header
  • Analyzing the memory and processing requirements
    of running a networking protocol
  • Fixing the occasional system crashes during
    initialization of the emulation platform
  • A graphical user interface

50
  • Thank You