Principles and Lessons for a New Internet and 4G Wireless Networks

Principles and Lessons for a New Internet and 4G
Wireless Networks
  • Prof. Henning Schulzrinne
  • Dept. of Computer Science
  • Columbia University

  • Interest in revising the Internet architecture
  • What didn't we think of the first time?
  • What has made the Internet successful?
  • Built-in vs. bolted on
  • User issues: what bothers users
  • Internet infrastructure: the plumbing
  • Network management
  • Talking meta: research and standardization

  • Applications and upper layers
  • Internet protocol infrastructure
  • Network management
  • Network standards
  • Networking research

Philosophy transition
  • mainframe era / home phone party line: one computer/phone, many users
  • PC era / cell phone era: one computer/phone, one user
  • ubiquitous computing: many computers/phones, one user; anywhere, any time, any media → right place (device), right time, right media
The three Cs of Internet applications (grossly simplified): what users care about vs. the research focus
Killer Application
  • Carriers looking for the killer application
  • justify huge infrastructure investment
  • video conferencing (1950-2000)
  • ?
  • There is no killer application
  • Network television blockbuster → YouTube hit
  • "Army of one"
  • Users create their own custom applications that are important to them
  • Little historical evidence that carriers (or equipment vendors) will find that application if it exists
  • Killer app = application that kills the carrier

Internet transition: applications
  • Moving analog applications to the Internet
  • digitization of communication largely completed
  • Extending the reach of applications
  • mobile devices
  • vehicles
  • Broadening access
  • Minitel: SNCF had a train schedule service
  • web: anybody can have a blog
  • Allowing customization and creation
  • web pages to code modules

Completing the migration of comm. applications
Migration of applications:
               text, still images | audio            | video
  synchronous  IM                 | VoIP             | video conferencing
  asynchronous email              | email, voicemail | YouTube
Building Internet applications (from high level to low level)
  • 80% care about this level: extensible CMS, Wiki (Drupal, Mambo, Joomla, ...)
  • Ruby on Rails, Spring, ...; Ajax, SOAP
  • PHP, Java w/ libraries; Java RMI, HTTP
  • taught in Networking 101: C/C++ with sockets, custom protocols on UDP, TCP
User issues (guesses)
  • Lack of trust
  • small mistakes → identity gone
  • can't tell when one has "lost the wallet"
  • waste time on spam, viruses, worms, spyware, ...
  • Lack of reliability
  • 99.5% instead of 99.999%
  • even IETF meetings can't get reliable 802.11
  • Lack of symmetry
  • asymmetric bandwidth (ADSL)
  • asymmetric addressing: NAT, firewalls → client(-server) only, packet relaying via TURN or ...
  • Users as Internet mechanics
  • why does a user need to know whether to use IMAP or POP?
  • navigate the circle of blame

Left to do: glue protocols
  • Lots of devices and services
  • cars
  • household
  • environment
  • Generally stand-alone
  • e.g., GPS can't talk to camera
  • Home
  • home control networks have generally failed
  • cost, complexity
  • Environment
  • "Internet of things"
  • tag bus stops, buildings, cars, ...

Left to do: event notification
  • notify a (small) group of users when something of interest happens
  • presence: change of communications state
  • email, voicemail alerts
  • environmental conditions
  • vehicle status
  • emergency alerts
  • kludges
  • HTTP with pending response
  • inverse HTTP → doesn't work with NATs
  • Lots of research (e.g., SIENA)
  • IETF efforts starting
  • SIP-based
  • XMPP
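The "HTTP with pending response" kludge is long polling: the client parks a request at the server, which answers only when an event occurs (or a timeout fires), after which the client immediately re-issues the request. A minimal in-process sketch of the pattern, with the HTTP transport abstracted away as a queue (class and event names are invented for illustration):

```python
import queue
import threading

class EventChannel:
    """Server side of a long-poll 'pending response' channel."""
    def __init__(self):
        self._events = queue.Queue()

    def publish(self, event):
        # Something of interest happened (presence change, voicemail, ...).
        self._events.put(event)

    def long_poll(self, timeout=30.0):
        # The parked HTTP request: block until an event or a timeout.
        try:
            return self._events.get(timeout=timeout)
        except queue.Empty:
            return None  # client re-issues the request (the kludge part)

channel = EventChannel()

def subscriber(results):
    # Client loop: each long_poll() stands in for one pending HTTP GET.
    while True:
        event = channel.long_poll(timeout=5.0)
        if event == "stop":
            break
        if event is not None:
            results.append(event)

received = []
t = threading.Thread(target=subscriber, args=(received,))
t.start()
channel.publish("presence: alice is online")
channel.publish("voicemail: 1 new message")
channel.publish("stop")
t.join()
```

SIP SUBSCRIBE/NOTIFY and XMPP pubsub replace the repeated parked request with an explicit subscription, which is what makes them work as a real notification layer.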

GEOPRIV and SIMPLE architectures (components): rule maker, XCAP (rules), location server, location recipient, notification interface, publication interface, presence agent, SIP presence, SIP call
Configuration complexity
  • 65% of attacks exploit mis-configured systems
  • Human error accounts for 48% of wide-area network ...
  • Yankee Group 2002: "operator error is the largest cause of failures and largest contributor to time to repair in two of the three (surveyed) ISPs; configuration errors are the largest category of operator errors."
  • 45% of WAN operations cost attributed to component ... (Yankee Group, 1998)
  • "Although setup (of the trusted computing base) is much simpler than code, it is still complicated, it is usually done by less skilled people, and while code is written once, setup is different for every installation. So we should expect that it's usually wrong, and many studies confirm this expectation." (B. Lampson)

Open issues: configuration
  • Ideally, applications should only need a user name and some credential
  • password, USB key, host identity (MAC address), ...
  • More than DHCP: the device needs to get
  • application services
  • SMTP, IMAP, POP, ...
  • security variations
  • policy information ("sorry, no video")
  • user data (address book)
  • Multiple sources of configuration information
  • local network
  • application service provider
  • Configuration information may change dynamically
  • event notification
  • Needs to allow no-touch deployment of thousands of devices
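The "multiple sources of configuration information" point reduces to a precedence merge: more specific sources override more general ones, and a change notification simply re-runs the merge. A minimal sketch; all source names and keys below are invented for illustration:

```python
def merge_config(*sources):
    """Merge configuration dicts; later (more specific) sources win."""
    merged = {}
    for source in sources:
        merged.update(source)
    return merged

# Hypothetical layered sources, least to most specific.
local_network = {"dns": "10.0.0.1", "smtp": "mail.localnet.example"}
service_provider = {"smtp": "smtp.provider.example",
                    "imap": "imap.provider.example"}
user_data = {"address_book": "https://provider.example/ab/alice"}

# The device ends up with one coherent view; on a change notification,
# the affected source is refreshed and the merge re-run.
config = merge_config(local_network, service_provider, user_data)
```

The interesting design question is exactly the precedence order: here the application service provider overrides the local network for `smtp`, while purely local values (the DNS resolver) survive.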

Mobile systems - reality
  • idea: special-purpose (phone) → universal
  • the idea is easy...
  • mobile equipment: laptop + phone
  • sufficiently different UI and capabilities
  • we all know the ideal (converged) cell phone
  • difficulty is not technology, but integration and ...
  • (almost) each phone has a different flavor of OS
  • doesn't implement all functionality in Java APIs
  • no dominant vendor (see UNIX/Linux vs. Microsoft)
  • external interfaces crippled or unavailable
  • e.g., phone book access

  • location data
  • Applications and upper layers
  • Internet protocol infrastructure
  • Network management
  • Network standards
  • Network research

What has made the Internet successful?
  • 36 years → approaching mid-life crisis → time for ...
  • → next generation suddenly no longer finds it hip
  • Transparency in the core
  • new applications (web, VoIP, games)
  • Narrow interfaces
  • socket interface, resolver
  • HTTP and SMTP messaging as applications
  • prevent change leakage
  • Low barrier to entry
  • L2: minimalist assumptions
  • technical: basic connectivity is within ...
  • economical: below 20?
  • Commercial off-the-shelf systems
  • scale: compare 802.11 router vs. cell base station

What has gone wrong?
  • Familiar to anybody who has an old house
  • Entropy
  • as parts are added, complexity and interactions ...
  • Changing assumptions
  • trust model: research colleagues → far more spammers and phishers than friends
  • AOL: 80% of email is spam
  • internationalization: internationalized domain names, email character sets
  • criticality: email research papers → transfer $B and dial 9-1-1
  • economics: competing providers
  • "the Internet does not route money" (Clark)
  • Backfitting
  • had to backfit security, I18N, autoconfiguration, ...
  • → tear down the old house, gut the interior, or just more wallpaper?

In more detail
  • Deployment problems
  • loss of transparency (NATs, deep-packet inspection, ...)
  • Layer creep
  • Simple and universal wins
  • Scaling in human terms
  • Cross-cutting concerns, e.g.,
  • CPU vs. human cycles
  • we optimize the $100 component, not the $100/hour human
  • introspection
  • graceful upgrades
  • no policy magic

Internet design principles
did not anticipate cyber attack, exfiltration, malicious control
Original DARPA Internet design principles:
The Internet must support multiplexed utilization of existing interconnected networks.
Internet communications must continue despite loss of networks or gateways.
The Internet architecture must accommodate a variety of networks.
The Internet must permit distributed management of its resources.
The Internet architecture must be cost effective.
The Internet architecture must permit host attachment with low level of effort.
The resources used in the Internet architecture must be accountable.
cooperative protocols → insider threat
permit-by-default instead of deny-by-default
malicious use of resources remains anonymous
(J. Christopher Ramming, IAMANET briefing, April 2006, based on D. Clark, SIGCOMM 1988)
Core goals for new networks
  • reliability
  • diagnosability
  • sustainability
  • adaptability
  • survivability

  • Real-world locks keep out the honest
  • until the police arrive
  • Internet: assumption of lawlessness
  • no global law enforcement
  • carriers have little interest in policing their ...
  • bots, spam
  • must survive extended periods of attacks
  • From permit-by-default to deny-by-default
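The shift from permit-by-default to deny-by-default is just a change in the policy's fallback branch, but it inverts who carries the burden of being unknown. A toy sketch (the sender identifiers are invented):

```python
def permit_by_default(sender, blocklist):
    # Today's Internet: anyone may send unless explicitly blocked.
    return sender not in blocklist

def deny_by_default(sender, allowlist):
    # "Sending me data is a privilege": only known senders get through.
    return sender in allowlist

blocklist = {"spammer.example"}
allowlist = {"colleague.example", "bank.example"}

unknown_ok_today = permit_by_default("unknown.example", blocklist)  # gets in
unknown_ok_new = deny_by_default("unknown.example", allowlist)      # kept out
```

The hard part is not the branch but maintaining the allowlist at Internet scale, which is why the slide frames survival under attack, not perfect filtering, as the goal.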

  • From secondary medium to core
  • office without phone vs. without Internet
  • Applications: 2 servers exponentially better than 1
  • cost only doubles
  • but most application protocols don't fail over
  • examples: HTTP, IMAP, POP, NFS
  • exceptions: SMTP, SIP
  • Networks: 2 networks exponentially better than 1
  • most (US) residences are served by 3 networks: DSL, cable, cellular
  • no good multi-homing technology that scales
  • economics dubious (via neighbors?)
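"2 servers exponentially better than 1" is the standard independent-failure calculation: with per-replica availability p, n replicas give combined availability 1 - (1 - p)^n, provided the application can actually fail over. A quick check with the 99.5% figure from the user-issues slide:

```python
def availability(p, n):
    """Combined availability of n independent replicas, each available p."""
    return 1 - (1 - p) ** n

one_server = availability(0.995, 1)   # 99.5%: half a percent of downtime
two_servers = availability(0.995, 2)  # cost doubles, unavailability squares
# downtime drops from 0.5% to 0.0025% -- if the protocol fails over
```

This is exactly why SMTP (MX records) and SIP (SRV records) are listed as exceptions: their lookup procedures build the failover in, while plain HTTP, IMAP, POP, and NFS clients typically stop at the first dead server.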

The transformation of protocol stacks
Cause of death for the next big thing (QoS, multicast, mobile IP, active networks, IPsec, IPv6):
  • not manageable across competing domains
  • not configurable by normal users (or app writers)
  • no business model for ISPs
  • no initial gain
  • 80% solution in existing system (e.g., NAT)
  • increases system vulnerability
Simple wins (mostly)
  • Examples
  • Ethernet vs. all other L2 technologies
  • HTTP vs. HTTP-ng and all the other hypertext ...
  • SMTP vs. X.400
  • SDP vs. SDPng
  • TLS vs. IPsec (simpler to re-use)
  • no QoS: MPLS vs. RSVP
  • DNS-SD (Bonjour) vs. SLP
  • SIP vs. H.323 (but conversely SIP vs. Jabber, SIP vs. Asterisk)
  • the failure of almost all middleware
  • future demise of 3G vs. plain SIP
  • Efficiency is not important
  • BitTorrent, P2P searching, RSS, ...

Measuring complexity
  • Traditional O(.) metrics rarely helpful
  • except maybe for routing protocols
  • Focus on parsing, messaging complexity
  • marginally helpful, but no engineering metrics for trade-offs
  • No protocol engineering discipline; lacking:
  • guidelines for design
  • learning from failures
  • we have plenty to choose from, but it is hard to look at our own (communal) failures
  • re-usable components
  • components not designed for plug-and-play
  • we don't do APIs → we don't worry about whether a simple API can be written that can be taught in Networking 101

Possible complexity metrics
  • new code needed (vs. reuse) → less likely to be buggy or have buffer overflows
  • e.g., new text format almost the same
  • numerous binary formats
  • security components
  • necessary transition: bespoke → off-the-rack
  • new identities and identifiers needed
  • number of configurable options / parameters
  • must be configured vs. can be configured (with interop impact)
  • discoverable vs. manual/unspecified
  • SIP experience: things that shouldn't be configurable will be
  • RED experience: parameter robustness
  • "mute programmer" interop test: two implementations, no side channel
  • number of left-to-local-policy choices
  • DSCP confusion
  • start-up latency (protocol boot time)
  • IPv4 DAD, IGMP
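None of these metrics has an agreed-on form; as a strawman, one could at least tally them per design and compare weighted totals. Everything below (metric names, weights, the two designs and their numbers) is invented purely to show the shape of such a comparison, not a proposed standard:

```python
def complexity_score(metrics, weights):
    """Weighted sum of raw complexity counts (a strawman, not a standard)."""
    return sum(weights[name] * value for name, value in metrics.items())

# Hypothetical weights: new code and must-configure options hurt most.
weights = {"new_code_kloc": 3.0, "must_configure": 2.0,
           "can_configure": 1.0, "boot_round_trips": 0.5}

# Invented counts for two hypothetical protocol designs.
design_a = {"new_code_kloc": 10, "must_configure": 8,
            "can_configure": 40, "boot_round_trips": 6}
design_b = {"new_code_kloc": 2, "must_configure": 1,
            "can_configure": 10, "boot_round_trips": 2}

score_a = complexity_score(design_a, weights)  # the "bespoke" design
score_b = complexity_score(design_b, weights)  # the reuse-heavy design
```

Even a crude tally like this would force the trade-offs (must-configure vs. discoverable, new code vs. reuse) to be written down instead of argued anecdotally.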

Democratization of protocol engineering
  • Traditional Internet application protocols (IETF et al.)
  • one protocol for each type of application
  • SMTP for email, ftp for file transfer, HTTP for web access, POP for email retrieval, NNTP for ...
  • slow protocol development process
  • re-do security (authentication) for each
  • each new protocol has its own text encoding
  • similarity across protocols: SMTP-style headers
  • Content-Type: text/plain; charset="us-ascii"
  • large parsing exposure → new buffer overflows for each protocol
  • Separate worlds
  • most of the new protocols in the real world are based on web services (WS)
  • IETF stuck in a bubble of one-off protocols → more ...
  • re-use considered a disadvantage
  • insular protocols that have a local cult following
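The shared "SMTP-style headers" syntax is what makes parser reuse possible at all: the same "headers, blank line, body" shape serves email, HTTP/1.x, RTSP, and SIP-flavored messages. A small stdlib demonstration with Python's generic RFC 822-style parser (the message content is made up):

```python
from email.parser import Parser

# An RFC 822-style header block, including the Content-Type example
# from the slide; any protocol using this shape can share the parser.
raw = (
    'Content-Type: text/plain; charset="us-ascii"\n'
    "Subject: header reuse across protocols\n"
    "\n"
    "body goes here\n"
)

msg = Parser().parsestr(raw)
ctype = msg.get_content_type()       # media type from Content-Type
charset = msg.get_content_charset()  # charset parameter, lowercased
body = msg.get_payload()
```

One debugged, fuzz-tested parser amortized across protocols is precisely the "large parsing exposure" argument in reverse: fewer hand-written parsers, fewer buffer overflows.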

The transformation of protocol design
  • One application, one protocol → common infrastructure for new applications
  • Old model
  • RPC for corporate one-off applications
  • custom protocols for common Internet-scale applications
  • Far too many new applications
  • and not enough protocol engineers
  • network specialist → application specialist
  • new IETF application protocol design takes 5 ...
  • Many of the applications (email to file access) could be modeled as RPC

custom text protocols (ftp) → RFC 822-style protocols (SMTP, HTTP, RTSP, SIP, ...) → XML for protocol bodies (IETF IM presence) → SOAP and other XML protocols; separately, ASN.1-based (SNMP, X.400)
Why are web services succeeding(*) after RPC?
  • SOAP: just another remote procedure call
  • plenty of predecessors: SunRPC, DCE, DCOM, CORBA, ...
  • client-server computing
  • all of them were to transform (enterprise) computing, integrate legacy applications, end world hunger, ...
  • Why didn't they?
  • Speculation:
  • no web front end (no three-tier applications)
  • few open-source implementations
  • no common protocol between PC client (Microsoft) and backend (IBM mainframes, Sun, VMS)
  • corporate networks local only (one site), with limited backbone bandwidth

(*) we hope
Time for a new protocol stack?
  • Now add x.5 sublayers and overlays
  • Doesn't tell us what we could/should do
  • or where functionality belongs
  • use of upper layers to help lower layers (security associations, authorization)
  • what is innate complexity and what is entropy?
  • Examples
  • Applications: do we need ftp, SMTP, IMAP, POP, SIP, RTSP, HTTP, p2p protocols?
  • Network: can we reduce complexity by integrating functionality or re-assigning it?
  • e.g., should e2e security focus on the transport layer rather than the network layer?
  • probably need a pub/sub layer, currently kludged locally (email, IM, specialized)

NSF "green field" approach
  • US National Science Foundation (NSF) working on a new funding thrust → next-generation Internet
  • idea: incremental components → new architecture
  • vs. traditional "brown field" approach
  • Two major components
  • GENI: large-scale experimental testbed for testing next-generation ideas
  • building on PlanetLab (hundreds of public-access servers) → p2p, CDN, measurement infrastructures
  • probably offers circuits (optical or virtual with bandwidth guarantees)
  • $300M (not yet allocated)
  • FIND
  • regular research program within NETS ($15M)
  • prepare architecture designs

NSF FIND and GENI, cont'd
  • Fundamental notions
  • not constrained by existing Internet architecture
  • Difficulties
  • not coordinated → too many moving pieces?
  • no single research team can do everything
  • point optimization: Internet for ...
  • benchmarks missing
  • how do you compare architectures?
  • are there quantifiable requirements?
  • are there metrics to compare different ...
  • user-related metrics?
  • Cynic's prediction, based on the past
  • IPv6: "you'll get security, QoS, autoconfiguration, mobility, ..."
  • IPv4: "good ideas, I'll do those, too"

(My) guidelines for a new Internet
  • Maintain success factors, such as
  • service transparency
  • low barrier to entry
  • narrow interfaces
  • New guidelines
  • optimize human cycles, not CPU cycles
  • design for symmetry
  • security built in, not bolted on
  • everything can be mobile, including networks
  • sending me data is a privilege, not a right
  • reliability paramount
  • isolation of flows
  • New possibilities
  • another look at circuit switching?
  • knowledge and control (signaling) planes?
  • separate packet forwarding from control
  • better alignment of costs and benefits
  • better scaling for Internet-scale routing
  • more general services

More network services
  • Currently, very specialized and limited
  • packet forwarding
  • DNS for identifier lookup
  • DHCP for configuration
  • New opportunities
  • packet forwarding with control
  • general identifier storage and lookup
  • both server-based and peer-to-peer
  • SLP/Jini/UDDI service location → ontology-based data store
  • network file storage → for temporarily-disconnected mobiles
  • network computation → translation, relaying
  • trust services (→ IRT trust-paths work)

  • More than just encryption!
  • Need identity and role-based certificates
  • May want reverse-path reachability (bank → ...)

asking:
  • user: do I know this person? is he a likely sender of spam? is this really a bank? am I connected to a real network or an impostor?
  • network: is this a customer? is this BGP route advertisement legitimate?
NGN, or what's wrong with 3G
  • NGN = next-generation network
  • roughly, 3G on landline
  • really, ISDN 2.0 on packets
  • SIP for signaling, IPv6
  • Problems
  • complexity: gateways to the old world
  • coupling: link-layer properties only available to certain applications
  • e.g., voice-specific link behavior (FEC, delay)
  • closed: may be difficult to integrate with enterprise systems
  • regular SIP phones may not work properly

What's (my) 4G?
  • Definition: fixes all the things that 3G got wrong
  • voice legacy (3G ≈ B-ISDN)
  • high cost
  • system complexity
  • Wireless as access network
  • already happening: 3G-802.11 bridges
  • applications shouldn't care about access mode or carrier → more applications
  • But with discoverable and configurable path ...
  • not a wireless-specific issue!
  • May be less a technical than an economic problem
  • same bits, different value → capture consumer ...

Internet (re)engineering: summing up
  • Traditional protocol engineering
  • must do congestion control
  • must do security
  • must be efficient
  • New module engineering
  • must reduce operations cost
  • out-of-the-box experience
  • re-usable components
  • most protocol design will be done by domain experts (cf. PHP vs. C)
  • What would a clean-room design look like?
  • keep what made the Internet successful
  • generalize, adjust to new conditions

  • User perspective
  • Internet protocol infrastructure
  • Network management
  • Network standards
  • Networking research

Network management: transition in cost balance
  • Total cost of ownership
  • Ethernet port cost ≈ $10
  • about 80% of Columbia CS's system support cost is staff cost
  • about $2,500/person/year ≈ 2 new PCs/year
  • much of the rest is backup, licenses for spam filters, ...
  • Does not count hours of employee or son/daughter ...
  • PC, Ethernet port and router cost seem to have reached a plateau
  • just that the $10 now buys a 100 Mb/s port instead of 10 Mb/s
  • All of our switches, routers and hosts are SNMP-enabled, but no suggestion that this would help at all

Circle of blame
  • "probably packet loss in your Internet connection → reboot your DSL modem"
  • "probably a gateway fault → choose us as provider"
  • "must be a Windows registry problem → re-install Windows"
  • app vendor: "must be your software → upgrade"
Diagnostic undecidability
  • symptom: cannot reach server
  • more precisely: send packet, but no response
  • causes:
  • NAT problem (return packet dropped)?
  • firewall problem?
  • path to server broken?
  • outdated server information (moved)?
  • server dead?
  • 5 causes → very different remedies
  • no good way for a non-technical user to tell
  • Whom do you call?
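The undecidability is that one symptom ("no response") is compatible with all five causes; only additional observations separate them. A toy classifier over hypothetical probe results (every probe name here is invented) shows the shape of the problem:

```python
def diagnose(probes):
    """Map extra probe observations to a likely cause.

    With no probes, every case collapses into "no response"; each
    added observation peels off one of the five candidate causes.
    """
    if probes.get("server_answers_from_elsewhere"):
        # Server is fine when probed from outside our network...
        if probes.get("local_udp_blocked"):
            return "firewall problem"
        return "NAT problem (return packet dropped)"
    if probes.get("dns_record_stale"):
        return "outdated server information (moved)"
    if probes.get("path_broken_midway"):
        return "path to server broken"
    return "server dead (or unknown)"

cause = diagnose({"server_answers_from_elsewhere": True,
                  "local_udp_blocked": False})
```

The "server answers from elsewhere" probe is exactly what the "Do You See What I See?" proposal later in the talk supplies: asking another node for its view.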

VoIP user experience
  • Only 95-99.5% call attempt success
  • "Keynote was able to complete VoIP calls 96.9% of the time, compared with 99.9% for calls made over the public network. Voice quality for VoIP calls on average was rated at 3.5 out of 5, compared with 3.9 for public-network calls and 3.6 for cellular phone calls. And the amount of delay the audio signals experienced was 295 milliseconds for VoIP calls, compared with 139 milliseconds for public-network calls." (InformationWeek, July 11, 2005)
  • Mid-call disruptions common
  • Lots of knobs to turn
  • Separate problem: manual configuration

Traditional network management model
management from the center
Old assumptions, now wrong
  • Single provider (enterprise, carrier)
  • has access to most path elements
  • professionally managed
  • Problems are hard failures: elements operate ...
  • element failures (link dead)
  • substantial packet loss
  • Mostly L2 and L3 elements
  • switches, routers
  • rarely 802.11 APs
  • Problems are specific to a protocol
  • "IP is not working"
  • Indirect detection
  • MIB variable vs. actual protocol performance
  • End systems don't need management
  • DMI, SNMP never succeeded
  • each application does its own updates

what causes the most trouble?
  • network understanding → fault location → element inspection (we've only succeeded here)
Managing the protocol stack (problems at each layer):
  • protocol problem, authorization, asymmetric conn
  • echo, gain problems, VAD action
  • protocol problem, playout errors
  • TCP neg. failure, NAT time-out, firewall policy
  • no route, packet loss
Types of failures
  • Hard failures
  • connection attempt fails
  • no media connection
  • NAT time-out
  • Soft failures (degradation)
  • packet loss (bursts)
  • access network? backbone? remote access?
  • delay (bursts)
  • OS? access networks?
  • acoustic problems (microphone gain, echo)

Examples of additional problems
  • ping and traceroute no longer work reliably
  • WinXP SP2 turns off ICMP
  • some networks filter all ICMP messages
  • Early NAT binding time-out
  • initial packet exchange succeeds, but then the TCP binding is removed (web-only Internet)
  • policy intent vs. failure
  • "broken by design"
  • "we don't allow port 25" vs. "SMTP server temporarily unreachable"

Proposal: Do You See What I See?
  • Each node has a set of active and passive measurement tools
  • Use intercept (NDIS, pcap)
  • to detect problems automatically
  • e.g., no response to HTTP or DNS request
  • gather performance statistics (packet jitter)
  • capture RTCP and similar measurement packets
  • Nodes can ask others for their view
  • possibly also dedicated "weather stations"
  • Iterative process, leading to
  • user indication of cause of failure
  • in some cases, a work-around (application-layer routing) → TURN server, use remote DNS servers
  • Nodes collect statistical information on failures and their likely causes

not working (notification) → request diagnostics → orchestrate tests, contact others:
  • inspect protocol requests (DNS, HTTP, RTCP, ...)
  • ping; can buddy reach our resolver?
  • "DNS failure for 15m" → notify admin (email, IM, SIP events, ...)
Failure detection tools
  • STUN server: "what is your IP address?"
  • ping and traceroute
  • Transport-level liveness and QoS
  • open TCP connection to port
  • send UDP ping to port
  • measure packet loss and jitter
  • TBD: need scriptable tools with dependency graph
  • initially, we'll be using make
  • TBD: remote diagnostics
  • fixed set ("do DNS lookup") or
  • applets (only remote access)
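The transport-level checks in the list reduce to a few socket operations. A minimal, self-contained sketch of the "open TCP connection to port" liveness probe; to keep it runnable without any external network, it probes a throwaway listener started locally:

```python
import socket

def tcp_alive(host, port, timeout=2.0):
    """Transport-level liveness probe: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in "server" so the probe has something local to hit.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

alive = tcp_alive("127.0.0.1", port)   # listener present → reachable
listener.close()
dead = tcp_alive("127.0.0.1", port)    # listener gone → connection refused
```

In a real tool this probe would be one node of the dependency graph: "TCP connect to port 80 failed" only becomes meaningful once DNS resolution and basic IP reachability probes higher up the graph have succeeded.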

Failure statistics
  • Which parts of the network are most likely to fail (or degrade)?
  • access network
  • network interconnects
  • backbone network
  • infrastructure servers (DHCP, DNS)
  • application servers (SIP, RTSP, HTTP, ...)
  • protocol failures/incompatibility
  • Currently, mostly guesses
  • End nodes can gather and accumulate statistics

  • User perspective
  • Internet protocol architecture
  • Network management
  • Network standards
  • Networking research

The role of standards
  • Most users won't see network improvements until a standard emerges
  • gatekeeper
  • Exceptions
  • de-facto standards (Microsoft)
  • TCP enhancements (via OS)
  • some new tools (Skype)

Standards work
  • Old approach
  • standards group goes to Geneva
  • input: dinners
  • output: PowerPoint
  • software groups convert the finished standard into products (maybe)
  • New approach
  • standards contributors directly develop (or supervise) libraries, prototypes and other tools
  • possibly in conjunction with academic research
  • early, pre-completion feedback
  • rapid early release → possible early implementation, IPR
  • train development staff
  • participate in interop testing

includes draft-ietf--00 and draft-personal--00 → RFC publication
Trouble in Standards Land
  • Proliferation of transition standards: 2.5G, 2.6G, 3.5G, ...
  • true even for emergency calling
  • Splintering of standardization efforts across ...
  • primary
  • architectural
  • PacketCable, ETSI, 3GPP, 3GPP2, OMA, UMA, ATIS, ...
  • specialized
  • NENA
  • operational, marketing
  • SIP Forum, IPCC, ...

(areas: data formats, data exchange, L2.5-L7 protocols)
IETF issues
  • SIP WGs: small number (dozen?) of core authors
  • some now becoming managers
  • or moving to other topics
  • IETF: research → engineering → maintenance
  • many groups are essentially maintaining standards written a decade (or two) ago
  • constrained by design choices made long ago
  • often dealing with the transition to a hostile, random network
  • network ossification
  • Stale IETF leadership
  • often from core equipment vendors, not software vendors or carriers
  • fair amount of not-invented-here syndrome
  • late to recognize wide usage of XML and the web
  • late to deal with NATs
  • security tends to be per-protocol (silo)
  • some efforts such as SAML and SASL
  • tendency to re-invent the wheel in each group

IETF issues: timeliness
  • Most drafts spend lots of time 90%-complete
  • lack of energy (moved on to new -00)
  • optimizers vs. satisficers
  • multiple choices that have non-commensurate ...
  • Notorious examples
  • SIP request history: Feb. 2002 - May 2005 (RFC ...)
  • Session timers: Feb. 1999 - May 2005 (RFC 4028)
  • Resource priority: Feb. 2001 - Feb. 2006 (RFC ...)
  • New framework/requirements phase adds 1-2 years of delay
  • Three bursts of activity per year, with silence ...
  • occasional interim meetings
  • IETF meetings are often not productive
  • most topics get 5-10 minutes → lack of context, focus on minutiae
  • no background → same people as on the mailing list
  • 5 people discuss, 195 people read email
  • No formal issue tracking
  • some WGs use tools, haphazardly
  • Gets worse over time
  • dependencies increase, sometimes undiscovered
  • backwards-compatibility issues
  • more background needed to contribute

IETF issues: timeliness, cont'd
  • WG chairs run meetings, but are not managing the WG
  • very little control of deadlines
  • e.g., all SIMPLE deadlines are probably a year ...
  • little push to come to working group last call
  • limited timeliness accountability of authors and ...
  • chairs often provide limited editorial feedback
  • IESG review can get stuck in a long feedback loop
  • author - AD - WG chairs
  • sometimes lack of accountability (AD-authored ...)
  • RFC editor often takes 6 months to process
  • dependencies: IANA, editor queue, author delays
  • e.g., session timer: Aug. 2004 - May 2005

  • User perspective
  • Internet protocol infrastructure
  • Network management
  • Network standards
  • Networking research

Lifecycle of technologies
  • traditional technology propagation: COTS (e.g., GPS), IM, digital photo
  • "Can it be done?" - opex/capex doesn't matter, expert support
  • "Can I afford it?" - capex/opex sensitive, but amortized; expert
  • "Can my mother use it?" - capex sensitive; amateur
Science vs. Engineering
  • Computer Science has an identity crisis: applied math, experimental science or engineering?
  • Applied math
  • general abstractions, elegant models
  • reality only a distant motivator
  • metric: can it be published in J. Applied ...
  • Experimental science
  • emphasis on general insights
  • measurements, models
  • often reflective: analyze Gnutella structure
  • point solutions
  • metric: does it fit Small World and is it self-similar? is it optimal?
  • Engineering
  • emphasis on real-world impact
  • constrained by existing large systems
  • system solutions: need to play nice with the rest of the world
  • metrics: scalability, cost, maintainability, ...
  • Honesty about what we're doing

Traditional research
  • Inspired by physics or chemistry
  • Physics: theory → experiment → lab bench → prototype → (semiconductor) product
  • Communications: research → advanced development → ...
  • Necessary for hardware
  • Dubious for software-intensive systems
  • rewrite several times (if not forgotten)
  • less qualified each time
  • BL example: Unix

Who's the customer?
  • Goals may not be identical
  • Equipment vendors: preserve investment, confirm earlier choices
  • ATM, SS7
  • Carriers: preserve product differentiation, business model, customer lock-in, monopoly rent, ...
  • walled gardens, WAP, AAA, DRM, IMS, ...
  • Consumers: fashion, functionality, cost
  • search engines, WiFi, MP3, Skype, web hosting, ...
  • Easier for some organizations
  • e.g., Google: direct customer is the advertiser, but revenue driven by page views → consumer

Good ideas
  • Myth: good ideas will win
  • "Build a better mousetrap and the world will beat a path to your door." (Ralph Waldo Emerson)
  • modern version: IEEE 802.11 will dig through IEEE Infocom proceedings to find your master paper
  • even most Sigcomm papers have had no (engineering) impact
  • Myth: "just ahead of its time" - it will take 10 years to have impact
  • reality: most papers either have immediate impact or none, ever
  • Mediocre ideas with commitment win over brilliant ideas without
  • particularly if part of a larger system
  • cost of understanding ideas
  • possible encumbrances (patents)
  • → researchers need to accompany their children through the teenage years

Translation into practice
  • Relay model
  • research → advanced development → product
  • information loss rate of 95%?
  • lack of sense of ownership
  • hand-off: original owners have moved on to the next ...
  • Google model
  • repeated, continuous refinement
  • public beta
  • no separate research
  • still has problems with polish & completion

Impact of networking research
  • Very low publication-to-impact ratio
  • Brilliant idea, magically transformed into ...
  • by somebody else
  • Research as point-scoring
  • publication count
  • citation by other papers, also without impact
  • read mostly by other researchers
  • goal: graduate / get tenure

Why do good ideas fail?
  • Research: O(.), CPU overhead
  • per-flow reservation (RSVP) "doesn't scale" → not the problem
  • at least not now -- routinely handle O(50,000) routing states
  • Reality
  • deployment cost of any new L3 technology is probably billions of $
  • coordination costs
  • "The QoS problem is a lawyer problem, not an engineering problem"
  • Cost of failure
  • conservative estimate (1 grad student year 2 ...)
  • 10,000 QoS papers @ $20,000/paper → $200 million

        QoS     "quality-of-service"
IEEE    10,377  12,876
ACM     3,487   4,388
Aside: The Hockeystick Problem
  • complexity, security risks, bandwidth
  • 2nd systems: economics problems, not just ...
  • Challenges are in reliability and maintainability, rather than performance or packet-loss/jitter QoS
  • Is networking research becoming like civil engineering: large, important infrastructure, but resistant to fundamental change?
  • Existing management tools of limited use to most enterprises and end users
  • Transition to self-service networks
  • support non-technical users
  • As a community, need to learn more from our collective and individual mistakes
  • Need series: "The design mistakes in (formerly) popular system or protocol"