Using webpagetest and nanovisor.js to Measure Web Performance and Quality


1
USING WEBPAGETEST AND NANOVISOR.JS TO MEASURE WEB
PERFORMANCE AND QUALITY IN TERMS OF USER
EXPERIENCE
BY KARAN KUMAR AND GIRISH VAITHEESWARAN
2
The web of 10 years ago was massively simpler
with a narrow set of browsers and devices. These
days the complexity of the devices, websites, and
web applications has increased dramatically. As
you can imagine, that makes for a challenge in
measuring the performance of web delivery
systems. Now add in the very different approach
Instart Logic takes to deliver the web, using
technologies like big data and client-side
virtualization, and the measurement challenge
grows again. As a result, we had to come up with
a new type of framework for measuring web
performance.
3
Two unique things about our Web Application
Streaming approach are the dual-sided
architecture and the way the Instart Logic
platform sends information to the browser: we
stream objects, rather than downloading full
objects the way most web content has
traditionally been delivered.
EXISTING FRAMEWORKS TEND TO SIMULATE CLIENTS,
RATHER THAN MEASURE ON REAL BROWSERS
We found that many existing performance
frameworks are tailored to measure web
performance characteristics on the server side
and simulate clients rather than use real
browsers. The dual-sided architecture of Instart
Logic includes a client-side virtualization
component built in JavaScript called NanoVisor.js
that resides in the browser and demands a
special-purpose performance framework that allows
real browsers and devices to be used. The second
unique aspect is our streaming technology, which
is enabled by the dual-sided architecture. Most
performance and testing frameworks are designed
to measure the download time of full objects on a
web page. This metric does not accurately model
the performance characteristics of streaming web
application objects and it fails to credit the
earlier time to interaction with the content
which web application streaming makes
possible. So those were a few of the challenges.
Now let's talk a little bit about how we do it.
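To make the distinction concrete, here is a minimal sketch (not Instart Logic's actual measurement code) contrasting a traditional full-download metric with a streaming-aware one. The field names mirror the W3C Resource Timing API (`responseStart`, `responseEnd`), but the entries are hypothetical plain objects:

```javascript
// Illustrative comparison of a traditional "full download" metric with a
// streaming-aware "first usable byte" metric. Field names mirror the
// W3C Resource Timing API, but the entries here are hypothetical plain
// objects, not real PerformanceResourceTiming records.
function downloadMetrics(entries) {
  // Traditional frameworks measure when the last object finishes downloading.
  const fullDownloadMs = Math.max(...entries.map(e => e.responseEnd));
  // A streaming-aware view credits the moment the first bytes of each
  // object arrive and the browser can begin rendering or executing them.
  const firstUsableMs = Math.max(...entries.map(e => e.responseStart));
  return { fullDownloadMs, firstUsableMs };
}

const page = [
  { name: '/app.js',   responseStart: 120, responseEnd: 900 },
  { name: '/hero.jpg', responseStart: 200, responseEnd: 1500 },
];
console.log(downloadMetrics(page)); // { fullDownloadMs: 1500, firstUsableMs: 200 }
```

In this toy example, a full-download metric would report 1500 ms, while the streaming-aware view shows content available for interaction at 200 ms.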
4
A FRAMEWORK FOR MEASURING WEB PERFORMANCE THAT
LOOKS AT IT FROM THE END USER PERSPECTIVE
At Instart Logic, as computer scientists and
performance geeks with years of experience in big
data, large-scale distributed systems, and even
traditional web delivery systems, we always look
to deliver a faster web experience based on
data-driven decisions. To this end, we have
architected a performance framework called
Phoenix that is built on top of our favorite open
source web performance testing system,
WebPagetest.org (WPT), of which we have a big
internal instance set up. Phoenix uses our
client-side virtualization library NanoVisor.js,
which is a hypervisor for web applications in the
browser. We leverage the core capabilities of WPT
around instrumenting to provide a ton of detailed
information from real browsers running on real
devices. We couple all that information with some
very low-level data which our NanoVisor.js
library can collect as web pages load up in a
browser. Around the core capabilities of those
systems we have built a big data solution to
understand and analyze the large amounts of data
that our system generates. With such a framework
in place, we can point to any website or web
application and gather relevant data to
understand its performance in depth. Similarly,
we can run any such application through Instart
Logic's network, giving us a comparative and
competitive view. We routinely pull many
thousands of websites and applications through
the Instart Logic Web Application Streaming
Network and compare the results against
performance without our service, measuring
quality and performance from the standpoint of
the end user.
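As a hedged sketch of the kind of client-side summary such a library might report (NanoVisor.js's internals are not public; `summarizeNavigation` and its field choices are illustrative, based on the standard W3C Navigation Timing Level 2 entry):

```javascript
// Hypothetical sketch of a client-side timing summary a library like
// NanoVisor.js might report. The input fields mirror the W3C Navigation
// Timing Level 2 entry; summarizeNavigation itself is an illustrative
// helper, not part of the actual NanoVisor.js API.
function summarizeNavigation(t) {
  return {
    ttfbMs: t.responseStart - t.requestStart,         // time to first byte
    domInteractiveMs: t.domInteractive - t.startTime, // DOM ready to interact
    loadMs: t.loadEventEnd - t.startTime,             // full page load
  };
}

// In a real browser this would be fed from the standard API:
//   summarizeNavigation(performance.getEntriesByType('navigation')[0]);
const sample = { startTime: 0, requestStart: 40, responseStart: 160,
                 domInteractive: 800, loadEventEnd: 1900 };
console.log(summarizeNavigation(sample)); // { ttfbMs: 120, domInteractiveMs: 800, loadMs: 1900 }
```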
5
  • We define such views by various observation
    points, with some examples below:
  • Platform: Desktop vs. Mobile vs. Tablet
  • Devices: Samsung Galaxy S4 vs. iPhone 5s vs. iPad 2
  • Browsers: Chrome vs. Firefox vs. Safari vs. IE
  • Network: 3G, 4G, LTE, Cable, FIOS
  • Protocol: HTTP vs. HTTPS
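The observation points above naturally form a test matrix. A generic sketch of enumerating the combinations (the `cartesian` helper is illustrative, not part of WPT or any Instart Logic API):

```javascript
// Generic Cartesian-product helper to enumerate observation points as a
// test matrix. The dimension values come from the list above; the
// cartesian helper itself is illustrative plain JavaScript.
function cartesian(dims) {
  return Object.entries(dims).reduce(
    (rows, [key, values]) =>
      rows.flatMap(row => values.map(v => ({ ...row, [key]: v }))),
    [{}]
  );
}

const matrix = cartesian({
  platform: ['Desktop', 'Mobile', 'Tablet'],
  browser:  ['Chrome', 'Firefox', 'Safari', 'IE'],
  network:  ['3G', '4G', 'LTE', 'Cable', 'FIOS'],
});
console.log(matrix.length); // 3 * 4 * 5 = 60 combinations
```

Each row of the matrix describes one observation point at which a page can be measured.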

MEASURING PERFORMANCE GENERATES BIG DATA AND
ALLOWS MORE INSIGHT
By collecting and measuring information from the
instrumented browsers via WPT and then also
capturing detailed client-side performance data
reported by the NanoVisor.js, we get invaluable
insights into the user experience across devices,
platforms, and networks beyond what the basics
of a browser's built-in performance APIs can
provide. The raw data from these many different
observation points is collected in a big data
solution we designed internally to achieve
deeper insights. With the Instart Logic core
team coming from the big data world, this part
was actually the easiest to tackle. Not only is
this system invaluable for
testing the performance and quality of our
systems, but we have already leveraged it to
provide deeper insights into how the modern web
is evolving.
6
  • Here are a few examples.
  • We track major trends on the web, such as when
    resources are loaded (before or after onload)
    and whether they are referenced statically in
    HTML or dynamically in JavaScript.
  • We constantly provide the Instart Logic R&D
    teams with feedback on existing product features
    and how they impact performance across a broad
    set of sites.
  • The data we collect also enables Instart Logic
    R&D teams to estimate how impactful a new
    feature idea would be. For example, we might
    want to know how many SaaS apps use multiple
    versions of common JavaScript libraries, and
    whether those libraries are loaded in the
    header or down in the page body.
  • That's just a quick overview of what we have
    built and some of the things we can do with our
    performance and testing system. In future posts
    we will drill down in a few areas and also share
    some of the insights we have learned.
  • We wanted to close with a big thank you to
    Patrick Meenan and the web performance community
    for WebPagetest.org, which laid the groundwork
    upon which we build!
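The before/after-onload classification mentioned in the examples above could be sketched like this (illustrative only; `classifyByOnload` is a hypothetical helper using Resource Timing-style fields):

```javascript
// Illustrative helper: split resource fetches into before/after onload
// using Resource Timing-style fields. In a real browser, loadEventStart
// would come from the navigation entry and the resources from
// performance.getEntriesByType('resource'); plain objects stand in here.
function classifyByOnload(resources, loadEventStart) {
  return {
    beforeOnload: resources.filter(r => r.startTime < loadEventStart).map(r => r.name),
    afterOnload:  resources.filter(r => r.startTime >= loadEventStart).map(r => r.name),
  };
}

const resources = [
  { name: '/jquery.js',    startTime: 150 },  // static reference in HTML
  { name: '/analytics.js', startTime: 2100 }, // injected after onload
];
console.log(classifyByOnload(resources, 2000));
// { beforeOnload: [ '/jquery.js' ], afterOnload: [ '/analytics.js' ] }
```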

7
Visit our Blog to learn more