Grids and Software Engineering Test Platforms
  • Alberto Di Meglio
  • CERN

  • Setting the Context
  • A Typical Grid Environment
  • Challenges
  • Test Requirements
  • Methodologies
  • The Grid as a Test Tool
  • Conclusions
  • Panel discussion on Grid QA and industrial
    applications

Setting the Context
  • What is a distributed environment?
  • The main characteristic of a distributed
    environment that affects how tests are performed:
  • Many things happen at all times in the same or
    different places and can have direct or indirect
    and often unpredictable effects on each other
  • The main goal of this discussion is to show
    what the consequences of this are for testing the
    grid, and how the grid can (must) be used as a
    tool to test itself and the software running on it

A Typical Grid Environment
  • Non-determinism
  • Infrastructure dependencies
  • Distributed and partial failures
  • Time-outs
  • Dynamic nature of the structure
  • Lack of mature standards (interoperability)
  • Multiple heterogeneous platforms
  • Security

Non-determinism
  • Distributed systems like the grid are inherently
    non-deterministic
  • Noise is introduced in many places (OS
    schedulers, network time-outs, process
    synchronization, race conditions, etc)
  • Changes in the infrastructure not controlled by a
    test have an effect on the test and on the
    sequence of tests
  • Difficult to exactly reproduce a test run
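To illustrate one piece of this, a minimal Python sketch (the function name and toy workload are invented for illustration) of removing a single source of non-determinism by recording and fixing the random seed of a test run; OS scheduling and network timing, of course, cannot be pinned down this easily:

```python
import random

def reproducible_run(seed: int) -> list:
    """Run a toy 'test workload' whose outcome depends on randomness."""
    # A private RNG avoids global state shared with other tests in the sequence.
    rng = random.Random(seed)
    # Simulated sequence of observations the test would make.
    return [rng.randint(1, 100) for _ in range(5)]

# Two runs with the same recorded seed give identical observations,
# so a failing run can be replayed exactly.
assert reproducible_run(42) == reproducible_run(42)
```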

Infrastructure dependencies
  • Operating systems and third-party applications
    interact with the objects to be tested
  • Different versions of OSs and applications may
    behave differently
  • Software updates (especially security patches)
    cannot be avoided
  • Network topologies and boundaries may be under
    someone else's control (routers, firewalls, etc.)

Distributed and Partial Failures
  • In a distributed system, failures too are
    distributed and partial
  • A test or sequence of tests may fail because part
    of the system (a node, a service) fails or is
    unreachable
  • The nature of the problem can be anything:
    hardware, software, local network policy changes,
    power failures
  • In addition, since this is expected, middleware
    and applications should cope with that and their
    behaviour should be tested for it

Time-outs
  • Time-outs are not necessarily due to failures;
    they may also be caused by excessive load
  • They may be infrastructure-related (network),
    system-related (OS, service containers) or
    application-related
  • Services may react differently when time-outs
    occur: they may plainly fail, raise exceptions,
    or have retry strategies
  • This has consequences for the test sequence
    (non-determinism again)
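The retry strategies mentioned above can be sketched as follows; the function name, defaults and exponential backoff policy are illustrative, not taken from any particular grid middleware:

```python
import time

def call_with_retries(operation, retries=3, timeout=1.0, backoff=2.0):
    """Call `operation`, retrying on TimeoutError with exponential backoff."""
    delay = timeout
    for attempt in range(retries):
        try:
            return operation()
        except TimeoutError:
            if attempt == retries - 1:
                raise  # out of retries: propagate, as a plainly-failing service would
            time.sleep(delay)
            delay *= backoff  # wait longer before the next try

# Example: an operation that times out twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError
    return "ok"

assert call_with_retries(flaky, retries=3, timeout=0.01) == "ok"
```

Note how the same sequence of calls can succeed or fail depending on how many time-outs occur, which is exactly the non-determinism the slide refers to.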

Dynamic nature of the structure
  • The type and number of actors and objects
    participating in the workflow change with time
    and location (concurrent users, different
    processes on the same machine, different machines
    across the infrastructure)
  • Middleware and applications may dynamically
    (re)configure themselves depending on local or
    remote conditions (for example load balancing or
    service fail-over)
  • Actual execution paths may change with load
  • How to reproduce and track such configurations?

Moving Standards
  • The lack of standards, or rapidly changing ones,
    makes it difficult for grid services to interoperate
  • Service-oriented architectures should make life
    easier, but which standard should be adopted?
  • Failures may be due to
    incorrect/incomplete/incompatible implementations
  • Ex. 1: plain web services, WSRF, WS-?
  • Ex. 2: axis (j/c), gsoap, gridsite, zsi?
  • Ex. 3: SRM, JSDL
  • How to test the potential combinations?
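One way to see the combinatorial problem is to enumerate the pairings explicitly; the stack names below are only illustrative stand-ins for toolkits like those listed above:

```python
import itertools

# Hypothetical axes of variation; a real project would substitute the
# client and server toolkits it actually targets.
CLIENT_STACKS = ["axis-java", "gsoap", "zsi"]
SERVER_STACKS = ["axis-java", "gsoap", "gridsite"]

def interoperability_matrix(clients, servers):
    """Enumerate every client/server pairing to be tested.

    The multiplicative growth of this list is exactly why testing
    all combinations by hand is impractical.
    """
    return list(itertools.product(clients, servers))

matrix = interoperability_matrix(CLIENT_STACKS, SERVER_STACKS)
assert len(matrix) == len(CLIENT_STACKS) * len(SERVER_STACKS)  # 9 pairings
```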

Multiple Heterogeneous Platforms
  • Distributed software, especially grid software,
    runs on a variety of platforms (combinations of
    OS, architecture and compilers)
  • Software is often written on a specific platform
    and only later ported to other platforms
  • OS and third-party dependencies may change across
    platforms in version and type
  • Different compilers usually do not compile the
    same code in the same way (if at all)

Security
  • Security and security testing are huge issues
  • Sometimes there is a tendency to consider
    security an add-on of the middleware or the
    applications
  • Software behaves in completely different ways
    with and without security for the same
    operations
  • Ex.: consider the simple example of a web service
    running on http or https, with or without client
    authentication
  • Sometimes software is developed on individual
    machines without taking into account the
    constraints imposed by running secure network
    services

Test Requirements
  • Where to start from?
  • Test Plans
  • Life-cycle testing
  • Reproducibility
  • Archival and analysis
  • Interactive Vs. automated testing

Test Plans
  • Test plans should be the mandatory starting point
    of all test activities. This point is often
    neglected
  • It is a difficult task
  • You need to understand thoroughly your system and
    the environment where it must be deployed
  • You need to spell out clearly what you want to
    test, how, and what the expected results are
  • Write it together with domain experts to make
    sure as many system components and interactions
    as possible are taken into account
  • Revise it often

Life-cycle Testing
  • When designing the test plan, don't think only
    about functionality, but also about how the
    system will have to be deployed and maintained
  • Start with explicit design of installation,
    configuration and upgrade tests: it is easy to
    see that a large part of the bugs of a system
    fall in the installation and configuration
    categories

gLite bugs categories

Reproducibility
  • This requirement addresses the issue of
    reproducing test runs reliably
  • Invest in tools and processes that make your
    tests and your test environment reproducible
  • Install your machines using scripts or system
    management tools, but disable automated
    APT/YUM/up2date updates
  • Store the tests together with all information
    needed to run them (environment variables,
    properties, support files, etc) and use version
    control tools to keep the tests in sync with
    software releases
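A minimal sketch of storing run information together with a test, assuming Python; the chosen fields and the `GLITE_`-prefix filter on the environment are invented for illustration (a real harness would also record package versions and the release under test):

```python
import json
import os
import platform
import sys

def snapshot_environment(extra=None):
    """Record the information needed to rerun a test later."""
    snap = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        # Illustrative filter: keep only the variables the test depends on.
        "env": {k: v for k, v in os.environ.items() if k.startswith("GLITE_")},
    }
    if extra:
        snap.update(extra)
    return snap

# Stored alongside the test (and under version control), the snapshot
# keeps the test in sync with the software release it was run against.
record = snapshot_environment({"release": "1.2.3"})
print(json.dumps(record, indent=2))
```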

Reproducibility (2)
  • Resist the temptation of doing too much
    debugging on your test machines (are testers
    supposed to do that?)
  • If you can afford it, think of using parallel
    testbeds for test runs and debugging
  • Try and write a regression test immediately after
    the problem is found, record it in the test or
    bug tracking system and feed it back to the
    developers
  • Then scratch the machine and restart
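A sketch of such a regression test; the parsing function and the bug id are purely hypothetical, the point is capturing the exact failing input before the machine is scratched:

```python
def parse_host_port(value):
    """Fixed function under test (hypothetical): previously crashed
    when given a host name with no ':port' part."""
    host, sep, port = value.partition(":")
    return host, (int(port) if sep else None)

def test_bug_1234_missing_port():
    """Regression test recorded under a hypothetical bug id;
    rerun on every release to verify the bug has not reappeared."""
    assert parse_host_port("ce01.example.org") == ("ce01.example.org", None)

def test_normal_case_still_works():
    assert parse_host_port("ce01.example.org:2119") == ("ce01.example.org", 2119)

test_bug_1234_missing_port()
test_normal_case_still_works()
```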

Archival and Analysis
  • Archive as much information as possible about
    your tests (output, errors, logs, files, build
    artifacts, even an image of the machine itself,
    if feasible)
  • If possible use a standard test output schema
    (the xunit schema is quite standard and can be
    used for many languages and for unit, functional
    and regression tests)
  • Using a common schema helps in correlating
    results, creating test hierarchies, and performing
    trend analysis (performance and stress tests)
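A minimal sketch of emitting results in the common xUnit/JUnit XML shape, using only the Python standard library; the element and attribute names follow the widely used JUnit report layout, while the suite and test names are invented:

```python
import xml.etree.ElementTree as ET

def to_xunit(suite_name, results):
    """Serialize results into xUnit-style XML.

    `results` is a list of (test_name, error_message_or_None) pairs.
    """
    failures = sum(1 for _, err in results if err)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            # A <failure> child marks the test case as failed.
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")

xml_report = to_xunit("hello-grid-world",
                      [("submit-job", None), ("retrieve-file", "timeout")])
assert 'failures="1"' in xml_report and "<failure" in xml_report
```

Because many tools consume this schema, results from unit, functional and regression tests can be archived and compared in one place.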

Interactive Vs. Automated Tests
  • This is a debated issue (related to the
    reproducibility and debugging issues)
  • Some people say that the more complex a system
    is, the fewer meaningful automated tests you can
    write
  • Other people say that the more complex a system
    is, the more necessary it is to automate the
    tests
  • The truth is probably in between: you need both,
    and whatever test tools you use should allow you
    to do both
  • A sensible approach is to run distributed
    automated tests using a test framework and freeze
    the machines where problems occur in order to do
    more interactive tests if the available output is
    not enough

Methodologies
  • Unit testing
  • Metrics
  • Installation and configuration
  • Hello grid world tests and Grid Exercisers
  • Functional and non-functional tests

Unit Testing
  • Unit tests are tests performed on the code during
    or immediately after a build
  • They should be independent from the environment
    and the test sequence
  • They are not used to test functionality, but the
    nominal behaviour of functions and methods
  • Unit tests are a responsibility of the developers
    and in some models (test-driven development) they
    should be written before the code
  • It is proven that up to 75% of the bugs of a
    system can in principle be stopped by doing
    proper unit tests
  • It is also proven that they are the first thing
    that is skipped as soon as a project is late
    (which normally happens within the initial 20%
    of its life)
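A sketch of an environment- and sequence-independent unit test: the function, URL schemes and host names are hypothetical, and the catalogue lookup is injected as a parameter so that the nominal behaviour can be checked without any grid infrastructure:

```python
from unittest import mock

def resolve_storage_url(lookup):
    """Hypothetical function under test: maps a logical file name to a
    transfer URL via the injected `lookup` callable."""
    url = lookup("lfn:/grid/demo/file1")
    if not url.startswith(("gsiftp://", "https://")):
        raise ValueError("unsupported scheme: " + url)
    return url

# Nominal behaviour, no live service needed.
fake = mock.Mock(return_value="gsiftp://se01.example.org/demo/file1")
assert resolve_storage_url(fake) == "gsiftp://se01.example.org/demo/file1"

# Abnormal input is rejected deterministically, independent of the
# environment and of any previous tests.
try:
    resolve_storage_url(mock.Mock(return_value="ftp://se01/demo/file1"))
    assert False, "expected ValueError"
except ValueError:
    pass
```

Injecting the dependency is the design choice that keeps the test runnable during or immediately after a build.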

Metrics
  • Another controversial point
  • Metrics by themselves are not extremely useful
  • However, used together with the other test
    methodologies they can provide some interesting
    information about the system

gLite bug trends examples
Installation and Configuration
  • As mentioned, dedicate some time to test
    installation and configuration of the services
  • Use automated systems for installing and
    configuring the services (system management
    tools, APT, YUM, quattor, SMS, etc). No manual
    steps
  • Test upgrade scenarios from one version of a
    service to another
  • Many interoperability and compatibility issues
    are immediately discovered when restarting a
    service after an upgrade

Hello, grid world tests and Grid Exercisers
  • Now you have an installed and configured service.
    So what?
  • A good way of starting the tests is to have a set
    of nominal Hello, grid world tests and Grid
    Exercisers
  • Such tests should perform a number of basic,
    black-box tests, like submitting a simple job
    through the chain, retrieving a file from
    storage, etc
  • The tests should be designed to exercise the
    system from end to end, but without focusing too
    much on the internals of the system
  • No other tests should start until the full set of
    exercisers runs consistently and reproducibly in
    the testbed
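Such an exerciser suite can be sketched as a list of black-box commands judged only by their exit codes; the check names and the stand-in commands below are hypothetical (a real suite would invoke the middleware's job-submission and storage-retrieval tools):

```python
import subprocess
import sys

# Hypothetical exercisers: each stand-in command replaces a real
# end-to-end middleware operation and is treated as a black box.
EXERCISERS = [
    ("submit-job", [sys.executable, "-c", "pass"]),
    ("retrieve-file", [sys.executable, "-c", "pass"]),
]

def run_exercisers(checks):
    """Run every smoke check; return the names of the failed ones."""
    return [name for name, cmd in checks
            if subprocess.run(cmd).returncode != 0]

# No other tests should start until this list comes back empty,
# consistently and reproducibly.
assert run_exercisers(EXERCISERS) == []
```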

Functional and Non-Functional Tests
  • At this point you can fire the full complement
    of tests
  • Regression tests (verify that old bugs have not
    reappeared)
  • Functional tests (black and white box)
  • Performance tests
  • Stress tests
  • End-to-end tests (response times, auditing, etc.)
  • Of course this should be done
  • for all services and their combinations
  • on as many platforms as possible
  • with full security in place
  • using meaningful test configurations and data

The Grid as a Test Tool
  • Intragrids
  • Certification and Pre-Production environments
  • Virtualization and the Virtual Test Lab
  • Grid Test Frameworks
  • State of the Art

Intragrids
  • Intragrids are becoming more common, especially
    in commercial companies
  • An intragrid is a grid of computing resources
    entirely owned by a single company/institute, not
    necessarily in the same geographical location
  • Often they use very specific (enhanced) security
  • They are often used as tools to increase the
    efficiency of a company's internal processes
  • But there are also cases of intragrids used as
    test tools
  • A typical example is the intragrid used by CPU
    manufacturers like Intel to simulate their
    hardware or test their compilers on multiple
    platforms

Certification and Pre-Production
  • In order to test grid middleware and applications
    in meaningful contexts, the testbeds should be as
    close a reproduction as possible of real grid
    infrastructures
  • A typical approach is to have Certification and
    Pre-Production environments designed as
    smaller-scale, but full-featured grids with
    multiple participating sites
  • A certification testbed is typically composed of
    a complete, but limited set of services, usually
    within the same network. It is used to test
    nominal functionality
  • A pre-production environment is a full-fledged
    grid, with multiple sites and services, used by
    grid middleware and application providers to test
    their software
  • A typical example is the EGEE pre-production
    environment where gLite releases and HEP or
    biomed grid applications are tested before they
    are released to production

Virtualization and the Virtual Test Lab
  • As we have seen, the Grid must embrace diversity
    in terms of platforms, development languages,
    deployment methods, etc
  • However, testing all resulting combinations is
    very difficult and time consuming, not to mention
    the manpower required
  • Automation tools can help, but providing and
    especially maintaining the required hardware and
    software resources is not trivial
  • In addition running tests on clean resources is
    essential for enforcing reproducibility
  • A possible solution is the use of virtualization

The Standard Test Lab
  • Each test platform has to be preinstalled and
    maintained
  • Elevated-privileges tests cannot be easily done
    (security risks)
  • Required for performance and stress tests
  • The testbed is only composed of a limited number
    of officially supported platforms

The Virtual Test Lab
  • Based on virtualization software (XEN, MS Virtual
    Server, etc)
  • Images can contain preinstalled OSs in fixed,
    reproducible configurations
  • It allows performing elevated-privileges tests
  • Security risks are minimized: the image is
    destroyed when the test is over, but it can also
    be archived for later offline analysis of the
    results
Grid Test Frameworks
  • A test framework is a program or a suite of
    programs that helps manage and execute tests
    and collect the results
  • They go from low-level frameworks like xunit
    (junit, pyunit, cppunit, etc) to full-fledged
    grid-based tools like NMI, Inca and ETICS (more
    on this later)
  • It is recommended to use such tools to make
    test execution reproducible, to automate or
    replicate tasks across different platforms, and
    to collect and analyse results over time
  • But remember one of the previous tenets: make
    sure your tests can be run manually and that the
    test framework doesn't prevent that
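A sketch of keeping a test runnable both under an xunit-style framework and by hand; the test case and the checked reply format are invented for illustration:

```python
import unittest

class TestServicePing(unittest.TestCase):
    """Discoverable by any xunit-style framework, yet still runnable
    by hand on a frozen machine during interactive debugging."""

    def test_ping_reply_format(self):
        # Hypothetical check: a service ping reply should start with "pong".
        reply = "pong uptime=42"
        self.assertTrue(reply.startswith("pong"))

def run_manually():
    """Manual escape hatch: run the suite without any framework."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestServicePing)
    return unittest.TextTestRunner(verbosity=2).run(suite)

if __name__ == "__main__":
    run_manually()
```

The same test file is picked up automatically by the framework and, when the framework's output is not enough, can be executed interactively.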

State of the Art
  • NMI
  • Inca
  • OMII-Europe

NMI
  • NMI is a multi-platform facility designed to
    provide (automated) software building and testing
    services for a variety of (grid) computing
    projects
  • NMI is a layer on top of Condor to abstract the
    typical complexity of the Build and Test process
  • Condor offers mechanisms and policies that
    support High Throughput Computing (HTC) on large
    collections of distributed computing resources

NMI (2)
NMI (3)
NMI (4)
  • Currently used by
  • Condor
  • Globus
  • VDT

Inca
  • Inca is a flexible framework for the automated
    testing, benchmarking and monitoring of Grid
    systems. It includes mechanisms to schedule the
    execution of information gathering scripts and to
    collect, archive, publish, and display data
  • Originally developed for the TeraGrid project
  • It is part of NMI

INCA (2)
[Diagram: build/test artefacts flow into report and
project databases, accessed via browser or via
command-line tools through the NMI client]

ETICS Infrastructure
  • Web Application layout (project structure)

  • Currently used or being evaluated by
  • EGEE for the gLite middleware
  • DILIGENT (digital libraries on the grid)
  • CERN IT FIO Team (quattor, castor)
  • Open discussion ongoing with HP, Intel, Siemens
    to identify potential commercial applications

Conclusions
  • Testing for the grid and with the grid is a
    difficult task
  • Overall quality (ease-of-use, reliable
    installation and configuration, end-to-end
    security) is not always at the level that
    industry would find viable or cost-effective for
    commercial applications
  • It is essential to dedicate efforts to testing
    and improving the quality of grid software by
    using dedicated methodologies and facilities and
    sharing resources
  • It is also important to educate developers to
    appreciate the importance of thinking in terms of
    testability
  • However the prize for this effort would be a
    software engineering platform of unprecedented
    potential and flexibility

Panel discussion
  • ?