1
eLBA: Current Developments
  • Chris Phillips
  • eVLBI Project Scientist
  • 16 June 2008

2
The LBA
[Map of LBA telescope sites: ASKAP, ATCA, Mopra, Ceduna, Parkes, Sydney, Tidbinbilla, Hobart]
3
LBADR: the LBA Data Recorder
  • Cousin of MRO/EVN-PC
  • Commodity PC with a VSIB input card
  • Primarily records onto Apple Xserve RAID
  • Control software heavily modified from the original MRO version
  • Mark5b emulation mode
  • eVLBI with TCP or UDP
  • Very flexible
  • Data written to a normal Linux filesystem
  • Realtime sampler statistics
  • Flexible realtime fringe checking
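The sampler statistics mentioned above are a standard health check for 2-bit VLBI data. Below is a minimal sketch of the idea in C, not the actual LBADR code; the bit packing order and the roughly 17/33/33/17% state split expected for well-adjusted 2-bit quantisation of Gaussian noise are assumptions about a typical setup.

    #include <stdint.h>
    #include <stdio.h>

    /* Count the occupancy of the four 2-bit sampler states in a raw
     * buffer.  A healthy 2-bit quantiser of Gaussian noise sits near
     * 17/33/33/17%; a skewed split flags a level-setting problem. */
    void sampler_stats(const uint8_t *buf, size_t nbytes)
    {
        uint64_t count[4] = {0};

        for (size_t i = 0; i < nbytes; i++)
            for (int s = 0; s < 8; s += 2)      /* 4 samples per byte */
                count[(buf[i] >> s) & 0x3]++;

        uint64_t total = 4 * (uint64_t)nbytes;
        for (int k = 0; k < 4; k++)
            printf("state %d: %5.1f%%\n", k, 100.0 * count[k] / total);
    }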

4
LBA status
  • Using DiFX software correlator for all LBA
    operations
  • All disk correlation has been done at Swinburne
    University of Technology (Melbourne)
  • eVLBI is correlated at Parkes
  • Disk correlation will transfer to Curtin
    University (Perth) by August 2008

5
DiFX
  • All LBA correlation now runs on DiFX
  • Distributed FX
  • MPI parallelization on Beowulf style cluster
  • Written by Adam Deller at Swinburne University of
    Technology
  • Active development also from Walter Brisken
  • Available free of charge for scientific research
  • Supports LBADR, Mark5a, Mark5b, Mark5c
  • Crucial part of the VLBA sensitivity upgrade
  • Supports mixed mode correlation
  • Easy to add support for new formats

6
DiFX Architecture
  • Based on MPI
  • Written in C
  • Uses the Intel IPP libraries extensively
  • 2 types of processes:
  • I/O processes (DataStream)
  • Compute processes (Core)
  • Incoming data stream is time-divided and sent round robin from DataStream to the Cores
  • Asynchronous sends mitigate the swarming effect
  • Supports disk and eVLBI operation
  • TCP, UDP, filesystem and native Mark5
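A minimal sketch of the DataStream-to-Core pattern described above, using MPI in C. This is an illustration under assumptions, not the DiFX source: the slice size, rank layout and function name are hypothetical, and real code keeps more buffers in flight.

    #include <mpi.h>

    #define SLICE_BYTES (1 << 20)   /* hypothetical 1 MB time slice */

    /* Rank 0 acts as a DataStream: cut the input into time slices and
     * hand them round robin to Core ranks 1..ncores with non-blocking
     * sends, double-buffered so filling and sending overlap. */
    void datastream_loop(int ncores, int nslices)
    {
        static char buf[2][SLICE_BYTES];
        MPI_Request req[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };

        for (int t = 0; t < nslices; t++) {
            int b = t & 1;
            MPI_Wait(&req[b], MPI_STATUS_IGNORE);   /* buffer reusable? */
            /* ... fill buf[b] with the next time slice here ... */
            int core = 1 + t % ncores;              /* round robin */
            MPI_Isend(buf[b], SLICE_BYTES, MPI_BYTE, core, t,
                      MPI_COMM_WORLD, &req[b]);
        }
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    }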

7
APSR Cluster
  • ATNF Parkes Swinburne Recorder
  • Backend for pulsar processing at Parkes
  • 18-node Dell cluster
  • Dual-processor, quad-core nodes
  • VLBI front-end I/O cluster: PAMHELA
  • 5 nodes, dual-processor dual-core
  • 4 (Intel) NICs for network flexibility
  • Job control from PAMHELA; APSR used as a slave

Image credit: Shaun Amy
8
[Diagram: telescope connections to the PAMHELA and APSR clusters]
9
Case Study: How not to implement a cluster
  • PAMHELA running 32-bit Debian Etch
  • APSR running 64-bit CentOS
  • No shared filesystem
  • MPI/DiFX does not like mixed 32- and 64-bit binaries
  • DataStream sending 4 times more data than was being received
  • Many issues getting MPI working
  • MPICH would not scale to use the full cluster
  • Open MPI seemed to get confused by the multiple interfaces
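One standard mitigation for the multiple-interface confusion is to pin Open MPI's TCP transport to a single NIC via its MCA parameters. A hedged example follows; the interface name, host file and input file are hypothetical, though mpifxcorr is DiFX's real correlator executable.

    # Restrict Open MPI's TCP byte-transfer layer to one interface
    # ("eth2" is a placeholder for the actual data NIC).
    mpirun --mca btl tcp,self \
           --mca btl_tcp_if_include eth2 \
           -np 20 -hostfile pamhela.hosts mpifxcorr example.input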

10
ATNF Observatory Network
  • Provided by AARNet
  • Pair of dark fibers carrying multiple 10 Gbps SDH channels
  • 2 x 1 Gbps connections to each observatory
  • Used for commodity traffic (web, email, ftp, ssh, observing, etc.) and eVLBI
  • Each link effectively limited to 512 Mbps for eVLBI traffic
  • Sydney to observatory can be run at line rate using bog-standard PCs
  • 989 Mbps user data (eVLBI software)
  • 20 Mbps typical for scp
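The gap between 989 Mbps for the eVLBI software and 20 Mbps for scp is largely TCP window sizing (plus scp's encryption overhead). Below is a minimal sketch in C of the kind of socket tuning involved; the 4 MB buffer is an illustrative guess, not the value the eVLBI software actually uses.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Create a TCP socket with a send buffer large enough to keep
     * ~1 Gbps in flight over tens of milliseconds of RTT. */
    int make_sender_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int bufsize = 4 * 1024 * 1024;   /* 4 MB send buffer */
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize));
        int nodelay = 1;                 /* don't batch small writes */
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));
        return fd;
    }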

11
[Network map: ATCA, Mopra and Parkes linked to Sydney; 10 Gbps AARNet3 to Perth; 10 Gbps to Seattle; link to Swinburne]
12
Current eVLBI status
  • eVLBI is being offered via the normal call for proposals and is actively encouraged
  • 3 x 512 Mbps ATCA-Parkes-Mopra
  • 2 x 1 Gbps ATCA-Parkes (or Parkes-Mopra)
  • Implemented as 2 x (2 x 512 Mbps)
  • Exclusively layer 2
  • Hand-crafted traffic engineering for routing, using multiple VLANs
  • Connection to Hobart problematic
  • Effectively a single shared 155 Mbps IP network
  • Cannot sustain 64 Mbps TCP reliably
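A rough way to see why a shared, lossy IP path caps TCP like this is the Mathis throughput bound, rate ≈ MSS / (RTT √p), up to a constant near one. The RTT and loss rate below are illustrative assumptions, not measurements on the Hobart link:

    \text{rate} \approx \frac{\text{MSS}}{\text{RTT}\,\sqrt{p}}
                = \frac{1460 \times 8\ \text{bit}}{0.02\ \text{s} \times \sqrt{10^{-4}}}
                \approx 58\ \text{Mbps}

Even 0.01% packet loss at a 20 ms RTT keeps a single stream below the 64 Mbps target.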

13
http://noc.atnf.csiro.au/
14
Network Upgrades
  • 1 Gbps link to Swinburne imminent
  • Multi-gigabit access to 10 Gbps AARNet3 backbone
    in progress
  • Implementing VPLS (Layer 2 over Layer 3)
  • eVLBI to Curtin
  • New cluster for ATCA
  • 3 x 1 Gbps using a distributed approach
  • Still waiting on upgraded bandwidth to Hobart

15
EXPReS-Oz
  • AARNet and ATNF are partners in the EXPReS project
  • Demonstrate realtime eVLBI from Australia to JIVE
  • AARNet provided 1 Gbps light paths from Parkes, Mopra and ATCA to JIVE
  • Via CENIC, Pacific Wave, CANARIE, SURFnet
  • Science observation of SN 1987A at 1.6 GHz
  • 11-hour observation at a sustained 512 Mbps
  • Using Mark5b UDP
  • 340 ms RTT
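The 340 ms RTT is a large part of why UDP was used. A single TCP stream at 512 Mbps would need roughly the bandwidth-delay product in flight (a back-of-the-envelope figure, not one quoted in the talk), and any loss on such a large window stalls the stream:

    \text{BDP} = 512\ \text{Mbps} \times 0.34\ \text{s}
               \approx 1.7 \times 10^{8}\ \text{bit} \approx 21.8\ \text{MB}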

16
Image created by Paul Boven. Satellite image: Blue Marble Next Generation, courtesy of NASA Visible Earth.
17
(No Transcript)
18
Contours: ATCA 9 GHz super-resolved image (0.4 FWHM)
19
e-APT Demo
  • Connecting Shanghai and Kashima to Parkes at 512
    Mbps
  • AARNet have provisioned 3 x 622 Mbps circuits
  • Mixture of SDH and layer 2/3
  • AARNet, CENIC, JGN2, CSTNet, Pacific Wave, HKOEP
  • Dedicated Mopra-Parkes link
  • Presented as gigabit Ethernet

20
e-APT Demo Implementation
  • ATNF telescopes using the LBADR data format and TCP
  • Kashima and Shanghai using the JIVE UDP format
  • Raw Mark5 data stream with a 64-bit sequence number
  • Shanghai using jive5a control on a Mark5a
  • Kashima has written a realtime Mark5b converter that also generates JIVE UDP packets
  • 1500-byte MTU used exclusively
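A sketch in C of what one datagram in the JIVE UDP format described above might look like. The 64-bit sequence number is from the slide; the payload size is my assumption, derived from a 1500-byte MTU minus 20 bytes of IP and 8 bytes of UDP header.

    #include <stdint.h>

    /* Hypothetical layout of one JIVE-UDP datagram: a 64-bit sequence
     * number followed by a chunk of the raw Mark5 data stream.  The
     * receiver uses seqno to detect and reorder lost or late packets. */
    struct jive_udp_packet {
        uint64_t seqno;          /* monotonically increasing counter */
        uint8_t  payload[1464];  /* raw Mark5 bytes: 1472 - 8 for seqno */
    };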

21
Image created by Paul Boven. Satellite image: Blue Marble Next Generation, courtesy of NASA Visible Earth.
22
(No Transcript)
23
Internet2 "Wave of the Future"
  • Winners of the inaugural award!
  • 10 Gbps connection on Internet2 (continental USA)
    for 1 year
  • Possible usage:
  • eVLBI to Haystack
  • eVLBI to JIVE
  • Details still being worked through

24
Thank you
Chris Phillips, eVLBI Project Scientist, ATNF
Phone: +61 2 9372 4608
Email: Chris.Phillips@csiro.au
Web: www.atnf.csiro.au/vlbi