Where The Rubber Meets the Sky: Giving Access to Science Data

Transcript and Presenter's Notes
1
Where The Rubber Meets the Sky: Giving Access to
Science Data
  • Jim Gray
  • Microsoft Research
  • Gray@Microsoft.com
  • http://research.microsoft.com/~Gray
  • Alex Szalay, Johns Hopkins University
  • Szalay@JHU.edu
  • Talk at
  • U. Waterloo, U. Penn
  • Fall 2004

2
  • Promised Abstract
  • On-Line Science
  • The World-Wide Telescope as a Prototype for the
    New Computational Science.
  • Computational science has historically meant
    simulation, but there is an increasing role for
    analysis and mining of online scientific data. As
    a case in point, half of the world's astronomy
    data is public. The astronomy community is
    putting all that data on the Internet so that the
    Internet becomes the world's best telescope: it
    has the whole sky, in many bands, and in detail
    as good as the best 2-year-old telescopes. It is
    usable by all astronomers everywhere. This is
    the vision of the Virtual Observatory -- also
    called the World Wide Telescope (WWT). As one
    step along that path I have been working with the
    Sloan Digital Sky Survey and Caltech to federate
    their data in web services on the Internet, and
    to make it easy to ask questions of the database
    (see http://skyserver.sdss.org and
    http://skyquery.net). This talk explains the
    rationale for the WWT, discusses how we designed
    the database, and talks about some data mining
    tasks. It also describes computer science
    challenges of publishing, federating, and mining
    scientific data, and argues that XML web services
    are key to federating diverse data sources.
  • Actual Abstract
  • I have been working with some astronomers
    for the last 6 years, trying to apply DB
    technology to science problems.
  • These are some lessons I learned.

3
Outline
  • New Science
  • X-Info for all fields X
  • WWT as an example
  • Big Picture
  • Puzzle
  • Hitting the wall
  • Needle in haystack
  • Mohamed and the mountain
  • Working cross disciplines
  • Data Demographics and Data Handling
  • Curation

4
New Science Paradigms
  • A thousand years ago, science was empirical:
  • describing natural phenomena
  • In the last few hundred years, a theoretical branch:
  • using models, generalizations
  • In the last few decades, a computational branch:
  • simulating complex phenomena
  • Today, data exploration (eScience):
  • unify theory, experiment, and simulation
  • using data management and statistics
  • Data captured by instruments or generated by
    simulators
  • Processed by software
  • Scientist analyzes database / files

5
The Big Picture
The Big Problems
  • Data ingest
  • Managing a petabyte
  • Common schema
  • How to organize it?
  • How to reorganize it?
  • How to coexist with others?
  • Data Query and Visualization tools
  • Support/training
  • Performance
  • Execute queries in a minute
  • Batch (big) query scheduling

6
Experiment Budgets: ¼ to ½ Software
  • Millions of lines of code
  • Repeated for experiment after experiment
  • Not much sharing or learning
  • Let's work to change this
  • Identify generic tools
  • Workflow schedulers
  • Databases and libraries
  • Analysis packages
  • Visualizers
  • Software for
  • Instrument scheduling
  • Instrument control
  • Data gathering
  • Data reduction
  • Database
  • Analysis
  • Visualization

7
What X-info Needs from Us (CS) (not drawn to
scale)
8
Data Access: Hitting a Wall
  • Current science practice based on data download
    (FTP/GREP) will not scale to the datasets of
    tomorrow
  • You can GREP 1 MB in a second
  • You can GREP 1 GB in a minute
  • You can GREP 1 TB in 2 days
  • You can GREP 1 PB in 3 years
    (see the back-of-envelope arithmetic below)
  • Oh!, and 1 PB ≈ 5,000 disks
  • At some point you need indices to limit
    search: parallel data search and analysis
  • This is where databases can help
  • You can FTP 1 MB in 1 sec
  • You can FTP 1 GB / min (≈ $1)
  • 2 days and $1K
  • 3 years and $1M
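A back-of-envelope check of these numbers, assuming the effective
sequential scan/transfer rate of roughly 10 MB/s that the slide's
figures imply:

\[
t = \frac{S}{B},\quad B \approx 10^{7}\,\mathrm{B/s}:\qquad
\frac{10^{9}\,\mathrm{B}}{10^{7}\,\mathrm{B/s}} \approx 10^{2}\,\mathrm{s}\ (\text{1 GB in minutes}),\quad
\frac{10^{12}}{10^{7}} = 10^{5}\,\mathrm{s} \approx 1\text{ to }2\ \mathrm{days},\quad
\frac{10^{15}}{10^{7}} = 10^{8}\,\mathrm{s} \approx 3\ \mathrm{years}.
\]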

9
Information Avalanche
  • In science, industry, government, ...
  • better observational instruments and
  • better simulations
  • are producing a data avalanche
  • Examples
  • BaBar grows 1 TB/day: 2/3 simulation information,
    1/3 observational information
  • CERN LHC will generate 1 GB/s ≈ 10 PB/y
  • VLBA (NRAO) generates 1 GB/s today
  • Pixar: 100 TB/movie
  • New emphasis on informatics:
  • Capturing, Organizing, Summarizing, Analyzing,
    Visualizing

Image courtesy C. Meneveau & A. Szalay @ JHU
BaBar, Stanford
PE Gene Sequencer, from http://www.genome.uci.edu/
Space Telescope
10
Next-Generation Data Analysis
  • Looking for
  • Needles in haystacks: the Higgs particle
  • Haystacks: dark matter, dark energy, turbulence,
    ecosystem dynamics
  • Needles are easier than haystacks
  • Global statistics have poor scaling
  • Correlation functions are N², likelihood
    techniques N³
  • As data and computers grow at Moore's Law, we
    can only keep up with N log N
    (see the note below)
  • A way out?
  • Relax the optimal notion (data is fuzzy, answers
    are approximate)
  • Don't assume infinite computational resources or
    memory
  • Requires a combination of statistics and computer
    science
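To make the scaling argument explicit (a sketch: if the dataset size N
and the available compute both grow at the same exponential rate, so
that compute ∝ N, then an algorithm stays feasible only if its cost is
nearly linear):

\[
\text{time}(N) = \frac{\text{cost}(N)}{\text{compute}(N)} \;\propto\;
\begin{cases}
N & O(N^{2})\ \text{algorithm: runtime doubles as } N \text{ doubles}\\
N^{2} & O(N^{3})\ \text{algorithm: falls behind even faster}\\
\log N & O(N\log N)\ \text{algorithm: nearly flat, so it keeps up.}
\end{cases}
\]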

11
Smart Data: Unifying DB and Analysis
  • There is too much data to move around: do data
    manipulations at the database
  • Build custom procedures and functions into the DB
    (a sketch follows below)
  • Unify data Access and Analysis
  • Examples
  • Statistical sampling and analysis
  • Temporal and spatial indexing
  • Pixel processing
  • Automatic parallelism
  • Auto (re)organize
  • Scalable to Petabyte datasets

Move Mohamed to the mountain, not the mountain to
Mohamed.
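A minimal sketch of the idea in SQL. The function and table names here
are illustrative, not the actual SkyServer schema:

-- Install the computation inside the database so the scan, the
-- filter, and the arithmetic all run where the data lives.
CREATE FUNCTION dbo.ColorIndex (@g float, @r float)
RETURNS float
AS
BEGIN
    RETURN @g - @r;           -- the g-r color of an object
END
GO

-- A color cut now ships no rows to the client until the very end.
SELECT objID, dbo.ColorIndex(g, r) AS gr
FROM   PhotoObj
WHERE  dbo.ColorIndex(g, r) > 1.0;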
12
Outline
  • New Science
  • Working cross disciplines
  • How to help?
  • 20 questions
  • WWT example
  • Alternative CS Process Models
  • Data Demographics and Data Handling
  • Curation

13
How to Help?
  • Can't learn the discipline before you
    start (takes 4 years...)
  • Can't go native: you are a CS person, not a
    bio person
  • Have to learn how to communicate. Have to learn
    the language
  • Have to form a working relationship with domain
    expert(s)
  • Have to find problems that leverage your skills

14
Working Cross-Culture: A Way to Engage With
Domain Scientists
  • Communicate in terms of scenarios
  • Work on a problem that gives 100x benefit
  • Weeks/task vs hours/task
  • Solve 20% of the problem
  • The other 80% will take decades
  • Prototype
  • Go from working-to-working. Always have:
  • Something to show
  • Clear next steps
  • Clear goal
  • Avoid death-by-collaboration-meetings.

15
Working Cross-Culture -- 20 Questions: A Way to
Engage With Domain Scientists
  • Astronomers proposed 20 questions
  • Typical of things they want to do
  • Each would require a week or more the old way
    (programming in tcl / C / FTP)
  • Goal: make it easy to answer questions
  • This goal motivates DB and tools design

16
The Virtual Observatory
  • Premise: most data is (or could be) online
  • The Internet is the world's best telescope:
  • It has data on every part of the sky
  • In every measured spectral band: optical, x-ray,
    radio...
  • As deep as the best instruments (2 years ago).
  • It is up when you are up
  • The seeing is always great
  • It's a smart telescope: links objects and
    data to literature
  • Software is the capital expense
  • Share, standardize, reuse...

17
Why Is Astronomy Special?
  • Almost all literature online and public: ADS
    http://adswww.harvard.edu/, CDS
    http://cdsweb.u-strasbg.fr/
  • Data has no commercial value
  • No privacy concerns; freely share results with
    others
  • Great for experimenting with algorithms
  • It is real and well documented
  • High-dimensional (with confidence intervals)
  • Spatial, temporal
  • Diverse and distributed
  • Many different instruments from many different
    places and many different times
  • The community wants to share the data: World
    Wide Telescope = federate all data.
  • There is a lot of it (soon petabytes)

18
The 20 Queries
  • Q11: Find all elliptical galaxies with spectra
    that have an anomalous emission line.
  • Q12: Create a gridded count of galaxies with
    u-g>1 and r<21.5 over 60<declination<70, and
    200<right ascension<210, on a grid of 2', and
    create a map of masks over the same grid.
  • Q13: Create a count of galaxies for each of the
    HTM triangles which satisfy a certain color cut,
    like 0.7u-0.5g-0.2i<1.25 and r<21.75, output it
    in a form adequate for visualization.
  • Q14: Find stars with multiple measurements that
    have magnitude variations >0.1. Scan for stars
    that have a secondary object (observed at a
    different time) and compare their magnitudes.
  • Q15: Provide a list of moving objects consistent
    with an asteroid.
  • Q16: Find all objects similar to the colors of a
    quasar at 5.5<redshift<6.5.
  • Q17: Find binary stars where at least one of them
    has the colors of a white dwarf.
  • Q18: Find all objects within 30 arcseconds of one
    another that have very similar colors, that is,
    where the color ratios u-g, g-r, r-i are less
    than 0.05m.
  • Q19: Find quasars with a broad absorption line in
    their spectra and at least one galaxy within 10
    arcseconds. Return both the quasars and the
    galaxies.
  • Q20: For each galaxy in the BCG data set
    (brightest cluster galaxy), in 160<right
    ascension<170, -25<declination<35, count the
    galaxies within 30" of it that have a photoz
    within 0.05 of that galaxy.
  • Q1: Find all galaxies without unsaturated pixels
    within 1' of a given point (ra=75.327,
    dec=21.023).
  • Q2: Find all galaxies with blue surface
    brightness between 23 and 25 mag per square
    arcsecond, -10<super galactic latitude (sgb)
    <10, and declination less than zero.
  • Q3: Find all galaxies brighter than magnitude 22,
    where the local extinction is >0.75.
  • Q4: Find galaxies with an isophotal surface
    brightness (SB) larger than 24 in the red band,
    with an ellipticity>0.5, and with the major axis
    of the ellipse having a declination of between
    30" and 60" arc seconds.
  • Q5: Find all galaxies with a deVaucouleurs
    profile (r^1/4 falloff of intensity on disk) and
    photometric colors consistent with an elliptical
    galaxy.
  • Q6: Find galaxies that are blended with a star,
    and output the deblended galaxy magnitudes.
  • Q7: Provide a list of star-like objects that are
    1% rare.
  • Q8: Find all objects with unclassified spectra.
  • Q9: Find quasars with a line width >2000 km/s and
    2.5<redshift<2.7.
  • Q10: Find galaxies with spectra that have an
    equivalent width in Hα >40Å (Hα is the main
    hydrogen spectral line.)

Also some good queries at http://www.sdss.jhu.edu
/ScienceArchive/sxqt/sxQT/Example_Queries.html
19
Two kinds of SDSS data in an SQL DB (objects and
images all in the DB)
  • 300M Photo Objects with 400 attributes
  • 10B rows overall

400K Spectra with 30
lines/spectrum, 100M rows
20
An easy one, Q7: Provide a list of star-like
objects that are 1% rare.
  • Found 14,681 buckets; the first 140 buckets hold
    99% of the objects; time: 104 seconds
  • Disk bound, reads 3 disks at 68 MBps.

-- Bucket each star's four colors into integer bins;
-- rare objects land in sparsely populated buckets.
SELECT cast((u-g) as int) AS ug,
       cast((g-r) as int) AS gr,
       cast((r-i) as int) AS ri,
       cast((i-z) as int) AS iz,
       count(*) AS Population
FROM stars
GROUP BY cast((u-g) as int), cast((g-r) as int),
         cast((r-i) as int), cast((i-z) as int)
ORDER BY count(*)
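A natural follow-up (a sketch, not from the slide): pull the tail of
that histogram, i.e., the sparsely populated buckets that hold the rare
objects.

-- Keep only buckets with few members; their stars are the rare ones.
SELECT cast((u-g) as int) AS ug, cast((g-r) as int) AS gr,
       cast((r-i) as int) AS ri, cast((i-z) as int) AS iz,
       count(*) AS Population
FROM stars
GROUP BY cast((u-g) as int), cast((g-r) as int),
         cast((r-i) as int), cast((i-z) as int)
HAVING count(*) < 100          -- the threshold is illustrative
ORDER BY Population;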
21
Then What?
  • The 20 Queries were a way to engage
  • Needed a spatial data library
  • Needed DB design
  • Built a website to publish the data
  • Data loading (workflow scheduler) ...
  • Pixel web service, evolved to ...
  • SkyQuery federation, evolved to ...
  • Now focused on the spatial data library.
    Conversion to OR DB (put analysis in the DB).

22
Alternate Model
  • Many sciences are becoming information
    sciences
  • Modeling systems needs new and better
    languages
  • CS modeling tools can help
  • Bio, Eco, Linguistics, ...
  • This is the process/program-centric view, rather
    than my info-centric view.

23
Outline
  • New Science
  • Working cross disciplines
  • Data Demographics and Data Handling
  • Exponential growth
  • Data Lifecycle
  • Versions
  • Data inflation
  • Year 5
  • Overprovision by 6x
  • Data Loading
  • Regression Tests
  • Statistical subset
  • Curation

24
Information Avalanche
  • In science, industry, government, ...
  • better observational instruments and
  • better simulations
  • are producing a data avalanche
  • Examples
  • BaBar grows 1 TB/day: 2/3 simulation information,
    1/3 observational information
  • CERN LHC will generate 1 GB/s ≈ 10 PB/y
  • VLBA (NRAO) generates 1 GB/s today
  • Pixar: 100 TB/movie
  • New emphasis on informatics:
  • Capturing, Organizing, Summarizing, Analyzing,
    Visualizing

Image courtesy C. Meneveau & A. Szalay @ JHU
BaBar, Stanford
PE Gene Sequencer, from http://www.genome.uci.edu/
Space Telescope
25
Q: Where will the Data Come From?
A: Sensor Applications
  • Earth Observation
  • 15 PB by 2007
  • Medical Images & Information + Health Monitoring
  • Potential 1 GB/patient/y → 1 EB/y
  • Video Monitoring
  • 1E8 video cameras @ 1E5 MBps → 10 TB/s → 100
    EB/y → filtered???
  • Airplane Engines
  • 1 GB sensor data/flight,
  • 100,000 engine hours/day
  • 30 PB/y
  • Smart Dust: ?? EB/y

http://robotics.eecs.berkeley.edu/~pister/SmartDust/
http://www-bsac.eecs.berkeley.edu/~shollar/macro_motes/macromotes.html
26
Instruments: CERN LHC
Petabytes per Year
  • Looking for the Higgs Particle
  • Sensors: 1 GB/s (≈20 PB/y)
  • Events: 100 MB/s
  • Filtered: 10 MB/s
  • Reduced: 1 MB/s
  • Data pyramid: 100 GB,
    1 TB, 100 TB, 1 PB, 10 PB
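A quick unit check on the sensor rate (the ≈20 PB/y presumably reflects
the accelerator's duty cycle, since continuous running would give a bit
more):

\[
1\,\mathrm{GB/s} \times 3.15\times 10^{7}\,\mathrm{s/y} \approx 31.5\,\mathrm{PB/y}.
\]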

27
Like all sciences, Astronomy Faces an
Information Avalanche
  • Astronomers have a few hundred TB now
  • 1 pixel (byte) / sq arc second = 4 TB
  • Multi-spectral, temporal → 1 PB
  • They mine it looking for new (kinds of) objects
    or more of interesting ones (quasars),
    density variations in 400-D space, correlations
    in 400-D space
  • Data doubles every year
  • Data is public after 1 year
  • So, 50% of the data is public
  • Same access for everyone

28
Moore's Law in Proteomics
Courtesy of Peter Berndt, Roche Center for
Medical Genomics (RCMG)
  • At the Roche Center for Medical Genomics (RCMG),
    the number of mass spectra acquired for
    proteomics has doubled every year since the first
    mass spectrometer was deployed.

R² = 0.96
29
Data Lifecycle
  • Raw data → primary data → derived data
  • Data has bugs:
  • Instrument bugs
  • Pipeline bugs
  • Data comes in versions:
  • later versions fix known bugs
  • Just like software (indeed, data is software)
  • Can't un-publish bad data.

30
Data Inflation: Data Pyramid
  • Level 2: Derived data products, 10x smaller, but
    there are many. L2 ≈ L1
  • Publish a new edition each year
  • Fixes bugs in the data.
  • Must preserve old editions
  • Creates a data pyramid
  • Store each edition:
  • 1, 2, 3, 4, ..., N ≈ N²/2 bytes
    (the sum is spelled out below)
  • Net Data Inflation: L2 + L1
  • Level 1A grows X TB/year, ≈0.4X TB/y
    compressed (level 1A in NASA terms)
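The pyramid arithmetic, spelled out (assuming edition k stores the
cumulative kR bytes, with R bytes arriving per year):

\[
\sum_{k=1}^{N} kR \;=\; \frac{N(N+1)}{2}\,R \;\approx\; \frac{N^{2}}{2}\,R
\quad\text{bytes after } N \text{ editions.}
\]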

31
The Year 5 Problem
  • Data arrives at R bytes/year
  • New Storage + Processing:
  • Without inflation, need to buy R units in year N
  • Data inflation means ≈ N²R total
  • So need to buy ≈ NR units in year N
  • Depreciate over 3 years:
  • After year 3, need to buy N²R − (N−3)²R
  • Moore's law: ≈60%/year price decline
  • Capital expense peaks at year 5
    (a sketch of the cost curve follows below)
  • See the 6x Over-Power slide next
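One way to see the peak (a sketch under stated assumptions: capacity
needed ≈ N²R, everything bought 3 years earlier must be replaced, and
the unit price p₀ declines by a factor f each year):

\[
C_N \;\approx\; \bigl(N^{2} - (N-3)^{2}\bigr)\,R\,p_{0}\,f^{\,N}
\;=\; (6N-9)\,R\,p_{0}\,f^{\,N},\qquad N > 3,
\]

so spending grows for the first few years while the replacement burden
ramps up faster than prices fall, and then Moore's law wins; with the
slide's numbers the peak lands in the middle of the project (year 5 on
the slide), depending on the exact decline rate.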

32
6x Over-Power Ratio
  • If you think you need X raw capacity, then you
    probably need 6X:
  • Reprocessing
  • Backup copies
  • Versions
  • Hardware is cheap; your time is precious.

33
Data Loading
  • Data from outside:
  • Is full of bugs
  • Is not in your format
  • Advice:
  • Get it in a universal format (e.g., Unicode
    CSV)
  • Create a blood-brain barrier: quarantine in a
    load database
  • Scrub the data:
  • Cross-check everything you can
  • Check data statistics for sanity
  • Reject or repair bad data
  • Generate detailed bug reports (needed to send
    rejections upstream)
  • Expect to reload many times. Automate everything!
    (a sketch of the pattern follows below)
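A hypothetical sketch of the quarantine-then-scrub pattern in SQL; the
database, table, file, and column names are illustrative, not the
actual SDSS loader schema:

-- 1. Bulk-load the incoming CSV into a staging (quarantine) database.
BULK INSERT LoadDB.dbo.PhotoStage
FROM 'c:\incoming\run1234.csv'        -- illustrative path
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- 2. Cross-check: park out-of-range rows for a detailed bug report.
SELECT * INTO LoadDB.dbo.Rejects
FROM LoadDB.dbo.PhotoStage
WHERE r NOT BETWEEN 0 AND 40;         -- sanity bound on r magnitude

-- 3. Publish only the rows that pass every check.
INSERT INTO SkyDB.dbo.PhotoObj
SELECT * FROM LoadDB.dbo.PhotoStage s
WHERE NOT EXISTS
      (SELECT 1 FROM LoadDB.dbo.Rejects x WHERE x.objID = s.objID);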

34
Performance Prediction: Regression
  • The database grows exponentially
  • Set up response-time requirements:
  • For load
  • For access
  • Define a workload to measure each
  • Run it regularly to detect anomalies
  • SDSS uses:
  • one week to reload
  • 20 queries with responses of 10 sec to 10 min.

35
Data Subsets For Science and Development
  • Offer 1 GB, 10 GB, ..., full subsets
  • Wonderful tool for you:
  • Debug algorithms
  • Good tool for scientists:
  • Experiment on a subset
  • Not for needle-in-haystack, but good for global
    stats
  • Challenge: how to make statistically valid subsets?
    (a naive cut is sketched below)
  • Seems domain specific
  • Seems problem specific
  • But there must be some general concepts.
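One simple way to cut a roughly 1% subset in SQL (a sketch with
illustrative names; as the slide says, a genuinely statistically valid
sample is domain and problem specific):

-- Pseudo-random 1% slice keyed on the object ID.
SELECT * INTO MyDB.dbo.Stars1Percent
FROM SkyDB.dbo.Stars
WHERE objID % 100 = 0;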

36
Outline
  • New Science
  • Working cross disciplines
  • Data Demographics and Data Handling
  • Curation
  • Problem statement
  • Economics
  • Astro as a case in point

37
Problem Statement
  • Once published, scientific data needs to be
    available forever, so that the science can be
    reproduced/extended.
  • What does that mean?
  • Data can be characterized as:
  • Primary: data that could not be reproduced
  • Derived: data that could be derived from primary
    data
  • Meta-data (how the data was collected/derived) is
    primary:
  • Must be preserved
  • Includes design docs, software, email, pubs,
    personal notes, teleconferences, ...

NASA level 0
38
The Core Problem: No Economic Model
  • The archive user is not yet born. How can he
    pay you to curate the data?
  • The scientist gathered data for his own
    purpose. Why should he pay (invest time) for your
    needs?
  • Answer to both: that's the scientific method
  • Curating data (documenting the design, the
    acquisition, and the processing) is difficult, and
    there is little reward for doing it. Results are
    rewarded, not the process of getting them.
  • Storage/archiving is NOT the problem (it's almost
    free)
  • Curating/publishing is expensive. MAKE IT
    EASIER!!! (lower the cost)

39
Publishing Data
Roles        Authors         Publishers        Curators           Archives          Consumers
Traditional  Scientists      Journals          Libraries          Archives          Scientists
Emerging     Collaborations  Project web site  Data+Doc Archives  Digital Archives  Scientists
40
Changing Roles
  • Exponential growth:
  • Projects last at least 3-5 years
  • Project data is online during the project
    lifetime.
  • Data sent to a central archive only at the end of
    the project
  • At any instant, only 1/8 of the data is in central
    archives
  • New project responsibilities:
  • Becoming Publishers and Curators
  • Larger fraction of budget spent on software
  • Standards are needed:
  • Easier data interchange, fewer tools
  • Templates are needed:
  • Much development is duplicated, wasted

41
Schema (aka metadata)
  • Everyone starts with the same schema:
    <stuff/>. Then they start arguing about semantics.
  • Virtual Observatory http://www.ivoa.net/
  • Metadata based on Dublin Core:
    http://www.ivoa.net/Documents/latest/RM.html
  • Universal Content Descriptors (UCD)
    http://vizier.u-strasbg.fr/doc/UCD.htx
    Captures quantitative concepts and their units.
    Reduced from 100,000 tables in the literature to
    1,000 terms
  • VOtable: a schema for answers to questions
    http://www.us-vo.org/VOTable/
  • Common Queries: Cone Search and Simple Image
    Access Protocol, SQL
  • Registry http://www.ivoa.net/Documents/latest/RMExp.html
    is still a work in progress.

42
What SDSS is Doing: Capture the Bits (preserve
the primary data)
  • Best-effort documenting of data and
    process. Documents and data are hyperlinked.
  • Publishing data, often by UPS (5 TB today, and so
    $5K for a copy)
  • Replicating data on 3 continents.
  • EVERYTHING online (tape data is dead data)
  • Archiving all email, discussions, ...
  • Keeping all web logs and query logs.
  • Now we need to figure out how to organize/search
    all this metadata.

43
The OAIS model
(Diagram: Producer → Ingest → Archive → Access → Consumer,
with Data Management and Administration functions inside the archive)
44
Ingest Challenges
  • Push vs Pull
  • What are the gold standards?
  • Automatic indexing, annotation, provenance
  • Auto-migration (format conversion)
  • Version management
  • How to capture time-varying sources?
  • How to capture dark matter (encapsulated data)?
  • Bits don't rust, but applications do.

45
Access Challenges
  • Archived information rusts if it is not
    accessed. Access is essential.
  • Access costs money: who pays?
  • Access sometimes uses IP: who pays?
  • There are also technical problems:
  • Access formats differ from the storage
    formats.
  • Migration?
  • Emulation?
  • Gold standards?

46
Archive Challenges
  • Cost of administering storage:
  • Presently 10x to 100x the hardware cost.
  • Resist attack: geographic diversity
  • At 1 GBps it takes 12 days to move a PB
    (see the arithmetic below)
  • Store it in two (or more) places online (on
    disk): a geo-plex
  • Scrub it continuously (look for errors)
  • On failure:
  • use the other copy until the failure is repaired,
  • refresh the lost copy from the safe copy.
  • Can organize the copies differently (e.g.,
    one by time, one by space)
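Checking that figure:

\[
\frac{10^{15}\,\mathrm{B}}{10^{9}\,\mathrm{B/s}} = 10^{6}\,\mathrm{s} \approx 11.6\ \text{days}.
\]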

47
The Midrange Paradox
  • Large archives are curated:
  • Curated by projects
  • Small archives are appendices to papers:
  • Curated by journals
  • Medium-sized archives are in limbo:
  • No place to register them
  • No one has a mandate to preserve them
  • Examples:
  • Your website with your data files
  • Small-scale science projects
  • GenBank gets the sequence, but not the software
    or analysis that produced it.

48
Summary
  • New Science
  • X-Info for all fields X
  • WWT as an example
  • Big Picture
  • Puzzle
  • Hitting the wall
  • Needle in haystack
  • Move queries to data
  • Working cross disciplines
  • How to help?
  • 20 questions
  • WWT example
  • Alt CS Process Models
  • Data Demographics
  • Exponential growth
  • Data Lifecycle
  • Versions
  • Data inflation
  • Year 5 is peak cost
  • Overprovision by 6x
  • Data Loading
  • Regression Tests
  • Statistical subset
  • Curation
  • Problem statement
  • Economics
  • Astro as a case in point

49
Call to Action
  • X-info is emerging.
  • Computer Scientists can help in many ways.
  • Tools
  • Concepts
  • Provide technology consulting to the community
  • There are great CS research problems here
  • Modeling
  • Analysis
  • Visualization
  • Architecture

50
References: http://SkyServer.SDSS.org/, http://research.microsoft.com/pubs/,
http://research.microsoft.com/~Gray/SDSS/ (download personal SkyServer)
  • Extending the SDSS Batch Query System to the
    National Virtual Observatory Grid, M. A.
    Nieto-Santisteban, W. O'Mullane, J. Gray, N. Li,
    T. Budavari, A. S. Szalay, A. R. Thakar,
    MSR-TR-2004-12, Feb. 2004
  • Scientific Data Federation, J. Gray, A. S.
    Szalay, in The Grid 2: Blueprint for a New
    Computing Infrastructure, I. Foster, C.
    Kesselman, eds., Morgan Kaufmann, 2003, pp 95-108
  • Data Mining the SDSS SkyServer Database, J.
    Gray, A. S. Szalay, A. Thakar, P. Kunszt, C.
    Stoughton, D. Slutz, J. vandenBerg, Distributed
    Data & Structures 4: Records of the 4th
    International Meeting, pp 189-210, W. Litwin, G.
    Levy, eds., Carleton Scientific, 2003, ISBN
    1-894145-13-5; also MSR-TR-2002-01, Jan. 2002
  • Petabyte Scale Data Mining: Dream or Reality?,
    A. S. Szalay, J. Gray, J. vandenBerg,
    SPIE Astronomy Telescopes and Instruments, 22-28
    August 2002, Waikoloa, Hawaii, MSR-TR-2002-84
  • Online Scientific Data Curation, Publication,
    and Archiving, J. Gray, A. S. Szalay, A. R.
    Thakar, C. Stoughton, J. vandenBerg, SPIE
    Astronomy Telescopes and Instruments, 22-28
    August 2002, Waikoloa, Hawaii, MSR-TR-2002-74
  • The World Wide Telescope: An Archetype for Online
    Science, J. Gray, A. Szalay, CACM, Vol. 45, No.
    11, pp 50-54, Nov. 2002, MSR-TR-2002-75
  • The SDSS SkyServer: Public Access To The Sloan
    Digital Sky Server Data, A. S. Szalay, J. Gray,
    A. Thakar, P. Z. Kunszt, T. Malik, J. Raddick, C.
    Stoughton, J. vandenBerg, ACM SIGMOD 2002, pp
    570-581, MSR-TR-2001-104
  • The World Wide Telescope, A. S. Szalay, J. Gray,
    Science, V. 293, pp 2037-2038, 14 Sept 2001,
    MSR-TR-2001-77
  • Designing and Mining Multi-Terabyte Astronomy
    Archives: The Sloan Digital Sky Survey, A.
    Szalay, P. Kunszt, A. Thakar, J. Gray, D. Slutz,
    ACM SIGMOD 2000, June 1999, MSR-TR-99-30

51
How to Publish Data: Web Services
  • Web SERVER:
  • Given a URL + parameters
  • Returns a web page (often dynamic)
  • Web SERVICE:
  • Given an XML document (SOAP msg)
  • Returns an XML document (with schema)
  • Tools make this look like an RPC:
  • F(x,y,z) returns (u, v, w)
  • Distributed objects for the web
  • + naming, discovery, security, ...
  • Internet-scale distributed computing

(Diagram: your program → http → Web Server → web page;
your program → SOAP → Web Service → object in XML,
with data in your address space)
52
Global Federations
  • Massive datasets live near their owners:
  • Near the instrument's software pipeline
  • Near the applications
  • Near the data knowledge and curation
  • Each archive publishes a (web) service:
  • Schema documents the data
  • Methods on objects (queries)
  • Scientists get personalized extracts
  • Uniform access to multiple archives:
  • A common global schema

Federation
53
The Sloan Digital Sky Survey
  • Goal:
  • Create the most detailed map of the Northern Sky
    to date
  • 2.5m telescope
  • 3 degree field of view
  • Two surveys in one:
  • 5-color images of ¼ of the sky
  • Spectroscopic survey of a million galaxies and
    quasars
  • Very high data volume:
  • 40 Terabytes of raw data
  • 10 Terabytes processed
  • All data public

The University of Chicago Princeton
University The Johns Hopkins University The
University of Washington New Mexico State
University University of Pittsburgh Fermi
National Accelerator Laboratory US Naval
Observatory The Japanese Participation Group
The Institute for Advanced Study Max Planck
Inst, Heidelberg Sloan Foundation, NSF, DOE,
NASA
54
SkyServer
  • A multi-terabyte database
  • An educational website:
  • More than 50 hours of educational exercises
  • Background on astronomy
  • Tutorials and documentation
  • Searchable web pages
  • Easy astronomer access to SDSS data
  • Prototype eScience lab:
  • Interactive visual tools for data exploration

http://skyserver.sdss.org/
55
Demo: SkyServer
  • atlas
  • education project
  • Mouse in pixel space
  • Explore an object (record space)
  • Explore literature
  • Explore a set
  • Pose a new question

56
SkyQuery (http://skyquery.net/)
  • Distributed query tool using a set of web
    services
  • Many astronomy archives from Pasadena, Chicago,
    Baltimore, Cambridge (England)
  • Has grown from 4 to 15 archives,
    now becoming an international standard
  • Allows queries like:

-- Cross-match SDSS and 2MASS objects whose positions agree
-- within 3.5 arcseconds, inside a given sky area, keeping
-- type 3 objects whose i - J color exceeds 2.
SELECT o.objId, o.r, o.type, t.objId
FROM   SDSS:PhotoPrimary o, TWOMASS:PhotoPrimary t
WHERE  XMATCH(o, t) < 3.5
  AND  AREA(181.3, -0.76, 6.5)
  AND  o.type = 3
  AND  (o.i - t.m_j) > 2
57
Demo: SkyQuery Structure
  • The Portal:
  • Plans the query (2-phase)
  • Integrates the answers
  • Is itself a web service
  • Each SkyNode publishes:
  • a Schema Web Service
  • a Database Web Service

58
MyDB: eScience Workbench
  • Prototype of bringing the analysis to the data:
  • Everybody gets a workspace (database)
  • Executes analysis at the data
  • Stores intermediate results there
  • Long queries run in batch
  • Results shared within groups
  • Only fetch the final results
    (a sketch of the pattern follows below)
  • Extremely successful: matches work patterns
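A sketch of the MyDB work pattern in SQL (the table and column names
are illustrative): run a big query in batch, park the intermediate
result in your personal database, and fetch only the small final
answer.

-- The intermediate result stays server-side, next to the data.
SELECT objID, u, g, r, i, z
INTO   MyDB.dbo.RedCandidates
FROM   SkyDB.dbo.Galaxies
WHERE  (u - g) > 2.2;

-- Later: refine in place; only this small aggregate crosses the wire.
SELECT count(*) AS n, avg(r) AS mean_r
FROM   MyDB.dbo.RedCandidates;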

59
National Center for Biotechnology Information (NCBI):
A Good Example
  • PubMed
  • Abstracts and books and ...
  • GenBank
  • All gene sequences deposited
  • BLAST and other searches
  • Website to explore data and literature
  • Entrez
  • unifies many databases with literature (books,
    journals, ...)
  • Organizes the data

60
Publishing Data
  • Exponential growth:
  • Projects last at least 3-5 years
  • Data sent upwards only at the end of the project
  • Data will never be centralized
  • More responsibility on projects:
  • Becoming Publishers and Curators
  • Often no explicit funding to do this (this must
    change)
  • Data will reside with projects:
  • Analyses must be close to the data (see later)
  • Data cross-correlated with Literature and Metadata

61
Making Discoveries
  • Where are discoveries made?
  • At the edges and boundaries
  • Going deeper, collecting more data, using more
    colors.
  • Metcalfe's law: quadratic benefit
  • Utility of computer networks grows as the number
    of possible connections: O(N²)
  • Data Federation: quadratic benefit
  • A federation of N archives has utility O(N²)
  • Possibilities for new discoveries grow as O(N²)
  • Current sky surveys have proven this:
  • Very early discoveries from SDSS, 2MASS, DPOSS
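The count behind the quadratic claim: N nodes (or N federated archives)
admit

\[
\binom{N}{2} = \frac{N(N-1)}{2} \in O(N^{2})
\]

pairwise connections (or pairwise cross-matches between archives).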

62
The OAIS model
(Diagram: Producer → Ingest → Archive → Access → Consumer,
with Data Management and Administration functions inside the archive)
63
Jim's Model of Library Science?
  • Alexandria
  • Gutenberg
  • (Melvil) Dewey Decimal
  • MARC (Henriette Avram)
  • Dublin Core

Yes, I know there have been other things.
64
Dublin Core
  • Elements
  • Title
  • Creator
  • Subject
  • Description
  • Publisher
  • Contributor
  • Date
  • Type
  • Format
  • Identifier
  • Source
  • Language
  • Coverage
  • Rights
  • Elements
  • Audience
  • Alternative
  • TableOfContents
  • Abstract
  • Created
  • Valid
  • Available
  • Issued
  • Modified
  • Extent
  • Medium
  • IsVersionOf
  • HasVersion
  • IsReplacedBy
  • Replaces
  • IsRequiredBy
  • Requires
  • IsPartOf
  • Encoding
  • LCSH (Library of Congress Subject Headings)
  • MESH (Medical Subject Headings)
  • DDC (Dewey Decimal Classification)
  • LCC (Library of Congress Classification)
  • UDC (Universal Decimal Classification)
  • DCMItype (Dublin Core Meta Type)
  • IMT (Internet Media Type)
  • ISO639-2 (ISO language names)
  • RFC1766 (Internet Language tags)
  • URI (Uniform Resource Identifier)
  • Point (DCMI spatial point)
  • ISO3166 (ISO country codes)
  • Box (DCMI rectangular area)
  • TGN (Getty Thesaurus of Geo Names)
  • Period (DCMI time interval)
  • W3CDTF (W3C date/time)
  • RFC3066 (Language dialects)

65
Ingest Challenges
  • Push vs Pull
  • What are the gold standards?
  • Automatic indexing, annotation, provenance
  • Auto-migration (format conversion)
  • Version management
  • How to capture time-varying sources?
  • How to capture dark matter (encapsulated data)?
  • Bits don't rust, but applications do.

66
Access Challenges
  • Archived information rusts if it is not
    accessed. Access is essential.
  • Access costs money: who pays?
  • Access sometimes uses IP: who pays?
  • There are also technical problems:
  • Access formats differ from the storage
    formats.
  • Migration?
  • Emulation?
  • Gold standards?

67
Archive Challenges
  • Cost of administering storage:
  • Presently 10x to 100x the hardware cost.
  • Resist attack: geographic diversity
  • At 1 GBps it takes 12 days to move a PB
  • Store it in two (or more) places online (on
    disk): a geo-plex
  • Scrub it continuously (look for errors)
  • On failure:
  • use the other copy until the failure is repaired,
  • refresh the lost copy from the safe copy.
  • Can organize the copies differently (e.g.,
    one by time, one by space)

68
The Midrange Paradox
  • Large archives are curated:
  • Curated by projects
  • Small archives are appendices to papers:
  • Curated by journals
  • Medium-sized archives are in limbo:
  • No place to register them
  • No one has a mandate to preserve them
  • Examples:
  • Your website with your data files
  • Small-scale science projects
  • GenBank gets the sequence, but not the software
    or analysis that produced it.