Compact Binary Coalescence Search in the LIGO Scientific Collaboration - PowerPoint PPT Presentation

Provided by: AlanWei7

1
  • Compact Binary Coalescence Search in the LIGO
    Scientific Collaboration

Colliding Black Holes, National Center for
Supercomputing Applications
2
Outline
  • The signal we are trying to detect
  • How far we can detect
  • How many sources we can detect
  • Establishing an upper limit on the sources
  • How to suppress noise
  • The analysis pipeline
  • Detection confidence
  • Parameter estimation
  • Software

3
Compact Binary Coalescence
  • LIGO, GEO, Virgo and TAMA search for GW signals
    from the last few minutes of coalescence of a
    compact binary system with component masses
    between 1 and 100 solar masses, within a few
    100 Mpc of the Earth.
  • Inspiral → Merger → Ringdown

4
The inspiral signal in the detector
5
Inspiral signal in t-f
  • time-frequency spectrogram
    q-scan

6
Signal parameters
  • For non-spinning binaries, we have 7 parameters
    that affect the amplitude of the signal but not
    its general form (extrinsic parameters):
  • source sky location (RA, dec) or, in the detector
    frame, (θ, φ)
  • source physical distance r (sometimes written d)
  • orientation of the orbital plane: inclination
    angle ι and polarization ψ
  • orbital phase at coalescence φ0, and time of
    coalescence tc
  • And two parameters that do affect the form of the
    signal (the evolution of its amplitude and phase:
    intrinsic parameters):
  • (m1, m2),
  • or chirp mass Mc = η^(3/5) Mtot = μ^(3/5) Mtot^(2/5)
    and symmetric mass ratio η = m1 m2 / Mtot²
  • or total mass Mtot = m1 + m2 and reduced mass
    μ = η Mtot

7
Signal parameters
  • For spinning binaries, add three more parameters
    for the spin vector of each component
    relative to the orbital plane of the binary
    (total 15 parameters).
  • Late in the inspiral, the orbits have
    circularized due to radiation back-reaction, but
    early on, orbits are in general elliptical: add
    initial eccentricity e, and two angles for the
    orientation of the semi-major axis.
  • Current LIGO searches use non-spinning templates;
    even in a 2-D intrinsic parameter space, we have
    tens of thousands of templates.
  • Much effort is now going into searches using
    spinning templates, but tricks are required to
    make the problem computationally manageable.
  • BCV and BCV-spin detection template families
  • Physical template family developed by Pan et al.

8
Signal templates in frequency space, in
Stationary Phase Approximation (SPA)
Normalization
Effective distance
9
Evolution of Binary System
  • This is just the inspiral phase, well predicted
    by post-Newtonian perturbation theory, expanding
    in x = (v/c)²
  • What about the merger and ringdown? Lots of GWs!
  • In the last few years, breakthroughs in numerical
    relativity are giving us quantitative results
    on these phases!

10
Evolution of amplitude and frequency through
merger and ringdown
Comparison of analytic and numerical coalescing
binary waveforms: nonspinning case.
Pan, Buonanno, et al., arXiv:0704.1964v2 (2007)
11
Energy radiated through IMR
Inspiral, merger and ring-down of equal-mass
black-hole binaries. Buonanno, Cook, Pretorius,
arXiv:gr-qc/0610122v2 (2007)
12
Inspiral-Merger-Ringdown
  • NR waveforms covering the full parameter space
    are still under development, and confidence in
    their accuracy is still being established
  • Generation of NR waveforms for a given set of
    parameters is slow, and generating families of
    templates covering the intrinsic parameter space
    would take too long
  • Much work is in progress to match NR waveforms
    with analytical models (PN, EOB, ringdowns) to
    enable hybrid IMR waveforms for template banks
    and testing of detection pipelines
  • Meanwhile, the LSC has chosen to search for the
    inspiral, merger, and ringdown phases in separate
    search pipelines
  • Inspiral and ringdown searches use matched
    filtering with template banks
  • The merger phase is covered by the burst search
  • Bringing them together in IMR coincident
    trigger analyses is only now under development,
    and not yet in place.
  • Today, we focus on the inspiral search analysis.

13
Science Runs: A Measure of Progress
NS-NS binary inspiral range (reference distances:
Milky Way, Andromeda, Virgo Cluster):
  • 4/02 E8: 5 kpc
  • 10/02 S1: 100 kpc
  • 4/03 S2: 0.9 Mpc
  • 11/03 S3: 3 Mpc
  • Design: 18 Mpc
14
Best Performance to Date
Currently, all three detectors are at design
sensitivity from 60 Hz up!
h ≈ 2×10⁻²³ /√Hz ↔ δx ≈ 8×10⁻²⁰ m/√Hz
15
Inspiral horizon distance
  • Much of the accumulated SNR is in the last few
    cycles, so the horizon distance depends on where
    we (our templates) take the inspiral phase to end
    (i.e., at what component separation r or velocity
    v/c = (GM/c²r)^(1/2))
  • Innermost stable circular orbit: r = 6M
  • Effective-one-body (EOB) light-ring orbit:
    r = 2.8M (circular orbit of photons in the
    Schwarzschild metric)
  • Where you think the perturbation expansion in v/c
    breaks down
  • Geometric units: Rsun = GMsun/c² = 1477 m;
    Tsun = GMsun/c³ = 4.9 µs
  • GW frequency at r = bM: fGW = 2 forb =
    (GM/r³)^(1/2)/π (Kepler)
    = c³/(π GM b^(3/2))

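As a sanity check, the Kepler relation on the previous slide can be evaluated directly. This is a minimal sketch, not slide material: the constants and the 1.4+1.4 Msun example values are standard numbers supplied here for illustration.

```python
import math

# GW frequency when the inspiral is truncated at separation r = b*M,
# using the slide's relation f_GW = 2 f_orb = c^3 / (pi * G * M * b^(3/2)).
G    = 6.674e-11      # m^3 kg^-1 s^-2
C    = 2.998e8        # m/s
MSUN = 1.989e30       # kg

def f_gw_cutoff(m_total_msun, b):
    """GW frequency (Hz) at separation r = b*M for total mass M (in Msun)."""
    M = m_total_msun * MSUN
    return C**3 / (math.pi * G * M * b**1.5)

# ISCO (b = 6) for a 1.4+1.4 Msun binary neutron star: ~1570 Hz
f_isco = f_gw_cutoff(2.8, 6.0)
# EOB light-ring cutoff (b = 2.8) ends at a higher frequency
f_lr = f_gw_cutoff(2.8, 2.8)
```

The inverse scaling with total mass is why high-mass systems leave the band so quickly.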
16
Signal templates in frequency space, in
Stationary Phase Approximation (SPA)
Normalization
Effective distance
17
Inspiral Horizon Distance
  • Distance to an optimally located and oriented
    1.4+1.4 solar mass BNS, at SNR = 8, using
    templates that end at ISCO

S3 Science Run Oct 31, 2003 - Jan 9, 2004
18
Inspiral Horizon Distance
  • Distance to an optimally oriented 1.4+1.4 solar
    mass BNS at SNR = 8

S4 Science Run Feb 22, 2005 - March 23, 2005
19
Inspiral Horizon Distance
  • Distance to an optimally oriented 1.4+1.4 solar
    mass BNS at SNR = 8

First Year S5 Science Run Nov 4, 2005 - Nov 14,
2006
20
Inspiral horizon distance
  • SNR depends strongly on the ending frequency
    (relative to the noise-curve bucket), which in
    turn depends strongly on the mass of the system

21
Horizon Distance vs. Mass S5
Binary Neutron Stars
22
Inspiral duration
  • In-band duration of the inspiral, in seconds and
    in cycles, also depends strongly on mass:
    t = (5 M Tsun / (256 η)) (π M Tsun flow)^(−8/3)

Higher-order effects!
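The leading-order duration formula above is easy to evaluate. A hedged sketch (higher-order corrections and the choice of ending frequency shift the quoted numbers, so this Newtonian-order estimate will not match the slide's figures exactly):

```python
import math

T_SUN = 4.925e-6   # GM_sun / c^3 in seconds

def inspiral_duration(m1, m2, f_low):
    """Leading-order (Newtonian) in-band duration in seconds, from
    t = (5 M T_sun / (256 eta)) * (pi M T_sun f_low)^(-8/3)."""
    m_tot = m1 + m2
    eta = m1 * m2 / m_tot**2          # symmetric mass ratio
    m_sec = m_tot * T_SUN             # total mass expressed in seconds
    return (5.0 * m_sec / (256.0 * eta)) * (math.pi * m_sec * f_low) ** (-8.0 / 3.0)

t_initial  = inspiral_duration(1.4, 1.4, 40.0)   # Initial LIGO band: ~25 s
t_advanced = inspiral_duration(1.4, 1.4, 15.0)   # Advanced LIGO band: ~340 s
```

The steep (−8/3) power of f_low is what makes lowering the seismic wall so expensive in template length.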
23
Inspiral duration
  • Initial LIGO (flow = 40 Hz): BNS ~15 s; BBH at
    100 Msun a few msec, < 3 cycles: a burst!
  • Advanced LIGO (flow = 15 Hz): BNS ~300 s or
    more, requiring new filtering techniques
    (Multi-Band Template Analysis, MBTA)

Higher-order effects!
24
Matched Filtering
  • Assume the signal we are searching for is known,
    up to unknown arrival time, constant phase and
    amplitude
  • Construct matched filter statistic for this signal

25
Matched Filtering
  • Choose templates to be normalized to strain at 1
    Mpc
  • The low-frequency cutoff flow is determined by
    the detector noise curve; fmax is set by the
    template
  • Effective distance to signal is given by

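The frequency-domain matched filter described on the last two slides can be sketched in a few lines. This is a toy illustration, not the LSC production code: the normalization conventions, sampling parameters, and the sinusoidal "chirp" stand-in are all assumptions made for the demo.

```python
import numpy as np

def matched_filter_snr(data, template, psd, dt):
    """Complex SNR series z(t) = 4 Int d~(f) h~*(f)/Sn(f) e^{2pi i f t} df,
    normalized by sigma = sqrt(4 Int |h~(f)|^2 / Sn(f) df)."""
    n = len(data)
    df = 1.0 / (n * dt)
    d_f = np.fft.rfft(data) * dt
    h_f = np.fft.rfft(template) * dt
    integrand = d_f * np.conj(h_f) / psd
    # np.fft.irfft includes a 1/n factor; multiply by n*df to approximate
    # the continuous inverse transform
    z = 4.0 * np.fft.irfft(integrand, n) * n * df
    sigma = np.sqrt(4.0 * float(np.sum(np.abs(h_f) ** 2 / psd)) * df)
    return z / sigma

# Toy demo: flat (white) PSD, a windowed sinusoid injected at sample 3000.
dt = 1.0 / 1024
n = 8192
t = np.arange(512) * dt
wave = np.sin(2 * np.pi * 100 * t) * np.hanning(512)
template = np.zeros(n); template[:512] = wave
data = np.zeros(n); data[3000:3512] += wave
psd = np.ones(n // 2 + 1)
snr = matched_filter_snr(data, template, psd, dt)
peak = int(np.argmax(np.abs(snr)))   # peaks at the injection sample
```

Thresholding on peaks of |snr| is exactly the trigger generation of the next slide.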
26
Triggers: threshold on peaks in the matched-filter
output time series
27
Mismatch
  • What if the template is incorrect?
  • Loss in signal to noise ratio is given by the
    mismatch

28
Mismatch and Event Rate
  • Any mismatch between signal and template reduces
    the distance to which we can detect inspiral
    signals
  • Loss in signal-to-noise ratio is loss in detector
    range
  • Loss in event rate (Loss in range)3
  • We must be careful that the mismatch between the
    signal and our templates does not unacceptably
    reduce our rate

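The slide's cubic scaling makes a quick worked example worthwhile (the 3% figure matches the bank construction criterion used elsewhere in this talk):

```python
# Event rate scales as (detection range)^3, and fractional SNR loss
# equals fractional range loss.
mismatch = 0.03                        # 3% maximum signal/template mismatch
range_loss = mismatch
rate_fraction_lost = 1 - (1 - range_loss) ** 3
# a 3% mismatch costs almost 9% of the event rate
```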
29
Mismatch for Low Mass Signals
30
Inspiral Template Banks
  • To search for signals in the mass region of
    interest, we must construct a template bank
  • Lay down a grid of templates so that the loss in
    SNR between any signal in the space and the
    nearest template is no greater than 3%

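The placement criterion above can be illustrated with a toy 1-D metric bank (real inspiral banks are 2-D with a position-dependent metric; the metric coefficient g and parameter range below are invented for the demo). If mismatch ≈ g(Δx)², spacing templates by 2√(0.03/g) puts the worst-case (midpoint) signal exactly at 3% mismatch:

```python
import math

def place_templates(x_min, x_max, g, max_mismatch=0.03):
    """1-D metric placement: mismatch model m(x, x0) = g*(x - x0)^2."""
    dx = 2.0 * math.sqrt(max_mismatch / g)   # midpoint mismatch = max_mismatch
    n = int(math.ceil((x_max - x_min) / dx)) + 1
    return [x_min + i * dx for i in range(n)]

g = 25.0                                     # toy metric coefficient
bank = place_templates(0.0, 10.0, g)
# worst-case signal sits midway between neighbouring templates:
worst = g * (0.5 * (bank[1] - bank[0])) ** 2
```

The bank size grows as the square root of the metric density per dimension, which is why tens of thousands of templates are needed in 2-D.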
31
Overview of S1 - S4 Searches
[Figure: the (m1, m2) mass plane (1-150 Msun),
showing regions covered by the BNS S1-S4 PN search,
the NS/BH S3 search (spin is important: detection
templates), the BBH S2-S4 detection-template
search, and the S4 ringdown search]
32
Overview of S5 Searches
[Figure: the (m1, m2) mass plane (1-150 Msun),
showing regions covered by PN templates, EOB
templates, ringdowns, and the burst search]
33
Astrophysical source distribution
  • Our primary goal is to detect GWs from compact
    binary coalescences and study the properties of
    individual systems.
  • Once we observe many such systems, we wish to
    constrain the astrophysical source distribution.
  • Until we make detections, we wish to bound the
    CBC rate in the universe.
  • To do this, we need a model of the astrophysical
    source distribution: spatial distribution and
    mass distribution.

34
Astrophysical source distribution
  • Population synthesis provides limited guidance
  • models of stellar formation and evolution, and
    the formation and evolution of compact binaries,
    contain many uncertainties.
  • The only real observational constraints on these
    models come from the handful of relativistic
    pulsar binary systems (BNS) observed in our
    galaxy.
  • There are essentially no observational
    constraints on systems containing 10 or 100 solar
    mass black holes, yet these are some of the most
    promising sources for LIGO!
  • The astrophysical distribution of CB mass beyond
    BNS is hardly constrained at all, so we choose to
    measure the rate as a function of CB total mass.

35
Binary Neutron Star Inspiral Rate Estimates
  • Based on observed systems, or on population
    synthesis Monte Carlo
  • Kalogera et al., 2004 ApJ 601, L179
  • Statistical analysis of the 3 known systems with
    short merger times
  • Simulate populations of these 3 types
  • Account for survey selection effects

For the reference population model (Bayesian
95% confidence): Milky Way rate 180 (+477, −144)
per Myr; LIGO design 0.015-0.275 per year;
Advanced LIGO 80-1500 per year. Binary black
holes, BH-NS: no known systems; must Monte Carlo
36
Source Distribution Beyond the MW
  • Pop. synth. and general astrophysical wisdom say
    compact binary systems exist in galaxies,
    specifically young galaxies with lots of star
    formation (spirals); less so for older galaxies
    like ellipticals.
  • Logic (as far as I understand it): CBCs
    represent (one path for) the death of stars; in a
    steady-state situation, the rate should be
    proportional to the stellar birth rate.
  • Star-birth involves young, massive, hot stars,
    emitting blue light. Hence, the CBC rate is, in
    this simplest model, proportional to blue-light
    luminosity (Phinney, 1991).

37
Astrophysical source distribution
  • This can't be strictly true: the time scale for
    coalescence can be of the same order as the age
    of the universe. Some component of the CBC rate
    must be proportional to the total number of stars
    (mass), not the stellar birth rate. Older
    galaxies (e.g., ellipticals) must contribute.
  • Nonetheless, we stick with the simplest model:
    CBC rate is proportional to blue-light
    luminosity.
  • Our only astrophysical constraints on CBC rate
    are from BNS progenitors in the Milky Way, so we
    can estimate the rate per Milky Way Equivalent
    Galaxy (MWEG).

38
Astrophysical source distribution
  • Problem: we don't know the blue-light luminosity
    of the MW very well! It is directly measurable
    for our sun (Lsun-BL) and for other galaxies, but
    only estimated for the MW (MWEG = 1.5-2.0 × 10^10
    Lsun-BL, with a best estimate of 1.7 × 10^10
    Lsun-BL = 1.7 L10).
  • In LIGO S2, we were mainly sensitive to the MW,
    so normalizing the astrophysical rate to MWEG was
    appropriate.
  • By S3 we are touching M31 and beyond. But
    although the MW is overdense with sources, the
    empty space between the MW and M31, and between
    our local cluster and the Virgo cluster, is
    underdense.
  • Fortunately, it's not distance that matters so
    much as effective distance: sources that are not
    optimally located (detector zenith) or oriented
    (face-on) have an effective distance that is
    always larger than the physical distance. This
    smooths out the fluctuations in source density.

39
Astrophysical source distribution
  • By S4 and S5, our source population is dominated
    by galaxies beyond the MW, so we have switched to
    normalizing the CBC rate to L10 = (1/1.7) MWEG.
  • Beyond about 20 Mpc, the source density is
    more-or-less uniform at ρ ≈ 0.01 L10 per Mpc³,
    and the number of sources grows like the horizon
    distance cubed.
  • The LSC now quotes rate upper limits in units of
    1 / L10 / yr.
  • Some of the most significant systematic errors
    associated with the search upper limits are due
    to uncertainties in the astrophysical source
    distribution: nearby source distances and
    luminosities.
  • By S5, we will be well into the uniform regime,
    where we only need to know ρ ≈ 0.01 L10 / Mpc³
    with, say, 10% error.

40
Catalog of nearby galaxies
41
(No Transcript)
42
(No Transcript)
43
Loudest event statistic
  • This is a novel, but rather natural method for
    establishing a Bayesian confidence interval or
    upper limit on the rate for a process
    characterized by events with a loudness (eg,
    coincident inspiral triggers characterized by a
    combined SNR).
  • Loudness measures how signal-like an event is:
    loud events are more likely to be signal than
    noise (background). It is also referred to as a
    detection statistic ρ.
  • It can be more complicated than an SNR: for
    example, it could contain information about how
    signal-like the event is, e.g., from a chisq
    test.

44
Loudest event statistic
  • Traditionally, one sets a fixed threshold on
    event loudness ρ*: louder events are called
    signal (although they may be contaminated by
    background) and less loud events are ignored as
    background.
  • One then assumes a certain parent (source)
    distribution, and uses Monte Carlo simulation to
    estimate the probability that an event drawn from
    that distribution would be louder than the
    threshold (the search efficiency).
  • The confidence interval or upper limit on the
    rate is then established by combining the number
    of events observed above threshold, the estimated
    background, and the efficiency for detecting an
    event from the assumed source distribution. E.g.,
    Rate = (Nobs − Nbkgnd)/(efficiency × time).
  • In the absence of detection (and background),
    with no events observed, Rate < −ln(1 − CL) /
    (efficiency × time) = 2.3/(eff × time) at 90% CL.

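The zero-event limit quoted above follows from the Poisson likelihood for seeing no events; a minimal numerical check (the efficiency and live time below are illustrative inputs, not search values):

```python
import math

# With P(0 events | R) = exp(-R * efficiency * T) and a uniform prior,
# the CL upper limit solves 1 - exp(-R * eff * T) = CL,
# i.e. R_CL = -ln(1 - CL) / (eff * T).
def rate_upper_limit(efficiency, live_time_yr, cl=0.90):
    return -math.log(1.0 - cl) / (efficiency * live_time_yr)

r90 = rate_upper_limit(efficiency=0.5, live_time_yr=1.0)  # 2.303/0.5 ~ 4.6
```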
45
Loudest event statistic
  • For Initial LIGO, this traditional procedure is
    dangerous:
  • The expected detection rate is below our current
    sensitivity (we expect to see less than one event
    in S5).
  • Our background rate is difficult to determine,
    because our detectors are novel and not very well
    understood, and glitchier than we would like.
  • The stakes are high! We don't want to declare a
    detection just because we've accidentally
    underestimated our background.
  • The traditional procedure has already bitten us:
    the LIGO burst search established a fixed
    threshold where we believed that the false alarm
    rate was ≪ 1 per observation time. Then we
    observed an event above that threshold in S2! It
    was caused by acoustic pickup from a low-flying
    airplane.
  • Moral: setting a priori fixed thresholds can be
    dangerous!

46
Loudest event statistic
  • Better approach set a threshold just above the
    loudest observed event.
  • Then, by definition, there will be no signal
    events to sweat over!
  • Set a (conservative) upper limit on the rate,
    based on the efficiency for detecting events from
    the source distribution with loudness above that
    threshold.
  • If we have true signal below our threshold, the
    true rate should be below the resulting upper
    limit (with a specified statistical confidence)
    the upper limit is conservative.
  • We can still detect! Examine each of the loudest
    events in the search (all below the loudest event
    threshold), and use a variety of tests (the
    detection checklist) to establish detection
    confidence.

[Figure: loudest events in the S4 PBH, BBH, and
BNS searches]
47
Loudest event statistic
  • We wish to measure the rate R of events/yr/L10
  • Predicted number of events expected above a given
    detection statistic threshold (e.g., that of the
    loudest observed event): Np = R T CL(ρ*)
  • We can compute an estimate of the cumulative
    luminosity CL (in units of L10) to which we are
    sensitive. Crudely, CL ≈ (luminosity density,
    ≈ 0.01 L10/Mpc³) × (4π/3) D(ρ*)³, where ρ* is our
    SNR (or detection statistic) threshold and D is
    the (average over sky location and orientation)
    distance below which we can detect a signal with
    detection statistic above ρ*.
  • This will depend on the signal parameters
    (mainly, chirp mass).
  • More precisely, we perform a convolution integral
    over our source density model (as a function of
    physical distance) and our detection efficiency
    vs distance (as measured using simulated signal
    injections, run through the full detection
    pipeline).

48
Loudest event statistic
  • Detection efficiency is a gentle function of
    physical distance, but it's a sharper function of
    effective distance, so it is more efficient to
    use that.
  • But effective distance depends on the detector:
    it's different at LHO and LLO. We need to do the
    convolution in 2D for a LIGO-only analysis.


49
Loudest event statistic
  • Predicted number of events expected above a given
    detection statistic threshold (e.g., that of the
    loudest observed event): Np = R T CL(ρm)
  • Probability of observing no events above ρm (in
    the absence of background): P(ρm) = exp(−Np)
  • If we estimate that the background has a
    probability Pb(ρm) of having (at least) one event
    above ρm, then P(ρm) = Pb(ρm) exp(−Np(ρm))
  • Applying a Bayesian analysis with a uniform prior
    on R, we can turn this into a probability
    distribution for R, and set an upper limit (at
    some confidence level) on R.

50
Loudest event statistic
  • Here, in the presence of background, Λ measures
    the likelihood that the loudest event is real, as
    opposed to background
  • Limiting cases: the loudest event is unlikely to
    be background; the loudest event is most likely
    background
  • Λ is pretty small for the S4 searches.

51
Loudest event statistic
  • This procedure can be repeated for different
    parameters on which it can strongly depend, eg,
    upper limits vs total mass

LIGO S4 upper limits on compact binary
coalescence (arXiv:0704.3368, submitted to PRD)
52
Loudest event statistic
  • Because this is a Bayesian analysis, resulting in
    an a-posteriori probability distribution on the
    parameter we wish to measure (the true rate), the
    result can be used to:
  • combine P(R) from different measurements (e.g.,
    from different independent analyses, searches,
    detectors) into an improved probability
    distribution, to better bound or bracket the true
    rate
  • The result from, e.g., S4 can be used as a prior
    for the S5 analysis.

53
Picking up where we left off last time
  • Upper limits on CBC rate: systematic errors
  • Calibration uncertainties
  • Coincident triggers: eThinca
  • Background: time slides
  • Glitches and non-Gaussian noise
  • Detector characterization, DQ flags
  • Chisq veto, effective SNR
  • Hierarchical pipeline
  • Coherent SNR
  • Template bank veto
  • Detection confidence, qscans
  • Parameter accuracy and estimation
  • Software tools, LIGO Data Grid
  • Future work

54
Sources of Systematic Uncertainty
  • On ε (efficiency):
  • Distances to galaxies
  • Accuracy of template waveforms
  • Effect of cuts on real vs. simulated signals
  • Calibration
  • Finite statistics of simulation
  • On CL (cumulative luminosity):
  • Number of sources in galaxies other than the
    Milky Way
  • Use of blue light luminosity
  • Metallicity corrections
  • Blue light luminosity of the Milky Way
  • Most of our uncertainties are due to the
    astrophysical source distribution, not under our
    control!
  • Strong correlation between errors on distances to
    nearby galaxies and absolute luminosity of those
    galaxies.
  • This will get less significant as our sensitivity
    becomes more dominated by the average source
    density ρ ≈ 0.01 L10 / Mpc³ with, say, 10%
    error
  • Theoretical errors (real waveforms vs templates,
    real waveforms vs injected simulated signals)
    will improve as theory improves (NR, AR)
  • Experimental errors (calibration, simulation
    statistics) will improve with more work and
    computer time.
  • Much work will go into improving calibration
    errors to better than the current 10%

55
Calibration: DARM control loop
56
Open loop gain: model vs measurement
57
Resulting DARM_ERR response function
58
Time dependence
  • Mostly due to time-varying optical power in the
    arms, thereby varying the gain from strain to the
    output photodiode signal
  • Use digital suspended-optic controllers to place
    sinusoidal signals on the end test mass mirrors,
    with known frequency and amplitude in strain.

Three lines: one below the UGF, one near it, one
well above: 54.7, 396.7, and 1151.5 Hz. Monitor
and record the height of these lines every minute
→ α(t)
59
Calibration Uncertainties -- L1 during S4
Summary numbers: 5% amplitude, 5° phase
60
Coincidence
  • In the presence of Gaussian detector noise, the
    SNR from matched filtering is the optimal method
    for distinguishing signal from background due to
    noise.
  • Coincidence of triggers from multiple instruments
    can greatly suppress the loudest noise events,
    since it is very unlikely that the (rare) loudest
    noise triggers will be coincident in time and
    Mchirp
  • This allows us to turn our threshold down much
    lower, greatly increasing the detection rate:
    Rate ∝ (Dhorizon)³ ∝ (1/ρ*)³
  • The Gaussian-noise SNR distribution falls very
    steeply with SNR. Coincidence allows us to reduce
    our threshold (in, e.g., 1 year of running) from
    8 to 6, while keeping the probability of false
    detection below 1%.

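A back-of-envelope version of this threshold argument, assuming Gaussian noise so that the two-quadrature matched-filter SNR has tail probability exp(−ρ²/2); trials factors and the coincidence window are deliberately ignored here:

```python
import math

def tail_prob(rho):
    """P(SNR > rho) for 2 Gaussian quadratures (Rayleigh tail)."""
    return math.exp(-rho**2 / 2.0)

p_single_8 = tail_prob(8.0)                   # one detector at threshold 8
p_coinc_6 = tail_prob(6.0) * tail_prob(6.0)   # two independent detectors at 6
# exp(-36) < exp(-32): the coincident pair at threshold 6 is accidentally
# triggered less often than a single detector at threshold 8.
```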
61
Coincidence
  • BUT, coincidence means that multiple detectors
    must be online: if the duty cycle for each
    detector to be in Science Mode is 80%, the
    probability that all 3 are in Science Mode is
    (0.8)³ ≈ 51%! We lose livetime.
  • Also, our detectors do not all have the same
    sensitivity, and they are not co-aligned. They
    each see slightly different signals. This
    complicates coincidence.
  • The LIGO S1-S5 inspiral searches used
    triple-coincident triggers (negligible
    background) as well as double-coincident triggers
    (in both triple-time and double-time)
  • We thus have 7 kinds of triggers!
  • triples in H1H2L1 coincident time
  • H1H2, H1L1, and H2L1 doubles in H1H2L1 coincident
    time
  • doubles in H1H2 time, H1L1 time, and H2L1 time

62
International network
  • detection confidence
  • locate the sources
  • verify light speed propagation
  • decompose the polarization of gravitational
    waves
  • Open up a new field of astrophysics!

GEO
Virgo
LIGO
TAMA
AIGO
63
Trigger Coincidence
  • We require triggers to be coincident in time and
    mass parameters: Δt, Δmchirp, Δη (or
    equivalently, Δt, Δτ0, Δτ3)
  • These parameters are all correlated, so we define
    correlation ellipsoids in the 3-D parameter
    space, and require overlap of the ellipsoids
  • e-thinca: ellipsoidal thoughtful inspiral
    coincidence analysis

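The e-thinca test checks overlap of error ellipsoids; the sketch below does the simpler point-to-point version, testing whether the metric distance between two triggers' (t, τ0, τ3) parameters is below a threshold. The 3×3 metric and the trigger values are invented for illustration:

```python
def metric_distance2(p1, p2, g):
    """(p1 - p2)^T g (p1 - p2) for 3-vectors and a symmetric 3x3 metric g."""
    d = [a - b for a, b in zip(p1, p2)]
    return sum(d[i] * g[i][j] * d[j] for i in range(3) for j in range(3))

def coincident(p1, p2, g, eps2=1.0):
    return metric_distance2(p1, p2, g) <= eps2

# Toy metric with a tau0-tau3 correlation (illustrative numbers only):
g = [[1.0e4, 0.0, 0.0],
     [0.0,   4.0, 1.0],
     [0.0,   1.0, 4.0]]
a = (0.000, 10.0, 1.00)   # (t [s], tau0 [s], tau3 [s])
b = (0.005, 10.1, 1.02)   # nearby trigger -> coincident
c = (0.100, 12.0, 1.50)   # distant trigger -> not coincident
```

Because the metric encodes the correlations, a single threshold eps² replaces separate windows on each parameter.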
64
Estimating background
  • If the noise in multiple detectors is
    uncorrelated in time, we can estimate the rate of
    coincident triggers due to accidental coincidence
    of noise via time-slides (anti-coincidence).
  • These can't be GWs!
  • The trigger data streams from two detectors are
    slid by multiples of some fixed time, chosen to
    be longer than the correlations between triggers
    (due to template durations) and shorter than the
    noise non-stationarity of the detectors.
  • For our S5 searches, we slide by 5, 10, 250
    seconds.

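The time-slide procedure can be sketched directly on trigger time lists. This is a toy version (real pipelines slide on a ring per science segment and use the full e-thinca test rather than a plain time window):

```python
import bisect

def count_coinc(t1, t2, window):
    """Number of (t1, t2) trigger pairs within +/- window seconds."""
    t2s = sorted(t2)
    n = 0
    for t in t1:
        lo = bisect.bisect_left(t2s, t - window)
        hi = bisect.bisect_right(t2s, t + window)
        n += hi - lo
    return n

def background_estimate(t1, t2, window, step, n_slides, ring):
    """Mean accidental coincidences over +/- n_slides time slides, sliding
    detector 2 on a ring of duration `ring` so times stay in the segment."""
    counts = []
    for i in range(1, n_slides + 1):
        for s in (i * step, -i * step):
            slid = [(t + s) % ring for t in t2]
            counts.append(count_coinc(t1, slid, window))
    return sum(counts) / len(counts)

t1 = [10.0, 20.0, 30.0]
t2 = [10.005, 50.0]
zero_lag = count_coinc(t1, t2, 0.01)                      # 1 coincidence
bg = background_estimate(t1, t2, 0.01, 5.0, 2, 100.0)     # accidental rate
```

Comparing the zero-lag count against the slide average is exactly the foreground-vs-background comparison of the loudest-event analysis.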
65
Time-slides for H1H2
  • But co-located detectors can have correlated
    noise triggers! The two detectors at LHO: H1
    (4 km) and H2 (2 km), below.
  • In principle, loud noise triggers are due to
    displacement noise, not strain noise, and the
    strain will be different between H1 and H2: we
    require h(t) amplitude (or equivalently, trigger
    effective distance) consistency.
  • Still, lots of correlated noise triggers get
    through.
  • This problem is much more serious for the
    stochastic search!!

66
Glitches
  • But the big problem is that the LIGO detectors do
    not exhibit only Gaussian, stationary noise.
  • There are loud glitches, due to seismic bumps,
    acoustic noise, servo instabilities, power line
    glitches,
  • These can be loud, can ring up templates, and
    can dominate our searches and loudest events!

67
Detector Characterization
  • DMT tools analyze data from hundreds of
    interferometric and environmental channels in
    near real-time for detector monitoring and
    characterization purposes
  • Minute trends, Band-limited noise, Line
    monitoring, Glitch identification and cataloguing
  • Correlation studies between glitches in the
    gravitational wave channel and auxiliary channels
  • Detector characterization work feeds into data
    quality and veto flags, which are crucial to all
    burst and inspiral analyses
  • Coincidence analysis and event classification
    have provided evidence of events resulting from
    extreme power line glitches reflected all across
    the H1-H2 instruments

68
Data quality flags
  • The Detector Characterization group and Glitch
    subgroup are charged with studying the data
    quality and establishing DQ flags.
  • Flag extended time periods when a detector is
    misbehaving.
  • Cat1: exclude these time periods from the search
    (deadtime).
  • Cat2, Cat3: analyze the data, but study the
    effect of the veto on the fake rate and
    efficiency.
  • Cat3 vetoes tend to be less certain, and they
    veto larger time periods.
  • Set upper limits after requiring the Cat3 veto.
  • Detection candidates vetoed by Cat4 are to be
    treated with suspicion.
  • Cat5 vetoes triggers that are coincident with
    glitch triggers in auxiliary channels. The CBC
    group doesn't use these; the burst group does.

69
DQ flags for S5
70
SNR isn't the only tool required in the
presence of non-Gaussian noise
71
Signal Based Vetoes
  • A large transient (glitch) in the data can cause
    the matched filter to have a large SNR output
  • We use signal based vetoes to check that the
    matched filter output is consistent with a signal
  • If we have enough cycles, one of the strongest
    vetoes is the χ² veto

72
The χ² veto
  • An effective method for distinguishing
    (well-modeled broad-band) inspiral signals from
    non-Gaussian noise glitch backgrounds.
  • It performs a time-frequency decomposition,
    breaking up the template in time/frequency, to
    test whether the matched-filter output has the
    expected SNR accumulation in all the frequency
    bands.
  • Noise glitches tend to excite the matched filter
    at high frequency or at low frequency, but
    seldom produce the same spectrum as an inspiral.

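The band decomposition above reduces to a simple statistic once the per-band SNR contributions are in hand. A minimal sketch, assuming the template is split into p bands each contributing equal expected SNR² (the SNR values below are invented for the demo):

```python
def chisq_veto(z_bands):
    """chisq = p * sum_l |z_l - z/p|^2 over p frequency bands, where
    z is the total SNR and z_l its measured contribution from band l."""
    p = len(z_bands)
    z = sum(z_bands)
    return p * sum(abs(zl - z / p) ** 2 for zl in z_bands)

p = 16
snr = 8.0
# A well-matched inspiral accumulates SNR evenly across the bands:
signal_bands = [snr / p] * p
# A glitch dumps most of its power into one band:
glitch_bands = [snr] + [0.0] * (p - 1)

chisq_signal = chisq_veto(signal_bands)   # ~0 for a perfect match
chisq_glitch = chisq_veto(glitch_bands)   # large -> vetoed
```

In real data the signal case fluctuates around its χ² expectation of 2(p−1) rather than zero, which is what the next slide quantifies.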
73
Effect of signal/template mismatch on χ²
  • After subtracting out the expected contribution
    to the SNR from each frequency bin, as predicted
    by the template, the resulting χ² should follow a
    χ² distribution with 2(p−1) degrees of freedom.
  • But if the template isn't perfectly matched to
    the signal, the χ² will deviate from this, with a
    non-centrality parameter that depends on the
    amount of mismatch (δ < δmax ≈ 0.03) and the
    strength of the signal (SNR ρ).
  • It can be shown that, in the presence of a signal
    and Gaussian noise, χ² has a non-central
    chi-squared distribution with 2(p−1) degrees of
    freedom and non-centrality parameter λ bounded by
    2δρ².
  • However, λ may possibly be slightly greater than
    2δ times the measured ρ² owing to the presence of
    noise.
  • We treat δ as a tunable parameter in the LSC
    searches.

74
The χ² veto
  • The resulting χ² is used both as a veto and as a
    means of rescaling the raw SNR into an effective
    SNR that combines SNR with χ² information,
    optimally discriminating between signal and
    background glitches.

p = number of χ² bins = 16 for LIGO
75
Glitches in the Data
  • Glitches can still be a problem, even with signal
    based vetoes (particularly in higher mass
    searches)
  • A lot of work in the LSC is devoted to finding,
    identifying and eliminating glitches
  • Loud glitches reduce our range (and hence rate)
    by hiding signals
  • Even if a template has excellent overlap with
    signals, if it picks up lots of glitches we have
    a problem

76
Effective SNR
  • Effective SNR combines SNR with χ² to produce a
    background distribution that is much closer to
    Gaussian, suppressing the loud tails

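One published form of the effective SNR can be sketched as below; treat both the exact functional form and the normalization constant (250 here) as assumptions for illustration, since they are tuning choices of the searches:

```python
# rho_eff^2 = rho^2 / sqrt[ (chisq / (2p - 2)) * (1 + rho^2 / c) ]
def effective_snr(rho, chisq, p=16, c=250.0):
    return rho / ((chisq / (2 * p - 2)) * (1 + rho**2 / c)) ** 0.25

rho = 8.0
rho_sig = effective_snr(rho, chisq=30.0)       # chisq near its expected 2(p-1)
rho_glitch = effective_snr(rho, chisq=3000.0)  # loud glitch: strongly suppressed
```

A clean signal keeps nearly all of its SNR, while a glitch with the same raw SNR but a huge χ² is pushed far down the ranked candidate list.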
77
Calculating chisq: the hierarchical pipeline
  • Unfortunately, the calculation of the chisq
    signal-based veto quantity is computationally
    expensive.
  • There's no point in computing it if the triggers
    aren't coincident (in time and mchirp) between 2
    or more detectors.
  • So we had to break the inspiral pipeline into two
    steps, and only compute chisq for coincident
    triggers.

78
Coherent SNR
  • The output of filtering the data with a (complex
    SPA) template is an SNR time-series, which is
    complex: the phase selects the best combination
    of cosine/sine (h+, h×) components of the signal.
  • A single-detector trigger is a peak in the
    magnitude of this time series in time, and across
    templates in the bank.
  • A coincident trigger is a set of single-detector
    triggers that are coincident in time and template
    parameters.
  • The complex SNR time series for the templates in
    that coincident trigger can be combined together
    coherently, by applying suitable time delays
    depending on the location of the source in the
    sky.
  • One can maximize the combined coherent SNR over
    sky location.
  • One can apply thresholds or other criteria to
    favor coherent signals from multiple detectors
    over ones that don't tend to add coherently.
  • One need not go back to the raw data: only the
    SNR time series in the neighborhood of the
    coincident trigger (e.g., 1 s) from the multiple
    detectors is necessary.

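The delay-and-sum idea can be shown on toy complex SNR series. A hedged sketch (real coherent analyses weight each detector by its antenna pattern and maximize over sky position; here only the time-delay maximization survives, with invented peak values):

```python
import numpy as np

def best_coherent_delay(z1, z2, max_delay):
    """Try integer sample delays; return the one maximizing the peak of
    |z1(t) + z2(t + d)|, plus that peak value."""
    best, best_peak = 0, -1.0
    for d in range(-max_delay, max_delay + 1):
        combined = np.abs(z1 + np.roll(z2, -d))   # shift z2 earlier by d samples
        peak = combined.max()
        if peak > best_peak:
            best, best_peak = d, peak
    return best, best_peak

n = 4096
z1 = np.zeros(n, dtype=complex)
z2 = np.zeros(n, dtype=complex)
z1[2000] = 8.0                     # "signal" peak in detector 1
z2[2007] = 8.0 * np.exp(0.3j)      # same signal, 7 samples later, phase-shifted
delay, peak = best_coherent_delay(z1, z2, 10)
```

A real signal adds nearly in phase at the true delay (combined peak well above either single-detector peak), while incoherent glitches gain little from any delay.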
79
Coherent SNR
  • coherent signal injection vs.
    incoherent coincident glitch

80
Other trigger information
  • There is more information from our triggers that
    can be used to distinguish signal from
    backgrounds in the presence of glitches.
  • For example, glitches can ring up large areas of
    the template bank, while signals only ring up
    templates within their ambiguity function.
  • We cluster triggers in template space before the
    coincidence test.
  • The template bank veto is currently under
    development and study.
  • More information is available, and can be used in
    multivariate classification programs to more
    effectively separate signal from background and
    assign each event candidate a more meaningful
    detection statistic: louder events are more
    signal-like.

81
Template bank veto
  • A glitch will ring up a broad swath of templates
    in the bank.
  • Different templates ring up over duration of
    injection or glitch (fraction of a second).
  • For signal injections, SNR peaks at best-match
    chirp mass and coalescence time.
  • Time evolution of SNR and motion across the
    template bank is different for glitches.

82
Trigger bank chisq
  • The SNR and chirp-mass evolution over time can be
    combined into a chi-squared statistic designed to
    match signal-like behavior.
  • This test is still under development and not yet
    employed in the LSC CBC pipeline (but soon!).
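Since the trigger-bank chisq itself is still under development, here is a generic chi-squared consistency test in the same spirit (the Allen-style form built from SNR sub-bands); the bin values below are invented for illustration and are not pipeline output.

```python
# Generic chi-squared consistency test (illustrative, not the pipeline's
# statistic): compare the SNR measured in p sub-bands against the equal
# share a true signal would deposit in each.
def chisq(subband_snr):
    """Allen-style chi-squared: p * sum_l |rho_l - rho/p|^2."""
    p = len(subband_snr)
    rho = sum(subband_snr)
    return p * sum((r - rho / p) ** 2 for r in subband_snr)

signal_like = [2.0, 2.0, 2.0, 2.0]   # SNR spread evenly, as a chirp would be
glitch_like = [7.0, 0.5, 0.3, 0.2]   # SNR concentrated in one band

print(chisq(signal_like))            # 0.0 for a perfectly even split
print(chisq(glitch_like) > chisq(signal_like))
```

A loud glitch that piles its power into one band gets a large chi-squared even at high SNR, which is exactly the signal/glitch discrimination the slide describes.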

83
Multivariate classification
  • Kari Hodge uses a package called
    StatPatternRecognition, and a particular
    algorithm called Bagged Decision Trees, to create
    a random forest of decision trees making use of a
    vector of input variables.
  • Train the algorithm on background (time-slide
    triggers) and foreground (software-injection
    triggers).
  • Choose an optimization criterion (maximize S/B,
    S/√(S+B), among many others).
  • Evaluate ROC using an independent sample of S, B.
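The train/evaluate workflow above can be sketched with scikit-learn's random forest standing in for StatPatternRecognition's bagged decision trees. The two input features (SNR and chi-squared) and their toy distributions are assumptions for illustration, not real trigger data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Sketch of the slide's workflow, with scikit-learn substituted for
# StatPatternRecognition.  Feature distributions are invented toys.
rng = np.random.default_rng(0)

# Background: time-slide triggers (lower SNR, higher chisq on average).
bg = np.column_stack([rng.normal(6, 1, 500), rng.normal(20, 5, 500)])
# Foreground: software-injection triggers (higher SNR, lower chisq).
fg = np.column_stack([rng.normal(10, 2, 500), rng.normal(8, 3, 500)])

X = np.vstack([bg, fg])
y = np.concatenate([np.zeros(500), np.ones(500)])

# Train on half, evaluate the ROC on an independent half, as the slide says.
idx = rng.permutation(1000)
train, test = idx[:500], idx[500:]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train], y[train])
auc = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])
print(round(auc, 2))  # well-separated toy classes give an AUC near 1
```

The classifier's output probability then serves as the "more meaningful detection statistic" mentioned earlier: a single number that folds several trigger variables into one ranking.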

84
Detection confidence
  • The end of a search pipeline is a rank-ordered
    list of event candidates.
  • If there are none, we turn down our thresholds
    until we see some (including a loudest event).
  • It is nice to have a manageable number of event
    candidates to consider (a few tens).
  • The events may or may not be consistent with the
    estimated background.
  • E.g., for H1H2 double-coincident event candidates,
    where we systematically underestimate our
    background using time slides, the remaining
    foreground events are naively inconsistent with
    the background.
  • Even with our best automated tools, the
    detection statistic ρ_eff does not give us
    sufficient confidence in declaring detection.
  • Events must still be scanned by hand. Could they
    be due to environmental or instrumental glitches?
  • We have a very long, elaborate, time-consuming
    detection checklist.
  • Our best tools so far: qscans, coherent SNR,
    null stream.

85
First page of a many-page detection checklist
for one event
86
qscans, null stream
(this one's a hardware injection!)
87
qscans, loud glitches
88
Once we detect
  • We want to estimate the parameters of the signal:
    masses, sky location, orbit orientation, spins.
  • Our non-spinning templates can do an OK job of
    estimating the chirp mass, but not much else.
  • To do better, use more sophisticated tools:
    Bayesian Markov Chain Monte Carlo (MCMC).

89
Mass Accuracy
  • Good accuracy in determining the chirp mass,
    where M_chirp = (m1·m2)^(3/5) / (m1+m2)^(1/5).
  • Accuracy decreases significantly with higher mass

BNS: 1-3 M⊙
BBH: 3-35 M⊙
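The chirp-mass combination the search measures well can be computed directly; the 1.4 + 1.4 M⊙ example below is a canonical BNS, chosen for illustration.

```python
# Chirp mass from the component masses, as in the accuracy discussion:
# M_chirp = (m1*m2)**(3/5) / (m1+m2)**(1/5), in solar masses.
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# A canonical 1.4 + 1.4 solar-mass BNS has chirp mass ~1.22 M_sun.
print(round(chirp_mass(1.4, 1.4), 2))
```

Because the leading-order inspiral phase evolution depends on the masses only through this combination, M_chirp is measured far more precisely than the individual masses or their ratio, which is why the next slide shows little ability to distinguish mass ratio.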
90
Mass Accuracy
  • Very little ability to distinguish mass ratio.
  • Width of accuracy plots similar to entire search
    range.

BNS: 1-3 M⊙
BBH: 3-35 M⊙
91
Timing Accuracy
  • As before, parameter accuracy better for longer
    templates.
  • Timing accuracy determines ability to recover sky
    location
  • The timing systematic is due to injecting
    time-domain (TD) waveforms and recovering with
    frequency-domain (FD) templates.
  • An overall systematic (same at all sites) does
    not affect sky location.

BBH: 3-35 M⊙
BNS: 1-3 M⊙
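The link between timing accuracy and sky localization can be made concrete: a measured arrival-time difference between two sites confines the source to a ring on the sky about the baseline, with cos θ = c·Δt / D. The ~3002 km Hanford-Livingston separation is the well-known value; the delay and timing accuracy below are illustrative assumptions, not measurements.

```python
import math

# Triangulation sketch: arrival-time difference dt between two sites puts
# the source on a sky ring of opening angle theta, cos(theta) = c*dt / D.
C = 299792458.0           # speed of light (m/s)
BASELINE_H1_L1 = 3.002e6  # Hanford-Livingston separation (m), ~10 ms light travel

def ring_angle_deg(dt):
    """Opening angle (degrees) of the sky ring for time delay dt (s)."""
    return math.degrees(math.acos(C * dt / BASELINE_H1_L1))

dt, sigma_t = 5e-3, 1e-4  # assumed delay and timing accuracy (s)
theta = ring_angle_deg(dt)
# Ring thickness from the timing error (finite-difference estimate).
width = abs(ring_angle_deg(dt + sigma_t) - ring_angle_deg(dt - sigma_t)) / 2
print(round(theta, 1), round(width, 2))
```

This is also why an overall timing systematic that is the same at all sites is harmless: it cancels in the difference Δt, leaving the ring unchanged.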
92
Markov Chain Monte Carlo Parameter Estimation
  • A candidate would be followed up with an MCMC
    parameter-estimation routine.
  • Example from simulated LIGO-Virgo data with
    injection.

Plot from Christian Roever, Nelson Christensen
and Renate Meyer
93
MCMC convergence
  • The MCMC explores a high-dimensional parameter
    space (for non-spinning binaries, 9 parameters
    per detector).
  • It performs a Markov-chain random walk through
    that space, minimizing the mismatch between the
    data and a template with those parameters.
  • To avoid landing in false minima, random thermal
    noise is added. Turn up the temperature
    (simulated annealing) to escape from local
    minima.
  • Also run parallel chains with different initial
    parameters.
  • Iterate until it converges robustly on a minimal
    mismatch.
  • Explore around that minimum to establish a
    posterior PDF for each parameter.
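The random walk the bullets describe can be sketched as a Metropolis-Hastings chain on a one-parameter "mismatch" surface with a false minimum. The mismatch function, temperature, and step size below are invented for illustration; real runs walk through ~9 parameters per detector.

```python
import math
import random

# Toy Metropolis-Hastings walk: minimize a mismatch surface with two minima,
# using a temperature (simulated annealing spirit) to escape the false one.
def mismatch(x):
    # global minimum at x = 2; shallower false minimum at x = -2
    return min((x - 2) ** 2, (x + 2) ** 2 + 1.0)

def mcmc(start, temperature, steps=50000, seed=1):
    rng = random.Random(seed)
    x, best = start, start
    for _ in range(steps):
        prop = x + rng.gauss(0, 0.5)
        # accept downhill moves always; uphill with Boltzmann probability
        if (mismatch(prop) < mismatch(x)
                or rng.random() < math.exp((mismatch(x) - mismatch(prop)) / temperature)):
            x = prop
        if mismatch(x) < mismatch(best):
            best = x
    return best

# Started in the false minimum, a warm enough chain still finds the global one.
result = mcmc(start=-2.0, temperature=2.0)
print(round(result, 1))
```

Running parallel chains from different starting points, as the slide says, is the practical check that the walk converged on the global minimum rather than a local one.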

Nelson Christensen, Hans Bantilan
94
Posterior PDFs
  • We get the peak (most likely) parameters and
    confidence intervals on the parameters.
  • Some parameters are determined better than
    others!
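Turning MCMC samples into the quantities on this slide is a one-liner per parameter. The Gaussian "chirp-mass samples" below are simulated stand-ins for a real chain's output.

```python
import numpy as np

# Sketch: extract a most-likely value and a 90% credible interval from
# posterior samples.  The samples here are a simulated stand-in.
rng = np.random.default_rng(42)
samples = rng.normal(1.22, 0.01, 10000)   # e.g. chirp-mass posterior samples

lo, hi = np.percentile(samples, [5, 95])  # central 90% credible interval
peak = np.median(samples)                 # simple proxy for the posterior peak
print(round(peak, 2), round(lo, 3), round(hi, 3))
```

A tightly measured parameter (like chirp mass) gives a narrow interval; a poorly constrained one (like mass ratio) gives an interval spanning much of the prior range, which is the "some parameters are determined better than others" point.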

95
Correlations and degeneracies
96
Sky localization using MCMC runs with multiple
detectors
H1H2L1
97
Software tools
  • LSC Data Analysis Software Working Group
  • code: C, Python, Matlab, C++/ROOT
  • LIGO Algorithm Library (LAL) and LALApps
    (applications that use LAL, such as the inspiral
    matched-filter code and waveform simulations),
    both written in C
  • Most post-processing (coincidence, plots, …) is
    written in Python
  • 3rd-party scientific software: FFTW, GSL, pylab,
    ROOT, MV classifiers, …
  • search/analysis pipelines are run using Condor
    DAGs on the LIGO Data Grid (Linux clusters).
  • Plans are underway to use the NSF Open Science
    Grid (OSG).

98
LIGO Data Grid clusters
99
Future
  • More signal-based vetoes and better
    signal/background discrimination
  • Better, more automated, less biased detection
    confidence procedures
  • Better spinning BBH searches
  • Incoherent IMR
  • Coherent IMR using analytical waveforms guided by
    NR
  • Better parameter estimation, source sky location
  • Tests of GR using detected waveforms
  • Faster pipelines, more computing resources (OSG)
  • Follow-up with EM detectors (ground- and
    space-based telescopes, neutrino detectors, etc.)
  • Open up the new and wildly exciting field of
    gravitational wave astrophysics!