1
A limit on the diffuse flux of muon neutrinos using data from 2000–2003
Jessica Hodges, University of Wisconsin–Madison
Baton Rouge Collaboration Meeting, April 13, 2006
"Light at the end of the tunnel"
2
Search for a Diffuse Flux of Neutrinos (TeV–PeV)
2000–2003: 807 days of detector livetime
Monte Carlo simulation:
Atmospheric Muons: muons created when cosmic rays hit the atmosphere, including simulation of simultaneous downgoing muons.
Atmospheric Neutrinos: neutrinos created when cosmic rays hit the atmosphere; they have an E^-3.7 energy spectrum.
Signal Neutrinos: extraterrestrial neutrinos with an E^-2 energy spectrum.
(1) Remove downgoing events with a zenith angle cut and by requiring high-quality event observables.
(2) Separate atmospheric neutrinos from signal with an energy cut.
3
After Event Quality Cuts
Upgoing events
Horizontal events
The zenith angle distribution of high-quality events before an energy cut is applied. The signal test flux is E^2 Φ = 10^-6 GeV cm^-2 s^-1 sr^-1.
4
After event quality cuts, this is the final
sample of upgoing events for 4 years.
(Plots on linear and log scales, with the kept regions above the cut marked.)
Signal hypothesis: E^2 Φ = 10^-6 GeV cm^-2 s^-1 sr^-1
  • Key Elements for Setting a Limit
  • 1) number of actual data events observed
  • 2) number of background events predicted
  • 3) number of signal events predicted, given the signal strength that you are testing

5
How to calculate the number of background events in the final sample:
1) Count the number of data events above and below the final energy cut (NChannel > 100).
2) Count the number of atmospheric neutrino Monte Carlo events above and below the final cut.
3) Apply a scale factor to the low-energy Monte Carlo events so that the number of events exactly matches the low-energy data.
4) Apply this same scale factor to the background Monte Carlo above NChannel = 100. This is the number of background events (b) that goes into the final computation of the limit.
This sounds very easy, but what does it mean to "normalize the Monte Carlo"?
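The four-step normalization above can be sketched in a few lines. This is a sketch with made-up event counts; the real analysis uses the full NChannel distributions:

```python
def background_prediction(data_low, mc_low, mc_high):
    """Normalize MC to data below the cut, then scale the MC above it."""
    scale = data_low / mc_low      # steps 1-3: match low-energy MC to low-energy data
    return scale * mc_high         # step 4: background prediction above NChannel = 100

# Made-up counts for illustration only:
b = background_prediction(data_low=4000.0, mc_low=3600.0, mc_high=5.5)
```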
6
Scaling the low-energy atmospheric neutrino Monte Carlo prediction to the low-energy data can mean either:
1) You believe the atmospheric neutrino theory to be correct, and you are correcting the detector acceptance or efficiency. (This could involve many factors: ice properties, muon propagation, OM sensitivity.) This was done in the 1997 B10 analysis.
2) You are correcting the theoretical flux that went into the Monte Carlo prediction, because you believe that the theorists incorrectly predicted the atmospheric neutrino flux.
7
Two Interpretations of What it Means to Scale the Low Energy Monte Carlo Events to the Low Energy Data
(Diagrams: atmospheric neutrino MC, data, and signal distributions under each interpretation.)
Interpretation 1: apply the scale factor to correct for detector efficiency. DO apply the correction factor to the signal, since we are correcting the entire detector efficiency or acceptance.
Interpretation 2: apply the scale factor to correct the theory that predicts the atmospheric neutrino flux. Do NOT apply the correction to the signal, since it was meant only for atmospheric neutrinos.
8
We chose to attribute the scale factor to the uncertainties in the atmospheric neutrino flux. Hence, I will NOT apply the scale factor to the signal Monte Carlo. This means that any uncertainties in our detection of the predicted flux must be accounted for separately.
9
  • Key Elements for Setting a Limit
  • 1) number of actual data events observed
  • 2) number of background events predicted
  • 3) number of signal events predicted, given the signal strength that you are testing
  • 2 and 3 are based on Monte Carlo simulations that contain many uncertain inputs!
  • We must consider how systematic errors would change the amount of signal or background in the final sample.

That's easy! 6 events observed.
10
First, let's consider the uncertainties in the background prediction.
Every model or uncertainty that is applied to the
background spectrum affects the number of events
that will survive to the final event sample.
11
In the past, most AMANDA analyses have used the Lipari model for atmospheric neutrinos. Instead, I will use the more up-to-date calculations done by two different groups:
Barr, Gaisser, Lipari, Robbins, Stanev 2004 (BARTOL)
Honda, Kajita, Kasahara, Midorikawa 2004 (HONDA)
1 background model (Lipari) → Bartol and Honda
There are now 2 background predictions.
12
The atmospheric neutrino flux models (Bartol and Honda) are affected by:
1) uncertainties in the cosmic ray spectrum
2) uncertainties about hadronic interactions
(Plot: Cosmic Ray Proton Flux, taken from the HKKM 2004 paper. Short-dashed green line: old HONDA; solid pink line: HONDA 2004; dashed green line: BARTOL 2004.)
13
Uncertainties in the cosmic ray spectrum and hadronic interactions were estimated as a function of energy.
(Plot: percentage uncertainty in the atmospheric neutrino flux versus log10(Eν).)
Every background Monte Carlo event can be weighted with this function by using the event's true energy.
14
If every background event is weighted UP by its maximum error, you get a new, higher prediction of the background. Every event can also be weighted DOWN by its maximum error, which gives a new, minimum prediction.
Bartol
Honda
There are now 6 background predictions.
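A minimal sketch of this up/down weighting, assuming a hypothetical uncertainty curve as a function of log10(true energy) and a toy event list (the real analysis uses the estimated energy-dependent flux uncertainties):

```python
def shifted_prediction(events, frac_uncertainty, direction=+1):
    """Sum MC event weights after shifting each by +/- its fractional uncertainty."""
    return sum(w * (1 + direction * frac_uncertainty(log10_e))
               for w, log10_e in events)

# Toy inputs (hypothetical): (event weight, log10 of true neutrino energy in GeV)
events = [(0.8, 3.2), (0.5, 4.1)]
# Toy uncertainty curve: 15% at 1 TeV, growing with energy
unc = lambda log_e: 0.15 + 0.05 * max(0.0, log_e - 3.0)

maximum = shifted_prediction(events, unc, +1)  # every event weighted UP
minimum = shifted_prediction(events, unc, -1)  # every event weighted DOWN
```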
15
Our Monte Carlo simulation is NOT perfect. Reasons for disagreement between data and Monte Carlo: ice properties, muon propagation, OM sensitivity, other unknowns??? Consider what happens when you cut on a distribution that does not show perfect agreement.
(Plot: BLUE is the true distribution (data) and orange is the Monte Carlo, with the kept and cut regions marked.) A cut on this distribution would yield TOO MANY Monte Carlo events compared to the truth (data).
16
Fortunately, I have a good sample of downgoing muons and minimum bias data that can be used to study the uncertainties in my cuts!
(Plot legend: data vs. atmospheric µ.)
I have performed an inverted analysis to select the highest quality downgoing events. All cuts and reconstructions are the same as in the upgoing analysis, just turned upside-down.
17
Examining the Cuts with Downgoing Muons
See disagreement at the final cut level. Go back to the level before event quality cuts were made. Estimate a percentage shift for the Monte Carlo in each parameter that will provide better data-MC agreement. Apply the shifted Monte Carlo cuts at the final cut level.
Median Resolution (degrees)
18
Examining the Cuts with Downgoing Muons
See disagreement at the final cut level. Go back to the level before event quality cuts were made. Estimate a percentage shift for the Monte Carlo in each parameter that will provide better data-MC agreement. Apply the shifted Monte Carlo cuts at the final cut level.
Ndirc: number of direct hits
19
Shift the downgoing Monte Carlo in the parameters that show disagreement. This is tricky because you must find a way to shift each parameter into agreement without creating disagreement in other parameters. Estimate a correction to the Monte Carlo for each parameter before any cuts are applied. Apply all of the shifted cuts at the final cut level and see if all of the distributions show agreement.
RESULTING SHIFT showing good agreement across all important parameters:
Number of direct hits (ndirc) → 1.1 × ndirc
Smoothness of hits along track (smootallphit) → 1.08 × smootallphit
Median resolution → 1.05 × med_res
Likelihood ratio (up to down) → 1.01 × L.R.
20
I will use the modified cuts from the downgoing analysis to apply an additional uncertainty to my upgoing events (both signal and background).
Shifted MC
Normal MC
Each model can be considered WITH and WITHOUT the
Monte Carlo shifted on those 4 parameters.
There are now 12 background predictions.
There are now 2 signal predictions.
Signal (shifted MC)
Signal (normal)
21
Number of Events in the Final Data Set (Nch > 100)
Average background: 6.12. Average signal: 66.7.
22
These are the 12 background predictions for the final sample, ranging from 4.52 to 7.79.
Red: background from normal cuts. Blue: background from shifted MC cuts.
23
Another look at the inverted analysis: does the detector have a linear response in NChannel?
(Plots on log and linear scales: the NChannel comparison between the minimum bias data and the dCorsika atmospheric muon simulation.)
24
It would be desirable for the ratio of data to
atmospheric muons as a function of NChannel to be
flat.
This suggests that we should not use events
between 0 and 50 channels hit to perform the
upgoing normalization.
A fit to the ratio as a function of NChannel is
not exactly flat, but its slope is very
small. How does this affect the signal and
background predictions?
25
I used the fit as a function of NChannel to scale up the Monte Carlo from the upgoing analysis.
How the upgoing atmospheric neutrinos responded: this only caused a small change in the number of background events predicted to be in my final sample. The uncertainty due to NChannel being a non-linear parameter is at most 10%. This is much less than the other errors I have been considering, and I will not worry about it.
How the upgoing signal neutrinos responded: the number of signal events predicted above the NChannel cut changed from 68.4 events to 85.6 events. This is a 25% error.
26
There may be additional detector effects which mean that our signal efficiency is not 1.0. Changing the OM sensitivity seems to have a linear effect on the NChannel spectrum of the signal. Consider the uncertainty in the number of signal events predicted in the final sample from this effect to be 10%.
10% OM-sensitivity uncertainty + 25% uncertainty due to NChannel non-linearity → uncertainty in our overall detection of the signal: (10² + 25²)^1/2 ≈ 27%.
Now it's time to build the confidence belt...
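The quadrature sum quoted above combines the two independent fractional uncertainties on the signal efficiency:

```python
import math

def combine_in_quadrature(*percent_errors):
    """Combine independent percentage errors: sqrt of the sum of squares."""
    return math.sqrt(sum(e * e for e in percent_errors))

# OM-sensitivity (10%) and NChannel non-linearity (25%) terms:
total = combine_in_quadrature(10.0, 25.0)   # ~26.9, quoted as 27%
```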
27
How to include systematic errors in confidence belt construction: systematic errors are included by the methods laid out in Cousins and Highland (Nucl. Instrum. Methods Phys. Res. A, 1992), Conrad et al. (Phys. Rev. D, 2003), and Hill (Phys. Rev. D, 2003).
P(x | µ) = ∫∫ P(x | εµ + b) P(ε, b) dε db
I have found 12 different background predictions, b, and 3 different values for the signal efficiency, ε. By integrating over these options, I can include systematic errors in my confidence belt construction.
This can be simplified with a summation and the Poisson formula.
28
The construction of the Feldman-Cousins confidence belt relies on the Poisson distribution:
P(x | µ + b) = (µ + b)^x e^-(µ + b) / x!
At every value of the signal strength, µ, you can calculate a probability distribution function.
(Plot: P(x | µ + b) versus x, the number observed, for each signal strength µ.)
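The Poisson probability above can be written directly; for instance, one can evaluate it at the observed count with the average background prediction:

```python
import math

def poisson_pmf(x, mu, b):
    """P(x | mu + b) = (mu + b)^x * exp(-(mu + b)) / x!"""
    lam = mu + b
    return lam ** x * math.exp(-lam) / math.factorial(x)

# e.g. the probability of observing exactly 6 events when mu = 0
# and the background prediction is b = 6.12:
p6 = poisson_pmf(6, 0.0, 6.12)
```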
29
Applying the efficiency error to the PDF: replace (µ + b) with (εµ + b), where ε is the efficiency factor on the signal: ε = 0.73, 1.00 or 1.27.
Why not put a factor of ε on the background?
1) Linear Nch-dependent effects in the detector will be removed by the normalization at low Nch.
2) Non-linear Nch-dependent effects were computed to be very small compared to the other errors in the background that we are considering.
30
There are now 12 values for the background prediction and 3 values for the signal efficiency. This makes a total of 36 values of (εµ + b) that you can use to construct the confidence belt.
(Plot: the PDF for µ = 0.5. Three PDFs are averaged into one, the magenta line.) For my analysis, 36 PDFs are averaged into one for every value of µ.
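The averaging can be sketched as a discrete version of the Cousins-Highland integral: average P(x | εµ + b) over every (ε, b) combination. The efficiencies below are the three quoted values; the background list is an illustrative subset, not the analysis's full set of 12 predictions:

```python
import math

def poisson(x, lam):
    """Poisson probability of observing x counts with mean lam."""
    return lam ** x * math.exp(-lam) / math.factorial(x)

def averaged_pdf(x, mu, efficiencies, backgrounds):
    """Average P(x | eps*mu + b) over all (eps, b) combinations."""
    combos = [(eps, b) for eps in efficiencies for b in backgrounds]
    return sum(poisson(x, eps * mu + b) for eps, b in combos) / len(combos)

efficiencies = [0.73, 1.00, 1.27]        # the three signal-efficiency values
backgrounds = [4.52, 5.5, 6.12, 7.79]    # illustrative subset of the 12 predictions
p6 = averaged_pdf(6, 0.5, efficiencies, backgrounds)
```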
31
My Sensitivity
Event upper limit 5.78
The most probable observation, if µ = 0, is 6 events. (This is also the median.)
The sensitivity is the maximum signal strength µ that is consistent with the most probable observation if there is no signal (µ = 0).
32
My Limit = My Sensitivity
Event upper limit 5.78
6 events were observed
The upper limit is the maximum signal strength µ
that is consistent with 6 events observed.
33
No Systematic Errors
Event upper limit 4.91
Limit with no systematic errors: this confidence band, without systematic errors, is not as wide. (The background assumption is the average of the Bartol central and Honda central predictions.)
34
E^2 Φ < (E^2 Φ_test) × (event upper limit / n_signal)
E^2 Φ < 10^-6 × 5.78 / 66.7
E^2 Φ_90(E) < 8.7 × 10^-8 GeV cm^-2 s^-1 sr^-1
Limit with no systematic errors: 10^-6 × 4.91 / 68.4 = 7.2 × 10^-8
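The limit arithmetic on this slide is a simple rescaling of the test flux by the ratio of the event upper limit to the predicted number of signal events:

```python
def flux_upper_limit(test_flux, event_upper_limit, n_signal):
    """E^2 Phi_90 < test_flux * (event upper limit / predicted signal events)."""
    return test_flux * event_upper_limit / n_signal

limit = flux_upper_limit(1e-6, 5.78, 66.7)          # with systematics, ~8.7e-8
limit_no_syst = flux_upper_limit(1e-6, 4.91, 68.4)  # without systematics, ~7.2e-8
```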
35
(Plot: 90% signal region, neutrino energy in GeV.)
The limit is valid over the region that contains 90% of the signal in the final data sample (Nch > 100). 90% region: 10^4.2 GeV to 10^6.4 GeV (15.8 TeV to 2.51 PeV).
36
Limit from this analysis
37
Testing signal models other than E-2
These models have not yet been unblinded, but the
sensitivity and suggested NChannel cut are listed.
38
Many thanks...
Thanks to Gary Hill, who says he always has time for another question. Thanks to Teresa Montaruli for help with the neutrino flux models and their uncertainties. Thanks to the diffuse discussion group: Gary, Teresa, Chris, Paolo, Albrecht and Francis. Thanks to John and Jim for your advice in the office every day.
39
(No Transcript)
40
Ndirc > 13
41
Ldirb > 170
42
abs(Smootallphit) < 0.250
43
Median resolution < 4.0°
44
No Cogz cut
45
Likelihood ratio cut was zenith dependent: Jkchi (down, Bayesian) − Jkchi (up, Pandel) > −38.2·cos(Zenith·7/57.29) + 27.506
46
(Diagram: zenith angles 100°, 123°, 180°.)
Zenith > 100°
47
FINAL CUTS
Ndirc (Pandel) > 13
Ldirb (Pandel) > 170
abs(Smootallphit (Pandel)) < 0.250
Zenith (Pandel) > 100°
Median_resolution (P08err1, P08err2) < 4.0°
Jkchi (down, Bayesian) − Jkchi (up, Pandel) > −38.2·cos(Zenith·7/57.29) + 27.506
Nch > 100