Using LONI to Simulate Mass Transferring Binary Stars

Transcript and Presenter's Notes

1
Using LONI to Simulate Mass Transferring Binary
Stars
  • Dominic Marcello
  • Graduate Student at Louisiana State University
    Department of Physics and Astronomy
  • dmarcello@phys.lsu.edu
  • Major Professor: Joel Tohline

2
What is a star?
  • A hot ball of self-gravitating gas
  • Elements fuse in the core to produce energy
  • Almost all stars begin by fusing H into He
  • This energy diffuses toward the surface slowly,
    over many thousands of years
  • The gas pressure gradient is balanced by gravity

3
High Mass Stellar Evolution
  • A He core builds up and H fusion continues in a
    shell around this core.
  • When enough heat builds up at the core, He will
    begin to fuse into heavier elements (C and O).
  • For high mass stars, this process can continue
    with successively heavier elements, forming an
    onion-like structure of shells
  • When the core tries to fuse Fe, it collapses,
    resulting in a supernova
  • Source: http://www.astro.cornell.edu/academics/courses/astro101/herter/lectures/lec19.htm

4
Low Mass Stellar Evolution
  • The core never gets hot enough to fuse elements
    heavier than He
  • Sheds its outer layers as He burns, forming a
    planetary nebula
  • The leftover He, C, and/or O (and possibly
    heavier elements) core is called a white dwarf (WD)
  • A WD has no nuclear energy source; it will cool
    over many millions of years
  • Supported against gravity by electron degeneracy
    pressure
  • Most stars are low mass and will end their
    evolution as WDs

Source: Hubble Space Telescope
Source: http://cse.ssl.berkeley.edu/bmendez/ay10/2000/cycle/whitedwarf.html
5
What is a binary star?
  • A binary star is two stars orbiting their common
    center of mass
  • Very common - about half of all the stars in the
    sky are actually systems of two or more stars
  • Are formed out of the same cloud of gas at about
    the same time; gravitational capture is
    extremely unlikely

Source: http://en.wikipedia.org/wiki/File:Orbit5.gif
6
What is a mass transferring binary star?
  • If the two stars get close enough to one another,
    one star can pull gas off the surface of the
    other star.
  • This can happen if one star expands due to its
    evolution or if it is brought closer to its
    companion by angular momentum losses.
  • About half of all binaries probably experience
    mass transfer at some point

7
(No Transcript)
8
Evolution of a Binary System
Source: Postnov, K.A., Living Reviews in Relativity 9 (2006), 6.
9
What happens to mass transferring double white dwarf (DWD) binaries?
  • What can happen if the mass transfer runs away?
  • When the mass of a WD exceeds the Chandrasekhar
    limit of 1.4 Solar masses, degeneracy pressure is
    no longer able to support it against gravity
  • Supernova
  • The accretor is pushed barely over this limit,
    causing a gravitational collapse that leads to a
    thermonuclear flash, exploding the star
  • Stars merge into a new object: electrons are
    pushed into protons, forming an object made
    entirely of neutrons (a neutron star), supported
    by neutron degeneracy pressure
  • OR the mass transfer rate may level off, and
    mass transfer may last for a long time
  • A cataclysmic variable may result: helium that
    builds up on the surface of the accretor
    periodically gets dense and hot enough to burn,
    causing a smaller explosion, a nova

10
Gravitational Waves
  • Einstein's theory of General Relativity predicts
    the existence of gravitational waves
  • These waves can carry away angular momentum from
    the binary, causing the stars to get closer
    together. Mass transfer and merger can result.
  • Merging compact objects should emit these waves
  • Very weak and have not yet been directly detected
  • Lots of noise: gravitational wave detectors such
    as LIGO need to know what to look for
  • Source: http://en.wikipedia.org/wiki/File:116658main_dwarf_collage_lg.gif

11
Laser Interferometer Gravitational-Wave Observatory (LIGO)
  • http://www.ligo.caltech.edu/
  • Three detectors: one located in Livingston
    Parish and two in Washington State
  • Gravitational waves will distort space
  • LIGO has two 4 km arms at right angles and
    constantly monitors their length using lasers and
    mirrors.
  • If a gravitational wave were to pass over LIGO,
    its arms would change lengths

Source: http://en.wikipedia.org/wiki/File:GravitationalWave_PlusPolarization.gif
Source: https://blogs.creighton.edu/gkd58409/?p=25
12
How do we use a computer to simulate a binary
star system?
  • Stars behave according to the equations of
    hydrodynamics
  • These equations are continuous
  • They must be transformed into a discrete form to
    be understood by a computer

13
Simulation on Discrete 3D Grid
  • The evolution variables (density, momentum,
    energy) live at the centers of the cells of a 3D
    cylindrical mesh
  • Compute the flow rate of the evolution variables
    into and out of each cell
  • Multiply by the time step size
  • Accumulate the new result into each cell (a
    minimal sketch follows below)
  • The time step size is limited by the cell
    geometry and the flow rate (a CFL-type condition)
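
As a rough illustration only (not the actual LSU binary code), here is a
minimal 1D finite-volume sketch in C: a single density variable is advected
on a periodic grid with upwind face fluxes, multiplied by a CFL-limited time
step, and accumulated into each cell. The grid size, flow speed, and flux
choice are assumptions made purely for this example.

    /* Minimal 1D finite-volume sketch of the update described above:
       compute face fluxes, multiply by the time step, accumulate into
       each cell.  Illustrative only; the real code works on a 3D
       cylindrical grid with several coupled variables. */
    #include <stdio.h>

    #define N 100                        /* number of cells                  */

    int main(void)
    {
        double rho[N];                   /* cell-averaged density            */
        double flux[N + 1];              /* flow rate through each cell face */
        double dx = 1.0 / N, v = 1.0;    /* cell width and flow speed        */
        double dt = 0.5 * dx / v;        /* step limited by geometry and
                                            flow rate (CFL condition)        */

        for (int i = 0; i < N; i++)      /* initial condition: a density step */
            rho[i] = (i < N / 2) ? 1.0 : 0.1;

        for (int step = 0; step < 200; step++) {
            /* upwind flux through each face (periodic boundaries)           */
            for (int i = 0; i <= N; i++) {
                int upwind = (i == 0) ? N - 1 : i - 1;
                flux[i] = v * rho[upwind];
            }
            /* accumulate: change = -(flux out - flux in) * dt / dx          */
            for (int i = 0; i < N; i++)
                rho[i] -= (flux[i + 1] - flux[i]) * dt / dx;
        }

        double total = 0.0;              /* total mass should be conserved   */
        for (int i = 0; i < N; i++)
            total += rho[i] * dx;
        printf("total mass after 200 steps: %f\n", total);
        return 0;
    }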

14
Simulation Size
  • Our typical binary star simulation is done on a
    grid of 170 radial zones, 256 azimuthal zones,
    and 55 vertical zones.
  • 170 x 256 x 55 ≈ 2.4 million zones
  • About 8 conserved variables per zone means about
    19 million variables.
  • For double precision floating point (8 bytes per
    number), this means about 146 MB per frame
  • There may be many hundreds of actual time steps
    between frames
  • We output 100-120 frames per binary orbit and
    typically run for 10 to 30 orbits.
  • A data set for a full run is close to ½ TB (a
    quick check of this arithmetic follows below)
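
These storage figures follow from straightforward arithmetic; the short C
snippet below simply reproduces the back-of-envelope calculation, using the
upper end of the quoted frame and orbit counts (an assumption made for this
check, not a statement about any particular run).

    /* Quick back-of-envelope check of the storage numbers on this slide. */
    #include <stdio.h>

    int main(void)
    {
        long zones = 170L * 256L * 55L;                 /* ~2.4 million zones    */
        long vars  = zones * 8L;                        /* 8 conserved vars/zone */
        double frame_mb = vars * 8.0 / (1024 * 1024);   /* 8-byte doubles        */

        /* upper end of the quoted ranges: 120 frames/orbit, 30 orbits */
        double run_gb = frame_mb * 120 * 30 / 1024;

        printf("zones = %ld, variables = %ld\n", zones, vars);
        printf("per frame: %.0f MB, full run: ~%.0f GB\n", frame_mb, run_gb);
        return 0;
    }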

15
What to do with the data
  • A data set that large is hard to comprehend
    without reduction
  • We use visualization tools such as VisIt to look
    at the data in different ways
  • 2D slices or cross sections with color
    representing value
  • Contour plots, either 2D or 3D
  • Spreadsheet of entire data set
  • Able to make a series of data files into a movie
  • Many more features
  • We also use simple C codes to read through the
    data files and compute global quantities that
    are of interest (a stripped-down sketch follows
    after this list), such as
  • Total angular momentum of components and system
  • Orbital separation
  • Mass transfer rate from donor to accretor
  • Many others
  • We can then use gnuplot to plot these quantities
    over time
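
As a sketch of what such a post-processing code might look like, the C
fragment below reads one frame as a flat array of double-precision densities
and prints their sum, so the values can be collected over time and plotted
with gnuplot. The file layout and grid constants assumed here are purely
illustrative and are not the actual frame format.

    /* Sketch of a simple post-processing reader: load one frame and print
       the sum of the density field.  The file layout assumed here (one
       flat array of doubles) and the grid constants are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NR   170                     /* radial zones     */
    #define NPHI 256                     /* azimuthal zones  */
    #define NZ    55                     /* vertical zones   */

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s frame.bin\n", argv[0]);
            return 1;
        }

        FILE *fp = fopen(argv[1], "rb");
        if (!fp) { perror("fopen"); return 1; }

        size_t n = (size_t)NR * NPHI * NZ;
        double *rho = malloc(n * sizeof *rho);
        if (!rho || fread(rho, sizeof *rho, n, fp) != n) {
            fprintf(stderr, "read failed\n");
            return 1;
        }
        fclose(fp);

        double total = 0.0;              /* a real code would multiply by
                                            cell volumes here             */
        for (size_t i = 0; i < n; i++)
            total += rho[i];

        printf("%s  density_sum = %e\n", argv[1], total);
        free(rho);
        return 0;
    }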

16
Example of 3D Contour Movie
17
Example of 2D Slice
18
2D slice with contours
19
1D plots made with gnuplot
20
How much computation is needed?
  • Each timestep requires on the order of a few
    dozen floating point operations per variable.
  • This means on the order of several billion
    floating point ops per timestep
  • Each orbit requires around 50,000-100,000
    timesteps
  • Running on a single PC, a single simulation would
    take years to complete.
  • Therefore we need to use High Performance
    Computing (HPC) systems
  • Many hundreds to thousands of processors, all
    interconnected by network cables
  • Our binary simulation code typically runs on
    between 64 and 150 processors. It takes about a
    month to run a full simulation.

21
Commodity Clusters
  • Developed as cheap way to build HPC
  • Initially used off-the-shelf parts linked
    together by network cables
  • Now computer companies (Dell, IBM) sell them
    ready-made
  • Kentucky Linux Athlon Testbed 2 (Source:
    University of Kentucky)
  • Built in 2000 for around $40k
  • 22.8 GFLOPS
  • < 1 GFLOPS per $1,000
  • National Center for Supercomputing Applications
    (NCSA) (Source: NCSA)
  • Built in 2007 (retiring now)
  • 9,600 processors
  • 85 TFLOPS

22
(No Transcript)
23
Stone Soupercomputer
  • Researchers at Oak Ridge National Lab could not
    get a grant to build an HPC system
  • In 1997, they collected various unused computers
    around the lab and hooked them together
  • Heterogeneous cluster
  • Named Stone Soupercomputer

24
Top 500 Supercomputer Sites
  • Semi-Annual Ranking of World's Supercomputers
  • Ranking determined by measuring FLOPS (Floating
    Point Operations Per Second) using standard
    benchmarks
  • http://www.top500.org/

25
Louisiana Optical Network Initiative (LONI)
  • Queenbee (Downtown Baton Rouge)
  • 5 Identical Linux Clusters
    • Eric (LSU)
    • Oliver (ULL)
    • Louie (Tulane)
    • Poseidon (UNO)
    • Painter (LaTech)
  • 5 IBM AIX Clusters
    • Bluedawg (LaTech)
    • Ducky (Tulane)
    • Zeke (ULL)
    • Neptune (UNO)
    • LaCumba (Southern University, Baton Rouge)
  • Whole System
    • 1,385 Nodes, 8,520 Cores
    • 80 TFLOPS
    • 8.8 TB RAM
  • Queenbee
    • 680 Nodes x 8 Cores/Node = 5,440 Cores
    • 50.7 TFLOPS
    • 1 GB/Core
    • Quad-Core 2.33 GHz Intel Xeon, 64-bit
  • Linux Clusters (each)
    • 128 Nodes x 4 Cores/Node = 512 Cores
    • 5 TFLOPS
    • 1 GB/Core
    • Dual-Core 2.33 GHz Intel Xeon, 64-bit
  • AIX Clusters (each)
    • 13 Nodes x 8 Cores/Node = 104 Cores
    • 0.851 TFLOPS
    • 2 GB/Core
    • 1.9 GHz IBM POWER5

26
(No Transcript)
27
Top 500 List June 2007
28
Top 500 List November 2010
29
How is parallel programming different?
  • Problem must be broken down into different chunks
  • Those chunks are each independently operated on
    by their own processor
  • Synchronization and communication between the
    processors and their data is required
  • A message-passing library such as MPI (the
    Message Passing Interface) can be used to handle
    this communication and synchronization (a
    minimal sketch follows below)
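
To give a flavor of what MPI communication looks like in C, here is a
minimal, hypothetical sketch in which each process owns one chunk of a 1D
array and swaps edge ("ghost") values with its neighbors before a timestep;
it is not taken from the binary simulation code.

    /* Minimal MPI sketch: each process owns NLOCAL cells plus two ghost
       cells and exchanges edge values with its neighbors.
       Compile with mpicc, run with mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    #define NLOCAL 10                    /* cells owned by each process */

    int main(int argc, char **argv)
    {
        int rank, size;
        double u[NLOCAL + 2];            /* local cells plus two ghosts  */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        u[0] = u[NLOCAL + 1] = -1.0;     /* ghosts start as "empty"      */
        for (int i = 1; i <= NLOCAL; i++)
            u[i] = rank;                 /* dummy data: my own rank      */

        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        /* send my right edge cell right, receive my left ghost cell     */
        MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 0,
                     &u[0],      1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* send my left edge cell left, receive my right ghost cell      */
        MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  1,
                     &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d of %d: left ghost = %g, right ghost = %g\n",
               rank, size, u[0], u[NLOCAL + 1]);

        MPI_Finalize();
        return 0;
    }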

30
Parallel Execution Model
31
Parallel programming is (usually) non-trivial
  • There are many pitfalls and efficiency concerns
    in parallel programming not present in serial
    programming
  • Poor design can result in poor scaling: less
    computing output per processor
  • Exceptions include embarrassingly parallel
    problems like computing pi Monte Carlo style
    (see the sketch after this list)
  • These problems require little communication
    between processors
  • Most interesting problems, such as fluid
    dynamics, require lots of communication between
    processors
  • This results in overhead and can result in poor
    scaling
  • Past a certain point, adding more processors
    results in less speed. Breaking this barrier is
    at the forefront of HPC research and will
    require new execution models (e.g., ParalleX)
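
For example, the Monte Carlo pi calculation mentioned above needs only one
reduction at the end of the run, so it scales almost perfectly. Below is a
minimal sketch in C with MPI; the sample count and seeding scheme are
arbitrary choices made for this example.

    /* Embarrassingly parallel example: estimate pi by throwing random
       points at the unit square and counting how many land inside the
       quarter circle.  Each rank works independently; the only
       communication in the whole program is one MPI_Reduce. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        long n_per_rank = 1000000, hits = 0, total_hits = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        srand(1234 + rank);                     /* different stream per rank */
        for (long i = 0; i < n_per_rank; i++) {
            double x = rand() / (double)RAND_MAX;
            double y = rand() / (double)RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }

        /* combine the per-rank counts on rank 0 */
        MPI_Reduce(&hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi is approximately %f\n",
                   4.0 * total_hits / (double)(n_per_rank * size));

        MPI_Finalize();
        return 0;
    }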

32
What other stuff can HPC do?
  • Molecular dynamics
  • Bioinformatics
  • Climate Models
  • Statistics
  • Hurricane forecast modeling
  • Time-critical, so HPC is crucial
  • Source: http://weatherbe.files.wordpress.com/2010/09/igor_track21.gif

33
Next Sessions
  • LONI programming and execution environment
  • Visualization tools such as VisIt and gnuplot
  • Programming in MPI