6 multi-CPU systems to be used as compute servers. These must be based on Intel Pentium CPUs and include a RAID-based disk storage subsystem. Approximately 6TB of usable disk space is required across these systems.

Transcript and Presenter's Notes

1
(No Transcript)
2
Extract from the tender
  • 6 multi-CPU systems to be used as compute
    servers. These must be based on Intel Pentium
    CPUs and include a RAID-based disk storage
    subsystem. Approximately 6TB of usable disk
    space is required across these systems.
  • 24 dual-processor PCs of varying specification.
    This lot is divided into sub-lots, one for
    delivery to the UK, the other for the USA. The
    sub-lots may be bid for separately.
  • 5TB of disk storage (to be placed on an existing
    Fibre Channel based storage subsystem in the
    USA).
  • 4 tape libraries for delivery in the UK.

We included one line that caused the responses to
vary enormously: "The system should be able to
support substantial IO loads from file system to
CPU (probably in the range 100-300MB/sec)."
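To see why that range matters, here is a back-of-envelope sketch (mine, not from the tender; assuming decimal units, 1TB = 10^12 bytes and 1MB = 10^6 bytes) of how long one pass over a 0.5TB dataset takes at each end of the range:

```python
def scan_seconds(dataset_bytes: float, mb_per_sec: float) -> float:
    """Time to stream a dataset once at a sustained IO rate (1MB = 1e6 bytes)."""
    return dataset_bytes / (mb_per_sec * 1e6)

TB = 1e12  # decimal terabyte
for rate in (100, 300):
    minutes = scan_seconds(0.5 * TB, rate) / 60
    print(f"0.5TB at {rate}MB/sec: about {minutes:.0f} minutes")
```

At 100MB/sec a single pass takes roughly 83 minutes; at 300MB/sec, under half an hour. A factor of three in required sustained bandwidth pushes vendors toward very different storage architectures, which is why the bids diverged.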
3
The Tender: Probable Solutions
  • 4 university sites
  • 8-way multiprocessor 700MHz Xeon system with 2GB
    RAM and 0.5TB of direct-attached Fibre Channel
    RAID disks.
  • RAL
  • Two of the above servers.
  • 2.5TB of disk, probably using an FC switch-based SAN.
  • 16 farm nodes (dual processor, pizza-box style?).
  • Fermilab, Chicago
  • 5TB of disks to attach to the existing SGI large
    disk storage.
  • 8 high-powered workstation-class PCs (dual PIII,
    180GB IDE disk).
  • Tape systems have been postponed, as Fermi has
    had trouble with Exabyte Mammoth drives.

4
Why Multi-CPU, Not a Farm?
  • The system is to support multiple physicists working
    simultaneously on different data sets (analysis,
    not MC).
  • All disks are locally attached, hence high IO
    bandwidth.
  • Easier to manage one large system than a farm?
  • Large on-chip cache and large memory should
    provide excellent performance.
  • A large chunk of money allows you to consider
    buying this type of well-engineered high-end
    system, which can be expanded later with
    commodity-priced PCs to provide a farm.
  • Farms are well suited to Monte Carlo work (e.g.
    the RAL 16-node farm).

5
How Much?
  • Servers of this type cost from 25K to 50K.
  • Fibre Channel disk subsystems: > 15K.
  • Prices vary from vendor to vendor. Bids for the
    total contract varied from 800K to greater than
    2 million.
  • We hope to spend less than 700K.

6
Evaluation Kit
  • Intel 8-way 500MHz Xeon tested earlier (October
    2000).
  • Dell 8450 8-way 700MHz Xeon with 2GB RAM and 29GB
    SCSI RAID disks, with a Dell 650F FC disk storage
    unit containing 10 36GB disks. (Installed
    Fermi 6.1.1, tested CDF code, and ran Bonnie
    benchmarks among others.)
  • Benchmarking IBM kit in Greenock, Scotland next
    week.
  • Other contenders: Compaq, Hitachi, Solution
    Centre, Kingswell, Pars.
  • Tender advertised in the EU journal.

7
Benchmarking
  • The main aim was high IO bandwidth.
  • FC should give 100MB/sec.
  • The Bonnie test is difficult. (The first rule is to
    use files much larger than RAM. That is hard when
    you have 2GB of RAM, as the maximum file size on
    ext2 is 2GB.)
  • Simple copying tests were hampered by Linux caching.
  • Physics tests also had to be modified to stop
    caching skewing the results.
  • CPU tests scale well; IO performance was less than
    expected. It will be clearer once we have used
    other equipment.
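The Bonnie file-size problem can be made concrete: to defeat the page cache the test data must be much larger than RAM (a factor of 2 is used below as an illustrative rule of thumb, not a figure from the slides), yet ext2 caps a single file at 2GB, so the data has to be split across several files. A minimal sketch of the arithmetic (the function name and factor are mine):

```python
import math

GB = 2**30

def plan_test_files(ram_bytes: int, max_file_bytes: int, factor: int = 2):
    """Number of files (and total bytes) needed so the benchmark data set
    is `factor` times RAM while each file stays under the per-file limit."""
    total_bytes = factor * ram_bytes
    n_files = math.ceil(total_bytes / max_file_bytes)
    return n_files, total_bytes

# 2GB of RAM against ext2's 2GB per-file limit: at least two full-size files.
n, total = plan_test_files(2 * GB, 2 * GB)
print(f"{n} files, {total / GB:.0f}GB of test data")
```

With 2GB of RAM this means at least two 2GB files, and the benchmark (or a wrapper script) must interleave IO across them so the kernel cannot hold the working set in memory.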

8
4 universities: Glasgow, Liverpool, Oxford and
UCL. Each gets one large server with 0.5TB of fully
mirrored RAID disks (i.e. 1TB raw). The disk
subsystem will probably be based on Fibre Channel
disks to provide high I/O throughput.
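The mirroring arithmetic is simply that raw capacity is double usable capacity; a trivial check (mine, assuming plain RAID-1 mirroring and decimal terabytes):

```python
def mirrored_raw_bytes(usable_bytes: float) -> float:
    """RAID-1 writes every block twice, so raw capacity = 2 x usable."""
    return 2 * usable_bytes

TB = 1e12  # decimal terabyte
print(mirrored_raw_bytes(0.5 * TB) / TB, "TB raw per university server")
```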
9
(No Transcript)
10
(No Transcript)
11
IBM
12
The RAL system is double a university system, but
with 2.5TB of usable disk.
13
RAL: a farm of 16 dual-CPU systems. The 1U servers
now available enable efficient use of space and
plenty of expandability. Phase two will be in
12-18 months' time, with roughly the same spend.
14
(No Transcript)
15
IBM
16
The High-Bandwidth, High-End, High-Cost
Solution. Some vendors chose to aim at the 300MB/s
end of our suggested IO bandwidth range. Three
vendors chose to offer the LSI Logic MetaStor
storage system, which is capable of delivering
350MB/sec of bandwidth. These systems cost roughly
twice those offered by Dell/IBM, and the cost of
expansion was over double the street price per disk.
17
LSI Logic MetaStor
High-end solution offered by some vendors.
Provides 350MB/s performance, at a price!
Cost of an individual FC 73GB disk: 2000 quoted;
street price 700; IBM price 590.
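Taking the per-disk figures on this slide at face value (currency symbols were lost in the transcript), the premium works out to roughly three times street price per gigabyte:

```python
DISK_GB = 73  # FC disk capacity from the slide

prices = {"quoted": 2000, "street": 700, "IBM": 590}
for label, price in prices.items():
    print(f"{label}: {price / DISK_GB:.1f} per GB")
```

That is about 27.4 per GB at the quoted price versus 9.6 at street price, which is what makes the expansion cost of the MetaStor solution hard to justify.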
18
Fermilab Disks
  • This lot was tightly specified to match existing
    systems at Fermi.
  • They use Chaparral FC controllers connected to
    SCSI disks, all in one Kingston(?) shelf. All
    cables are internal to the shelf; all that comes
    out is power and fibre.
  • Vendors that offer alternatives are less favoured.

19
This controller is used by Fermilab and is the
preferred option for the disks there. It can be
placed in the rack along with 9 disks to keep all
cables internal; just Fibre Channel comes out.
20
The third option
  • A new Chaparral controller: 200MB/sec FC to
    Ultra160 SCSI disks.
  • This brand-new controller could offer increased
    bandwidth.
  • Requires new FC HBAs - availability?
  • The size of the vendor is a worry.

21
Conclusions
  • Tendering is a tricky business.
  • Hard to predict.
  • Very time-consuming.
  • We should make a decision Real Soon Now.