1
Driving the Power of AIX
  • Author - Ken Milberg/PowerTCO
  • Publisher - Merrikey Lee/MC Press
  • Foreword - Joefon Jann/IBM

2
Agenda
  • Driving the Power of AIX
  • Table of Contents
  • Five-Step Tuning Methodology
  • Best Practices
  • CPU
  • Memory
  • I/O (Disk and Network)
  • AIX 6.1 Changes Summary
  • Contact Information

3
Table of Contents
Section 1. Introduction
  Chapter 1. Tuning Methodology ........... 10
  Chapter 2. Introduction to AIX .......... 16
  Chapter 3. Introduction to Power ........ 19
  Chapter 4. Summary, Tips and Quiz ....... 24
Section 2. CPU Tuning
  Chapter 5. Introduction ................. 32
  Chapter 6. Monitoring ................... 35
  Chapter 7. Tuning ....................... 60
  Chapter 8. Summary, Tips and Quiz ....... 74
Section 3. Memory Tuning
  Chapter 9. Introduction ................. 84
  Chapter 10. Monitoring .................. 90
  Chapter 11. Tuning ...................... 106
  Chapter 12. Summary, Tips and Quiz ...... 118
Section 4. Disk I/O Tuning
  Chapter 13. Introduction ................ 128
  Chapter 14. Monitoring .................. 137
  Chapter 15. Tuning ...................... 151
  Chapter 16. Summary, Tips and Quiz ...... 157
Section 5. Network I/O Tuning
  Chapter 17. Introduction ................ 168
  Chapter 18. Monitoring .................. 179
  Chapter 19. Tuning ...................... 195
  Chapter 20. Summary, Tips and Quiz ...... 207
Section 6. Bonus Section
  Chapter 21. AIX 6.1 Tuning .............. 216
  Chapter 22. Oracle on AIX ............... 236
  Chapter 23. Linux on Power (LoP) ........ 252
4
Driving the Power of AIX
  • Started planning over a year ago; a combination
    of AIX 5.3 and AIX 6.1 material.
  • Subject matter separated by subsystem: CPU, RAM,
    Disk, and Network I/O.
  • Bonus chapters on AIX 6.1, Oracle on AIX, and
    Linux on Power (LoP).
  • The book you have today is self-published; MC
    Press will release it in the Fall.
  • Foreword written by Joefon Jann, IBM
    Distinguished Engineer, co-inventor of the Jann
    MPP Workload Model for modeling parallel
    workloads, and an architect of the Deep Blue
    chess machine that beat Kasparov.

5
Five-Step Tuning Methodology
  • Establish a baseline
  • Stress test and monitor
  • Identification of bottleneck
  • Tune
  • Repeat
  • You cannot tune your systems without first
    understanding:
  • - What the acceptable level of performance is
    to the business.
  • - What the system performs like when it is
    first rolled out.
  • The time to start tuning your system is when you
    are not actively tuning; you must start
    monitoring from day one (see the example below).
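As a minimal sketch of day-one baseline collection (assuming nmon is installed; the interval, count, and output directory here are purely illustrative), a daily cron entry can record a full day of data for later comparison:

# one nmon recording per day: 5-minute snapshots, 288 per day
0 0 * * * /usr/bin/nmon -f -s 300 -c 288 -m /var/perf/baseline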

6
Best Practices
  • Never roll out more than one change at a time
  • How will you know which change had the effect
    you think it had if you roll in multiple
    changes at once? There are some exceptions,
    where certain integrated parameters are
    recommended to be changed together. The IBM
    documentation is good about informing you if/when
    there are parameters that should be changed
    simultaneously.
  • Always test your changes in other
    environments prior to production, i.e.,
    Development, Stress, Q/A...
  • The quickest way to the SA unemployment line is
    to roll out changes to production without careful
    testing. This cannot be stressed enough.

7
CPU - Introduction
  • Unlike other subsystems, less to tune and more to
    plan, monitor and architect
  • How is your partition architected? Entitled
    capacity, number of virtual processors, SMT.
  • How are jobs being run, and when? Is cron being
    utilized?
  • Oftentimes bottlenecks that appear CPU-related
    are really memory or I/O bound.
  • Take the time to understand what the problem
    really is.
  • Don't make the mistake of throwing more iron at
    the problem.
  • Increasing the number of CPUs per partition is
    usually not the answer.

8
CPU - Monitoring
  • Generic tools
  • - vmstat, sar
  • AIX-specific
  • - nmon, topas, mpstat
  • Tracing tools
  • - curt, filemon, netpmon, pprof, splat

9
CPU - Monitoring
What is the bottleneck?
root@lpar30p682e_pub/ > vmstat 2 5

System configuration: lcpu=4 mem=3072MB ent=0.40

kthr    memory              page                     faults              cpu
----- ------------- ------------------------ ------------- -----------------------
 r  b    avm    fre re pi po fr sr cy   in    sy  cs us sy id wa   pc   ec
 1  0 128826 641397  0  0  0  0  0  0  448    87 138  0  1 98  0 0.01  2.8
 1  0 128826 641397  0  0  0  0  0  0  385    10 136  0  1 99  0 0.01  2.2
 1  0 128826 641397  0  0  0  0  0  0  381    13 138  0  1 99  0 0.01  2.2
 1  0 128826 641397  0  0  0  0  0  0  364    40 138  0  1 99  0 0.01  2.4
 1  0 128826 641397  0  0  0  0  0  0  610    13 138  0  2 98  0 0.01  3.3
10
CPU - Monitoring
  • vmstat (Unix-generic)
  • vmstat [-fsviItlw] [-p | -P pagesize|ALL]
    [Drives] [Interval [Count]]

r - The average number of runnable kernel threads
over the sampling interval you have chosen.
b - The average number of kernel threads that are in
the virtual memory waiting queue over your
sampling interval. r should always be higher than
b; if it is not, it usually means you have a CPU
bottleneck.
fre - The size of your memory free list. Don't worry
so much if this number is really small. More
important, determine whether any paging is going
on if this size is small.
pi - Pages paged in from paging space.
po - Pages paged out to paging space.
Our focus is on the last section, CPU:
us - User time
sy - System time
id - Idle time
wa - Waiting on I/O
pc - Number of physical processors consumed
(displayed only if the partition is configured
with shared processors)
ec - Percentage of entitled capacity (displayed
only if the partition is configured with shared
processors)

11
CPU - Monitoring
What is the bottleneck?
root@lpar30p682e_pub/ > vmstat 2 5

System configuration: lcpu=4 mem=3072MB ent=0.4

kthr    memory              page                     faults              cpu
----- ------------- ------------------------ ------------- -----------------------
 r  b    avm    fre re pi po fr sr cy   in    sy  cs us sy id wa   pc   ec
 2  1 169829 600290  0  0  0  0  0  0  553 36538 175 64 32  4  0 0.79 84.9
 3  2 169829 600290  0  0  0  0  0  0  778 33033 175 60 29 11  0 0.84 73.
 4  1 169828 600291  0  0  0  0  0  0  403 11904 179 76 10  4 10 0.69 87.8
 2  1 169828 600291  0  0  0  0  0  0  368 30745 175 82 14  2  2 0.91 85.5
 6  2 169830 600289  0  0  0  0  0  0  395 27898 173 57 34  4  5 0.89 91.
12
CPU - Monitoring
What is the bottleneck?
root@lpar30p682e_pub/ > vmstat 2

System configuration: lcpu=4 mem=3072MB ent=0.

kthr    memory              page                     faults              cpu
----- ------------- ------------------------ ------------- -----------------------
 r  b    avm    fre re pi po fr sr cy   in    sy  cs us sy id wa   pc   ec
 1  4 128826 641397  0  0  0  0  0  0  448    87 138 24  1 40 35 0.01  2.8
 1  7 128826 641397  0  0  0  0  0  0  385    10 136 35 14 20 31 0.01  2.2
 2  7 128826 641397  0  0  0  0  0  0  381    13 138 35  4 20 41 0.01  2.2
 3  4 128826 641397  0  0  0  0  0  0  364    40 138 40 17 16 27 0.01  2.
13
CPU - Tuning
  • Process and Thread Management
  • - Third-party schedulers, nice/renice
  • SMT
  • - smtctl
  • Processor Affinity
  • - bindprocessor
  • schedo - Manages CPU scheduler tunables
  • - timeslice

14
CPU - Tuning
  • renice dynamically reassigns a priority to a
    running process. Using renice can cause the
    system to assign either a higher or a lower
    priority to a given process. When you use renice,
    you actually change the value of the priority of
    a thread (the default is 40) by changing the nice
    value of its process.
  • Let's renice a process and look at its priority:

root@lpar30p682e_pub/ > ps -l
      F S      UID    PID   PPID  C PRI NI     ADDR  SZ WCHAN   TTY  TIME CMD
 240001 A 20004773  90156 164038  0  60 20 30448400 724        pts/0 0:00 ksh
 200005 A        0 311534 376960  0  80 30 48376400 688        pts/0 0:00 ksh
 200001 A        0 376960  90156  0  60 20 6045c400 736        pts/0 0:00 ksh
root@lpar30p682e_pub/ > renice -10 376960
root@lpar30p682e_pub/ > ps -l
      F S      UID    PID   PPID  C PRI NI     ADDR  SZ WCHAN   TTY  TIME CMD
 240001 A 20004773  90156 164038  0  60 20 30448400 724        pts/0 0:00 ksh
 200005 A        0 311534 376960  0  80 30 48376400 688        pts/0 0:00 ksh
 200001 A        0 376960  90156  0  50 10 6045c400 736        pts/0 0:00 ksh
15
CPU - Tuning
  • smtctl [-m off|on [-w boot|now]]
  • The smtctl command (introduced in AIX 5.3)
    displays Symmetric Multi-Threading (SMT)
    information. SMT, part of IBM's hypervisor-based
    virtualization, PowerVM, provides two
    threads of execution per virtual processor. To
    determine whether the system is SMT enabled, run
    the command without any flags:

root@lpar30p682e_pub/ > smtctl
This system is SMT capable.
SMT is currently enabled.
SMT boot mode is not set.
SMT threads are bound to the same virtual processor.
System performance usually increases about 30
percent when SMT is enabled, so you almost always
want to enable this functionality. SMT is
best-suited for multithreaded, I/O-intensive
applications. It is not a good fit for
numerically intensive workloads.
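Toggling SMT dynamically uses the syntax shown above (no reboot is needed with -w now):

smtctl -m off -w now
smtctl -m on -w now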
16
CPU - Tuning
  • Processor Affinity
  • All threads within a process can be bound to run
    on a specified processor. AIX automatically tries
    to encourage processor affinity by having one run
    queue per CPU. Using process affinity settings to
    bind or unbind threads can help you find the root
    cause of hangs or deadlocks that are difficult to
    debug. Some applications might also run faster if
    their threads are always bound to run on one
    particular CPU.

root@lpar30p682e_pub/ > bindprocessor -q
The available processors are: 0 1 2 3
root@lpar30p682e_pub/ > bindprocessor 12769 2
This example binds process ID (PID) 12769 to logical CPU 2.
17
CPU - Tuning
  • schedo - Used to configure scheduler tuning
    parameters
  • schedo [-p|-r] -o Tunable[=NewValue]
  • schedo [-p|-r] -d Tunable
  • timeslice - Represents the largest number of
    clock ticks that a thread can be in control of
    before facing the possibility of being replaced
    by another thread. In some cases, increasing the
    timeslice can improve system throughput by
    reducing context switching. Make sure you run
    vmstat (or sar) enough to determine whether there
    really is a considerable amount of context
    switching going on.

schedo -p -o timeslice=2
Setting timeslice to 2 in nextboot file
Setting timeslice to 2
18
Memory - Introduction
  • Virtual Memory Manager (VMM) - Services all
    memory requests from the system, not just virtual
    memory. When the system accesses random access
    memory (RAM), the VMM needs to allocate space,
    even when plenty of physical memory is left on
    the system. It implements a process of early
    allocation of paging space.
  • VMM's objectives are to help minimize both the
    response time of page faults and the use of
    virtual memory where it can. Given the choice
    between RAM and paging space, the preference is
    to use physical memory if the RAM is available.
  • Working segments (computational memory) - Used
    while your processes are actually working on
    computing information. These working segments are
    temporary (transitory) and exist only up until
    the time a process terminates or the page is
    stolen. They have no real permanent disk storage
    location. When a process terminates, both the
    physical and paging spaces are released in many
    cases.
  • Persistent segments (file memory) - Have a
    permanent storage location on the disk. Data
    files or executable programs are mapped to
    persistent segments rather than to working
    segments. The data files can relate to file
    systems, such as Journaled File System (JFS),
    Enhanced Journaled File System (JFS2), or Network
    File System (NFS). These files remain in memory
    until the time that a file is unmounted, a page
    is stolen, or a file is unlinked. After a data
    file is copied into RAM, VMM controls when these
    pages are overwritten or used to store other
    data.
  • Paging and Swapping - When a process references a
    page on disk, it must be paged in, which could
    cause other pages to be paged out. VMM is constantly
    working in the background, stealing frames that
    have not been recently referenced using the page
    replacement algorithm. It also helps detect
    thrashing, which can occur when memory is
    extremely low and pages are constantly being
    paged in and out to support processing. VMM
    actually has a memory load control algorithm,
    which can detect whether the system is thrashing
    and actually tries to remedy the situation.

19
Memory - Monitoring
  • svmon (AIX-specific)
  • svmon -G [-i Intvl [NumIntvl]] [-z]
  • size - The size of real memory frames, or simply
    real memory
  • inuse - The number of frames containing actual
    pages: pages in RAM in use by processes, plus
    persistent pages that belonged to a terminated
    process and remain resident in RAM
  • free - The number of pages on the free list
  • pin - The number of pages pinned in physical
    memory (RAM), which cannot be paged out
  • virtual - The number of pages allocated in the
    virtual space

svmon -G
               size      inuse       free        pin    virtual
memory       786432     211735     574697     110862      17442
pg space     393216        863
20
Memory - Monitoring
Memory Leaks - A program or process that keeps on
allocating more memory and does not release it.
This situation can cause real memory to be used
up extremely quickly and, in a worst-case
scenario, can even precipitate a system crash by
causing the system to run out of paging space.
root@lpar30p682e_pub/home/u0004773 > svmon -uP -t 5 | grep -p Pid
-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual 64-bit Mthrd  16MB
  286880 xmwlm            21074     7802        0    20859      N     N
  319648 IBM.ERrmd        20666     7815        0    20532      N     Y
  336046 rmcd             19919     7805        0    19276      N     Y
  413902 IBM.ServiceRM    19680     7818        0    19242      N     Y     N
  401612 IBM.CSMAgentR    19623     7816        0    19462      N     Y
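To watch a suspect process for leak-like growth over time (a simple sketch; PID 286880 is just the example from above), sample svmon periodically and watch the Virtual column:

while true
do
  svmon -P 286880 | grep -p 286880   # re-sample the one process
  sleep 60
done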
21
Memory - Monitoring
lsps (AIX-specific)
lsps { -s | [-c | -l] [-a | PSname | -t {lv|nfs}] }
The lsps command provides the paging space
statistics. This very important command should be
part of your repertoire. One additional view,
besides the -a view illustrated below, uses the -s
flag. It's important to note that the -a flag
reports only paging space that is being used,
while -s provides a summary of all paging space
allocated, including early page space allocation.
root@lpar30p682e_pub/ > lsps -a
Page Space  Physical Volume  Volume Group  Size    %Used  Active  Auto  Type
hd6         hdisk0           rootvg        1536MB      1  yes     yes   lv
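For the summary view, a sketch of typical -s output (the figures correspond to the -a listing above):

root@lpar30p682e_pub/ > lsps -s
Total Paging Space   Percent Used
      1536MB               1%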
22
Memory - Tuning
  • vmo - Command to configure Virtual Memory
    Manager tuning parameters; 61 tunables in AIX
    5.3.
  • vmo -h [tunable] | -L [tunable] | -x [tunable]
  • vmo [-p|-r] (-a | -o tunable)
  • vmo [-p|-r] (-D | -d tunable | -o tunable=value)
  • Page Space Allocation - AIX provides three
    different modes of paging space allocation:
    deferred page space allocation (DPSA), late page
    space allocation (LPSA), and early page space
    allocation (EPSA). The default policy is deferred
    page space allocation.
  • Paging Space - A type of logical volume with
    allocated disk space that stores information
    which is resident in virtual memory but is not
    currently being accessed.

23
Memory - Tuning
  • minperm, maxperm, maxclient, and lru_file_repage
  • We definitely want the Virtual Memory Manager to
    favor working storage, meaning we don't want AIX
    to page working storage. What we really want is
    for the system to favor the caching that the
    database (Oracle in this case) uses. The way to
    do this is to set the vmo command's maxperm
    parameter to a high enough value while also
    making certain that the lru_file_repage parameter
    is set correctly.
  • minperm - The point below which the page stealer
    algorithm will steal file or computational pages,
    regardless of repaging rates.
  • maxperm - The point above which the page stealer
    will steal only file pages.
  • maxclient - The maximum percentage of RAM that
    can be used to cache client pages.
  • lru_file_repage - Setting this value to 0 (off)
    allows AIX to free only file cache memory
    (provided numperm is greater than minperm and VMM
    can steal enough memory to satisfy demand),
    virtually guaranteeing that working storage
    remains in memory.
  • The most important vmo settings are minperm and
    maxperm. Setting these parameters appropriately
    will ensure that your system is tuned to favor
    either computational memory or file memory.
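Before changing anything, check where these tunables stand today (a quick query; note that on AIX 5.3 the tunable names carry a % suffix, e.g. minperm%):

root@lpar30p682e_pub/ > vmo -a | egrep "perm%|client%|lru_file_repage"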

24
Memory - Tuning
  • Our old approach to tuning minperm and maxperm
    was to set maxperm to a low number, much lower
    than the default value (20), and set minperm to
    less than or equal to 10. This is how we normally
    would have tuned our database server.
  • The new approach is to set maxperm to a very high
    value, higher than its default (80), and to
    make sure lru_file_repage is set to 0. IBM
    introduced the lru_file_repage parameter in AIX
    5.2 with ML4 and in AIX 5.3 with ML1. The
    lru_file_repage value indicates whether the VMM
    repage counts should be considered and what type
    of memory should be stolen. The default setting
    is 1 (it becomes 0 in AIX 6.1), so we need to
    change it to 0 to have the VMM steal file pages
    rather than computational pages. This technique
    solves the old problem of having to limit JFS2
    file cache to guarantee memory for applications
    such as Oracle.

vmo -p -o minperm%=5
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
25
Memory - Tuning
  • Page Space Allocation
  • AIX provides three different modes of paging
    space allocation: deferred page space allocation
    (DPSA), late page space allocation (LPSA), and
    early page space allocation (EPSA).
  • The default policy is deferred page space
    allocation. DPSA works by delaying the allocation
    of paging space until the time when it is
    necessary to page out the page. This approach
    ensures that there is no wasted paging space, an
    important component of demand paging. In fact,
    when you have a large amount of RAM, you may
    actually never even use any of your paging space.
  • LPSA causes paging disk blocks not to be
    allocated until the corresponding pages in RAM
    are touched. This method is usually intended for
    environments where optimum performance is more
    important than reliability. The reason is that in
    this scenario a program can fail due to lack of
    memory.
  • The EPSA policy is usually used if you want to
    make certain that processes won't be killed
    because of low paging conditions. EPSA does this
    by preallocating paging space. This is the
    opposite end of the spectrum from LPSA. EPSA is
    used in environments where reliability rules. To
    turn on EPSA, you set the PSALLOC environment
    variable to early (PSALLOC=early), as shown below.
  • This output shows that the default method, DPSA,
    is being used. To disable this policy, you would
    set the defps parameter to 0. This value would
    cause the LPSA policy to be used.

root@lpar30p682e_pub/ > vmo -a | grep def
defps = 1
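A minimal sketch of enabling EPSA for one application only (run_app.sh is a hypothetical launcher; PSALLOC affects only this shell and its children):

root@lpar30p682e_pub/ > export PSALLOC=early
root@lpar30p682e_pub/ > /usr/local/bin/run_app.sh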
26
Memory - Tuning
  • How much paging space?
  • Generally speaking, if a system has less than
    4GB of RAM, I usually like to create a one-to-one
    ratio of paging space versus RAM. If it has 8GB
    or higher, I set my paging space to as little as
    half the size of RAM.
  • Is your process using paging space?

root@lpar30p682e_pub/home/u0004773 > svmon -P | grep -p 286880
-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual 64-bit Mthrd  16MB
  286880 xmwlm            21009     7802        0    20925      N     N     N
When your free list is really low and you're
paging incessantly, your system will start to
suspend processes to avoid thrashing. It will
even kill processes if sufficient paging space is
not available. To prevent this from happening,
you can tune these three vmo values (an example
follows):
npskill - This parameter specifies the number of
free paging space pages at which AIX starts
killing (SIGKILL) processes.
npswarn - This parameter specifies the number of
free paging space pages at which AIX starts
sending warnings (SIGDANGER) to processes.
nokilluid - Setting this parameter to 1 prevents
processes owned by root from being killed when
npskill has started to take effect.
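These thresholds are set like any other vmo tunable (the values below are purely illustrative; size them against your total paging space):

vmo -p -o npswarn=4096 -o npskill=1024
vmo -p -o nokilluid=1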
27
Disk I/O - Introduction
  • The slowest operation for running programs is the
    time spent on actually retrieving your data from
    disk. This activity involves the physical disk as
    well as its logical components, such as the
    Logical Volume Manager (LVM). All the tuning in
    the world will do little if you have a poorly
    architected subsystem.
  • You can do more to optimize your throughput
    during the initial configuration of your I/O
    devices than you can ever do with tuning. Poor
    layout of your data affects I/O performance much
    more than anything.
  • First introduced in AIX 4.3, direct I/O bypasses
    the Virtual Memory Manager (VMM), enabling the
    transfer of data directly to disk from the user's
    buffer.
  • Introduced in AIX 5.2, concurrent I/O (CIO) is
    almost identical to direct I/O, but one better.
    With direct I/O, inodes (data structures that are
    associated with files) are locked to prevent a
    condition in which multiple threads might try to
    change the contents of a file at the same time.
    CIO actually bypasses this inode lock, letting
    multiple threads read and write data concurrently
    to the same file. Direct I/O can cause major
    problems with databases that continuously read
    from the same file. Concurrent I/O solves this
    problem, making it the preferred method of
    running databases (see the mount example below).
  • Asynchronous I/O (AIO) conceptually relates to
    whether applications are waiting for I/O to
    complete before processing additional data. In
    other words, it lets applications continue to
    process, while I/O runs in the background. This
    approach improves performance because processing
    can occur simultaneously. Virtually everything
    AIO-related has changed with the implementation
    of AIX 6.1.
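Since CIO is a mount option for JFS2, a minimal sketch of enabling it (assuming a JFS2 file system at the hypothetical mount point /oradata):

mount -o cio /oradata

To make it persistent, add "options = cio" to the /oradata stanza in /etc/filesystems.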

28
Disk I/O - Monitoring
nmon
nmon is my favorite monitoring tool of all. The
data that you collect from nmon is either
available from your screen (similar to topas) or
available through reports that you can capture
for trending and analysis. What this tool
provides that others simply do not is the ability
to view pretty-looking charts from a Microsoft
Excel spreadsheet, which can be handed off to
senior management or other technical teams for
further analysis. This is done with the use of
the nmon analyzer, which provides the hooks into
nmon.
29
Disk I/O - Monitoring
nmon analyzer
With respect to disk I/O, nmon reports the
following data: disk I/O rates, data transfers,
read/write ratios, and disk adapter statistics.
How do you use nmon to capture data and import it
into the analyzer? Use the open-source sudo
command and run nmon for three hours, taking a
snapshot every 30 seconds:
sudo nmon -f -t -r test1 -s 30 -c 180
Next, sort the created output file:
sort -A testsystem_yymmdd.nmon > testsystem_yymmdd.csv
Then FTP the .csv file to your PC, start the nmon
analyzer spreadsheet (enabling macros), and click
on Analyze nmon data. The nmon command also helps
track the configuration of asynchronous I/O
servers.
30
Disk I/O - Monitoring
iostat - The equivalent of vmstat for disk, and
perhaps the most highly utilized of I/O commands.
The command reports the following information:
tm_act - Percentage of time that the physical
disk was active, or the total time of disk
requests
Kbps - Amount of data (in kilobytes per second)
transferred to the drive
tps - Number of transfers per second issued to
the physical disk
Kb_read - Total data (in kilobytes) from the
measured interval that is read from the physical
volumes
Kb_wrtn - Amount of data (in kilobytes) from the
measured interval that is written to the physical
volumes
iostat 1

System configuration: lcpu=4 disk=4

tty:      tin     tout    avg-cpu:  user   sys   idle   iowait
          0.0    392.0               5.2   5.5   88.3      1.1

Disks:   tm_act     Kbps     tps     Kb_read     Kb_wrtn
hdisk1      0.5     19.5     1.4    53437739    21482563
hdisk0      0.7     29.7     3.0    93086751    21482563
hdisk4      1.7    278.2     6.2   238584732   832883320
hdisk3      2.1    294.3     8.0   300653060   832883320
31
Disk I/O - Monitoring
lvmstat - The resulting output shows the most
utilized logical volumes on your system since you
started the data collection tool (see below).
This detail is very helpful when drilling down to
the logical volume layer in tuning your systems.
iocnt - Number of read and write requests
Kb_read - Total data (in kilobytes) from your
measured interval that is read
Kb_wrtn - Total data (in kilobytes) from your
measured interval that is written
Kbps - Amount of data transferred (in kilobytes
per second)
lvmstat -v data2vg

Logical Volume    iocnt   Kb_read   Kb_wrtn   Kbps
appdatalv        306653  47493022    383822  103.2
loglv00              34         0      3340    2.8
data2lv             453    234543    234343   89.3
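lvmstat gathers nothing until statistics collection is enabled for the volume group, which is what "started the data collection tool" refers to:

lvmstat -e -v data2vg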
32
Disk I/O - Tuning
  • ioo - The ioo command is used to tune virtually
    all I/O-related tuning parameters.
  • ioo [-p|-r] -o Tunable[=NewValue]
  • ioo [-p|-r] -d Tunable
  • Some important JFS2-specific file system
    performance enhancements include sequential page
    read-ahead and sequential and random
    write-behind. The AIX Virtual Memory Manager
    anticipates page requirements by observing the
    patterns of files that are accessed. When a
    program accesses two pages of a file, the Virtual
    Memory Manager assumes that the program will keep
    accessing the file sequentially. You can set VMM
    thresholds to configure the number of pages to be
    read ahead. With JFS2, make note of two important
    parameters:
  • j2_minPageReadAhead - Determines the number of
    pages to read ahead when VMM initially detects a
    sequential pattern
  • j2_maxPageReadAhead - Determines the maximum
    number of pages VMM can read in a sequential file

root@lpar29p682e_pub/ > ioo -o maxpgahead=32
Setting maxpgahead to 32
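The JFS2 read-ahead tunables listed above are set the same way (the value 128 is purely illustrative):

ioo -p -o j2_maxPageReadAhead=128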
33
Disk I/O - Tuning
  • lvmo - One of the new commands introduced in
    AIX 5.3. You use the lvmo command to set and
    display pinned memory buffer, or pbuf, tuning
    parameters. The Logical Volume Manager uses pbufs
    to control pending disk I/O operations. Let's
    display the lvmo tunables for the data2vg volume
    group:
  • The following parameters are available for
    tuning:
  • pv_pbuf_count - Number of pbufs that can be added
    when a physical volume is added to the volume
    group
  • max_vg_pbuf_count - Maximum number of pbufs that
    can be allocated for a volume group
  • global_pbuf_count - Number of pbufs that can be
    added when a physical volume is added to any
    volume group
  • Let's increase the pbuf count for this volume
    group:

lvmo -v data2vg -a
vgname = data2vg
pv_pbuf_count = 1024
total_vg_pbufs = 1024
max_vg_pbuf_count = 8192
pervg_blocked_io_count = 7455
global_pbuf_count = 1024
global_blocked_io_count = 7455

lvmo -v redvg -o pv_pbuf_count=2048
34
Disk I/O - Tuning
Tuning Parameters for JFS and JFS2
35
AIX 6.1 - Changes
  • Restricted Tunables - In AIX 6.1, IBM now
    classifies many tunables as restricted in an
    attempt to discourage junior administrators from
    changing certain parameters deemed critical
    enough to be classified as restricted.
  • I/O Pacing - In AIX 6.1, I/O pacing is turned on
    by default. It is a mechanism that lets you limit
    the number of pending I/O requests to a file,
    thereby preventing disk I/O-intensive processes
    (usually in the form of large sequential writes)
    from exhausting the CPU.
  • Asynchronous I/O (AIO) - There are no more AIO
    devices in the ODM. AIX 6.1 no longer provides
    the aioo command (what a short life span), and
    these tunables are now used only with ioo. Two
    new parameters have also been added to ioo:
    aio_active and posix_aio_active. No more AIO
    servers running means less pinned memory, fewer
    processes, and greater performance.
  • netcdctrl - This facility is used to manage the
    new network caching daemon (netcd), which has been
    introduced to improve performance when resolving
    names using the Domain Name System (DNS). You can
    start this daemon from the System Resource
    Controller (SRC), as shown below. Its main
    configuration file is /etc/netcd.conf.
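A minimal sketch of starting and verifying netcd under SRC (assuming the subsystem name is netcd):

startsrc -s netcd
lssrc -s netcd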

36
Contact Information
  • Ken Milberg - kmilberg@PowerTCO.com
  • Joefon Jann - joefon@us.ibm.com
  • MC Press - mlee@mcpressonline.com