MOVE4: Upgrading Your Database to OpenEdge 10
1
MOVE-4 Upgrading Your Database to OpenEdge 10
Gus Björklund
Wizard, Vice President Technology
2
Audience Survey
3
Why Upgrade to OpenEdge 10 RDBMS ?
4
Why Upgrade to OpenEdge 10 RDBMS ?
The 10.1A RDBMS is better, stronger, faster
5
Why Upgrade to OpenEdge 10 RDBMS ?
Themes: Business Agility, High Availability, Maintenance, Security,
Performance, Limits / Scalability
  • Add table/field/index online
  • Datatypes
  • Add extents online
  • Data archival
  • XML support
  • Save key events
  • Replication failback
  • New IDE
  • Logfile format
  • Truncate logfile online
  • Defrag tables online
  • OO extensions to the 4GL
  • Clusters
  • JTA support
  • AI archiving utility
  • Type II Storage
  • Fast quiet points
  • Audit core service
  • Audit archive utility
  • SQL index concurrency
  • DB defaults
  • Audit roles defined / enforced
  • Adaptive transaction end locking
  • Multi-threaded utilities
  • Type 4 JDBC drivers
6
General Upgrade Strategy
  • Backup
  • Keep dbanalys and promon data for reference
  • Install OpenEdge 10 (no need to overwrite V9)
  • Upgrade DB to 10
  • Run UPDATE STATISTICS for SQL
  • Do your backups !!!!
  • Recompile 4GL code
  • Run your application
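The strategy above can be sketched as a small script that only prints the commands for review before you run them. The database name `mydb`, the backup file names, and the dbanalys output path are assumptions for illustration:

```shell
#!/bin/sh
# Generate the upgrade command sequence for one database.
# "mydb" and the file names are illustrative, not fixed conventions.
gen_upgrade_steps() {
    db=$1
    echo "probkup $db $db.bak"                            # backup first
    echo "proutil $db -C dbanalys > $db.dbanalys.before"  # keep for reference
    echo "proutil $db -C truncate bi"                     # truncate BI before converting
    echo "proutil $db -C conv910 -B 512"                  # in-place V9 -> 10 conversion
    echo "probkup $db $db.v10.bak"                        # backup again after conversion
}

# Review the generated steps; pipe to sh only when they look right.
gen_upgrade_steps mydb
```

Running SQL UPDATE STATISTICS and recompiling the 4GL code are left out here because they go through the SQL engine and the compiler, not the shell.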

7
The 5 Minute Upgrade
8
Database Server Conversion Steps
  • Upgrade clients first, then database server
  • Preparation
  • Backup database
  • Truncate BI, Disable AI
  • Install OpenEdge 10
  • Run conversion utility
  • Backup database
  • Start database

9
The 5-Minute Rule
  • Conversion program runs in-place, in 5 minutes or
    less
  • Mostly, we upgrade the schema tables
  • No changes to records or indexes
  • No changes to physical structures

10
Database convert
proutil <db> -C conv910 -B 512
probkup <db>
11
What Happens after 5 Minutes?
  • You can run the database !
  • It will very likely run better than before
  • Fragmentation may still exist
  • Most of your DBA scripts should work fine
  • All data in Type I data areas
  • Optimize when time permits ...

12
Do some more work!
What if I want more performance than I get with
the fast convert ?
13
Move Data to Type II Data Areas
14
Moving to Type II Data Areas
  • Before
  • First upgrade with conversion utility
  • Move schema tables
  • Create Type II data areas
  • Move tables and indexes
  • After
  • Truncate old data area
  • Delete old data area

15
Before the move
16
Moving schema tables
  • proutil <db> -C mvsch
  • Renumbers existing schema area
  • Creates new schema area no. 6
  • Copies schema tables
  • Deletes old schema tables

17
Moving Tables and Indexes
  • Tablemove and Indexmove
  • Dump and Load

18
Storage Settings
  • data block size 4k or 8k
  • BI and AI block sizes should match each other
  • Type II Data Area Cluster Sizes
  • Table areas 512
  • Index areas 512
  • Dump and load required to change data block size
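The storage settings above end up in the structure description (.st) file from which the areas are created. A minimal illustrative extract, assuming the area names, area numbers, records-per-block values, and paths (the `;512` suffix is the 512-block cluster size from the slide, which is what makes the areas Type II):

```
b /db/mydb.b1
d "Schema Area":6,64 /db/mydb.d1
d "Customer Data":10,64;512 /db/data/mydb_10.d1
d "Customer Index":11,8;512 /db/index/mydb_11.d1
```

Adding such areas to an existing database is done with `prostrct add <db> <st-file>`; data block size itself is a database-wide property, which is why changing it requires a dump and load.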

19
Using Table and Index Move
20
Table Move
proutil <db> -C tablemove owner-name.table-name
table-area index-area
Can move just the table, or the table and its
indexes (preferred)
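Moving many tables is easiest to script. A sketch that generates one tablemove command per table, moving each table and its indexes together (the preferred form above); the database name, table list, and area names are illustrative assumptions:

```shell
#!/bin/sh
# Print tablemove commands for a list of tables; review, then pipe to sh.
gen_tablemoves() {
    db=$1; table_area=$2; index_area=$3; shift 3
    for t in "$@"; do
        # Quote the area names: they commonly contain spaces
        echo "proutil $db -C tablemove PUB.$t \"$table_area\" \"$index_area\""
    done
}

gen_tablemoves mydb "Customer Data" "Customer Index" customer order order-line
```

Note that tablemove runs in one large transaction per table, so running the moves serially during quiet hours is the cautious choice.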
21
Index Move
proutil <db> -C idxmove owner-name.index-name
area-name
22
After the move
23
Truncating area
  • proutil <db> -C truncate bi
  • proutil <db> -C truncate area <area-name>
  • Area logically truncated
  • Extents can be deleted:
    prostrct remove <db> <extent-type> <area-name>
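Since prostrct remove drops one extent per call, cleaning up an emptied area is a truncate followed by one remove per extent. A sketch that prints that sequence; the database name, area name, and extent count are illustrative assumptions:

```shell
#!/bin/sh
# Print the cleanup commands for an emptied Type I area; review, then
# pipe to sh. Each "prostrct remove" drops the area's last data extent.
gen_area_cleanup() {
    db=$1; area=$2; nextents=$3
    echo "proutil $db -C truncate area \"$area\""
    i=0
    while [ "$i" -lt "$nextents" ]; do
        echo "prostrct remove $db d \"$area\""
        i=$((i + 1))
    done
}

gen_area_cleanup mydb "Old Data Area" 3
```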

24
If You Can't Tablemove: Dumping and Loading
25
Dump and Load Strategy
  • Before
  • Backup
  • Upgrade with conversion utility
  • Move schema tables
  • Dump definitions and tables
  • Create new database with type ii data areas
  • Load definitions
  • Load table data
  • Build indexes
  • Backup

26
Dump data definitions
  • Use Data Administration tool
  • Dump all table and index definitions
  • Produces .df files

27
Dump/Load Choices
  • Dictionary Dump (to text)
  • Dictionary load
  • Bulkload
  • Custom Dump Programs
  • Custom loader
  • Custom buffer-copy Programs
  • Binary Dump
  • Binary load

28
Dictionary Dump
  • Easy
  • Slow
  • 2 GB File size limit
  • Can't choose dump order
  • Have to dump entire table

29
Bulkload
  • proutil <db> -yy n -C BULKLOAD fd-file -B 1000
  • Use dictionary or custom code to export data
  • 2 GB file limit
  • Loads one table at a time
  • Faster than dictionary load
  • Slower than binary load

30
Custom Dump Programs
  • Have to write yourself (but not too hard)
  • Use EXPORT and IMPORT statements
  • Can dump subset of data
  • Can run multiple loads concurrently
  • but not into Type I data areas

31
buffer-copy
  • Have to write yourself
  • Planning required
  • Can do while applications use old database
  • Can copy part of data
  • Can choose dump order via index
  • No dump file needed
  • Can load multiple tables concurrently

32
Binary Dump and Load
  • Easy
  • No 2 GB file size limit
  • Can choose dump order (by index)
  • Can dump part of data
  • Can load multiple tables concurrently

33
Using Binary Dump and Load
34
Binary Dump and Load
  • Dump from Type I data areas
  • Create new database structure
  • Load into Type II data areas
  • proutil <db> -C dump <table> . -index 0
  • Dump performance vs read performance
  • Choose an index based on read order instead
  • 10x differences
  • proutil <db> -C load <table>.bd build indexes
  • Load with build indexes
  • Load to truncated or new area
  • (truncated rather than emptied)

35
Large Databases Dump and Load: Fastest Overall
Process
  • Binary dump and load
  • Multiple streams (3-5 per CPU)
  • Dump on smallest index (fewest blocks)
  • Index rebuild in same step as load
  • saves one phase
  • may be faster YMMV
  • Parallel load into separate type II data areas
    when possible
  • But a minor point
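The multi-stream idea above can be sketched as a few backgrounded dumps with a wait at the end. Table names, the dump directory, and the use of `echo` as a dry-run stand-in for a real proutil are assumptions for illustration:

```shell
#!/bin/sh
# Multi-stream binary dump sketch: several dump streams in parallel,
# a few per CPU as the slide suggests. With the default PROUTIL below
# the script only prints the commands (dry run); set PROUTIL=proutil
# for a real run.
DB=${DB:-mydb}
DUMPDIR=${DUMPDIR:-/dump}
PROUTIL=${PROUTIL:-"echo proutil"}

dump_tables() {
    for t in "$@"; do
        # -index 0 dumps in storage order; pick an index matching your
        # read order instead when that is faster for you
        $PROUTIL "$DB" -C dump "$t" "$DUMPDIR" -index 0 &
    done
    wait    # block until every dump stream has finished
}

dump_tables customer order order-line
```

A real driver would cap the number of concurrent streams at 3-5 per CPU rather than launching everything at once.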

36
Large Databases Dump and Load: Fastest Overall
Process
  • Tune for high activity
  • Dump with
  • -RO (YMMV) and high -B
  • Load with
  • high -B, -r
  • 1-2 APWs per CPU,
  • -pwqdelay 10, -pwscan 1024, -pwwmax 1024
  • 16K BI blocks, large clusters,
  • no AI / 2PC
  • At every step, spread activity across disks /
    controllers
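A worked example of the load-side settings above. The database name, the -B value, and the APW count are illustrative assumptions, and which flags belong on the broker versus elsewhere should be checked against your release's documentation:

```shell
# 16K BI blocks and a large BI cluster (both values in KB)
proutil mydb -C truncate bi -biblocksize 16 -bi 16384

# Broker with a big buffer pool (-B), buffered I/O (-r, as on the
# slide), and the slide's page-writer tuning
proserve mydb -B 500000 -r \
        -pwqdelay 10 -pwscan 1024 -pwwmax 1024

# 1-2 APWs per CPU
proapw mydb &
proapw mydb &
```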

37
After the load
38
After Loading You Need To
  • build indexes
  • can do in groups or all at once
  • by area, by table, by owner
  • by area might make sense
  • don't forget sort scratch space (-SS)
  • backup your database

39
Index Rebuild After Load
  • Index rebuild by area, table, or schema:
    proutil <db> -C idxbuild table <table>
    proutil <db> -C idxbuild area <area>
  • Other values
  • -SG 64 (sort groups)
  • -SS filename (scratch space location list)
  • -TM 32 (merge buffers)
  • -TB 32 (temp block size)
  • -B 1000
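Put together, one rebuild pass over an index area might look like this; the database name, area name, and scratch-space list file are illustrative:

```shell
proutil mydb -C idxbuild area "Customer Index" \
        -SG 64 -SS scratch.lst -TM 32 -TB 32 -B 1000
```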

40
Review
41
Summary
  • 10.1A RDBMS is better, stronger, faster even
    with NO application changes
  • Conversion is quick
  • Optimise at your leisure
  • Upgrade when you go home

42
More Info
  • Related sessions
  • DB8 Highly Parallel Dump and Load
  • MOVE14 Migrating Your Authentication System to
    OpenEdge 10.1A and Beyond
  • INNOV1 OpenEdge 10.1A Overview
  • INNOV12 OpenEdge Database Product Roadmap

43
Want Answers?