If your application accesses data in random order, as an online transaction-processing (OLTP) application typically does, you can dump and reload the data to optimize it for random access. Use a database with small, fixed-length extents spread across the volumes to balance I/O across the disks, and use several clients to load the data simultaneously so that it is spread across all disks.
This technique also improves load performance on SMP platforms.
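For example, on a system with four volumes, a structure description (.st) file like the following creates small, fixed-length data extents on separate volumes. This is a minimal sketch: the database name, volume paths, and extent sizes are illustrative, and on releases with storage areas each d line would also name the target area.

# service.st -- database name, volume paths, and extent sizes are illustrative
b /vol1/db/service.b1
d /vol1/db/service.d1 f 1024
d /vol2/db/service.d2 f 1024
d /vol3/db/service.d3 f 1024
d /vol4/db/service.d4

Create the empty database from this file with PROSTRCT (for example, prostrct create service service.st) before loading the definitions and data.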
To dump and reload data to optimize it for random access:
1. Dump the data and definitions (see the dump sketch following this procedure).
2. Load the definitions using the Data Dictionary or the Data Administration tool.
3. In multi-user mode, start a server with a before-image writer (BIW) and one or more asynchronous page writers (APWs); see the startup sketch following this procedure.
4. Start one client for each processor on your system, and assign each client a subset of the tables to load.
For each client, write an ABL procedure similar to the following:
/* Run this procedure after connecting to a database. */
DEFINE VARIABLE iNextStop AS INTEGER NO-UNDO.
DEFINE VARIABLE iRecCount AS INTEGER NO-UNDO.

INPUT FROM table-name.d.

/* Load the records in batches of 100 per transaction. */
TOP:
REPEAT TRANSACTION:
    iNextStop = iRecCount + 100.
    REPEAT FOR table-name WHILE iRecCount LT iNextStop
        ON ERROR UNDO, NEXT            /* skip a bad record */
        ON ENDKEY UNDO, LEAVE TOP:     /* end of input file */
        CREATE table-name.
        IMPORT table-name.
        iRecCount = iRecCount + 1.
    END.
END.

INPUT CLOSE.
Because the clients load the data simultaneously, the data is distributed across all disks. This eliminates hot spots (that is, areas of disk where access is concentrated).
5. After the data is loaded, perform a full index rebuild using PROUTIL with the IDXBUILD qualifier, as shown below.
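For step 1, an ABL procedure like the following minimal sketch dumps one table's data to a contents (.d) file; table-name is a placeholder, as in the load procedure. (The definitions themselves are dumped with the Data Dictionary or the Data Administration tool.)

/* Sketch: dump one table's data to a contents file. */
OUTPUT TO table-name.d.
FOR EACH table-name NO-LOCK:
    EXPORT table-name.
END.
OUTPUT CLOSE.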
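For step 3, the server and the helper processes might be started as follows. This is a sketch: the database name is illustrative, and the -n (maximum users) and -B (database buffers) values must be tuned for your system.

proserve service -n 25 -B 20000
probiw service
proapw service

Run PROAPW once for each APW you want to start.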
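For step 5, with the database offline, a full rebuild looks like this (the database name is illustrative):

proutil service -C idxbuild all

The all qualifier rebuilds every index in the database.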