7.5.2 IOPS Requirements...

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

dav_mcnair
Premium Member
Posts: 35
Joined: Thu Apr 19, 2007 12:42 pm

7.5.2 IOPS Requirements...

Post by dav_mcnair »

We are running into I/O performance issues with our 7.5.2 install and need to determine the recommended IOPS for DataStage. We are running on a Hitachi SAN, and the storage team is asking for this number to size the disk arrays correctly. Any ideas on how to calculate it, or does IBM have a minimum recommendation?
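One way to get a number the storage team can actually use is to measure the I/O your existing jobs generate rather than look for a vendor minimum. Below is a minimal sketch, assuming the DataStage server is Linux and /proc/diskstats is available (the sampling interval is a placeholder; run it during a representative batch window):

[code]
#!/usr/bin/env python
# Rough IOPS sampler: reads /proc/diskstats twice and reports the
# read/write operations per second seen on each block device in between.
# Run it on the DataStage server while representative jobs are executing.
import time

INTERVAL = 10  # seconds between samples; adjust as needed


def read_diskstats():
    """Return {device: (reads_completed, writes_completed)}."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            dev = fields[2]
            reads = int(fields[3])   # reads completed successfully
            writes = int(fields[7])  # writes completed successfully
            stats[dev] = (reads, writes)
    return stats


before = read_diskstats()
time.sleep(INTERVAL)
after = read_diskstats()

for dev in sorted(after):
    if dev not in before:
        continue
    r = (after[dev][0] - before[dev][0]) / float(INTERVAL)
    w = (after[dev][1] - before[dev][1]) / float(INTERVAL)
    if r or w:
        print("%-10s %8.1f read IOPS  %8.1f write IOPS" % (dev, r, w))
[/code]

Sampling the peak batch window rather than a daily average gives the storage team the figure they actually need to size against.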
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

IOPS = "I/O operations per second"?

This number really should come from your application. The number of I/O operations to disk depends on several factors, and SANs are notoriously difficult to configure correctly if you want to get anywhere near the manufacturer's numbers.
Parallel jobs write to disk in several ways, each with its own block sizes and I/O characteristics.
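If you know the sustained throughput a job pushes and the transfer size the file system issues, converting that to IOPS is simple arithmetic. A back-of-the-envelope sketch (the throughput and transfer-size figures below are made up purely for illustration):

[code]
# Back-of-the-envelope IOPS estimate from throughput and transfer size.
# All input figures below are illustrative placeholders, not measurements.

MB = 1024 * 1024

def iops(throughput_mb_per_s, transfer_size_kb):
    """IOPS = bytes moved per second / bytes per I/O operation."""
    return (throughput_mb_per_s * MB) / (transfer_size_kb * 1024.0)

# e.g. a job writing a dataset at 80 MB/s in 128 KB transfers:
print(iops(80, 128))  # -> 640 I/O operations per second

# the same 80 MB/s in 8 KB transfers (small random I/O) is far costlier:
print(iops(80, 8))    # -> 10240 I/O operations per second
[/code]

This is exactly why a single "recommended IOPS" figure from IBM would not mean much: the same data volume can be cheap or expensive on the array depending on the transfer size each kind of file uses.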

The file systems used are quite important. What use is a journalled file system for temp files? Sparse files sound nice, but one pays a price for that potential saving of disk space when those sparse OS files are actually filled with data.

a) Scratch and temp files - these usually end up somewhere on the SAN in a RAID-5 or similarly configured location. That is a waste of time, space and SAN capability, since there is no need for any type of recoverability, and these files are usually quite slow on a SAN. (The RAID-5 write penalty is made concrete in the sizing sketch after this list.)

b) DataSets and other permanent files with high data volume - PX splits the physical data of a data set across several sequential files. These files don't change much once written; the SAN or disk subsystem should try to spread them across as many spindles as possible.
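As a sizing illustration for the storage team, here is a rough spindle-count sketch. The per-spindle figures, the RAID-5 write penalty of 4, and the workload numbers are generic rules of thumb, not Hitachi or IBM specifications; substitute measured values from your own jobs:

[code]
# Rough spindle-count sizing from a measured front-end IOPS workload.
# The per-spindle figures and the RAID-5 write penalty of 4 (each
# front-end write becomes roughly 2 reads + 2 writes on the back end)
# are generic approximations, not vendor specifications.

SPINDLE_IOPS = {"7.2K": 80, "10K": 130, "15K": 180}  # rough per-disk IOPS

def backend_iops(read_iops, write_iops, raid_write_penalty=4):
    """Translate front-end IOPS into back-end disk IOPS for a RAID level."""
    return read_iops + write_iops * raid_write_penalty

def spindles_needed(read_iops, write_iops, disk_type="15K", raid_write_penalty=4):
    total = backend_iops(read_iops, write_iops, raid_write_penalty)
    per_disk = SPINDLE_IOPS[disk_type]
    return -(-total // per_disk)  # ceiling division

# Illustrative workload: 1500 read IOPS and 2500 write IOPS at peak.
print(spindles_needed(1500, 2500))                         # RAID-5, 15K disks
print(spindles_needed(1500, 2500, raid_write_penalty=2))   # RAID-10 (penalty ~2)
[/code]

This also makes the point about scratch space on RAID-5 concrete: the write-heavy temp I/O, which needs no recoverability at all, is exactly the part that picks up the 4x penalty.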