
Mount points for scratch disk and resource disk

Posted: Fri Nov 04, 2011 7:01 am
by dsscholar
Hi there,

Is it necessary to have separate mount points for the scratch disk and the resource disks?

Will it affect performance if we define all resource and scratch disks on the same mount point?

Please give some suggestions.

Thanks

Posted: Fri Nov 04, 2011 7:13 am
by chulett
No.

Yes.

Posted: Fri Nov 04, 2011 7:52 am
by BI-RMA
Hi Craig,

With a mount point corresponding to a single hard drive or an array of RAID volumes, I am with you entirely.

But what about our ultra-modern SAN concepts, where even the admins have virtually no control over which disks actually provide the space behind a specific mount point (because in a way they all do)?

It might be interesting to test this out, really. But - alas - I am afraid I'll not have the time to do that for a while...

Posted: Fri Nov 04, 2011 9:04 am
by chulett
I have neither the time nor the ability to test anything like that. Interesting point, though.

Posted: Fri Nov 04, 2011 9:38 am
by jwiles
It's generally recommended that scratch disk/sort disk (if you define pools for sort) be local storage or local SAN (not shared SAN) on each physical server. The primary goal is to provide high-performance access/read/write capability while avoiding I/O contention with other users. Avoid NFS for this storage as much as is possible.
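For reference, the scratch/resource split is expressed in the parallel engine configuration file (the one pointed to by APT_CONFIG_FILE). A minimal single-node sketch, where the hostname and paths are purely illustrative:

```
{
  node "node1"
  {
    fastname "etlserver1"
    pools ""
    resource disk "/data/ds/resource" {pools ""}
    resource scratchdisk "/scratch/ds" {pools ""}
  }
}
```

Here `resource disk` (persistent datasets) and `resource scratchdisk` (sort and buffer spill) point at different mount points, so heavy sort spill does not contend with dataset I/O on the same device.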

Regarding shared SAN resources, the scenario you describe is becoming more prevalent in the IT world. When possible, it is to your advantage to work closely with the storage teams to come to a common ground. As long as you can articulate to them how SAN storage allocation can affect the performance of your IS environment, whether it be the physical allocation, LUN allocations or bandwidth, you should be able to arrive at a configuration which provides satisfactory performance.

Regards,

Posted: Mon Nov 07, 2011 9:10 am
by kduke
Admins do have control over SAN storage; they just choose to lump it all into one big logical volume because that makes it easier to administer. This does hurt performance. All of this type of storage in DataStage should be considered temporary: it is built on the fly during ETL processing and should be easily recreated by rerunning an ETL job. If you are using datasets or hashed files as persistent storage, then you have a design problem.

If all of this storage is deemed temporary, then RAID and other redundant storage is overkill and not needed. Less expensive and faster storage is more important.

Admins who allocate storage this way are not giving you the best performance.