scratch space

samsuf2002
Premium Member
Posts: 397
Joined: Wed Apr 12, 2006 2:28 pm
Location: Tennessee

scratch space

Post by samsuf2002 »

How do I monitor the scratch space, and how can I increase its size?
I am using SQL Server.

Can anyone please help me out?
Thanks
hi sam here
jdmiceli
Premium Member
Posts: 309
Joined: Wed Feb 22, 2006 10:03 am
Location: Urbandale, IA

More information please about your question...

Post by jdmiceli »

Howdy!

When you say 'scratch space', what exactly do you mean? If you mean you don't have enough room for all the files you are unloading or the hashed files being created, then set up a directory structure on a different disk or set of disks and use parameters to refer to where you are creating your files. For example, create a parameter called 'dirHash' with a path like '/datastage/dev/projectname/r1/hash/' that points to the appropriate location on a disk somewhere. Obviously, I used Unix slashes; you may need to use Winblows whacks instead :lol: Then, in the Hashed File Stage, when you name the file, refer to it as '#dirHash#filename'. Some people prefer to leave the last slash outside of the parameter, so it would be defined as '/datastage/dev/projectname/r1/hash' and the reference would be '#dirHash#/filename'.
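To illustrate with the example path above (substitute your own project path), the pieces fit together like this:

    Job parameter:          dirHash = /datastage/dev/projectname/r1/hash
    Hashed File Stage name: #dirHash#/filename
    Resolves at run time:   /datastage/dev/projectname/r1/hash/filename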

Hope this helps. If I didn't answer your question, please clarify what you are looking for and I'll try again! :D

Bestest!

John Miceli
System Specialist, MCP, MCDBA
Berkley Technology Services


"Good Morning. This is God. I will be handling all your problems today. I will not need your help. So have a great day!"
samsuf2002
Premium Member
Posts: 397
Joined: Wed Apr 12, 2006 2:28 pm
Location: Tennessee

Post by samsuf2002 »

When I am running the job I am getting an error that the scratch space is full and my job is getting aborted. I got a hint from DSXchange that we need to increase the scratch space, but I don't know where I will find that option in DataStage.
hi sam here
meena
Participant
Posts: 430
Joined: Tue Sep 13, 2005 12:17 pm

Post by meena »

Hi,
Check the "scratchdisk" entries in the configuration file pointed to by the APT_CONFIG_FILE environment variable. You can either increase the size or remove the limits set there in the configuration file (see the example below).
Can you explain a bit about your job (any lookups / amount of data)?
samsuf2002 wrote: When I am running the job I am getting an error that the scratch space is full and my job is getting aborted. I got a hint from DSXchange that we need to increase the scratch space, but I don't know where I will find that option in DataStage.
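For reference, here is a minimal sketch of what a node entry with a scratchdisk resource looks like in a parallel configuration file (the node name, fastname, and paths are placeholders; point the scratchdisk at a file system with enough free space):

    {
      node "node1"
      {
        fastname "yourserver"
        pools ""
        resource disk "/datastage/data" {pools ""}
        resource scratchdisk "/datastage/scratch" {pools ""}
      }
    }

You can list more than one resource scratchdisk line per node to spread scratch files across several file systems.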
samsuf2002
Premium Member
Posts: 397
Joined: Wed Apr 12, 2006 2:28 pm
Location: Tennessee

Post by samsuf2002 »

Hi Meena,
My job has 2 lookups and the size of the file is 40 GB.
hi sam here
thumsup9
Charter Member
Posts: 168
Joined: Fri Feb 18, 2005 11:29 am

Post by thumsup9 »

What is the memory available? You can check with ulimit -a.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You could create a shell script that periodically executes du -s pathname on the scratch disk directories.

Yes, I know you're on Windows, but with 7.5x2 you have MKS Toolkit installed so you have access to all UNIX commands. Just remember to name the shell at the top of the script, and execute from a UNIX shell.
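A minimal sketch of such a monitoring script, assuming the scratch directories are under /datastage/scratch (substitute the scratchdisk paths from your own configuration file):

    #!/bin/sh
    # Report scratch disk usage every 10 minutes.
    SCRATCH_DIRS="/datastage/scratch"   # placeholder - use your own scratchdisk paths
    while true
    do
        date
        for dir in $SCRATCH_DIRS
        do
            du -s "$dir"        # space used under each scratch directory
        done
        df -k $SCRATCH_DIRS     # free space on the file system(s)
        sleep 600
    done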

If you don't have 7.5x2 then you're not running parallel jobs on Windows.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.