Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.
Nagaraj
Premium Member
Posts: 383 Joined: Thu Nov 08, 2007 12:32 am
Location: Bangalore
by Nagaraj » Wed May 04, 2011 9:59 am
I have a pretty simple design:
Oracle Connector ---> TFM ---> LKP ---> Dataset
The source is based on three views joined together; the row count is about 5 million.
I'm trying to run the job on the default config first, i.e. 1 node.
The job fails with the following message.
APT_CombinedOperatorController,0: Write to dataset on [fd 8] failed (Success) on node node1,
APT_CombinedOperatorController,0: Orchestrate was unable to write to any of the following files:
APT_CombinedOperatorController,0: /IBM/InformationServer/Server/Datasets/ctproceduredelted.ds.xxx.hostname.0000.0000.0000.2d9b.cf8abf34.0001.368bc84f
APT_CombinedOperatorController,0: Block write failure. Partition: 0
Any idea where I should start looking?
by Nagaraj » Wed May 04, 2011 10:25 am
I have checked the disk space and everything looks fine to me:
the disk on which DataStage is installed is only 30% full,
and the data files directory is only 35% full,
so it's clearly not an issue with disk space.
Also, ulimit -a is set to unlimited, and I am running on a 32-bit Linux OS.
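Those checks can be scripted for re-use while the job runs. A small sketch: `usage_pct` is a made-up helper name, not a DataStage tool, and `DS_DIR` is a placeholder for the dataset directory from the error log.

```shell
# Sketch of the disk-space check above. usage_pct is a hypothetical helper;
# it prints the percent-used figure for the filesystem holding a given path.
usage_pct() {
    # df -P forces the portable one-line format; column 5 is "Capacity" (e.g. "30%")
    df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }'
}

# Point DS_DIR at the dataset directory from the log, e.g.
# /IBM/InformationServer/Server/Datasets
usage_pct "${DS_DIR:-/}"

ulimit -a   # per-process limits, as checked above
```

Running this repeatedly while the job executes shows whether the figure is static or climbing.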
by Nagaraj » Wed May 04, 2011 10:41 am
I also changed the LKP to reference Datasets instead of the database; the issue still exists.
chulett
Charter Member
Posts: 43085 Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO
by chulett » Wed May 04, 2011 11:03 am
You checked the disk space available while the job was running?
-craig
"You can never have too many knives" -- Logan Nine Fingers
by Nagaraj » Wed May 04, 2011 11:07 am
1. Tried the Oracle Enterprise stage; the same error message appears. I also tried changing the parameters, the auto buffering mode, etc.
2. Tried simplifying the job to just a database read into a Copy stage followed by a Dataset.
All of them produce the same error message at the source stage itself.
by Nagaraj » Wed May 04, 2011 11:47 am
chulett wrote: You checked the disk space available while the job was running?
Yes, no change.
by Nagaraj » Wed May 04, 2011 12:25 pm
chulett wrote: You checked the disk space available while the job was running?
Checked again; the disk usage is now close to 74% and still climbing. I killed the job and it has aborted, but usage is still increasing. Is there any way to stop this before the server goes down?
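To see what is actually consuming the space, one option is to list the largest entries under the dataset directory. A sketch: `largest_entries` is my own name, not a product command, and `DS_DIR` is a placeholder.

```shell
# List the N largest files/directories under a path, largest first, so the
# leftover .ds segment files from the killed job stand out.
# largest_entries is a hypothetical helper, not a DataStage utility.
largest_entries() {
    dir=$1
    n=${2:-20}
    du -ak "$dir" 2>/dev/null | sort -rn | head -n "$n"
}

# Point DS_DIR at /IBM/InformationServer/Server/Datasets on the server:
largest_entries "${DS_DIR:-.}" 10
```

Once the partially written dataset is identified, remove it together with its descriptor (e.g. with the `orchadmin rm` utility) rather than deleting the segment files by hand, so the descriptor and its data files stay in sync.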
by Nagaraj » Wed May 04, 2011 1:15 pm
What do we have here:
/IBM/InformationServer/Server/Datasets
I believe this is the temp directory used for paging?
by Nagaraj » Wed May 04, 2011 1:31 pm
/IBM/InformationServer/Server/Datasets
is eating up all the space.
Is there any way to point this at a different path? If yes, where do I set it?
by chulett » Wed May 04, 2011 3:19 pm
That would be controlled by your config file.
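To spell that out: the dataset location comes from the `resource disk` entries in the parallel configuration file (the file named by `$APT_CONFIG_FILE`). A minimal single-node sketch; the hostname and paths below are placeholders, so substitute a filesystem with enough room:

```
{
    node "node1" {
        fastname "your_hostname"
        pools ""
        resource disk "/bigfs/datasets" {pools ""}
        resource scratchdisk "/bigfs/scratch" {pools ""}
    }
}
```

`resource disk` is where persistent Dataset segment files are written; `resource scratchdisk` holds temporary data such as sort and buffer spill. Both default to paths under the engine install tree, which is why /IBM/InformationServer/Server/Datasets was filling up.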
by Nagaraj » Wed May 04, 2011 4:26 pm
Great, thanks, made the changes.
Now I will focus on the issue and see if this problem arises again.