Job fails

Posted: Wed May 04, 2011 9:59 am
by Nagaraj
I have a pretty simple design:

Oracle Connector ---> TFM ---> LKP ---> Dataset

The source is based on 3 views joined together, returning about 5 million rows.
I am trying to run the job on the default config first, i.e. 1 node.

The job fails with the following messages:

APT_CombinedOperatorController,0: Write to dataset on [fd 8] failed (Success) on node node1,

APT_CombinedOperatorController,0: Orchestrate was unable to write to any of the following files:

APT_CombinedOperatorController,0: /IBM/InformationServer/Server/Datasets/ctproceduredelted.ds.xxx.hostname.0000.0000.0000.2d9b.cf8abf34.0001.368bc84f

APT_CombinedOperatorController,0: Block write failure. Partition: 0

Any idea where I should start looking?

Posted: Wed May 04, 2011 10:25 am
by Nagaraj
I have checked the disk space and everything looks fine to me:
the disk on which DataStage is installed is only 30% full
and the data files directory is only 35% full.

So it's clearly not an issue with disk space.

Also, ulimit -a shows everything set to unlimited, and I am running on a 32-bit Linux OS.
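
For reference, this is roughly what I checked (the paths are from my install, so adjust them for yours):

df -h /IBM/InformationServer/Server            # filesystem DataStage is installed on
df -h /IBM/InformationServer/Server/Datasets   # data files directory
ulimit -a                                      # per-process limits for the user running the jobs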

Posted: Wed May 04, 2011 10:41 am
by Nagaraj
Also changed the LKP to reference datasets instead of the database; the issue still exists.

Posted: Wed May 04, 2011 11:03 am
by chulett
You checked the disk space available while the job was running?

Posted: Wed May 04, 2011 11:07 am
by Nagaraj
1. Tried with the Oracle EE stage; the same error message appears.
   Also tried changing the parameters, auto buffering mode, etc.
2. Tried simplifying the job to read from the DB into a Copy stage followed by a Dataset.

All of them produce the same error message at the source stage itself.

Posted: Wed May 04, 2011 11:47 am
by Nagaraj
chulett wrote: You checked the disk space available while the job was running?

Yes, no change.

Posted: Wed May 04, 2011 12:25 pm
by Nagaraj
chulett wrote: You checked the disk space available while the job was running?

Checked again: the disk usage is now close to 74% and still increasing. I killed the job and it has aborted, but the usage is still growing. Is there any way to stop this before the server goes down?
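
Unless someone has a better idea, this is what I am planning to try to stop the growth (the PID below is just a placeholder for whatever ps actually shows):

ps -ef | grep osh | grep -v grep               # any leftover parallel-engine processes from the aborted job?
kill <pid>                                     # stop the writer still holding the segment file open
df -h /IBM/InformationServer/Server/Datasets   # confirm the filesystem has stopped filling

Once nothing has the orphaned segment files open, I believe orchadmin (run against the dataset descriptor) is the supported way to delete a dataset and its data files, rather than removing the raw files by hand.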

Posted: Wed May 04, 2011 1:15 pm
by Nagaraj
What do we have here:

/IBM/InformationServer/Server/Datasets

I believe this is the temp directory used for paging?

Posted: Wed May 04, 2011 1:31 pm
by Nagaraj
/IBM/InformationServer/Server/Datasets
This is eating up all the space.

Is there any way to give a different path instead of the above? If yes, where do I set it?

Posted: Wed May 04, 2011 3:19 pm
by chulett
That would be controlled by your config file.
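
Something along these lines in the parallel configuration file (default.apt, typically under /IBM/InformationServer/Server/Configurations in an install laid out like yours) is where the dataset and scratch locations come from. The node name, fastname and the /bigdisk paths below are just placeholders for your actual host and mount points:

{
    node "node1"
    {
        fastname "your_hostname"
        pools ""
        resource disk "/bigdisk/datasets" {pools ""}
        resource scratchdisk "/bigdisk/scratch" {pools ""}
    }
}

Point the job at whichever .apt file you edit or create via the $APT_CONFIG_FILE environment variable, and make sure the new directories exist and are writable by the engine user.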

Posted: Wed May 04, 2011 4:26 pm
by Nagaraj
Great, thanks, made the changes.
Now I will focus on the issue and see if this problem arises again.