ds_seqput: error in 'write()' - Error 0
Posted: Wed Apr 09, 2008 9:10 am
Hi,
I am using a Sequential File stage (with pipes enabled) in my job, which writes and reads the records. The CRC32 function then generates a checksum, and the generated checksum is written to my target. The pipe file is split in the job, so the sequential file / pipe stage is represented as two objects on the DataStage canvas. I am getting the following error:
ds_seqput: error in 'write()' - Error 0
The job runs as multiple instances (e.g. 7 instances); it works fine for six of them but throws this error for only one instance.
I searched the forum extensively and these are my observations:
1) The file may be exceeding a defined size/memory limit
2) The server may be overloaded
I found that the server is not overloaded, and df -k shows there is no issue with file space.
I create the pipe with mkfifo in a before-job routine, with permission 666. When I trigger the job again it fails. But if I open the pipe with the cat command, suspend it with Ctrl+Z, and then run the job, it runs fine, even if I remove and recreate the pipe (rm and mkfifo in the before-job routine).
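For reference, the before-job routine does roughly the following (the pipe path below is only illustrative, not my real file name):

    rm -f /data/pipes/crc_pipe          # remove any pipe left over from an earlier run
    mkfifo -m 666 /data/pipes/crc_pipe  # recreate the named pipe with permission 666

and the manual workaround that makes the job run is simply:

    cat /data/pipes/crc_pipe    # open the pipe for reading, then suspend with Ctrl+Z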
This error occurs frequently and I am looking for a solution for such scenarios. I would also like to know why the job runs fine when I do a cat followed by Ctrl+Z.
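In case it helps to frame the question, the basic named-pipe behaviour the job relies on can be reproduced from the shell; this is only a minimal sketch with an illustrative path:

    mkfifo -m 666 /tmp/test_pipe
    echo "test record" > /tmp/test_pipe &   # the writer's open blocks until some process opens the pipe for reading
    cat /tmp/test_pipe                      # once a reader opens the pipe, the write completes
    rm /tmp/test_pipe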