
Posted: Fri May 25, 2007 2:26 am
by Christina
Hi,

Can you give me a unix command to delete a particular row from a file?

I will delete this row and try running again, so that I can confirm whether the problem lies only in this record or whether I am missing something else.

Posted: Fri May 25, 2007 6:55 am
by ralleo
Is row 1,198,231 the last row in the file?

Anyway, you can do this:

1. $ head -n 1198230 <inputfilename> > <outputfilename>
This will give you all the rows except the one that it fails on.

If the job works after this, then you have a badly delimited file, where one of the columns hasn't got a comma, hence the error message.
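Since the bad row is not necessarily the last one, head alone won't isolate it. A minimal sketch of deleting a single line by number with sed (shown on a small demo file; for the real file you would use '1198231d' on your input, and the file names here are placeholders):

```shell
# Create a 5-line demo file, then delete line 3 of it with sed.
# sed 'Nd' deletes line N; every other line passes through unchanged.
printf 'r1\nr2\nr3\nr4\nr5\n' > demo.txt
sed '3d' demo.txt > demo_fixed.txt
cat demo_fixed.txt   # prints r1 r2 r4 r5, one per line
```

The same effect can be had with head and tail combined (head -n 1198230 for the lines before, tail -n +1198232 for the lines after), but sed keeps it to one command.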

Posted: Fri May 25, 2007 2:13 pm
by ray.wurlod
Within the Columns grid in the Sequential File stage you can scroll right and find a place where you can set up a "missing value" rule. Try changing that rule for DEPT to Replace, so that the job does not fail.

But, yes, you do need to check the file itself and verify whether there are too few delimiters in the line in question.
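One way to do that check from the command line is to count delimiters per line with awk. This is a sketch that assumes a plain comma delimiter with no quoted commas inside fields, and uses the first line's field count as the expected count (shown on a small demo file; point it at your real input instead):

```shell
# Build a demo file where line 3 is short one field, then report any
# line whose comma-separated field count differs from line 1's count.
printf 'a,b,c\nd,e,f\ng,h\ni,j,k\n' > check.csv
awk -F',' 'NR==1 {n=NF} NF!=n {print NR": "NF" fields (expected "n")"}' check.csv
# prints: 3: 2 fields (expected 3)
```

Running this on the full file would list every malformed line at once, rather than finding them one job failure at a time.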

Posted: Sat May 26, 2007 4:33 am
by Christina
That line is not my last line.

But I tried running the job after removing that line, and it gave the same error in some other column.

I am running with about 100 records now.

I have to check whether it runs fine. :(

Posted: Sat May 26, 2007 7:00 am
by chulett
You need to stop removing lines and instead determine the exact nature of the problem and whether it is possible to read the file properly. Some badly formatted delimited files can be... problematic and should be returned to sender. :wink:

Otherwise, these turn out to be 'normal' issues you need to learn how to handle. How about posting some sample lines from the file? Good ones and then the problem children as well. Posting the current version of your error(s) would help too.