Unable to write to a sequential file

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

hiral.chauhan
Premium Member
Posts: 45
Joined: Fri Nov 07, 2008 12:22 pm

Unable to write to a sequential file

Post by hiral.chauhan »

Hello everyone,
I tried searching this forum for a post/topic similar to my problem but did not have much luck. Please forgive me if I have missed anything :)

OK, so I have a job that reads data from a sequential file, does some lookups and transformations, and then writes to another sequential file. I have an ExecSH command that strips nulls out of the generated output file (see below):

tr -d '\000' < /myprod_targetdirectory/xfilename.txt > /myprod_targetdirectory/filename.txt | rm /myprod_targetdirectory/xfilename.txt

For some reason, the first two times the job ran in production the file did not get created, and I received the following message. The file was created on the third run (i.e. today).

"Executed command: tr -d '\000' < /myprod_targetdirectory/xfilename.txt > /myprod_targetdirectory/filename.txt | rm /myprod_targetdirectory/xfilename.txt
*** Output from command was: ***
SH: /myprod_targetdirectory/xfilename.txt: No such file or directory"

The job completed successfully without any errors; the message above is NOT even a warning. Two other jobs execute the same kind of ExecSH command for different files in the same target directory, so I have ruled out a permissions issue, but maybe I am wrong.

I can work around the issue by having the job check whether the file was created and re-running it until it is, since the file did get created on the third run. But I need to understand why the file was not created the first time.

Dear experts, I humbly request your help in finding out why the job did not create the file, and what should be done when something like this happens. Should I remove the ExecSH, handle the nulls in my job, and see whether the rm command had anything to do with it? Or should I check whether the job created the file and re-run it?

I appreciate your help in this!

Thanks,
Hiral
Thanks,
Hiral Chauhan
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Haven't given this much thought, but the first thing I noticed was the fact that you are piping the two commands together. A pipe starts both commands at the same time, so the rm can delete the input file before tr's input redirection has opened it - which would explain the intermittent 'No such file or directory'. Replace the pipe with && for a proper conditional break between the two.
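
For example, with your paths:

Code: Select all

tr -d '\000' < /myprod_targetdirectory/xfilename.txt > /myprod_targetdirectory/filename.txt && rm /myprod_targetdirectory/xfilename.txt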
-craig

"You can never have too many knives" -- Logan Nine Fingers
hiral.chauhan
Premium Member
Posts: 45
Joined: Fri Nov 07, 2008 12:22 pm

Post by hiral.chauhan »

Thanks Craig. Let me implement this and check.

Actually, I have trouble re-creating the scenario... it almost always creates the file in the Development/QA environment. (Wonder why things behave funny in Production :( )
Thanks,
Hiral Chauhan
pandeesh
Premium Member
Posts: 1399
Joined: Sun Oct 24, 2010 5:15 am
Location: CHENNAI, TAMIL NADU

Post by pandeesh »

First check whether your Unix command works as expected when you run it by hand.
pandeeswaran
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

Code: Select all

echo "Some OSs will let you separate commands by semicolon also" ; echo "What's the plural of OS, anyway?"
Choose a job you love, and you will never have to work a day in your life. - Confucius
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

I believe that all will. I specifically mentioned && as it is a conditional separator, meaning the second command will only execute if the first 'succeeds'. With a semi-colon the second one executes regardless of the fate of the first. Use whichever is appropriate.
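
A quick way to see the difference at any shell prompt:

Code: Select all

false && echo "only runs if the first command succeeds"
false ; echo "runs regardless of how the first command fared"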

OSes. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

Got it... thanks.
Choose a job you love, and you will never have to work a day in your life. - Confucius
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

There's also || which executes the downstream command only if the upstream command fails.
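
For example:

Code: Select all

false || echo "runs only because the upstream command failed"
true || echo "never runs"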

OSs (there's no second e in Systems except in French)
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
hiral.chauhan
Premium Member
Posts: 45
Joined: Fri Nov 07, 2008 12:22 pm

Post by hiral.chauhan »

Hello Everyone,

I implemented your advice of replacing the pipe character with "&&" and it is working like a charm.
But my problem is that the business wants me to handle the [NUL] (0x00) characters without using the Unix command.

DataStage by default pads with 0x00 when a sequential file is generated. I am trying to remove this padding. So far I have taken routes like these:

1. Set APT_STRING_PADCHAR = 0x20, but then failed to remove the spaces using the Trim function, since my target is Char.
2. Set APT_STRING_PADCHAR = 0x20 and used the Convert() function, Convert("Char(20)","",Myfieldname), to remove the spaces, but failed again.

I am not sure what I am doing wrong, or maybe I am completely on the wrong track here. It would be a great help if you could throw some light on how to get this issue resolved...

I am running out of ideas now...
Thanks,
Hiral Chauhan
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

0x20 is 32 in decimal. You should have used Char(32) rather than Char(20), but " " is actually more efficient. None of these will handle ASCII NUL, however, which is Char(0) - and is not the same as NULL (unknown value).

Code: Select all

Convert(" ", "", NullToEmpty(MyFieldValue))
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

hiral.chauhan wrote:my target is Char
Char is a fixed-width data type. Your attempt to use Trim and Convert is fruitless since the result will just be padded back to the required width.

If you don't want the pad characters, then use the Varchar data type.

Mike
hiral.chauhan
Premium Member
Posts: 45
Joined: Fri Nov 07, 2008 12:22 pm

Post by hiral.chauhan »

Hi Ray, I implemented

Code: Select all

Convert(" ", "", NullToEmpty(MyFieldName))
after setting APT_STRING_PADCHAR = 0x20, but I still see spaces padded in the file. :(
Am I doing something wrong?

I also tried Convert(Char(32), "", MyFieldName), but still no luck...

Hi Mike, I used Varchar instead of Char, but it is still padding spaces (or nulls, depending on the pad char environment variable) between the columns...

For example, below is my required output:

00010562012021600001755963F0000333

but Notepad++ shows me the output below when I specify APT_STRING_PADCHAR = 0x20; it shows [NUL] instead when APT_STRING_PADCHAR is left at its default.

0001056 20120216 00001755963 F 0000333

I cannot see the spaces or [NUL] characters in regular Notepad...
Thanks,
Hiral Chauhan
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Have you defined " " as the field delimiter on the Format properties of the sequential file stage?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
hiral.chauhan
Premium Member
Posts: 45
Joined: Fri Nov 07, 2008 12:22 pm

Post by hiral.chauhan »

Hi Ray,

No, I have not defined " " as the field delimiter on the Format properties.

Should I?

Thanks,
Hiral
Thanks,
Hiral Chauhan
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

No... Ray asked that because your output looks as if that could've been the case.

Are you by chance moving a Decimal to a Varchar?

Mike