Writing High Values into ASCII or Binary File

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.


eyabmo_rbc
Participant
Posts: 10
Joined: Tue Nov 20, 2007 7:15 am
Location: CANADA

Writing High Values into ASCII or Binary File

Post by eyabmo_rbc »

How do I assign a high value to a column if my output column, col1, is part of an ASCII or EBCDIC file?

thanks
E.M
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

What is your job design? What stage types are you using? Under what conditions do you need to assign a "high value"? Have you decided what "high value" means/is for each of the data types you are using?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
eyabmo_rbc
Participant
Posts: 10
Joined: Tue Nov 20, 2007 7:15 am
Location: CANADA

HIGH VALUE

Post by eyabmo_rbc »

ray.wurlod wrote:What is your job design? What stage types are you using? Under what conditions do you need to assign a "high value"? Have you decided what "high value" means/is for each of the data types you are u ...
A parallel job reads data from an ASCII file, passes it through a Transformer, and writes the data back into a Data Set and an ASCII file. The columns I am reading are supposed to take high value as a default value; one of the columns is character and the other is decimal.

thanks
E.M
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

"high" and "low" values are a COBOL concept - is this data going back to COBOL? You will need to check what the EBCDIC "HIGH VALUE" is and either use that value in a binary field, or use the appropriate ASCII mapping for a non-binary one so that when the file is converted back to EBCDIC those values are correct.
eyabmo_rbc
Participant
Posts: 10
Joined: Tue Nov 20, 2007 7:15 am
Location: CANADA

Post by eyabmo_rbc »

ArndW wrote:"high" and "low" values are a COBOL concept - is this data going back to COBOL? You will need to check what the EBCDIC "HIGH VALUE" is and either use that value in a binary field, or use the appropria ...
We will have both scenarios. The first output will be used for our load, so what would be the equivalent "high value" for a field in an ASCII file?
The second file will be used by another group; they need it in a COBOL-equivalent form. To make it simple: what is the "high value" equivalent in a Data Set or ASCII file if the column is char or decimal?
E.M
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

DataStage has no knowledge of "HIGH VALUE" - it is a COBOL concept. In most COBOL implementations a PIC X high value is 0xFF, but I seem to recall that there are differences depending on the COBOL implementation. The ANSI standard would be 0xFF. I can't recall what is used for COMP-n data types; I don't have an ANSI reference book available here.

I think an EBCDIC 0xFF is normally not defined and might be transferred to ASCII unchanged. In that case the value would remain CHAR(255). Try it and tell us if that is the case.
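
One way to try it outside DataStage is a quick codepage round trip. This is only a sketch; cp037 (IBM EBCDIC US/Canada) and latin-1 are assumptions, so substitute whatever codepages your conversion actually uses:

# Check what happens to the EBCDIC HIGH-VALUE byte (0xFF) when it is
# converted to an 8-bit ASCII-style codepage. cp037 and latin-1 are
# assumptions for this illustration.
ebcdic_high = b"\xff"
as_unicode = ebcdic_high.decode("cp037")      # EBCDIC -> Unicode
as_ascii_byte = as_unicode.encode("latin-1")  # Unicode -> 8-bit file byte
print(hex(as_ascii_byte[0]))                  # shows whether 0xFF survived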
infranik
Participant
Posts: 20
Joined: Tue Oct 11, 2005 8:11 am

Post by infranik »

(I am continuing this thread as I didn't want to open a new one... hope no one minds.)

I am able to get perfect high values and low values in a server job using the functions Char(255) and Char(0), but when I try to do the same in parallel jobs I cannot achieve the same result (using a Sequential File stage, binary data format and the EBCDIC character set). I can get low values but cannot get high values.

Does anyone know how to get high values in a parallel job?
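
(As a side note for anyone diagnosing the same thing: a quick way to see which byte actually lands in the column is to hex-dump the first record of the output file. The file name and record length below are placeholders.)

# Dump the first record of the job's output file to see whether the
# "high value" column really contains 0xFF. File name and record length
# are assumptions for this illustration.
RECORD_LENGTH = 32  # assumed fixed record length
with open("output.dat", "rb") as f:
    record = f.read(RECORD_LENGTH)
print(" ".join(f"{b:02x}" for b in record))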