So we were finally able to fix this; I am outlining the steps we took:
1) Read the columns with hex values with the Extended property set to "Unicode".
2) Used the Seq and SeqAt functions to convert the values from EBCDIC to ASCII.
The file was downloaded as binary from the mainframe server.
Cheers!
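The Seq/SeqAt step above does the codepage translation inside DataStage; as a rough illustration, the same EBCDIC-to-ASCII translation can be sketched in Python with the standard-library cp037 codec (the exact mainframe code page is an assumption here):

```python
# Minimal sketch: decode EBCDIC bytes into text with Python's stdlib codec.
# Assumption: the mainframe uses code page CP037 (US EBCDIC); adjust as needed.
ebcdic_bytes = bytes([0xC8, 0x85, 0x93, 0x93, 0x96])  # "Hello" encoded in CP037

ascii_text = ebcdic_bytes.decode("cp037")
print(ascii_text)  # -> Hello
```

Reading the file in binary mode (as done here) matters: a text-mode FTP transfer would have already mangled the EBCDIC bytes before any conversion.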
Search found 24 matches
- Tue Sep 06, 2016 10:47 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem reading hexadecimal characters in CFF
- Replies: 6
- Views: 4086
- Mon Aug 29, 2016 1:23 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem reading hexadecimal characters in CFF
- Replies: 6
- Views: 4086
Tried to read the file using the Sequential File stage and defined properties similar to the CFF, with EBCDIC as the character set and binary format. The special characters appear like {0c}, {d1}, etc. when I view the data via Designer. But when running the job, the job aborts with a short read error, the job ...
- Mon Aug 29, 2016 9:23 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem reading hexadecimal characters in CFF
- Replies: 6
- Views: 4086
Thanks for your response Franklin. I tried to read this attribute as Binary, but it threw an error in the very next attribute of the copy code. This attribute was being read OK with decimal values with the original copy code when the problematic column was read as PIC X(1). This next column is defin...
- Mon Aug 29, 2016 9:14 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: reading hex value in datastage
- Replies: 2
- Views: 3471
Thanks for your response Franklin. I tried to read this attribute as Binary, but it threw an error in the very next attribute of the copy code. This attribute was being read OK with decimal values with the original copy code when the problematic column was read as PIC X(1). This next column is defin...
- Mon Aug 29, 2016 7:40 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem reading hexadecimal characters in CFF
- Replies: 6
- Views: 4086
Problem reading hexadecimal characters in CFF
Hi - There is a mainframe file that we copied over to our Unix server (using binary mode). When trying to read the file in DataStage, some columns show as special characters. When we asked our source about it, they said these columns are defined as hex on their side. For example, they see the value of th...
- Mon Aug 29, 2016 7:40 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem reading hexadecimal characters in CFF
- Replies: 1
- Views: 2113
Problem reading hexadecimal characters in CFF
Hi - There is a mainframe file that we copied over to our Unix server (using binary mode). When trying to read the file in DataStage, some columns show as special characters. When we asked our source about it, they said these columns are defined as hex on their side. For example, they see the value of th...
- Mon Dec 07, 2015 12:26 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Confusion on Partitioning for Join stage
- Replies: 14
- Views: 9107
- Wed Dec 02, 2015 12:15 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Question on Join
- Replies: 1
- Views: 1779
Question on Join
Hi -
Say there are 5 key columns used to do the join. One of those columns has a null value in both the source and the reference table. The other four columns have matching values.
Will a lookup be found in this case (no null handling is done), or will the job abort, throw a warning, etc.?
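For what it's worth, under SQL-style semantics a NULL key never equals another NULL, so the row would not match; whether the DataStage Lookup stage behaves the same depends on its null handling. A minimal Python sketch of those semantics (None standing in for NULL, key values made up):

```python
# Sketch of SQL-style key matching: NULL (None) never equals NULL.
# Hypothetical 5-column keys; the fifth column is None on both sides.
def keys_match(source_key, ref_key):
    """Return True only if every column pair matches and neither side is NULL."""
    return all(s is not None and r is not None and s == r
               for s, r in zip(source_key, ref_key))

source = ("A", "B", "C", "D", None)
reference = ("A", "B", "C", "D", None)
print(keys_match(source, reference))  # -> False: the NULL column never matches
```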
- Wed Nov 25, 2015 8:17 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Checksum Stage not giving consistent output
- Replies: 5
- Views: 4196
My observations for this were very unusual: when the original job was renamed with a suffix, compiled, and rerun, the record count was exactly the same as in the copy job. When the job was renamed back to the original name and rerun, the count was again exactly the same as in the copy job. At least the count with b...
- Wed Nov 25, 2015 8:14 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Lookup scenario with duplicate keys in the reference table
- Replies: 2
- Views: 3590
Lookup scenario with duplicate keys in the reference table
Hi - There is a scenario in a new requirement of our project: the input stream has columns COLA and COLB; COLB is the key column. The reference link has COLB and COLC. I want to do a left outer join based on COLB, but COLB has duplicate entries in the reference link. The values for COLC are ...
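As an illustration of why duplicate reference keys matter: a relational left outer join emits one output row per matching reference row (a Join stage behaves this way), whereas a Lookup stage typically returns a single match. A pure-Python sketch of the join semantics (row values are made up):

```python
# Sketch: left outer join on COLB where the reference side has duplicate keys.
# Each input row that matches N reference rows comes out N times.
stream = [{"COLA": 1, "COLB": "x"}, {"COLA": 2, "COLB": "y"}]
reference = [{"COLB": "x", "COLC": "r1"}, {"COLB": "x", "COLC": "r2"}]

output = []
for row in stream:
    matches = [r for r in reference if r["COLB"] == row["COLB"]]
    if matches:
        for r in matches:
            output.append({**row, "COLC": r["COLC"]})
    else:
        output.append({**row, "COLC": None})  # left outer: keep unmatched rows

print(output)
# COLB "x" comes out twice (duplicate reference keys); "y" once with COLC=None
```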
- Mon Nov 02, 2015 9:57 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Checksum Stage not giving consistent output
- Replies: 5
- Views: 4196
Found something really weird during the testing: the job I was working on filters records based on the checksum. It basically checks, after the sort, whether the checksum for the record is equal to the previous value; if not, it sends the record to the output file. If I create a copy of the job, com...
- Wed Oct 28, 2015 1:59 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Checksum Stage not giving consistent output
- Replies: 5
- Views: 4196
- Wed Oct 28, 2015 12:57 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Checksum Stage not giving consistent output
- Replies: 5
- Views: 4196
Checksum Stage not giving consistent output
Hi - I have a job that does a checksum on certain columns. The stage property has "Use all columns except those specified" set, and some columns are defined in the "Exclude Column" list. As part of a change request I have to bypass a couple of additional columns, but I don't want...
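I can't say how the Checksum stage serializes columns internally, but the "all columns except those excluded" idea can be sketched in Python with hashlib (the column names and delimiter here are assumptions):

```python
import hashlib

# Sketch of "use all columns except those specified": hash every column
# value except the excluded ones. Column names below are hypothetical.
def row_checksum(row, exclude=()):
    included = {k: row[k] for k in sorted(row) if k not in exclude}
    payload = "|".join(f"{k}={v}" for k, v in included.items())
    return hashlib.md5(payload.encode()).hexdigest()

row = {"ID": 7, "NAME": "abc", "UPDATED_TS": "2015-11-02"}
print(row_checksum(row, exclude=("UPDATED_TS",)))
```

Excluding an extra column changes which values feed the hash, so every existing checksum value changes too; that is worth keeping in mind when the checksum is compared against previously stored values.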
- Thu Jan 12, 2012 11:50 pm
- Forum: General
- Topic: Record Count in Transformer
- Replies: 10
- Views: 6623
Record Count in Transformer
Hi,
I have a job like ....
seq. file.....>transformer......>dataset
I want to count the number of records coming from the source. How can I do that in the transformer stage?
P.S.: I don't want to use DSgetlinkinfo(), the Aggregator stage, or any script.
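One common approach is a running counter incremented once per row (in a transformer this would live in a stage variable); the counting logic itself, sketched in Python with made-up records:

```python
# Sketch: a stage-variable-style counter incremented once per record.
rows = ["r1", "r2", "r3", "r4"]  # stand-ins for the source records

count = 0                # the "stage variable"
for _row in rows:
    count += 1           # incremented for every record passing through

print(count)  # -> 4
```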
- Wed Dec 14, 2011 11:11 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem with date format
- Replies: 1
- Views: 1779
Problem with date format
Hi, I have a parallel job with a dataset as the source. A field AMORT_DT comes in as VARCHAR from the input and is converted to Date in the transformer. After the lookup I want to check the converted Date field for an empty value (i.e. ''). I think '' is not a valid date type. Is there any othe...
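One way to sketch the idea (outside DataStage) is to treat the empty string as null before parsing, rather than testing the converted Date afterwards; in Python, with the date format string as an assumption:

```python
from datetime import datetime

# Sketch: treat an empty VARCHAR as "no date" before converting.
# Field name AMORT_DT is from the post; the format "%Y-%m-%d" is an assumption.
def to_date(value, fmt="%Y-%m-%d"):
    if value is None or value.strip() == "":
        return None                      # '' is not a valid date; map to null
    return datetime.strptime(value, fmt).date()

print(to_date("2011-12-14"))  # -> 2011-12-14
print(to_date(""))            # -> None
```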