Reading Multiple Record Types (Record Id) using CFF stage

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

Not sure if this will help, but here are the differences I see between your settings and the ones I use successfully in this situation (sequential file read into transformer constraints on record type):

NLS map = ISO-8859-1
Format - Type defaults - Decimal: Allow all zeros = yes

I should also point out that I use "pure" mainframe formatting: no delimiters at any point, record type implicit. Part of your problem, as I see it, is a lack of consistency in creating your file to begin with. All three record formats should have the same length with filler for the "shorter" header and trailer, or they should all be variable length with the usual length data embedded at the beginning of the record.
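As a rough sketch of the fixed-length approach (plain Python rather than DataStage, with a made-up record length and type codes), every record is padded out to a common length and routed on the type indicator at the start of the record:

[code]
# Sketch only: RECORD_LEN and the type codes are assumptions for illustration,
# not taken from the original poster's file layout.
RECORD_LEN = 80                      # assumed common record length, filler-padded
TYPE_HEADER, TYPE_DETAIL, TYPE_TRAILER = b"00", b"10", b"99"

def split_records(raw: bytes):
    """Yield (record_type, record) pairs from a fixed-length byte stream."""
    for offset in range(0, len(raw), RECORD_LEN):
        record = raw[offset:offset + RECORD_LEN]
        yield record[:2], record     # first two bytes carry the record type

def route(raw: bytes):
    """Route each record to the header, detail, or trailer bucket."""
    headers, details, trailers = [], [], []
    for rec_type, record in split_records(raw):
        if rec_type == TYPE_HEADER:
            headers.append(record)
        elif rec_type == TYPE_DETAIL:
            details.append(record)
        elif rec_type == TYPE_TRAILER:
            trailers.append(record)
        else:
            raise ValueError(f"unexpected record type {rec_type!r}")
    return headers, details, trailers
[/code]

In the job, that routing is exactly what the transformer constraints (or the CFF record ID) do against the record-type column.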
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
kogads
Premium Member
Posts: 74
Joined: Fri Jun 05, 2009 5:36 pm

Post by kogads »

Thanks for the information FranklinE.
I have tried the settings you mentioned above, including the record type and no delimiters, but I am still getting unreadable characters in the output.
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

Glad to help.

How are you parsing the VarChar "common" record into the individual columns? I'm guessing, but your problem may be type conversion rather than the input format.

Packed-decimal and numeric-binary EBCDIC is a tough thing to handle.
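To illustrate why (a quick sketch in plain Python, not DataStage): the digits of a packed field are stored two to a byte as nibbles, so the bytes are not printable characters, and pushing them through a character-set map only corrupts them further.

[code]
# The value +12345 stored as packed decimal (COMP-3) is the three bytes below.
packed = bytes([0x12, 0x34, 0x5C])

print(packed.decode("iso-8859-1"))   # mostly control characters and stray symbols
print(packed.decode("cp037"))        # EBCDIC map: still not the digits 1 2 3 4 5
[/code]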

I thought of one more possibility: What is your NLS Locale setting? Is it OFF or Project Default (OFF)? If not, that might be messing with your character sets.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
kogads
Premium Member
Posts: 74
Joined: Fri Jun 05, 2009 5:36 pm

Post by kogads »

As you mentioned, the problem might be the conversion of the packed and binary data in the sequential file, since I defined the column with a VarChar data type. I tried the CFF stage because of this packed and binary conversion, but there are issues with the record ID constraint in that stage for the detail records.

Under Job Properties - NLS, the default collation locale for stages is set to Project Default (OFF) and the default map for stages is set to Project Default (UTF-8).

Thanks for your time and information.
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

One final thought: Examine how you are converting your data from character to packed decimal. Unfortunately, I don't have a "live" example to offer, but in theory you should be able to reference the packed decimal field in the varchar column by position and length, and use StringToDecimal into a column with the correct attribute settings for packed decimal.
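To make the positional idea concrete, here is a rough Python sketch of decoding a packed field taken from a record by start position and length; the offset, length, and scale are made up for illustration, and in the job itself they would come from the copybook, with StringToDecimal and the correct Decimal attribute settings doing the equivalent work.

[code]
from decimal import Decimal

def unpack_comp3(record: bytes, start: int, length: int, scale: int = 0) -> Decimal:
    """Decode a packed-decimal (COMP-3) field referenced by position and length."""
    field = record[start:start + length]
    digits = []
    for byte in field[:-1]:
        digits.append(str(byte >> 4))            # high nibble
        digits.append(str(byte & 0x0F))          # low nibble
    digits.append(str(field[-1] >> 4))           # last byte: high nibble is a digit,
    sign_nibble = field[-1] & 0x0F               # low nibble is the sign
    sign = -1 if sign_nibble in (0x0B, 0x0D) else 1
    return Decimal(sign * int("".join(digits))).scaleb(-scale)

# Example: bytes 0x12 0x34 0x5C at offset 0, length 3, scale 2 -> 123.45
print(unpack_comp3(bytes([0x12, 0x34, 0x5C]), 0, 3, scale=2))
[/code]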

If you've already done that, all I can add is good luck. :(
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
kogads
Premium Member
Posts: 74
Joined: Fri Jun 05, 2009 5:36 pm

Post by kogads »

Thanks for your response. I set Packed = Yes under Decimal in the Sequential File stage, so I thought the Sequential File stage could convert packed decimals on its own. In the past I extracted data successfully with the same setting when reading another file that had one record type and 4 columns (the Sequential File stage converted the packed decimals by itself, without any external functions).

I cannot understand why I am getting unreadable characters (boxes, question marks, and some numbers) with the current file even though I have used the same settings as in the past.

Thanks for your time and response.