Page 1 of 2
Left outer join in HASH file
Posted: Wed Jul 21, 2010 4:09 am
by karthi_gana
All,
1) I have a sequential file. It contains 16,000 records.
2) I have a HASH file. It contains 2,000 records.
I designed the job like this:
Code:
Seq_File -----> Transformer <--------- Hash File
                     |
                     |
                     v
               Final_Seq_File
Seq_File contains Fund_Id, Portfolio_Id
Hash File contains Portfolio_id
Final_Seq_File contains 2000 records now.
But my requirement is that all 16,000 records must be stored in Final_Seq_File, along with the Portfolio_id from the HASH file; if a Portfolio_id does not match anything in the Seq_File row, I simply have to store NULL.
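In other words, the requirement is a left outer join: every source row survives, and the looked-up column is NULL when there is no match. A minimal sketch of that semantics in Python (the row data and the `Hash_Portfolio_id` output column are illustrative, not from the actual job):

```python
# Left-outer-join semantics the poster wants: keep ALL source rows,
# fill the looked-up column with None (NULL) when the lookup misses.

seq_rows = [
    {"Fund_Id": 1, "Portfolio_Id": "P1"},
    {"Fund_Id": 2, "Portfolio_Id": "P2"},
    {"Fund_Id": 3, "Portfolio_Id": "P9"},  # no match in the hash file
]

# Hash file keyed on Portfolio_id
hash_file = {"P1": {"Portfolio_id": "P1"}, "P2": {"Portfolio_id": "P2"}}

final_rows = []
for row in seq_rows:
    match = hash_file.get(row["Portfolio_Id"])            # the lookup
    looked_up = match["Portfolio_id"] if match else None  # NULL on miss
    final_rows.append({**row, "Hash_Portfolio_id": looked_up})

# All three rows survive; the unmatched one carries None.
```

An inner join, by contrast, would keep only the two matching rows, which is the 2,000-record behaviour the poster is seeing.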
Posted: Wed Jul 21, 2010 4:30 am
by Sainath.Srinivasan
Isn't that the default working mechanism - unless you have constrained something !!??
Posted: Wed Jul 21, 2010 5:43 am
by chulett
Right, you have a constraint that only allows lookup successes to pass - you need to "turn that off".
Posted: Wed Jul 21, 2010 7:16 am
by karthi_gana
chulett wrote:Right, you have a constraint that only allows lookup successes to pass - you need to "turn that off".
I don't have any constraint under 'Transformer'.
Posted: Wed Jul 21, 2010 7:32 am
by Sainath.Srinivasan
See the row count per link and find where the constraint is.
Posted: Wed Jul 21, 2010 8:02 am
by battaliou
Er, you're trying to put NULL into a sequential file? Good luck.
Posted: Wed Jul 21, 2010 8:32 am
by chulett
Have you checked your job's log to see if there were warnings logged during the write to the sequential file?
Posted: Wed Jul 21, 2010 9:38 am
by Sainath.Srinivasan
Ensure that all columns are nullable. Or set a dummy value if the lookup fails.
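The dummy-value approach suggested here can be sketched as a NULL-to-sentinel substitution applied before the row is written to the (non-nullable) flat file; the function name and the sentinel string are illustrative only:

```python
# Replace a failed-lookup NULL with a sentinel so the row can be
# written to a file whose columns are declared non-nullable.

DUMMY = "NULL"  # illustrative sentinel; could be "", "-1", etc.

def to_output(value):
    """Return the looked-up value, or DUMMY when the lookup returned NULL."""
    return DUMMY if value is None else value
```

In the actual Server job this substitution would live in the Transformer's output-column derivation rather than in separate code.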
Posted: Thu Jul 22, 2010 12:28 am
by karthi_gana
chulett wrote:Have you checked your job's log to see if there were warnings logged during the write to the sequential file?
Yes... I have checked the log... I didn't see any warnings...
Posted: Thu Jul 22, 2010 1:57 am
by Sainath.Srinivasan
Did you ensure that all NULL returns are made into non-null values?
DataStage Server is notorious for losing rows when NULLs are moved into non-nullable columns.
Posted: Thu Jul 22, 2010 3:57 am
by karthi_gana
Sainath.Srinivasan wrote: Did you ensure that all NULL returns are made into non-null values?
DataStage Server is notorious for losing rows when NULLs are moved into non-nullable columns.
I am using only two columns in the HASH file, and both of them are NON NULL values,
so I chose 'No' for the Nullable property.
Posted: Thu Jul 22, 2010 4:06 am
by Sainath.Srinivasan
They may be non-nullable in the hashed file. But if the lookup fails, the return is null.
Posted: Thu Jul 22, 2010 4:15 am
by karthi_gana
karthi_gana wrote: Sainath.Srinivasan wrote: Did you ensure that all NULL returns are made into non-null values?
DataStage Server is notorious for losing rows when NULLs are moved into non-nullable columns.
I am using only two columns in the HASH file, and both of them are NON NULL values,
so I chose 'No' for the Nullable property.
I am just thinking of the steps below...
1) Get matching records
2) Get unmatching records
3) Merge both the files
I think it may be a workaround for this problem...
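The three steps above can be sketched in Python (sample data is illustrative) to show what they actually produce:

```python
# Step 1: matched rows, Step 2: unmatched rows, Step 3: merge.
# The merge is just the full source set again, i.e. "all records".

seq_rows = [("F1", "P1"), ("F2", "P2"), ("F3", "P9")]
hash_keys = {"P1", "P2"}  # keys present in the hash file

matched = [r for r in seq_rows if r[1] in hash_keys]        # step 1
unmatched = [r for r in seq_rows if r[1] not in hash_keys]  # step 2
merged = matched + unmatched                                # step 3
```

As the next reply points out, matched plus unmatched is simply every source record, which is what a single unconstrained lookup already delivers.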
Posted: Thu Jul 22, 2010 4:34 am
by ray.wurlod
Your steps 1 through 3 can be summarised as "get ALL the records".
Posted: Thu Jul 22, 2010 5:15 am
by arunpramanik
karthi_gana wrote:
2) Get unmatching records
i think it may be a workaround for this problem...
That is exactly what you are not getting in the job you have designed, so your workaround will not work.