When I start my job without the UV lookup it finishes after 8 minutes. When I start it with the UV lookup it finishes after 15 hours <wow>
Any ideas about what causes the problem?
Hi, I have to read data from a sequential file into a stage that will let me perform a lookup. The input data has a composite key made of the first two columns: name, nr, col1, col2, ... col3 (primary key: name + nr). So I would like to create a hashed file. But in a later job I need to make two different lookups using differ...
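The composite-key lookup described above can be mimicked outside DataStage with a dictionary keyed on the concatenated columns, much like a hashed file keyed on name + nr. A minimal sketch in Python; the data values are hypothetical:

```python
# Sketch: emulate a hashed-file lookup keyed on a composite
# two-column key (name + nr). Data values are made up.
reference = {}
rows = [
    ("smith", "01", "dataA"),
    ("jones", "02", "dataB"),
]
for name, nr, payload in rows:
    # Concatenated key plays the role of the hashed file's key.
    reference[name + nr] = payload

# Lookup driven by a source record's name and nr:
src_name, src_nr = "smith", "01"
match = reference.get(src_name + src_nr)
```

The same idea carries over to the hashed file: define its key as the concatenation of both columns, and build the reference link's key expression the same way.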
Use user-defined SQL in the ODBC stage. If the lookup delivers N rows for a single lookup, N rows are delivered to the Transformer stage's output (assuming no constraint). I solved my problem by using a UV stage. But before I put my data into the UV table I need to cut off the last part of the id column (...
Open Help from the Designer and find the topic Defining Multirow Lookup for Reference Inputs, which explains it all. Yes, I read that chapter some time ago, but I still don't know how to solve my problem, because my foreign key is not the same as the key in the reference table; it is concatenated...
Hi Piotrek, I hope I have understood your question correctly. No, you haven't :) There are no duplicate records in the lookup table. But for every source record there are n (a varying number of) records in the lookup table, and each record has a different id: "refId;nr", refId - the lookup key, nr - a n...
I have a reference column in my input. I need to connect that reference field with records from another file. In that file there will be many referenced records, because the id is in the format reference;n. So there could be many referenced records: reference;1, reference;2, reference;3, etc. But I don't ...
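The prefix relationship above (one source key matching every reference;n row) can be sketched by grouping the reference rows on the part of the id before the semicolon. A Python sketch with hypothetical data and names:

```python
from collections import defaultdict

# Reference rows whose ids have the format "reference;n" (made-up data).
ref_rows = [
    ("ABC;1", "first"),
    ("ABC;2", "second"),
    ("XYZ;1", "other"),
]

# Index the rows by the id prefix before the semicolon -- this is
# the "cut the last part of the id column" step.
by_ref = defaultdict(list)
for ref_id, payload in ref_rows:
    base = ref_id.split(";", 1)[0]
    by_ref[base].append(payload)

# A source record carrying key "ABC" now pulls every ABC;n row.
matches = by_ref["ABC"]
```

In the UV-stage approach this corresponds to stripping the ";n" suffix before loading the table (or selecting on it with user-defined SQL), so the source key matches all n rows in one multirow lookup.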
Your question has the answer: use the Merge stage. I don't think so. I tried doing it with the Merge stage, but I didn't get the expected output. As I understand it, Merge takes the data from the first file and merges it with every record in the second file that has the same key. But I need the data from the first file only once, t...
I have three files:
- first file:
  0,data
  1,data
  2,data
  etc
- second file:
  0,data
  0,data
  ...
  1,data
  1,data
  ...
  etc
- third file:
  0,data
  0,data
  ...
  1,data
  1,data
  1,data
  etc
And I need to merge those files by taking, for each record from the first file, all records with the same key from the second file and af...
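The interleaving described above can be sketched as follows (Python, with small in-memory stand-ins for the three files): for each key, emit the first-file record once, then all second-file records with that key, then all third-file records with that key:

```python
from collections import defaultdict

# Hypothetical stand-ins for the three key,data files.
first  = [("0", "h0"), ("1", "h1")]
second = [("0", "s0a"), ("0", "s0b"), ("1", "s1a")]
third  = [("0", "t0a"), ("1", "t1a"), ("1", "t1b")]

def index(rows):
    """Group rows by key, preserving order within each key."""
    d = defaultdict(list)
    for key, data in rows:
        d[key].append(data)
    return d

sec, thr = index(second), index(third)

merged = []
for key, data in first:               # one record per key from file 1
    merged.append(data)
    merged.extend(sec.get(key, []))   # then all file-2 matches
    merged.extend(thr.get(key, []))   # then all file-3 matches
```

This is exactly the grouping a plain Merge stage does not give you, since Merge pairs the file-1 record with every match rather than emitting it once.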
The output file needs to be constructed in three parts:
- The header record: one job or routine; create the file.
- The individual account records: one job sequence; append to the file.
- The trailer record: one job or routine; append to the file.
But what if I get not just one record on my input, but many more? And...
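The create-then-append pattern above might be sketched like this (Python; the file name and record layouts are hypothetical). Note that many input records are not a problem: the detail step simply appends one line per record, however many there are:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "accounts.out")

# 1. Header record: create (truncate) the file.
with open(path, "w") as f:
    f.write("HDR|2024-01-01\n")

# 2. Individual account records: append. Many input records
#    just mean many appended detail lines.
accounts = ["ACC|0001|100.00", "ACC|0002|250.00"]
with open(path, "a") as f:
    for rec in accounts:
        f.write(rec + "\n")

# 3. Trailer record: append, carrying the detail count.
with open(path, "a") as f:
    f.write("TRL|%d\n" % len(accounts))
```

In DataStage terms each numbered step maps to one job or routine, with the detail job writing in append mode so the header survives.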
Maybe somebody has had a similar problem and can give me some advice? I am still thinking about the problem and haven't found an acceptable solution yet. I could write some basic functions to handle those multivalue fields, but I am looking for a more universal solution.
Thanks in advance.
They are not easy to call, but you can set the functions up to be callable through the GCI from DataStage BASIC, and then create a Routine containing that call. Download the GCI manual from IBM's UniVerse web site. So I think it will be much easier and faster to re-write the source code of that...