ray.wurlod wrote:This is the correct format. The braces are added by the data browser; they are not actually part of the data, but reflect the metadata (record schema), in which the subrecord is indicated. ...
Hi All, I am using the Compare stage, and for the output I enabled RCP and defined 3 columns: diff (tinyint), first (unknown), second (unknown). I am getting proper output, but the problem is it comes in the format below: diff first second -1 (1,aaa,bbb) (3,aaa,bbb). I am getting subrecords in braces. Is this the correct forma...
Does an XML file have a default XSD file? I mean, I used a sample XML file in my job without defining an XSD file; I imported the metadata from that sample XML file itself and used it in XML stages....
If a default XSD is defined, where can we see it?
Hi All, I was trying to generate a job with FastTrack. The idea is to join two tables in FastTrack. My doubt is that I don't have a target join table in the database; does FastTrack support a sequential file to store the join result? And if it doesn't, should we have a target table already present in the data...
Hi Ray, I tried what you've suggested, but I am not getting the result. What I understood is that when "Pre-load file to memory" is set, the lookup is done from memory, and if the record is not found, the lookup is done in the file. By doing this, if I am getting a record which is updated twice before ...
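The memory-then-file fallback behaviour described above can be sketched generically. This is a hypothetical Python illustration of the idea, not DataStage's actual hashed-file internals; all names are assumptions:

```python
# Sketch of a "pre-load file to memory" lookup with a file fallback.
# make_lookup / read_from_file are hypothetical names for illustration.

def make_lookup(records):
    """Pre-load all records into an in-memory dict keyed on the key column."""
    return {rec["key"]: rec for rec in records}

def lookup(key, cache, read_from_file):
    # First try the pre-loaded memory cache ...
    rec = cache.get(key)
    if rec is not None:
        return rec
    # ... and only fall back to the (slower) file read on a cache miss.
    return read_from_file(key)

cache = make_lookup([{"key": 1, "val": "aaa"}])
print(lookup(1, cache, lambda k: None)["val"])  # aaa (served from memory)
```

Note that with this scheme, a record updated in the file after the pre-load will still be served stale from memory, which may explain the behaviour the poster is seeing.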
Hi All, I am trying to implement SCD2 in a server job...

ODBC (Dim table) -------> Transformer --------> SEQ (capturing updates)
                              |
                          hash file
                              |
                   ODBC (source / reference)

If any record got updated from the reference, a new record with the same key is created with the updated values, which is basically SCD2. My doubt is i...
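For readers following the question, the core SCD Type 2 rule the poster describes (on a change, retire the old row and insert a new row with the same key and updated values) can be sketched like this. Column names (`key`, `value`, `current`) are illustrative assumptions, not the poster's actual schema:

```python
def apply_scd2(dimension, incoming):
    """Sketch of SCD Type 2: mark the old version non-current and
    append a new current row when a keyed record's values change."""
    for row in incoming:
        current = next((d for d in dimension
                        if d["key"] == row["key"] and d["current"]), None)
        if current is None:
            dimension.append({**row, "current": True})   # brand-new key
        elif current["value"] != row["value"]:
            current["current"] = False                   # retire old version
            dimension.append({**row, "current": True})   # new version, same key
    return dimension

dim = [{"key": 1, "value": "aaa", "current": True}]
apply_scd2(dim, [{"key": 1, "value": "bbb"}])
print(len(dim), dim[0]["current"], dim[1]["value"])  # 2 False bbb
```

In the job above, the hash file would hold the current dimension rows for the lookup, and the Transformer would apply this compare-and-branch logic.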
Thanks chullet for your reply. So, if the data is huge, around a million records, is it efficient to use a Hashed File stage to store data before further processing? I have another question: Q. Hashed files store non-key values as dynamic arrays. I want to know how key values are mapped to non-key values... lik...
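On the key-to-value mapping question: a hashed file applies a hash function to the key to choose a group (bucket), and the record's non-key values are stored in that group under the key. A minimal sketch of the idea follows; the group count and hash function here are toy assumptions (the real dynamic-file hashing algorithms are considerably more involved):

```python
NUM_GROUPS = 4  # hypothetical small number of groups/buckets

def group_for(key):
    """Map a key string to a group number with a simple hash (illustrative only)."""
    return sum(key.encode()) % NUM_GROUPS

groups = [dict() for _ in range(NUM_GROUPS)]

def write(key, non_key_values):
    groups[group_for(key)][key] = non_key_values   # store values under the key

def read(key):
    return groups[group_for(key)].get(key)         # same hash finds the same group

write("CUST001", ["aaa", "bbb"])
print(read("CUST001"))  # ['aaa', 'bbb']
```

Because the same hash of the key always lands in the same group, a read never needs to scan the whole file, which is what makes the hashed file fast as a reference lookup.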
Hi All, I am a newbie to DataStage. Firstly, this forum is very useful for learning, and thanks to everyone contributing to help us learn and solve issues. I have a doubt regarding speed and efficiency in identifying a record (for a huge number of records, say for example 10,000). 1. Creating i...
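On the speed question: scanning a flat/sequential source is O(n) per lookup, while a keyed (hashed) structure is close to O(1) per lookup after a one-time O(n) build. A rough Python comparison of the two approaches (illustrative only; actual stage performance varies):

```python
# 10,000 records, matching the poster's example size.
records = [{"id": i, "val": f"row{i}"} for i in range(10_000)]

def linear_find(target):
    # Sequential scan: may inspect all 10,000 records per lookup.
    for rec in records:
        if rec["id"] == target:
            return rec
    return None

# Build a keyed index once (O(n)); each later lookup is a single probe.
index = {rec["id"]: rec for rec in records}

def keyed_find(target):
    return index.get(target)

assert linear_find(9_999) == keyed_find(9_999)
```

This is the reason a hashed-file (or other keyed) lookup beats repeatedly scanning a sequential file once the reference data gets large.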