
Posted: Thu Feb 17, 2011 4:56 am
by mansoor_nb
Hi Vidyut,

I knew that the first option would result in a deadlock; that's why I had asked you to handle it :wink:

The other option is: load all the unique records using the FASTLOAD method and write all the duplicate records to a file. Then write a BTEQ script to load those duplicate records into the same table (as you mentioned, the volume of duplicates is only in the hundreds, so performance will not degrade).
The BTEQ script should be triggered in the sequence only after the successful completion of the unique-record load.

Google for BTEQ scripts and you will find the full syntax; a minimal sketch follows below. Give it a shot, and I am sure this will work. Let us all know about the results.
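
For example, something along these lines (everything here is a placeholder: the TDPID, credentials, file name, table and column names; and it uses plain row-at-a-time INSERTs, which is adequate for a few hundred rows):

    .LOGON tdpid/your_user,your_password;

    /* Read the duplicates the job wrote out; fields are '|'-delimited */
    .IMPORT VARTEXT '|' FILE = duplicates.dat;

    /* Repeat the INSERT once for every record in the file */
    .REPEAT *
    USING (key_col VARCHAR(20), data_col VARCHAR(100))
    INSERT INTO mydb.target_table (key_col, data_col)
    VALUES (:key_col, :data_col);

    /* Exit non-zero so the sequence can detect a failure */
    .IF ERRORCODE <> 0 THEN .QUIT 8;
    .QUIT 0;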

Thanks

Posted: Thu Feb 17, 2011 5:29 am
by Vidyut
Hi Mansoor, thanks for all ur valuable inputs :)

And Ray, we are discussing a technical topic over here (not English literature), and as long as others are able to understand what I am saying, I believe it's fine.

Posted: Thu Feb 17, 2011 3:45 pm
by ray.wurlod
My point is that many of them (such as my East Asian customers) cannot easily understand these forms.

Posted: Thu Feb 17, 2011 3:57 pm
by jwiles
And y is sometimes a vowel....

In addition to Ray's point: although we are generally "techie" here, we do try to keep the posts a little more professional than you might see in other areas of the web (not picking on any particular site, as that shortcut speak is used almost everywhere).

As this site is used by many who do not have a strong knowledge of the English language, it helps them immensely when posts are written in clear, complete words. 'U' can mean something completely different in another language...

Regards,

Posted: Thu Feb 17, 2011 8:31 pm
by Lovesh_Patni
Hi Vidyut,

You can still try Mansoor's original approach. Before loading the duplicate rows from Link 1 into the table, write them to a flat file and load that file afterwards. This allows the unique data (Link 2) to be loaded into the table before Link 1 (via the flat file) attempts to place a lock on the table. This might slow performance a bit, but it will still be faster overall.

Posted: Thu Feb 17, 2011 9:50 pm
by jwiles
How are you suggesting that the method of writing the duplicates to a flat file and then loading it be implemented? Surely not within the same job? And relying on the delay of reading a flat file to sequence the loads? Reading a few hundred rows from a flat file will not take long, especially compared to reading millions of rows from elsewhere.

Perhaps load the records to separate tables, then use an After SQL statement to move the dupes over to the main table. Or the Distributed Transaction Stage might be appropriate.
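
For instance, assuming the duplicates land in a hypothetical staging table mydb.dup_staging with the same layout as the target, the After SQL could be as simple as:

    /* Move the duplicates from the staging table into the main table */
    INSERT INTO mydb.target_table
    SELECT * FROM mydb.dup_staging;

    /* Clear the staging table for the next run */
    DELETE FROM mydb.dup_staging;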

Given that it's only a few hundred rows, unless MLOAD is abysmally slow I don't see what the problem with using it is. You do what you need to do in order to meet the requirements, and if the path you would like to take won't work, abandon it and move on to one that will.

Regards,

Posted: Fri Feb 18, 2011 6:16 am
by Vidyut
ray.wurlod wrote: My point is that many of them (such as my East Asian customers) cannot easily understand these forms. ...
Hi Ray / Jwiles

I take your point, and your concern for others is appreciated :)

As for the issue: I have created a new job (which was the last option I had) to load the duplicates using MLOAD, and the existing job loads only the unique records using the TD Enterprise Stage (as it did earlier).
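
In case it helps anyone searching later, a standalone MultiLoad script roughly equivalent to what the new job does would look something like the sketch below (all table, file, and column names are placeholders for our actual objects):

    .LOGTABLE mydb.dup_mload_log;
    .LOGON tdpid/your_user,your_password;

    .BEGIN IMPORT MLOAD
       TABLES mydb.target_table
       WORKTABLES mydb.wt_dup
       ERRORTABLES mydb.et_dup mydb.uv_dup;

    .LAYOUT dup_layout;
    .FIELD key_col  * VARCHAR(20);
    .FIELD data_col * VARCHAR(100);

    .DML LABEL insert_dups;
    INSERT INTO mydb.target_table (key_col, data_col)
    VALUES (:key_col, :data_col);

    .IMPORT INFILE duplicates.dat
       FORMAT VARTEXT '|'
       LAYOUT dup_layout
       APPLY insert_dups;

    .END MLOAD;
    .LOGOFF;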

Thanks all