I knew that the first option would result in a deadlock; that's why I had asked you to handle it.
The other option is: load all the unique records using the FASTLOAD method and write all the duplicate records into a file. Then write a BTEQ script to load the duplicate records into the same table again (as you mentioned, the volume of duplicate records is only in the hundreds, so performance will not degrade).
The BTEQ script should be triggered only after the successful completion of the data load in the sequence.
Google for BTEQ scripts and you will get the syntax. Use it and give it a shot. I am sure this will work. Let us all know the results.
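For reference, a minimal BTEQ import sketch for loading the duplicate rows from the flat file. All names here are placeholders, not taken from this thread: the logon string, file path, column list, and target table are assumptions you would replace with your own.

```sql
.LOGON tdpid/your_user,your_password;

/* Read the duplicate rows (comma-delimited flat file) and insert them
   into the already-loaded target table. Placeholder names throughout. */
.IMPORT VARTEXT ',' FILE = duplicates.txt;
.REPEAT *
USING (c1 VARCHAR(20), c2 VARCHAR(50))
INSERT INTO target_db.target_table (col1, col2)
VALUES (:c1, :c2);

/* Exit with a non-zero return code on failure, so the job sequence
   that triggers this script can detect the error. */
.IF ERRORCODE <> 0 THEN .QUIT 8;

.LOGOFF;
.QUIT 0;
```

Unlike FASTLOAD, plain BTEQ inserts can go into a table that already has data, which is why it works for the second pass.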
And Ray, we are discussing a technical topic over here (not English literature), and as long as others are able to understand what I am saying, I believe it's fine.
In addition to Ray's point: although we are generally "techie" here, we do try to keep the posts a little more professional than you might see in other areas of the web (not picking on any particular site, as that shortcut speak is used almost everywhere).
As this site is used by many who do not have a strong command of the English language, it helps them immensely when posts are written in clear, complete words. 'U' can mean something completely different in another language...
Regards,
- james wiles
All generalizations are false, including this one - Mark Twain.
You can still try Mansoor's original approach. Instead of loading Link 1 (the duplicate rows) directly into the table, write them to a flat file and then load that file. This allows the unique data (Link 2) to be loaded into the table before Link 1 (via the flat file) attempts to place a lock on the table. It might slow performance a bit, but it will still be faster overall.
How are you suggesting the duplicates be written to a flat file and then loaded? Surely not within the same job? And relying on the delay of reading a flat file? Reading a few hundred rows from a flat file will not take long, especially compared to reading millions of rows from elsewhere.
Perhaps load the records to separate tables, then use an After SQL statement to move the dupes over to the main table. Or the Distributed Transaction stage might be appropriate.
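The After SQL route could be as simple as the following sketch, assuming the duplicates were landed in a hypothetical staging table (both table names here are made up for illustration):

```sql
/* Move the few hundred duplicate rows from the staging table
   into the main table, then clear the stage for the next run. */
INSERT INTO target_db.main_table
SELECT * FROM target_db.dupes_stage;

DELETE FROM target_db.dupes_stage;
```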
Given that it's only a few hundred rows, unless MLOAD is abysmally slow I don't see what the problem with using it is. You do what you need to do in order to meet the requirements, and if the path you would like to use won't work, abandon it and move on to one that will.
ray.wurlod wrote: My point is that many of them (such as my East Asian customers) cannot easily understand these forms. ...
Hi Ray / Jwiles
I take your point, and your concern for others is appreciated.
As for the issue: I have created a new job (which was the last option I had) to load the duplicates using MLOAD, and the existing job loads only the unique records using the TD Enterprise stage (as it did earlier).