My Job is running very slow
Moderators: chulett, rschirm, roy
Hi,
I am running a job with these stages:
Src (SeqFile) --> LKP --> 3 XFM stages --> LCC --> DB2
I selected the Round Robin method in the Link Partitioner and the Sort/Merge method in the Link Collector, with the sort key set to the Key column of the input columns. The update action is "Clear the table, then insert rows", and I have given a DELETE statement in the Before SQL.
I am processing 1 lakh (100,000) records.
Why is my job running very slow?
Any assistance would be appreciated.
Thanks in advance.
Ravi
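For readers unfamiliar with the stage types named above, here is a minimal Python sketch of what the Link Partitioner (Round Robin) and Link Collector (Sort/Merge) are doing. The function names are illustrative, not DataStage APIs; the point to notice is that Sort/Merge collection forces a sort of every partition on the key.

```python
import heapq

def round_robin_partition(rows, n_links):
    """Deal rows across n_links output links one at a time (Link Partitioner, Round Robin)."""
    partitions = [[] for _ in range(n_links)]
    for i, row in enumerate(rows):
        partitions[i % n_links].append(row)
    return partitions

def sort_merge_collect(partitions, key):
    """Sort each partition on the key, then merge the sorted streams
    (Link Collector, Sort/Merge). The per-partition sort is the extra cost
    that a plain Round Robin collection would avoid."""
    sorted_parts = [sorted(p, key=key) for p in partitions]
    return list(heapq.merge(*sorted_parts, key=key))

rows = [{"id": i} for i in (5, 3, 9, 1, 4, 8, 2, 7, 6)]
parts = round_robin_partition(rows, 3)
collected = sort_merge_collect(parts, key=lambda r: r["id"])
```

Round Robin collection, by contrast, just takes one row from each link in turn with no sorting, which is why it comes up below as the cheaper option.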
Re: My Job is running very slow
Is it necessary to use the Sort/Merge collection method? Why don't you try the Round Robin collection method? Performance may improve. The Sort/Merge collection method is used mostly when hash partitioning has been applied.
If you have already deleted the rows with the Before SQL, why do you need to clear the table again? Just insert into the table.
--Balaji S.R
Replace your DB/2 write stage with a Sequential File stage and measure your speed, to determine whether DB/2 is the bottleneck. Also, what percentage CPU do your three Transformer stages show? If they are at less than 30% on a long run on a moderately busy system, then perhaps you don't need to split into three processes. If DB/2 is your bottleneck, dispense with the Link Collector and have all three Transformers write to the database in parallel.
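The swap-the-target test described above can be mimicked outside DataStage to get a feel for it: time the same row stream into a flat file versus a database. A rough sketch, using Python's sqlite3 purely as a stand-in for DB2 (the table and file names are made up for the illustration):

```python
import os
import sqlite3
import tempfile
import time

rows = [(i, f"name{i}") for i in range(100_000)]  # roughly the 1 lakh rows in question

# Target 1: a flat file, standing in for a Sequential File stage.
t0 = time.perf_counter()
path = os.path.join(tempfile.mkdtemp(), "out.txt")
with open(path, "w") as f:
    for rid, name in rows:
        f.write(f"{rid},{name}\n")
file_secs = time.perf_counter() - t0

# Target 2: database inserts, standing in for the DB2 stage.
t0 = time.perf_counter()
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", rows)
con.commit()
db_secs = time.perf_counter() - t0

# If db_secs dwarfs file_secs, the database write is the bottleneck.
```

The same idea applies inside the job: redirect the stream to a Sequential File stage and compare rows/sec against the DB2 run.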
Hi Kumar,
Thanks for the reply.
"Instead of Delete, why can't you just replace the whole content?"
What does this mean? I am giving the query in the Before SQL, like this:
DELETE FROM SCHEMA.TABLENAME
Thanks in advance.
Ravi
Re: My Job is running very slow
As mentioned in the other post, DELETE uses the transaction log, whereas a replace bypasses the log files.
Another suggestion: TRUNCATE is far better than an unconditional DELETE. In DB2, for example, TRUNCATE TABLE SCHEMA.TABLENAME IMMEDIATE deallocates the pages rather than logging row-by-row deletes (where your DB2 version supports the statement).
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Do you really need the Link Collector? DB2 will be quite happy with three inputs, so long as the sets of keys in each input are disjoint (so that there is no contention for locks). You may see a substantial throughput gain as well.
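The disjoint-keys condition can be sketched as a key-based (modulus) partition instead of Round Robin; this is illustrative Python, not DataStage code. Because each key value maps to exactly one partition, the three database writers never compete for the same rows:

```python
def key_partition(rows, n_links, key):
    """Route each row to partition key % n_links, so the key sets are disjoint."""
    partitions = [[] for _ in range(n_links)]
    for row in rows:
        partitions[key(row) % n_links].append(row)
    return partitions

rows = [{"id": i} for i in range(30)]
parts = key_partition(rows, 3, key=lambda r: r["id"])
key_sets = [{r["id"] for r in p} for p in parts]
```

With Round Robin partitioning the same key could land in any partition from run to run, so disjointness would not be guaranteed.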
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
If you opt for the option suggested by Ray, Replace may not be a wise idea.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
The first question to be answered is always: what else is running on the DS server?
Second: what is running on the database server?
Third: have you looked at your job processes' CPU utilization?
Fourth: are warning messages spewing into your job log?
Fifth: what does your load SQL look like (pure inserts, versus inserts with updates, versus wildcard updates, etc.)?
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle