
A DataStage Job Causing an Issue for the Entire Database

Posted: Wed Jul 19, 2006 12:15 am
by Ratan Babu N
Hi,
In my job, I am using two datasets, funneling them with a Funnel stage (Funnel Type: Continuous Funnel), and then inserting the result into a DB2 table using a user-defined update (an Insert statement supplied in the user-defined SQL). The partitioning type is DB2.

The job runs on 4 nodes; one dataset carries around 5000 records and the other carries 0 records.

Row Commit Interval is 999999

Array Size 999999

The job is aborting, and the entire database is going down because of it. Can a single job cause an issue for the entire database?
Please help me understand what is wrong in this job.

Posted: Wed Jul 19, 2006 12:55 am
by ArndW
Could you specify exactly what you mean by
"the entire database is going down because of this Job"
? The only possible issue I know of is that, when using the load method, you might get the table, and perhaps the tablespace, into a "load pending" or "backup pending" state.

Posted: Wed Jul 19, 2006 1:18 am
by Ratan Babu N
The DBA is telling me that this job is using too much swap space.

One more thing: I connected the output link of the Funnel stage directly to a DB2 stage. Can that cause any problem?

Posted: Wed Jul 19, 2006 1:31 am
by ArndW
Can you see the possible link between the commit size in your job and the error? If you reduce your commit frequency to 50,000, does the error go away (or happen at another point)?

Posted: Wed Jul 19, 2006 1:47 am
by Ratan Babu N
The message I got is as follows.

main_program: ORCHESTRATE step execution terminating due to SIGINT

So the message is not related to the Row Commit Interval, right?
Will a high Row Commit Interval cause a problem?

Posted: Wed Jul 19, 2006 2:07 am
by ArndW
How about approaching your problem one issue at a time? Start with the DB error(s) and see if you still have your job problem. You still haven't shown us an actual error that is locking or stopping the database, just that swap space is full (which isn't a DB/2 issue but a UNIX one).

Posted: Wed Jul 19, 2006 2:27 am
by ray.wurlod
Start with one row per transaction. This should work.
Increase it to 1000. This should work also, since DB2 can easily handle 1000 row transactions.

Ask your DBA whether the database can handle a transaction containing 1 crore (10 million) records, and be prepared to run for cover!
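The commit-interval idea Ray describes (start with one row per transaction, then increase gradually) can be sketched with generic database code. This is a minimal illustration only: it uses Python's built-in sqlite3 instead of DB2, and the table name, columns, and row counts are all hypothetical, not from the original job.

```python
import sqlite3

def insert_with_commit_interval(conn, rows, commit_interval):
    """Insert rows, committing every `commit_interval` rows so no
    single transaction grows beyond what the database can handle."""
    cur = conn.cursor()
    pending = 0
    for row in rows:
        cur.execute("INSERT INTO target (id, val) VALUES (?, ?)", row)
        pending += 1
        if pending >= commit_interval:
            conn.commit()  # release the transaction's log space
            pending = 0
    conn.commit()          # commit any remainder

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, val TEXT)")
insert_with_commit_interval(conn, ((i, "x") for i in range(5000)), 1000)
print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 5000
```

The point is the same as in the job's Row Commit Interval setting: a smaller interval means each transaction holds less uncommitted data, at the cost of more commit overhead.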

Posted: Wed Jul 19, 2006 4:49 am
by Ratan Babu N
"Ask your DBA whether the database can handle a transaction containing 1 crore records, and be prepared to run for cover!"

Does transaction handling depend on the table or on the entire database?

If it depends on the entire database, then how are some jobs running successfully with transactions containing more than 1 crore records?

Posted: Wed Jul 19, 2006 7:46 am
by ArndW
DB/2 has a given amount of tablespace-wide space available for these transactions, so the number of rows matters less than the total size of the data in those rows.
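ArndW's point, that data volume matters more than row count, can be made concrete with a back-of-the-envelope calculation. The 200-byte average row size here is an assumed figure for illustration, not a value from the original job:

```python
# Rough estimate of uncommitted data held by one transaction:
# rows per commit times average row size.
rows_per_commit = 999_999
avg_row_bytes = 200  # hypothetical average row width, not from the job
log_bytes = rows_per_commit * avg_row_bytes
print(f"{log_bytes / 1024**2:.0f} MB of uncommitted data")  # 191 MB
```

So a wide-row job commits far more data per transaction than a narrow-row job with the same Row Commit Interval, which is why one job can succeed where another aborts.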

Posted: Wed Jul 19, 2006 10:09 pm
by Ratan Babu N
Hi, thank you ArndW and Ray. The issue is resolved: the database was not able to handle an Array Size of 999999, which was causing my problem.