
Posted: Sun Jan 18, 2009 10:21 am
by chulett
No, my experience is all with Oracle for the last several years, with Progress and Informix before that. I've got no idea what quirks DB2 may bring to the table or how they line up with Oracle's. For example, in Oracle an update or delete that affects zero records is considered successful, which tends to confuse people. In other words, the operation executed successfully; it just didn't happen to affect any rows, which in and of itself is not an error, though some people tend to think it should be. :wink:
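A quick sketch of that zero-rows-affected behavior, using sqlite3 as a stand-in (the table and column names are made up, and Oracle's client interface differs, but the outcome is the same in both: no error, rowcount of zero):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'open')")

# UPDATE matching no rows: no exception is raised, the statement "succeeds"
cur = conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 999")
affected = cur.rowcount
print(affected)   # 0 -- executed successfully, touched nothing
conn.close()
```

This is exactly why a job can "run clean" while changing nothing in the target.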

Again, and I cannot emphasize this enough, the DataStage monitors and logs show the actions sent to the database in question and the errors it reports back, something that isn't always one-to-one. For inserts, sure, you should be able to find a number of records in the target that matches the monitor count; however, there's no such guarantee with updates or deletes, both of which can affect multiple records depending on the keys chosen.
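A small sqlite3 illustration of why link counts and affected-row counts diverge for updates (names are illustrative): one statement sent down the link can touch several target rows when the key used is not unique.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (cust_id INTEGER, region TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'east'), (1, 'east'), (2, 'west')])

# A single update statement (monitor would show 1 row sent) ...
cur = conn.execute("UPDATE t SET region = 'north' WHERE cust_id = 1")
affected = cur.rowcount
print(affected)   # 2 -- ... but two target rows were affected
conn.close()
```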

Your assertion that every update, at whatever frequency, should be present in the table is impossible to verify without intimate knowledge of the table structure and the update strategies employed. There really is no "given" there. Which brings us back to the issue of the end users not being able to "find" the records your job is inserting. Since they are typically constrained to BI tools with canned metadata and reports with who knows what in the way of constraints, I'd be more concerned if you cannot find the records with SQL tools. And then correlate what you find with your knowledge of what the job is doing, the indexes / keys on the target table, etc., to determine whether or not a legitimate issue exists. I continue to be concerned by the fact that you keep mentioning "load" and "insert" and "not updated" for this; it makes me seriously wonder if there really is an issue here or not. :?

Posted: Sun Jan 18, 2009 11:39 am
by rachit82
chulett wrote:No, my experience is all with Oracle for the last several years, with Progress and Informix before that. [...]
I agree with you. My next step is to get a peek at the AS400 table structure. If possible I will be changing this particular job so that the ODBC stage reflects the intended insert or update action. For now we have the AS400 people looking into the database logs for any errors or recurring issues.

However, I just wanted to confirm from you and others whether there is a possibility of data loss without any warning in such cases.

There is literally no documentation, BRD, or any other piece of information to look at. There are no annotations on the jobs. To tell you the truth, the entire project is a mess that is being slowly untangled.

Posted: Sun Jan 18, 2009 12:02 pm
by chulett
rachit82 wrote:However, I just wanted to confirm from you and others whether there is a possibility of data loss without any warning in such cases.
That's going to have to come from someone else, one of the many DB2 gurus here. Hopefully they'll chime in when they get a chance.

Posted: Sun Jan 18, 2009 2:26 pm
by DSguru2B
Ok, if I were you, I would do the following:

- Bring the job down to another env., delete the table and reload.
- Check for before/after job subroutines, before/after stage subroutines, job control section of the job, before/after tab of the database stage.
- Make sure I am looking into the right env. while selecting records. It should be the same env. that I loaded records to. A mismatch can happen when one party is loading records and another party is looking at them, so make sure you do both yourself.

Posted: Mon Jan 19, 2009 7:57 am
by rachit82
DSguru2B wrote:Ok, if I were you, I would do the following: [...]
Great :D . Now 2 stupid questions.... :oops:
1. Should I close this topic as solved? Can I come back and tell you later?
2. How do I know which topics are open and which are work-in-progress?

Posted: Mon Jan 19, 2009 8:07 am
by chulett
1. You tell us, is your problem resolved? :? If so, you should do two things - post the resolution and mark the problem as Resolved using the green button at the top of the screen.

2. Ideally, by looking at the thread subject - it should be marked either 'resolved' or 'workaround', otherwise it's a 'work in progress'. Of course, not everyone comes back and does that, plus all the posts created before this functionality was added are not marked.

Posted: Mon Jan 19, 2009 8:47 am
by rachit82
chulett wrote:1. You tell us, is your problem resolved? :? [...]
1. My problem is not solved yet. I am planning to run some tests once I copy the job into the test environment. I will post the results.
2. Thanks for that.

Got the job to work Finally

Posted: Thu Jan 22, 2009 11:08 am
by rachit82
I made 2 changes to the job.
The existing job was: OCI stage -> Transformer -> ODBC (AS400).
1st modification: added a flat file so that the records sent to the ODBC stage are also written to the flat file.
2nd modification: added another flat file after the load, reading back from the ODBC stage through a Transformer into the flat file.

The first modification showed that the correct number of records was being sent, but they still were not on the AS400. The second modification showed the same record count as the records loaded through ODBC, and at that point something worked: all records were loaded successfully and accounted for. I am going to monitor for another week or so to be on the safe side.
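The two flat-file taps make a simple reconciliation possible: compare the keys captured before the ODBC stage with the keys read back after the load, and any key in the first set but not the second points at silent loss. A rough sketch of that comparison (the key values are purely illustrative):

```python
# Keys captured by the tap before the ODBC stage (records sent) and by
# the tap after it (records actually landed). In practice these would be
# read from the two flat files the modifications produce.
sent_tap = ["101", "102", "103", "104"]
landed_tap = ["101", "103", "104"]

# Keys that were sent but never landed indicate silent data loss.
missing = sorted(set(sent_tap) - set(landed_tap))
print(len(sent_tap), len(landed_tap), missing)   # 4 3 ['102']
```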

Thanks everyone for your input. But I have a sneaking suspicion that all was well on our side, and that the end users changed something on their end and then rolled back the changes applied earlier.

Re: Got the job to work Finally

Posted: Thu Jan 22, 2009 2:55 pm
by DSguru2B
rachit82 wrote:I have a sneaking suspicion that all was well on our side and the end users changed something on their end and rolled back the changes applied earlier.
That might very well be the case. Always, and I mean always, do the end-to-end analysis yourself, even if you have to call a bunch of people to get to "how" exactly they are viewing the data.
Good luck.