
Unable to capture Reject Record

Posted: Wed Jun 16, 2004 1:35 pm
by ketfos
Hi,
The job reads an input sequential file (52982 records). The output is an Oracle table.
All fields in the target are varchar except one field (number).
In the Transformer I have two outputs: one writing to the Oracle table, and a second that is a Reject link. I checked the Reject Row check box in the constraints and also added the field link.DBMSCODE to the Reject link.

Observations
1. When I look at the output table, it shows 52981 records, and the Reject file has one record.
52981 + 1 = 52982
The record with ID = NF400059 is present in both output links.
In the Reject link, link.DBMSCODE for this record has the value ORA-01722 (invalid number).
But the same record (ID NF400059) is also present in the output Oracle table.
I don't see any reason for this record being rejected.

2. The job log in Director shows:
52982 rows read from InTrans
52981 rows written to OutTrans
1 rows written to DSLink36
The record shown as being rejected is ID 01000781, with:
ORA-01722: invalid number
This is correct, as the value for this field in the input record is varchar, not numeric.

Questions: Why is ID NF400059 being written to both the output table and the Reject file?
Why is ID 01000781 not being captured in the Reject file?

Thanks

Posted: Wed Jun 16, 2004 1:41 pm
by chulett
Do you have Row Buffering turned on? If so, try it with it turned off.

If your 'Array Size' is greater than 1, change it to 1 and try it again.

Posted: Wed Jun 16, 2004 2:16 pm
by ketfos
Thanks.
The Array Size was set to 500.
I changed it to 1.
It worked and wrote the correct record to the Reject link.

Ketfos

Q. How does this make a difference?

Posted: Wed Jun 16, 2004 2:22 pm
by chulett
It gets confused. :wink: When you send 500 records at a time, it can't always correctly tell you which one was the problem child. Sometimes it doesn't even give you the correct count. :?
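
To see the same effect outside DataStage, here is a rough JDBC sketch (the connection string, table name target_tbl, and column names are made up for the demo): a non-numeric string bound into a NUMBER column raises ORA-01722, and when all the rows travel to the server as one array, the driver can't always say which row blew up.

Code:

import java.sql.*;

public class ArrayInsertDemo {
    public static void main(String[] args) throws SQLException {
        // Connection details, table and column names are hypothetical.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO target_tbl (id, amount) VALUES (?, ?)")) {
                String[][] rows = {
                    {"NF400058", "100"},
                    {"01000781", "not a number"},  // server raises ORA-01722 on this row
                    {"NF400059", "300"}
                };
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);       // bound as varchar; Oracle converts to NUMBER
                    ps.addBatch();
                }
                try {
                    ps.executeBatch();             // the whole array goes in one round trip
                    conn.commit();
                } catch (BatchUpdateException e) {
                    // What the driver reports here is driver-dependent: the update
                    // counts may be shorter than the batch or hold
                    // Statement.EXECUTE_FAILED markers, so the client cannot
                    // always name the exact offending row.
                    int[] counts = e.getUpdateCounts();
                    System.out.println("counts returned for a batch of "
                            + rows.length + ": " + counts.length);
                    conn.rollback();
                }
            }
        }
    }
}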

Posted: Wed Jun 16, 2004 3:33 pm
by ketfos
If this is the case, Ascential should treat this as a bug in the product.
Why would anybody ever use an Array Size of more than 1?
This is what the DataStage help says:

The number of rows written to or read from the database at a time.

Input. The number of rows to be transferred in one call between DataStage and Oracle before they are written. Enter a positive integer to indicate how often Oracle performs writes at a time to the database. The default value is 1, that is, each row is written in a separate statement.
Output. The number of rows read from the database at a time. Enter a positive integer to indicate the number of rows to prefetch in one call. The default value 1 means that prefetching is turned off.

Larger numbers use more memory on the client to cache the rows. This minimizes server round trips and maximizes performance by executing fewer statements. If this number is too large, the client may run out of memory.
----
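If I read that right, with Array Size = 1 each row becomes its own statement, so the server can pin the error on the exact row. A rough JDBC equivalent (same made-up table and data as the batch sketch above):

Code:

import java.sql.*;

public class RowByRowDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO target_tbl (id, amount) VALUES (?, ?)")) {
                String[][] rows = {
                    {"NF400058", "100"},
                    {"01000781", "not a number"},  // ORA-01722
                    {"NF400059", "300"}
                };
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    try {
                        ps.executeUpdate();        // one statement, one round trip per row
                    } catch (SQLException e) {
                        // The failing row is known exactly: this is the reject
                        // behaviour you get with Array Size = 1, paid for with
                        // a network round trip per row.
                        System.out.println("rejected " + row[0] + ": " + e.getMessage());
                    }
                }
                conn.commit();
            }
        }
    }
}
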
Would anybody from Ascential like to comment on this?

Posted: Wed Jun 16, 2004 3:41 pm
by ogmios
I don't work for Ascential... but welcome to the wonderful world of DataStage. :wink: There are plenty more quirks in DataStage.

The particular one you've hit has been in there since the early versions.

I'll give you a few other quirks:
- Why does one database stage use internal date conventions while others use the ODBC style?
- Why does bulk loading go so slowly via DataStage?
- The quirks in sequencers are too numerous to list.
- Ever tried using CCF stages?
- Why do jobs sometimes refuse to start, either timing out after 60 seconds or with some phantom subprocess crashing and leaving DataStage in a running state?
- Why does dsjob sometimes not return correct values?
- Why do the XML stages throw a whole file away when there is an error in it?
- Why do log files (internal hashed files) not automatically shrink after having expanded enormously?

Ogmios