Equivalent of rownum of oracle in ODBC stage
Moderators: chulett, rschirm, roy
-
- Charter Member
- Posts: 199
- Joined: Tue Jan 18, 2005 2:50 am
- Location: India
Equivalent of rownum of oracle in ODBC stage
hi,
Is there any keyword in the ODBC stage, used for reading a text file, equivalent to Oracle's ROWNUM?
Shantanu Choudhary
Don't know. However, are you just trying to constrain a query to only return a certain number of rows? Something like:

Code: Select all
AND ROWNUM <= 10

A generic DataStage solution would be to put a constraint on the Transformer following the ODBC stage to say:

Code: Select all
@INROWNUM <= 10

Or, if this is for testing purposes, put a run limit on the job from the 'Limits' tab. Tell it to stop after processing X number of records.
-craig
"You can never have too many knives" -- Logan Nine Fingers
-
- Participant
- Posts: 3337
- Joined: Mon Jan 17, 2005 4:49 am
- Location: United Kingdom
Just doing R&D.
Taking a case where I have 10 million records and I am loading them into a target (could be a database or a file). Say after 8 million records my load fails due to a space constraint on my target server.
Now, instead of starting the load from the beginning, I would prefer to start at 8 million + 1.
So I was just thinking: by identifying the number of records loaded or rejected, could I "skip" the records that I have already loaded in the second run?
You could say the logic is somewhat like the SKIP option you have in sqlldr.
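A minimal sketch of this restart idea in Python (the file name and the already-loaded count are hypothetical stand-ins; in practice the count would come from the previous run's link counts or from the target itself):

```python
# Sketch: restart a load by skipping rows already committed in the failed run.
ALREADY_LOADED = 3  # hypothetical: rows loaded before the failure

def remaining_rows(path, skip):
    """Yield rows after the first `skip` lines, like sqlldr's SKIP option."""
    with open(path) as f:
        for line_no, line in enumerate(f, start=1):
            if line_no <= skip:
                continue  # already loaded in the previous run
            yield line.rstrip("\n")

# Demo with a small stand-in for the 10-million-row source file.
with open("source.txt", "w") as f:
    f.write("\n".join(f"rec{i}" for i in range(1, 6)) + "\n")

print(list(remaining_rows("source.txt", ALREADY_LOADED)))
# -> ['rec4', 'rec5']
```

Note that the skipped lines are still read from disk; they are just not processed any further, which is the point made later in the thread.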
Shantanu Choudhary
-
- Participant
- Posts: 12
- Joined: Fri Apr 30, 2004 9:11 am
I think you'll have to go with a constraint testing @INROWNUM vs a parameter you passed in at the start of the run. I don't think you can do a select where rownum > something.
Exactly. And is the data source static? How do you know, when it comes time to restart, that the order and quantity of rows pulled from the source table will exactly match the failure run?
Regarding getting the count of records loaded or rejected: that can easily be done by having two links, one output link and the other a reject link,
and finally using DSGetLinkInfo to get the counts of the two links, or by getting the count from the target itself.
Shantanu Choudhary
talk2shaanc wrote: Yep, it's static. If you read back my previous post, I have mentioned a sequential file.
Mentioned, yes - as in "(could be Database or file)". However, your original post and question were about ODBC and ROWNUM, which is why you got the answers and caveats that you did.
When loading from a sequential file, how do you think sqlldr 'skips' records? By reading them and not processing them. The same thing you would do with the Sequential File stage and an @INROWNUM constraint: bring them into the job and then pitch them in the first Transformer. Curious how you think anything can get to record 8 million and one in a 10-million-record sequential file (which must be accessed sequentially, hence the name) without reading the first 8 million records?
It's surprisingly fast, btw, to do this in a job - and that is how it is typically handled out there. However, that doesn't mean you couldn't find something that reads the file faster, running it 'before job' and (perhaps) writing out the 'unprocessed' records to a new file that is processed during the recovery run. Or do something similar in the Filter command of the Sequential File stage. I have no idea what that might be on a Windows server, but others might have some thoughts. Me, I'd just stick with the constraint.
-craig
"You can never have too many knives" -- Logan Nine Fingers
hi,
Is there any keyword in odbc stage used for reading text file equivalent to rownum of oracle.
In sqlldr, SKIP does not read the complete record - there is no movement of records into cache. It just checks for the presence of a record and moves to the next line; it simply skips it blindly.
But if I use @INROWNUM, it will actually read the complete record and check everything. Say you have 10 columns, pipe delimited: it will first distinguish between the columns, handle quote characters, do NULL checks on any data elements given, etc. Then it brings the data into cache in the Transformer stage, and finally the row is rejected or processed based on @INROWNUM.
I am trying to avoid these overheads, so I'm trying to find the best way to do it.
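The cost difference being described can be sketched in Python: blindly skipping a line just advances the file position, while 'processing' it means splitting on the pipe delimiter, handling quoting, and so on (the csv parsing below is a simplified stand-in for what the stage would do per row):

```python
import csv

def blind_skip(path, n):
    """Advance past n lines without interpreting them (sqlldr SKIP style)."""
    with open(path) as f:
        for _ in range(n):
            next(f, None)  # read the raw line; no column splitting or checks
        return f.readline().rstrip("\n")

def parse_all(path, n):
    """Parse every line (delimiters, quoting) even for rows then discarded,
    as an @INROWNUM constraint in a Transformer would."""
    with open(path) as f:
        reader = csv.reader(f, delimiter="|")
        kept = [row for i, row in enumerate(reader, 1) if i > n]
        return kept[0]

with open("pipe.txt", "w") as f:
    f.write("a|b|c\nd|e|f\ng|h|i\n")

print(blind_skip("pipe.txt", 2))  # -> g|h|i  (raw line, never parsed)
print(parse_all("pipe.txt", 2))   # -> ['g', 'h', 'i']  (fully parsed columns)
```

Both approaches still read every byte of the skipped lines; the saving in the blind skip is only the per-row parsing and validation work.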
Shantanu Choudhary