Equivalent of rownum of oracle in ODBC stage

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Equivalent of rownum of oracle in ODBC stage

Post by talk2shaanc »

hi,

Is there any keyword in the ODBC stage, used for reading a text file, that is equivalent to ROWNUM in Oracle?
Shantanu Choudhary
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Don't know. However, are you just trying to constrain a query to only return a certain number of rows? Something like:

Code: Select all

AND ROWNUM <= 10
A generic DataStage solution would be to put a constraint on the Transformer following the ODBC stage to say:

Code: Select all

@INROWNUM <= 10
Or, if this is for testing purposes, put a run limit on the job from the 'Limits' tab. Tell it to stop after processing X number of records.
-craig

"You can never have too many knives" -- Logan Nine Fingers
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

ROWNUM in the database is assigned BEFORE any sort in the DBMS. @INROWNUM is assigned after DataStage receives the rows from the DBMS.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Of course. :? Just trying to put some alternatives on the table...
-craig

"You can never have too many knives" -- Logan Nine Fingers
talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Post by talk2shaanc »

Just doing R&D.

Take a case where I have 10 million records and I am loading them into a target (could be a database or a file). Say after 8 million records my load fails due to a space constraint on my target server. Now, instead of starting the load from the beginning, I would prefer to start at record 8,000,001.

So I was just thinking that, by identifying the number of records loaded or rejected, I could "skip" the records I have already loaded on the second run.

You could say the logic is somewhat like the SKIP option you have in SQLLDR.
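The restart idea above can be sketched in Python (purely illustrative; the names are hypothetical, and in a real DataStage job the skip value would be a job parameter tested in a constraint):

```python
def load_with_skip(records, skip=0):
    """Blindly skip the first `skip` records; return the rest to load."""
    return records[skip:]

# Pretend we have 10 source records.
all_records = [f"row{i}" for i in range(1, 11)]

# First run: say the load fails after 8 records reach the target.
loaded = load_with_skip(all_records)[:8]

# Restart: resume at record 9 instead of reloading from the beginning,
# the same idea as SKIP in sqlldr.
remaining = load_with_skip(all_records, skip=len(loaded))
```

The skip count here is just the number of records the failed run consumed, however that number is obtained.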
Shantanu Choudhary
alhamilton
Participant
Posts: 12
Joined: Fri Apr 30, 2004 9:11 am

Post by alhamilton »

I think you'll have to go with a constraint testing @INROWNUM against a parameter you passed in at the start of the run. I don't think you can do a SELECT with WHERE ROWNUM > something.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Exactly. And is the data source static? How do you know, when it comes time to restart, that the order and quantity of rows pulled from the source table will exactly match the failure run?

Land the data.
-craig

"You can never have too many knives" -- Logan Nine Fingers
talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Post by talk2shaanc »

alhamilton wrote:I think you'll have to go with a constraint testing @INROWNUM vs a parameter you passed in at the start of the run. I don't think you can do a select where rownum > something.
Yes, I know I can use @INROWNUM, but that would mean reading all 8 million records and only dropping them at the Transformer before the next stage. I want to avoid reading them at all.


chulett wrote:Exactly. And is the data source static? How do you know, when it comes time to restart, that the order and quantity of rows pulled from the source table will exactly match the failure run?
Yep, it's static; if you read back my previous post, I mentioned a sequential file.

As for getting the count of records loaded or rejected, that is easily done: have two links, one output link and one reject link, then use DSGetLinkInfo to get the counts of the two links, or get the count from the target itself.
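As a sketch of that arithmetic (illustrative Python; in the real job the two counts would come from DSGetLinkInfo with the row-count info type on the output and reject links):

```python
def skip_count(rows_loaded, rows_rejected):
    """Records consumed by the failed run = loaded + rejected.
    This is how many records the restart should skip."""
    return rows_loaded + rows_rejected

# e.g. 8 million loaded and none rejected before the failure:
resume_from = skip_count(8_000_000, 0) + 1   # restart at record 8,000,001
```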
Shantanu Choudhary
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

talk2shaanc wrote:Yep it's static; if you read back my previous post, I mentioned a sequential file.
Mentioned, yes - as in "(could be Database or file)". However, your original post and question were about ODBC and ROWNUM, which is why you got the answers and caveats that you did. :?

When loading from a sequential file, how do you think sqlldr 'skips' records? By reading them and not processing them. That is the same thing you would do with the Sequential File stage and an @INROWNUM constraint: bring them into the job and then pitch them in the first Transformer. I'm curious how you think anything can get to record 8,000,001 in a 10 million record sequential file (which must be accessed sequentially, hence the name) without reading the first 8 million records.

It's surprisingly fast, btw, to do this in a job - and that is how it is typically handled out there. However, that doesn't mean you couldn't find something that reads the file faster, running it 'before job' and (perhaps) writing out the 'unprocessed' records to a new file that is processed during the recovery run. Or do something similar in the Filter command of the Sequential File stage. I have no idea what that might be on a Windows server, but others might have some thoughts. Me, I'd just stick with the constraint. :wink:
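A 'before job' split along those lines might look like this (a Python sketch under assumed file names, not a built-in DataStage facility): copy everything after the first N lines to a recovery file without parsing the records, then point the recovery run at that file.

```python
import itertools

def write_unprocessed(src_path, dst_path, skip):
    """Copy every line after the first `skip` lines to dst_path,
    treating each record as opaque text (no column parsing)."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in itertools.islice(src, skip, None):
            dst.write(line)
```

Note that this still reads the first `skip` lines off disk; it only avoids the per-column processing in the job.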
-craig

"You can never have too many knives" -- Logan Nine Fingers
talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Post by talk2shaanc »

talk2shaanc wrote:hi,

Is there any keyword in odbc stage used for reading text file equivalent to rownum of oracle.
This was my first post. :)


With the SKIP option in sqlldr, it does not read the complete record; there is NO movement of record data through the cache. It just checks for the presence of a record and moves to the next line; it simply skips it blindly.

But if I use @INROWNUM, DataStage will actually read the complete record and check everything. Say you have 10 columns, pipe delimited: it will first distinguish between the columns, handle quote characters, do NULL checks, apply any data elements defined, and so on, then bring the data into cache in the Transformer stage, where the record is finally rejected or processed based on @INROWNUM.

I am trying to avoid all of that, which is why I am trying to find the best way to do it.
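The difference being described can be sketched like this (illustrative Python, not DataStage internals): a blind skip only advances past raw lines, while the @INROWNUM route still parses every column of every record before the constraint discards it.

```python
import csv
import io

data = "a|b|c\nd|e|f\ng|h|i\n"

# Blind skip: advance past the first 2 records without splitting columns.
f = io.StringIO(data)
for _ in range(2):
    next(f)                   # read the raw line, do nothing with it
remaining_raw = f.read()      # only the third record is left to process

# @INROWNUM-style: every record is fully parsed, then filtered.
rows = [row
        for i, row in enumerate(csv.reader(io.StringIO(data), delimiter="|"), 1)
        if i > 2]             # all 3 records were parsed; 2 were discarded
```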
Shantanu Choudhary
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Good luck.
-craig

"You can never have too many knives" -- Logan Nine Fingers