Data.30 in my input Data


thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Data.30 in my input Data

Post by thurmy34 »

Hi all

Sometimes my input data (which comes from a hashed file built in a previous job) contains the string DATA.30 in the key field of the hashed file.
If I delete the hashed file, the job works fine for one or two days and then the problem comes back.

Code: Select all

MyJob..MyTransForm: At row 480, link "to_table", while processing column "Cust_ID"
Value treated as NULL
Attempt to convert String value "DATA.30" to Long type unsuccessful
The job generates warnings and aborts because Cust_ID cannot be null in the database.


Any advice?
Thank you.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

SOMEONE has put another file in the hashed file directory (possibly by specifying the hashed file directory in the pathname in a Sequential File stage), or removed the hidden file .Type30 from the hashed file directory.

A hashed file directory must contain precisely the three files DATA.30, OVER.30 and .Type30, and no others.
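For reference, a minimal check (the X:\path\hashedfile path is purely illustrative, assuming a Windows engine as the paths later in this thread suggest):

Code: Select all

REM List everything, including hidden files, in the hashed file directory.
REM A healthy type 30 hashed file should show exactly: .Type30  DATA.30  OVER.30
dir /a X:\path\hashedfile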
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

1. Create a .Type30 file. This needs to be an empty file.

Code: Select all

echo > .Type30
2. Create a "directory file" in DataStage.

Code: Select all

CREATE.FILE TempDirX 19
This creates a type 19 "directory file", which is a subdirectory of your project directory.

3. Using an operating system move command, move all the illegal files from the hashed file directory to your new TempDirX directory.
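For example, something along these lines (badfile.txt and the paths are purely illustrative):

Code: Select all

REM Move one illegal file out of the hashed file directory into TempDirX.
REM Repeat for every file that is not DATA.30, OVER.30 or .Type30.
move X:\path\hashedfile\badfile.txt X:\path\project\TempDirX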

4. Verify that the hashed file directory now contains only DATA.30, OVER.30 and .Type30. You may also like to verify - using View Data perhaps - that you can see the hashed file successfully, even if it is empty.

5. If it's appropriate to do so, move the files from TempDirX into the hashed file using a "UniVerse" COPY command. First, if needed, create a pointer to your hashed file.

Code: Select all

SETFILE X:\path\hashedfile hashedfile
Then execute the copy.

Code: Select all

COPY FROM TempDirX TO hashedfile ALL


6. Verify that the records are correctly in the hashed file. Then you can delete the temporary "directory file".

Code: Select all

DELETE.FILE TempDirX
And, if you wish, you can delete the VOC pointer to the hashed file.

Code: Select all

DELETE FROM VOC WHERE @ID = 'hashedfile';
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Post by thurmy34 »

Hi
Can I fix it by exporting, deleting and re-importing my jobs?
I don't have write access to the production server's disk, and the project is protected.


Thank you
Last edited by thurmy34 on Wed Mar 11, 2009 4:23 am, edited 1 time in total.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

No. The extraneous files in the hashed file directory will continue to prevent its being used as a hashed file. And those files presumably are records that were intended to be written to the hashed file.

Hashed files are not internal to jobs, they are external objects.

Of course you could completely remove them and re-run whatever it was that should have populated the hashed file, but can you guarantee that conditions are the same as when they were first written, and that you will generate the same set of records?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Me, assuming this is a pathed hashed file, I would just remove the directory and let whatever process created it recreate it. That assumes you're not using it for persistent storage and can afford to lose what's there.

If this is an account based hashed file, then you'll need to delete the VOC entry as well.
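A minimal sketch of the pathed case (the path is hypothetical); for the account-based case, the DELETE FROM VOC statement shown in step 6 above applies:

Code: Select all

REM Remove the whole hashed file directory and let the creating job rebuild it.
rmdir /S /Q X:\path\hashedfile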
-craig

"You can never have too many knives" -- Logan Nine Fingers
thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Post by thurmy34 »

Hi
Sorry for the delay, I was waiting for my premium validation.

Chulett,
I already did that; the job worked well the day after and aborted the next day (is that clear?).

Ray,
Do I have to execute the step 5 commands in the Administrator?

Thanks.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You can, but you would have to wrap the operating system commands in a DOS command, for example

Code: Select all

 DOS /C "echo > .Type30"
Otherwise you can connect via telnet, source dsenv, and invoke dssh in your project directory.
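A minimal sketch of that route, assuming a UNIX engine tier and an illustrative install path:

Code: Select all

# Source the engine environment, then start the DataStage shell in the project.
cd /opt/IBM/InformationServer/Server/DSEngine
. ./dsenv
cd /path/to/YourProject
$DSHOME/bin/dssh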
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

thurmy34 wrote:I already did that; the job worked well the day after and aborted the next day (is that clear?).
Meaning, you removed the hashed file, it worked for a day or two afterwards and then aborted again? :?

If that's the case, what was the abort message? Can you post it in its entirety?
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Is there a Sequential File stage somewhere that specifies this hashed file directory as its target?

Or perhaps a Hashed File stage with Type set to 19?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Post by thurmy34 »

Chulett
The job aborts because the Cust_Id field is sent to the database with a null value.
The row is rejected because Cust_Id cannot be null.

Ray
Do you mean the file D_HashFile or the directory HashFile?
The directories are stored in a .ini file.
I will check them.

Thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

The directory of the hashed file; that is, the directory that contains the DATA.30 file.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

thurmy34 wrote:The job aborts because the Cust_Id field is sent to the database with a null value. The row is rejected because Cust_Id cannot be null.
And does this have anything to do with the hashed file we are discussing? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Post by thurmy34 »

Yes it does, because the hashed file is the input file.
So when it's corrupt, the job aborts with a null Cust_ID.
Last edited by thurmy34 on Fri Sep 05, 2008 5:39 am, edited 1 time in total.