waitForWriteSignal(): Premature EOF Error
Moderators: chulett, rschirm, roy
Hi,
We are upgrading from 7.1 to 8.1 and have encountered a problem with the Oracle DB stage. A user-defined query (which does some joins etc.) feeds a Copy stage, but the job throws this error:
waitForWriteSignal(): Premature EOF on node xxxxxxxxx Socket operation on non-socket
The query runs fine from the database, but when I try to view the data from the DataStage job it throws another error:
When binding output interface field "field1" to field "field": Converting a nullable source to a non-nullable result;
a fatal runtime error could occur; use the modify operator to
specify a value to which the null should be converted.
But the metadata in the Oracle database is the same as the metadata in DataStage.
Any thoughts on this?
RK
They are separate issues.
The first - is it intermittent or permanent? It can occur if the connection is lost even for a short time.
The second has a fairly self-explanatory message. Check both sets of metadata again, including re-importing the table definition (maybe into a different folder) - "they" may have changed it.
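For what it's worth, the error message's own suggestion can be followed by placing a Modify stage (the modify operator) between the Oracle stage and the Copy stage. A minimal sketch of the specification, assuming "field1" is the nullable column and "MISSING" is an arbitrary replacement value you would choose yourself:

```
field1 = handle_null(field1, "MISSING")
```

handle_null converts a nullable input field to a non-nullable output field, substituting the given value wherever the input is null, which is exactly the conversion the binding error is asking for.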
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I would fix the nullability issue first. Your "field1" is declared as nullable in your database - check your table DDL to confirm. Once you handle the null you can attack the other problem, if it still exists.
You should check your database DDL, not (only) the DataStage definition.
If your DDL and database read stage both specify a non-nullable "Field1", then I would add "$OSH_PRINT_SCHEMAS" to your job parameters and see what the actual schemas are that DataStage uses. If they differ from the database DDL, then you should raise this issue with your support provider.
Why not declare the field as nullable, then do a dummy NullToValue() or similar call downstream?
I checked that earlier - making it nullable works. But why does the problem occur when I use NVL in the query to substitute a value? Is DataStage 8.x assuming the computed value is nullable irrespective of its definition? My concern is that we are in the middle of an upgrade, and since we have tons of jobs this affects every job where we used NVL, which eats up time. Is there any fix for that other than changing the code? NVL works fine with DS 7.1.
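For reference, this is the pattern in question, sketched with hypothetical table and column names. NVL guarantees a non-null value at run time, but if 8.x derives every computed column as nullable regardless of the expression, the metadata would still be marked nullable and the binding warning would persist:

```sql
-- Hypothetical names (orders, amount): NVL substitutes 0 for NULL
-- at run time, yet the imported column metadata may still say nullable.
SELECT NVL(amount, 0) AS amount_nn
FROM   orders;
```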
RK