My 3 input columns are:
name; start_ip; end_ip
and in a Transformer I call a routine that works out all the valid IP addresses for each of these ranges.
The routine loops through all the possible IPs, concatenating each one with the NAME, a pipe delimiter and a Char(10) into one long string, which is written to a sequential file. I then read that sequential file back as a pipe-delimited file and so normalise the data.
e.g.: vListIPs = vListIPs : aName : '|' : NewIp : Char(10)
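
For context, the whole routine boils down to something like this (aName, aStartIp and aEndIp are the routine arguments; the working variable names are illustrative, and it assumes start_ip/end_ip arrive as dotted quads):

    vStart = Field(aStartIp, '.', 1) * 16777216 + Field(aStartIp, '.', 2) * 65536 + Field(aStartIp, '.', 3) * 256 + Field(aStartIp, '.', 4)
    vEnd = Field(aEndIp, '.', 1) * 16777216 + Field(aEndIp, '.', 2) * 65536 + Field(aEndIp, '.', 3) * 256 + Field(aEndIp, '.', 4)
    vListIPs = ''
    For vIp = vStart To vEnd
       * Rebuild the dotted quad from the integer form
       NewIp = Int(vIp / 16777216) : '.' : Mod(Int(vIp / 65536), 256) : '.' : Mod(Int(vIp / 256), 256) : '.' : Mod(vIp, 256)
       vListIPs = vListIPs : aName : '|' : NewIp : Char(10)
    Next vIp
    Ans = vListIPs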
This all works fine in the development environment but fails in the production environment when the IP range is large (e.g. 8,323,072 IPs).
I know this seems like a lot to concatenate into one field, and if it hadn't worked in the dev environment I would have found another way; but it does work there, so I am baffled.
Until just now I have always just received:

    Abnormal termination of stage TfmAuHlrAU.0000_99.tGenerateIPs detected

But when I hard coded the range into the job I got a bit more info:

    DataStage Job 509 Phantom 28595
    Program "DSD.GetStatus": Line 25,
    Available memory exceeded. Unable to continue processing record.
    Program "DSD.Startup": Line 142,
    Available memory exceeded. Unable to continue processing record.

I can only think that there is a parameter setting on the Prod server that is lower than on the dev server. Does anyone have any clues as to what might control this?
Additional info:
- The prod box has huge physical memory, much more than dev.
- Nothing else of significance is running.
- If I return the result to the transformer but don't use the value, there is no issue.
- The metadata of the column was VarChar(1000); although this is less than the length of the data, there were no complaints on dev. In prod I have now changed it to LongVarChar(2147483647) (not that I think this will make any difference).
- The length of the full string is 159,346,176 characters, which across 8,323,072 IPs works out to about 19 bytes per record: consistent with a short name, a pipe, a dotted quad and a Char(10).
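
If I do end up having to find another way, the obvious fallback is to stream each record straight to the sequential file from within the routine rather than building one giant string. A minimal sketch (the path is illustrative, and the dotted-quad conversion is the same as in the loop above):

    OpenSeq '/tmp/ip_list.txt' To fOut Then
       WeofSeq fOut   ;* truncate any contents from a previous run
    End Else
       Create fOut Else Ans = 'Cannot create output file'
    End
    For vIp = vStart To vEnd
       NewIp = Int(vIp / 16777216) : '.' : Mod(Int(vIp / 65536), 256) : '.' : Mod(Int(vIp / 256), 256) : '.' : Mod(vIp, 256)
       * WriteSeq terminates each line itself, so no Char(10) is needed
       WriteSeq aName : '|' : NewIp To fOut Else Ans = 'Write failed'
    Next vIp
    CloseSeq fOut
    Ans = 0

That would keep the memory footprint flat no matter how big the range is, but I'd still like to understand why the current approach dies on prod and not on dev.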
Nick.