In most of the stages we have the flexibility to apply a UNIX command, which I feel is very cool.
In the new 9.0.1 release, Hadoop support was introduced, handling of the JSON file format was also awesome, and the XML stages were revised. Those are the things I find interesting.
The issue is resolved now. It looks like the fixed-length schema and the length of the data in the file did not match, which is why the data was not getting populated.
I have a fixed-length schema file. The schema file is as follows:

record {record_length=fixed, delim=none} (
    field1: nullable string[1] {width=1};
    field2: nullable string[10] {width=10};
    field3: nullable string[1] {width=1};
    field3: nullable string[3] {width=3};
    : nullable string[1] {width=1};
    : nullable st...
Hi Gurus, In our project we have more than 1000 ETL jobs. There is a specific parameter set used in 90% of the jobs, and I see someone has deleted that parameter set, which would result in the failure of all those jobs. Is there a way we can track the user who deleted it? [As th...
Which version of DataStage are you working on? What are you using for email notification: the Notification Activity or the DSsendmailattachment routine? If you are using the Notification Activity, switch to the DSsendmailattachment routine. Hope this helps.
Hi, You may have to change your routine so that it saves the log file per the required naming convention; that way, going forward, you can attach the file directly by means of the Notification Activity. Option 2: The log file should be saved on the DS server, so you can use UNIX commands to change the job c...
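If the second option applies, a minimal sketch of renaming the saved log file to a dated naming convention before attaching it (the directory, job name, and date format here are assumptions, not your actual paths or convention):

```shell
#!/bin/sh
# Sketch only: rename a job's saved log file to a dated name so a
# Notification Activity can attach it by a predictable path.
# LOGDIR and JOB are assumptions -- substitute whatever your routine writes.
LOGDIR=${LOGDIR:-/tmp/ds_logs}
JOB=${JOB:-MyJob}

mkdir -p "$LOGDIR"
: > "$LOGDIR/$JOB.log"        # stand-in for the file your routine produces

STAMP=$(date +%Y%m%d)         # assumed convention: <job>_<yyyymmdd>.log
mv "$LOGDIR/$JOB.log" "$LOGDIR/${JOB}_${STAMP}.log"
ls "$LOGDIR"
```

The same rename could equally run as a before/after-job subroutine via ExecSH.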
Hi Gurus, I need to list all the jobs in a project in which a particular table is used. The table name to find is "R_DM_SCRB_DTL". Per the naming convention in the jobs, we prefix 'ora' or 'src' or something similar before the table name, so in the query below I used DS_J...
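Apart from querying the repository, one common low-tech approach is to export the project (or a folder of jobs) to a .dsx file from the Designer client and search the export for the table name; the export file name below is an assumption, and the file is created here only so the sketch is runnable:

```shell
#!/bin/sh
# Sketch: find which exported job definitions mention a table name.
# project_export.dsx is an assumption -- in practice you would export
# your project first, then search the real export.
printf 'DSJOB "ora_load_scrb"\nSELECT * FROM ora_R_DM_SCRB_DTL\n' > project_export.dsx

# -n prints the matching line numbers; use -l across *.dsx for many exports
grep -n "R_DM_SCRB_DTL" project_export.dsx
```

Because the search is plain text, it also catches the prefixed forms (ora_..., src_...) without needing wildcard logic in a repository query.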
Please look into the Pivot Enterprise stage in the Parallel Job Developer Guide and, should your needs not be met by that stage, ask again here in this thread. Thanks for your prompt reply ANDRW, but I could not meet my requirement with the Pivot stage. I initially built my job with the Pivot stage; I am supp...
Hi, Please find the requirement below. For better understanding of my query I have underlined the required output.

Source (custno, custpre):
abc
1   2
1   3
2   5
3   6

Target:
1,2,3
2,5,6

How do I perform this using a Transformer? Thanks in advance. Regards, Sirish.
Hi, I have a situation where I need to convert rows into columns. Please find the details below.

Source (Custno, custPreferences):
1   2
1   3
2   5
2   6

Target:
1,2,3
2,5,6

How do I perform this with a Transformer? Thanks in advance. Sirishds
I have a couple of jobs that need to be executed sequentially, but I am not supposed to use a sequencer. Is there an alternative way to control the flow of the jobs?
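One common alternative is a shell script that calls the dsjob command-line interface for each job in order and stops on the first failure. A minimal sketch, assuming made-up project and job names; DSJOB defaults to ':' (a no-op) here purely so the sketch runs without an engine, so point it at your real dsjob binary in practice:

```shell
#!/bin/sh
# Sketch: chain jobs without a Sequence job by calling dsjob serially.
# ':' is a no-op placeholder; replace with the engine's dsjob path.
DSJOB=${DSJOB:-:}
PROJECT=${PROJECT:-MyProject}

run_job() {
    # -run -jobstatus makes dsjob wait for the job and exit with its status
    "$DSJOB" -run -jobstatus "$PROJECT" "$1" || {
        echo "job $1 failed, aborting the chain" >&2
        return 1
    }
}

run_job Job_Extract &&
run_job Job_Transform &&
run_job Job_Load
```

Other options within DataStage itself are a job-control routine in a batch job, or before/after-job subroutines, but the dsjob script keeps the control flow visible and schedulable from cron.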
If your input is a file, then use the sed command in the Sequential File stage's filter:

sed "s/[^0-9]//g" file

Or, if you have only alphabetic characters, use the Convert function:

Convert("abcd...zABC...Z", "", Field)

Or, if you have alphabets and other characters:

svTmp=Convert("0123456789", "...
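The sed option can be checked outside DataStage; stripping everything that is not a digit from the sample values in the question gives exactly the wanted target:

```shell
#!/bin/sh
# Strip all non-numeric characters from each line, as the same sed
# expression would when used as a Sequential File stage filter.
printf 'abc100\na1000\nabcd10\n' | sed 's/[^0-9]//g'
```

This prints 100, 1000, and 10, one per line.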
I have a situation where I need to drop the alphabetic characters present in the source column and load only the numeric values to the target.

Source   Target
abc100   100
a1000    1000
abcd10   10

Can anyone help me out with the appropriate function to use in the Transformer, or, without using a Transformer, can we develop a ...