When using a cursor to bulk collect results into a sys.odcivarchar2list I get this error: ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind. My loop runs many times with no error, but for one specific table, when I collect all of its data into the varchar2 list, I get this error. I used this code:
declare
  filehandle1  utl_file.file_type;
  myquery      varchar2(4000) := 'select column1 ||''~''|| column2 from mytable';
  mycursor     sys_refcursor;
  myresults    sys.odcivarchar2list;
begin
  filehandle1 := utl_file.fopen ('D42', 'mydata', 'w');
  open mycursor for myquery;
  loop
    fetch mycursor bulk collect into myresults;
    if myresults.count > 0 then
      for idx in myresults.first..myresults.last loop
        utl_file.put_line(filehandle1, myresults(idx));
      end loop;
    end if;
    exit when mycursor%notfound;
  end loop;
  close mycursor;
  utl_file.fclose(filehandle1);
end;
This works for more than 100 tables, but just one of them fails with the error above. Thank you in advance for your help.
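The failing table almost certainly has a CLOB (or otherwise very long) column: the elements of sys.odcivarchar2list are VARCHAR2(4000), so any row whose concatenated value doesn't fit in that limit makes the bulk fetch fail with the truncated-bind error you're seeing. You can confirm which column is responsible with a quick check along these lines (a sketch using your table name; adjust as needed):

select column_name, data_type
from   user_tab_columns
where  table_name = 'MYTABLE'
and    data_type in ('CLOB', 'NCLOB', 'LONG');

If all of the columns turn out to be plain VARCHAR2s, check instead for rows where length(column1 || '~' || column2) > 4000.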
You need to write each CLOB value out into a separate file, and include that file name as part of the main data record. Something like this:
declare
  data_file   utl_file.file_type;
  clob_file   utl_file.file_type;
  buffer      varchar2(32767);
  position    pls_integer;
  chars       pls_integer;
  myquery     varchar2(4000) := 'select column1 ||''~''|| column3, '
                             || '''column2_'' || rownum, column2 from mytable';
  mycursor    sys_refcursor;
  myresult    varchar2(4000);
  myfilename  varchar2(120);
  myclob      clob;
begin
  data_file := utl_file.fopen ('D42', 'mydata', 'w');
  open mycursor for myquery;
  loop
    fetch mycursor into myresult, myfilename, myclob;
    exit when mycursor%notfound;
    -- only create a separate file if this row actually has CLOB content
    if myclob is not null and dbms_lob.getlength(myclob) > 0 then
      -- append the per-row CLOB file name to the main record
      myresult := myresult ||'~'|| myfilename;
      clob_file := utl_file.fopen ('D42', myfilename, 'w', 32767);
      -- write the CLOB out in chunks of up to 32767 characters
      position := 1;
      chars := 32767;
      while position <= dbms_lob.getlength(myclob) loop
        dbms_lob.read(myclob, chars, position, buffer);
        utl_file.put(clob_file, buffer);
        utl_file.fflush(clob_file);
        position := position + chars;
      end loop;
      utl_file.fclose(clob_file);
    end if;
    utl_file.put_line(data_file, myresult);
  end loop;
  close mycursor;
  utl_file.fclose(data_file);
end;
/
There is one data_file which has all the non-CLOB data, including the name of the individual file you write that row's CLOB to. The filename can be anything as long as it's unique; I've used rownum here, but you could use the row's primary key ID if it has one, for example.
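For instance, if mytable had a numeric primary key column called id (an assumption for illustration), the driving query could name each CLOB file after it instead of rownum:

myquery varchar2(4000) := 'select column1 ||''~''|| column3, '
                       || '''column2_'' || id, column2 from mytable';

A key-based name makes it easier to tie each CLOB file back to its row if you need to re-run or audit the extract.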
With a dummy table created as:
create table mytable (column1 number, column2 clob, column3 varchar2(10));
insert into mytable (column1, column2, column3) values (1, null, 'First');
insert into mytable (column1, column2, column3) values (2, 'Second CLOB', 'Second');
insert into mytable (column1, column2, column3) values (3, 'Third CLOB', 'Third');
... this creates mydata containing:
1~First
2~Second~column2_2
3~Third~column2_3
and files column2_2 and column2_3 with the corresponding CLOB values.
Then if I run SQL*Loader with that data file and the CLOB files available, and control file:
load data
characterset UTF8
truncate
into table mytable2
fields terminated by "~"
trailing nullcols
(
column1 char(10),
column3 char(10),
clob_filename filler char(120),
column2 lobfile(clob_filename) terminated by EOF
)
... the new table is populated as:
select * from mytable2;
   COLUMN1 COLUMN2                        COLUMN3
---------- ------------------------------ ----------
         1                                First
         2 Second CLOB                    Second
         3 Third CLOB                     Third
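For completeness, the SQL*Loader call for that control file might look roughly like this (the connect string is a placeholder, the control file is assumed to be saved as mytable2.ctl, and the extract is assumed to have been renamed mydata.dat since SQL*Loader applies a default .dat extension when none is given):

sqlldr userid=scott/tiger@mydb control=mytable2.ctl data=mydata.dat log=mytable2.log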
(I still think you should be doing this with the built-in tools, data pump or just copying data internally between two schemas, as discussed on previous questions...)
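For what it's worth, the internal-copy option can be as simple as an insert ... select, assuming the target table already exists and you have the necessary privileges (the schema names here are placeholders):

insert into target_schema.mytable (column1, column2, column3)
select column1, column2, column3
from   source_schema.mytable;
commit;

CLOB columns are copied directly with this approach, so no intermediate files are needed at all.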