LONG to CLOB conversion in huge tables [message #575716]
Fri, 25 January 2013 15:45
gnmurthy86 (Junior Member), Messages: 1, Registered: January 2013
Hi all,
We have a huge table in production with a LONG column, and we are trying to change its datatype to CLOB.
The table has 120 million records and is about 270 GB in size.
We tried the Oracle expdp/impdp option for the conversion in our perf environment.
With parallel degree 32, the export completed in 1.5 hours, but the import took 13 hours.
I also tried TO_LOB with INSERT ... SELECT; it ran for 20 hours before I killed the process.
Are there any ways to improve the performance of LONG to CLOB conversion on huge tables?
Re: LONG to CLOB conversion in huge tables [message #575880 is a reply to message #575716]
Mon, 28 January 2013 14:20
alan.kendall@nfl.com (Senior Member), Messages: 163, Registered: June 2012, Location: Culver City, California
What I do is alter the table and its indexes to be NOLOGGING prior to the export, so the import generates less redo and has less contention with the LOB writes.
ENWEBTT > alter index GAMECENTER.PK_BIG_PLAY nologging;
Index altered.
ENWEBTT > alter table GAMECENTER.BIG_PLAY nologging;
Table altered.
If there are a lot of indexes, you can create the non-LOB indexes after the import to reduce contention.
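A hedged sketch of that approach (the index name and column are hypothetical, not from this thread): drop the secondary indexes before the import, recreate them afterwards with NOLOGGING and PARALLEL, then reset the attributes once the build finishes.

-- Hypothetical secondary index, dropped before the import
DROP INDEX GAMECENTER.IX_BIG_PLAY_GAMEID;

-- Recreated after the import, with minimal redo and parallel slaves
CREATE INDEX GAMECENTER.IX_BIG_PLAY_GAMEID
    ON GAMECENTER.BIG_PLAY (GAMEID)
    NOLOGGING PARALLEL 8;

-- Reset the attributes so future queries and DML behave normally
ALTER INDEX GAMECENTER.IX_BIG_PLAY_GAMEID NOPARALLEL LOGGING;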
I also increase the PGA so that index creation happens more in memory than on disk.
ECSCDAD3 > alter system set pga_aggregate_target=20g scope=memory;
System altered.
You might also check the event waits with the following script to see whether there is some other problem, such as a log_buffer, redo log size, or SGA that is too small.
ECSCDAD3 > @active
SID SERIAL# MACHINE LOGIN DB SEC_WAIT USERNAME EVENT
------- ------- ------------------ --------------- ---------- -------- ----------- ------------------------
1522 16629 US\KENDALLA-LAL7 29-JAN-13 13:18 CSCDAD 0 ECSCDAD3 SQL*Net message to client
ECSCDAD3 > list
1 SELECT sid,
2 serial#,
3 machine,
4 To_char(s.logon_time, 'DD-MON-RR HH24:MI') login,
5 i.instance_name db,
6 s.seconds_in_wait sec_wait,
7 s.username,
8 s.event,
9 s.status,
10 s.program,
11 s.machine,
12 s.MODULE,
13 s.terminal
14 FROM gv$session s,
15 gv$instance i
16 WHERE i.inst_id = s.inst_id
17 AND s.status = 'ACTIVE'
18 AND s.username IS NOT NULL
19 AND s.wait_class <> 'Idle'
20 --and seconds_in_wait>0
21* ORDER BY seconds_in_wait
Re: LONG to CLOB conversion in huge tables [message #577263 is a reply to message #575882]
Wed, 13 February 2013 16:57
Some time ago I also had to do this. I tested many different scenarios for a week, and the tests showed the quickest option was creating a new partitioned table with "store clob as securefile ... compress" and a parallel "insert ... select ...".
Main points:
1. CLOB inserts cannot use direct-path insert.
2. You can speed up the process with parallelism.
3. When you run it in parallel, you need to reduce concurrency on LOB index maintenance.
4. CLOB compression optimizes IO throughput.
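A minimal sketch of the points above, with hypothetical table, column, and partition names (the compression level and parallel degree are assumptions too; TO_LOB has version-specific restrictions, so test on your own data):

-- New partitioned table storing the CLOB as a compressed SecureFile
CREATE TABLE BIG_TABLE_NEW (
  ID         NUMBER NOT NULL,
  CREATED_DT DATE   NOT NULL,
  PAYLOAD    CLOB
)
LOB (PAYLOAD) STORE AS SECUREFILE (COMPRESS MEDIUM)
PARTITION BY RANGE (CREATED_DT) (
  PARTITION P2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION PMAX  VALUES LESS THAN (MAXVALUE)
);

-- Conventional (not direct-path) insert; TO_LOB converts the LONG column
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(T, 16) */ INTO BIG_TABLE_NEW T
SELECT ID, CREATED_DT, TO_LOB(PAYLOAD)
FROM   BIG_TABLE;
COMMIT;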
Now I'm too tired, so I cannot remember everything.
Best regards,
Sayan Malakshinov