1. Recompiling database objects in a schema
To recompile all objects in a schema:
exec DBMS_UTILITY.compile_schema(schema => 'SCHEMA');
To recompile only the invalid objects in parallel:
exec UTL_RECOMP.recomp_parallel(<NUM_PARALLEL_THREADS>, 'SCHEMA');
A NULL value for SCHEMA recompiles all invalid objects in the database.
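For example, to recompile only the invalid objects in the SCOTT schema using four parallel threads (the schema name and thread count here are just placeholders; UTL_RECOMP is typically invoked as SYS or another SYSDBA-privileged user):
exec UTL_RECOMP.recomp_parallel(4, 'SCOTT');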
2. SGA breakdown in Solaris Locality Groups (lgroup)
To find the breakdown, execute pmap -L <pid> against one of the Oracle processes (the SGA is shared, so any background or server process will do). Then separate the lines that are related to each locality group and sum up the values in the 2nd column to arrive at a number that shows the total SGA memory allocated in that locality group (a rough sketch follows below).
(I'm pretty sure there will be a much easier way that I am not currently aware of.)
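Here is a rough sketch of that idea, assuming the SGA segments show up in the pmap output with "shmid" in the mapping name, the size in KB in the 2nd column and the lgroup id in the 3rd (field positions may vary between Solaris releases, so verify against the actual output first):
pmap -L <pid_of_an_oracle_process> | \
    awk '/shmid/ { kb[$3] += $2 } END { for (g in kb) print "lgroup " g ": " kb[g] " KB" }'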
3. Default values for shared pool, java pool, large pool, ...
If the *pool parameters were not set explicitly, executing the following query is one way to find out what they are currently set to.
e.g.,
SQL> select * from v$sgainfo;
NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2171296 No
Redo Buffers                      373620736 No
Buffer Cache Size                8.2410E+10 Yes
Shared Pool Size                 1.7180E+10 Yes
Large Pool Size                   536870912 Yes
Java Pool Size                   1879048192 Yes
Streams Pool Size                 268435456 Yes
Shared IO Pool Size                       0 Yes
Granule Size                      268435456 No
Maximum SGA Size                 1.0265E+11 No
Startup overhead in Shared Pool  2717729536 No
Free SGA Memory Available                 0
12 rows selected.
4. Fix for the "PLS-00201: identifier 'GV$SESSION' must be declared" error
Grant the SELECT privilege on gv_$session to the owner of the database object that failed to compile. (GV$SESSION is a public synonym for the underlying SYS-owned view gv_$session, and the grant has to be made against the latter.)
SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Warning: Package Body altered with compilation errors.
SQL> show errors
Errors for PACKAGE BODY OWF_MGR.FND_SVC_COMPONENT:
LINE/COL ERROR
-------- -----------------------------------------------------------------
390/22   PL/SQL: Item ignored
390/22   PLS-00201: identifier 'GV$SESSION' must be declared
SQL> grant select on gv_$SESSION to OWF_MGR;
Grant succeeded.
SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Package body altered.
5. Solaris Critical Thread (CT) optimization for the Oracle log writer (lgwr)
Critical Thread is a new scheduler optimization available in Oracle Solaris 10 Update 10 and later releases. Latency-sensitive, single-threaded components of software, such as the Oracle database's log writer, benefit from the CT optimization.
At a high level, LWPs marked critical are granted more exclusive access to the hardware. For example, on SPARC T4 and T5 systems, such a thread will be assigned exclusive access to a core as much as possible. The CT optimization won't delay the scheduling of any runnable thread in the system.
Critical Thread optimization is enabled by default. However, users of the system have to hint the OS by explicitly marking a thread or two "critical", as shown below.
priocntl -s -c FX -m 60 -p 60 -i pid <pid_of_critical_single_threaded_process>
From the database point of view, the log writer (lgwr) is one such process that can benefit from the CT optimization on the Solaris platform. Oracle DBAs can either make the lgwr process 'critical' by hand once the database is up and running, or simply patch the 11.2.0.3 database software by installing RDBMS patch 12951619 to let the database take care of it automatically. I believe Oracle 12c does this by default, and future releases of the 11g software may make lgwr critical out of the box.
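For instance, to mark the log writer of a running instance critical by hand (a sketch; ORCL is a placeholder SID, and it is worth verifying the pid that pgrep returns before changing its scheduling class):
# ora_lgwr_<SID> is the standard name of the log writer background process
priocntl -s -c FX -m 60 -p 60 -i pid $(pgrep -f ora_lgwr_ORCL)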
Those who install the database patch 12951619 need to carefully follow the post-installation steps documented in the patch README to avoid running into unwanted surprises.
6. ORA-14519 error while importing a table from a Data Pump export dump
ORA-14519: Conflicting tablespace blocksizes for table : Tablespace XXX block size 32768 [partition specification] conflicts with previously specified/implied tablespace YYY block size 8192 [object-level default]
Failing sql is:
CREATE TABLE XYZ
..
All partitions in table XYZ are using 32K blocks whereas the implicit default partition is pointing to an 8K block tablespace. The workaround is to use the REMAP_TABLESPACE option on the Data Pump impdp command line to remap the implicit default tablespace of the partitioned table to the tablespace where the rest of the partitions are residing, as in the example below.
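For example, with the tablespace names from the error message above (the rest of the command line is a placeholder sketch):
impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp tables=XYZ remap_tablespace=YYY:XXX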
7. Index building task in Data Pump import process
When a Data Pump import process is running, index building is performed with just one thread by default, which becomes a bottleneck and causes the data import to take a long time, especially if many large tables with millions of rows are being imported into the target database. One way to speed up the import is to skip index building during the data import task with the help of the EXCLUDE=INDEX impdp command-line option, and then extract the index definitions for all the skipped indexes from the Data Pump dump file as shown below.
impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp sqlfile=<index_def_file>.sql INCLUDE=INDEX
Edit <index_def_file>.sql to set the desired degree of parallelism for each CREATE INDEX statement, and finally execute <index_def_file>.sql to build the indexes once the data import task is complete. The end-to-end sequence is sketched below.
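Putting the steps together, a rough sketch (connect string, directory and file names are placeholders; in the SQL file that impdp generates, each CREATE INDEX statement typically carries a PARALLEL clause that can be bumped up):

# 1. load the data, skipping index creation
impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp EXCLUDE=INDEX

# 2. extract the index DDL into a SQL script
impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp sqlfile=<index_def_file>.sql INCLUDE=INDEX

# 3. edit the script (e.g., change PARALLEL 1 to PARALLEL 8), then run it
sqlplus <userid>/<password> @<index_def_file>.sql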