Sunday, January 18, 2009

Manual DB Upgrade - Compatibility Matrix Enclosed

Let us upgrade the database to 10gR2 using this blog: a step-by-step procedure to achieve this.


Complete checklist for manual upgrades to 10gR2

Prerequisites

* Install Oracle 10g Release 2 in a new Oracle Home.

* Install JAccelerator (NCOMP) into the home from the Companion media.

* Install the latest available patchset from Metalink.

* Install the latest available Critical Patch Update (see Metalink Note 290178.1).


* If you are upgrading to 10.2.0.3, review the following alert before performing the upgrade and apply any required patches: Note 412271.1 "ORA-600 [22635] and ORA-600 [KOKEIIX1] Reported While Upgrading Or Patching Databases To 10.2.0.3".

* Take either a cold or a hot backup of your database.

* Make sure to take a backup of the Oracle Home and the Central Inventory (see the backup example after this list). The Central Inventory location is recorded in the oraInst.loc file, which is found in the following locations on the various platforms:

/var/opt/oracle/oraInst.loc -- Solaris
/etc/oraInst.loc -- other operating systems
HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\inst_loc -- Windows (registry key)

* Verify kernel parameters are set according to the 10gR2 Installation Guide.

* Verify that all O/S packages and patches are installed as per the Installation Guide.
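
For example, on UNIX platforms the Oracle Home and the Central Inventory can be backed up with tar; the backup directory and inventory path below are only illustrative, use the inventory path found in your oraInst.loc:

$ tar -cvf /backup/oracle_home_pre10g.tar $ORACLE_HOME
$ tar -cvf /backup/oraInventory_pre10g.tar /u01/app/oraInventory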

Compatibility Matrix

Minimum Version of the database that can be directly upgraded to Oracle 10g Release 2

8.1.7.4 -> 10.2.X.X.X
9.0.1.4 or 9.0.1.5 -> 10.2.X.X.X
9.2.0.4 or higher -> 10.2.X.X.X
10.1.0.2 or higher -> 10.2.X.X.X

The following database versions will require an indirect upgrade path.

7.3.3 (or lower) -> 7.3.4 -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X
7.3.4 -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X
8.0.n -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X
8.1.n -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X

Steps for Upgrading the Database to 10g Release 2

Preparing to Upgrade

All the steps in this section are performed against the previous version of Oracle. Please note that the database must be running in normal mode in the old release.

Step 1:

Log in to the system as the owner of the new 10gR2 ORACLE_HOME and copy the following files from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the /tmp directory on your system:

ORACLE_HOME/rdbms/admin/utlu102i.sql
ORACLE_HOME/rdbms/admin/utltzuv2.sql

Make a note of the new location of these files.

Step 2:


Change to the temporary directory that you copied files to in Step 1.

Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges. Then run and spool the utlu102i.sql file.

sqlplus '/as sysdba'

SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off


Then check the spool file and examine the output of the Upgrade Information Tool. The sections which follow describe the output of the Upgrade Information Tool (utlu102i.sql).

Database:

This section displays global database information about the current database such as the database name, release number, and compatibility level. A warning is displayed if the COMPATIBLE initialization parameter needs to be adjusted before the database is upgraded.

Logfiles:

This section displays a list of redo log files in the current database whose size is less than 4 MB. For each log file, the file name, group number, and recommended size is displayed. New files of at least 4 MB (preferably 10 MB) need to be created in the current database. Any redo log files less than 4 MB must be dropped before the database is upgraded.
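
The current redo log sizes can be checked, and larger groups added, with statements like the following (group numbers and file names are only illustrative; a group can be dropped only when it is not the current or an active group):

SQL> select group#, bytes/1024/1024 size_mb from v$log;
SQL> alter database add logfile group 4 ('/u01/oradata/ORCL/redo04.log') size 10M;
SQL> alter database drop logfile group 1;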

Tablespaces:

This section displays a list of tablespaces in the current database. For each tablespace, the tablespace name and minimum required size is displayed. In addition, a message is displayed if the tablespace is adequate for the upgrade. If the tablespace does not have enough free space, then space must be added to the tablespace in the current database. Tablespace adjustments need to be made before the database is upgraded.
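
Free space per tablespace can be checked with a query such as the one below, and a datafile extended if needed (the file name and size are only illustrative):

SQL> select tablespace_name, round(sum(bytes)/1024/1024) free_mb
     from dba_free_space
     group by tablespace_name;

SQL> alter database datafile '/u01/oradata/ORCL/system01.dbf' resize 800M;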

Update Parameters:

This section displays a list of initialization parameters in the parameter file of the current database that must be adjusted before the database is upgraded. The adjustments need to be made to the parameter file after it is copied to the new Oracle Database 10g release.

Deprecated Parameters:

This section displays a list of initialization parameters in the parameter file of the current database that are deprecated in the new Oracle Database 10g release.

Obsolete Parameters:

This section displays a list of initialization parameters in the parameter file of the current database that are obsolete in the new Oracle Database 10g release. Obsolete initialization parameters need to be removed from the parameter file before the database is upgraded.

Components:

This section displays a list of database components in the new Oracle Database 10g release that will be upgraded or installed when the current database is upgraded.

Miscellaneous Warnings:

This section provides warnings about specific situations that may require attention before and/or after the upgrade.

SYSAUX Tablespace:

This section displays the minimum required size for the SYSAUX tablespace, which is required in Oracle Database 10g. The SYSAUX tablespace must be created after the new Oracle Database 10g release is started and BEFORE the upgrade scripts are invoked.

Step 3:


Check for the deprecated CONNECT Role

After upgrading to 10gR2, the CONNECT role will only have the CREATE SESSION privilege; the other privileges granted to the CONNECT role in earlier releases will be revoked during the upgrade. To identify which users and roles in your database are granted the CONNECT role, use the following query:

SELECT grantee FROM dba_role_privs
WHERE granted_role = 'CONNECT' and
grantee NOT IN (
'SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
'LOGSTDBY_ADMINISTRATOR', 'ORDSYS',
'ORDPLUGINS', 'OEM_MONITOR', 'WKSYS', 'WKPROXY',
'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS',
'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER',
'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA',
'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');


If users or roles require privileges other than CREATE SESSION, then grant the specific required privileges prior to upgrading. The upgrade scripts adjust the privileges for the Oracle-supplied users.
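
For example, if an application user relied on the CONNECT role for more than CREATE SESSION, the needed privileges can be granted explicitly before the upgrade (the user name and privilege list here are only an illustration):

SQL> grant create table, create view, alter session to scott;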

In Oracle 9.2.x and 10.1.x the CONNECT role includes the following privileges:

SELECT GRANTEE,PRIVILEGE FROM DBA_SYS_PRIVS
WHERE GRANTEE='CONNECT';

GRANTEE PRIVILEGE
------------------------------ ---------------------------
CONNECT CREATE VIEW
CONNECT CREATE TABLE
CONNECT ALTER SESSION
CONNECT CREATE CLUSTER
CONNECT CREATE SESSION
CONNECT CREATE SYNONYM
CONNECT CREATE SEQUENCE
CONNECT CREATE DATABASE LINK

In Oracle 10.2 the CONNECT role includes only the CREATE SESSION privilege.

Step 4:


Create a script to recreate the database links in case the database needs to be downgraded.

During the upgrade to 10gR2, any passwords in database links will be encrypted. To downgrade back to the original release, all of the database links with encrypted passwords must be dropped prior to the downgrade. Consequently, the database links will not exist in the downgraded database. If you anticipate a requirement to be able to downgrade back to your original release, then save the information about affected database links from the SYS.LINK$ table, so that you can recreate the database links after the downgrade.

The following script can be used to construct the statements needed to recreate the database links (spool its output to a file as shown after the script).

SELECT
'create '||DECODE(U.NAME,'PUBLIC','public ')||'database link '||CHR(10)
||DECODE(U.NAME,'PUBLIC',Null, U.NAME||'.')|| L.NAME||chr(10)
||'connect to ' || L.USERID || ' identified by '''
||L.PASSWORD||''' using ''' || L.host || ''''
||chr(10)||';' TEXT
FROM sys.link$ L,
sys.user$ U
WHERE L.OWNER# = U.USER# ;
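

The generated statements can be spooled to a file so they are still available after a downgrade; the file name below is only an example:

SQL> spool recreate_dblinks.sql
  ... run the SELECT shown above ...
SQL> spool off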


Step 5:


Check for the TIMESTAMP WITH TIME ZONE datatype. The new release ships updated time zone transition rules, which may affect existing data of the TIMESTAMP WITH TIME ZONE datatype.
For example, if a user enters TIMESTAMP '2003-02-17 09:00:00 America/Sao_Paulo', Oracle converts the data to UTC based on the transition rules in the time zone file and stores it on disk. So '2003-02-17 11:00:00' along with the time zone ID for 'America/Sao_Paulo' is stored, because the offset for this particular time is '-02:00'. Now suppose the transition rules are modified and the offset for this particular time changes to '-03:00'. When users retrieve the data, they will get '2003-02-17 08:00:00 America/Sao_Paulo', a one hour difference compared to the original value.

Change to the temporary directory that you copied files to in Step 1.

Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges. Then run and spool the utltzuv2.sql file.

$ sqlplus '/as sysdba'

SQL> spool TimeZone_Info.log
SQL> @utltzuv2.sql
SQL> spool off


If the utltzuv2.sql script identifies columns with time zone data affected by the database upgrade, then use the following workaround:

Before the upgrade, save the time zone information in character format (for example, using TO_CHAR(column, 'YYYY-MM-DD HH24.MI.SSXFF TZR')), and recreate the TIMESTAMP WITH TIME ZONE data from these saved values after the upgrade.

For example, suppose user scott has a table tztab:
create table tztab(x number primary key, y timestamp with time zone);
insert into tztab values(1, timestamp '2003-02-17 09:00:00 America/Sao_Paulo');

Before the upgrade, create a table tztab_back; note that column y here is defined as VARCHAR2 to preserve the original value.

create table tztab_back(x number primary key, y varchar2(256));
insert into tztab_back select x,
to_char(y, 'YYYY-MM-DD HH24.MI.SSXFF TZR') from tztab;

After the upgrade, update the data in the table tztab using the values saved in tztab_back.
update tztab t set t.y = (select to_timestamp_tz(t1.y,
'YYYY-MM-DD HH24.MI.SSXFF TZR') from tztab_back t1 where t.x=t1.x);


Step 6:

Starting in Oracle 9i the National Characterset (NLS_NCHAR_CHARACTERSET) will be limited to UTF8 and AL16UTF16.

For more details refer to The National Character Set in Oracle 9i and 10g

Any other NLS_NCHAR_CHARACTERSET will no longer be supported. When upgrading to 10g the value of NLS_NCHAR_CHARACTERSET is based on the value currently used in the Oracle8 version.

If the NLS_NCHAR_CHARACTERSET is UTF8 then it will stay UTF8. In all other cases the NLS_NCHAR_CHARACTERSET is changed to AL16UTF16 and, if used, N-type data (data in columns using NCHAR, NVARCHAR2 or NCLOB) may need to be converted.

The change itself is done in step 38 by running the upgrade script.

If you are NOT using N-type columns *for user data* then simply go to the next step.
No further action is required.

( so if: select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS where
DATA_TYPE in ('NCHAR','NVARCHAR2', 'NCLOB') and OWNER not in
('SYS','SYSTEM'); returns no rows, go to next step.)

If you have N-type columns *for user data* then check:

SQL> select * from nls_database_parameters where parameter ='NLS_NCHAR_CHARACTERSET';

If you are using N-type columns AND your National Characterset is UTF8 or is in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

then also simply go to the next step. The conversion of the user data itself will then be done in step 38.

If you are using N-type columns AND your National Characterset is NOT UTF8 or NOT in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

(your current NLS_NCHAR_CHARACTERSET is for example US7ASCII, WE8ISO8859P1, CL8MSWIN1251 ...)
then you have to:

* change the tables to use CHAR, VARCHAR2 or CLOB instead of the N-type columns
or

* export the table(s) containing N-type columns and truncate those tables before the upgrade, then import the data again after the upgrade (see Step 38).

The recommended NLS_LANG during export is simply the NLS_CHARACTERSET, not the NLS_NCHAR_CHARACTERSET.

Step 7:

When upgrading to Oracle Database 10g, optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering only occurs for those tables that lack statistics or are significantly changed during the upgrade.

To decrease the amount of downtime incurred when collecting statistics, you can collect statistics prior to performing the actual database upgrade.

As of Oracle Database 10g Release 10.1, Oracle recommends that you use the DBMS_STATS.GATHER_DICTIONARY_STATS procedure to gather these statistics. You can enter the following:

$ sqlplus '/as sysdba'

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

If the database is release 9.0.1 or 9.2.0, use the
DBMS_STATS.GATHER_SCHEMA_STATS procedure instead to gather the statistics.

Backup the existing statistics as follows:

$ sqlplus '/as sysdba'
SQL>spool sdict

SQL>grant analyze any to sys;

SQL>exec dbms_stats.create_stat_table('SYS','dictstattab');

SQL>exec dbms_stats.export_schema_stats('WMSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('MDSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('CTXSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('XDB','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('WKSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('LBACSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('OLAPSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('DMSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('ODM','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('ORDSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('ORDPLUGINS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('SI_INFORMTN_SCHEMA','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('OUTLN','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('DBSNMP','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('SYSTEM','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('SYS','dictstattab',statown => 'SYS');

SQL>spool off


This data is useful if you want to revert the statistics later.

For example, the following PL/SQL subprograms import the statistics for the SYS schema after deleting the existing statistics:

exec dbms_stats.delete_schema_stats('SYS');
exec dbms_stats.import_schema_stats('SYS','dictstattab');

To gather the statistics, connect to the database AS SYSDBA using SQL*Plus and run the following script:

$ sqlplus '/as sysdba'

SQL>spool gdict

SQL>grant analyze any to sys;

SQL>exec dbms_stats.gather_schema_stats('WMSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('MDSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('CTXSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('XDB',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('WKSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('LBACSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('OLAPSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('DMSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('ODM',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('ORDSYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('ORDPLUGINS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('SI_INFORMTN_SCHEMA',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('OUTLN',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('DBSNMP',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('SYSTEM',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

SQL>spool off



Step 8:


Check for invalid objects.

spool invalid_pre.lst
select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status from
dba_objects where status <>'VALID';
spool off


Run the following script and then requery invalid objects:

This script must be run as a user with SYSDBA privs using SQL*Plus:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus '/as sysdba'
SQL> @utlrp.sql

After running utlrp.sql, requery the invalid objects; those still invalid are the objects that cannot be recompiled before the upgrade. Keep the spool file 'invalid_pre.lst' for comparison after the upgrade (Step 36).

Step 9:


To check for corruption in the dictionary, use the following commands in SQL*Plus connected as SYS:

Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql

Select 'Analyze cluster "'||cluster_name||'" validate structure cascade;'
from dba_clusters
where owner='SYS'
union
Select 'Analyze table "'||table_name||'" validate structure cascade;'
from dba_tables
where owner='SYS' and partitioned='NO' and (iot_type='IOT' or iot_type is NULL)
union
Select 'Analyze table "'||table_name||'" validate structure cascade into invalid_rows;'
from dba_tables
where owner='SYS' and partitioned='YES';

spool off


This creates a script called analyze.sql.
Now execute the following steps.

$ sqlplus '/as sysdba'
SQL> @$ORACLE_HOME/rdbms/admin/utlvalid.sql
SQL> @analyze.sql

This script (analyze.sql) should not return any errors.

Step 10:


Ensure that all Snapshot refreshes are successfully completed, and that replication is stopped.

$ sqlplus '/as sysdba'
SQL> select distinct(trunc(last_refresh)) from dba_snapshot_refresh_times;


Step 11:


Stop the listener for the database:

$ lsnrctl
LSNRCTL> stop

Ensure no files need media recovery:

$ sqlplus '/ as sysdba'
SQL> select * from v$recover_file;

This should return no rows.

Step 12:


Ensure no files are in backup mode:

SQL> select * from v$backup where status!='NOT ACTIVE';

This should return no rows.

Step 13:


Resolve any outstanding distributed transactions:

SQL> select * from dba_2pc_pending;

If this returns rows you should do the following:

SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('<local_tran_id>');
SQL> commit;

Step 14:

Disable all batch and cron jobs.
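
One way to do this for database jobs is to set JOB_QUEUE_PROCESSES to zero for the duration of the upgrade, and to save and clear the oracle user's crontab at the OS level (the backup file name is only illustrative):

SQL> alter system set job_queue_processes=0;

$ crontab -l > /tmp/oracle_crontab.bak
$ crontab -r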

Step 15:

Ensure the users SYS and SYSTEM have SYSTEM as their default tablespace.

SQL> select username, default_tablespace from dba_users
where username in ('SYS','SYSTEM');

To modify use:

SQL> alter user sys default tablespace SYSTEM;
SQL> alter user system default tablespace SYSTEM;

Step 16:

Ensure that AUD$ is in the SYSTEM tablespace if auditing is enabled.

SQL> select tablespace_name from dba_tables where table_name='AUD$';

Step 17:


Note down where all control files are located.

SQL> select * from v$controlfile;

Step 18:

Shutdown the database

$ sqlplus '/as sysdba'
SQL> shutdown immediate;

Step 19:

Perform a full cold backup!


You can either do this by manually copying the files or sign on to RMAN:

$ rman target / nocatalog

And issue the following RMAN commands:

RUN
{
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT 'some_backup_directory%U' TAG before_upgrade;
BACKUP CURRENT CONTROLFILE TO 'save_controlfile_location';
}

Upgrading to the New Oracle Database 10g Release 2

Step 20:

Update the init.ora file:

- Make a backup of the init.ora file.

- Comment out the obsolete parameters (listed in Appendix A).

- Change all deprecated parameters (listed in Appendix B).

- Set the COMPATIBLE initialization parameter to an appropriate value. If you are
upgrading from 8.1.7.4 then set the COMPATIBLE parameter to 9.2.0 until after the
upgrade has been completed successfully. If you are upgrading from 9.2.0 or 10.1.0
then leave the COMPATIBLE parameter set to its current value until the upgrade
has been completed successfully. This avoids unnecessary ORA-942 errors
from being reported in SMON trace files during the upgrade (because the upgrade
is looking for 10.2 objects that have not yet been created).

- If you have set the parameter NLS_LENGTH_SEMANTICS to CHAR, change the value
to BYTE for the duration of the upgrade.

- Verify that the parameter DB_DOMAIN is set properly.

- Make sure the PGA_AGGREGATE_TARGET initialization parameter is set to
at least 24 MB.

- Ensure that the SHARED_POOL_SIZE and the LARGE_POOL_SIZE are at least 150 MB.
Please also check the "Known issues" section.

- Make sure the JAVA_POOL_SIZE initialization parameter is set to at least 150 MB.

- Ensure there is a value set for DB_BLOCK_SIZE.

- On Windows operating systems, change the BACKGROUND_DUMP_DEST and USER_DUMP_DEST
initialization parameters that point to RDBMS80 or any other environment variable
to point to the following directories instead:

BACKGROUND_DUMP_DEST to ORACLE_BASE\oradata\DB_NAME

and

USER_DUMP_DEST to ORACLE_BASE\oradata\DB_NAME\archive

- Comment out any existing AQ_TM_PROCESSES parameter setting, and enter a new one that explicitly sets AQ_TM_PROCESSES=0 for the duration of the upgrade

- Set the parameter UNDO_MANAGEMENT=AUTO

- Make sure all path names in the parameter file are fully specified. You should not have relative path names in the parameter file.

- If you are using a cluster database, set the parameter CLUSTER_DATABASE=FALSE during the upgrade.

- If you are upgrading a cluster database, then modify the initdb_name.ora file in the same way that you modified the parameter file.

Step 21:

Check for adequate freespace on archive log destination file systems.
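
The archive destination can be confirmed from SQL*Plus and its free space checked at the OS level; the path below is only an example:

SQL> archive log list

$ df -k /u01/arch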

Step 22:

Ensure the NLS_LANG variable is set correctly:

$ env | grep NLS_LANG

Step 23:

If needed, copy the SQL*Net files (listener.ora, tnsnames.ora, etc.) to the new location (when no TNS_ADMIN environment variable is used):

$ cp $OLD_ORACLE_HOME/network/admin/*.ora $ORACLE_HOME/network/admin

Step 24:

If your operating system is Windows NT, delete your services with the ORADIM utility of your old Oracle version.

Stop the OracleServiceSID Oracle service of the database you are upgrading, where SID is the instance name. For example, if your SID is ORCL, then enter the following at a command prompt:

C:\> NET STOP OracleServiceORCL

For Oracle 8.0 this is:
C:\> ORADIM80 -DELETE -SID SID

For Oracle 8i or higher this is:
C:\> ORADIM -DELETE -SID SID

Also create the new Oracle Database 10gR2 service at a command prompt using the ORADIM command of the new Oracle Database release:

C:\> ORADIM -NEW -SID SID -INTPWD PASSWORD -MAXUSERS USERS
-STARTMODE AUTO -PFILE ORACLE_HOME\DATABASE\INITSID.ORA

Step 25:


Copy configuration files from the ORACLE_HOME of the database being upgraded to the new Oracle Database 10g ORACLE_HOME:

If your parameter file resides within the old environment's ORACLE_HOME, then copy it to the new ORACLE_HOME. By default, Oracle looks for the parameter file in ORACLE_HOME/dbs on UNIX platforms and in ORACLE_HOME\database on Windows operating systems. The parameter file can reside anywhere you wish, but it should not reside in the old environment's ORACLE_HOME after you upgrade to Oracle Database 10g.

If your parameter file is a text-based initialization parameter file with either an IFILE (include file) or a SPFILE (server parameter file) entry, and the file specified in the IFILE or SPFILE entry resides within the old environment's ORACLE_HOME, then copy the file specified by the IFILE or SPFILE entry to the new ORACLE_HOME. The file specified in the IFILE or SPFILE entry contains additional initialization parameters.

If you have a password file that resides within the old environment's ORACLE_HOME, then move or copy the password file to the new Oracle Database 10g ORACLE_HOME.

The name and location of the password file are operating system-specific.
On UNIX platforms, the default password file is ORACLE_HOME/dbs/orapwsid.
On Windows operating systems, the default password file is
ORACLE_HOME\database\pwdsid.ora. In both cases, sid is your Oracle instance ID.

If you are upgrading a cluster database and your initdb_name.ora file resides within the old environment's ORACLE_HOME, then move or copy the initdb_name.ora file to the new ORACLE_HOME.

Note:
If you are upgrading a cluster database, then perform this step on all nodes
in which this cluster database has instances configured.

Step 26:

Update the oratab entry to set the new ORACLE_HOME and disable automatic startup:

SID:ORACLE_HOME:N

Step 27:

Update the environment variables such as ORACLE_HOME and PATH:

$ . oraenv

Step 28:


Make sure the following environment variables point to the new Release directories:
- ORACLE_HOME
- PATH
- ORA_NLS10
- ORACLE_BASE
- LD_LIBRARY_PATH
- LD_LIBRARY_PATH_64 (Solaris only)
- LIBPATH (AIX only)
- SHLIB_PATH (HPUX only)
- ORACLE_PATH

$ env | grep ORACLE_HOME
$ env | grep PATH
$ env | grep ORA_NLS10
$ env | grep ORACLE_BASE
$ env | grep LD_LIBRARY_PATH
$ env | grep ORACLE_PATH

AIX:
$ env | grep LIBPATH

HP-UX:
$ env | grep SHLIB_PATH

Note that the ORA_NLS10 environment variable replaces the ORA_NLS33 environment variable, so you may need to unset ORA_NLS33 and set ORA_NLS10.
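
For example, assuming the default NLS data directory under the new home:

$ unset ORA_NLS33
$ export ORA_NLS10=$ORACLE_HOME/nls/data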

Step 29:

Start up the database in upgrade mode:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
Use Startup with the UPGRADE option:
SQL> startup upgrade

Step 30:

Create a SYSAUX tablespace. In Oracle Database 10g, the SYSAUX tablespace is used to consolidate data from a number of tablespaces that were separate in previous releases.

The SYSAUX tablespace must be created with the following mandatory attributes:

- ONLINE
- PERMANENT
- READ WRITE
- EXTENT MANAGEMENT LOCAL
- SEGMENT SPACE MANAGEMENT AUTO

The Upgrade Information Tool (utlu102i.sql, run in Step 2) provides an estimate of the minimum required size for the SYSAUX tablespace in its SYSAUX Tablespace section.

The following SQL statement would create a 500 MB SYSAUX tablespace for the database:

SQL> CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

Step 31:

If the table XDB.MIGR9202STATUS exists in the database, drop it before upgrading the database (to avoid a known upgrade issue).

Step 32:

Spool the output so you can take a look at possible errors after the upgrade:

SQL> spool upgrade.log
SQL> @catupgrd.sql

The catupgrd.sql script determines which upgrade scripts need to be run and then runs each necessary script. You must run the script in the new release 10.2 environment.

The upgrade script creates and alters certain data dictionary tables. It also upgrades and configures the following database components in the new release 10.2 database (if the components were installed in the database before the upgrade).

Oracle Database Catalog Views
Oracle Database Packages and Types
JServer JAVA Virtual Machine
Oracle Database Java Packages
Oracle XDK
Oracle Real Application Clusters
Oracle Workspace Manager
Oracle interMedia
Oracle XML Database
OLAP Analytic Workspace
Oracle OLAP API
OLAP Catalog
Oracle Text
Spatial
Oracle Data Mining
Oracle Label Security
Messaging Gateway
Expression Filter
Oracle Enterprise Manager Repository

Turn off the spooling of script results to the log file:

SQL> SPOOL OFF

Then, check the spool file and verify that the packages and procedures compiled successfully. You named the spool file earlier in this step; the suggested name was upgrade.log. Correct any problems you find in this file and rerun the appropriate upgrade script if necessary. You can rerun any of the scripts described in this note as many times as necessary.
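
A quick way to scan the spool file for problems is to search it for ORA- and PLS- errors, for example:

$ grep -i 'ora-' upgrade.log
$ grep -i 'pls-' upgrade.log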

Step 33:


Run utlu102s.sql, specifying the TEXT option:

SQL> @utlu102s.sql TEXT

This is the Post-upgrade Status Tool; it displays the status of the database components in the upgraded database. The output is similar to the following:


Oracle Database 10.2 Upgrade Status Utility 04-20-2005 05:18:40

Component Status Version HH:MM:SS
Oracle Database Server VALID 10.2.0.1.0 00:11:37
JServer JAVA Virtual Machine VALID 10.2.0.1.0 00:02:47
Oracle XDK VALID 10.2.0.1.0 00:02:15
Oracle Database Java Packages VALID 10.2.0.1.0 00:00:48
Oracle Text VALID 10.2.0.1.0 00:00:28
Oracle XML Database VALID 10.2.0.1.0 00:01:27
Oracle Workspace Manager VALID 10.2.0.1.0 00:00:35
Oracle Data Mining VALID 10.2.0.1.0 00:15:56
Messaging Gateway VALID 10.2.0.1.0 00:00:11
OLAP Analytic Workspace VALID 10.2.0.1.0 00:00:28
OLAP Catalog VALID 10.2.0.1.0 00:00:59
Oracle OLAP API VALID 10.2.0.1.0 00:00:53
Oracle interMedia VALID 10.2.0.1.0 00:08:03
Spatial VALID 10.2.0.1.0 00:05:37
Oracle Ultra Search VALID 10.2.0.1.0 00:00:46
Oracle Label Security VALID 10.2.0.1.0 00:00:14
Oracle Expression Filter VALID 10.2.0.1.0 00:00:16
Oracle Enterprise Manager VALID 10.2.0.1.0 00:00:58

Note - in RAC environments, this script may suggest that the status of the RAC component is INVALID when in fact it is VALID (as shown in the output from DBA_REGISTRY; see the query below).
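
The component status can be cross-checked directly against the registry with a query like:

SQL> select comp_id, comp_name, version, status from dba_registry;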

Step 34:


Restart the database:
SQL> shutdown immediate    (do not use SHUTDOWN ABORT)
SQL> startup restrict

Executing this clean shutdown flushes all caches, clears buffers and performs other database housekeeping tasks, which is needed if you want to upgrade specific components.

Step 35:

Run olstrig.sql to re-create DML triggers on tables with Oracle Label Security policies.
This step is only necessary if Oracle Label Security is in your database.
(Check from Step 33).

SQL> @olstrig.sql

Step 36:

Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.

SQL> @utlrp.sql

Verify that all expected packages and classes are valid:

If there are still objects which are not valid after running the script run the following:
spool invalid_post.lst
Select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status
from
dba_objects where status <>'VALID';
spool off


Now compare the invalid objects in the file 'invalid_post.lst' with the invalid objects in the file 'invalid_pre.lst' you created in Step 8.
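
On UNIX platforms a simple way to compare the two lists is:

$ diff invalid_pre.lst invalid_post.lst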

NOTE: If you have upgraded from version 9.2 to version 10.2 and find that the following views are invalid, the views can be safely ignored (or dropped):

SYS.V_$KQRPD
SYS.V_$KQRSD
SYS.GV_$KQRPD
SYS.GV_$KQRSD

After Upgrading a Database

Step 37:

Shutdown the database and startup the database.

$ sqlplus '/as sysdba'
SQL> shutdown
SQL> startup restrict

Step 38:

Complete Step 38 only if you upgraded your database from release 8.1.7.
Otherwise skip to Step 40.

A) IF you are NOT using N-type columns for *user* data:

select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS where
DATA_TYPE in ('NCHAR','NVARCHAR2', 'NCLOB') and OWNER not in
('SYS','SYSTEM');
did not return rows in Step 6 of this note.

then:
$ sqlplus '/as sysdba'
SQL> shutdown immediate
and go to step 40.

B) IF your version 8 NLS_NCHAR_CHARACTERSET was UTF8:

you can look up your previous NLS_NCHAR_CHARACTERSET using this select:
select * from nls_database_parameters where parameter ='NLS_SAVED_NCHAR_CS';

then:

$ sqlplus '/as sysdba'
SQL> shutdown immediate
and go to step 40.

C) IF you are using N-type columns for *user* data *AND* your previous NLS_NCHAR_CHARACTERSET was in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

then the N-type columns *data* need to be converted to AL16UTF16:

To upgrade user tables with N-type columns to AL16UTF16 run the script utlnchar.sql:

$ sqlplus '/as sysdba'
SQL> @utlnchar.sql
SQL> shutdown immediate;
go to step 40.

D) IF you are using N-type columns for *user* data *AND * your previous NLS_NCHAR_CHARACTERSET was *NOT* in the following list:

JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED

then import the data exported in Step 6 of this note.
The recommended NLS_LANG during import is simply the NLS_CHARACTERSET,
not the NLS_NCHAR_CHARACTERSET

After the import:
$ sqlplus '/as sysdba'
SQL> shutdown immediate;
go to step 40.

Step 39:


If your database has TIMESTAMP WITH TIME ZONE data, you must update the data so that it is converted and stored based on the new time zone rules that come with the upgrade (see Step 5).

If you used the export utility to export a copy of the affected tables, you should now use the import utility to import your data from these tables back into your database. The import utility will update the timestamp data as it imports.

If you used the manual script method, you will need to update the affected timestamp data based on your backed up table. For example, if you previously backed up your table, you need to run an update statement similar to the one below to update your timestamp data.

UPDATE tztab t SET t.y =
(SELECT to_timestamp_tz(t1.y,'YYYY-MM-DD HH24.MI.SSXFF TZR')
FROM tztab_back t1
WHERE t.x=t1.x);

Step 40:

Now edit the init.ora:
- If you changed the value of NLS_LENGTH_SEMANTICS prior to the upgrade, put the
value back to CHAR.
- If you changed the CLUSTER_DATABASE parameter prior to the upgrade, set it back to TRUE.

Step 41:


Startup the database:
SQL> startup

Create a server parameter file from the initialization parameter file:
SQL> create spfile from pfile;

This will create an spfile as a copy of the init.ora file located in the
$ORACLE_HOME/dbs directory.

Step 42:

Modify the listener.ora file:
For the upgraded instance(s) modify the ORACLE_HOME parameter
to point to the new ORACLE_HOME, as in the example below.
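
An illustrative SID_LIST entry after the change might look like this (the SID and path are examples only):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ORCL)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
    )
  )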

Step 43:

Start the listener
$ lsnrctl
LSNRCTL> start

Step 44:

Enable cron and batch jobs

Step 45:

Change oratab entry to use automatic startup
SID:ORACLE_HOME:Y

Step 46:


Upgrade the Oracle Cluster Registry (OCR) Configuration.
If you are using Oracle Cluster Services, then you must upgrade the Oracle Cluster Registry (OCR) keys for the database.

* Use srvconfig from the 10g ORACLE_HOME. For example:

% srvconfig -upgrade -dbname db_name -orahome pre-10g_Oracle_home

Useful Hints

** Upgrading With Read-Only and Offline Tablespaces

The Oracle database can read file headers created prior to Oracle 10g, so you do not need to do anything to them during the upgrade. The only exception is if you want to transport tablespaces created prior to Oracle 10g to another platform; in that case, the file headers must be made read-write at some point before the transport. However, there are no special actions required on them during the upgrade.

The file headers of offline datafiles are updated later when they are brought online, and the file headers of read-only tablespaces are updated if and when they are made read-write sometime after the upgrade. In any other circumstance, read-only tablespaces never have to be made read-write.

It is a good idea to OFFLINE NORMAL all tablespaces except for SYSTEM and those containing rollback/UNDO tablespace prior to migration. This way if migration fails only the SYSTEM and rollback datafiles need to be restored rather than the entire database.

Note: You must take the tablespaces OFFLINE, as the migration does not allow OFFLINE files in an ONLINE tablespace.
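
For example, a non-SYSTEM application tablespace can be taken offline before the migration and brought back online afterwards (USERS is only an illustration):

SQL> alter tablespace users offline normal;
  ... perform the upgrade ...
SQL> alter tablespace users online;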

** Converting Databases to 64-bit Oracle Database Software

If you are installing 64-bit Oracle Database 10g software but were previously using a 32-bit Oracle Database installation, then the databases will automatically be converted to 64-bit during the upgrade to Oracle Database 10g except when upgrading from Release 1 (10.1) to Release 2 (10.2).

The process is not automatic for the release 1 to release 2 upgrade, but is automatic for all other upgrades. This is because the utlip.sql script is not run during the release 1 to release 2 upgrade to invalidate all PL/SQL objects. You must run the utlip.sql script yourself as the last step in the release 10.1 environment, before upgrading to release 10.2.
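
A minimal sketch of that last step in the old 10.1 environment, assuming the standard rdbms/admin location of the script:

$ sqlplus '/as sysdba'
SQL> @$ORACLE_HOME/rdbms/admin/utlip.sql
SQL> shutdown immediate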

** If an error occurs while executing catupgrd.sql

If an error occurs while running the catupgrd.sql script, then once the problem is fixed you can simply rerun catupgrd.sql to finish and complete the upgrade process.

Appendix A: Initialization Parameters Obsolete in 10g

ENQUEUE_RESOURCES
DBLINK_ENCRYPT_LOGIN
HASH_JOIN_ENABLED
LOG_PARALLELISM
MAX_ROLLBACK_SEGMENTS
MTS_CIRCUITS
MTS_DISPATCHERS
MTS_LISTENER_ADDRESS
MTS_MAX_DISPATCHERS
MTS_MAX_SERVERS
MTS_MULTIPLE_LISTENERS
MTS_SERVERS
MTS_SERVICE
MTS_SESSIONS
OPTIMIZER_MAX_PERMUTATIONS
ORACLE_TRACE_COLLECTION_NAME
ORACLE_TRACE_COLLECTION_PATH
ORACLE_TRACE_COLLECTION_SIZE
ORACLE_TRACE_ENABLE
ORACLE_TRACE_FACILITY_NAME
ORACLE_TRACE_FACILITY_PATH
PARTITION_VIEW_ENABLED
PLSQL_NATIVE_C_COMPILER
PLSQL_NATIVE_LINKER
PLSQL_NATIVE_MAKE_FILE_NAME
PLSQL_NATIVE_MAKE_UTILITY
ROW_LOCKING
SERIALIZABLE
TRANSACTION_AUDITING
UNDO_SUPPRESS_ERRORS

Appendix B: Initialization Parameters Deprecated in 10g

LOGMNR_MAX_PERSISTENT_SESSIONS
MAX_COMMIT_PROPAGATION_DELAY
REMOTE_ARCHIVE_ENABLE
SERIAL_REUSE
SQL_TRACE
BUFFER_POOL_KEEP (replaced by DB_KEEP_CACHE_SIZE)
BUFFER_POOL_RECYCLE (replaced by DB_RECYCLE_CACHE_SIZE)
GLOBAL_CONTEXT_POOL_SIZE
LOCK_NAME_SPACE
LOG_ARCHIVE_START
MAX_ENABLED_ROLES
PARALLEL_AUTOMATIC_TUNING
PLSQL_COMPILER_FLAGS (replaced by PLSQL_CODE_TYPE and PLSQL_DEBUG)

Known issues

1) (SR 4697649.992) While doing an upgrade from 9iR2 to 10.2.0.X, running the utlu102i.sql script as directed in Step 2 may report that streams_pool_size=50331648 should be added to the init.ora file; when the parameter is added in the 9iR2 instance, however, Oracle reports streams_pool_size as an invalid parameter.

STREAMS_POOL_SIZE was introduced in release 10gR1. This message may be ignored for database version 9iR2 or lower.

2) A customer reported that with shared_pool_size kept at 150 MB, catmeta.sql fails with insufficient shared memory during the processing of the view KU$_PHFTABLE_VI.

Please set shared_pool_size to 200M.

3) During the upgrade the following error was encountered:
create or replace
*
ERROR at line 1:
ORA-06553: PLS-213: package STANDARD not accessible.
ORA-00955: name is already used by an existing object

Please make sure the following init parameters are set as below in the spfile/init file (or commented out so they take their default values) at the time of upgrading the database.

PLSQL_V2_COMPATIBILITY = FALSE
PLSQL_CODE_TYPE = INTERPRETED # Only applicable to 10gR1
PLSQL_NATIVE_LIBRARY_DIR = ""
PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT = 0

Refer to:
Note 170282.1 PLSQL_V2_COMPATIBILITY=TRUE causes STANDARD and DBMS_STANDARD to Error at Compile



Always disconnect from the session which issued the STARTUP and connect as a fresh session before doing any further SQL. For example, on upgrade to 10.2, start the instance with the UPGRADE option, exit SQL*Plus, reconnect with a fresh SQL*Plus session as SYSDBA, and then run the upgrade scripts.
