
Wednesday, January 28, 2009

TUNING - ORACLE DATABASE

Memory Tuning
The total available memory on a system should be configured so that all components of the system function at optimum levels. The following rule-of-thumb breakdown helps allocate memory among the various components of a system with an Oracle back-end.

SYSTEM COMPONENT                        ALLOCATED % OF MEMORY
Oracle SGA Components                   ~50%
Operating System + Related Components   ~15%
User Memory                             ~35%

The following is a rule-of-thumb breakdown of the ~50% of memory allocated to the Oracle SGA. These are good starting numbers that will likely require fine-tuning once the nature and access patterns of the application are determined.

ORACLE SGA COMPONENT    ALLOCATED % OF MEMORY
Database Buffer Cache   ~80%
Shared Pool Area        ~12%
Fixed Size + Misc       ~1%
Redo Log Buffer         ~0.1%


The following example illustrates the above guidelines. It assumes a system configured with 2 GB of memory and an average of 100 concurrent sessions at any given time. The application requires response times within a few seconds and is mainly transactional, but it also supports batch reports at regular intervals.

SYSTEM COMPONENT                        ALLOCATED MEMORY (MB)
Oracle SGA Components                   ~1024
Operating System + Related Components   ~306
User Memory                             ~694

In the breakdown above, approximately 694 MB of memory is available for the Program Global Areas (PGAs) of all Oracle server processes. Assuming 100 concurrent sessions, average memory consumption per PGA should not exceed ~7 MB. Note that SORT_AREA_SIZE is part of the PGA.

ORACLE SGA COMPONENT    ALLOCATED MEMORY (MB)
Database Buffer Cache   ~800
Shared Pool Area        ~128 - 188
Fixed Size + Misc       ~8
Redo Log Buffer         ~1 (average size 512K)
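The arithmetic behind these tables is simple percentage math. A minimal sketch follows (Python is used purely for illustration; the inputs are the rule-of-thumb percentages above, so results may differ by a few MB from the rounded table values):

```python
total_mb = 2048  # the 2 GB system from the example

# Top-level split, per the rule-of-thumb table
sga_mb = total_mb * 0.50    # ~1024 MB for the Oracle SGA
os_mb = total_mb * 0.15     # ~307 MB for the OS and related components
user_mb = total_mb * 0.35   # ~717 MB for user (PGA) memory

# SGA breakdown
buffer_cache_mb = sga_mb * 0.80   # ~819 MB database buffer cache
shared_pool_mb = sga_mb * 0.12    # ~123 MB shared pool

# Per-session PGA budget for 100 concurrent sessions
pga_per_session_mb = user_mb / 100
print(round(pga_per_session_mb, 1))  # ~7 MB per session
```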


  • Another Example
Let's assume a high-water mark of 100 connected sessions to our Oracle database server. We multiply 100 by the total size of each PGA memory region to determine the maximum RAM set aside for them, and from that the maximum size of our SGA.
Reserve roughly 20 percent of total RAM for the operating system on MS Windows, or 10 percent on UNIX.

Here we can see the values for sort_area_size and hash_area_size for our Oracle database. To compute the size of each PGA RAM region, we can write a quick data dictionary query against the v$parameter view:
set pages 999;
column pga_size format 999,999,999
select
2048576 + a.value + b.value pga_size   -- 2048576 = assumed fixed per-session overhead
from v$parameter a, v$parameter b
where a.name = 'sort_area_size'
and b.name = 'hash_area_size';


PGA_SIZE
------------
3,621,440

The output from this data dictionary query shows that every connected Oracle session will use about 3.6 MB of RAM for its PGA. Multiplying the number of connected users by the PGA demand per user tells us exactly how much RAM to reserve for connected sessions.
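The arithmetic can be checked with a quick sketch (Python for illustration; the sort and hash area values are hypothetical settings chosen to reproduce the query output above):

```python
fixed_overhead = 2_048_576   # constant used in the query above (per-session overhead)
sort_area_size = 524_288     # hypothetical sort_area_size (512 KB)
hash_area_size = 1_048_576   # hypothetical hash_area_size (1 MB)

pga_per_session = fixed_overhead + sort_area_size + hash_area_size
print(pga_per_session)       # 3621440 bytes, ~3.6 MB per session

# Total PGA demand for 100 connected sessions, in (decimal) MB
sessions = 100
total_pga_mb = pga_per_session * sessions / 1_000_000
print(round(total_pga_mb))   # ~362 MB
```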

Total RAM on Windows server              1250 MB
Less:
RAM reserved for Windows (20 percent)     250 MB
Total PGA regions for 100 users           362 MB
                                       ----------
                                          612 MB

Hence, we would adjust the RAM given to the data buffers so that the SGA is smaller than 638 MB (that is, 1250 MB - 612 MB). With any SGA size greater than 638 MB, the server will start paging RAM, adversely affecting the performance of the entire server. The final task is to size the Oracle SGA so that the total memory involved does not exceed 638 MB.
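Applying the 20 percent Windows reserve stated earlier, the budget arithmetic can be sketched as follows (Python for illustration; note that 20 percent of 1250 MB is 250 MB):

```python
total_ram_mb = 1250
windows_reserve_mb = total_ram_mb * 0.20   # 20 percent rule for MS Windows -> 250 MB
pga_total_mb = 100 * 3.62                  # 100 sessions x ~3.62 MB PGA each -> 362 MB

sga_budget_mb = total_ram_mb - windows_reserve_mb - pga_total_mb
print(round(sga_budget_mb))                # MB left for the SGA
```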

Top Oracle init.ora Parameters

  • BUFFER_POOL_KEEP - How many buffers to have for pinned objects that you need
  • BUFFER_POOL_RECYCLE - How many buffers to have for new stuff that will get pushed out
  • CHECKPOINT_PROCESS = True (starts CKPT process for better performance at checkpoints)
  • CLOSE_CACHED_OPEN_CURSORS = Indicates whether cursors must be closed immediately after the commit. If you are using a lot of cursors or Developer/2000, use FALSE
  • COMPATIBLE - Set for correct version and features
  • CPU_COUNT = number of CPUs on your system.
  • DB_BLOCK_BUFFERS = Determines the number of blocks in the database buffer cache in the SGA. Increase it if the buffer cache hit ratio falls short of the goal below.
    *Determine if DB_BLOCK_BUFFERS is high enough (Goal > 98% for web systems, 95% for others)
      select (1 - (sum(decode(name, 'physical reads', value, 0)) /
      (sum(decode(name, 'db block gets', value, 0)) +
      sum(decode(name, 'consistent gets', value, 0))))) * 100
      "Read Hit Ratio"
      from v$sysstat;


    Per Buffer
    Another way to see this ratio, as of 8.1, is per pool from the V$BUFFER_POOL_STATISTICS view. This does not include direct physical reads, so per pool we would have:
      select name,(1-(physical_reads/(db_block_gets+consistent_gets)))*100 cache_hit_ratio
      from v$buffer_pool_statistics;

    NAME                 CACHE_HIT_RATIO
    -------------------- ---------------
    KEEP                           77.42
    RECYCLE                       100.00
    DEFAULT                        50.91

    Logically, we don't care about the hit ratio in the RECYCLE pool, since it holds buffers we expect to use only once before they are flushed out. The KEEP and DEFAULT pools still have a much smaller hit ratio than the guidelines call for, so if we followed the guidelines we would add more buffers.

    A Different Approach
    We can ask the question the other way around: instead of 'Do we need more?' we can ask 'Do we have more than we need?' No matter what the hit ratio is, if we are not using all of the buffers already allocated, there is no advantage in allocating more. In fact, doing so could slow us down by forcing more swapping at the OS level. So we can simply check whether there are free buffers:

      select count(1) from v$bh where status='free';

    COUNT(1)
    ----------
    984

    This is from the same instance in which I have the 56 percent hit ratio. Here I see that increasing the number of buffers will not impact the hit ratio at all since I have free buffers right now. But I might want to shift my allocation of buffers between the pools. I want the highest hit ratio in my keep pool since I know that I am going to be reusing this data. Ideally, I have one buffer free all the time. This would tell me that I have not over-allocated and that I have exactly what is needed. At the same time I will want to check my paging on the server. I might make the instance faster by decreasing the size of my SGA. Of course, there are other factors in memory consumption and you will want to take all into account.


  • DB_BLOCK_SIZE - Size of the database blocks (db_block_size x db_block_buffers = bytes for data). Set at database creation. Generally 8K; for data warehouses, 16K
  • DB_FILE_MULTIBLOCK_READ_COUNT = Controls the number of data blocks read per read request during a full table scan. If you are using LVM or striping, set it so that DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT is a multiple of the LVM stripe size. If you are not using LVM or striping, DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT should equal the maximum operating system read buffer. On many Unix and Windows systems this is 64 KB. In any case, DB_FILE_MULTIBLOCK_READ_COUNT cannot be larger than DB_BLOCK_BUFFERS / 4.
    The maximum read buffer is generally higher on raw file systems. It varies from 64 KB (on AIX) to 128 KB (on Solaris) to 1 MB (HP-UX). On a UNIX file system, it is usually only possible to read one buffer per I/O, usually 8KB. On 32-bit Windows, the buffer is 256KB.
    This parameter will significantly increase the performance of a reorganization if properly tuned. For example, suppose the OS read buffer is 64 KB, the database block size is 4 KB and DB_FILE_MULTIBLOCK_READ_COUNT is set to eight. During a full table scan, each I/O operation will read only 32 KB. If DB_FILE_MULTIBLOCK_READ_COUNT is reset to 16, performance will almost double because twice as much data can be read by each I/O operation.
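The example's I/O sizes can be sketched as follows (Python for illustration, using the assumed 64 KB OS read buffer and 4 KB block size from the text):

```python
os_read_buffer_kb = 64    # assumed maximum OS read buffer
db_block_size_kb = 4      # database block size from the example

# With DB_FILE_MULTIBLOCK_READ_COUNT = 8, each full-scan I/O reads only 32 KB
io_size_kb = db_block_size_kb * 8
print(io_size_kb)         # 32 -- half the OS read buffer is wasted

# Raising the count to os_read_buffer / block_size fills the buffer on each I/O
better_count = os_read_buffer_kb // db_block_size_kb
print(better_count, db_block_size_kb * better_count)  # 16 64
```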
  • DB_WRITERS = In Oracle 8.0 and up this parameter has been desupported and replaced by two other parameters, DB_WRITER_PROCESSES and DBWR_IO_SLAVES.
  • DB_BLOCK_LRU_LATCHES, DBWR_IO_SLAVES and DB_WRITER_PROCESSES
    Is the DB_WRITER_PROCESSES parameter supported on Windows NT/Windows 2000?
    The Oracle8i documentation and [BUG:925955] incorrectly state that this parameter is not supported on Windows NT/2000.
    Multiple DBWR processes are mainly used to simulate asynchronous I/O when the operating system does not support it. Since Windows NT and Windows 2000 use asynchronous I/O by default, using multiple DBWR processes may not necessarily improve performance. Increasing this parameter is also likely to have minimal effect on single-CPU systems, and could in fact reduce performance on systems where the CPUs are already overburdened. In cases where the main performance bottleneck is a single DBWR process that cannot keep up with the workload, increasing the value of DB_WRITER_PROCESSES may improve performance.
    When increasing DB_WRITER_PROCESSES it may also be necessary to increase the DB_BLOCK_LRU_LATCHES parameter, as each DBWR process requires an LRU latch.

Reference for setting DB_BLOCK_LRU_LATCHES parameter
Default value: 1/2 the number of CPUs
Maximum value: at least 1, up to about 6 * max(#CPUs, #processor groups)
1) Oracle has found that an optimal value is 2 x #CPUs and recommends testing at that level.
2) Setting this parameter to a multiple of #CPUs is important so Oracle can properly allocate and utilize working sets.
3) This value is hard-coded in 9i.
**IMPORTANT**
Increasing this parameter beyond 2 x #CPUs may have a negative impact on the system.

FREQUENTLY ASKED QUESTIONS
You have just upgraded to 8.0 or 8.1 and have found that there are 2 new parameters regarding DBWR. You are wondering what the differences are and which one you should use.

DBWR_IO_SLAVES
In Oracle7, the multiple DBWR processes were simple slave processes, i.e., unable to perform async I/O calls. In Oracle8, true asynchronous I/O is provided to the slave processes, if available. This feature is implemented via the init.ora parameter dbwr_io_slaves. With dbwr_io_slaves, there is still a master DBWR process and its slave processes. This feature is very similar to db_writers in Oracle7, except the I/O slaves are now capable of asynchronous I/O on systems that provide native async I/O, allowing for much better throughput as slaves are not blocked after the I/O call. I/O slaves for DBWR are allocated immediately following database open, when the first I/O request is made.

DB_WRITER_PROCESSES
Multiple database writers are implemented via the init.ora parameter db_writer_processes. This feature was enabled in Oracle 8.0.4 and provides true database writers, i.e., no master-slave relationship. With Oracle8 db_writer_processes, each writer process is assigned to an LRU latch set. Thus, it is recommended to set db_writer_processes equal to the number of LRU latches (db_block_lru_latches), without exceeding the number of CPUs on the system. For example, if db_writer_processes is set to four and db_block_lru_latches to four, each writer process will manage its corresponding latch set.

Things to know and watch out for....
1. Multiple DBWRs and DBWR IO slaves cannot coexist. If both are enabled, then the following error message is produced: ksdwra("Cannot start multiple dbwrs when using I/O slaves.\n"); Moreover, if both parameters are enabled, dbwr_io_slaves will take precedence.
2. The number of DBWRs cannot exceed the number of db_block_lru_latches. If it does, then the number of DBWRs will be minimized to equal the number of db_block_lru_latches and the following message is produced in the alert.log during startup: ("Cannot start more dbwrs than db_block_lru_latches.\n"); However, the number of lru latches can exceed the number of DBWRs.
3. dbwr_io_slaves are not restricted to the db_block_lru_latches; i.e., dbwr_io_slaves >= db_block_lru_latches.

Should you use DB_WRITER_PROCESSES or DBWR_IO_SLAVES?
Although both implementations of DBWR processes may be beneficial, which option to use depends on the following:
1) the amount of write activity;
2) the number of CPUs (also indirectly related to the number of LRU latch sets);
3) the size of the buffer cache;
4) the availability of asynchronous I/O (from the OS).

There is NOT a definite answer to this question but here are some considerations to have when making your choice. Please note that it is recommended to try BOTH (not simultaneously) against your system to determine which best fits the environment.

-- If the buffer cache is very large (100,000 buffers and up) and the application is write intensive, then db_writer_processes may be beneficial. Note, the number of writer processes should not exceed the number of CPUs.

-- If the application is not very write intensive (or even a DSS system) and async I/O is available, then consider a single DBWR writer process; If async I/O is not available then use dbwr_io_slaves.

-- If the system is a uniprocessor (1 CPU), you may want to use dbwr_io_slaves.

Implementing dbwr_io_slaves or db_writer_processes comes with some overhead cost. Multiple writer processes and I/O slaves are advanced features meant for high I/O throughput; implement them only if the database environment requires such throughput. In some cases, it may be acceptable to disable I/O slaves and run with a single DBWR process.

Other Ways to Tune DBWR Processes
Reducing buffer operations directly benefits DBWR and also helps overall database performance. Buffer operations can be reduced by:
1) using dedicated temporary tablespaces
2) using direct sort reads
3) using direct-path SQL*Loader loads
4) performing direct exports.

In addition, keeping a high buffer cache hit ratio will be extremely beneficial not only to the response time of applications, but the DBWR as well.

  • DML_LOCKS = Concurrent Users * 10
  • JOB_QUEUE_PROCESSES - To use DBMS_JOB
  • LOG_BUFFER = Size in bytes of the redo log buffer. Increasing it can improve I/O efficiency when transactions are long and/or numerous. Generally 512K; sizes over 1 MB rarely help.
  • LOG_CHECKPOINT_INTERVAL = Set this larger than the redo log file size to force checkpoints to occur only at log file switches.
  • LOG_ENTRY_PREBUILD_THRESHOLD = 2048. On multiple-CPU machines only.
  • LOG_SIMULTANEOUS_COPIES = 2 * cpu_count
  • LOG_SMALL_ENTRY_MAX_SIZE = 50 . On multiple CPU machines only.
  • OPEN_CURSORS = Give it a big value, at least 100
  • OPTIMIZER_FEATURES_ENABLED - Don’t miss out on features
  • OPTIMIZER_INDEX_COST_ADJ - Force index use
  • OPTIMIZER_MODE - Choose, Rule, First_Rows or All_Rows
  • PARALLEL_MAX_SERVERS = The maximum number of parallel query server processes allowed on the instance. A good starting value is 2 * max_degree * number_of_concurrent_users.
    If the value of the "Servers Busy" statistic is high, increase PARALLEL_MAX_SERVERS
  • PARALLEL_MIN_SERVERS = The number of query server processes started when the instance starts. There are system resources involved in starting a query server, so having servers already started and waiting for requests accelerates processing, eliminating the performance penalty of frequent query server startups and shutdowns
  • PRE_PAGE_SGA = true
  • PROCESSES = Increase this parameter, default is 50

  • ROLLBACK_SEGMENTS = The general rule is to put # of concurrent users / 4. Create at least 2 tablespaces for rollbacks.
  • SHARED_POOL_RESERVED_SIZE - Memory held for future big PL/SQL or ORA-error
  • SHARED_POOL_SIZE - Memory allocated for the data dictionary and for SQL and PL/SQL reusable objects (library cache and data dictionary cache). Increase it if the cache hit ratios below fall short of their goals. To check free memory in the shared pool:
    select pool, name, bytes/1024/1024 "Size in MB"
    from v$sgastat
    where name='free memory';
    You should see output similar to the following:
    NAME          Size in MB
    free memory   39.6002884
    This tells you there is about 39 MB of free memory in the shared pool, meaning the shared pool is underutilized. If the shared pool were 70 MB, over half of it would be unused; that memory could be allocated elsewhere.

    *DATA DICTIONARY cache hit ratio (Goal > 90%; if lower, increase SHARED_POOL_SIZE)
    Contains:
    Preparsed database procedures
    Preparsed database triggers
    Recently parsed SQL & PL/SQL requests
    This is the memory allocated for the library and data dictionary cache
    select sum(gets) Gets, sum(getmisses) Misses,
    (1 - (sum(getmisses) / (sum(gets) +
    sum(getmisses))))*100 HitRatio
    from v$rowcache;

    * The SHARED_POOL_SIZE hit ratio (LIBRARY CACHE hit ratio) should exceed 99%
    column namespace heading "Library Object"
    column gets format 9,999,999 heading "Gets"
    column gethitratio format 999.99 heading "Get Hit%"
    column pins format 9,999,999 heading "Pins"
    column pinhitratio format 999.99 heading "Pin Hit%"
    column reloads format 99,999 heading "Reloads"
    column invalidations format 99,999 heading "Invalid"
    column db format a10
    set pages 58 lines 80
    select namespace, gets, gethitratio*100 gethitratio,
    pins, pinhitratio*100 pinhitratio, RELOADS, INVALIDATIONS
    from v$librarycache
    /

    If all Get Hit% values (gethitratio in the view) except for indexes are greater than 80-90 percent, this is the desired state; the value for indexes is low because of the few accesses of that type of object. Notice that the Pin Hit% should also be greater than 90% (except for indexes). The other goals of tuning this area are to reduce reloads to as small a value as possible (by proper sizing and pinning) and to reduce invalidations. Invalidations happen when, for one reason or another, an object becomes unusable.

    Guideline: In a system where there is no flushing, increase the shared pool size in 20% increments to reduce reloads and invalidations and to increase hit ratios.

    select sum(pins) Executions, sum(pinhits) Execution_Hits,
    ((sum(pinhits) / sum(pins)) * 100) phitrat,
    sum(reloads) Misses,
    ((sum(pins) / (sum(pins) + sum(reloads))) * 100) RELOAD_hitrat
    from v$librarycache;

    * How much memory is left for SHARED_POOL_SIZE
    col value for 999,999,999,999 heading "Shared Pool Size"
    col bytes for 999,999,999,999 heading "Free Bytes"
    select to_number(v$parameter.value) value, v$sgastat.bytes,
    (v$sgastat.bytes/v$parameter.value)*100 "Percent Free"
    from v$sgastat, v$parameter
    where v$sgastat.name = 'free memory'
    and v$parameter.name = 'shared_pool_size';

    A better query:
    select sum(ksmchsiz) Bytes, ksmchcls Status
    from SYS.x$ksmsp
    group by ksmchcls;

    If there is free memory then there is no need to increase this parameter.

    * Identifying objects reloaded into the SHARED POOL again and again
    select substr(owner,1,10) owner,substr(name,1,25) name, substr(type,1,15) type, loads, sharable_mem
    from v$db_object_cache
    -- where owner not in ('SYS','SYSTEM') and
    where loads > 1 and type in ('PACKAGE','PACKAGE BODY','FUNCTION','PROCEDURE')
    order by loads DESC;

    * Large Objects NOT 'pinned' in Shared Pool
    To determine which large PL/SQL objects are currently loaded in the shared pool but not marked 'kept' (not pinned), and may therefore be causing a problem, execute the following query:
    select name, sharable_mem
    from v$db_object_cache
    where sharable_mem > 10000
    and (type = 'PACKAGE' or type = 'PACKAGE BODY' or type = 'FUNCTION'
    or type = 'PROCEDURE')
    and kept = 'NO';

  • SORT_AREA_SIZE = The amount of memory reserved for sorts, in bytes. There should be few sorts (especially sorts to disk); if there are many, increase SORT_AREA_SIZE. To decide whether to increase it, use:

  • select name, value from v$sysstat where name like '%sort%';
  • SORT_AREA_RETAINED_SIZE = is the size that the SORT_AREA_SIZE is actually reduced to once the sort is complete. This parameter should be set less than or equal to SORT_AREA_SIZE. If we are going to make a big import or use several batch processes, increase it. Just use ALTER SESSION (for batch) or ALTER SYSTEM DEFERRED (for imports). Remember to put back to its original value. Sorts (memory) tells you the number of sorts done entirely in memory. Sorts (disk) indicates the number of sorts that required access to disk. The recommended setting for this parameter and SORT_AREA_SIZE is 65K-1MB.

  • SORT_DIRECT_WRITES = Setting SORT_DIRECT_WRITES to true allows Oracle to bypass the buffer cache for the writing of sort runs to the temporary tablespace. This can improve the performance by a factor of three or more. Be sure to also set SORT_WRITE_BUFFERS=8 and SORT_WRITE_BUFFER_SIZE=65536. SORT_DIRECT_WRITES, SORT_WRITE_BUFFERS and SORT_WRITE_BUFFER_SIZE are obsoleted in 8.1.3. The same considerations for SORT_AREA_SIZE apply to SORT_DIRECT_WRITES when using the parallel query option. Under Oracle8i, sorts always use direct writes and automatically configure the number and size of the direct write buffers.

INSTANCE TUNING
1) Library Cache Hit Ratio:
In the most basic terms, the library cache is a memory structure that holds the parsed (ie. already examined to determine syntax correctness, security privileges, execution plan, etc.) versions of SQL statements that have been executed at least once. As new SQL statements arrive, older SQL statements will be pushed from the memory structure to provide space for the new statements. If the older SQL statements need to be re-executed, they will now have to be re-parsed. Also, a SQL statement that is not exactly the same as an already parsed statement (including even capitalization) will be reparsed even though it may perform the exact same operation. Parsing is an expensive operation, so the objective is to make the memory structure large enough to hold enough parsed SQL statements to avoid a large percentage of re-parsing.
Target: 99% or greater.
Value: SELECT (1 - SUM(reloads)/SUM(pins)) FROM v$librarycache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes) in the INIT.ORA file.

2) Dictionary Cache Hit Ratio:
The dictionary cache is the memory structure that holds the most recently used contents of ORACLE's data dictionary, such as security privileges, table structures, column data types, etc. This data dictionary information is necessary for each and every parsing of a SQL statement. Recalling that memory is around 300 times faster than disk, it is needless to say that performance is improved by holding enough data dictionary information in memory to significantly minimize disk accesses.
Target: 90%
Value: SELECT (1 - SUM(getmisses)/SUM(gets)) FROM v$rowcache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes) in the INIT.ORA file.

3) Buffer Cache Hit Ratio:
The buffer cache is the memory structure that holds the most recently used blocks read from disk, whether table, index, or other segment type. As new data is read into the buffer cache, data that hasn't been recently used is pushed out. Again recalling that memory is approximately 300 times faster than disk, the objective is to hold enough data in memory to minimize disk accesses. Note that data read from tables through the use of indexes is held in the buffer cache much longer than data read via full-table scans.
Target: 90% (although some shops find 80% or even 70% acceptable)
Value:
SELECT value FROM v$sysstat WHERE name = 'consistent gets';
SELECT value FROM v$sysstat WHERE name = 'db block gets';
SELECT value FROM v$sysstat WHERE name = 'physical reads';
Buffer cache hit ratio = 1 - physical reads/(consistent gets + db block gets)
Correction: Increase the DB_BLOCK_BUFFERS parameter (in db blocks) in the INIT.ORA file.
Other notes:
- Compare the values for "table scans" and "table access by rowid" in the v$sysstat table to gain general insight into whether additional indexing is needed. Tuning specific applications via indexing will increase the "table access by rowid" value (ie. tables read through the use of indexes) and decrease the "table scans" values. This effect tends to improve the buffer cache hit ratio since a smaller volume of data is read into the buffer cache from disk, so less previously cached data is pushed out. (See the article on application tuning for more details regarding indexing.)
- A low buffer cache hit ratio can very quickly lead to an I/O bound situation, as more reads are required per period of time to provide the requested data. When the reads/time period exceed the workload supported by the disk subsystem, exponential performance degradations can occur. (Please see the section on Operating System tuning.)
- Since the buffer cache will typically be the largest memory structure allocated in the ORACLE instance, it is the structure most likely to contribute to O/S paging. If the buffer cache is sized such that the hit ratio is 90%, but excessive paging occurs at this setting, performance may be better if the buffer cache were sized to achieve an 85% hit ratio. Careful analysis is necessary to balance the buffer cache hit ratio with the O/S paging rate.
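The three statistics above combine into the ratio like this (a minimal sketch; the input values are hypothetical, standing in for the v$sysstat results):

```python
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    # 1 - physical reads / (consistent gets + db block gets), per the formula above
    return 1 - physical_reads / (consistent_gets + db_block_gets)

# hypothetical v$sysstat values: 10,000 physical reads out of 100,000 logical reads
print(buffer_cache_hit_ratio(10_000, 25_000, 75_000))  # 0.9
```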

4) Sort Area Hit Ratio:
Sorts that are too large to be performed in memory are written to disk. Once again, memory is about 300 times faster than disk, so for instances where a large volume of sorting occurs (such as decision support systems or data warehouses), sorting on disk can degrade performance. The objective, of course, is to allow a significant percentage of sorts to occur in memory.
Target: 90% (although many shops find 80% or less acceptable)
Value:
SELECT value FROM v$sysstat WHERE name = 'sorts (memory)';
SELECT value FROM v$sysstat WHERE name = 'sorts (disk)';
Sort area hit ratio = 1 - disk sorts/(memory sorts + disk sorts);

Correction: Increase the SORT_AREA_SIZE parameter (in bytes) in the INIT.ORA file.
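The same calculation as a sketch (hypothetical values standing in for the two v$sysstat queries above):

```python
def sort_area_hit_ratio(memory_sorts, disk_sorts):
    # 1 - disk sorts / (memory sorts + disk sorts), per the formula above
    return 1 - disk_sorts / (memory_sorts + disk_sorts)

# hypothetical values: 9,500 sorts done in memory, 500 sorts spilled to disk
print(sort_area_hit_ratio(9_500, 500))  # 0.95
```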

5) Redo Log Space Requests:
Redo logs (and archive logs, if the ORACLE instance runs in ARCHIVELOG mode) are transaction logs involving a variety of structures. The redo log buffer is a memory structure into which changes are recorded as they are applied to blocks in the buffer cache (including data, index, rollback segments, etc.). Committed changes are synchronously flushed to redo log file members on disk, while uncommitted changes are asynchronously written to redo log files. (This approach makes perfect sense on inspection: if an instance crash occurs, committed changes are already written to the redo logs on disk and are applied during instance recovery. Uncommitted changes in the redo log buffer not yet written to disk are lost, and any uncommitted changes that have been written to disk are rolled back during instance recovery.) A session performing an update and an immediate commit will not return until the committed change has been written to the redo log buffer and flushed to the redo log files on disk.

Redo log groups are written to in a round-robin manner. When the mirrored members of a redo log group become full, a log switch occurs, archiving one member of the redo log group (if ARCHIVELOG mode is TRUE) and then clearing the members of that group. Note that a checkpoint also occurs at least on each redo log switch.

In most basic form, the redo log buffer should be large enough that no waits for available space in the memory structure occur while changes are written to redo log files. The redo log file size should be large enough that the redo log buffer does not fill during a redo log switch. Finally, there should be enough redo log groups that the archiving and clearing of filled redo logs does not cause waits for redo log switches, which would cause the redo log buffer to fill. The inability to write changes to the redo log buffer because it is full is reported as redo log space requests in the v$sysstat table.
Target: 0
Value: SELECT value FROM v$sysstat WHERE name = 'redo log space requests';
Correction:
- Increase the LOG_BUFFER parameter (in bytes) in the INIT.ORA file.
- Increase the redo log size.
- Increase the number of redo log groups.
Other notes:
- The default configuration of small redo log size and two redo log groups is seldom sufficient. Between 4 and 10 groups typically yields adequate results, depending on the particular archive log destination (whether a single disk, RAID array, or tape). Size will be very dependent upon the specific application characteristics and throughput requirements, and can range from less than 10 Mb to 500 Mb or greater.
- Since redo log sizes and groups can be changed without a shutdown/restart of the instance, increasing the redo log size and number of groups is typically the best area to start tuning for reduction of redo log space requests. If increasing the redo log size and number of groups appears to have little impact on redo log space requests, then increase the LOG_BUFFER initialization parameter.

6) Redo Buffer Latch Miss Ratio:
One of the two types of memory structure locking mechanisms used by an ORACLE instance is the latch. A latch is a locking mechanism implemented entirely within the executable code of the instance (as opposed to an enqueue, see below). The latch mechanisms most likely to suffer from contention involve requests to write data into the redo log buffer. To serve the intended purpose, writes to the redo log buffer must be serialized (i.e. one process locks the buffer, writes to it, then unlocks it; a second process locks, writes, and unlocks; and so on, while other processes wait for their chance to acquire these same locks). There are four different groupings applicable to redo buffer latches: redo allocation latches and redo copy latches, each with immediate and willing-to-wait priorities. Redo allocation latches are acquired by small redo entries (entry size smaller than or equal to the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter) and utilize only a single CPU's resources for execution. Redo copy latches are requested by larger redo entries (entry size larger than LOG_SMALL_ENTRY_MAX_SIZE) and take advantage of multiple CPUs. Recall from above that committed changes are synchronously written to redo logs on disk: these entries require an immediate latch of the appropriate type. Uncommitted changes are asynchronously written to redo log files, so they attempt to acquire a willing-to-wait latch of the appropriate type. Below, each category of redo buffer latch is considered separately.
- Redo allocation immediate and willing-to-wait latches:
Target: 1% or less
Value (immediate):
SELECT a.immediate_misses/(a.immediate_gets + a.immediate_misses + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo allocation' AND b.latch# = a.latch#;
Value (willing-to-wait):
SELECT a.misses/(a.gets + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo allocation' AND b.latch# = a.latch#;
Correction: Decrease the LOG_SMALL_ENTRY_MAX_SIZE parameter in the INIT.ORA file.
Other notes:
- By making the max size for a redo allocation latch smaller, more redo log buffer writes qualify for a redo copy latch instead, thus better utilizing multiple CPU's for the redo log buffer writes. Even though memory structure manipulation times are measured in nanoseconds, a larger write still takes longer than a smaller write. If the size for remaining writes done via redo allocation latches is small enough, they can be completed with little or no redo allocation latch contention.
- On a single CPU node, all log buffer writes are done via redo allocation latches. If log buffer latches are a significant bottleneck, performance can benefit from additional CPU's (thus enabling redo copy latches) even if the CPU utilization is not an O/S level bottleneck.
- In the SELECT statements above, an extremely small value is added to the divisor to eliminate potential divide-by-zero errors.
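For a quick sanity check of these ratios outside the database, the same arithmetic can be sketched in a few lines of Python (the statistic values below are made up for illustration, not taken from a real v$latch):

```python
def latch_miss_ratio(gets, misses):
    """Willing-to-wait latch miss ratio, guarded against divide-by-zero
    the same way the SELECT above adds 0.000001 to the divisor."""
    return misses / (gets + 0.000001)

# Hypothetical v$latch figures for the 'redo allocation' latch:
ratio = latch_miss_ratio(gets=500_000, misses=2_000)
print(f"{ratio:.4f}")   # 0.0040, i.e. 0.4% -- under the 1% target
print(ratio <= 0.01)    # True: no corrective action needed
```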

- Redo copy immediate and willing-to-wait latches:
Target: 1% or less
Value (immediate):
SELECT a.immediate_misses/(a.immediate_gets + a.immediate_misses + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo copy' AND b.latch# = a.latch#;
Value (willing-to-wait):
SELECT a.misses/(a.gets + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo copy' AND b.latch# = a.latch#;
Correction: Increase the LOG_SIMULTANEOUS_COPIES parameter in the INIT.ORA file.
Other Notes:
- Essentially, this initialization parameter is the number of redo copy latches available. It defaults to the number of CPUs (assuming a multiple-CPU node). Oracle Corporation recommends setting it as large as two times the number of CPUs on the particular node, although quite a bit of experimentation may be required to adjust the value suitably for any particular instance's workload. Depending on CPU capability and utilization, it may be beneficial to set this initialization parameter smaller or larger than 2 × the number of CPUs.
- Recall that the assignment of log buffer writes to either redo allocation latches or redo copy latches is controlled by the maximum log buffer write size allowed for a redo allocation latch, as specified in the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter. Recall also that redo copy latches apply only to multiple-CPU hosts.

7) Enqueue Waits:
The second of the two types of memory structure locking mechanisms used by an ORACLE instance is the enqueue. As opposed to a latch, an enqueue is a lock implemented through an operating system call rather than entirely within the instance's executable code. Exactly which operations lock via enqueues is not made sufficiently clear in any Oracle documentation (or at least none that the author has seen), but the fact that enqueue waits degrade instance performance is reasonably clear. Luckily, tuning enqueues is very straightforward.
Target: 0
Value: SELECT value FROM v$sysstat WHERE name = 'enqueue waits';
Correction: Increase the ENQUEUE_RESOURCES parameter in the INIT.ORA file.

8) Checkpoint Contention:
A checkpoint is the process of flushing all changed data blocks (table, index, rollback segments, etc.) held in the buffer cache to their corresponding datafiles on disk. This process occurs during each redo log switch, each time the number of database blocks specified in the LOG_CHECKPOINT_INTERVAL initialization parameter is reached, and each time the number of seconds specified in LOG_CHECKPOINT_TIMEOUT elapses. (Checkpoints also occur during a NORMAL or IMMEDIATE SHUTDOWN, when a tablespace is placed in BACKUP mode, or when an ALTER SYSTEM CHECKPOINT is manually issued, but these occurrences are usually outside the scope of normal daytime operation.) Depending on the number of changed blocks in the buffer cache, a checkpoint can take considerable time to complete. Since this process is essentially done asynchronously, user sessions performing work will typically not have to wait for a checkpoint to complete. However, checkpoints can affect overall system performance, since they are fairly resource-intensive operations, even though they occur in the background. Checkpoints are, of course, absolutely necessary, but it is quite possible for one checkpoint to begin (because of LOG_CHECKPOINT_INTERVAL or LOG_CHECKPOINT_TIMEOUT settings) and partially complete, then be rolled back because another checkpoint was issued (perhaps because of a redo log switch). It is desirable to avoid this checkpoint contention because it wastes considerable resources that could be used by other processes. Checkpointing statistics are readily available in the v$sysstat table, and the contention is fairly simple to determine.
Target: 1 or less
Value:
SELECT value FROM v$sysstat WHERE name = 'background checkpoints started';
SELECT value FROM v$sysstat WHERE name = 'background checkpoints completed';
Checkpoints rolled-back = checkpoints started - checkpoints completed;
Correction:
- Increase the LOG_CHECKPOINT_TIMEOUT parameter (in seconds) in the INIT.ORA file, or set it to 0 to disable time-based checkpointing. If time-based checkpointing is not disabled, set it to checkpoint once per hour or more.
- Increase the LOG_CHECKPOINT_INTERVAL parameter (in db blocks) in the INIT.ORA file, or set it to an arbitrarily large value so that change-based checkpoints will only occur during a redo log switch.
- Examine the redo log size and the resulting frequency of redo log switches.
Other notes: Note that regardless of the checkpoint frequency, no data is lost in the event of an instance crash. All changes are recorded to the redo logs and would be applied during instance recovery on the next startup, so checkpoint frequency will impact the time required for instance recovery. Presented below is a typical scenario:
- Set the LOG_CHECKPOINT_INTERVAL to an arbitrarily large value, set the LOG_CHECKPOINT_TIMEOUT to 2 hours, and size the redo logs so that a log switch will normally occur once per hour. During times of heavy OLTP activity, a change-based log switch will occur approximately once per hour, and no time-based checkpoints will occur. During periods of light OLTP activity, a time-based checkpoint will occur at least once every two hours, regardless of the number of changes. Setting the LOG_CHECKPOINT_INTERVAL arbitrarily large allows change-based checkpoint frequency to be adjusted during periods of heavy use by re-sizing the redo logs on-line rather than adjusting the initialization parameter and performing an instance shutdown/restart.
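The rolled-back-checkpoint arithmetic above is just a subtraction of the two v$sysstat counters; as a sketch (with made-up counter values):

```python
def checkpoints_rolled_back(started, completed):
    """'background checkpoints started' minus 'background checkpoints
    completed', per the v$sysstat queries above."""
    return started - completed

# Hypothetical instance-lifetime counters:
rolled_back = checkpoints_rolled_back(started=1042, completed=1040)
print(rolled_back)           # 2 -- above the target of 1 or less
print(rolled_back <= 1)      # False: checkpoint settings deserve a look
```

A difference of 1 is normal for a checkpoint currently in progress; a persistently larger gap is the contention signal, consistent with the "1 or less" target.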

9) Rollback Segment Contention:
Rollback segments are the structures into which undo information for uncommitted changes is temporarily stored. This behavior serves two purposes. First, a session can undo a change it just issued by simply issuing a ROLLBACK rather than a COMMIT. Second, read consistency is established because a long-running SELECT statement against a table that is constantly being updated (for example) will get data that is consistent with the start time of the SELECT statement by reading undo information from the appropriate rollback segment. (Otherwise, the answer returned by the long-running SELECT would vary depending on whether a particular block was read before the update occurred, or after.) Rollback segments become a bottleneck when there are not enough to handle the load of concurrent activity, in which case sessions will wait for write access to an available rollback segment. Some waits for rollback segment data blocks or header blocks (usually header blocks) will always occur, so the tuning criterion is to limit the waits to a very small percentage of the total number of all data blocks requested. Note that rollback segments function exactly like table segments or index segments: they are cached in the buffer cache, and periodically checkpointed to disk.
Target: 1% or less
Value:
Rollback waits = SELECT max(count) FROM v$waitstat
WHERE class IN ('system undo header', 'system undo block','undo header', 'undo block')
GROUP BY class;
Block gets = SELECT sum(value) FROM v$sysstat WHERE name IN ('consistent gets','db block gets');
Rollback segment contention ratio = rollback waits / block gets
Correction: Create additional rollback segments.
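The contention ratio above combines a v$waitstat count with two v$sysstat counters; as back-of-the-envelope Python (values invented for illustration):

```python
def contention_ratio(waits, consistent_gets, db_block_gets):
    """Rollback segment contention: undo-class waits divided by total
    block gets (consistent gets + db block gets), per the formula above."""
    total_gets = consistent_gets + db_block_gets
    return waits / total_gets if total_gets else 0.0

# Hypothetical figures:
ratio = contention_ratio(waits=1_500,
                         consistent_gets=9_000_000,
                         db_block_gets=1_000_000)
print(ratio <= 0.01)   # True -- under the 1% target, no extra
                       # rollback segments needed
```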

10) Freelist contention:
In each table, index, or other segment type, the first one or more blocks contain one or more freelists. The freelist(s) identify the blocks in that segment that have free space available and can accept more data. Any INSERT, UPDATE, or DELETE activity will cause the freelist(s) to be accessed. Change activity with a high level of concurrency may cause waits for access to these freelists. This is seldom a problem in decision support systems or data warehouses (where updates are processed as a nightly single-session batch, for example), but it can become a bottleneck in OLTP systems supporting large numbers of users. Unfortunately, there are no initialization parameters or other instance-wide settings to correct freelist contention: it must be corrected on a table-by-table basis by re-creating the table with additional freelists and/or by modifying the PCT_USED parameter. (Please see the article on storage management.) However, freelist contention can be measured at the instance level. Some freelist waits will always occur; the objective is to limit the freelist waits to a small percentage of the total blocks requested.
Target: 1% or less
Value:
Freelist waits = SELECT count FROM v$waitstat WHERE class = 'free list';
Block gets = SELECT sum(value) FROM v$sysstat WHERE name IN ('consistent gets','db block gets');
Freelist contention ratio = Freelist waits / block gets
Correction: No method for instance-level correction. Please see the article on storage management.


Application and SQL Tuning

* Check DB Parameters
select substr(name,1,20), substr(value,1,40), isdefault, isses_modifiable, issys_modifiable
from v$parameter
where issys_modifiable <> 'FALSE'
or isses_modifiable <> 'FALSE'
order by name;

* Size of Database
compute sum of bytes on report
break on report
Select tablespace_name, sum(bytes) bytes
From dba_data_files
Group by tablespace_name;

* How much Space is Left?
compute sum of bytes on report
Select tablespace_name, sum(bytes) bytes
From dba_free_space
Group by tablespace_name;


select substr(name,1,35) name, substr(value,1,25) value
from v$parameter
where name in ('db_block_buffers','db_block_size',
'shared_pool_size','sort_area_size');

* Identify the SQL responsible for the most BUFFER_GETS and/or DISK_READS. To see what is in the SQL area:
SELECT SUBSTR(sql_text,1,80) Text, disk_reads, buffer_gets, executions
FROM v$sqlarea
WHERE executions > 0
AND buffer_gets > 100000
and DISK_READS > 100000
ORDER BY (DISK_READS * 100) + BUFFER_GETS desc
/
The column BUFFER_GETS is the total number of times the SQL statement read a database block from the buffer cache in the SGA. Since almost every SQL operation passes through the buffer cache, this value represents the best metric for determining how much work is being performed. It is not perfect, as there are many direct-read operations in Oracle that completely bypass the buffer cache. So, supplementing this information, the column DISK_READS is the total number of times the SQL statement read database blocks from disk, either to satisfy a logical read or to satisfy a direct read. Thus, the formula:
(DISK_READS * 100) + BUFFER_GETS
is a very adequate metric of the amount of work being performed by a SQL statement. The weighting factor of 100 is completely arbitrary, but it reflects the fact that DISK_READS are inherently more expensive than BUFFER_GETS against shared memory.
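The weighting can be sketched as a small ranking function (statement labels and figures below are invented):

```python
def work_metric(disk_reads, buffer_gets, weight=100):
    """DISK_READS * 100 + BUFFER_GETS, the arbitrary weighting used in
    the ORDER BY of the v$sqlarea query above."""
    return disk_reads * weight + buffer_gets

# Hypothetical v$sqlarea rows: (label, disk_reads, buffer_gets)
stmts = [("full scan",      400_000,   450_000),
         ("indexed lookup",   1_000, 2_500_000),
         ("batch insert",   150_000,   300_000)]

ranked = sorted(stmts, key=lambda s: work_metric(s[1], s[2]), reverse=True)
print([s[0] for s in ranked])  # ['full scan', 'batch insert', 'indexed lookup']
```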
Patterns to look for
DISK_READS close to or equal to BUFFER_GETS: this indicates that most (if not all) of the gets, or logical reads, of database blocks are becoming physical reads against the disk drives. This generally indicates a full-table scan, which is usually not desirable but which is usually quite easy to fix.

* Finding the top 25 SQL
declare
top25 number;
text1 varchar2(4000);
x number;
len1 number;
cursor c1 is
select buffer_gets, substr(sql_text,1,4000)
from v$sqlarea
order by buffer_gets desc;
begin
dbms_output.put_line('Gets'||' '||'Text');
dbms_output.put_line('----------'||
' '||'----------------------');
open c1;
for i in 1..25 loop
fetch c1 into top25, text1;
dbms_output.put_line(rpad(to_char(top25),9)||
' '||substr(text1,1,66));
len1:=length(text1);
x:=66;
while len1 > x-1 loop
dbms_output.put_line('" '||substr(text1,x,66));
x:=x+66;
end loop;
end loop;
end;
/

* Displays the percentage of SQL executed that did NOT incur an expensive hard parse. A low number may indicate literal SQL or another cursor-sharing problem.
The target ratio depends on your development environment; for OLTP it should be above 90 percent.

select 100 * (1-a.hard_parses/b.executions) noparse_hitratio
from (select value hard_parses
from v$sysstat
where name = 'parse count (hard)' ) a
,(select value executions
from v$sysstat
where name = 'execute count') b;
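The ratio reduces to simple arithmetic over two v$sysstat counters; for example (made-up counter values):

```python
def noparse_hit_ratio(hard_parses, executions):
    """100 * (1 - hard_parses / executions), per the query above."""
    return 100 * (1 - hard_parses / executions)

# Hypothetical counters:
ratio = noparse_hit_ratio(hard_parses=5_000, executions=200_000)
print(round(ratio, 2))   # 97.5 -- above the 90 percent OLTP guideline
```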

* HIT RATIO BY SESSION:
column HitRatio format 999.99
select substr(Username,1,15) username,
Consistent_Gets,
Block_Gets,
Physical_Reads,
100*(Consistent_Gets+Block_Gets-Physical_Reads)/
(Consistent_Gets+Block_Gets) HitRatio
from V$SESSION, V$SESS_IO
where V$SESSION.SID = V$SESS_IO.SID
and (Consistent_Gets+Block_Gets)>0
and Username is not null;
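The per-session hit ratio is the classic buffer cache formula; in sketch form (session figures invented):

```python
def hit_ratio(consistent_gets, block_gets, physical_reads):
    """100 * (logical reads - physical reads) / logical reads, matching
    the HitRatio column of the query above."""
    logical = consistent_gets + block_gets
    return 100 * (logical - physical_reads) / logical

# Hypothetical v$sess_io figures for one session:
print(round(hit_ratio(90_000, 10_000, 5_000), 2))  # 95.0
```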

* IO PER DATAFILE:
select substr(DF.Name,1,40) File_Name,
FS.Phyblkrd Blocks_Read,
FS.Phyblkwrt Blocks_Written,
FS.Phyblkrd+FS.Phyblkwrt Total_IOs
from V$FILESTAT FS, V$DATAFILE DF
where DF.File#=FS.File#
order by FS.Phyblkrd+FS.Phyblkwrt desc;

* Report per user
select substr(username,1,10) "Username", created "Created",
substr(granted_role,1,25) "Roles",
substr(default_tablespace,1,15) "Default TS",
substr(temporary_tablespace,1,15) "Temporary TS"
from sys.dba_users, sys.dba_role_privs
where username = grantee (+)
order by username;

select substr(a.tablespace_name,1,10) tablespace,
round(sum(a.total1)/1024/1024, 1) Total,
round(sum(a.total1)/1024/1024, 1)-
round(sum(a.sum1)/1024/1024, 1) used,
round(sum(a.sum1)/1024/1024, 1) Free,
round(sum(a.sum1)/1024/1024,1)*100/round(sum(a.total1)/1024/1024,1) Percent_free,
round(sum(a.maxb)/1024/1024, 1) Largest,
max(a.cnt) Fragment
from (select tablespace_name, 0 total1, sum(bytes) sum1,
max(bytes) MAXB,count(bytes) cnt
from dba_free_space
group by tablespace_name
union
select tablespace_name, sum(bytes) total1, 0, 0, 0
from dba_data_files
group by tablespace_name) a
group by a.tablespace_name
/

* Segments whose next extent can't fit
select substr(owner,1,10) owner, substr(segment_name,1,40) segment_name, substr(segment_type,1,10) segment_type, next_extent
from dba_segments
where next_extent>
(select sum(bytes) from dba_free_space
where tablespace_name = dba_segments.tablespace_name);

* Find Tables/Indexes fragmented into > 15 pieces
Select substr(owner,1,8) owner, substr(segment_name,1,42) segment_name, segment_type, extents
From dba_segments
Where extents > 15;

* COALESCING FREE SPACE
select file_id, block_id, blocks, bytes from dba_free_space
where tablespace_name = 'xxx' order by 1,2;

Issue: ALTER TABLESPACE xxx COALESCE;

Mini SQL Script for Coalesce
set echo off pages 0 trimsp off feed off
spool coalesce.sql
select 'alter tablespace '||tablespace_name||' coalesce;'
from sys.dba_tablespaces
where tablespace_name not in ('TEMP','ROLLBACK');
spool off
@coalesce.sql
host rm coalesce.sql

* Information about a Table
Select Table_Name, Initial_Extent, Next_Extent,
Pct_Free, Pct_Increase
From dba_tables
Where Table_Name = upper('&Table_name');

* Information about an Index:
Select Index_name, Initial_Extent, Next_Extent
From Dba_indexes
Where Index_Name = upper('&Index_name');


Run ANALYZE INDEX index_name VALIDATE STRUCTURE first to populate index_stats, then:
select lf_rows, lf_rows_len, del_lf_rows, del_lf_rows_len,
(del_lf_rows * 100) / lf_rows "Ratio"
from index_stats where name = upper('&Index_name');

* Fixing Table Fragmentation
Example: CUSTOMER Table is fragmented
Currently in 22 Extents of 1M each.
(Can be found by querying DBA_EXTENTS)

CREATE TABLE CUSTOMER1
TABLESPACE NEW
STORAGE (INITIAL 23M NEXT 2M PCTINCREASE 0)
AS SELECT * FROM CUSTOMER;
DROP TABLE CUSTOMER;
RENAME CUSTOMER1 TO CUSTOMER;
(Re-create all necessary privileges, grants, indexes, etc.)

* PIN and UNPIN objects:
execute dbms_shared_pool.keep('object_name', 'flag');
where flag is 'P' (package, procedure, or function), 'R' (trigger), or 'Q' (sequence).

exec dbms_shared_pool.unkeep('SCOTT.TEMP','P');

1- Create the following Trigger

CREATE OR REPLACE TRIGGER db_startup_keep
AFTER STARTUP ON DATABASE
BEGIN
sys.dbms_shared_pool.keep('SYS.STANDARD');
sys.dbms_shared_pool.keep('SYS.DBMS_STANDARD');
sys.dbms_shared_pool.keep('SYS.DBMS_UTILITY');
sys.dbms_shared_pool.keep('SYS.DBMS_DESCRIBE');
sys.dbms_shared_pool.keep('SYS.DBMS_OUTPUT');
END;
/

2- The following Oracle core packages owned by user SYS should be pinned in the shared PL/SQL area:
DUTIL
STANDARD
DIANA
DBMS_SYS_SQL
DBMS_SQL
DBMS_UTILITY
DBMS_DESCRIBE
DBMS_JOB
DBMS_STANDARD
DBMS_OUTPUT
PIDL

3- Run the following Script to check pinned/unpinned packages
SELECT substr(owner,1,10)||'.'||substr(name,1,35) "Object Name",
' Type: '||substr(type,1,12)||
' size: '||sharable_mem ||
' execs: '||executions||
' loads: '||loads||
' Kept: '||kept
FROM v$db_object_cache
WHERE type in ('TRIGGER','PROCEDURE','PACKAGE BODY','PACKAGE')
-- AND executions > 0
ORDER BY executions desc,
loads desc,
sharable_mem desc;

* ROW_CHAINING
analyze table xxx list chained rows;
(This populates the CHAINED_ROWS table, which is created by the utlchain.sql script.)


create table add_chained as
select * from xxx
where rowid in (select head_rowid from chained_rows where table_name = upper('&Table_name'));
delete from xxx where rowid in (select head_rowid from chained_rows where table_name = upper('&Table_name'));

insert into xxx select * from add_chained;
drop table add_chained;

* To find out chained rows
ANALYZE TABLE TEST ESTIMATE STATISTICS;

Then from DBA_TABLES,
SELECT (CHAIN_CNT / NUM_ROWS) * 100 FROM DBA_TABLES WHERE TABLE_NAME = upper('&Table_name');

This gives the chained rows as a percentage of the total number of rows in the table. If this percentage is high (near 5%), the rows do not contain LONG or similar datatypes, and each row could fit inside a single data block, then the chaining is really row migration caused by updates, and PCTFREE should be increased to leave room for rows to grow in place.
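The percentage check can be sketched as (figures invented):

```python
def chained_pct(chain_cnt, num_rows):
    """(CHAIN_CNT / NUM_ROWS) * 100 from DBA_TABLES, per the query above."""
    return (chain_cnt / num_rows) * 100

# Hypothetical statistics after ANALYZE:
pct = chained_pct(chain_cnt=2_400, num_rows=50_000)
print(round(pct, 1))   # 4.8 -- near the 5% threshold discussed above
```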

Distribution of disk I/O

  • Locate the logfiles on their own disks, on the fastest-writing disks. Oracle writes to the redo logs frequently and sequentially. If there are other files on the same disk, the disk heads have to move between the end of the logfile and the other files. This movement increases disk seek time, causing unnecessary delays in redo log I/O operations and resulting in poor performance.
  • If easy manageability is your goal, use the UNIX file system.
  • If you are an experienced Unix and Oracle administrator, raw logical volumes can give you some performance benefits.
  • Don't put logfiles and archived logfiles on the same disk as your datafiles.
  • Allocate one disk for the User Data Tablespace.
  • Place Rollback, Index, and System Tablespaces on separate disks.
2 DISKS:
1- exec, index, redo logs, export files, control files
2- data, rollback segments, temp, archive log files, control files

3 DISKS
Disk 1: SYSTEM tablespace, control file, redo log
Disk 2: INDEX tablespace, control file, redo log, ROLLBACK tablespace
Disk 3: DATA tablespace, control file, redo log
or
Disk 1: SYSTEM tablespace, control file, redo log
Disk 2: INDEX tablespace, control file, redo log
Disk 3: DATA tablespace, control file, redo log, ROLLBACK tablespace

4 DISKS
1- exec, redo logs, export files, control files
2- data, temp, control files
3- indexes, control files
4- archive logs, rollback segs, control files

5 DISKS
1- exec, redo logs, system tablespace, control files
2- data, temp, control files
3- indexes, control files
4- rollback segments, export, control files
5- archive, control files

ANALYZE
REM Analyze all tables and indexes
set pagesize 0 feedback off trimspool on linesize 999 echo off
spool anal_DB.sql
prompt spool anal_DB.txt
select 'analyze index ' || owner || '.' || index_name || ' compute statistics;'
from sys.dba_indexes
where owner not in ('SYS','SYSTEM');
select 'analyze table ' || owner || '.' || table_name ||
' estimate statistics sample 20 percent;'
from sys.dba_tables
where owner not in ('SYS','SYSTEM');
spool off
@anal_DB.sql


DBMS_UTILITY.ANALYZE_SCHEMA('userid', 'COMPUTE');


DBMS_UTILITY.ANALYZE_SCHEMA('userid', 'ESTIMATE',NULL,20);


DBMS_DDL.ANALYZE_OBJECT('TABLE', 'schema', 't_name', 'ESTIMATE', null, 20);

ANALYZE TABLE table ESTIMATE STATISTICS sample 20 percent;


DBMS_DDL.ANALYZE_OBJECT('INDEX', 'schema', 'i_name', 'COMPUTE');


DBMS_UTILITY.ANALYZE_SCHEMA ('userid', 'ESTIMATE', 100000);

ANALYZE TABLE table ESTIMATE STATISTICS sample 5000 rows;

DBMS_UTILITY.ANALYZE_SCHEMA ('userid', 'DELETE');
or
ANALYZE TABLE table DELETE STATISTICS;

* DBMS_STATS
The PL/SQL package DBMS_STATS lets you generate and manage statistics for cost-based optimisation. You can use this package to gather, modify, view, export, import, and delete statistics.

The DBMS_STATS package can gather statistics on indexes, tables, columns, and partitions, as well as statistics on all schema objects in a schema or database. The statistics-gathering operations can run either serially or in parallel (DATABASE, SCHEMA, and TABLE statistics only).

Procedure Name Description
GATHER_TABLE_STATS Collects table, column, and index statistics.
GATHER_INDEX_STATS Collects index statistics.
GATHER_SCHEMA_STATS Collects statistics for all objects in a schema.
GATHER_DATABASE_STATS Collects statistics for all objects in a database.
GATHER_SYSTEM_STATS Collects CPU and I/O statistics for the system.

Prior to 8i, you would use the ANALYZE command for this. From 8i onwards, using ANALYZE to gather optimiser statistics is not recommended because of various restrictions; for example:

  1. ANALYZE always runs serially.
  2. ANALYZE calculates global statistics for partitioned tables and indexes instead of gathering them directly. This can lead to inaccuracies for some statistics, such as the number of distinct values.
  3. ANALYZE cannot overwrite or delete some of the values of statistics that were gathered by DBMS_STATS.
  4. Most importantly, in the future, ANALYZE will not collect statistics needed by the cost-based optimiser.
ANALYZE can gather additional information that is not used by the optimiser, such as information about chained rows and the structural integrity of indexes, tables, and clusters. DBMS_STATS does not gather this information.

Example:
execute dbms_stats.gather_table_stats (ownname => 'SCOTT'
, tabname => 'DEPT'
, partname=> null
, estimate_percent => 20
, degree => 5
, cascade => true);

execute dbms_stats.gather_schema_stats (ownname => 'SCOTT'
, estimate_percent => 20
, degree => 5
, cascade => true);

execute dbms_stats.gather_database_stats (estimate_percent => 20
, degree => 5
, cascade => true);

SQL Source - Dynamic Method
DECLARE
sql_stmt VARCHAR2(1024);
BEGIN
FOR tab_rec IN (SELECT owner,table_name
FROM all_tables WHERE owner like UPPER('&1')
) LOOP
sql_stmt := 'BEGIN dbms_stats.gather_table_stats (ownname => :1, tabname
=> :2,partname=> null, estimate_percent => 20, degree => 5 ,cascade => true); END;' ;

EXECUTE IMMEDIATE sql_stmt USING tab_rec.owner, tab_rec.table_name ;

END LOOP;
END;
/


Patching and Password Issues (Deep Dive)

Adpatch is running and fails on one of the workers. To fix this worker and continue with the patch installation, a new patch needs to be applied. Considering that only one adpatch session can run on an instance at any given time, how can a patch be applied when adpatch is already running?

Solution Description
1. Using the adctrl utility, shutdown the workers.

a. adctrl

b. Select option 3 "Tell worker to shutdown/quit"

2. Backup the FND_INSTALL_PROCESSES table which is owned by the APPLSYS schema

a. sqlplus applsys/applsys_password

b. SQL> create table fnd_Install_processes_back as
select * from fnd_Install_processes;

c. The 2 tables should have the same number of records.

SQL> select count(*) from fnd_Install_processes_back;

SQL> select count(*) from fnd_Install_processes;

3. Backup the AD_DEFERRED_JOBS table.

a. $ sqlplus applsys/applsys_passwd

b. SQL> create table AD_DEFERRED_JOBS_back
as select * from AD_DEFERRED_JOBS;

c. The 2 tables should have the same number of records.

SQL>select count(*) from AD_DEFERRED_JOBS_back;

SQL> select count(*) from AD_DEFERRED_JOBS;

4. Backup the .rf9 files located in $APPL_TOP/admin//restart directory.

At this point, the adpatch session should have ended and the cursor should be back at the Unix prompt.

a. cd $APPL_TOP/admin/

b. mv restart restart_back

c. mkdir restart

6. Drop the FND_INSTALL_PROCESSES table and the AD_DEFERRED_JOBS table.

a. $ sqlplus applsys/

b. SQL> drop table FND_INSTALL_PROCESSES;

c. SQL> drop table AD_DEFERRED_JOBS;

8. Apply the new patch.

9. Restore the .rf9 files located in $APPL_TOP/admin/restart_back directory.

a. cd $APPL_TOP/admin/

b. mv restart restart__back1

c. mv restart_back restart

10. Restore the FND_INSTALL_PROCESSES table which is owned by the APPLSYS schema.

a. sqlplus applsys/

b. create table fnd_Install_processes as select * from fnd_Install_processes_back;

c. The 2 tables should have the same number of records.

select count(*) from fnd_Install_processes;

select count(*) from fnd_Install_processes_back;

11. Restore the AD_DEFERRED_JOBS table.

a. sqlplus applsys/

b. create table AD_DEFERRED_JOBS as select * from AD_DEFERRED_JOBS_back;

c. The 2 tables should have the same number of records.

select count(*) from AD_DEFERRED_JOBS_back;

select count(*) from AD_DEFERRED_JOBS;

12. Re-create synonyms

a. sqlplus apps/apps

b. create synonym AD_DEFERRED_JOBS for APPLSYS.AD_DEFERRED_JOBS;

c. create synonym FND_INSTALL_PROCESSES FOR APPLSYS.FND_INSTALL_PROCESSES;

13. Start adpatch, it will resume where it stopped previously.



MANUAL METHOD for changing the APPS password
- Follow the steps exactly. You MUST back up the fnd_oracle_userid table first.

1)Have all users log out of applications.
2) Shutdown the concurrent managers.
3) Log into applications as SYSADMIN.
4) Navigate to the Register Oracle IDs form
(Security -> Oracle -> Register)
5) Query up all available Oracle IDs.
6) Log into SQL*Plus as SYSTEM.
$ sqlplus SYSTEM/
7) In applications session, enter the new password for APPLSYS:
A) View -> Query by example -> Run
B) Change password of applsys ->
press down arrow.
C) Verify password of applsys -> press down arrow.
D) File save. DO NOT requery or exit the form!
8) In the SQL*Plus session run the following command

SQL> alter user applsys identified by ;

9) In the applications session, enter the new password for APPS:

A) View -> Query by example -> Run
B) Change password of apps ->
press down arrow.
C) Verify password of apps ->
press down arrow.
D) File save. DO NOT requery or exit the form!
10) In Unix SQL*Plus
SQL> alter user apps identified by ;
11) Exit Oracle Applications.
12) Close Browser.
13) Log out of SQL*Plus.
14) Open new Browser session and Log into Oracle Applications.
15) Restart concurrent managers after changing batch startup script password.
Proceed with exactly the above steps to change the remaining Oracle database accounts (Database User Name on the Oracle Users form: Security -> Oracle -> Register).
Thus, if there are 140 application passwords to update, it will take a few hours.

II. BATCH METHOD
Patch 1685689 provides the FNDCPASS utility, which changes passwords when invoked with the required parameters (see usage below). This utility gives the applications administrator the ability to change application passwords in batch.

NOTE: Patch 1685689 is not yet obsolete as of the writing of this post.

README from patch 1685689: A NEW C EXECUTABLE TO CHANGE APPLICATION, ORACLE USER, AND APPLSYS PASSWORDS

Usage =====
FNDCPASS apps/apps 0 Y system/manager SYSTEM APPLSYS WELCOME



Tuesday, January 27, 2009

To Install OLAP and Demand Planning In Oracle Applications

Check the following before proceeding with the installation of OLAP

a) In ODP as Demand Planning System Administrator (DPSA), Demand Plan Manager (DPM) or Demand Planner (DP), locate the 'About Demand Planning' button. As DPM or DP it is in the left frame, in the Help icon (?).

Note the setting for 'Build'

Alternatively, you can check the ODPCODE.VERSION in Express Monitor or in

SQL*PLUS as the APPS user:

set serveroutput on
exec dbms_aw.execute('aw attach APPS.odpcode ro');
exec dbms_aw.execute('decimals=5');
exec dbms_aw.execute('show odpcode.version');
exec dbms_aw.execute('aw detach APPS.odpcode')


With the info in 'Build' or in ODPCODE.VERSION, you can compare with this table:
Patch Name Patch Number ODPCODE.VERSION

---- ----- ----

Patchset J, 11.5.10 3930903 1.245

Patchset J CU1 4042499 1.245

Patchset J, 11.5.10.CU2 4398235 1.250


Note that since the table above is not complete for all ODP 11.5.10 patches, it is recommended to use method b) instead to discover the exact ODP patch level.
The following patches already contain the internal version properly set:

Patch Name Patch Number ODPCODE.VERSION

---- ----- ----

DP RUP#6 5934903 1.256

DP RUP#7 5869450 1.257

DP RUP#8 6036268 1.258

DP RUP#9 6200268 1.259

DP RUP#10 6353791 1.26600

DP RUP#11 6625130 1.26700

DP RUP#12 6751750 1.26800

DP RUP#13 7115768 1.26900

------------------------

b) Run the following query as APPS account from SQL*PLUS:

select a.bug PatchNo,

decode (a.bug,

'3930903','DPE CONSOLIDATED 11.5.10 PATCH',

'4104832', 'CUMULATIVE 1 PATCH',

'4398235','CUMULATIVE 2 PATCH ',

'5120460','DPE RUP#1',

'5367230','DP RUP#2 PATCH Obsolete',

'5578973','DP RUP#2 PATCH',

'5395666','DP RUP#3 PATCH',

'5578993','DP RUP#4 PATCH',

'5659805','DP RUP#5 PATCH',

'5934903','DP RUP#6 PATCH',

'5869450','DP RUP#7 PATCH',

'6036268','DP RUP#8 PATCH Obsolete',

'6321532','DP RUP#8 PATCH',

'6200268','DP RUP#9 PATCH',

'6416414','Patch on Top RUP#9',

'6353791','DP RUP#10 PATCH',

'6625130','DP RUP#11 PATCH',

'6751750','DP RUP#12 PATCH',

'7115768','DP RUP#13 PATCH')

Description,

nvl(to_char(b.creation_date),'Not Installed') Installed

from applsys.ad_bugs b,

(select 3930903 bug from dual union

select 4104832 bug from dual union

select 4398235 bug from dual union

select 5120460 bug from dual union

select 5367230 bug from dual union

select 5578973 bug from dual union

select 5395666 bug from dual union

select 5578993 bug from dual union

select 5659805 bug from dual union

select 5934903 bug from dual union

select 5869450 bug from dual union

select 6036268 bug from dual union

select 6321532 bug from dual union

select 6200268 bug from dual union

select 6416414 bug from dual union

select 6353791 bug from dual union

select 6625130 bug from dual union

select 6751750 bug from dual union

select 7115768 bug from dual

) a

where to_char(substr(a.bug,1,7)) = to_char(b.bug_number(+))

order by a.bug desc;

------------------------


c) An alternative way to check the ODP patch level is to run these commands from SQL*PLUS:

ODP 11.5.10 base without any patch:

select * from applsys.ad_bugs where bug_number='3930903';

result:

Patch 3930903

Description DPE CONSOLIDATED 11.5.10 PATCH

Product Demand Planning Engine

Release 11i

Last Updated 05-JAN-2005


ODP 11.5.10 CUMULATIVE PATCH 1:

select * from applsys.ad_bugs where bug_number='4104832';

result:

Patch 4104832

Description HOLDER FOR DPE 11.5.10.1CU (V5) - CUMULATIVE 1 PATCH FOR 11.5.10+

Product Demand Planning Engine

Release 11i

Last Updated 17-MAR-2005


ODP CUMULATIVE PATCH 2

select * from applsys.ad_bugs where bug_number='4398235';

result:

Patch 4398235

Description HOLDER FOR DPE 11.5.10.2CU (V7) - CUMULATIVE 2 PATCH FOR 11.5.10+

Product Demand Planning Engine

Release 11i

Last Updated 08-JUL-2005


ODP RUP#1 (CU3)

select * from applsys.ad_bugs where bug_number='5120460';

result:

Patch 5120460

Description DPE RUP#1 ON TOP OF 11.5.10 CU2

Product Demand Planning Engine

Release 11i

Last Updated 03-JUL-2006



ODP RUP#2

select * from applsys.ad_bugs where bug_number='5367230';

result:

Patch 5367230

Description DP 11510 RUP#2 PATCH

Product Demand Planning Engine

Release 11i

Last Updated 05-OCT-2006


ODP RUP#2 (same as the previous one, but with a corrected .sql file)

select * from applsys.ad_bugs where bug_number='5578973';


result:

Patch 5578973

Description DP 11510 RUP#2 PATCH

Product Demand Planning

Release 11i

Last Updated 05-OCT-2006


ODP DP RUP#3

select * from applsys.ad_bugs where bug_number='5395666';


result:

Patch 5395666

Description DP RUP#3 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 06-NOV-2006



ODP DP RUP#4

select * from applsys.ad_bugs where bug_number='5578993';


result:

Patch 5578993

Description DP RUP#4 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 04-DEC-2006



ODP DP RUP#5

select * from applsys.ad_bugs where bug_number='5659805';


result:

Patch 5659805

Description DP RUP#5 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 11-JAN-2007


ODP DP RUP#6

select * from applsys.ad_bugs where bug_number='5934903';


result:

Patch 5934903

Description DP RUP#6 PATCH FOR (MSD+MSE) 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 15-MAR-2007


NOTE: Apply Patch 5934903 instead of Patch 5715560.

Patch 5715560 (below) should NOT be applied on RDBMS 10.x; it is safe to apply on RDBMS 9.x.


select * from applsys.ad_bugs where bug_number='5715560';


result:

Patch 5715560

Description DP RUP#6 PATCH FOR (MSD+MSE) 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 09-MAR-2007


ODP DP RUP#7

select * from applsys.ad_bugs where bug_number='5869450';


result:

Patch 5869450

Description DP RUP#7 PATCH FOR (MSD+MSE) 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 23-MAY-2007


ODP DP RUP#8

select * from applsys.ad_bugs where bug_number='6321532';


result:

Patch 6321532

Description DP RUP#8 PATCH FOR (MSD+MSE) 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 03-AUG-2007


NOTE: Apply Patch 6321532 instead of Patch 6036268.

Patch 6036268 (below) should NOT be applied on RDBMS 10.x; it is safe to apply on RDBMS 9.x.


select * from applsys.ad_bugs where bug_number='6036268';


result:

Patch 6036268

Description DP RUP#8 PATCH FOR (MSD+MSE) 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 31-JUL-2007


ODP DP RUP#9

select * from applsys.ad_bugs where bug_number='6200268';


result:

Patch 6200268

Description DP RUP#9 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 28-SEP-2007


NOTE:

If you have already applied Patch 6200268 (RUP#9), then you should also apply the following patch:

Patch 6416414

Description ERROR "(MXUPDATE03) AW MSD.MSD3581A1 IS READ-ONLY" WHILE LOGGING IN

Product Demand Planning

Release 11i

Last Updated 20-SEP-2007


This is because code from Patch 6370304 was included in RUP#9, which makes this follow-up patch necessary.

Query:

select * from applsys.ad_bugs where bug_number='6416414';

ODP DP RUP#10

select * from applsys.ad_bugs where bug_number='6353791';

result:

Patch 6353791

Description DP RUP#10 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 12-DEC-2007


ODP DP RUP#11

select * from applsys.ad_bugs where bug_number='6625130';


result:

Patch 6625130

Description DP RUP#11 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 24-MAR-2008


ODP DP RUP#12

select * from applsys.ad_bugs where bug_number='6751750';

result:

Patch 6751750

Description APS DP RUP#12 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 10-AUG-2008


ODP DP RUP#13

select * from applsys.ad_bugs where bug_number='7115768';

result:

Patch 7115768

Description APS DP RUP#13 PATCH FOR 11.5.10 BRANCH

Product Demand Planning Engine

Release 11i

Last Updated 28-OCT-2008
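The individual checks above can also be automated. As a rough sketch (the mapping below is simply transcribed from the listing above; fetching the applied bug numbers from applsys.ad_bugs is left to whichever Oracle client you use, and the helper name is mine, not Oracle's), a small function that reports the newest ODP patch level found among a set of applied bug numbers:

```python
# Map of ODP 11.5.10 bug numbers to their patch level, transcribed from
# the listing above. The list is in release order: later entries are newer.
ODP_PATCH_LEVELS = [
    ("3930903", "11.5.10 base"),
    ("4104832", "Cumulative Patch 1"),
    ("4398235", "Cumulative Patch 2"),
    ("5120460", "RUP#1 (CU3)"),
    ("5367230", "RUP#2"),
    ("5578973", "RUP#2 (corrected .sql)"),
    ("5395666", "RUP#3"),
    ("5578993", "RUP#4"),
    ("5659805", "RUP#5"),
    ("5934903", "RUP#6"),
    ("5869450", "RUP#7"),
    ("6321532", "RUP#8"),
    ("6200268", "RUP#9"),
    ("6353791", "RUP#10"),
    ("6625130", "RUP#11"),
    ("6751750", "RUP#12"),
    ("7115768", "RUP#13"),
]

def highest_odp_level(applied_bugs):
    """Return the newest ODP patch level among the applied bug numbers,
    or None if no ODP patch is found. `applied_bugs` is the set of
    bug_number values selected from applsys.ad_bugs."""
    applied = set(applied_bugs)
    level = None
    for bug, name in ODP_PATCH_LEVELS:
        if bug in applied:
            level = name  # keep overwriting: last match is the newest
    return level
```

For example, if ad_bugs contains 3930903, 4398235 and 6751750, the helper reports RUP#12; remember the notes above about which RUP#6 and RUP#8 holders to use on RDBMS 10.x.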


d) To check the Web Agent version, you can use the following:


d.1) Using Express Monitor in ODP:

aw attach apps.xwdevkit first

rpr w 30 _xwd_devkitver


Example:


_XWD_DEVKITVER

------------------------------

9.48.0.0


d.2) Using SQL*PLUS:


set serveroutput on

exec dbms_aw.execute('aw attach apps.xwdevkit ro');

exec dbms_aw.execute('rpr w 30 _xwd_devkitver');

exec dbms_aw.execute('aw detach apps.xwdevkit');


Example:

_XWD_DEVKITVER

------------------------------

9.48.0.0


If you do not see the directory cwmlite\admin on your system, you need to install Oracle OLAP 9.2.0.1.0 from the product CD, following these steps:

1. Insert the 9.2.0.1 Disk 1 into the CD drive
2. Choose Install/Deinstall
3. Click Next
4. Choose your Oracle Home -- this is where you already installed the 9i database without the OLAP option
5. Select Oracle 9i Database 9.2.0.1.0 Enterprise Edition
6. Click Next
7. Select Custom
8. Click Next
9. Under Enterprise Edition Options, check Oracle OLAP 9.2.0.1.0

Once the scripts are installed, perform the following steps to install the CWM Lite metadata. In SQL*Plus, connected to the database as a DBA, issue the commands below, replacing mysid with your database SID and ORACLE_BASE with the root directory of all your Oracle product installs (if you are using Optimal Flexible Architecture (OFA)). Note that in SQL*Plus, ? already expands to your Oracle Home.

If the cwmlite/admin directory is NOT present, issue the following SQL*Plus commands:

SQL> connect SYS/sys_password as SYSDBA
SQL> CREATE TABLESPACE "CWMLITE" LOGGING DATAFILE '/ORACLE_BASE/oradata/mysid/cwmlite01.dbf'
     SIZE 20M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
     EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
SQL> @?/olap/admin/olap.sql mysid
SQL> connect SYS/sys_password as SYSDBA
SQL> @?/cwmlite/admin/oneinstl.sql CWMLITE TEMP;

If the cwmlite/admin directory is present and the CWMLITE tablespace already exists, simply issue the following commands in SQL*Plus:

SQL> connect SYS/sys_password as SYSDBA
SQL> @?/olap/admin/olap.sql mysid
SQL> @?/cwmlite/admin/oneinstl.sql CWMLITE TEMP;

oneinstl.sql takes two arguments: the destination tablespace (in this case CWMLITE) and the temporary tablespace (TEMP).

For 10g:
--------
There is no need to create the CWMLITE tablespace, as the catalog is now stored in the SYSAUX tablespace. All you need to do in 10g is run this script on Enterprise Edition instances that require the OLAP option:

SQL> @?/olap/admin/olap.sql SYSAUX TEMP

Apply Patch 3553738 (OPatch) (mandatory).

Applications Part:

Download and apply Patch 3930903.

Then apply Patch 4398235.


Now check, under the Demand Planning Administrator responsibility, that the relevant menus are available.
