
New and Improved Methods for Administering Your Database

By: Howard Horowitz
To read more DBA articles, visit http://dba.fyicenter.com/article/



Objective

Expose you to some of the database features available in Oracle 10g and compare them to the lengthy workarounds used in previous Oracle versions.

10g provides faster, cleaner, and more efficient ways to administer your database.
- 9i mountains = tasks that demand significant effort and resources in 9i
- 10g molehills = the same tasks accomplished with far less effort in 10g


Popular Features
Automatic Shared Memory Management (ASMM)
Data Pump
SQL Tuning Advisor
Flashback Database
RMAN - Backupset Compression

Note: If you were contemplating moving to Oracle or upgrading from a previous version, contemplate no more. Now's the time to make the move. Oracle 10g has made major strides in automating some of the laborious tasks of an Oracle DBA/Developer.


Automatic Shared Memory Management (ASMM)
8i method for automating SGA management
There is no method.
Workaround
You have to shut down the database and manually change the values. This can be done programmatically with multiple init<SID>.ora files, each containing different values for the SGA parameters, automated via shell scripts and cron/Autosys.

9i method for automating SGA management.
Still not doable; however, you can dynamically change many of the values without shutting down the database.
Workaround
You have to use the alter system/session commands and rely on the v$shared_pool_advice and v$db_cache_advice views for proper settings. Manual/programmatic effort is required if the behavior of your database changes and SGA changes are needed. Use cron or Autosys to automate.

10g method for automating SGA management.
alter system set sga_target='x';

sga_target -- This parameter is new in Oracle Database 10g and reflects the total size of memory an SGA can consume.
Shared pool
Buffer cache
Java Pool
Large Pool
Automatically adapts to workload changes
Maximizes memory utilization
Single Parameter makes it easier to use
Helps eliminate out of memory errors
Can help improve performance
Requires an SPFILE and SGA_TARGET > 0. Cannot exceed sga_max_size.
Does not apply to the following parameters.
--Log Buffer
--Other Buffer Caches (KEEP/RECYCLE, other block sizes)
--Streams Pool (new in Oracle Database 10g)
--Fixed SGA and other internal allocations
Can be adjusted via EM or command line.
A new background process named Memory Manager (MMAN) manages the automatic shared memory.

Note: If you already have a parameter set, then sga_target will not allocate memory less than the existing value. Unless you're a control freak, this takes the guessing out of memory allocation. It DYNAMICALLY allocates and reallocates memory as it's needed. Oracle handles this for you.
Prior to ASMM it was the DBA's job to schedule cron or dbms_job tasks to dynamically modify db_cache_size and shared_pool_size via persistent SPFILE commands for OLTP versus batch loads. This is a tedious task and leaves room for error.
To do this in 8i required multiple ifiles (init.ora) with different values for your buffer cache and shared_pool_size, plus a shutdown of the database for the change to take effect.
Dependency: Statistics_level=Typical or All (parameter value)
Still need to manually manage the log buffer, keep buffer pool, recycle buffer pool.
Can use alter system set sga_target=x. Still need sga_max_size
OEM provides you with the ability to do this.
Parameter values can be viewed in the SPFILE or via v$parameter (ASMM records its computed sizes as double-underscore entries such as <sid>.__java_pool_size).
Very easy to modify via EM or "alter system set sga_target=183500800 scope=both;".
The benefit to the customer is less downtime, which is important to financial firms when millions of dollars are on the line.
The large pool is configured by the LARGE_POOL_SIZE init.ora parameter. It is used for allocating "big" chunks of memory, such as those used by MTS, Parallel Query, and RMAN.
The System Global Area (SGA) is the memory region, shared by all users, that contains data and control information for a particular Oracle instance. The SGA is internally divided into memory components that represent pools of memory used to satisfy each category of memory allocation requests, such as requests for storing the most recently used blocks of data or logging the changes made to the database.
Tuning the sizes of these caches for optimal performance is not an easy task, even with advisory mechanisms. There is always the risk of either under-sizing a component, leading to memory allocation failures, or over-sizing a component, which wastes memory that could be used by other caches.
Oracle 10g introduces a self-tuning SGA that allows administrators to specify only the total size of the SGA, leaving to Oracle the responsibility of internally determining the optimal distribution of memory across the SGA pools. With this new feature, the distribution of memory to the various SGA caches will change dynamically over time to accommodate changes in the workload.
DB_CACHE_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, and SHARED_POOL_SIZE are automatically sized.
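To watch ASMM at work, you can query the auto-tuned component sizes. A small sketch against the 10g dynamic performance view v$sga_dynamic_components (column names as documented for 10g):

```sql
-- Observe how ASMM has distributed sga_target across the pools.
-- oper_count shows how often each component has been resized.
SELECT component,
       current_size / 1024 / 1024 AS current_mb,
       min_size     / 1024 / 1024 AS min_mb,
       oper_count                 AS resizes
FROM   v$sga_dynamic_components
ORDER  BY current_size DESC;
```

Running this before and after a workload change makes the dynamic reallocation visible.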




Automatic Shared Memory Management

DEMO
Data Pump
8i / 9i method for suspending exports and imports. N/A
8i / 9i method for restarting failed exports and imports
at point of failure. N/A
8i / 9i method for controlling the number of threads/processes. N/A
8i / 9i method for direct mode imports. N/A
8i / 9i method for monitoring exports and imports. N/A
8i / 9i method for importing and exporting data via PL/SQL. N/A
8i / 9i method for exporting/importing pre-defined objects via include or exclude keywords (grants, procedures, functions, tables, etc.). Supports like and not like clauses. N/A
8i / 9i method for remapping tablespaces and datafiles. N/A
High performance import and export
60% faster than 9i export (single thread)
15x-45x faster than 9i import (single thread)

The reason it is so much faster is that conventional import uses only conventional-mode inserts, whereas Data Pump import uses the direct path method of loading. As with export, the job can be parallelized dynamically for even more improvement. Data Pump creates a separate dump file for each degree of parallelism.
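The parallel behavior described above can be sketched at the command line (a hypothetical example; credentials and the pump_dir directory object are placeholders, and the %U substitution generates one dump file per worker):

```shell
# Hypothetical Data Pump export with 4 workers; %U yields one file per degree.
expdp scott/tiger full=y parallel=4 \
      dumpfile=pump_dir:full_%U.dmp logfile=pump_dir:full.log

# The degree can be changed while the job runs: press Ctrl+C to reach the
# interactive prompt, then issue:
#   Export> parallel=8
```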

Time is money. Data Pump has cut down data movement/processing times significantly.

SQL Tuning Advisor

Most database and application related problems are the result of poorly written SQL and missing or misused indexes.
Oracle provides an interface through OEM or via the PL/SQL stored packages dbms_advisor and dbms_sqltune to analyze existing SQL statements, provide tuning recommendations and implement those recommendations.
http://servername.com:5501/em/
Navigate to performance
Then to Advisor Central
SQL Tuning Advisor
Top SQL
Additional advisors: redo log size advisor, sql access advisor, undo advisor, and segment advisor.
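The command-line route through dbms_sqltune looks roughly like this (a hedged sketch using the documented 10g calls; the SQL text and task name are made up for illustration):

```sql
DECLARE
  l_task VARCHAR2(30);
BEGIN
  -- Create a tuning task for a problem statement.
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text  => 'SELECT * FROM emp WHERE ename = :b1',
              task_name => 'emp_tune_demo');
  -- Run the analysis.
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'emp_tune_demo');
END;
/
-- Review the recommendations (indexes, statistics, SQL profiles, restructuring).
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('emp_tune_demo') FROM dual;
```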


Flashback Database
8i / 9i method for point-in-time recovery
1. Shutdown the database
2. Restore all of the datafiles from last backup
3. Startup the database in mount state
4. Recover database until (SCN or Time)
5. Apply the necessary redo/archive logs
6. Open the database – open resetlogs

10g method for point-in-time recovery
1. Shutdown the database
2. Startup the database in mount state
3. SQL> flashback database to timestamp to_timestamp('2004-12-16 16:10:00', 'YYYY-MM-DD HH24:MI:SS');
4. Open the database – open resetlogs

New strategy for point-in-time recovery
Flashback Log captures old versions of changed blocks.
Think of it as a continuous backup
Replay log to restore DB to time
Restores just changed blocks
It's fast - recovers in minutes, not hours. Moreover, this feature removes the need for incomplete database recoveries that require physically restoring/moving datafiles.
It’s easy - single command restore
SQL> Flashback Database to scn 1329643


Restrictions
Flashback Database is for logical/user errors, not media failures. It cannot be used when:
The database control file has been restored or re-created.
A tablespace has been dropped.
The database data file that contains the object to be queried has been shrunk.
A recovery through the resetlogs command has occurred.

Views for Monitoring
V$Database
V$Flashback_Database_Log
V$Flashback_Database_Stat
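A quick sketch of how these views answer the two obvious questions - is flashback on, and how far back can we go (columns as documented for the 10g views):

```sql
-- Is Flashback Database enabled?
SELECT flashback_on FROM v$database;

-- How far back can we flash back, and how much space are the flashback logs using?
SELECT oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       flashback_size
FROM   v$flashback_database_log;
```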



RMAN – Backupset Compression
8i / 9i method for compressing backups (Compression utility)
gzip *.bak *.arc *.ctl ...etc
10g method for compressing backups
-- RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
-- RMAN>BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

Prior to Oracle 10g, RMAN reduced the size of backup images by backing up only used blocks. This was great for databases that were over-sized, however, this didn’t help for large databases with little free space.
The AS COMPRESSED BACKUPSET option of the BACKUP command allows RMAN to perform binary compression of backupsets. The resulting backupsets do not need to be uncompressed during recovery.

Pros:
Backupsets were compressed by 78% when compared to a regular backupset.
Cons:
Creating compressed backupsets imposes some extra CPU overhead during backup and restore, which can slow the backup process. If you have more than one CPU, you can use increased parallelism to run jobs on multiple CPUs and thus improve performance.
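To see what compression actually bought you, a sketch against v$rman_backup_job_details (a 10g view, assuming the input_bytes/output_bytes/compression_ratio columns as documented):

```sql
-- Compare bytes read vs. bytes written for recent RMAN backup jobs.
SELECT start_time,
       input_bytes  / 1024 / 1024 AS input_mb,
       output_bytes / 1024 / 1024 AS output_mb,
       compression_ratio
FROM   v$rman_backup_job_details
ORDER  BY start_time DESC;
```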
Hidden Gems
Renaming Tablespaces
Dictionary View Improvements
Flush Buffer Cache
Automated Statistics Collection
Flashback Drop
Flashback Table
Flashback Transaction Query
Diamonds in the rough

Rename Tablespace
8i / 9i method for renaming tablespaces
1. Create a new tablespace with the same size as the original one. (You have to make sure you have enough room on disk to store a duplicate copy.) Depending on available space, this might require additional analysis of the original tablespace to determine if the new tablespace can be resized/reorged.
2. Move objects from the original tablespace to the new one. (This could take a while, depending on the size of the tablespace.)
3. Drop the original tablespace and datafile(s) after the objects are moved to the newly named tablespace.
10g method for renaming tablespaces
SQL> alter tablespace users rename to users3;
Oracle allows the renaming of tablespaces in 10g.
A simple alter tablespace command is all you need.
SQL>alter tablespace users rename to users3;
 Tablespace altered.
Elapsed: 00:00:00.05

SQL> alter tablespace users3 rename to users;
 Tablespace altered.
 Elapsed: 00:00:00.02

The rename tablespace feature has lessened the workload for TTS operations. There's no need to delete tablespaces on the target prior to an impdp of metadata.
Doesn't support the System or Sysaux tablespaces.
Supports default, temporary, and undo tablespaces (dynamically updates the spfile).



Dictionary View Improvements
8i / 9i method for monitoring blocking locks
UTLLOCKT.sql – requires catblock.sql to be run
OR
--Blocking Locks Info
SELECT
bs.username "Blocking User",
bs.username "DB User",
ws.username "Waiting User",
bs.sid "SID",
ws.sid "WSID",
bs.sql_address "address",
bs.sql_hash_value "Sql hash",
bs.program "Blocking App",
ws.program "Waiting App",
bs.machine "Blocking Machine",
ws.machine "Waiting Machine",
bs.osuser "Blocking OS User",
ws.osuser "Waiting OS User",
bs.serial# "Serial#",
DECODE(wk.TYPE,
'MR', 'Media Recovery', 'RT', 'Redo Thread', 'UN', 'USER Name',
'TX', 'Transaction', 'TM', 'DML', 'UL', 'PL/SQL USER LOCK',
'DX', 'Distributed Xaction', 'CF', 'Control FILE', 'IS', 'Instance State',
'FS', 'FILE SET', 'IR', 'Instance Recovery', 'ST', 'Disk SPACE Transaction',
...
FROM
v$lock hk, v$session bs,
...



10g method for monitoring blocking locks
New columns in v$session allow you to easily identify sessions that are blocking and waiting for other sessions. V$session also contains information from the v$session_wait view, so there is no need to join the two views.

--Display blocked session and their blocking session details.

SELECT sid, serial#, blocking_session_status, blocking_session
FROM v$session
WHERE blocking_session IS NOT NULL;

or
SELECT blocking_session_status, blocking_session
FROM v$session
WHERE sid = 444; /* Blocked Session */


8i/9i method for monitoring rollback activity
In addition to monitoring how long a SQL operation will take, 10g lets you monitor rollback activity from one dictionary view. Prior to 10g, you had to do the following:

Select b.sid, a.used_ublk
From v$transaction a, v$session b
Where a.addr = b.taddr and b.username = 'user-name';

V$transaction.used_ublk (used undo blocks) gives you the count of blocks still to be rolled back. Take counts every five minutes to figure out how long it will take to roll back a transaction. This is a manual process, leaving room for error.

10g method for monitoring rollback activity
Select time_remaining
from v$session_longops
where sid = 444; /* rollback session */
Or
select time_remaining,sofar,elapsed_seconds
from v$session_longops l, v$session s
where l.sid=s.sid and l.serial# = s.serial#
and s.module='long_proc';
SIDE NOTE:
You can also use dbms_application_info.set_module and set_client_info to track the progress of a procedure and query the results from v$session_longops. I used to use these packages to monitor procedures via v$session.


Flush Buffer Cache
8i/9i method for flushing the buffer cache
Prior to 10g, this wasn’t possible without shutting down and restarting the database or using the following undocumented commands:
SQL> alter session set events = 'immediate trace name flush_cache';
alter tablespace offline/online flushes the buffer cache of blocks relating to that tablespace (per Tom Kyte's article).
Side-Note - You were able to flush the shared pool:
SQL> ALTER SYSTEM FLUSH SHARED_POOL;

10g method for flushing the buffer cache

10g provides the ability to flush the buffer cache. This isn't suggested for a production environment, but it can be useful for QA/testing. The bigger the cache, the larger the LRU and dirty lists become, which results in longer search times. However, if the buffer cache is undersized, then running the following command can improve performance, take the burden off DBWR, and decrease free buffer waits.
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;



Automatic Statistics Collection
8i/9i method for gathering statistics
Required a scheduled process that called the DBMS_STATS or DBMS_UTILITY packages. For finer granularity, master-slave scripts may have been created that called dbms_stats/analyze commands based on the percentage of table/index changes.

for sid in test1 test2 test3
do
echo "connecting to $sid"
sqlplus /nolog << EOF
SET ECHO ON
SET SERVEROUT ON SIZE 1000000
CONNECT system/manager@$sid

BEGIN
FOR uname IN
( SELECT
username FROM dba_users
WHERE username NOT IN ('SYS', 'SYSTEM', 'OUTLN', 'DBSNMP')
)
LOOP
DBMS_OUTPUT.PUT_LINE (
'Analyzing USER :'|| uname.username);

DBMS_STATS.GATHER_SCHEMA_STATS(
uname.username, estimate_percent => 25, block_sample => TRUE,
method_opt => 'FOR ALL INDEXED COLUMNS SIZE 1',
degree => 2, granularity => 'ALL', cascade => TRUE,
options => 'GATHER');
END LOOP;
END;
/
exit
EOF
done



10g method for gathering statistics
Database statistics are automatically collected using the dbms_stats.gather_database_stats_job_proc procedure.
By default this job runs within a maintenance window between 10 p.m. and 6 a.m. on weeknights and all day on weekends.
DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC is an internal procedure which gathers statistics for tables with either empty or stale statistics, similar to the DBMS_STATS.GATHER_DATABASE_STATS procedure using the GATHER AUTO option. The main difference is that the internal job prioritizes the work such that tables most urgently requiring statistics updates are processed first.

You can also prevent statistics from being overwritten via dbms_stats.lock_schema_stats and unlock. In 10g you can also restore statistics to any point in time, in case the new statistics that were collected cause a sub optimal plan to be generated. You can restore statistics for a table, schema, fixed database objects, or the entire database.
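A sketch of the locking and restore calls mentioned above (the schema/table names and the one-day window are placeholders):

```sql
-- Freeze a schema's statistics so the nightly job can't overwrite them.
EXEC DBMS_STATS.LOCK_SCHEMA_STATS('SCOTT');
-- ...and release the lock when you want fresh statistics again.
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('SCOTT');

-- If newly gathered statistics produced a bad plan, roll one table's
-- statistics back to what they were a day ago.
EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT', 'EMP', SYSTIMESTAMP - INTERVAL '1' DAY);
```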


Flashback Drop
8i/9i method for undoing a dropped table
Table level import from logical backup
Restore backup to another location and export table from backup database. Import dropped table from backup database to original database.
Point-in-time recovery prior to the dropped table
The first method is the fastest but still requires a fair amount of time, depending on the table size. The second method is an extension of the first if a logical backup wasn't taken. The third method requires a database shutdown.
10g method for undoing a dropped table
SQL> flashback table emp to before drop;
Recycle Bin (sounds familiar): a logical representation of the dropped object. The dropped/renamed table still occupies space in its original tablespace, and you can still query it after it's dropped.

You can empty out the recycle bin by purging the objects.
select object_name, original_name, type
from user_recyclebin;
show recyclebin;
purge table mm$$55777$table$1;
You can permanently drop a table without writing to the recycle bin.
drop table emp purge;

Has a few quirks:
Doesn't restore foreign key constraints or materialized views.
Restores indexes, constraints, and triggers with its Klingon-style names - mm$$55777$table$1
(requires a rename of triggers and constraints).
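Since a new table with the original name may already exist by the time you restore, 10g also lets you rename on the way out of the recycle bin (a sketch; emp_restored is a made-up name):

```sql
-- Restore the dropped table under a new name to avoid a name collision.
FLASHBACK TABLE emp TO BEFORE DROP RENAME TO emp_restored;

-- The dependent indexes/triggers keep their system-generated names,
-- so rename those by hand afterwards, e.g.:
-- ALTER INDEX "mm$$55777$index$1" RENAME TO emp_restored_pk;
```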



Flashback Table
8i method for restoring data
Point-in-time recovery from backupset
Logical import from export
Restore database to new location and import or direct path insert into source table via dblink.

9i method for restoring data
INSERT INTO emp
(SELECT *
FROM emp AS OF TIMESTAMP
TO_TIMESTAMP('16-Sep-04 1:00:00','DD-MON-YY HH24:MI:SS')
MINUS
SELECT * FROM emp
);
Side-Note – This might not work if DDL operations are performed or constraints are altered.

10g method for restoring data
Flashback table emp to TIMESTAMP
TO_TIMESTAMP('16-Sep-04 1:00:00','DD-MON-YY HH24:MI:SS');

Make sure you enable row movement prior to the restore.
SQL> alter table emp enable row movement;

The before-image data is stored in your undo tablespace. Make sure you allocate enough space. Also make sure you set a large undo_retention if you decide not to let Oracle automatically manage it. Setting undo_retention=0 tells Oracle to automatically manage this parameter.
I’m off on a tangent again, but it’s a good one.
Oracle will prevent you from getting those annoying ORA-01555 snapshot too old errors if you include "retention guarantee" in your create or alter tablespace statements. This does come at a cost if you're running a transaction that needs undo space. You can disable it with "alter tablespace <undo_tablespace> retention noguarantee;".



Flashback Transaction Query
8i/9i method for generating SQL undo statements
Log Miner (good luck parsing through those logs).

10g method for generating SQL undo statements
SELECT undo_sql
FROM flashback_transaction_query
WHERE table_owner='SCOTT'
AND table_name='EMP'
AND start_scn between 21553 and 44933;

(You can also use timestamp.)

Flashback Transaction Query provides the ability to generate the SQL statements for undoing DML.



Diamonds in the rough
Additional 10g features worth mentioning
Drop database command (includes datafiles, control files, archive logs, backups, and spfile).
RMAN> drop database including backups;
Freeing up unused space in tables/indexes and adjusting the high-water mark. Also requires "alter table emp enable row movement;"
SQL> alter table emp shrink space cascade;
(Add the compact keyword to defer the high-water mark adjustment.)
utl_mail (no need to reference the utl_smtp protocol; it's built in)
utl_compress (compression of binary data - BLOBs and RAW. Similar to gzip.)
Support of regular expressions (Unix-style regular expressions in SQL and PL/SQL)
Default temporary and user tablespaces
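As an example of the regular-expression support, the new REGEXP functions work directly in SQL (a small sketch against the familiar emp table):

```sql
-- Find employees whose names start with a vowel.
SELECT ename
FROM   emp
WHERE  REGEXP_LIKE(ename, '^[AEIOU]');

-- Collapse runs of whitespace into a single space.
SELECT REGEXP_REPLACE('too   many    spaces', '[[:space:]]+', ' ')
FROM   dual;
```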


Putting it all together
10g has made major strides in preventing DBAs and developers from making mountains out of mole hills. These gains have led to more efficient code and better database administration. The underlying result is savings in cost and time for your organization.





Note: The 10g Data Pump answers to the items marked N/A above:
Suspending exports and imports, and restarting failed jobs at the point of failure:
Ctrl+C and stop_job, start_job, parallel, status, etc.
Controlling the number of threads/processes:
Ctrl+C and the parallel keyword
Direct mode imports: automatic
Monitoring exports and imports:
DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, DATAPUMP_PATHS -> joined with v$session
Importing and exporting data via PL/SQL:
DBMS_DATAPUMP
Remapping tablespaces and datafiles:
Allows you to map objects to tablespaces other than the ones the objects were originally assigned to. Remap_datafile, remap_tablespace
Exports aren't too bad, but imports are like watching paint dry. This will alleviate a lot of the pain in performing those nasty imports. I might even abandon my TTS solution and revisit exports and imports... a.k.a. Data Pump (export and import on steroids).
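The PL/SQL route via DBMS_DATAPUMP looks roughly like this (a hedged sketch of a schema-mode export; the dump file, directory object, and schema name are placeholders):

```sql
DECLARE
  h NUMBER;
BEGIN
  -- Define a schema-mode export job.
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(h, 'scott_exp.dmp', 'PUMP_DIR');
  -- Restrict the job to the SCOTT schema.
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  -- Detach; the job keeps running in the background and can be re-attached.
  DBMS_DATAPUMP.DETACH(h);
END;
/
```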

Exclude and Include keywords are great when you want to export or import a large percentage of tables. Using the exclude keyword via a par file requires less coding and leaves less room for error. Previously there wasn't a way to export/import just procedures and functions; you needed to finagle your export/import scripts or recreate the procedures from the DDL.

Ability to suspend and resume an export/import job at will
Ability to restart failed jobs from the point of failure
Support for direct mode imports.
It’s a no-brainer

Pre configuration steps
===================
create directory pump_dir as '/……..';
grant read on directory pump_dir to scott;
Export Command
===============
oracle@testserver-test1>expdp hh/xxxxx dumpfile=pump_dir:test_tbs_full_db.dmp logfile=pump_dir:test_tbs_full_db.log
Import Command
===============
oracle@testserver-test2>impdp hh/xxxxx dumpfile=pump_dir:test_tbs_full_db.dmp schemas=devuser remap_tablespace=system:users

Element K has some excellent examples.
In the past it was up to the DBA to study SQL and explain plans and identify inefficient execution plans. We're not out of the woods yet; however, SQL Tuning Advisor lends a hand in troubleshooting missing indexes, missing statistics, poor joins, etc. With a little bit of PL/SQL and shell programming, scripts can be written against the dbms_sqltune package to automate the mundane task of identifying inefficient SQL and missing indexes.

This can also be done via RMAN - no need to manually restore datafiles; use the restore database command in a script:

Shutdown database
Startup mount

Run {
set until time 'Dec 16 2004 12:25:00';
Allocate channel D1 Type disk;
Restore Database;
Recover Database;
SQL 'alter database open resetlogs';
}
Reset Database;

Excellent for QA environments/testing. Provides an easy/quick way to restore base-line data for repetitive testing.
Nice high-availability solution. Incurs additional I/O - a new background process (RVWR) writes flashback data to the flashback logs.
Recovery time is critical for a 24/7 business.
This feature has aided in cutting down the MTTR for logical/user database errors.
This is VERY important to your organization in ensuring that database recoveries are seamless.

Great feature for database environments that require application code testing with the ability to quickly restore data for repetitive testing. Restoring data to point in time can be a daunting task, requiring several steps whether you use alter database begin backup with “split mirror” approach or RMAN solutions.

Nice replacement for incomplete recoveries/point-in-time for lost data.
Restrictions:
No restore of the database control file.
Previous tablespace has been dropped
Data file has been shrunk
Reset logs command was used

Note: - Can be used with SQL*Plus or RMAN
==========================
Shutdown database
Mount database
Alter database archivelog;

Configure database for flashback database
alter system set db_recovery_file_dest='////' scope=both;
alter system set db_recovery_file_dest_size=2335825920 scope=both;
alter system set db_flashback_retention_target=1400 scope=both;

Alter database flashback on; (verify by querying v$database)
Alter database open;
Select current_scn from v$database
Modify some tables (drop, CTAS, insert, and update)
Shutdown database
Startup mount

Flashback database to SCN or timestamp
flashback database to timestamp to_timestamp('2004-12-16 16:10:00', 'YYYY-MM-DD HH24:MI:SS');
flashback database to scn 2034455;
Alter database open read only
View changes and flash forward if you’d like
Alter database open resetlogs

Backupset Compression
===============
The AS COMPRESSED BACKUPSET option of the BACKUP command allows RMAN to perform binary compression of backupsets. The resulting backupsets do not need to be uncompressed during recovery. It is most useful in the following circumstances: You are performing disk-based backup with limited disk space.
You are performing backups across a network where network bandwidth is limiting.
You are performing backups to tape, CD or DVD where hardware compression is not available.

The following examples assume that some persistent parameters are configured in a similar manner to those listed below:
rman nocatalog
connect target;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backups/MYSID/%d_DB_%u_%s_%p';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
Backup database plus archivelog;

Use the vendor's MML compression over RMAN's compression. Don't mix and match.
10g RMAN also backs up the SPFILE (9i included only the control file) and adds the ability to drop the database.
If you are backing up to tape and your tape device performs its own compression, you should not use both RMAN backupset compression and the media manager vendor's compression. In most instances you will get better results using the media manager's compression.
Oracle has developed many new features for the Database Server. Unfortunately, several of the lesser-known 10g features are overshadowed by the popular/commercialized ones. Unless you dig below the surface, many of them are unknown and underutilized. It's my goal to expose these features and describe the reasons why a DBA can't live without them.

Renaming a tablespace saves time when running TTS. Now you don't have to delete the tablespaces on your target before you RCP/FTP/SCP the data files from the source. Space permitting, you can roll back to your previous tablespaces.
Another useful purpose for rename tablespace is when you're exporting objects that belong to a tablespace that doesn't exist on the target. This normally requires changes to the objects/storage options and creation of new tablespaces. Now you only need to rename the tablespace on the source prior to running your export, and rename it back to the original name afterward. Then again, you can also use the Data Pump remap_tablespace keyword to accomplish this task. Another 10g feature.
select property_value from database_properties where property_name = 'DEFAULT_PERMANENT_TABLESPACE';
returns USERS. After the following statement is run:
alter tablespace users rename to user_data;
the property reflects the new name. 10g lets you set a default permanent tablespace such as USERS, instead of the System tablespace, for permanent objects.
Blocking locks can bring application processing to a standstill. For example, if a long-running transaction takes a shared mode lock on a key application table, then all updates to that table must wait.
There are numerous ways of attempting to diagnose blocking lock situations, normally with the intention of killing the offending session.
Blocking locks are almost always TX (transaction) locks or TM (table) locks . When a session waits on a TX lock, it is waiting for that transaction to either commit or roll back. The reason for waiting is that the transaction has modified a data block, and the waiting session needs to modify the same part of that block. In such cases, the row wait columns of V$SESSION can be useful in identifying the database object, file, and block numbers concerned, and even the row number in the case of row locks. V$LOCKED_OBJECT can then be used to obtain session information for the sessions holding DML locks on the crucial database object. This is based on the fact that sessions with blocking TX enqueue locks always hold a DML lock as well, unless DML locks have been disabled.
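The row wait columns and V$LOCKED_OBJECT lookup described above can be sketched as two queries (the sid value and the :obj_id bind are placeholders):

```sql
-- Which object/file/block/row is the blocked session waiting on?
SELECT row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
FROM   v$session
WHERE  sid = 444;  -- the waiting session

-- Who holds DML locks on that object?
SELECT session_id, oracle_username, locked_mode
FROM   v$locked_object
WHERE  object_id = :obj_id;  -- row_wait_obj# from the query above
```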

--Blocking Lock Script 1
SELECT
bs.username "Blocking User",
bs.username "DB User",
ws.username "Waiting User",
bs.sid "SID",
ws.sid "WSID",
bs.sql_address "address",
bs.sql_hash_value "Sql hash",
bs.program "Blocking App",
ws.program "Waiting App",
bs.machine "Blocking Machine",
ws.machine "Waiting Machine",
bs.osuser "Blocking OS User",
ws.osuser "Waiting OS User",
bs.serial# "Serial#",
DECODE(wk.TYPE,
'MR', 'Media Recovery', 'RT', 'Redo Thread', 'UN', 'USER Name',
'TX', 'Transaction', 'TM', 'DML', 'UL', 'PL/SQL USER LOCK',
'DX', 'Distributed Xaction', 'CF', 'Control FILE', 'IS', 'Instance State',
'FS', 'FILE SET', 'IR', 'Instance Recovery', 'ST', 'Disk SPACE Transaction',
'TS', 'Temp Segment', 'IV', 'Library Cache Invalidation',
'LS', 'LOG START OR Switch', 'RW', 'ROW Wait','SQ', 'Sequence Number',
'TE', 'Extend TABLE', 'TT', 'Temp TABLE', wk.TYPE) lock_type,
DECODE(hk.lmode, 0, 'None', 1, 'NULL', 2, 'ROW-S (SS)', 3, 'ROW-X (SX)',
4, 'SHARE', 5, 'S/ROW-X (SSX)', 6, 'EXCLUSIVE', TO_CHAR(hk.lmode)) mode_held,
DECODE(wk.request, 0, 'None', 1, 'NULL', 2, 'ROW-S (SS)', 3, 'ROW-X (SX)',
4, 'SHARE', 5, 'S/ROW-X (SSX)', 6, 'EXCLUSIVE', TO_CHAR(wk.request)) mode_requested,
TO_CHAR(hk.id1) lock_id1,
TO_CHAR(hk.id2) lock_id2
FROM
v$lock hk, v$session bs,
v$lock wk, v$session ws
WHERE
hk.block = 1
AND hk.lmode != 0
AND hk.lmode != 1
AND wk.request != 0
AND wk.TYPE (+) = hk.TYPE
AND wk.id1 (+) = hk.id1
AND wk.id2 (+) = hk.id2
AND hk.sid = bs.sid(+)
AND wk.sid = ws.sid(+)
AND bs.username <> 'PERFSTAT'
ORDER BY 1;

--Blocking Lock Script 1
column username format a15
column program format a25
column command format a25
column sid format 999
column type format a4
column blocker format a3
column waiter format a3

select a.sid, a.username, a.osuser,
decode(a.COMMAND, 1, 'CREATE TABLE',
2, 'INSERT',
3, 'SELECT',

4, 'CREATE CLUSTER',
5, 'ALTER CLUSTER',
6, 'UPDATE',
7, 'DELETE',
8, 'DROP CLUSTER',
9, 'CREATE INDEX',
10, 'DROP INDEX',
11, 'ALTER INDEX',
12, 'DROP TABLE',
13, 'CREATE SEQUENCE',
14, 'ALTER SEQUENCE',
15, 'ALTER TABLE',
16, 'DROP SEQUENCE',
17, 'GRANT',
18, 'REVOKE',
19, 'CREATE SYNONYM',
20, 'DROP SYNONYM',
21, 'CREATE VIEW',
22, 'DROP VIEW',
23, 'VALIDATE INDEX',
24, 'CREATE PROCEDURE',
25, 'ALTER PROCEDURE',
26, 'LOCK TABLE',
27, 'NO OPERATION',
28, 'RENAME',
29, 'COMMENT',
30, 'AUDIT',
31, 'NOAUDIT', 32, 'CREATE DATABASE LINK',
33, 'DROP DATABASE LINK',
34, 'CREATE DATABASE',
35, 'ALTER DATABASE',
36, 'CREATE ROLLBACK SEGMENT',
37, 'ALTER ROLLBACK SEGMENT',
38, 'DROP ROLLBACK SEGMENT',
39, 'CREATE TABLESPACE',
40, 'ALTER TABLESPACE',
41, 'DROP TABLESPACE',
42, 'ALTER SESSION',
43, 'ALTER USER',
44, 'COMMIT',
45, 'ROLLBACK',
46, 'SAVEPOINT',
47, 'PL/SQL EXECUTE',
48, 'SET TRANSACTION',
49, 'ALTER SYSTEM SWITCH LOG',
50, 'EXPLAIN',
51, 'CREATE USER',
52, 'CREATE ROLE',
53, 'DROP USER',
54, 'DROP ROLE',
55, 'SET ROLE',
56, 'CREATE SCHEMA',
57, 'CREATE CONTROL FILE',
58, 'ALTER TRACING',
59, 'CREATE TRIGGER',
60, 'ALTER TRIGGER',
61, 'DROP TRIGGER',
62, 'ANALYZE TABLE',
63, 'ANALYZE INDEX',
64, 'ANALYZE CLUSTER',
65, 'CREATE PROFILE',
66, 'DROP PROFILE',
67, 'ALTER PROFILE',
68, 'DROP PROCEDURE',
69, 'DROP PROCEDURE',
70, 'ALTER RESOURCE COST',
71, 'CREATE SNAPSHOT LOG',
72, 'ALTER SNAPSHOT LOG',
73, 'DROP SNAPSHOT LOG',
74, 'CREATE SNAPSHOT',
75, 'ALTER SNAPSHOT',
76, 'DROP SNAPSHOT',
79, 'ALTER ROLE',
85, 'TRUNCATE TABLE',
86, 'TRUNCATE CLUSTER',
88, 'ALTER VIEW',
91, 'CREATE FUNCTION',
92, 'ALTER FUNCTION',
93, 'DROP FUNCTION',
94, 'CREATE PACKAGE',
95, 'ALTER PACKAGE',
96, 'DROP PACKAGE',
97, 'CREATE PACKAGE BODY',
98, 'ALTER PACKAGE BODY',
99, 'DROP PACKAGE BODY',
'UNKNOWN') command,
b.*
from v$session a, (select sid,
decode(b.block, 0, 'no', 'yes') blocker,
decode(b.request, 0, 'no', 'yes') waiter, b.type, b.ctime
from v$lock b
where
request>0 or block>0 ) b
where a.sid=b.sid;



v$session_longops
Use v$session_longops when you have a long-running PL/SQL procedure and want feedback on how far it has progressed. If the following procedure is run, it will report its progress in v$session_longops. The procedure also sets the module attribute in v$session, which makes it possible to find the sid and serial# of the session.
create table f(g number);

create or replace procedure long_proc as
rindex pls_integer := dbms_application_info.set_session_longops_nohint;
slno pls_integer;
-- Name of task
op_name varchar2(64) := 'long_proc';
target pls_integer := 0; -- ie. The object being worked on
context pls_integer; -- Any info
sofar number; -- how far proceeded
totalwork number := 1000000; -- finished when sofar=totalwork
-- desc of target
target_desc varchar2(32) := 'A long running procedure';
units varchar2(32) := 'inserts'; -- unit of sofar and totalwork
begin
dbms_application_info.set_module('long_proc',null);
dbms_application_info.set_session_longops (
rindex,
slno);
for sofar in 0..totalwork loop
insert into f values (sofar);
if mod(sofar,1000) = 0 then
dbms_application_info.set_session_longops (
rindex,
slno,
op_name,
target,
context,
sofar,
totalwork,
target_desc,
units);
end if;
end loop;
end long_proc;
/

If the procedure long_proc is run, you can issue the following query to get feedback on its progress:
select time_remaining,sofar,elapsed_seconds
from v$session_longops l, v$session s
where l.sid=s.sid and l.serial# = s.serial# and s.module='long_proc';

It is possible that DBWR is very active because the buffer cache is too small. Investigate whether this is a probable cause by checking whether the buffer cache hit ratio is low, and use the V$DB_CACHE_ADVICE view to determine whether a larger cache size would be advantageous. An undersized cache also shows up as an increase in physical I/O.
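The advice view mentioned above can be queried directly. The following is a minimal sketch against V$DB_CACHE_ADVICE; it lists, for each candidate cache size, the estimated number of physical reads (the db_cache_advice parameter must be ON for the view to be populated):

```sql
-- Estimated physical reads at each candidate buffer cache size.
-- Requires the db_cache_advice parameter to be set to ON.
SELECT size_for_estimate          AS cache_mb,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name          = 'DEFAULT'
AND    advice_status = 'ON'
ORDER  BY size_for_estimate;
```

A read factor well below 1 at a larger size suggests the current cache is undersized.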


Why is this important? It removes the need to schedule analyze routines after changes to objects, such as an index rebuild. The compute statistics clause of the create index and alter index commands is now obsolete.

You can also prevent statistics from being overwritten via dbms_stats.lock_schema_stats and unlock_schema_stats. In 10g you can restore statistics to any point in time, in case newly collected statistics cause a sub-optimal plan to be generated. You can restore statistics for a table, a schema, fixed database objects, or the entire database.
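As a sketch of these calls (the schema and table names are placeholders, and the one-hour restore point is arbitrary):

```sql
-- Lock a schema's statistics so they cannot be overwritten
EXEC DBMS_STATS.lock_schema_stats('MY_SCHEMA');

-- ... and release the lock later
EXEC DBMS_STATS.unlock_schema_stats('MY_SCHEMA');

-- Restore a single table's statistics to their values of an hour ago
BEGIN
  DBMS_STATS.restore_table_stats(
    ownname         => 'MY_SCHEMA',
    tabname         => 'MY_TABLE',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' HOUR);
END;
/
```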


Automatic Optimizer Statistics Collection
By default Oracle 10g automatically gathers optimizer statistics using a scheduled job called GATHER_STATS_JOB. This job runs within a maintenance window between 10 P.M. and 6 A.M. on weeknights and all day on weekends. The job calls the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC internal procedure, which gathers statistics for tables with either empty or stale statistics, similar to the DBMS_STATS.GATHER_DATABASE_STATS procedure with the GATHER AUTO option. The main difference is that the internal job prioritizes the work so that the tables most urgently requiring statistics updates are processed first. In some cases automatically gathering statistics can cause problems: highly volatile tables and load tables may have their statistics gathered when an unrepresentative number of rows is present. These situations can be avoided by using one of two methods:
1. Delete and lock the current statistics to prevent DBMS_STATS from gathering new ones. If the OPTIMIZER_DYNAMIC_SAMPLING parameter is set to 2 (the default) or higher, the necessary statistics will be gathered as part of the query optimization stage (see Dynamic Sampling):

BEGIN
  DBMS_STATS.delete_table_stats('MY_SCHEMA','LOAD_TABLE');
  DBMS_STATS.lock_table_stats('MY_SCHEMA','LOAD_TABLE');
END;
/
2. Gather and then lock the statistics at a time when the table contains representative data:

BEGIN
  DBMS_STATS.gather_table_stats('MY_SCHEMA','LOAD_TABLE');
  DBMS_STATS.lock_table_stats('MY_SCHEMA','LOAD_TABLE');
END;
/

System statistics and statistics for fixed objects, such as dynamic performance tables, are not gathered automatically.
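If neither method fits and you prefer to keep gathering statistics manually, the automatic job itself can be disabled. A minimal sketch using DBMS_SCHEDULER:

```sql
-- Disable the 10g automatic statistics collection job
BEGIN
  DBMS_SCHEDULER.disable('GATHER_STATS_JOB');
END;
/

-- Confirm its state
SELECT job_name, enabled
FROM   dba_scheduler_jobs
WHERE  job_name = 'GATHER_STATS_JOB';
```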

Prior to Flashback Table, you had to use a MINUS set operator to restore previously deleted data, which required more code and left room for error. Now all you need to do is issue "alter table ... enable row movement" and then flash the table back.
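As a minimal sketch of the 10g approach (the table name and the 15-minute window are placeholders):

```sql
-- Row movement must be enabled before a table can be flashed back
ALTER TABLE my_table ENABLE ROW MOVEMENT;

-- Rewind the table to its state 15 minutes ago
FLASHBACK TABLE my_table TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
```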

UTL_MAIL
The UTL_MAIL package provides a simple API to allow email to be sent from PL/SQL. In prior versions this was possible using the UTL_SMTP package, but this required knowledge of the SMTP protocol.

The package is loaded by running the following scripts:

CONN sys/password AS SYSDBA
@$ORACLE_HOME/rdbms/admin/utlmail.sql
@$ORACLE_HOME/rdbms/admin/prvtmail.plb
In addition, the SMTP_OUT_SERVER parameter must be set to identify the SMTP server:

CONN sys/password AS SYSDBA
ALTER SYSTEM SET smtp_out_server='smtp.domain.com' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
With the configuration complete, we can now send mail using:

BEGIN
UTL_MAIL.send(sender => 'me@domain.com',
recipients => 'person1@domain.com,person2@domain.com',
cc => 'person3@domain.com',
bcc => 'myboss@domain.com',
subject => 'UTL_MAIL Test', message => 'If you get this message it worked!');
END;
/

