Oracle 10053 trace viewer

There are two trace files that I use when tuning. Cary Millsap (@CaryMillsap) and Method R (@MethodR) have great tools for analyzing the 10046 trace file: the Method R Profiler and my personal favorite, mrskew (http://method-r.com/store).

But the 10053 trace file is cumbersome to read and interpret. Jonathan Lewis (@JLOracle) posted this 10053 trace file viewer on oaktable.net: http://www.oaktable.net/contribute/10053-viewer.

If you are an Oracle DBA, developer, or performance engineer, these should be in your toolbox.

What about NULL?

What is NULL? Is NULL a space? Is NULL an empty string?

NULL is unknown. If NULL is unknown, then you cannot do anything with it. You cannot perform any operation on NULL:

Y := 5 + UNKNOWN, then Y = UNKNOWN.

You cannot compare NULL to anything:

IF UNKNOWN<5 THEN … END IF.

Well, that will never evaluate to true, because we just don't know.

This is part of what makes NULL so difficult to work with. NULL complicates logic. So just say no to NULL.
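A quick way to see this three-valued logic in action is to run a few comparisons against DUAL in any Oracle session (a minimal sketch):

```sql
-- NULL never compares equal to anything, not even to another NULL
SELECT 'equal' FROM dual WHERE NULL = NULL;    -- no rows returned
SELECT 'diff'  FROM dual WHERE NULL <> NULL;   -- no rows returned

-- Arithmetic with NULL yields NULL
SELECT 5 + NULL FROM dual;                     -- returns NULL

-- The only reliable tests are IS NULL / IS NOT NULL
SELECT 'found' FROM dual WHERE NULL IS NULL;   -- one row returned
```

Both comparisons evaluate to UNKNOWN, so neither WHERE clause is satisfied; only the IS NULL test returns a row.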

Beyond the basic select statement: Using the WITH clause with inline subqueries

We looked at the WITH clause with embedded PL/SQL; now let's look at another use of the WITH clause. This time we are going to use it with embedded subqueries.

Query A searches a stocks table and returns the median closing price for a quarter, the standard deviation, and plus and minus one standard deviation.

The top two arrows point to the subqueries and the bottom arrow points to where the subqueries are referenced. The subqueries are named SDEV and MED; those names can then be referenced in the FROM clause.

image

Query B shows the normal subquery we are accustomed to seeing, and it returns the same values as Query A.

image

Query A and Query B are equivalent.
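Since the queries themselves appear only as screenshots, here is a sketch of the Query A shape. The stocks table and the close_price and qtr column names are placeholders, not the originals:

```sql
WITH
  sdev AS (
    SELECT STDDEV(close_price) AS sd
    FROM   stocks
    WHERE  qtr = 1
  ),
  med AS (
    SELECT MEDIAN(close_price) AS med_close
    FROM   stocks
    WHERE  qtr = 1
  )
SELECT m.med_close,
       s.sd,
       m.med_close + s.sd AS plus_one_sd,
       m.med_close - s.sd AS minus_one_sd
FROM   sdev s,
       med m;
```

The named subqueries SDEV and MED are defined once at the top and then referenced in the FROM clause, which is what the arrows in the screenshot point at.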

So, what execution plan did the CBO come up with for Query A?

image

And what plan did the CBO come up with for Query B?

image

I’ll dig into how the CBO came up with these execution plans later.

Beyond the basic select statement: Using the WITH clause with inline PL/SQL

For years I used the basic select statement:

SELECT A.C1, A.C2, B.C1, B.C2, <…> FROM TAB1 A, TAB2 B WHERE A.C1 = B.C1;

Sometimes this basic select statement would not answer my question. You can make a select statement quite complicated, but then it gets unreadable and difficult to maintain. The WITH clause helps out quite a bit.

Query A:

WITH
FUNCTION monthly_amt(pamt number)
RETURN number IS
x number;
   BEGIN
     x := pamt*2;
     RETURN x;
   END;
SELECT pay22, monthly_amt(pay22)
  FROM policys;

Query B:

create or replace function
  monthly_amt (pamt number) return number is
  x number;
BEGIN
  x := pamt*2;
  return x;
END;
/
select pay22, monthly_amt(pay22)
from policys;

We are going to start this discussion with the assumption that Query A and Query B are equivalent, with caveats. The rule associated with using an inline function is that the inline function will always take precedence over a stored function.

Query A returns pay22 and the result of the inline function monthly_amt.

p1

Query B returns pay22 and the result of the stored function monthly_amt.

p2

But what if the stored function returns a different value than argument * 2?

First we will redefine the stored function monthly_amt to return argument * 10. When using WITH to include an inline function, the inline function will always take precedence over a stored function with the same name.

p3

Now we will call the stored function monthly_amt without any reference to an inline function. As expected, the stored function returns argument * 10.

p4
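The sequence shown in the screenshots above can be sketched as follows (the policys table and pay22 column come from the earlier example):

```sql
-- Redefine the stored function to return argument * 10
create or replace function monthly_amt (pamt number) return number is
begin
  return pamt * 10;
end;
/

-- With an inline function present, the inline version takes precedence
WITH
FUNCTION monthly_amt(pamt number) RETURN number IS
BEGIN
  RETURN pamt * 2;
END;
SELECT pay22, monthly_amt(pay22)  -- inline: pay22 * 2
  FROM policys;
/

-- Without an inline function, the stored version is used
select pay22, monthly_amt(pay22)  -- stored: pay22 * 10
from policys;
```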

How would we use this? PL/SQL supports overloading functions; however, here both the stored function and the inline function have the same name and the same arguments. I see using an inline function when: A) creating a stored function is not an option (I worked in one shop a while back where it was against policy to store PL/SQL in the database); B) you need to override a stored function; C) you need or want to read the PL/SQL being executed.

Exploring the CBO

Let's start with a simple experiment on what the CBO can do with query optimization. We are going to build on the structure as we go and see if we can help out the CBO with more information.

Test 1) we will start with two tables bigtable and owners and see what plan the CBO comes up with.

Test 2) we will analyze the tables and see what plan the CBO comes up with.

Test 3) we will add primary key and foreign key constraints and see what plan the CBO comes up with.

Test 4) we will add an index on the foreign key constraint and see what plan the CBO comes up with.

 

We know that these two queries are equivalent. So, can the CBO get from query A to query B?

Query A:

SELECT COUNT(*)
FROM OWNERS O,
     BIGTABLE BT
WHERE O.USERNAME = BT.OWNER
and o.username like 'RL%';

Query B:

SELECT COUNT(*)
FROM BIGTABLE BT
WHERE BT.OWNER LIKE 'RL%';

For the first pass we are going to use bigtable and owners with no statistics, no indexes, and no primary key or foreign key constraints.

drop table bigtable;
drop table owners;
create table bigtable as select * from dba_objects;
create table owners as select * from dba_users;

alter session set tracefile_identifier = owi_trace;
-- This line will generate a trace file of what the CBO did.
alter session set events '10053 trace name context forever, level 1';

 

 

SELECT COUNT(*)
FROM OWNERS O,
     BIGTABLE BT
WHERE O.USERNAME = BT.OWNER
and o.username like 'RL%';

 

What did the CBO come up with?

 

============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id | Operation           | Name     | Rows | Bytes | Cost | Time     |
---------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT    |          |      |       |  437 |          |
| 1  | SORT AGGREGATE      |          |    1 |    15 |      |          |
| 2  | HASH JOIN           |          |   62 |   930 |  437 | 00:00:06 |
| 3  | TABLE ACCESS FULL   | OWNERS   |    1 |     9 |    3 | 00:00:01 |
| 4  | TABLE ACCESS FULL   | BIGTABLE | 2398 |   14K |  434 | 00:00:06 |
---------------------------------------+-----------------------------------+

Predicate Information:
----------------------
2 - access("O"."USERNAME"="BT"."OWNER")
3 - filter("O"."USERNAME" LIKE 'RL%')
4 - filter("BT"."OWNER" LIKE 'RL%')

This is interesting: the CBO decided to apply two filters.

 

Well, this is not the best execution plan, so let's see what happens when we analyze the tables.

 

analyze table bigtable compute statistics;
analyze table owners compute statistics;

alter session set tracefile_identifier = owi_trace;
alter session set events '10053 trace name context forever, level 1';

 

 

SELECT COUNT(*)
FROM OWNERS O,
     BIGTABLE BT
WHERE O.USERNAME = BT.OWNER
and o.username like 'RL%';

============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id | Operation           | Name     | Rows | Bytes | Cost | Time     |
---------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT    |          |      |       |  437 |          |
| 1  | SORT AGGREGATE      |          |    1 |    13 |      |          |
| 2  | HASH JOIN           |          |   62 |   806 |  437 | 00:00:06 |
| 3  | TABLE ACCESS FULL   | OWNERS   |    1 |     8 |    3 | 00:00:01 |
| 4  | TABLE ACCESS FULL   | BIGTABLE | 2398 |   12K |  434 | 00:00:06 |
---------------------------------------+-----------------------------------+

Predicate Information:
----------------------
2 - access("O"."USERNAME"="BT"."OWNER")
3 - filter("O"."USERNAME" LIKE 'RL%')
4 - filter("BT"."OWNER" LIKE 'RL%')

 

That was not much help. Let's see what happens when we add primary key and foreign key constraints.

 

alter table bigtable add constraint bigtable_pk primary key (object_id);
alter table owners add constraint owners_pk primary key (username);
alter table bigtable add constraint bigtable_fk foreign key (owner) references owners(username);

alter session set tracefile_identifier = owi_trace;
alter session set events '10053 trace name context forever, level 1';

 

 

SELECT COUNT(*)
FROM OWNERS O,
     BIGTABLE BT
WHERE O.USERNAME = BT.OWNER
and o.username like 'RL%';

============
Plan Table
============
--------------------------------------+-----------------------------------+
| Id | Operation          | Name     | Rows | Bytes | Cost | Time     |
--------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT   |          |      |       |  434 |          |
| 1  | SORT AGGREGATE     |          |    1 |     5 |      |          |
| 2  | TABLE ACCESS FULL  | BIGTABLE | 2398 |   12K |  434 | 00:00:06 |
--------------------------------------+-----------------------------------+

Predicate Information:
----------------------
2 - filter(("BT"."OWNER" LIKE 'RL%' AND "BT"."OWNER" IS NOT NULL))

Well, this is better. By adding the foreign key constraint, the CBO eliminated the owners table. But we can do better. Let's add an index on bigtable.owner and see what happens.

 

create index bigtable_idx1 on bigtable(owner);
analyze index bigtable_idx1 compute statistics;

alter session set tracefile_identifier = owi_trace;
alter session set events '10053 trace name context forever, level 1';

 

 

SELECT COUNT(*)
FROM OWNERS O,
     BIGTABLE BT
WHERE O.USERNAME = BT.OWNER
and o.username like 'RL%';

============
Plan Table
============
------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 7 | |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
| 2 | INDEX RANGE SCAN | BIGTABLE_IDX1| 2398 | 12K | 7 | 00:00:01 |
------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("BT"."OWNER" LIKE 'RL%')
2 - filter(("BT"."OWNER" LIKE 'RL%' AND "BT"."OWNER" IS NOT NULL))

 

Now let's see what the CBO did to get here.

SQL:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "OPS$ORACLE"."OWNERS" "O","OPS$ORACLE"."BIGTABLE" "BT" WHERE "BT"."OWNER"="O"."USERNAME" AND "O"."USERNAME" LIKE 'RL%'
JE: eliminate table: OWNERS (O)
JE: Replaced column: O.USERNAME with column: BT.OWNER
Registered qb: SEL$84E079A4 0xadec2578 (JOIN REMOVED FROM QUERY BLOCK SEL$1; SEL$1; "O"@"SEL$1")
**************************
Predicate Move-Around (PM)
**************************
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$84E079A4 (#0) unchanged
FPD: Considering simple filter push in query block SEL$84E079A4 (#0)
"BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'
try to generate transitive predicate from check constraints for query block SEL$84E079A4 (#0)
finally: "BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "OPS$ORACLE"."BIGTABLE" "BT" WHERE "BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'

Well, that's special. The CBO came up with a query that is close to Query B.

 

Query B:

SELECT COUNT(*)
FROM BIGTABLE BT
WHERE BT.OWNER LIKE 'RL%';

Query C:

SELECT COUNT(*) "COUNT(*)" FROM "OPS$ORACLE"."BIGTABLE" "BT" WHERE "BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'
The only real difference between query B and query C is that the CBO added bt.owner is not null. This is curious, because if we are looking for rows where owner like 'RL%', then by definition owner is not null.
What did we learn?

1) When we only have tables with no supporting structures the CBO selected full table scans on the two tables then did a hash join.

2) When we analyzed the tables to get good statistics, the CBO still did full table scans and a hash join.

3) When we added in the primary key and foreign key constraints, the CBO knew that if bigtable.owner existed, then there was a matching row in owners.username. Because of this, the CBO was able to eliminate the join condition bt.owner = o.username and eliminate the owners table from the query. This got us down to a single full table scan on bigtable.

4) When we added the index on bigtable.owner, the CBO decided an index range scan on bigtable_idx1 would return the correct results, because all the information needed to satisfy the query is in the index.

 

Something about the CBO was bothering me

A query similar to this was getting a full table scan. We know the NDV (number of distinct values) of archived, deleted, and flag is 1. My initial thought was that this was why we got a full table scan. I was wrong: it was the predicate owner like 'S%' that triggered the full table scan.

image

As you see this first run did a full table scan.

image

Now look at what happens when we change owner like 'S%' to owner = 'SYSTEM'. We now get an index range scan. Much better.

image

1 - filter("ARCHIVED"=0 AND "DELETED"=0 AND "FLAG"='1')

2 - access("OWNER"='SYSTEM')
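The queries behind those two plans appear only as screenshots. Taking the table and column names from the predicate information above, they would have looked roughly like this sketch against a hypothetical table:

```sql
-- Pattern match on owner: not selective enough, so a full table scan
SELECT COUNT(*)
FROM   some_table          -- hypothetical table name
WHERE  owner LIKE 'S%'
AND    archived = 0
AND    deleted = 0
AND    flag = '1';

-- Equality on owner: the index on owner can be range-scanned
SELECT COUNT(*)
FROM   some_table
WHERE  owner = 'SYSTEM'
AND    archived = 0
AND    deleted = 0
AND    flag = '1';
```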

Foreign Keys and indexes

 

In this test case we are going to use two tables with a foreign key.  The question is, if we create a foreign key, do we need to add an index on that foreign key column?

OPS$ORACLE@VETDEV > create table bigtable as select * from dba_objects;

Table created.

OPS$ORACLE@VETDEV > create table owners as select * from dba_users;

Table created.

OPS$ORACLE@VETDEV > analyze table bigtable compute statistics;

Table analyzed.

OPS$ORACLE@VETDEV > analyze table owners compute statistics;
Table analyzed.

OPS$ORACLE@VETDEV > alter table bigtable add primary key (object_id);

Table altered.

OPS$ORACLE@VETDEV > alter table owners add primary key (username);

Table altered.

OPS$ORACLE@VETDEV > alter table bigtable add constraint bigtable_fk1
  2* foreign key (owner) references owners(username);
Table altered.

We are going to write a query against both tables, joining them on the foreign key. Start by setting autotrace on with explain and statistics. In the following explain plan we are cutting out the statistics to focus on what jumps out at us. Even though we have a foreign key constraint, we still got a full table scan on both bigtable and owners.

TC1

Okay, let's try to limit the number of rows returned by adding a filter predicate.

set autotrace trace

select b.owner, b.object_type, o.account_status
from owners o
,   bigtable b
where o.username = b.owner
and b.owner = 'SUZANNE';

First, notice that we put the filter predicate on b.owner and not o.username. We know from the data distribution that we have multiple owners and unique usernames. Now, what does the explain plan look like?

image
I'm about to go off on a tangent, but bear with me. There is something interesting going on here. Look at the index unique scan against the username primary key and the filter predicate on bigtable.owner. What happened? We are going to use the 10053 trace to find out what the optimizer did.

alter session set events '10053';

set autotrace trace

select b.owner, b.object_type, o.account_status
from owners o
,   bigtable b
where o.username = b.owner
and b.owner = 'SUZANNE';

Let's find the trace file we just generated and dig through it. The trace file VETDEV_mmon_6474.trc will tell us just what the optimizer did.

[oracle@owirdb1 trace]$ ls -ltr
total 600
-rw-r—– 1 oracle oinstall 594496 Jan 16 07:00 alert_VETDEV.log
-rw-r—– 1 oracle oinstall    384 Jan 16 12:02 VETDEV_mmon_6474.trm
-rw-r—– 1 oracle oinstall   4111 Jan 16 12:02 VETDEV_mmon_6474.trc
[oracle@owirdb1 trace]$

There is a bunch of information in this file, and most of it is beyond the scope of what I wanted to discuss; what we are looking for is how the optimizer transformed our query.

**************************
Predicate Move-Around (PM)
**************************
PM:     PM bypassed: Outer query contains no views.
PM:     PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"O"."USERNAME"="B"."OWNER" AND "B"."OWNER"='SUZANNE'
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "O"."USERNAME"="B"."OWNER" AND "B"."OWNER"='SUZANNE' AND "O"."USERNAME"='SUZANNE'

Check this out: the optimizer rewrote the query, generating the transitive predicate o.username = 'SUZANNE' to reduce the number of rows returned.

OPS$ORACLE@VETDEV > select count(*) from owners where username = 'SUZANNE';

  COUNT(*)
----------
         1

OPS$ORACLE@VETDEV > select count(*) from bigtable where owner = 'SUZANNE';

  COUNT(*)
----------
       199

So it makes more sense to return one row from owners and use that for the filter, as opposed to filtering on owner in bigtable.

Well, what would happen if we put an index on the foreign key?

OPS$ORACLE@VETDEV > create index bigtable_idx1 on bigtable(owner);

Index created.

OPS$ORACLE@VETDEV > analyze index bigtable_idx1 compute statistics;

Index analyzed.

Now let's run the test again, this time without the filter predicate.
TC1
Same issue, but why? Let's look at the query and ask what is getting returned. There are no filter predicates, so every row is going to be returned. This is shown under the predicate information and in the rows column. Therefore, in this case, the full table scan with a hash join is the most efficient way to get the data.
What would happen if we add a filter predicate to the query?
TC1
Well, that's nice: we have an index range scan on bigtable_idx1 and an index unique scan on the owners primary key index.
So, what did we learn?

1) Not having an index on the foreign key pretty much guarantees a full table scan on the joined table.

2) Having an index does not guarantee the index will be used. If the database determines that a full table scan is cheaper, then that is what it's going to do.

3) A filter predicate reduced the total number of rows returned to the point where it was cheaper to do an index unique scan.

4) The optimizer will rewrite a query to improve performance.
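A handy follow-on check: the data dictionary can list foreign key columns that have no matching index. This is a simplified sketch that only handles single-column foreign keys:

```sql
-- Foreign key constraints whose column does not lead any index
SELECT c.table_name,
       cc.column_name,
       c.constraint_name
FROM   user_constraints  c,
       user_cons_columns cc
WHERE  cc.constraint_name = c.constraint_name
AND    c.constraint_type  = 'R'          -- 'R' = referential (foreign key)
AND    NOT EXISTS (
         SELECT 1
         FROM   user_ind_columns ic
         WHERE  ic.table_name      = cc.table_name
         AND    ic.column_name     = cc.column_name
         AND    ic.column_position = 1   -- FK column must lead the index
       );
```

Any row returned is a candidate for the full-table-scan behavior shown above.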

How do you succeed?

This does not have much to do with Oracle consulting; these are more some life lessons on how to succeed at work. I have learned many lessons in life, one of them being the freedom that comes from being independent and not having to depend on others.

1) If you want to do a lot of things badly, then try multitasking. The fact is, humans are not good at executing more than one task at a time; humans are good at switching from one task to another. Think about writing an email and talking to someone at the same time: you can't do both. There are a lot of issues with rapid task switching. I have called this context switching for many years. Context switching is moving from one line of thought to another, and every time you context switch, you have to change your line of thought. When multitasking, mistakes will increase and the time to complete a task will increase. In my business, Oraclewizard, Inc., I wear every hat: I'm accounts receivable, accounts payable, the receiving department, the sales department, and quite a few other departments. If I'm working on the company books and I think of something that needs to be done, I write it down in my todo book and stay with the current task until it's complete. Between tasks I review my todo book, re-prioritize, and start the next task.

"People can't multitask very well, and when people say they can, they're deluding themselves. The brain is very good at deluding itself." – neuroscientist Earl Miller

I have learned over the years that if I'm designing a database system, I should step away from the computer, sit down at a desk with templates and a lot of paper, and then start the design. When I'm attached to a computer, email and the internet are constantly trying to get my attention. The key is that when I'm doing something that requires all my attention, I remove all the distractions. There is no law that says you must answer that text, email, or phone call right now.

2) Always improve yourself.

Spend one hour a day on professional development. My morning always starts with studying for an hour. Note, not everything revolves around Oracle in my world. Yes, Oracle is a big part of my professional life; however, I am also a pilot, technical diver, and public speaker.

Attend at least one professional conference a year.  In my professional area, there are many opportunities to attend conferences. These are great ways to meet people in your arena and network.

If you really want to improve yourself, then volunteer to speak at an event.  A good friend of mine drilled into me, “Proper Preparation Prevents Poor Performance.”  Okay that is a cliché, but it’s true.

3) Your employer is your customer, and your employer's customer is your customer too. If you were the customer, how would you want to be treated? If you want to get ahead at work, provide good customer service to your employer.

4) You own your brand. What is your brand? Your brand is what people think of you. Way back when, your brand was limited to maybe a few hundred people. However, with the internet and mass communications, you can influence your brand worldwide.

One of the things I like about "my brand" is that when I'm asked to do something, I don't make excuses; I figure out how to do it. Case in point: our Connect Direct / Connect Enterprise expert left the contract where I was working, and a program manager asked me to take it over. Connect Direct / Connect Enterprise is more about secure data transmission than data storage and manipulation. I could have pushed back and told the program manager that it was not my specialty, but instead I accepted the challenge and made the program manager aware that there might be some rocky places along the way, but that I would give it my best shot.

You can also influence your brand by helping people when they have problems. People will remember it.

5) The success you enjoy will come from learning from your mistakes. To quote my fourteen-year-old niece: "OMG, you did that?" Let me tell you, I have made some doozies. There was the time I truncated SYS's tables and could not figure out why the database did not start back up. Then there was the time I ripped an employee of a customer a new one because her incompetence was having an impact on my doing my job. And there were a lot of other mistakes. But the key is that each time I made a mistake, I learned from it. You will make mistakes, and some will go down in history, but if you analyze each mistake and learn from it, you will climb another rung on the ladder of success.

6) Don't get overwhelmed by your good ideas. This still happens to me, but I have gotten better at writing down my "good ideas" and then letting them stew for a week or two before I decide whether they are worth starting. Frequently, after a week or so, what seemed like a good idea at the time turns out not to be such a good idea. The ones that are good ideas, I can take to the next level and figure out what it is going to take to get them off the ground.

7) Set goals and work on your goals daily.  This almost goes without saying; but I’m surprised by the number of people who don’t set goals.  Goals should be realistic and you should be able to measure the results.

Encrypt those backups

April 2005: Ameritrade loses a backup tape containing information on 200,000 customers.

February 2005: Bank of America loses backup tapes containing information on 1.2 million charge cards.

September 2011: SAIC loses backup tapes containing data on 4.9 million members of the military who sought medical treatment in the San Antonio area. The data contained names, Social Security numbers, phone numbers, and medical information. This data was not encrypted.

SAIC made the following statement: "Retrieving data on the tapes, which were stolen from a company employee's car, is not likely to happen because doing so requires knowledge of and access to specific hardware and software and knowledge of the system and data structures." Excuse me if this does not make me feel better. I can get on eBay to buy the hardware needed and download the software from any number of vendors to do the restore. Yes, if the backup was done with Oracle or DB2 or MS SQL Server, then you would need the software from the vendor. But what if this theft was targeted and the thief knew what they were after?

I can go on and on about backup tapes that are lost out of the back seat of an employee's car. And to be honest, I have transported tapes in my car too. However, when I reflect on transporting critical information in my car, I now get the heebie-jeebies. Now we use a bonded courier to transport backup tapes.

Backup tapes are also shipped to places like Iron Mountain. But let's face it: the people who handle your backup tapes are low-paid employees who could be influenced to look the other way. If someone really wants your backup tapes, there is a way for them to get them.

What are the options for encrypting backups?

  1. Use rman encryption.

  2. Encrypt the backup files on the OS.

For option 1, using RMAN to encrypt, there are a few choices: you can use a password to encrypt the backup, or you can use a wallet.

If the backup is being sent offsite, using a password to encrypt the backup may be your better option.

If the backup is being sent to a Disaster Recovery site to build a standby database, using the wallet may be the better option.
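For the wallet case, the RMAN side is mostly configuration. Assuming a keystore/wallet has already been created and opened for the database, it looks something like this sketch:

```sql
-- One-time configuration; requires a configured (ideally auto-login) wallet
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256';

-- Subsequent backups are then transparently encrypted
RMAN> backup as compressed backupset database;
```

With the wallet configured, no passphrase has to travel with the backup, which is why this fits the standby/DR scenario.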

Right now we are addressing sending a backup offsite, so let's walk through the process of building an encrypted backup using a password.

First find out what encryption algorithms are supported.

SQL> select ALGORITHM_NAME, ALGORITHM_DESCRIPTION
  2  from V$RMAN_ENCRYPTION_ALGORITHMS;

ALGORITHM_ ALGORITHM_DESCRIPTION
---------- ----------------------------------------------------------------
AES128     AES 128-bit key
AES192     AES 192-bit key
AES256     AES 256-bit key

SQL>

Of the algorithms available, AES256 is the strongest, so we are going to select AES256 for our encryption.

RMAN> set encryption algorithm 'aes256' identified by A_Passphrase_that_you_select;

executing command: SET encryption

using target database control file instead of recovery catalog

Using the set encryption algorithm command we did two things: we set the algorithm that will be used for the backup, and we set the passphrase that we will need to decrypt the backup.

Next we are going to run the backup like we would normally do.

RMAN> backup as compressed backupset database format '/home/oracle/backup/encrypted_with_password%u%d.backup';

Starting backup at 02-AUG-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/opt/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00001 name=/opt/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/opt/oracle/oradata/orcl/example01.dbf
input datafile file number=00004 name=/opt/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00006 name=/opt/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-AUG-12
channel ORA_DISK_1: finished piece 1 at 02-AUG-12
piece handle=/home/oracle/backup/encrypted_with_password0dnhl9n6ORCL.backup tag=TAG20120802T170333 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:35
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 02-AUG-12
channel ORA_DISK_1: finished piece 1 at 02-AUG-12
piece handle=/home/oracle/backup/encrypted_with_password0enhl9s1ORCL.backup tag=TAG20120802T170333 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 02-AUG-12

RMAN>

How do we decrypt the backup when we need to restore? It's this simple:

RMAN> set decryption identified by A_Passphrase_that_you_select;

executing command: SET decryption
using target database control file instead of recovery catalog

RMAN> restore database;

Starting restore at 02-AUG-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
skipping datafile 1; already restored to file /opt/oracle/oradata/orcl/system01.dbf
skipping datafile 2; already restored to file /opt/oracle/oradata/orcl/example01.dbf
skipping datafile 3; already restored to file /opt/oracle/oradata/orcl/sysaux01.dbf
skipping datafile 4; already restored to file /opt/oracle/oradata/orcl/undotbs01.dbf
skipping datafile 6; already restored to file /opt/oracle/oradata/orcl/users01.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 02-AUG-12

RMAN>

Okay, I did not need to restore the database, but it’s good to know that this works.

Now you don't have an excuse not to encrypt your backups.
