Beyond the basic select statement: Using the WITH clause for inline PL/SQL

For years I used the basic select statement:

SELECT A.C1, A.C2, B.C1, B.C2, <…> FROM TAB1 A, TAB2 B WHERE A.C1 = B.C1;

Sometimes this basic select statement would not answer my question. You can make a select statement quite complicated, but then it gets unreadable and difficult to maintain. The WITH clause helps out quite a bit.

Query A (inline function defined in the WITH clause):

WITH
FUNCTION monthly_amt(pamt number)
RETURN number IS
x number;
   BEGIN
     x := pamt*2;
     RETURN x;
   END;
SELECT pay22, monthly_amt(pay22)
  FROM policys;

Query B (stored function):

create or replace function
  monthly_amt (pamt number) return number is
  x number;
BEGIN
  x := pamt*2;
  return x;
END;
/
select pay22, monthly_amt(pay22)
from policys;

We are going to start this discussion with the assumption that Query A and Query B are equivalent, with caveats. The rule associated with using an inline function is that the inline function always takes precedence over a stored function of the same name.

Query A returns pay22 and the result of the inline function monthly_amt.


Query B returns pay22 and the result of the stored function monthly_amt.


But what if the stored function returns a different value than argument * 2?

First we will redefine the stored function monthly_amt to return argument * 10. When we use WITH to include an inline function, the inline function still takes precedence over the stored function with the same name.
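Here is a minimal sketch of that test, reusing the policys table and pay22 column from the examples above:

-- Redefine the stored function to return argument * 10
create or replace function monthly_amt (pamt number) return number is
begin
  return pamt * 10;
end;
/

-- Re-run Query A: the inline function (argument * 2) still wins
WITH
FUNCTION monthly_amt(pamt number) RETURN number IS
BEGIN
  RETURN pamt * 2;
END;
SELECT pay22, monthly_amt(pay22)
  FROM policys;
/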


Now we will call the stored function monthly_amt without making any reference to an inline function. As expected, the stored function returns argument * 10.


How would we use this? PL/SQL supports overloading functions; however, here both the stored function and the inline function have the same name and the same arguments. I see using the inline function when: A) creating a stored function is not an option (I worked in one shop a while back where it was against policy to store PL/SQL in the database); B) you need to override a stored function without changing it; C) you need or want to read the PL/SQL being executed right alongside the query.

Posted in Beyond the basic select statement

Exploring the CBO

Let's start with a simple experiment on what the CBO can do with query optimization. We are going to build on the structure as we go to see if we can help the CBO out with more information.

Test 1) we will start with two tables bigtable and owners and see what plan the CBO comes up with.

Test 2) we will analyze the tables and see what plan the CBO comes up with.

Test 3) we will add primary key and foreign key constraints and see what plan the CBO comes up with.

Test 4) we will add an index on the foreign key constraint and see what plan the CBO comes up with.

 

We know that these two queries are equivalent. So, can the CBO get from query A to query B?

Query A:

SELECT COUNT(*)
FROM OWNERS O,
     BIGTABLE BT
WHERE O.USERNAME = BT.OWNER
and o.username like 'RL%';

Query B:

SELECT COUNT(*)
FROM BIGTABLE BT
WHERE BT.OWNER LIKE 'RL%';

 

 

On the first pass we are going to use bigtable and owners with no statistics, no indexes and no primary key / foreign key constraints.

drop table bigtable;

drop table owners;

create table bigtable as select * from dba_objects;

create table owners as select * from dba_users;

 

alter session set tracefile_identifier = owi_trace;

-- This line will generate a trace file of what the CBO did.
alter session set events '10053 trace name context forever, level 1';
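If you want to find the trace file the 10053 event writes, and switch the event back off when you are done, something along these lines works on 11g and later (a sketch, not part of the original test):

-- Where will my trace file land?
select value from v$diag_info where name = 'Default Trace File';

-- Turn the 10053 trace back off when finished
alter session set events '10053 trace name context off';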

 

 

SELECT COUNT(*)

FROM OWNERS O,

BIGTABLE BT

WHERE O.USERNAME = BT.OWNER

and o.username like 'RL%';

 

What did the CBO come up with?

 

============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id | Operation           | Name     | Rows | Bytes | Cost | Time     |
---------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT    |          |      |       | 437  |          |
| 1  | SORT AGGREGATE      |          | 1    | 15    |      |          |
| 2  | HASH JOIN           |          | 62   | 930   | 437  | 00:00:06 |
| 3  | TABLE ACCESS FULL   | OWNERS   | 1    | 9     | 3    | 00:00:01 |
| 4  | TABLE ACCESS FULL   | BIGTABLE | 2398 | 14K   | 434  | 00:00:06 |
---------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("O"."USERNAME"="BT"."OWNER")
3 - filter("O"."USERNAME" LIKE 'RL%')
4 - filter("BT"."OWNER" LIKE 'RL%')

This is interesting: the CBO decided to apply two filters.

 

Well, this is not the best execution plan, so let's see what happens when we analyze the tables.

 

analyze table bigtable compute statistics;

analyze table owners compute statistics;
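As an aside, ANALYZE ... COMPUTE STATISTICS still works, but DBMS_STATS is the supported way to gather optimizer statistics; a rough equivalent would be:

begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'BIGTABLE');
  dbms_stats.gather_table_stats(ownname => user, tabname => 'OWNERS');
end;
/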

 

alter session set tracefile_identifier = owi_trace;

alter session set events '10053 trace name context forever, level 1';

 

 

SELECT COUNT(*)

FROM OWNERS O,

BIGTABLE BT

WHERE O.USERNAME = BT.OWNER

and o.username like 'RL%';

============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id | Operation           | Name     | Rows | Bytes | Cost | Time     |
---------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT    |          |      |       | 437  |          |
| 1  | SORT AGGREGATE      |          | 1    | 13    |      |          |
| 2  | HASH JOIN           |          | 62   | 806   | 437  | 00:00:06 |
| 3  | TABLE ACCESS FULL   | OWNERS   | 1    | 8     | 3    | 00:00:01 |
| 4  | TABLE ACCESS FULL   | BIGTABLE | 2398 | 12K   | 434  | 00:00:06 |
---------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("O"."USERNAME"="BT"."OWNER")
3 - filter("O"."USERNAME" LIKE 'RL%')
4 - filter("BT"."OWNER" LIKE 'RL%')

 

That was not much help. Let's see what happens when we add primary key and foreign key constraints.

 

alter table bigtable add constraint bigtable_pk primary key (object_id);

alter table owners add constraint owners_pk primary key (username);

alter table bigtable add constraint bigtable_fk foreign key (owner) references owners(username);

 

alter session set tracefile_identifier = owi_trace;

alter session set events '10053 trace name context forever, level 1';

 

 

SELECT COUNT(*)

FROM OWNERS O,

BIGTABLE BT

WHERE O.USERNAME = BT.OWNER

and o.username like 'RL%';

============
Plan Table
============
--------------------------------------+-----------------------------------+
| Id | Operation           | Name     | Rows | Bytes | Cost | Time     |
--------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT    |          |      |       | 434  |          |
| 1  | SORT AGGREGATE      |          | 1    | 5     |      |          |
| 2  | TABLE ACCESS FULL   | BIGTABLE | 2398 | 12K   | 434  | 00:00:06 |
--------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("BT"."OWNER" LIKE 'RL%' AND "BT"."OWNER" IS NOT NULL))

Well, this is better. By adding the foreign key constraint, the CBO eliminated the owners table. But we can do better. Let's add an index on bigtable.owner and see what happens.

 

create index bigtable_idx1 on bigtable(owner);

analyze index bigtable_idx1 compute statistics;

alter session set tracefile_identifier = owi_trace;

alter session set events '10053 trace name context forever, level 1';

 

 

SELECT COUNT(*)

FROM OWNERS O,

BIGTABLE BT

WHERE O.USERNAME = BT.OWNER

and o.username like 'RL%';

============
Plan Table
============
------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 7 | |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
| 2 | INDEX RANGE SCAN | BIGTABLE_IDX1| 2398 | 12K | 7 | 00:00:01 |
------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("BT"."OWNER" LIKE 'RL%')
2 - filter(("BT"."OWNER" LIKE 'RL%' AND "BT"."OWNER" IS NOT NULL))

 

Now let's see what the CBO did to get here.

SQL:******* UNPARSED QUERY IS *******

SELECT COUNT(*) "COUNT(*)" FROM "OPS$ORACLE"."OWNERS" "O","OPS$ORACLE"."BIGTABLE" "BT" WHERE "BT"."OWNER"="O"."USERNAME" AND "O"."USERNAME" LIKE 'RL%'

JE: eliminate table: OWNERS (O)

JE: Replaced column: O.USERNAME with column: BT.OWNER

Registered qb: SEL$84E079A4 0xadec2578 (JOIN REMOVED FROM QUERY BLOCK SEL$1; SEL$1; "O"@"SEL$1")

**************************

Predicate Move-Around (PM)

**************************

PM: PM bypassed: Outer query contains no views.

PM: PM bypassed: Outer query contains no views.

query block SEL$84E079A4 (#0) unchanged

FPD: Considering simple filter push in query block SEL$84E079A4 (#0)

"BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'

try to generate transitive predicate from check constraints for query block SEL$84E079A4 (#0)

finally: "BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'

Final query after transformations:******* UNPARSED QUERY IS *******

SELECT COUNT(*) "COUNT(*)" FROM "OPS$ORACLE"."BIGTABLE" "BT" WHERE "BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'

Well, that's special. The CBO came up with a query that is close to query B.

 

Query B:

SELECT COUNT(*)
FROM BIGTABLE BT
WHERE BT.OWNER LIKE 'RL%';

Query C (what the CBO produced):

SELECT COUNT(*) "COUNT(*)" FROM "OPS$ORACLE"."BIGTABLE" "BT" WHERE "BT"."OWNER" IS NOT NULL AND "BT"."OWNER" LIKE 'RL%'

The only real difference between query B and query C is that the CBO added bt.owner is not null. That extra predicate is there because the eliminated inner join would have discarded any row with a NULL owner, so the CBO has to filter those rows out to keep the rewritten query equivalent. It is still a little curious, because if we are looking for owner like 'RL%' then by definition owner would be not null.
What did we learn?

1) When we only have tables with no supporting structures the CBO selected full table scans on the two tables then did a hash join.

2) When we analyzed the tables to get good statistics, the CBO still did full table scans and a hash join.

3) When we added in the primary key foreign key constraints, the CBO knew that if bigtable.owner existed then there was a matching row in owners.username. Because of this, the CBO was able to eliminate the join condition bt.owner = o.username and eliminate the owners table from the query. This got us a full table scan on bigtable.

4) When we added the index on bigtable.owner, the CBO decided an index range scan on bigtable_idx1 would return the correct results. This is because the index alone contains all the information needed to satisfy the query.

 

Posted in Cost Based Optimizer, Database Stuff

Something about the CBO was bothering me

A query similar to the one below was getting a full table scan. We know the NDV (number of distinct values) in archived, deleted and flag is 1, and my initial thought was that this was why we got the full table scan. I was wrong: it was the predicate owner like 'S%' that triggered the full table scan.
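Its shape was roughly this (the table name is hypothetical; the column names come from the predicate information below):

select *
  from documents              -- hypothetical table name
 where archived = 0
   and deleted  = 0
   and flag     = '1'
   and owner    like 'S%';    -- this predicate is what drove the full table scan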


As you can see, the first run did a full table scan.


Now look at what happens when we change owner like 'S%' to owner = 'SYSTEM'. We now get an index range scan. Much better.


1 - filter("ARCHIVED"=0 AND "DELETED"=0 AND "FLAG"='1')

2 - access("OWNER"='SYSTEM')

Posted in Database Stuff

Foreign Keys and indexes

 

In this test case we are going to use two tables with a foreign key.  The question is, if we create a foreign key, do we need to add an index on that foreign key column?

OPS$ORACLE@VETDEV > create table bigtable as select * from dba_objects;

Table created.

OPS$ORACLE@VETDEV > create table owners as select * from dba_users;

Table created.

OPS$ORACLE@VETDEV > analyze table bigtable compute statistics;

Table analyzed.

OPS$ORACLE@VETDEV > analyze table owners compute statistics;
Table analyzed.

OPS$ORACLE@VETDEV > alter table bigtable add primary key (object_id);

Table altered.

OPS$ORACLE@VETDEV > alter table owners add primary key (username);

Table altered.

OPS$ORACLE@VETDEV > alter table bigtable add constraint bigtable_fk1
  2* foreign key (owner) references owners(username);
Table altered.

We are going to write a query against both tables, joining them on the foreign key. Start by setting autotrace on with explain and statistics. In the following explain plan we are cutting out the statistics to focus on what jumps out at us. Even though we have a foreign key constraint, we still got a full table scan on both bigtable and owners.
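Something along these lines (a sketch; the column list matches the filtered example used further down):

set autotrace trace

select b.owner, b.object_type, o.account_status
from owners o
,    bigtable b
where o.username = b.owner;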


Okay, let's try to limit the number of rows returned by adding a filter predicate.

set autotrace trace

select b.owner, b.object_type, o.account_status
from owners o
,   bigtable b
where o.username = b.owner
and b.owner = 'SUZANNE';

Now, first you will notice we put the filter predicate on b.owner and not o.username. We know from the data distribution that we have multiple owners and unique usernames. Now, what does the explain plan look like?

I'm about to get off on a tangent, but bear with me. There is something interesting going on here. Look at the index unique scan against the username primary key column and the filter predicate on bigtable.owner. What happened? We are going to use the 10053 trace to find out what the optimizer did.

alter session set events '10053';

set autotrace trace

select b.owner, b.object_type, o.account_status
from owners o
,   bigtable b
where o.username = b.owner
and b.owner = 'SUZANNE';

Let's find the trace file we just generated and dig through it. The trace file VETDEV_mmon_6474.trc will tell us just what the optimizer did.

[oracle@owirdb1 trace]$ ls -ltr
total 600
-rw-r----- 1 oracle oinstall 594496 Jan 16 07:00 alert_VETDEV.log
-rw-r----- 1 oracle oinstall    384 Jan 16 12:02 VETDEV_mmon_6474.trm
-rw-r----- 1 oracle oinstall   4111 Jan 16 12:02 VETDEV_mmon_6474.trc
[oracle@owirdb1 trace]$

There is a bunch of information in this file, and most of it is beyond the scope of what I wanted to discuss; what we are looking for is how the optimizer transformed our query.

**************************
Predicate Move-Around (PM)
**************************
PM:     PM bypassed: Outer query contains no views.
PM:     PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"O"."USERNAME"="B"."OWNER" AND "B"."OWNER"='SUZANNE'
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "O"."USERNAME"="B"."OWNER" AND "B"."OWNER"='SUZANNE' AND "O"."USERNAME"='SUZANNE'

Check this out: the optimizer rewrote the query, generating the transitive predicate O.USERNAME='SUZANNE' so the filter can be applied directly to owners.

OPS$ORACLE@VETDEV > select count(*) from owners where username = 'SUZANNE';

  COUNT(*)
----------
         1

OPS$ORACLE@VETDEV > select count(*) from bigtable where owner = 'SUZANNE';

  COUNT(*)
----------
       199

So it makes more sense to return one row from owners and use that for the filter, as opposed to filtering on owner in bigtable first.

Well what would happen if we put an index on the foreign key?

OPS$ORACLE@VETDEV > create index bigtable_idx1 on bigtable(owner);

Index created.

OPS$ORACLE@VETDEV > analyze index bigtable_idx1 compute statistics;

Index analyzed.

Now let's run the test again, this time without the filter predicate.
Same issue, but why? Well, let's look at the query and ask what is getting returned. There are no filter predicates, so every row is going to be returned. This is shown under the predicate information and the rows column. Therefore, in this case, the full table scan with a hash join is the most efficient way to get the data.
What would happen if we add a filter predicate to the query?
Well, that's nice: we have an index range scan on bigtable_idx1 and an index unique scan on the owners primary key index.

So, what did we learn?

1) Not having an index on the foreign key pretty much guarantees a full table scan on the joined table.

2) Having an index does not guarantee the index will be used. If the database determines that it would be cheaper to do a full table scan, then that is what it's going to do.

3) A filter predicate on the query reduced the total number of rows returned to the point where it was cheaper to use the indexes.

4) The optimizer will rewrite a query to improve performance.
Posted in Database Stuff

How do you succeed?

This does not have much to do with Oracle consulting; this is more about some life lessons on how to succeed at work. I have learned many lessons in life, one of them being the freedom that comes from being independent and not having to depend on others.

1) If you want to do a lot of things real bad, then try multitasking. The fact is humans are not good at executing more than one task at a time. Humans are good at switching from one task to another. Think about writing an email and talking to someone at the same time: you can't do both. There are a lot of issues with rapid task switching. I have called this context switching for many years. Context switching is moving from one line of thought to another. Every time you do a context switch, you have to change your line of thought. When multitasking, mistakes will increase and time to complete a task will increase. In my business, Oraclewizard, Inc., I wear every hat: I'm accounts receivable, accounts payable, the receiving department, the sales department and quite a few other departments. If I'm working on the company books and I think of something that needs to be done, I write it down in my todo book and stay with the current task until it's complete. Between tasks I review my todo book, re-prioritize and start the next task.

“People can’t multitask very well, and when people say they can, they’re deluding themselves,” says neuroscientist Earl Miller. “The brain is very good at deluding itself.”

I have learned over the years that if I'm designing a database system, I should step away from the computer, sit down at a desk with templates and a lot of paper, and then start the design. When I'm attached to a computer, email and the internet are constantly trying to get my attention. The key is, when I'm doing something that requires all my attention, I remove all the distractions. There is no law that says "you must answer that text, email or phone call right now."

2) Always improve yourself.

Spend one hour a day doing professional development. My morning always starts with studying for an hour. Note, not everything revolves around Oracle in my world. Yes, Oracle is a big part of my professional life; however, I am also a pilot, technical diver and public speaker.

Attend at least one professional conference a year.  In my professional area, there are many opportunities to attend conferences. These are great ways to meet people in your arena and network.

If you really want to improve yourself, then volunteer to speak at an event.  A good friend of mine drilled into me, “Proper Preparation Prevents Poor Performance.”  Okay that is a cliché, but it’s true.

3) Your employer is your customer and the customer of your customer / employer is your customer. If you are the customer, how do you want to be treated? If you want to get ahead at work, provide good customer service to your employer.

4) You own your brand. What is your brand? Your brand is what people think of you. Way back when, your brand was limited to maybe a few hundred people. However, with the internet and mass communications you can influence your brand worldwide.

One of the things I like about "my brand" is that when asked to do something, I don't make excuses, I figure out how to do it. Case in point: our Connect Direct / Connect Enterprise expert left the contract where I was working. A program manager asked me to take it over. Connect Direct / Connect Enterprise is more about secure data transmission than data storage / manipulation. I could have pushed back and told the program manager that was not my specialty, but instead I accepted the challenge and made the program manager aware that there might be some rocky places along the way; but I would give it my best shot.

You can also influence your brand by helping people when they have problems. People will remember it.

5) The success you enjoy will come from learning from your mistakes. To quote my fourteen-year-old niece, "OMG you did that." Let me tell you, I have made some doozies of mistakes. There was the time I truncated sys's tables and could not figure out why the database did not start back up. Then there was the time I ripped an employee of a customer a new one because her incompetence was having an impact on me doing my job. And there were a lot of other mistakes. But the key is, each time I made a mistake, I learned from it. You will make mistakes, and some will go down in history, but if you analyze the mistake and learn from it, you will climb another rung on the ladder of success.

6) Don't get overwhelmed by your good ideas. This still happens to me, but I have gotten better at writing down my "good ideas" and then letting them stew for a week or two before I decide whether they are worth starting. Frequently, after a week or so, what seemed like a good idea at the time turns out to not be such a good idea. The ones that are good ideas, I can take to the next level and figure out what it is going to take to get them off the ground.

7) Set goals and work on your goals daily.  This almost goes without saying; but I’m surprised by the number of people who don’t set goals.  Goals should be realistic and you should be able to measure the results.

Posted in Database Stuff

Encrypt those backups

April 2005: Ameritrade loses a backup tape containing information on 200,000 customers.

February 2005: Bank of America loses backup tapes containing information on 1.2 million charge cards.

September 2011: SAIC loses backup tapes with data on 4.9 million members of the military who sought medical treatment in the San Antonio area. The data contained names, social security numbers, phone numbers and medical information. This data was not encrypted.

SAIC made the following statement: “Retrieving data on the tapes, which were stolen from a company employee’s car is not likely to happen because doing so requires knowledge of and access to specific hardware and software and knowledge of the system and data structures.” Excuse me if this does not make me feel better. I can get on eBay to get the hardware needed and download the software from any number of vendors to do the restore. Yes if the backup was done from Oracle or DB2 or MS SQL Server then you would need the software from the vendor. What if this theft was targeted and the thief knew what they were after?

I can go on and on about backup tapes that are lost out of the back seat of an employee's car. And to be honest, I have transported tapes in my car too. However, when I reflect on transporting critical information in my car, I now get the heebie-jeebies. Now we use a bonded courier to transport backup tapes.

Backup tapes are also shipped to someplace like Iron Mountain. But let's face it, the people who are handling your backup tapes are low-paid employees who could be influenced to look the other way. If someone really wants your backup tapes, there is a way for them to get a tape.

What are the options for encrypting backups?

  1. Use rman encryption.

  2. Encrypt the backup files on the OS.

For option 1, using RMAN to encrypt, there are a couple of choices: you can use a password to encrypt the backup, or you can use a wallet.

If the backup is being sent offsite, using a password to encrypt the backup may be your better option.

If the backup is being sent to a Disaster Recovery site to build a standby database, using the wallet may be the better option.
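For the wallet (transparent encryption) route, the setup on 11g is roughly along these lines; this is a sketch, and the wallet location and password are placeholders:

-- sqlnet.ora on the database server
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/opt/oracle/admin/orcl/wallet)))

-- Create/open the wallet and set the master key (11g syntax)
SQL> alter system set encryption key identified by "WalletPassword1";

-- Then RMAN can encrypt transparently
RMAN> configure encryption for database on;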

Right now we are addressing sending a backup offsite, so let's walk through the process of building an encrypted backup using a password.

First find out what encryption algorithms are supported.

SQL> select ALGORITHM_NAME, ALGORITHM_DESCRIPTION

2 from V$RMAN_ENCRYPTION_ALGORITHMS;

ALGORITHM_ ALGORITHM_DESCRIPTION

---------- ----------------------------------------------------------------

AES128 AES 128-bit key

AES192 AES 192-bit key

AES256 AES 256-bit key

SQL>

Of the algorithms listed, AES256 is the strongest, so we are going to select AES256 for our encryption.

RMAN> set encryption algorithm 'aes256' identified by A_Passphrase_that_you_select;

executing command: SET encryption

using target database control file instead of recovery catalog

Using the "set encryption algorithm" command we did two things: we set the algorithm that will be used for the backup, and we set the passphrase that we will need to decrypt the backup.

Next we are going to run the backup like we would normally do.

RMAN> backup as compressed backupset database format '/home/oracle/backup/encrypted_with_password%u%d.backup';

Starting backup at 02-AUG-12

using channel ORA_DISK_1

channel ORA_DISK_1: starting compressed full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00003 name=/opt/oracle/oradata/orcl/sysaux01.dbf

input datafile file number=00001 name=/opt/oracle/oradata/orcl/system01.dbf

input datafile file number=00002 name=/opt/oracle/oradata/orcl/example01.dbf

input datafile file number=00004 name=/opt/oracle/oradata/orcl/undotbs01.dbf

input datafile file number=00006 name=/opt/oracle/oradata/orcl/users01.dbf

channel ORA_DISK_1: starting piece 1 at 02-AUG-12

channel ORA_DISK_1: finished piece 1 at 02-AUG-12

piece handle=/home/oracle/backup/encrypted_with_password0dnhl9n6ORCL.backup tag=TAG20120802T170333 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:02:35

channel ORA_DISK_1: starting compressed full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

including current control file in backup set

including current SPFILE in backup set

channel ORA_DISK_1: starting piece 1 at 02-AUG-12

channel ORA_DISK_1: finished piece 1 at 02-AUG-12

piece handle=/home/oracle/backup/encrypted_with_password0enhl9s1ORCL.backup tag=TAG20120802T170333 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 02-AUG-12

RMAN>

How do we decrypt the backup when we need to restore? It's that simple:

RMAN> set decryption identified by A_Passphrase_that_you_select;

executing command: SET decryption

using target database control file instead of recovery catalog

RMAN> restore database;

Starting restore at 02-AUG-12

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=20 device type=DISK

skipping datafile 1; already restored to file /opt/oracle/oradata/orcl/system01.dbf

skipping datafile 2; already restored to file /opt/oracle/oradata/orcl/example01.dbf

skipping datafile 3; already restored to file /opt/oracle/oradata/orcl/sysaux01.dbf

skipping datafile 4; already restored to file /opt/oracle/oradata/orcl/undotbs01.dbf

skipping datafile 6; already restored to file /opt/oracle/oradata/orcl/users01.dbf

restore not done; all files read only, offline, or already restored

Finished restore at 02-AUG-12

RMAN>

Okay, I did not need to restore the database, but it’s good to know that this works.

Now you don't have an excuse not to encrypt your backups.

Posted in Security

Life of an Oracle DBA – Test environment.

It’s been a few weeks since I have posted. The short story is my work load has been keeping me from posting.

Today I had an interesting conversation regarding a test environment that we are building as part of a technology refresh. Early on I was asked: what are the space requirements for the test environment? I came up with an answer based on:

  1. Current size of my production database. 350G
  2. Enough space to store two full backups. 700G
  3. Enough space to stage an export to refresh. 350G
  4. Growth of 20%/year

So the minimum space required would be 1.4T without including growth. Then I got a phone call from one of the unix admins asking why I need so much space: why would you back up a test environment? You can just refresh from production if you need to.

This is actually a good question. Why backup the test environment?

  1. You need a place to test your backup and restore scripts. On one system there is no way to test backup and restore scripts outside of production. This week a change was needed to the backup script; the change was made and the job was put into cron. The backup failed and I had to run it manually the next morning.
  2. Testers use the testing system. I had an instance where a test system was restored from production. After doing the restore, we needed to apply the archive logs to recover the database and make it available for the testers. The recovery failed because there were objects in the production database that were in nologging mode. This forced us to shut down the production database and take a cold backup because of the errors caused by the nologging objects. The end result: testers were down for a full day and production was down for the duration of the cold backup.

In the end, I got the space I wanted for the test system; however, the unix admin was still pushing back about actually backing up a test database. She still believes backing up test systems is a waste of time.
Posted in Database Stuff, Uncategorized

Life of an Oracle DBA – The Basics Access Paths: Index Skip Scan

Normally, for Oracle to use an index, the leading column of the index must be in the predicate. There are conditions where the leading edge of the index is not in the predicate but the remaining columns in the index can satisfy the predicate. If this condition is true, then Oracle can skip over the leading column in the index and use the other columns. Note: if the leading column has a low cardinality (few distinct values), then it may be cheaper for Oracle to skip the leading edge of the index and read starting at the second column in the index.


Let's start by cleaning up the indexes we have already created and build a composite index where the leading column has a low cardinality.
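A sketch of that setup, assuming the t1 copy of dba_objects used in the earlier posts (its status column has a single distinct value):

drop index t1_idx;

-- status is the low-cardinality leading column; owner is what we will query on
create index t1_idx on t1(status, owner);
analyze index t1_idx compute statistics;

-- the predicate only references owner, the second column of the index
set autotrace on
select count(*) from t1 where owner = 'SYS';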


So we now have a composite index on t1 and the leading edge has a very low cardinality; there is only one distinct value in the status column.


So what happened? Oracle skipped over the status column in the index and used the owner column. But what happens if you have a high cardinality (many distinct values) for the leading column in the index? Note the cost of this query.


The index skip scan is still being used, but note the cost of this query. In the first example, with a low-cardinality column leading the index, we had 8 consistent gets. When we used a high-cardinality column as the leading column, we went up to 274 consistent gets. All in all, it is much more expensive to do the skip scan if the leading column has a high cardinality.

Posted in Uncategorized

Life of an Oracle DBA – The Basics Access Paths: Index Range Scan

An index range scan gets the rowids from adjacent index entries. Once Oracle has gathered the rowids, the rows are returned in ascending order, unless you sort descending.


From our previous example we saw that if you use an equality predicate on a non-unique index, Oracle will do an index range scan.


An index range scan will also be done if your query has a range in the predicate.

Let's drop the normal btree index we created and create a unique index on t1 (object_id).
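A sketch of that test, again using the t1 copy of dba_objects:

drop index t1_idx;
create unique index t1_idx on t1(object_id);
analyze index t1_idx compute statistics;

-- a range condition on the unique index still produces an index range scan
set autotrace on
select object_name
  from t1
 where object_id between 100 and 120;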


We used a between clause and wound up getting an index range scan on t1_idx.

But what if we did this a little differently? Let's create an index on owner and use an equality predicate.


The first thing we notice is that we are still doing an index range scan. We analyzed the table and we analyzed the index, but the optimizer still got the cardinality estimate all wrong. We are going to come back and revisit this problem in a later post. You really need to get the cardinality right; Maria Colgan of Oracle Optimizer fame speaks frequently on this subject.

What are the where clauses that can cause the optimizer to do an index range scan?

LIKE, >, <, =, BETWEEN

You should also realize an index range scan can be bounded or unbounded.

Examples of bound:

WHERE column = 'x'

WHERE column > 1 and column < 5

WHERE column LIKE 'x%'

WHERE column between 1 and 5

Examples of unbound:

WHERE column > 5

WHERE column < 5

We are going to come back after we address access paths, join methods and join orders to talk about histograms and getting cardinality just right.

Posted in Database Stuff, Life of a Oracle DBA

Life of an Oracle DBA – The Basics Access Paths: Index Unique Scan

The index unique scan is good for getting one row from an index and returning the rowid. You will get an index unique scan if you have a unique index or primary key on a table.

In this example, we created a unique index on object_id. When we executed the query, the optimizer did an index unique scan.
And the index unique scan did just what it was designed to do: select one row.
Now let's try this with a normal btree index.

SQL> drop index t1_idx;

Index dropped.

SQL> create index t1_idx on t1(object_id);

Index created.

SQL> analyze index t1_idx compute statistics;

Index analyzed.

As you can see here, because we used a normal (non-unique) index, the optimizer decided to do an index range scan.

Next we will be covering the index range scan.

Posted in Database Stuff, Life of a Oracle DBA

Life of an Oracle DBA – The Basics Access Paths: Table Access By Rowid.

Dang, that just sounds fast, and it is. A rowid points to the datafile : block : location in block. So, what is this good for? How do you get Oracle to access by rowid? If you want to access a specific row or set of rows in the database and can derive the rowid, you can get a lot of speed accessing the data. To get the rowid you either pass the rowid to the query, like I did here, or access it through an index.

But what is a rowid? In Oracle 8 and above, an extended rowid is 10 bytes (80 bits) that break down to:

Bits 1 - 32: data object number

Bits 33 - 42: relative file number

Bits 43 - 64: block number

Bits 65 - 80: row number inside the block

You can use the package dbms_rowid to get interesting information from rowid.

SQL> variable vrowid varchar2(20);

SQL> execute :vrowid := 'AABAkfAAIAAIdZKAAA'

PL/SQL procedure successfully completed.

SQL> select dbms_rowid.rowid_object(:vrowid) "object",
  2  dbms_rowid.rowid_relative_fno(:vrowid) "file",
  3  dbms_rowid.rowid_block_number(:vrowid) "block",
  4  dbms_rowid.rowid_row_number(:vrowid) "row"
  5  from dual;

    object       file      block        row
---------- ---------- ---------- ----------
    264479          8    2217546          0

Let's look at getting a row from the database by using the rowid. One thing you will notice is that we did not have to make a round trip to the index t1_idx to find the rowid, because we already had it. Therefore Oracle was able to go directly to the row in the database and return it to us. I'm thinking I could have done this better, because using trace files and autotrace will create a lot of additional recursive SQL in the trace file, such as delete from plan_table and insert into plan_table.

alter session set timed_statistics = true;

alter session set statistics_level = all ;

alter session set sql_trace = true ;

alter session set max_dump_file_size=unlimited;

alter session set tracefile_identifier = owi_trace;

alter session set events '10046 trace name context forever, level 12';

set autotrace on

select object_type, object_name from t1 where rowid = :vrowid;

OBJECT_TYPE         OBJECT_NAME
------------------- -------------------
TABLE               FILE$

Execution Plan
----------------------------------------------------------
Plan hash value: 2759577407

-----------------------------------------------------------------------------------
| Id | Operation                  | Name | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
| 0  | SELECT STATEMENT           |      | 1    | 44    | 1 (0)      | 00:00:01 |
|* 1 | TABLE ACCESS BY USER ROWID | T1   | 1    | 44    | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access(CHARTOROWID(:VROWID))

Statistics

----------------------------------------------------------

6 recursive calls

4 db block gets

5 consistent gets

0 physical reads

760 redo size

596 bytes sent via SQL*Net to client

488 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

1 rows processed

select object_type, object_name

from

t1 where rowid = :vrowid

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.00 0 0 0 0

Execute 2 0.00 0.00 0 0 13 0

Fetch 2 0.00 0.00 0 1 0 1

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 5 0.00 0.00 0 1 13 1

Misses in library cache during parse: 0

Optimizer mode: ALL_ROWS

Parsing user id: 56

Rows Row Source Operation

------- ---------------------------------------------------

1 TABLE ACCESS BY USER ROWID T1 (cr=1 pr=0 pw=0 time=71 us)

Elapsed times include waiting on following events:

Event waited on Times Max. Wait Total Waited

---------------------------------------- Waited ---------- ------------

SQL*Net message to client 3 0.00 0.00

SQL*Net message from client 3 1.58 1.58

********************************************************************************

What does this tell me? I got the data in 71 microseconds. That's fast. Now, I'm leaving out all of the recursive work that Oracle did to execute the command.

Let's look at some more timing information.

mrskew.exe \stage\*.trc > H:\tmp\x.txt

CALL-NAME DURATION % CALLS MEAN MIN MAX

SQL*Net message from client 11.256462 98.9% 4 2.814115 0.001859 9.667408

EXEC 0.120000 1.1% 3 0.040000 0.000000 0.070000

log file sync 0.001551 0.0% 2 0.000776 0.000513 0.001038

SQL*Net message to client 0.000022 0.0% 4 0.000005 0.000004 0.000006

PARSE 0.000000 0.0% 2 0.000000 0.000000 0.000000

FETCH 0.000000 0.0% 2 0.000000 0.000000 0.000000

TOTAL (6) 11.378035 100.0% 17 0.669296 0.000000 9.667408

Using mrskew from Method R I get some more interesting information from the trace file. What is this telling me? SQL*Net message from client was the big wait event, taking 98.9% of the time. Well, that's network I/O. After that, EXEC was my next highest component, with 3 calls taking 1.1% of the time. BTW: this data includes all recursive SQL. So what does the EXEC timing look like?

mrskew.exe \stage\*.trc --name="EXEC" --ebucket > H:\tmp\x.txt

RANGE {min <= e < max} DURATION % CALLS MEAN MIN MAX

0.000000 0.000001 0.000000 0.0% 1 0.000000 0.000000 0.000000

0.000001 0.000010 0.000000 0.0% 0

0.000010 0.000100 0.000000 0.0% 0

0.000100 0.001000 0.000000 0.0% 0

0.001000 0.010000 0.000000 0.0% 0

0.010000 0.100000 0.120000 100.0% 2 0.060000 0.050000 0.070000

0.100000 1.000000 0.000000 0.0% 0

1.000000 10.000000 0.000000 0.0% 0

10.000000 100.000000 0.000000 0.0% 0

100.000000 1000.000000 0.000000 0.0% 0

1000.000000 Infinity 0.000000 0.0% 0

TOTAL (2) 0.120000 100.0% 3 0.040000 0.000000 0.070000

Posted in Database Stuff, Life of a Oracle DBA

Life of an Oracle DBA – The Basics Access Paths: Full Table Scan

For the next few weeks I am going to be going over some of the basics of what you need to know to be a good Oracle DBA. Feel free to speak up if there is something you think I should cover. I should let you know: fast is not always cheap, and cheap is not always fast. Think of cheap as Oracle using as few resources as possible to satisfy the query. But if you are using parallel query, then Oracle sets the goal to finish the query as fast as possible and to heck with the resources. Now be careful here; you don't want to be running an OLTP application with 70+ thousand people hitting it and all of them running parallel query. If you do, you are going to starve the machine of CPU. I've seen it happen; it's not pretty.

Yes Virginia there is a Maria Colgan who is always happy to explain the difference between cheap and fast. If you don’t know Maria, she is one of the smartest people in the Oracle community and heads up the Oracle Optimizer Group. Her blog is at: https://blogs.oracle.com/optimizer/

For my first set of installments I am going to go over access paths (how does Oracle get to the data you want.)

Full Table Scan, otherwise known as "Dang it; why won't Oracle use my indexes?" There are a number of factors the optimizer considers when selecting the best access path to the data.

What predicates have been passed in the query?

What indexes exists?

What degree of parallelism is being used?

What is db_file_multiblock_read_count?

What hints have been put on the query?

Cardinality – you are going to be reading a lot about that, so start studying.

The Oracle cost based optimizer determines the fastest or most efficient way to access data; sometimes the optimizer will select the full table scan. If you pass a predicate in the query that is not indexed, then Oracle has no choice but to do a full table scan. If the cost based optimizer decides it is cheaper to do a full table scan than to use an index, then Oracle will use a full table scan. I know this is hard to believe, but it's true. Here is a simple example: if Oracle needs to read 100 blocks of an index and then do another 100 IOs to the table to get your data, as opposed to doing a full table scan of 150 blocks, the cost based optimizer may decide it would just be cheaper to do the full table scan.

SQL> create table t1 as select

* from dba_objects;

Table created.

SQL> analyze table t1 compute statistics;

Table analyzed.

SQL> create index t1_idx on t1(object_type);

Index created.

SQL> analyze index t1_idx compute statistics;

Index analyzed.

SQL> set autotrace on

SQL> select owner, object_name

2 from t1

3 where object_type in ('TABLE','INDEX');

------------snip a lot-------------

--------------------------------------------------------------------------
| Id | Operation         | Name | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
| 0  | SELECT STATEMENT  |      | 2440 | 90280 | 109 (34)   | 00:00:01 |
|* 1 | TABLE ACCESS FULL | T1   | 2440 | 90280 | 109 (34)   | 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OBJECT_TYPE"='INDEX' OR "OBJECT_TYPE"='TABLE')

Hey Maria, Oracle forgot that I had an index on T1. Waaaaaaaa. Oh, it's cheaper to just read the whole table.

select /*+ index (t1 t1_idx) */

owner, object_name

from t1

where object_type in ('TABLE','INDEX');

-----------------------------------------------------------------------------------------
| Id | Operation                    | Name   | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT             |        | 2440 | 90280 | 121 (3)    | 00:00:01 |
| 1  | INLIST ITERATOR              |        |      |       |            |          |
| 2  | TABLE ACCESS BY INDEX ROWID  | T1     | 2440 | 90280 | 121 (3)    | 00:00:01 |
|* 3 | INDEX RANGE SCAN             | T1_IDX | 2440 |       | 10 (10)    | 00:00:01 |
-----------------------------------------------------------------------------------------

Hey Maria, I got Oracle to use my index but it cost more. Waaaaaaa. Oh, I had to do more IOs to get my data.

As db_file_multiblock_read_count increases, the cost of a full table scan decreases. As parallelism increases, Oracle makes the determination to execute the query as fast as possible as opposed to as cheaply as possible. If you put hints into your SQL query, then those hints will be used as opposed to what the Oracle optimizer believes would be the best way to access the data.

When Oracle does a full table scan, blocks are read into the buffer cache and rows that don't meet the predicate are rejected.

Posted in Database Stuff, Life of a Oracle DBA

Life of an Oracle DBA: Big Data restore

Over the years my opinion of what big data is has changed. Way back in the VAX days, a 1G database was not only big data, it was fricken huge data. Now words like giga, tera, and peta are thrown around casually. Exa is still reserved for really fricken big.

Now, back to the restore. The customer has a data warehouse that has been populated with many years worth of data, and the consumers of the data are always asking for more. The easy way to keep a backup of this massive amount of data is to create a standby database. So we have a standby database, archive logs are shipped to it, and I get a report every morning on the health of the standby database. Everything is hunky-dory, or so I thought.

The customer was doing a big data push and, for reasons that are beyond my control, it was decided that Data Guard would be shut down. I resisted, and was overruled. After two months of the standby database being shut down, I was finally told to bring the standby instance back in sync with the primary database. Time for the gremlins to come into the picture. Issues I faced:

  1. A DBA did an open resetlogs on the primary database instance. I learned this after pulling 2.5T of archive logs down the pipe.

  2. So, I did an incremental (SCN-based) backup of the primary database starting at the oldest SCN in the standby database. Again, after copying down 2.5T of data, one of the managers got a bit anxious and decided to break the SAN replication before the files had finished replicating. The files were used to do an RMAN recover on the standby database, and it was a failure.

  3. Do a complete cold backup of the data warehouse.

    1. Send some large fast drives to the data center.

    2. Encrypt the drives

    3. Courier the drives to the DR site

    4. Decrypt and uncompress the backup

    5. Restore the backup

    6. scp the archive logs to the DR site.

    7. Recover the standby database

  4. Have a beer and get some sleep.

Lessons learned:

  1. Under no circumstances shall anyone shut down the standby database.

  2. Under no circumstances shall anyone do an open resetlogs on any database that has a Data Guard instance.

  3. Do a cost-benefit analysis of shipping drives out to the primary site before doing long copies over a pipe that is shared by many. For turnaround time, ship the drives out. We have now decided to keep a set of drives at the data center so we can do a much quicker restore of the database. The decision you make will be based on your patience, and your pain tolerance for having a bad standby database.

    1. If there is a handful of archive logs missing, scp is a wonderful thing.

    2. If there is much more than a handful of archive logs that need to be transferred, put them on a drive and ship it to the DR site to recover the standby database.

Posted in Database Stuff, Life of a Oracle DBA

Life of an Oracle DBA: What is choking my Oracle database server?

The week was dominated by tuning. The database server was pegged at 100% CPU all week. I have tuned the offending queries to within an inch of their lives. The customer wants to know, why is this happening? And I'm asking the business end, what is driving more users to be active on the website?

Early in my career I learned about the top command. With the top command I learned a few things.

CPU is pegged at 100%, and in this example the CPU has been pegged all day long. The user experience (UE) is suffering.

ldap is the top process.

Oracle has all the other processes.

load averages: 90.29, 92.29, 76.555 13:48:45

390 processes: 297 sleeping, 76 running, 17 on cpu

CPU states: 0.0% idle, 95.7% user, 4.3% kernel, 0.0% iowait, 0.0% swap

Memory: 32G real, 1285M free, 28G swap in use, 5354M swap free

PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND

1069 ds5user 45 32 0 0K 0K run 31:26 3.04% ns-slapd

5416 oracle 11 23 0 0K 0K run 4:14 1.15% oracle

4322 oracle 11 22 0 0K 0K run 7:59 1.13% oracle

4916 oracle 11 22 0 0K 0K run 4:41 1.11% oracle

5937 oracle 11 23 0 0K 0K run 1:05 1.10% oracle

5494 oracle 11 22 0 0K 0K run 2:13 1.09% oracle

4910 oracle 11 22 0 0K 0K run 4:59 1.08% oracle

Let's dig into a few of the top Oracle processes and see the query that is running.

SQL> @pid 5416

old 10: and spid = &1

new 10: and spid = 5416

SPID SID SERIAL# USERNAME SQL_ID

------------ ---------- ---------- ---------- -------------

5416 953 55 IFSUSER butxjxuvw2m3y

Here is the pid.sql query I use.

SQL> l

1 select p.spid, s.sid, s.serial#, s.username, s.sql_id from v$process p, v$session s where s.paddr = p.addr and s.status = 'ACTIVE' and spid = &1

SQL>

So, sql_id butxjxuvw2m3y is my top CPU consumer. I want to see what the query is and what the plan for the query is.

SQL_ID butxjxuvw2m3y

--------------------

select .. from VINBOX this_ where (this_.STATUS<>:1 and this_.RECEIVED>=:2 and (this_.ASSIGNEE=:3 or (this_.RESO in (:4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14) and (this_.ASSIGNEE is null)))) order by lower(this_.RECEIVED) desc

Plan hash value: 3079052936

-----------------------------------------------------------------------------------------------------------------------
| Id | Operation                     | Name              | Rows | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT              |                   |      |       |       | 6300 (100) |          |
| 1  | PX COORDINATOR                |                   |      |       |       |            |          |
| 2  | PX SEND QC (ORDER)            | :TQ10001          | 873  | 122K  |       | 6300 (8)   | 00:00:13 |
| 3  | SORT ORDER BY                 |                   | 873  | 122K  | 312K  | 6300 (8)   | 00:00:13 |
| 4  | PX RECEIVE                    |                   | 1    | 77    |       | 3 (0)      | 00:00:01 |
| 5  | PX SEND RANGE                 | :TQ10000          | 1    | 77    |       | 3 (0)      | 00:00:01 |
| 6  | TABLE ACCESS BY INDEX ROWID   | INSTANCE          | 1    | 77    |       | 3 (0)      | 00:00:01 |
| 7  | NESTED LOOPS                  |                   | 873  | 122K  |       | 6270 (8)   | 00:00:13 |
| 8  | NESTED LOOPS                  |                   | 871  | 58357 |       | 3831 (11)  | 00:00:08 |
| 9  | PX BLOCK ITERATOR             |                   |      |       |       |            |          |
| 10 | INDEX FAST FULL SCAN          | FORM_IDX06        | 871  | 36582 |       | 1185 (33)  | 00:00:03 |
| 11 | TABLE ACCESS BY INDEX ROWID   | CUSTOM_FORM       | 1    | 25    |       | 3 (0)      | 00:00:01 |
| 12 | INDEX RANGE SCAN              | CUSTOM_FORM_IDX03 | 1    |       |       | 2 (0)      | 00:00:01 |
| 13 | INDEX RANGE SCAN              | INSTANCE_IDX03    | 1    |       |       | 2 (0)      | 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

I did this for a few of the top processes and found everyone is clicking on the inbox link. Now this has been tuned in the past; so how am I going to deal with this? Well first thing, as the app sits right now, a person can click on the link over and over again. We are putting in a change to disable the link until the inbox comes up. This will save me from users that keep clicking the link when they don’t get their inbox fast enough. The next thing I need to do is move through the query and see where I can optimize it.

Now let’s look to see what the user load is on oracle.

SQL> get users

select username, sql_id, count(*) from v$session where status = 'ACTIVE' group by username, sql_id order by 1,3

SQL> /

USERNAME SQL_ID COUNT(*)

---------- ------------- ----------

APPUSER 9vztbhnd0r47b 1

=================SNIP==============

APPUSER 4c176cpxqvyrs 5

APPUSER 24mdr8u4u6vyz 10

APPUSER 3dkmsrzyjbuj5 44

RLOCKARD 5a921h5xhxxcg 1

8820rg9jt4c14 2

25

18 rows selected.

There are two versions of inbox running right now totaling 54 instances. Well inbox is fairly static during the day; can we use a materialized view? There is a cost, what type of refresh should we be using? 1 hour, 30 minutes, 5 minutes.
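A rough sketch of what that might look like, with a hypothetical name, a trimmed-down column list and a 5-minute complete refresh:

create materialized view inbox_mv
  build immediate
  refresh complete
  start with sysdate
  next sysdate + 5/1440       -- complete refresh every 5 minutes
as
select assignee, status, received, reso
  from vinbox;                -- the real inbox query is far more involved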

There will come a time when the hardware you have in place will not support the user load. In the past month we have seen a 300% increase in the number of queries hitting the database and a 100% increase in the user base.

The customer wants a breakdown by date/hour/user/sql_id/exec count to see what is hammering the system. From this report I can look at date, time of day and user, and find out what query is getting executed and how many times. This is one of those reports that is very verbose, so I spool it to a file and bring it into Excel to present to the customer. I also ran the same report without sql_id to show date/hour/user/sql count against the database. Both of these reports will give you a week worth of data to help you drill down into usage times.

set linesize 256

set pagesize 10000

select to_char(trunc((sample_time),'HH'),'MMDDHH24:MI'), username, sql_id, count(*) from DBA_HIST_ACTIVE_SESS_HISTORY h, dba_users u where h.user_id = u.user_id and u.username = 'IFSUSER' group by to_char(trunc((sample_time),'HH'),'MMDDHH24:MI'), username, sql_id order by 1,2;

081709:00 APPUSER 0ut13pyz2862b 1090

081709:00 APPUSER 624p6ufu0vf67 1590

081709:00 APPUSER 0887h37jcvrq0 1677

081709:00 APPUSER 3dkmsrzyjbuj5 14854

Well the sql_id 3dkmsrzyjbuj5 is executing a lot so I wanted to dig into that to see what it is. No surprise, it’s the inbox query … AGAIN.

We have ordered more hardware to put the database on. It's bigger: more CPUs, more memory, yada yada yada. When the hardware is in, we will migrate this database from Solaris / Oracle 10g to Linux / Oracle 11g.

Posted in Database Stuff, Life of a Oracle DBA

Life of an Oracle DBA: Grumps, dataguard, upgrades and tuning oh my

There are very few things that will get my dander up. One is wasting my time; the other is "Not My Job." Last week I experienced both; it took a bit, but the person who told me it was not their job wound up doing what needed to be done. This caused quite a bit of my time to be wasted. This gets back to my mantra: "You may be asked to do things that are outside of your job description; if so, embrace it and do the best job you can." You will make yourself more valuable to your employer, customers or teammates.

I also experienced a problem with dataguard that I had not seen before.

ORA-00342: archived log does not have expected resetlogs SCN 137433238

ORA-00334: archived log: '/var/opt/data/sor01/arch/1_191_768933256.dbf'

This was weird. This ORA error has to do with a different incarnation of the database. What caused the issue? My hunch: someone did an open resetlogs.

I checked the max log generated on the primary and the max log applied on the standby.

primary> select thread#, max(sequence#) "Last Primary Seq Generated"
         from v$archived_log val, v$database vdb
         where val.resetlogs_change# = vdb.resetlogs_change#
         group by thread# order by 1;

   THREAD# Last Primary Seq Generated
---------- --------------------------
         1                        847

standby> select thread#, max(sequence#) "Last Standby Seq Received"
         from v$archived_log val, v$database vdb
         where val.resetlogs_change# = vdb.resetlogs_change#
         group by thread# order by 1;

   THREAD# Last Standby Seq Received
---------- -------------------------
         1                       847

But wait, there's more:

standby> select thread#, max(sequence#) "Last Standby Seq Applied"
         from v$archived_log val, v$database vdb
         where val.resetlogs_change# = vdb.resetlogs_change#
         and val.applied in ('YES','IN-MEMORY')
         group by thread# order by 1;

   THREAD# Last Standby Seq Applied
---------- ------------------------
         1                      190

Okay, I see that the standby database wants log 191. But still, what is the real issue? I needed more information to figure out what the problem was.

What is the SCN that is stored in the controlfile?

select db_unique_name, to_char(current_scn, '999999999999999999') as Standby_Current_SCN from v$database;

DB_UNIQUE_NAME                 STANDBY_CURRENT_SCN
------------------------------ -------------------
<redacted>                     199126920

I still had not found the root cause and had wasted a lot of time trying to get the archive logs to apply. So I decided to take an incremental backup of the primary database starting with the oldest SCN applied to the standby database.

standby> select min(to_char(fhscn, '99999999999999999999')) as SCN_to_be_used from X$KCVFH;

SCN_TO_BE_USED
---------------------
            199126921

So now I have a place to start my incremental backup. But I prefer to live on the conservative side of life. These SCNs are not matching up very well, so I'm going to take the lower of the two SCNs for my backup.

RMAN> BACKUP INCREMENTAL FROM SCN 199126920 DATABASE FORMAT '/var/opt/backup/ForStandby_%U' tag 'FORSTANDBY';

After the backup is done, scp the backup to the standby site and do a database restore. But wait, port 22 is not open so I am going to have to rely on SAN replication. I’ll let you know how the restore goes, the files are still replicating.

Migrate 10G on Solaris to 11G on Linux

There are three small databases that needed to be upgraded from 10g on Solaris to 11g on Linux. Because they were small, I decided the easiest way to do the upgrade was to use Data Pump. The next set have a lot of data and are moving to a RAC environment, so I will experiment with transportable tablespaces.
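A rough sketch of the Data Pump approach (the directory name, file names and schema list are placeholders):

# On the 10g Solaris source
expdp system schemas=APP1,APP2 directory=DP_DIR dumpfile=app_full.dmp logfile=app_exp.log

# Copy the dump file over, then on the 11g Linux target
impdp system schemas=APP1,APP2 directory=DP_DIR dumpfile=app_full.dmp logfile=app_imp.log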

Tuning Oracle

We have a web-based OLTP application where the database was experiencing 100% CPU. Logins were slowing down, users were backing up. In short, it was an extreme terror kind of day. Looking at the Oracle database I found that most of the users were backing up on the same query. My first question was, what was the trigger for hundreds of users to start executing the same query at the same time, and for it to be going on all day? The second question was, what can I do to speed things up?

It turns out our 75,000+ user base had received a message to do something. So that explained my first question. The second question was a bit harder to address. There was one index that was getting a fast full index scan for each query, and it started showing up in v$session_longops. The index is a compressed index, and when I looked at it closely I found the columns in the index were ordered from higher to lower cardinality. Index key compression works better when the low-cardinality columns lead, so I could reverse the order of the columns in the index. And because this index mostly experiences fast full index scans, shrinking the index reduces the number of blocks that have to be read.

SQL> drop index <schema>.<index_name>;

SQL> create index <schema>.<index_name> on <schema>.<table_name>
       (lowest_cardinality, next_lowest, highest_cardinality) tablespace indx compress 2;

A generic create-compressed-index example (sorry, I can't give specifics, NDA and all ya'know):

create index app.cust_indx on app.custs (sex, eye_color, cust_id) tablespace indx compress 2;

By changing the order of the columns in the compressed index, the number of blocks needed to store the index decreased by 50%. Well, the load finally returned to quasi-normal and requests were getting serviced in a timely manner. It looks like we have maxed out the Oracle database server. Thank goodness we are going to be upgrading it to Oracle 11g RAC on Linux from single-instance Oracle 10g on Solaris.

Posted in Database Stuff, Life of a Oracle DBA | Leave a comment

The life of an Oracle DBA

So you want to be an Oracle DBA?  What does a DBA do? What does my week look like?

Frequently my wife asks me what my days are like and just what I do at work. Well, I'm going to start telling you. My normal answer is that a DBA's job is extreme boredom punctuated with moments of stark terror. Lately there has been more terror than boredom.

So this week started off good enough. I was in the middle of recovering a standby database when I discovered there were gaps in the archive logs. I spent most of the weekend trying to get the standby database restored, then realized I would have to ship down the archive logs from the primary site. The problem centered on control_file_record_keep_time being set too low; the archive logs had aged out of the control file. I had to do a couple of things.

1) Extract the archive logs from ASM. This can be tricky, but then I discovered the package dbms_file_transfer. This package allows you to pull files out of ASM so you can deal with them as real files. All I had to do was create two directories: one for the ASM source and a second for the target.

2) The second part of the problem was getting the files down to the standby database. Well, port 22 is closed, so I have to get the SAN administrator to replicate the files, then get the Unix admin to mount the LUN, then go through change control. Bottom line, someone else needed to transfer some files, so I was asked to defer transferring the data until the other person is done. I hope the transfer will be complete by Monday so I can finish fixing the standby database. So far, over a week to fix something that should have taken me less than one day.

The following code extracted the logs that I needed to the file system.

declare
  x number := 214;
  fname varchar2(256);
begin
  while x < 440
  loop
    x := x + 1;
    fname := '1_' || to_char(x) || '_773923456.dbf';
    dbms_output.put_line(fname);
    dbms_file_transfer.copy_file ('ARCHIVE_SOURCE', fname, 'ARCHIVE_TARGET', fname);
  end loop;
end;
/
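
For completeness, the ARCHIVE_SOURCE and ARCHIVE_TARGET directory objects referenced above were created along these lines; the ASM and filesystem paths shown here are just placeholders for the real ones.

SQL> create or replace directory archive_source as '+FRA/PRIMDB/ARCHIVELOG/2012_08_04/';

SQL> create or replace directory archive_target as '/u01/app/oracle/arch_stage/';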

Next issue for the week; this is what I really love to do: tune the database. I got a call from an ole' friend whose query was taking a while to execute. He is a sharp Unix admin and knows enough about Oracle and SQL to generate the reports that he needs. Well, his email was short and sweet: here's the code, it's slow, fix it. I spent about an hour re-writing his query and took his report from executing in over two days down to less than two minutes. It is important to understand access paths and how the optimizer works.

Part of my job is to also deal with external customers. Some customers are software vendors that provide services to my customer's customers. (My customer is a very small government agency; their customers are banks and other government agencies.) I don't know how it happened, but if the help desk cannot answer the question or address the issue, another friend and I get the call depending on the problem. All data transfer issues go to me. All organization issues go to me. Custom reports go to me. Korn shell scripts go to my friend. Thank goodness for small favors.

So this week I spent a lot of time on conference calls with large external customers. Their issues are simple: we have to get the information securely, accurately and reliably. I'm happy to say the issues are getting addressed; most of them come from the processes that all the organizations have in place. I sometimes have to remind myself to respect the process. But there are times when the process is broken and needs to be addressed.

Oh, that brings up another problem. I'm not a LAN engineer and I don't have access to set up VPN tunnels, so if there is an issue setting up a VPN tunnel then I send it on to someone else. Ya' need to understand that. Well, one of the vendors called me and said that one of their customers has been having problems with their VPN tunnel for months. I sent it on to our LAN engineer and asked him to look into the problem. I got a terse email back saying "all I can do is open a ticket with <redacted>." That kinda rubs me the wrong way. Pick up the phone, call the customer, find out what the problem is. This does a few things. One, it lets the customer know they are not being ignored. Two, you have an understanding of what the problems are. That way, when someone asks what's going on with <redacted>, you can answer with some degree of intelligence.

And that brings up another issue. A couple of weeks ago I found out about a customer who has been having troubles since last January. It took quite a bit, but I pulled together all the resources to address the issue the same day. Yet the problem is still dragging on. I emailed <redacted>'s network engineer again today to get a status and he says, "working on it, I'll let you know." Well, there are a lot of stressed-out people who want the problem fixed. I'm not going to say <redacted>'s network engineer does not know what they are doing, but I will say one of our top engineers has serious questions regarding this person's ability to address the issue. No one wants him to look bad in front of his boss, so I'm trying to be subtle and have one of our senior engineers help him out with his issues.

Where was I going? Oh yea', the life of a DBA. You will find you get tasked with things that seem outside of your core skills. Embrace them and do the best job you can.

Today I was asked to install Oracle 11.2.0.3 on a blade for our development group. I got started after the Unix group stood up the blade, and the problems just kept coming. Packages were not installed, accounts were not set up correctly. Even logged in as oracle I could not copy the Oracle binaries to the blade. I'm still not done; I'm hoping the Unix group will finish installing the required packages and make the changes to /etc/system so I can finish setting up this database. Am I frustrated with this? Yea', I am. Normally I can have a database set up and running in very short order, but having to go back and forth is irritating. There is a baseline install that gets done for blades. Perhaps I will ask the Unix group to provide me with a copy of the document and make some edits to include setting up for Oracle databases. That way I know what to expect when I get the server.

Other than that, there were many meetings. Change control is an important meeting, and you really need to understand what is going on and what the impact of changes is. There is the daily MMM; some folks call it the morning stand-up meeting. The most important meeting had to do with the technology refresh. We discussed configuring RAC for one of our critical systems to deal with a surge in processing that happens once a year. Our old configuration has three servers: one is the primary database server, a second is a standby database located in another state, and the third is a standby database in the rack with the primary database. We set it up that way because I would always prefer to have an extra set of data files in my back pocket in case something happened. This will be changing to configure RAC for performance.

Of course there was also the request to get a cold backup of a production database off the standby database to refresh a test environment. I've cloned databases with RMAN, but now I'm going to figure out if I can create a clone off a standby database.
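
If it works, I expect it to look something like an active duplication with the standby as the source. This is only a sketch of what I plan to try; the connect strings and the new database name are placeholders.

$ rman target sys@standby_tns auxiliary sys@testdb_tns

RMAN> DUPLICATE TARGET DATABASE TO testdb
  FROM ACTIVE DATABASE
  SPFILE
  NOFILENAMECHECK;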

Posted in Database Stuff, Life of a Oracle DBA | Leave a comment

Encrypt those backups

April 2005: Ameritrade loses a backup tape containing information on 200,000 customers.

February 2005: Bank of America loses backup tapes containing information on 1.2 million charge cards.

September 2011: SAIC loses backup tapes covering 4.9 million members of the military who sought medical treatment in the San Antonio area. The data contained names, social security numbers, phone numbers and medical information. This data was not encrypted.

SAIC made the following statement: "Retrieving data on the tapes, which were stolen from a company employee's car is not likely to happen because doing so requires knowledge of and access to specific hardware and software and knowledge of the system and data structures." Excuse me if this does not make me feel better. I can get on eBay to buy the hardware needed and download the software from any number of vendors to do the restore. Yes, if the backup was done from Oracle or DB2 or MS SQL Server, then you would need the software from the vendor. And what if this theft was targeted and the thief knew exactly what they were after?

I can go on and on about backup tapes that are lost out of the back seat of an employee's car. And to be honest, I have transported tapes in my car too. However, when I reflect on transporting critical information in my car, I now get the heebie-jeebies. Now we use a bonded courier to transport backup tapes.

Backup tapes are also being shipped to places like Iron Mountain. But let's face it, the people handling your backup tapes are low-paid employees who could be influenced to look the other way. If someone really wants your backup tapes, there is a way for them to get one.

What are the options for encrypting backups?

  1. Use rman encryption.
  2. Encrypt the backup files on the OS.

For option 1, using RMAN to encrypt, there are two approaches: you can use a password to encrypt the backup, or you can use a wallet.

If the backup is being sent offsite, using a password to encrypt the backup may be your better option.

If the backup is being sent to a Disaster Recovery site to build a standby database, using the wallet may be the better option.
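
As a side note, the wallet-based approach is mostly a one-time configuration. Assuming a wallet has already been created and is open on both the primary and the DR site, it can be as simple as:

RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;

From then on backups are encrypted transparently and can be restored wherever the same wallet is available.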

Right now we are addressing sending a backup offsite, so let's walk through the process of building an encrypted backup using a password.

First find out what encryption algorithms are supported.

SQL> select ALGORITHM_NAME, ALGORITHM_DESCRIPTION

2 from V$RMAN_ENCRYPTION_ALGORITHMS;

ALGORITHM_ ALGORITHM_DESCRIPTION

———- —————————————————————-

AES128 AES 128-bit key

AES192 AES 192-bit key

AES256 AES 256-bit key

SQL>

Of the algorithms available, AES256 is the strongest, so we are going to select AES256 for our encryption.

RMAN> set encryption algorithm 'aes256' identified by A_Passphrase_that_you_select;

executing command: SET encryption

using target database control file instead of recovery catalog

Using the "set encryption algorithm" command we did two things: we set the algorithm that will be used for the backup, and we set the passphrase that will be needed to decrypt the backup.

Next we are going to run the backup like we would normally do.

RMAN> backup as compressed backupset database format '/home/oracle/backup/encrypted_with_password%u%d.backup';

Starting backup at 02-AUG-12

using channel ORA_DISK_1

channel ORA_DISK_1: starting compressed full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00003 name=/opt/oracle/oradata/orcl/sysaux01.dbf

input datafile file number=00001 name=/opt/oracle/oradata/orcl/system01.dbf

input datafile file number=00002 name=/opt/oracle/oradata/orcl/example01.dbf

input datafile file number=00004 name=/opt/oracle/oradata/orcl/undotbs01.dbf

input datafile file number=00006 name=/opt/oracle/oradata/orcl/users01.dbf

channel ORA_DISK_1: starting piece 1 at 02-AUG-12

channel ORA_DISK_1: finished piece 1 at 02-AUG-12

piece handle=/home/oracle/backup/encrypted_with_password0dnhl9n6ORCL.backup tag=TAG20120802T170333 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:02:35

channel ORA_DISK_1: starting compressed full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

including current control file in backup set

including current SPFILE in backup set

channel ORA_DISK_1: starting piece 1 at 02-AUG-12

channel ORA_DISK_1: finished piece 1 at 02-AUG-12

piece handle=/home/oracle/backup/encrypted_with_password0enhl9s1ORCL.backup tag=TAG20120802T170333 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 02-AUG-12

RMAN>

How do we decrypt the backup when we need to restore? It's this simple.

RMAN> set decryption identified by A_Passphrase_that_you_select;

executing command: SET decryption

using target database control file instead of recovery catalog

RMAN> restore database;

Starting restore at 02-AUG-12

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=20 device type=DISK

skipping datafile 1; already restored to file /opt/oracle/oradata/orcl/system01.dbf

skipping datafile 2; already restored to file /opt/oracle/oradata/orcl/example01.dbf

skipping datafile 3; already restored to file /opt/oracle/oradata/orcl/sysaux01.dbf

skipping datafile 4; already restored to file /opt/oracle/oradata/orcl/undotbs01.dbf

skipping datafile 6; already restored to file /opt/oracle/oradata/orcl/users01.dbf

restore not done; all files read only, offline, or already restored

Finished restore at 02-AUG-12

RMAN>

Okay, I did not need to restore the database, but it’s good to know that this works.

Now you don't have an excuse not to encrypt your backups.

Posted in Database Stuff, Security | Tagged | Leave a comment

Security in the Cloud. Install #1

There are a number of different vendors providing cloud services. You can buy space and processing power from vendors like IBM, or Amazon or many other service providers. In the interest of full disclosure, I use cloud services all the time for email, backups and web services. These are services that are critical to my business.

There are a number of advantages to using the cloud. You don't have to worry about maintaining a complex environment or keeping a team of highly paid systems administrators on staff. You are not losing sleep over your backups. Well, maybe you should be losing sleep over your backups; we will come back to that. You can also purchase disaster recovery services, giving you the ability to get back online quickly with minimal disruption to the business. The public cloud can give a business the edge it needs to succeed.

In the public cloud one down side is you don’t know who your neighbors are. There have been cases where a group like Anonymous or another criminal element gets into the same cloud where you are and your cloud becomes dark and stormy. In Texas the FBI raided a data center taking some businesses off line. http://www.wired.com/threatlevel/2009/04/data-centers-ra/. And more recently the FBI raided a data center in Reston Virginia taking many businesses offline. http://www.pcmag.com/article2/0,2817,2387447,00.asp.

Do you see where I’m going with this? If you don’t have full control of your systems, there can be activity on the systems that can cause law enforcement to come in and take the cloud for evidence; effectively taking your business offline. Larger organizations may want to invest in a private cloud to keep control of who is playing in their back yard.

Another down side is that you have not personally vetted your system administrators. An Oracle DBA can access everything in the database. They can make changes to the database. They can extract information from the database. And to make it more interesting, in a lot of cases they can go back and clean up the audit log. You really need to know who is watching the store. Oracle has tools to protect the database from a rogue database administrator. Ask a lot of questions about how the cloud provider protects your data from an insider.

What can you do to protect yourself from an insider threat?
1) Make sure Oracle Database Vault is installed and configured properly. Database Vault allows the database administrator to do their job without accessing production data.
2) Audit trails should go to either syslog or Oracle Audit Vault. I would recommend Audit Vault, which can send you alerts when someone accesses sensitive data.
3) Audit what needs to be audited and review the audit trail (see the sketch after this list).
4) Perform a risk assessment on your data and identify all sensitive data. Once you have identified all sensitive data, encrypt it and audit it.
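
To give item 3 some shape, auditing access to a sensitive table might look like this. The schema, table and column names here are made up for illustration, not taken from any real system.

-- standard auditing of a sensitive table
AUDIT SELECT, UPDATE, DELETE ON app.customer_pii BY ACCESS;

-- or fine-grained auditing that only fires when the SSN column is touched
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'CUSTOMER_PII',
    policy_name     => 'AUDIT_SSN_ACCESS',
    audit_column    => 'SSN',
    statement_types => 'SELECT,UPDATE');
END;
/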

Another down side: is your sensitive data encrypted? If it is, is the encryption done once the data gets to the cloud, or at the workstation? If the data is encrypted in the cloud, I can read the data as it gets pushed over the internet to the cloud. I can also read the data sitting in memory prior to it getting encrypted. If you are encrypting the data on the workstation before pushing it to the cloud, how are you dealing with key management? Is a copy of the private key located on the workstation?

For your consideration:

If the application / data is critical to your business, then:
1) Have a backup plan. Putting your data in the cloud does not necessarily protect the data; Amazon users found this out the hard way. You may want to have backups of your applications and data, or pay for DR services. When the cloud is shut down because of a bad neighbor and you have a backup of your applications and data, you can get back online. If you have DR services, you can get back online. If you are not in possession of your backups, then you are at the mercy of the law enforcement entity that took your data.
2) Consider using a hybrid private / public cloud. Have your critical applications running on a private cloud, using a public cloud for surge processing, non-critical applications and backups.
3) Consider using the public cloud as your backup. I do this in my businesses; I back up my critical files to local storage and also back up to the cloud. Yes, I believe in wearing a belt and suspenders. By keeping a backup set in the cloud, if I were to lose all my hardware, I could still recover by pulling my backup down from the cloud.

What you must do at least once a year:
1) Run through a complete restore of your data. I have seen more than a few times where complex systems were backed up and no one ever tried to do a restore. Then when a real-life disaster happens, people are running around trying to restore services they have never restored before. Believe me, the end result is not pretty.
2) Run through a complete risk assessment. Let's face it: data changes, schemas change, copies are made. Ask yourself this simple question: can you identify where all your sensitive information is? If not, you are at risk. Okay, this does not always apply to the cloud, but it's something you need to do.
3) Review need to know. Yes, this does not apply only to the cloud, but I thought I would repeat one of my many mantras.
4) Go through the SLA. Things change. Yea', that's ambiguous, but it's true.
5) If you are backing up to the cloud, then encrypt your backups. Do you really want your data sitting out in the open? It's not that difficult; there is no reason not to encrypt your backups. I really hate reading the Washington Post and learning yet another company or government entity lost a set of unencrypted backups with PII.

Posted in Database Stuff, Security | Tagged | Leave a comment

Fix the standby database.

This has happened in the past: a datafile gets corrupted, or there is a gap in the archive logs. How do you fix it? In the past I would put the tablespace into backup mode, take a copy of the datafile, force a log switch to archive the current log, then SCP the current archive log files and the datafile over to the standby database.

Once over at the standby database: shut down the standby database, copy the datafile and archive log files to their proper places, recover the standby database, then put the standby database back into managed recovery.
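
Roughly, that old manual fix looked like the following; the tablespace, file names and paths are placeholders, not the real ones.

On the primary:

SQL> alter tablespace users begin backup;
$ cp /u01/oradata/prod/users01.dbf /stage/users01.dbf
SQL> alter tablespace users end backup;
SQL> alter system archive log current;
$ scp /stage/users01.dbf oracle@standby:/u01/oradata/prod/

On the standby:

SQL> alter database recover managed standby database cancel;
-- copy the datafile and the shipped archive logs into place, then
SQL> alter database recover managed standby database disconnect from session;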

But things have changed. Port 22 is closed between the primary database and the standby database. File replication is done through the SAN. Sadly that complicates things. So instead of a simple copy, I now have to engage two SAN engineers to break the replication, one on the primary side and a second on the standby side. I then need to engage the Unix admin to mount the LUN on the standby database server.

The SAN engineers require change control to break the replication, the Unix admin requires change control to mount the LUN. Do you see how what is a quick simple fix can become a problem?

Posted in Uncategorized | Leave a comment

What about NULL?

What is NULL?  Is NULL a space? Is NULL a NULL string?

NULL is unknown. If null is unknown, then you cannot do anything with null.
You cannot perform any operation on null: if Y := 5 + UNKNOWN, then Y = UNKNOWN.
You cannot compare null to anything: IF 5 > UNKNOWN THEN ... END IF; will never evaluate to true, because we just don't know.

This is part of what makes NULL so difficult to work with.  NULL complicates logic. So just say no to null.
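
A quick illustration from SQL*Plus of how comparisons with NULL behave; only the IS NULL predicate ever matches:

select 'equal' from dual where null = null;        -- no rows, NULL = NULL is unknown
select 'not equal' from dual where null <> null;   -- no rows, also unknown
select 'is null' from dual where null is null;     -- one row, IS NULL is the only way to test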

Posted in Database Stuff | Leave a comment

Social engineering does happen.

There have been a few times when social engineering was attempted with one of my customers. Sometimes I wind up taking the phone call after someone calls the help desk looking for information we would not normally give out. On one occasion someone from the help desk walked back to my office and asked me to take a call because a customer was upset about our security configuration. The help desk person did what all help desk people do: help the customer with a problem.

Someone doing social engineering may try to make it sound like the information they want is critical, and may resort to threats and intimidation. Always stop and ask yourself: would I give this information to a bad actor? What could be done with the information if it fell into the wrong hands? Was the person who called vetted?

The call went something like this.

Customer: “We are setting up a new account.  I have our security person on the line and he wants to ask you a few questions.  Can you help us out?”

Me: "Sure, what's the problem?"

Security person: "Hi, my name is Jim and I am responsible for the security of our information that we send you. I have to approve sending sensitive information and have a couple of questions."

Me: “Shoot.”

Jim: "What types of firewalls do you use? What is their patch level? What is the web server? What is its patch level? What is the database? What is its patch level? Is the database encrypted? What encryption are you using? Do you encrypt the backups? How do you enforce password security? How complex are your passwords? How can I extract information from the database once I send it to you?"

Me: "I'm sorry Jim, but I cannot give out that information. If you are setting up an account, please read the FAQ. That will answer all the questions you need to set up an account."

Jim: in a stern voice “Listen, if I don’t get this information then I will not approve sending you the data you require.”

Me: “Jim, if you decide not to send the data that is your decision, but I will not be telling you anything about our security, period, end of story.”

Customer: “Please we just need to know how you secure the data so we can send you the data you require.”

Me: interrupting “Madam your security person can explain to you why you do not tell anyone how you secure information. I am going to instruct the help desk that all questions that deal with security be directed to me or security.”

Jim: “Listen I will call the director and have you fired if you don’t give me the information.”

Me: "Really? Goodbye."

Posted in Security | Tagged | Leave a comment

PII is valuable to criminals

The theft of PII is no longer relegated to petty criminals and hackers; organized crime rings are now targeting PII and selling it on the black market. Some nations even tolerate criminal hacking to steal PII, as long as the criminals only steal outside of their nation's borders. The buyers of PII are interested in stealing an identity for financial fraud, leaving a person or company to clean up the mess, often to the tune of thousands of dollars in legal fees and hundreds of hours working to explain "It was not me!" The criminals don't care what damage they do to your customers, business partners and you.

The sad part is, the technologies to protect PII and sensitive data are readily available and mature but many organizations have not implemented these technologies. Is it because of the cost associated with implementing the technologies?

Cost of breaches

What is the cost of information security? On the surface there is the cost of additional hardware, software and people to manage and run the systems. But there is another side of the equation that we get to read about in the papers. TJ Maxx lost data on 45.7 million credit and debit cards. Forrester Research estimated that the final cost to TJ Maxx would be $500 million and could approach $1 billion. I don't want to be the person explaining to the CEO that we could have prevented that. How many years will pass before people stop associating TJ Maxx with a data breach?

Just so you don't think I'm picking on TJ Maxx, here are other data breaches that have hit the papers. Citibank lost PII on 200 thousand card holders. CardSystems lost data on 40 million cards even though a prior audit stated they were compliant with the Payment Card Industry Data Security Standard (PCI); a subsequent audit found CardSystems was no longer compliant. Health Hospitals Corp lost PII and medical data on 1.7 million patients when an unencrypted backup tape was left in a car.

All of these breaches have a few things in common. Business partners were hit financially by having to cover the cost of financial fraud. Consumers were hit with added stress because their information was available to criminals. Many consumers also had to spend hundreds of hours and thousands of dollars cleaning up the mess left by criminals.

Organizations spend millions of dollars branding their reputation. All it takes is one of these events to tarnish the reputation of any organization; creating the need to spend millions on damage control and litigation or go out of business.

Posted in Security | Tagged | Leave a comment

If you don’t measure it, you can’t protect it

I have always felt safe in my home, until a few days ago when I locked myself out of my house. I spent about 30 minutes looking for a spare house key my wife may have left in a car that did not exist. Finally I made the decision that I was going to have to break into my own house. Once I made that decision, two minutes later I was standing in my living room. Since then, I have had a security assessment done, changed locks and added some features to make my home safer. The security assessment was a key part of securing my house, telling me where I needed to put better locks and improve sensor placement.

Step one: do a risk assessment of your environment. With this risk assessment you can make intelligent decisions on the mitigations you need to put in place to protect your database.

I always start with this simple template. First, name the high-level risk element: backup tape. Then name the risk: loss. Then measure the likelihood of a tape being lost from 1 to 10. If you score it a 10, you are saying the risk event is about to happen; if you score the likelihood a 0, you are saying it is not going to happen. Once you have measured the likelihood of the event, measure the impact from 1 to 10. A score of 10 says this will put you out of business; a score of 0 says you can ignore this risk. Now that you know the risk, the likelihood and the impact, you may hold off on mitigation and strength. But let's fill in mitigation for now: encrypt backups, and we will give that a strength of 8. I rarely give a score of 10 on mitigation because "stuff happens." We can add other mitigations to the same risk event: all backup tapes will be transferred by bonded courier, strength 7. By combining the two mitigations you have decreased both the likelihood and the impact of the risk event. A filled-in row of the template might look like the example below.
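
To make the template concrete, here is the backup-tape example filled in. The likelihood and impact scores are made-up numbers for illustration; the mitigations and their strengths are the ones discussed above.

Risk element: Backup tape
Risk:         Loss
Likelihood:   6 (illustrative score)
Impact:       9 (illustrative score)
Mitigations:  Encrypt backups (strength 8); bonded courier transport (strength 7)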

Identify Confidential Data

The risk assessment should identify sensitive information and how the information moves through your systems. Your database has information that your organization would consider sensitive. This can be PII, financial data, sales data; the list goes on. When locking down information, you want to place some focus on confidential information. The likelihood of confidential data leaking may not be greater than that of other information in your database, but the impact of the data leaking or getting corrupted would be greater.

Posted in Security | Leave a comment