Life of an Oracle DBA: Big Data restore

Over the years my opinion of what big data is has changed. Way back in the VAX days, a 1G database was not only big data, it was fricken huge data. Now words like giga, tera, and peta are thrown around casually. Exa is still reserved for the really fricken big.

Now, back to the restore. The customer has a data warehouse that has been populated with many years' worth of data, and the consumers of that data are always asking for more. The easy way to keep a backup of this massive amount of data is to create a standby database. So we have a standby database, archive logs are shipped to it, and I get a report every morning on its health. Everything is hunky dory, or so I thought.

The customer was doing a big data push and, for reasons beyond my control, it was decided that Data Guard would be shut down. I resisted and was overruled. After two months of the standby database being shut down, I was finally told to bring the standby instance back in sync with the primary database. Time for the gremlins to come into the picture. Issues I faced:

  1. A DBA did an OPEN RESETLOGS on the primary database instance. I learned this after pulling 2.5T of archive logs down the pipe.

  2. So I took a differential backup of the database starting at the oldest SCN in the standby database. Again, after copying down 2.5T of data, one of the managers got a bit anxious and decided to break the SAN replication before the files were finished replicating. The incomplete files were used to do an RMAN recover on the standby database, and it was a failure.

  3. Do a complete cold backup of the data warehouse:

    1. Send some large fast drives to the data center.

    2. Encrypt the drives

    3. Courier the drives to the DR site

    4. Decrypt and uncompress the backup

    5. Restore the backup

    6. scp the archive logs to the DR site.

    7. Recover the standby database (a rough sketch of this step follows the list).

  4. Have a beer and get some sleep.
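For that last recover step, once the shipped archive logs were staged on disk at the DR site, the commands would look roughly like this. This is a sketch only; the staging path is made up, not the real one.

# on the standby: register the shipped logs, then restart managed recovery
RMAN> connect target /
RMAN> catalog start with '/stage/arch/' noprompt;
SQL> alter database recover managed standby database disconnect from session;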

Lessons learned:

  1. Under no circumstances shall anyone shut down the standby database.

  2. Under no circumstances shall anyone do an OPEN RESETLOGS on any database that has a Data Guard standby.

  3. Do a cost-benefit analysis of shipping drives to the primary site before doing long copies over a pipe that is shared by many. For turnaround time, ship the drives. We have now decided to keep a set of drives at the data center so we can do a much quicker restore of the database. The decision you make will be based on your patience and your pain tolerance for having a bad standby database.

    1. If there is only a handful of archive logs missing, scp is a wonderful thing.

    2. If there are many more archive logs to transfer, put them on a drive and ship it to the DR site to recover the standby database.

Life of an Oracle DBA: What is choking my Oracle database server?

The week was dominated by tuning. The database server was pegged at 100% CPU all week. I have tuned the offending queries to within an inch of their lives. The customer wants to know why this is happening, and I'm asking the business end what is driving more users to be active on the website.

Early in my career I learned about the top command. With the top command I learned a few things.

CPU is pegged at 100%, and in this example it has been pegged all day long. The user experience is suffering.

LDAP (ns-slapd) is the top process.

Oracle accounts for all the other top processes.

load averages: 90.29, 92.29, 76.555 13:48:45

390 processes: 297 sleeping, 76 running, 17 on cpu

CPU states: 0.0% idle, 95.7% user, 4.3% kernel, 0.0% iowait, 0.0% swap

Memory: 32G real, 1285M free, 28G swap in use, 5354M swap free


1069 ds5user 45 32 0 0K 0K run 31:26 3.04% ns-slapd

5416 oracle 11 23 0 0K 0K run 4:14 1.15% oracle

4322 oracle 11 22 0 0K 0K run 7:59 1.13% oracle

4916 oracle 11 22 0 0K 0K run 4:41 1.11% oracle

5937 oracle 11 23 0 0K 0K run 1:05 1.10% oracle

5494 oracle 11 22 0 0K 0K run 2:13 1.09% oracle

4910 oracle 11 22 0 0K 0K run 4:59 1.08% oracle

Let's dig into a few of the top Oracle processes and see the queries that are running.

SQL> @pid 5416

old 10: and spid = &1

new 10: and spid = 5416


SPID SID SERIAL# USERNAME SQL_ID

———— ———- ———- ———- ————-

5416 953 55 IFSUSER butxjxuvw2m3y

Here is the pid.sql query I use.

SQL> l

1 select p.spid, s.sid, s.serial#, s.username, s.sql_id from v$process p, v$session s where s.paddr = p.addr and s.status = 'ACTIVE' and spid = &1


So, SQL_ID butxjxuvw2m3y is my top CPU consumer. I want to see the query text and its execution plan.
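While a statement is still in the cursor cache, dbms_xplan.display_cursor will pull both the text and the plan. A quick sketch using the SQL_ID from above:

SQL> select * from table(dbms_xplan.display_cursor('butxjxuvw2m3y', null, 'TYPICAL'));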

SQL_ID butxjxuvw2m3y


select .. from VINBOX this_ where (this_.STATUS<>:1 and this_.RECEIVED>=:2 and (this_.ASSIGNEE=:3 or (this_.RESO in (:4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14) and (this_.ASSIGNEE is null)))) order by lower(this_.RECEIVED) desc

Plan hash value: 3079052936


| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |


| 0 | SELECT STATEMENT | | | | | 6300 (100)| |

| 1 | PX COORDINATOR | | | | | | |

| 2 | PX SEND QC (ORDER) | :TQ10001 | 873 | 122K| | 6300 (8)| 00:00:13 |

| 3 | SORT ORDER BY | | 873 | 122K| 312K| 6300 (8)| 00:00:13 |

| 4 | PX RECEIVE | | 1 | 77 | | 3 (0)| 00:00:01 |

| 5 | PX SEND RANGE | :TQ10000 | 1 | 77 | | 3 (0)| 00:00:01 |

| 6 | TABLE ACCESS BY INDEX ROWID | INSTANCE | 1 | 77 | | 3 (0)| 00:00:01 |

| 7 | NESTED LOOPS | | 873 | 122K| | 6270 (8)| 00:00:13 |

| 8 | NESTED LOOPS | | 871 | 58357 | | 3831 (11)| 00:00:08 |

| 9 | PX BLOCK ITERATOR | | | | | | |

| 10 | INDEX FAST FULL SCAN | FORM_IDX06 | 871 | 36582 | | 1185 (33)| 00:00:03 |

| 11 | TABLE ACCESS BY INDEX ROWID| CUSTOM_FORM | 1 | 25 | | 3 (0)| 00:00:01 |

| 12 | INDEX RANGE SCAN | CUSTOM_FORM_IDX03 | 1 | | | 2 (0)| 00:00:01 |

| 13 | INDEX RANGE SCAN | INSTANCE_IDX03 | 1 | | | 2 (0)| 00:00:01 |


I did this for a few of the top processes and found everyone is clicking on the inbox link. Now this query has been tuned in the past, so how am I going to deal with this? First thing: as the app sits right now, a person can click the link over and over again. We are putting in a change to disable the link until the inbox comes up. That will save me from users who keep clicking when they don't get their inbox fast enough. The next thing I need to do is walk through the query and see where I can optimize it.

Now let's see what the user load on Oracle looks like.

SQL> get users

select username, sql_id, count(*) from v$session where status = 'ACTIVE' group by username, sql_id order by 1,3

SQL> /


USERNAME SQL_ID COUNT(*)

———- ————- ———-

APPUSER 9vztbhnd0r47b 1


APPUSER 4c176cpxqvyrs 5

APPUSER 24mdr8u4u6vyz 10

APPUSER 3dkmsrzyjbuj5 44

RLOCKARD 5a921h5xhxxcg 1

8820rg9jt4c14 2


18 rows selected.

There are two versions of the inbox query running right now, totaling 54 active sessions. The inbox is fairly static during the day; can we use a materialized view? There is a cost: what type of refresh should we be using? One hour, 30 minutes, 5 minutes?
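If we go the materialized view route, the rough shape is below. The names and the five-minute refresh are placeholders for illustration only; the real VINBOX definition and refresh interval would need to be worked out against the business requirement.

-- illustrative only: complete refresh every 5 minutes
create materialized view app.mv_inbox
  build immediate
  refresh complete
  start with sysdate next sysdate + 5/1440
as
  select * from app.vinbox;

The application (or a rewritten inbox query) would then read from the materialized view instead of hitting VINBOX on every click.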

There will come a time when the hardware you have in place will not support the user load. In the past month we have seen a 300% increase in the number of queries hitting the database and a 100% increase in the user base.

The customer wants a breakdown by date/hour/user/sql_id/exec count to see what is hammering the system. From this report I can look at the date, time of day and user and find out what query is getting executed and how many times. This is one of those reports that is very verbose, so I spool it to a file and bring it into Excel to present to the customer. I also ran the same report without sql_id to show date/hour/user/sql count against the database. Both of these reports will give you a week's worth of data to help you drill down into usage times.

set linesize 256

set pagesize 10000

select to_char(trunc((sample_time),'HH'),'MMDDHH24:MI'), username, sql_id, count(*) from DBA_HIST_ACTIVE_SESS_HISTORY h, dba_users u where h.user_id = u.user_id and u.username = 'IFSUSER' group by to_char(trunc((sample_time),'HH'),'MMDDHH24:MI'), username, sql_id order by 1,2;

081709:00 APPUSER 0ut13pyz2862b 1090

081709:00 APPUSER 624p6ufu0vf67 1590

081709:00 APPUSER 0887h37jcvrq0 1677

081709:00 APPUSER 3dkmsrzyjbuj5 14854

Well the sql_id 3dkmsrzyjbuj5 is executing a lot so I wanted to dig into that to see what it is. No surprise, it’s the inbox query … AGAIN.

We have ordered more hardware to put the database on. It's bigger: more CPUs, more memory, yada yada yada. When the hardware is in, we will migrate this database from Solaris/Oracle 10g to Linux/Oracle 11g.

Life of an Oracle DBA: Grumps, Data Guard, upgrades and tuning, oh my

There are very few things that will get my dander up. One is wasting my time; the other is "Not My Job." Last week I experienced both; it took a bit, but the person who told me it was not their job wound up doing what needed to be done. This caused quite a bit of my time to be wasted. This gets back to my mantra: "You may be asked to do things that are outside of your job description; if so, embrace it and do the best job you can." You will make yourself more valuable to your employer, customers and teammates.

I also experienced a problem with Data Guard that I had not seen before.

ORA-00342: archived log does not have expected resetlogs SCN 137433238

ORA-00334: archived log: '/var/opt/data/sor01/arch/1_191_768933256.dbf'

This was weird. This ORA error has to do with a different incarnation of the database. What caused the issue? My hunch: someone did an OPEN RESETLOGS.

I checked the max log sequence generated on the primary against the max log sequence received and applied on the standby.

primary> select thread#, max(sequence#) “Last Primary Seq Generated”

from v$archived_log val, v$database vdb where val.resetlogs_change# = vdb.resetlogs_change# group by thread# order by 1;

THREAD# Last Primary Seq Generated

———- ————————–

1 847

standby> select thread#, max(sequence#) “Last Standby Seq Received” from v$archived_log val, v$database vdb where val.resetlogs_change# = vdb.resetlogs_change# group by thread# order by 1;

THREAD# Last Standby Seq Received

———- ————————-

1 847

But wait, there's more:

standby > select thread#, max(sequence#) "Last Standby Seq Applied" from v$archived_log val, v$database vdb where val.resetlogs_change# = vdb.resetlogs_change# and val.applied in ('YES','IN-MEMORY') group by thread# order by 1;

THREAD# Last Standby Seq Applied

———- ————————

1 190

Okay, I see where the standby database wants log 191. But what is the real issue? I needed more information to figure out what the problem was.

What is the SCN stored in the controlfile?

select db_unique_name, to_char(current_scn, '999999999999999999') as Standby_Current_SCN from v$database;


DB_UNIQUE_NAME STANDBY_CURRENT_SCN

—————————— ——————-

<redacted> 199126920

I still have not found the root cause, and I wasted a lot of time trying to get the archive logs to apply. So I decided to get an incremental backup of the primary database starting with the oldest SCN applied to the standby database.

standby> select min(to_char(fhscn, '99999999999999999999')) as SCN_to_be_used from X$KCVFH;




So now I have a place to start my incremental backup. But I prefer to live on the conservative side of life. These SCNs are not matching up very well, so I'm going to take the lower of the two SCNs for my backup.


After the backup is done, scp the backup to the standby site and do a database restore. But wait, port 22 is not open, so I'm going to have to rely on SAN replication. I'll let you know how the restore goes; the files are still replicating.
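For anyone following along, the general shape of an SCN-based roll-forward is below. This is a sketch only; the SCN and paths are placeholders, not the real values from this system, and depending on what happened on the primary you may also need a fresh standby controlfile.

# on the primary: back up everything changed since the chosen SCN
RMAN> backup incremental from scn 123456789 database format '/backup/stby_roll_%U';

# move the pieces to the standby site, then on the standby:
RMAN> catalog start with '/backup/' noprompt;
RMAN> recover database noredo;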

Migrate 10G on Solaris to 11G on Linux

There are three small databases that needed to be upgraded from 10g on Solaris to 11g on Linux. Because they were small, I decided the easiest way to do the upgrade was to use Data Pump. The next set have a lot of data and are moving to a RAC environment, so I will experiment with transportable tablespaces.
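For small databases, the Data Pump round trip boils down to something like this (a sketch; the directory object, dump file names and accounts are placeholders, not the exact commands used here):

# on the 10g Solaris source
expdp system full=y directory=DATA_PUMP_DIR dumpfile=small_db.dmp logfile=small_db_exp.log

# copy the dump file to the new server, then on the 11g Linux target
impdp system full=y directory=DATA_PUMP_DIR dumpfile=small_db.dmp logfile=small_db_imp.log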

Tuning Oracle

We have a web-based OLTP application where the database was experiencing 100% CPU. Logins were slowing down, users were backing up. In short, it was an extreme-terror kind of day. Looking at the Oracle database I found that most of the users were backing up on the same query. My first question was: what was the trigger for hundreds of users to start executing the same query at the same time, all day long? The second question was: what can I do to speed things up?

It turns out our 75,000+ user base had received a message to do something, so that explained my first question. The second question was a bit harder to address. There was one index that was getting a fast full index scan for each query, and it started showing up in v$session_longops. The index is a compressed index, and when I looked at it closely I found the columns were ordered from higher to lower cardinality. Index key compression works by factoring out repeating leading columns, so to get better compression I could reverse the column order and put the low-cardinality columns first. And because this index mostly experiences fast full index scans, reducing the number of blocks in the index directly reduces the work each scan does.

SQL> drop index <schema>.<index_name>;

SQL> create index <schema>.<index_name> on <schema>.<table_name> (lowest_cardinality, next_lowest, highest_cardinality) tablespace indx compress 2;

A generic compressed index example (sorry, I can't give specifics, NDA and all ya'know):

create index app.cust_indx on app.custs (sex, eye_color, cust_id) tablespace indx compress 2;

By changing the order of the columns in the compressed index, the number of blocks needed to store the index decreased by 50%. The load finally returned to quasi-normal and requests were getting serviced in a timely manner. It looks like we have maxed out the Oracle database server. Thank goodness we are going to be upgrading it from a Solaris Oracle 10g single instance to Oracle 11g RAC on Linux.

The life of an Oracle DBA

So you want to be an Oracle DBA?  What does a DBA do? What does my week look like?

Frequently my wife asks me what my days are like and just what I do at work. Well, I'm going to start telling you. My normal answer is that a DBA's job is extreme boredom punctuated with moments of stark terror. Lately there has been more terror than boredom.

So this week started off well enough. I was in the middle of recovering a standby database when I discovered there were gaps in the archive logs. I spent most of the weekend trying to get the standby database restored, then realized I would have to ship the archive logs down from the primary site. The problem centered on control_file_record_keep_time being set too low, so the archive log records had aged out of the control file. I had to do a couple of things.

1) Extract the archive logs from ASM. This can be tricky; then I discovered the package dbms_file_transfer. This package allows you to pull files out of ASM so you can deal with them as real files. All I had to do was create two directory objects: one for the ASM source and a second for the filesystem target.

2) The second part of the problem was getting the files down to the standby database. Well, port 22 is closed, so I have to get the SAN administrator to replicate the files, then get the Unix admin to mount the LUN, then go through change control. Bottom line: someone else needed to transfer some files, so I was asked to defer transferring the data until the other person is done. I hope the transfer will be complete by Monday so I can finish fixing the standby database. So far it has taken over a week to fix something that should have taken me less than one day.

The following code extracted the logs that I needed to the file system.


declare
  -- copy archive log sequences 215 through 440 out of ASM to the filesystem
  x     number := 214;
  fname varchar2(256);
begin
  while x < 440
  loop
    x := x + 1;
    fname := '1_' || to_char(x) || '_773923456.dbf';
    -- ARCHIVE_SOURCE points at the ASM archive location, ARCHIVE_TARGET at the filesystem
    dbms_file_transfer.copy_file('ARCHIVE_SOURCE', fname, 'ARCHIVE_TARGET', fname);
  end loop;
end;
/
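For reference, the two directory objects were created with something like the statements below. The paths are made up; point the source at your ASM archive location and the target at a filesystem staging area.

-- hypothetical paths, for illustration only
create directory archive_source as '+DATA/PROD/ARCHIVELOG/';
create directory archive_target as '/var/opt/data/stage/arch/';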



Next issue for the week, and this is what I really love to do: tune the database. I got a call from an ole' friend whose query was taking a while to execute. He is a sharp Unix admin and knows enough about Oracle and SQL to generate the reports that he needs. His email was short and sweet: here's the code, it's slow, fix it. I spent about an hour rewriting his query and took his report from executing in over two days down to less than two minutes. It is important to understand access paths and how the optimizer works.

Part of my job is also to deal with external customers. Some customers are software vendors that provide services to my customer's customers. (My customer is a very small government agency. Their customers are banks and other government agencies.) I don't know how it happened, but if the help desk cannot answer the question or address the issue, either I or another friend gets the call, depending on the problem. All data transfer issues go to me. All organization issues go to me. Custom reports go to me. Korn shell scripts go to my friend. Thank goodness for small favors.

So this week I spent a lot of time on conference calls with large external customers. Their issues are simple: we have to get them the information securely, accurately and reliably. I'm happy to say the issues are getting addressed; most of them stem from the processes that all the organizations have in place. I sometimes have to remind myself to respect the process. But there are times when the process is broken and needs to be addressed.

Oh, that brings up another problem: I'm not a LAN engineer and I don't have access to set up VPN tunnels. So if there is an issue setting up a VPN tunnel, I send it on to someone else. Ya' need to understand that. Well, one of the vendors called me and said that one of their customers had been having problems with their VPN tunnel for months. I sent it on to our LAN engineer and asked him to look into the problem. I got a terse email back saying "all I can do is open a ticket with <redacted>." That kinda rubs me the wrong way. Pick up the phone, call the customer, find out what the problem is. This does a few things. One, it lets the customer know they are not being ignored. Two, you have an understanding of what the problems are. That way, when someone asks what's going on with <redacted>, you can answer with some degree of intelligence.

And that brings up another issue. A couple of weeks ago I found out about a customer who has been having troubles since last January. It took quite a bit, but I pulled together all the resources to address the issue the same day. Yet the problem is still dragging on. I emailed <redacted>'s network engineer again today to get a status, and he says, "working on it, I'll let you know." Well, there are a lot of stressed-out people who want the problem fixed. I'm not going to say <redacted>'s network engineer does not know what they are doing, but I will say one of our top engineers has serious questions regarding this person's ability to address the issue. No one wants him to look bad in front of his boss, so I'm trying to be subtle and have one of our senior engineers help him out.

Where was I going? Oh yea', the life of a DBA. You will find you get tasked with things that seem outside of your core skills. Embrace them and do the best job you can.

Today I was asked to install Oracle on a blade for our development group. I got started after the Unix group stood up the blade, and the problems just kept coming. Packages were not installed, accounts were not set up correctly. Even logged in as oracle I could not copy the Oracle binaries to the blade. I'm still not done; I'm hoping the Unix group will finish installing the required packages and make the changes to /etc/system so I can finish setting up this database. Am I frustrated with this? Yea', I am. Normally I can have a database set up and running in very short order, but having to go back and forth is irritating. There is a baseline install that gets done for blades. Perhaps I will ask the Unix group to provide me with a copy of that document and make some edits to include setting up for Oracle databases. That way I know what to expect when I get the server.

Other than that, there were many meetings. Change control is an important meeting, and you really need to understand what is going on and what the impact of the changes is. There is the daily MMM; some folks call it the morning stand-up meeting. The most important meeting had to do with the technology refresh. We discussed configuring RAC for one of our critical systems to deal with a surge in processing that happens once a year. Our old configuration has three servers: one is the primary database server, a second is a standby database located in another state, and the third is a standby database in the rack with the primary database. We set it up that way because I would always prefer to have an extra set of data files in my back pocket in case something happened. This will be changing to a RAC configuration for performance.

Of course there was the request to get a cold backup of a production database off the standby database to refresh a test environment. I've cloned databases with RMAN, but now I'm going to figure out if I can create a clone off a standby database.

Encrypt those backups

April 2005, Ameritrade loses a backup tape containing information on 200,000 customers.

February 2005, Bank of America loses backup tapes containing information on 1.2 million charge cards.

September 2011, SAIC loses backup tapes of 4.9 million members of the military who sought medical treatment in the San Antonio area. The data contained names, Social Security numbers, phone numbers and medical information. This data was not encrypted.

SAIC made the following statement: "Retrieving data on the tapes, which were stolen from a company employee's car is not likely to happen because doing so requires knowledge of and access to specific hardware and software and knowledge of the system and data structures." Excuse me if this does not make me feel better. I can get on eBay to buy the hardware needed and download the software from any number of vendors to do the restore. Yes, if the backup was done from Oracle or DB2 or MS SQL Server, then you would need the software from the vendor. What if this theft was targeted and the thief knew what they were after?

I could go on and on about backup tapes that are lost out of the back seat of an employee's car. And to be honest, I have transported tapes in my car too. However, when I reflect on transporting critical information in my car, I now get the heebie-jeebies. Now we use a bonded courier to transport backup tapes.

Backup tapes are also shipped to someplace like Iron Mountain. But let's face it, the people handling your backup tapes are low-paid employees who could be influenced to look the other way. If someone really wants your backup tapes, there is a way for them to get one.

What are the options for encrypting backups?

  1. Use RMAN encryption.
  2. Encrypt the backup files at the OS level.

For option 1, using RMAN to encrypt, there are two choices: you can use a password to encrypt the backup, or you can use a wallet.

If the backup is being sent offsite, using a password to encrypt the backup may be your better option.

If the backup is being sent to a Disaster Recovery site to build a standby database, using the wallet may be the better option.
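For the wallet route, the RMAN side is just configuration (a sketch, assuming a TDE wallet has already been created and is open on the instance):

RMAN> configure encryption algorithm 'AES256';
RMAN> configure encryption for database on;

With that in place backups are encrypted transparently, and they can be restored on any host that has access to the same wallet; no passphrase has to travel with the backup.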

Right now we are addressing sending a backup offsite, so let's walk through the process of building an encrypted backup using a password.

First find out what encryption algorithms are supported.
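The supported algorithms can be listed from v$rman_encryption_algorithms; the output below most likely came from a query along these lines:

SQL> select algorithm_name, algorithm_description from v$rman_encryption_algorithms;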




———- —————————————————————-

AES128 AES 128-bit key

AES192 AES 192-bit key

AES256 AES 256-bit key


Of the algorithms available, AES256 is the strongest, so we are going to select AES256 for our encryption.

RMAN> set encryption algorithm 'aes256' identified by A_Passphrase_that_you_select;

executing command: SET encryption

using target database control file instead of recovery catalog

Using the "set encryption algorithm" command we did two things: we set the algorithm that will be used for the backup, and we set the passphrase that we will need to decrypt the backup.

Next we are going to run the backup like we would normally do.

RMAN> backup as compressed backupset database format '/home/oracle/backup/encrypted_with_password%u%d.backup';

Starting backup at 02-AUG-12

using channel ORA_DISK_1

channel ORA_DISK_1: starting compressed full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00003 name=/opt/oracle/oradata/orcl/sysaux01.dbf

input datafile file number=00001 name=/opt/oracle/oradata/orcl/system01.dbf

input datafile file number=00002 name=/opt/oracle/oradata/orcl/example01.dbf

input datafile file number=00004 name=/opt/oracle/oradata/orcl/undotbs01.dbf

input datafile file number=00006 name=/opt/oracle/oradata/orcl/users01.dbf

channel ORA_DISK_1: starting piece 1 at 02-AUG-12

channel ORA_DISK_1: finished piece 1 at 02-AUG-12

piece handle=/home/oracle/backup/encrypted_with_password0dnhl9n6ORCL.backup tag=TAG20120802T170333 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:02:35

channel ORA_DISK_1: starting compressed full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

including current control file in backup set

including current SPFILE in backup set

channel ORA_DISK_1: starting piece 1 at 02-AUG-12

channel ORA_DISK_1: finished piece 1 at 02-AUG-12

piece handle=/home/oracle/backup/encrypted_with_password0enhl9s1ORCL.backup tag=TAG20120802T170333 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 02-AUG-12


How do we decrypt the backup when we need to restore? It's just as simple.

RMAN> set decryption identified by A_Passphrase_that_you_select;

executing command: SET decryption

using target database control file instead of recovery catalog

RMAN> restore database;

Starting restore at 02-AUG-12

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=20 device type=DISK

skipping datafile 1; already restored to file /opt/oracle/oradata/orcl/system01.dbf

skipping datafile 2; already restored to file /opt/oracle/oradata/orcl/example01.dbf

skipping datafile 3; already restored to file /opt/oracle/oradata/orcl/sysaux01.dbf

skipping datafile 4; already restored to file /opt/oracle/oradata/orcl/undotbs01.dbf

skipping datafile 6; already restored to file /opt/oracle/oradata/orcl/users01.dbf

restore not done; all files read only, offline, or already restored

Finished restore at 02-AUG-12


Okay, I did not need to restore the database, but it’s good to know that this works.

Now you don't have an excuse not to encrypt your backups.

Security in the Cloud: Installment #1

There are a number of different vendors providing cloud services. You can buy space and processing power from vendors like IBM, Amazon, or many other service providers. In the interest of full disclosure, I use cloud services all the time for email, backups and web services. These services are critical to my business.

There are a number of advantages to using the cloud. You don't have to worry about maintaining a complex environment or keeping highly paid systems administrators on staff. You are not losing sleep over your backups. (Well, maybe you should be losing sleep over your backups; we will come back to that.) You can also purchase disaster recovery services, giving you the ability to get back online quickly with minimal disruption to the business. The public cloud can give a business the edge it needs to succeed.

In the public cloud, one downside is that you don't know who your neighbors are. There have been cases where a group like Anonymous or another criminal element gets into the same cloud you are in, and your cloud becomes dark and stormy. In Texas the FBI raided a data center, taking some businesses offline. And more recently the FBI raided a data center in Reston, Virginia, taking many businesses offline.

Do you see where I'm going with this? If you don't have full control of your systems, there can be activity on them that causes law enforcement to come in and take the cloud for evidence, effectively taking your business offline. Larger organizations may want to invest in a private cloud to keep control of who is playing in their backyard.

Another downside is that you have not personally vetted your system administrators. An Oracle DBA can access everything in the database. They can make changes to the database. They can extract information from the database. And to make it more interesting, in a lot of cases they can go back and clean up the audit log. You really need to know who is watching the store. Oracle has tools to protect the database from a rogue database administrator. Ask a lot of questions about how the cloud provider protects your data from an insider.

What can you do to protect yourself from an insider threat?
1) Make sure Oracle Database Vault is installed and configured properly. Database Vault allows the database administrator to do their job without accessing production data.
2) Audit trails should go to either syslog or Oracle Audit Vault. I would recommend Audit Vault, which can send you alerts when someone accesses sensitive data.
3) Audit what needs to be audited and review the audit trail.
4) Perform a risk assessment on your data and identify all sensitive data. Once you have identified all sensitive data, encrypt it and audit it (a small example follows this list).
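As a purely illustrative example of that last item, the table and column below are hypothetical, and the column encryption assumes a configured, open TDE wallet:

-- encrypt the sensitive column with TDE
alter table app.customers modify (ssn encrypt);

-- audit access to it and review the audit trail regularly
audit select, update, delete on app.customers by access;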

Another downside: is your sensitive data encrypted? If it is, is the encryption done once the data gets to the cloud, or is it done at the workstation? If the data is encrypted in the cloud, I can read the data as it gets pushed over the internet to the cloud. I can also read the data sitting in memory before it gets encrypted. If you encrypt the data on the workstation prior to pushing it to the cloud, how are you dealing with key management? Is a copy of the private key located on the workstation?

For your consideration:

Is the application / data critical to your business? Then:
1) Have a backup plan. Putting your data in the cloud does not necessarily protect the data; Amazon users found this out the hard way. You may want to have backups of your applications and data, or pay for DR services. When the cloud is shut down because of a bad neighbor and you have a backup of your applications and data, you can get back online. If you have DR services, you can get back online. If you are not in possession of your backups, then you are at the mercy of the law enforcement entity that took your data.
2) Consider using a hybrid private / public cloud. Have your critical applications running on a private cloud, using a public cloud for surge processing, non-critical applications and backups.
3) Consider using the public cloud as your backup. I do this in my businesses; I back up my critical files to local storage and also back up to the cloud. Yes, I believe in wearing a belt and suspenders. By keeping a backup set in the cloud, if I were to lose all my hardware, I can still recover by pulling my backup down from the cloud.

What you must do at least once a year:
1) Run through a complete restore of your data. I have seen more than a few times where complex systems were backed up and no one ever tried to do a restore. Then when a real-life disaster happens, people are running around trying to restore services they have never restored before. Believe me, the end result is not pretty.
2) Run through a complete risk assessment. Let's face it: data changes, schemas change, copies are made. Ask yourself this simple question: can you identify where all your sensitive information is? If not, you are at risk. Okay, this does not always apply to the cloud, but it's something you need to do.
3) Review need-to-know. Yes, this does not apply only to the cloud, but I thought I would repeat one of my many mantras.
4) Go through the SLA. Things change. Yea', that's ambiguous, but it's true.
5) If you are backing up to the cloud, then encrypt your backups. Do you really want your data sitting out in the open? It's not that difficult; there is no reason not to encrypt your backups. I really hate reading the Washington Post and learning that yet another company or government entity lost a set of unencrypted backups with PII.