It seems not a day goes by when I’m not humbled in some way or another. Frequently it’s due to the limits of my knowledge or something I’d forgotten. This time it’s a wrinkle when you combine Unified Auditing and Virtual Private Database (VPD).
Here’s the setup.
create or replace package HR.hr_vpd_policy as
  function hr_emp_vpd_policy (p_schema varchar2, p_object varchar2)
    return varchar2;
end hr_vpd_policy;
/

create or replace package body HR.hr_vpd_policy as
  -- keep it real simple. just return
  -- department_id = 10.
  function hr_emp_vpd_policy (p_schema varchar2, p_object varchar2)
    return varchar2 is
    p_ret_val varchar2(100);
  begin
    p_ret_val := 'department_id = 10';
    return p_ret_val;
  end hr_emp_vpd_policy;
end hr_vpd_policy;
/

begin
  dbms_rls.add_policy (
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES_VIEW',
    policy_name     => 'HR_EMP_SEL_POL',
    function_schema => 'HR',
    policy_function => 'hr_vpd_policy.hr_emp_vpd_policy',
    statement_types => 'SELECT');
end;
/
Here’s the question: how do I know what where clause VPD placed on a SQL statement? Well, it’s not in the SQL_TEXT column (like I thought it was); it’s in the RLS_INFO column. Remember, VPD was originally called “Row Level Security.” Oracle changed its name some time back, I think in 10g, but don’t quote me on the version.
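For example, a query along these lines against UNIFIED_AUDIT_TRAIL shows the predicate VPD appended (the filter on EMPLOYEES_VIEW just matches the demo policy above; adjust to taste):

```sql
-- The VPD predicate shows up in RLS_INFO, not in SQL_TEXT.
select event_timestamp, sql_text, rls_info
  from unified_audit_trail
 where object_name = 'EMPLOYEES_VIEW'
 order by event_timestamp desc;
```

With the policy above, RLS_INFO will carry the HR_EMP_SEL_POL entry while SQL_TEXT still shows the statement exactly as the user typed it.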
Where am I speaking this fall? The Poland and Croatia conferences have been confirmed. Waiting on Spain and ECOUG. Let’s get back to normal and be safe about it. Looks like I’ll be busy for the next several months. Life is good.
It’s been a few weeks since I have posted. The short story is my workload has been keeping me from posting.
Today I had an interesting conversation regarding a test environment that we are building in a technology refresh. Early on I was asked: what are the space requirements for the test environment? I came up with an answer based on:
Current size of my production database: 350G
Enough space to store two full backups: 700G
Enough space to stage an export to refresh: 350G
Growth of 20%/year
So the minimum space required would be 1.4T without including growth. Then I got a phone call from one of the Unix admins asking why I need so much space. Why would you back up a test environment? You can just refresh from production if you need to.
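The arithmetic, sketched in SQL; the three-year horizon for compounding the 20% growth is my own assumption for illustration:

```sql
-- 350G database + two full backups + export staging area,
-- then the same total compounded at 20%/year for three years.
select 350 + 2*350 + 350                           as base_gb,        -- 1400
       round((350 + 2*350 + 350) * power(1.2, 3))  as gb_after_3_yrs  -- 2419
  from dual;
```

Even before growth, the base number lands at 1.4T, which is where the pushback started.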
This is actually a good question. Why backup the test environment?
You need a place to test your backup and restore scripts. I support a system where there is no way to test the backup and restore scripts outside of production. This week a change was needed to the backup script; the change was made and the job was put into cron. The backup failed, and I had to run it manually the next morning.
Testers use the testing system. I had an instance where a test system was restored from production. After doing the restore, we needed to apply the archive logs to recover the database and make it available for the testers. The recovery failed because there were objects in the production database that had been created with NOLOGGING. This forced us to shut down the production database and take a cold backup because of the errors the NOLOGGING operations caused. The end result: testers were down for a full day, and production was down for four hours.

In the end, I got the space I wanted for the test system; however, the Unix admin was still pushing back about actually backing up a test database. She still believes backing up test systems is a waste of time.
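A quick way to spot that trap before it bites — a sketch only, since it checks tables and not indexes, partitions, or LOBs, which carry their own LOGGING attributes:

```sql
-- Objects loaded with NOLOGGING generate unrecoverable redo;
-- a restore that must roll forward through those operations will fail.
select owner, table_name
  from dba_tables
 where logging = 'NO'
   and owner not in ('SYS', 'SYSTEM');
```

Anything this turns up is a candidate for ALTER TABLE ... LOGGING, or at least a known gap in what a restored copy can recover.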
Normally, for Oracle to use an index, the leading column of the index must be in the predicate. There are conditions where the leading edge of the index is not in the predicate but the remaining columns in the index can satisfy it. When that is true, Oracle can skip over the leading column and use the other columns in the index. Note: if the leading column has a low cardinality (few distinct values), it may be cheaper for Oracle to skip the leading edge of the index and start reading at the second column.
Let’s start by cleaning up the indexes we have already created and build a composite index where the leading column has a low cardinality.
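The original listings did not survive here, so this is a reconstruction of the kind of setup described (the index name is my own); a copy of DBA_OBJECTS works nicely because its STATUS column is almost entirely the single value VALID:

```sql
-- Assuming t1 was created earlier as a copy of DBA_OBJECTS:
--   create table t1 as select * from dba_objects;
create index t1_status_owner_idx on t1 (status, owner);

-- Gather stats so the optimizer can see the column cardinalities.
exec dbms_stats.gather_table_stats(user, 'T1')

-- STATUS is not in the predicate, yet the index is usable:
-- the plan shows INDEX SKIP SCAN on T1_STATUS_OWNER_IDX.
set autotrace on
select count(*) from t1 where owner = 'SYS';
```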
So we now have a composite index on t1 and the leading edge has a very low cardinality; there is only one distinct value in the status column.
So what happened? Oracle skipped over the status column in the index and used the owner column. But what happens if you have a high cardinality (many distinct values) in the leading column of the index? Note the cost of this query.
The index skip scan is still being used, but now note the cost of this query. In the first example, with a low cardinality column leading the index, we had 8 consistent gets. When we used a high cardinality column as the leading column, we went up to 274 consistent gets. All in all, the skip scan is much more expensive if the leading column has a high cardinality.
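For comparison, a sketch of the high-cardinality variant (index name again my own): lead the index with OBJECT_NAME instead, and the skip scan has to probe once per distinct leading value.

```sql
drop index t1_status_owner_idx;
create index t1_name_owner_idx on t1 (object_name, owner);
exec dbms_stats.gather_table_stats(user, 'T1')

-- Still an INDEX SKIP SCAN, but now one probe per distinct OBJECT_NAME,
-- which is why the consistent gets jumped from 8 to 274 in the runs above.
select count(*) from t1 where owner = 'SYS';
```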
This has happened in the past: a datafile gets corrupted, or there is a gap in the archive logs. How do you fix it? In the past I would put the tablespace into backup mode, back up the datafile, flush the archive logs, then SCP the current archive log files and the datafile over to the standby database.
Once over at the standby: shut down the standby database, copy the datafile and archive log files to their proper places, recover the standby database, then put it back into managed recovery.
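The old manual method, sketched from SQL*Plus; the tablespace name, paths, and host are assumptions for illustration:

```sql
-- On the primary: copy the datafile while the tablespace is in backup mode.
alter tablespace users begin backup;
host scp /u01/oradata/users01.dbf standby:/u01/oradata/users01.dbf
alter tablespace users end backup;

-- Force a log switch so the latest redo is archived, then ship the archives.
alter system archive log current;
host scp /u01/arch/*.arc standby:/u01/arch/

-- On the standby: stop managed recovery, roll forward through the
-- shipped logs, then resume managed recovery.
alter database recover managed standby database cancel;
recover automatic standby database;
alter database recover managed standby database disconnect from session;
```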
But things have changed. Port 22 is closed between the primary database and the standby database, and file replication is done through the SAN. Sadly, that complicates things. Instead of a simple copy, I now have to engage two SAN engineers to break the replication (one on the primary side and a second on the standby side), and then engage the Unix admin to mount the LUN on the standby database server.
The SAN engineers require change control to break the replication, and the Unix admin requires change control to mount the LUN. Do you see how what should be a quick, simple fix can become a problem?