August 13, 2018: NOTE, UPDATE TO THIS POST. This is specific to Oracle 12.1 and below. In Oracle 12.2 and above, you can change an unencrypted tablespace to an encrypted tablespace.
1) When we start talking about securing information, the first thing that always seems to come up is encryption. Everyone has heard about it, but some don’t really understand just what encryption is protecting. When we are discussing Transparent Data Encryption (TDE), we are discussing data at rest. The attack vector we are protecting against is a bad actor gaining access to the physical hardware.
1a) Now, the easiest and fastest way to implement TDE is to encrypt tablespaces and move the sensitive data into the encrypted tablespace. You need to be careful here: just because you have identified the tables that are sensitive, what about the objects that are dependent on those tables (indexes, materialized views, etc.)? Each of these dependent objects also needs to be moved into an encrypted tablespace.
Find dependent objects:
set pagesize 1000
set linesize 132
col owner format a30
col name format a30
select d.owner,
       d.name,
       s.tablespace_name,
       t.encrypted
from   dba_dependencies d,
       dba_segments s,
       dba_tablespaces t
where  d.owner = s.owner
and    d.name = s.segment_name
and    s.tablespace_name = t.tablespace_name
and    d.referenced_name in (select segment_name
                             from   dba_segments
                             where  tablespace_name = upper('&&tbs'))
union
select i.owner,
       i.index_name,
       i.tablespace_name,
       dd.encrypted
from   dba_indexes i,
       dba_tablespaces dd
where  i.tablespace_name = dd.tablespace_name
and    i.table_name in (select segment_name
                        from   dba_segments
                        where  tablespace_name = upper('&&tbs'));
You cannot change an unencrypted tablespace into an encrypted tablespace, so you are going to need to first create the encrypted tablespace. (UPDATE: in Oracle 12.2 and above, you can change an unencrypted tablespace into an encrypted tablespace.)
CREATE TABLESPACE sensitive_data
DATAFILE '/opt/oracle/oradata/DEV/datafile/sensitive_data01.dbf' size 1024M
ENCRYPTION USING 'AES256' DEFAULT STORAGE(ENCRYPT);
CREATE TABLESPACE sensitive_idx
DATAFILE '/opt/oracle/oradata/DEV/datafile/sensitive_idx01.dbf' size 1024M
ENCRYPTION USING 'AES256' DEFAULT STORAGE(ENCRYPT);
Now that we have an encrypted tablespace, we need to start moving all the sensitive data into it. It’s important to know that, to prevent ghost data, we are going to need to move everything out of the old tablespace and into the new one. I normally use ALTER TABLE MOVE, but you can also use dbms_redefinition or CREATE TABLE AS SELECT. Use the dependent-objects report above to make sure you have everything out of the tablespace. Once you have everything out, drop the tablespace, then use a utility like shred to overwrite the data file(s) with random data. Once you have done that, you can safely delete the data file(s).
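As a sketch of the ALTER TABLE MOVE approach, assuming a hypothetical sensitive table APP_OWNER.CUSTOMERS with a primary-key index CUSTOMERS_PK (both names are placeholders, not from the original post):

```sql
-- Move the table into the encrypted tablespace
ALTER TABLE app_owner.customers MOVE TABLESPACE sensitive_data;

-- A table move marks its indexes UNUSABLE, so rebuild them
-- into the encrypted index tablespace
ALTER INDEX app_owner.customers_pk REBUILD TABLESPACE sensitive_idx;
```

Remember to repeat the index rebuild for every index on every table you move; the dependent-objects report will show you which ones are still in the old tablespace.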
Here is a link to my demo on moving data to an encrypted tablespace. This demo assumes the base table is already in the encrypted tablespace, now we need to move indexes and materialized views. https://www.youtub
TDE also offers column encryption, but the analysis required to properly implement column-based encryption is time consuming, so for now we are going to pass over it.
1b) SQL*Net encryption. Information that moves through the network is subject to various attacks, including man-in-the-middle, replay, and modification attacks. Through these, data can be leaked, corrupted, or even replayed. So we use SQL*Net encryption and integrity checking to protect our data from leaking, replay, and modification. You are going to use Net Manager to set this up. Mark each encryption or integrity method as either Accepted, Requested, Rejected, or Required. You can read more on these in the Oracle documentation.
https://docs.oracle.com/database/121/DBSEG/asoconfg.htm#DBSEG020
$ORACLE_HOME/bin/netmgr — open Local –> Profile, then select Network Security and click on the Encryption tab. Select the encryption algorithms you need and then enter up to 256 characters in the encryption seed field.
Select an integrity method. Remember, MD5 has several known weaknesses; SHA has become the de facto standard.
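For reference, the Net Manager screens above end up writing parameters like these into the server-side sqlnet.ora. This is a minimal sketch requiring AES256 encryption and SHA-1 integrity; adjust the algorithm lists to your needs:

```
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)
```

Setting these to "required" means connections from clients that cannot negotiate encryption or integrity will be refused, so test with "requested" first if you have legacy clients.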
2) Audit your users and environment. I have heard this one time and time again: “It’s not my job. It’s the auditor’s job.” The fact remains that many breaches exist for weeks, months, and even years before they are discovered. Then, when a breach is discovered, the auditor requests audit logs. We need to do better! I get audit reports every morning and review them before I do anything else. So what do you want to audit?
2a) Audit login failures. Login failures can be a sign someone is trying to gain access to your system. If you start seeing login failures, investigate. Is the issue user training, or is there something else going on?
2b) Audit logins from yesterday. Here you are going to look at os_username / username / userhost to see if people are logging in from multiple workstations. This could be an indicator of a username/password being shared. Another reason I do this audit is to check os_username against username: is the user using their proper account? I have had issues in the past where a user was using the application login to do their normal work. This audit showed that and allowed us to correct the situation.
2c) Audit logins for the past 31 days. This gives you a 30,000-foot picture of how often users are connecting, and whether they are disconnecting at the end of the day.
set heading off
set pagesize 1000
set linesize 132
set serveroutput on
col object_name format a24
col object_type format a24
col doctype format a10
col userhost format a40
col os_username format a15
col username format a15
col terminal format a15
spool $HOME/session_audit.txt
select 'login failures' from dual;
select os_username,
username,
userhost,
terminal,
to_char(timestamp, 'dd-mon-rr hh24:mi')
from dba_audit_session
where returncode != 0
and trunc(timestamp) >= trunc(sysdate-1)
and username != 'DUMMY'
order by timestamp
/
select 'logins yesterday' from dual;
select os_username,
username,
userhost,
count(*)
from dba_audit_session
where trunc(TIMESTAMP) >= trunc(sysdate-1)
and username != 'DUMMY'
and action_name != 'LOGOFF BY CLEANUP'
group by os_username,
username,
userhost
order by os_username,
username
/
select 'logins last 31 days' from dual;
select os_username,
username,
userhost,
count(*)
from dba_audit_session
where trunc(TIMESTAMP) >= trunc(sysdate-31)
and username != 'DUMMY'
and action_name != 'LOGOFF BY CLEANUP'
group by os_username,
username,
userhost
order by os_username,
username
/
2d) Audit changes to any database objects. This is a simple query you can start with. You are checking objects based on last_ddl_time and created.
select 'changed / created objects last 24 hours' from dual;
select owner,
object_type,
object_name
from dba_objects
where (trunc(created) >= trunc(sysdate-1)
or trunc(last_ddl_time) >= trunc(sysdate-1))
order by owner, object_type, object_name
/
2e) Use a product like Tripwire to check for any changes to ORACLE_HOME. You can also roll your own by getting a checksum of all the files in ORACLE_HOME, then doing the check every day to see if a file has changed. (There are some files you will want to filter out because they change in the normal course of operations.)
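The roll-your-own idea can be sketched in a few lines of shell. This is a minimal sketch: the /tmp/fake_oracle_home directory and its file are stand-ins so the example runs anywhere; in practice you would point it at your real ORACLE_HOME and filter out the files that legitimately change:

```shell
#!/bin/sh
# Stand-in for $ORACLE_HOME so the sketch is self-contained
OH=/tmp/fake_oracle_home
mkdir -p "$OH/bin"
echo "binary v1" > "$OH/bin/oracle"

# 1) Take a baseline checksum of every file under the home
find "$OH" -type f -exec sha256sum {} + | sort -k2 > /tmp/oh_baseline.sha256

# 2) Later (e.g. daily from cron), recompute and compare
echo "binary v2" > "$OH/bin/oracle"   # simulate a tampered binary
find "$OH" -type f -exec sha256sum {} + | sort -k2 > /tmp/oh_current.sha256

if diff /tmp/oh_baseline.sha256 /tmp/oh_current.sha256 > /dev/null; then
    echo "no changes detected"
else
    echo "CHANGED FILES FOUND"
fi
```

Any difference in the diff means a file was added, removed, or modified since the baseline, which is exactly what you want flagged in your morning report.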
3) Identify the sensitive data in your database. How can you know what to protect if you don’t know what is sensitive? When you identify what is sensitive, it’s easier to track that data through your enterprise to limit access to the data.
Now there is something most people know but don’t realize they know. You can have a simple piece of data that by itself is not sensitive, but when you combine it with other data that’s not sensitive, you suddenly have sensitive data. For example: I’m Robert; that by itself is not very valuable. Combine that with my zip code, which is not very sensitive, and you have narrowed down the universe of Roberts. Next add that I drive a Ford F150 and a BMW R1150RS, and now you have uniquely identified me.
3b) If you have not already done so, you can reverse engineer your application schema into Oracle SQL Developer Data Modeler. SQL Developer Data Modeler has the ability to mark and report columns that are sensitive.
Heli made a blog entry on how to mark and report on sensitive data in SQL Developer Data Modeler. Sensitive Data From Heli
4) Create trusted paths to your sensitive data. Or, at the very least, limit the number of paths to get to your sensitive data. Now that you have a list of sensitive data, document where and how that data is accessed. You can use unified_audit_trail to get the host names and users accessing the data. Once you have validated how the data is accessed, from where, and by whom, you can set up Virtual Private Database (VPD) and redaction to limit the paths to the sensitive data.
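As a sketch, assuming unified auditing is enabled and an audit policy already covers a hypothetical APP_OWNER.CUSTOMERS table (both names are placeholders), a query like this shows who is reaching the data and from where:

```sql
select dbusername,
       os_username,
       userhost,
       action_name,
       event_timestamp
from   unified_audit_trail
where  object_schema = 'APP_OWNER'
and    object_name   = 'CUSTOMERS'
order  by event_timestamp;
```

The userhost and os_username columns are what you will use to decide which subnets and accounts belong on the trusted path.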
Here is a simple example: if the people who are authorized to access the data are in a specific subnet, you can check the subnet and use the VPD policy to append "where 1=2" onto the where clause for anyone querying the data from outside that subnet. You can also use the authentication method in this check. Say the data is so sensitive you only want people to access it if they connected using RADIUS; if someone connected using anything else, again you would append "where 1=2" onto the where clause to return nothing. The important thing to remember is: your VPD policy can be anything you can code in PL/SQL.
Let’s start by setting up some support objects in the security schema.
Set up a table of IP addresses recording whether the user is granted access to credit card numbers, social security numbers, and customer data.
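A minimal sketch of what that could look like. All object names here are hypothetical, and the policy function simply implements the "return where 1=2 unless trusted" idea from above:

```sql
-- Hypothetical support table in the SECURITY schema: which client
-- addresses may see credit card, SSN, and customer data
create table security.trusted_access (
  ip_address  varchar2(45) primary key,
  cc_access   char(1) default 'N',
  ssn_access  char(1) default 'N',
  cust_access char(1) default 'N'
);

-- Sketch of a VPD policy function: return '1=2' (no rows) unless the
-- session's IP address is in the trusted list
create or replace function security.cust_vpd_policy (
  p_schema varchar2,
  p_object varchar2
) return varchar2
as
  l_cnt pls_integer;
begin
  select count(*)
  into   l_cnt
  from   security.trusted_access
  where  ip_address  = sys_context('USERENV', 'IP_ADDRESS')
  and    cust_access = 'Y';

  if l_cnt = 0 then
    return '1=2';  -- untrusted path: return nothing
  end if;
  return null;     -- trusted path: no extra predicate
end;
/

-- Attach the policy to the protected table
begin
  dbms_rls.add_policy(
    object_schema   => 'APP_OWNER',
    object_name     => 'CUSTOMERS',
    policy_name     => 'CUST_TRUSTED_PATH',
    function_schema => 'SECURITY',
    policy_function => 'CUST_VPD_POLICY',
    statement_types => 'SELECT');
end;
/
```

The RADIUS variant from the example above would use sys_context('USERENV', 'AUTHENTICATION_METHOD') in the same pattern.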