Archive for July, 2011

Oracle – Startup Phases

July 30, 2011

There are four phases an Oracle database goes through during the startup process.

  1. Shutdown
    The database is shut down; only a user with the SYSDBA privilege can connect, in order to start it up.
  2. Startup nomount
    During this phase the instance reads the initialization parameters from the spfile or pfile. If you get an error at this stage, check that this file exists, is accessible and contains no errors.
    This phase does not allow access to the database and is usually only used for database creation or re-creation of control files.
  3. Startup mount
    During this phase the database checks the control file, a small binary file that records the physical structure of the database.
    If you receive errors during the mount phase, check that the control file exists.
    This phase allows certain DBA activities to take place, but does not allow general access to the database.
  4. Startup open
    During this phase the database tries to open, ready for all users to access. It checks that the datafiles exist; if you get errors at this stage, check the datafiles.
    This phase can be run in restricted or unrestricted mode, depending on whether you want all users to have access or not.
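The phases above can also be stepped through one at a time from SQL*Plus, rather than going straight to open:

SQL> CONNECT / AS SYSDBA
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;

To open in restricted mode instead, so that only users with the RESTRICTED SESSION privilege can connect, use STARTUP RESTRICT.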

Oracle – Error when compiling EAR file for deployment on 10gAS


If you get the following error message when compiling the EAR file for deployment on 10g Application Server, check that the database is up and running: it is likely either in restricted mode or shut down.

$ ./saw_compile.sh
Please wait ...
java.sql.SQLException: J2T-131, SQLException: ORA-01033: ORACLE initialization or shutdown in progress
when registering oracle.jdbc.OracleDriver as JDBC driver
J2T-102, ERROR: Unable to connect to schema "app"
Exception in thread "main" java.lang.NullPointerException: - No EAR file exists to look into - No service element provided , can not determine which classes or interfaces to expose in WSDL
at oracle.j2ee.ws.tools.WsAssmWSDLGenerator.generate(WsAssmWSDLGenerator.java:115)
at oracle.j2ee.ws.tools.WsAssembler.assemble(WsAssembler.java:94)
at oracle.j2ee.ws.tools.WsAssembler.main(WsAssembler.java:66)

Oracle – Optimizer stats not being purged

July 28, 2011

I’ve recently been monitoring two databases where a high volume of imports and exports takes place. The SYSAUX and SYSTEM tablespaces have been growing continually.

To resolve this I reduced the statistics history retention period (31 days by default) to 7 days.

SQL> exec dbms_stats.alter_stats_history_retention(7);

I continued to monitor the database and found that the SYSAUX tablespace was still growing. Checking the retention period showed it was set as expected, so I reduced it further to 3 days.

SQL>    select dbms_stats.get_stats_history_retention from dual;
GET_STATS_HISTORY_RETENTION
---------------------------
3

As the stats tables and indexes would now be fragmented, I looked at their sizes with a view to rebuilding them.

SELECT
sum(bytes/1024/1024) Mb,
segment_name,
segment_type
FROM
dba_segments
WHERE
tablespace_name = 'SYSAUX'
AND
segment_type in ('INDEX','TABLE')
GROUP BY
segment_name,
segment_type
ORDER BY Mb;
MB  SEGMENT_NAME                             SEGMENT_TYPE
--  ---------------------------------------  ----------------
2   WRH$_SQLTEXT                             TABLE
2   WRH$_ENQUEUE_STAT_PK                     INDEX
2   WRI$_ADV_PARAMETERS                      TABLE
2   WRH$_SEG_STAT_OBJ_PK                     INDEX
3   WRI$_ADV_PARAMETERS_PK                   INDEX
3   WRH$_SQL_PLAN_PK                         INDEX
3   WRH$_SEG_STAT_OBJ                        TABLE
3   WRH$_ENQUEUE_STAT                        TABLE
3   WRH$_SYSMETRIC_SUMMARY_INDEX             INDEX
4   WRH$_SQL_BIND_METADATA_PK                INDEX
4   WRH$_SQL_BIND_METADATA                   TABLE
6   WRH$_SYSMETRIC_SUMMARY                   TABLE
7   WRH$_SQL_PLAN                            TABLE
8   WRI$_OPTSTAT_TAB_HISTORY                 TABLE
8   I_WRI$_OPTSTAT_TAB_ST                    INDEX
9   I_WRI$_OPTSTAT_H_ST                      INDEX
9   I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
12  I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX
12  I_WRI$_OPTSTAT_IND_ST                    INDEX
12  WRI$_OPTSTAT_HISTGRM_HISTORY             TABLE
14  I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
20  WRI$_OPTSTAT_IND_HISTORY                 TABLE
306 I_WRI$_OPTSTAT_HH_ST                     INDEX
366 WRI$_OPTSTAT_HISTHEAD_HISTORY            TABLE
408 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX

To reduce these tables and indexes you can issue the following:

SQL> alter table <table name> move tablespace SYSAUX;
SQL> alter index <index name> rebuild online;

If you are only running Standard Edition then you can only rebuild indexes offline; online index rebuild is an Enterprise Edition feature. Note also that moving a table marks its indexes UNUSABLE, so rebuild them after the move.

To find out the oldest available stats you can issue the following:

SQL> select dbms_stats.get_stats_history_availability from dual;
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
28-JUN-11 00.00.00.000000000 +01:00

To list how many stats rows were gathered on each day between the oldest stats history and the current date, issue the following:

SQL> select trunc(SAVTIME),count(1) from WRI$_OPTSTAT_HISTHEAD_HISTORY group by  trunc(SAVTIME) order by 1;
TRUNC(SAV COUNT(1)
--------- ----------
28-JUN-11 2920140
29-JUN-11 843683
30-JUN-11 519834
01-JUL-11 958836
02-JUL-11 3158052
03-JUL-11 287
04-JUL-11 1253952
05-JUL-11 732361
06-JUL-11 507186
07-JUL-11 189416
08-JUL-11 2619
09-JUL-11 1491
10-JUL-11 287
11-JUL-11 126324
12-JUL-11 139556
13-JUL-11 181068
14-JUL-11 4832
15-JUL-11 258027
16-JUL-11 1152
17-JUL-11 287
18-JUL-11 27839
21 rows selected.

What has happened here is that the purge job run by MMON every 24 hours has checked the retention period and tried to purge all stats older than it. Because of the high number of stats collected each day, the job has not completed within its 5-minute window, so it has given up and rolled back. Therefore the stats are not being purged.

As each day passes, the SYSAUX tablespace continues to fill up because the job fails each night and cannot purge the old stats.

To resolve this we have to issue a manual purge to clear down the old statistics. This can be UNDO-intensive, so it’s best to keep an eye on the amount of UNDO being generated. I suggest starting with the oldest day and working forwards.

To manually purge the stats issue the following:

SQL> exec dbms_stats.purge_stats(to_date('10-JUL-11','DD-MON-YY'));
PL/SQL procedure successfully completed.

SQL> select trunc(SAVTIME),count(1) from WRI$_OPTSTAT_HISTHEAD_HISTORY group by  trunc(SAVTIME) order by 1;
TRUNC(SAVTIME)         COUNT(1)
-------------------- ----------
29-Jun-2011 00:00:00     843683
30-Jun-2011 00:00:00     519834
01-Jul-2011 00:00:00     958836
02-Jul-2011 00:00:00    3158052
03-Jul-2011 00:00:00        287
04-Jul-2011 00:00:00    1253952
05-Jul-2011 00:00:00     732361
06-Jul-2011 00:00:00     507186
07-Jul-2011 00:00:00     189416
08-Jul-2011 00:00:00       2619
09-Jul-2011 00:00:00       1491
10-Jul-2011 00:00:00        287
11-Jul-2011 00:00:00     126324
12-Jul-2011 00:00:00     139556
13-Jul-2011 00:00:00     181068
14-Jul-2011 00:00:00       4832
15-Jul-2011 00:00:00     258027
16-Jul-2011 00:00:00       1152
17-Jul-2011 00:00:00        287
18-Jul-2011 00:00:00      27839
20 rows selected.
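If there are still many days of history to clear, the purge can be run one day at a time, oldest first, which limits the UNDO generated by each call. A sketch (the start date and number of days here are examples; adjust them to your own oldest history date):

BEGIN
  FOR d IN 0 .. 9 LOOP
    dbms_stats.purge_stats(to_date('28-JUN-11','DD-MON-YY') + d);
  END LOOP;
END;
/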

Once the amount of stats has been reduced, the overnight job should work again. Alternatively, you can create a scheduled job to run the purge in the same way as running it manually, using the following code:

BEGIN
sys.dbms_scheduler.create_job(
job_name => '"SYS"."PURGE_OPTIMIZER_STATS"',
job_type => 'PLSQL_BLOCK',
job_action => 'begin
dbms_stats.purge_stats(sysdate-3);
end;',
repeat_interval => 'FREQ=DAILY;BYHOUR=6;BYMINUTE=0;BYSECOND=0',
start_date => systimestamp at time zone 'Europe/Paris',
job_class => '"DEFAULT_JOB_CLASS"',
comments => 'job to purge old optimizer stats',
auto_drop => FALSE,
enabled => TRUE);
END;
/

Finally you will need to rebuild the indexes and move the tables. To do this you can spool the generated commands to a script file and then run that file.

SQL> spool rebuild_indexes.sql
SQL> select 'alter index '||segment_name||' rebuild;' FROM dba_segments where tablespace_name = 'SYSAUX' AND segment_type = 'INDEX';
SQL> spool off

Edit the spooled file to remove the first and last lines (the SELECT statement and the spool off line).

Run the file to rebuild the indexes.

You can then do the same with the tables:

SQL> select 'alter table '||segment_name||' move tablespace SYSAUX;' FROM dba_segments where tablespace_name = 'SYSAUX' AND segment_type = 'TABLE';

Then re-run the original size query. Mine now produces the following, and my SYSAUX tablespace is only a few hundred MB in use.

.6875 WRH$_ENQUEUE_STAT                 TABLE
.75 WRH$_SEG_STAT_OBJ                   TABLE
.8125 WRH$_SYSMETRIC_SUMMARY_INDEX      INDEX
.8125 I_WRI$_OPTSTAT_HH_ST              INDEX
.8125 WRH$_SQL_PLAN_PK                  INDEX
1 WRI$_OPTSTAT_HISTHEAD_HISTORY         TABLE
1 SYS$SERVICE_METRICS_TAB               TABLE
2 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST         INDEX
2 WRH$_SYSMETRIC_SUMMARY                TABLE
2 WRI$_ADV_PARAMETERS                   TABLE
2 WRI$_ADV_PARAMETERS_PK                INDEX
4 WRH$_SQL_PLAN                         TABLE
689 rows selected.

Oracle – Fix unusable indexes

July 27, 2011

If you want to find and fix unusable indexes, the following SQL prints a list of ALTER commands that can be copied and pasted into the SQL command line and executed to rebuild them.

SQL> select 'alter index '||index_name||' rebuild;' from all_indexes  where status ='UNUSABLE';
'ALTERINDEX'||INDEX_NAME||'REBUILD;'
--------------------------------------------------
alter index I_WRI$_OPTSTAT_HH_ST rebuild;
alter index I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST rebuild;
alter index I_WRI$_OPTSTAT_H_ST rebuild;
alter index I_WRI$_OPTSTAT_OPR_STIME rebuild;
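Rather than copying and pasting, the same rebuilds can be run in one go from an anonymous PL/SQL block (a sketch that simply executes the generated ALTER statements):

BEGIN
  FOR r IN (SELECT owner, index_name FROM all_indexes WHERE status = 'UNUSABLE') LOOP
    EXECUTE IMMEDIATE 'alter index '||r.owner||'.'||r.index_name||' rebuild';
  END LOOP;
END;
/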

This will help resolve errors such as the following:

SQL>  exec dbms_stats.purge_stats(to_date('28-JUN-11','DD-MON-YY'));
BEGIN dbms_stats.purge_stats(to_date('28-JUN-11','DD-MON-YY')); END;
*
ERROR at line 1:
ORA-01502: index 'SYS.I_WRI$_OPTSTAT_TAB_OBJ#_ST' or partition of such index is in unusable state
ORA-06512: at "SYS.DBMS_STATS", line 1100
ORA-06512: at "SYS.DBMS_STATS", line 17203
ORA-06512: at line 1

Oracle – ORA-01502 Index rebuild error

July 26, 2011

I was recently trying to clear down a SYSAUX tablespace that had filled with fragmented optimizer statistics history tables and indexes.

However, when I tried to rebuild one of the indexes I received the following error:

SQL> alter index I_WRI$_OPTSTAT_H_ST rebuild;
alter index I_WRI$_OPTSTAT_H_ST rebuild
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01502: index 'SYS.I_WRI$_OPTSTAT_IND_OBJ#_ST' or partition of such index is in unusable state

I queried the ALL_INDEXES view to find which indexes were in an unusable state.

SQL> select index_name, status from all_indexes where status ='UNUSABLE';
INDEX_NAME                     STATUS
------------------------------ --------
I_WRI$_OPTSTAT_IND_OBJ#_ST     UNUSABLE
I_WRI$_OPTSTAT_IND_ST          UNUSABLE
I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST UNUSABLE
I_WRI$_OPTSTAT_H_ST            UNUSABLE

To resolve the issue I rebuilt the offending index, and all was well again.

SQL> alter index I_WRI$_OPTSTAT_IND_OBJ#_ST rebuild online;
Index altered.

PHP – Password encryption with salts


Password handling is highly important when designing websites. It is not recommended to pass or store the password as plain text. Instead, hash the password with an algorithm such as MD5 or SHA-1 before it is stored in the database (hashing is one-way, unlike encryption).

In addition to hashing the password, it is also advisable to use a salt. A salt is a small piece of text applied to the password to ensure that, even if the user chooses a weak password, the stored value is still strong. For example, if the user types “apple” and the salt is something as complex as “Ju9n#1£xFL”, the password plus salt would be appleJu9n#1£xFL.

The salt doesn’t necessarily have to go on the end; it could go anywhere, and you could even use it more than once.

For added security it is recommended to generate the salt from a unique id seeded with a random number. You can then hash that value with md5 and use substr to set the salt length.

Once you have created your salt you can use sha1 to hash the password combined with the salt. Finally, add the salt to the beginning of the resulting hash so it can be recovered at login time; an attacker who obtains the hash still has to work out which part is the salt and how it was applied.

It is worth noting that even with a salt, a password can still be cracked by brute force if the salt falls into the wrong hands. So never store the salt anywhere that others can see it.

Password comparison (Login forms)

It is advisable not to try to reverse the stored hash; instead, compare the password given by the user with the one stored in the database. The salt should be stored with the password (or, if it has a fixed length, you should be able to work out which part is the salt). Then hash the user-supplied password the same way the stored password was hashed, and compare the two.

<?php
define('SALT_LENGTH', 11);

function generateHash($plainText, $salt = null) {
    if ($salt === null) {
        // Build a salt from a unique id seeded with a random number
        $salt = substr(md5(uniqid(rand(), true)), 0, SALT_LENGTH);
    } else {
        $salt = substr($salt, 0, SALT_LENGTH);
    }
    // Store the salt at the front of the hash so it can be recovered at login
    return $salt . sha1($plainText . $salt);
}

function checkPassword($hash, $plainText) {
    // The first SALT_LENGTH characters of the stored hash are the salt
    $salt = substr($hash, 0, SALT_LENGTH);
    if (generateHash($plainText, $salt) == $hash) {
        echo '<br/>The password is correct';
    } else {
        echo '<br/>Wrong password!';
    }
}
?>

PHP – Redirect to previous page

July 25, 2011

If you wish to redirect your users back to the page they were on previously, you can issue the following. Note that HTTP_REFERER is supplied by the browser, so it may be empty or spoofed.

<?php
$ref = $_SERVER['HTTP_REFERER'];
header( 'refresh: 10; url='.$ref);
?>

The refresh value is the delay in seconds before the user is redirected back to the previous page. As with any call to header(), it must be issued before any other output is sent.

Oracle – Create Logical Standby Database 10g RAC


Backup primary database OASLIVE

(If using ASM then you must use RMAN to create the backup)

$ export ORACLE_SID=OASLIVE1
$ rman
RMAN> CONNECT TARGET /
connected to target database: OASLIVE (DBID=3404061247)
RMAN> BACKUP DATABASE FORMAT '/c04/oracle/oraback/OASLIVE/rman/OASLIVE_%U.rbak';

Copy control file from primary database for standby

RMAN> COPY CURRENT CONTROLFILE FOR STANDBY TO '/home/oracle/kerri/OASREP.stby';

Copy backup file and control file to standby node (Node11)

$ cd /home/oracle/kerri
$ scp OASREP.stby node11:/home/oracle/.
$ cd /c04/oracle/oraback/OASLIVE/rman/
$ scp OASLIVE_*.rbak node11:/c04/oracle/oraback/OASLIVE/rman/.

Whilst waiting for the backup to copy across, create standby redo logs in the primary database OASLIVE

(Number of standby log groups per thread = number of redo log groups + 1)

node 1$ sqlplus '/as sysdba'
SQL> SELECT *
FROM v$log;
SQL> SELECT group#
,           thread#
,           sequence#
,           status
FROM v$standby_log;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1('+REPDATA') SIZE 256M;
SQL> /
SQL> /
SQL> /
SQL> /
SQL> SELECT group#
,           thread#
,           sequence#
,           status
FROM v$standby_log;

Copy the server details of the standby node to the primary node tnsnames.ora

node 11$ cd $TNS_ADMIN

Copy the following from OASSTBY tnsnames.ora to OASLIVE tnsnames.ora

OASSTBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dgoh-data-11-vip)(PORT = 1511))
(CONNECT_DATA =
(UR=A)
(SERVER = DEDICATED)
(SERVICE_NAME = OASSTBY.dudleygoh-tr.wmids.nhs.uk)
)
)

Log into ASM and delete all pre-existing OASSTBY files

Check that the rman user has SYSDBA privileges on both OASLIVE nodes; if it doesn’t, run the following:

node 1$ sqlplus '/as sysdba'
SQL> GRANT sysdba TO rman;

Create spfile for physical standby using modified pfile

(see appendix A at bottom of post for pfile details)

node 11$ sqlplus '/as sysdba'
SQL> STARTUP NOMOUNT PFILE='/home/oracle/kerri/OASSTBYpfile.ora.bkup';
SQL> CREATE SPFILE FROM PFILE='/home/oracle/kerri/OASSTBYpfile.ora.bkup';
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP NOMOUNT

Log into RMAN and recover the standby database

node 11$ rman
RMAN> CONNECT TARGET rman/o45rm4n@OASLIVE
RMAN> CONNECT AUXILIARY rman/o45rm4n@OASSTBY
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;

Copy the server details of the standby node to the primary node tnsnames.ora

node 11$ cd $TNS_ADMIN

Copy the following from OASREP tnsnames.ora to OASLIVE tnsnames.ora

OASREP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dgoh-data-11-vip)(PORT = 1511))
(CONNECT_DATA =
(UR=A)
(SERVER = DEDICATED)
(SERVICE_NAME = OASREP.dudleygoh-tr.wmids.nhs.uk)
)
)

Log into ASM and delete all pre-existing OASREP files

Create spfile for logical standby using modified pfile

(see appendix B for pfile details)

node 11$ sqlplus '/as sysdba'
SQL> STARTUP NOMOUNT PFILE='/home/oracle/kerri/OASREPpfile.ora.bkup';
SQL> CREATE SPFILE FROM PFILE='/home/oracle/kerri/OASREPpfile.ora.bkup';
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP NOMOUNT

Log into RMAN and recover the standby database

node 11$ rman
RMAN> CONNECT TARGET rman/o45rm4n@OASLIVE
RMAN> CONNECT AUXILIARY rman/o45rm4n@OASREP
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;

Copy across the password file to the standby database

$ scp orapwOASLIVE node11:/u01/oracle/ora102/dbs/orapwOASREP

Recover standby database

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Switch logfile on the Primary database

SQL> ALTER SYSTEM SWITCH LOGFILE;

On the standby verify that redo data has been received and applied

SQL> SELECT thread#
,           sequence#
,           name
,   TO_CHAR(first_time, 'dd-mon-yyyy hh24:mi:ss') First_time
,           applied
,           status
FROM v$archived_log
--WHERE applied='YES'
ORDER BY 3;
SQL> SELECT thread#
,           sequence#
,           applied
,           status
FROM v$archived_log
ORDER BY 1;
SQL> SELECT thread#
,       MAX(sequence#)
FROM v$archived_log
GROUP BY thread#;
SQL> SELECT group#
,           thread#
,           sequence#
,           status
FROM v$standby_log;
SQL> SELECT group#
,           thread#
,           sequence#
,           status
FROM v$log;
SQL> SELECT thread#
,    MAX(al.sequence#) Arch
FROM v$archived_log al
GROUP BY thread#;
SQL> SELECT thread#
,    MAX(lh.sequence#) App
FROM v$log_history lh
GROUP BY thread#;

Determine the latest archive redo log on the Primary

SQL> SELECT thread#
,           sequence#
,           Archived
,           Status
FROM v$log;
SQL> SELECT thread#
,           sequence#
,           Archived
,           Status
FROM v$standby_log;

Determine the most recent archive redo log file on the standby

SQL> SELECT thread#
,       max(sequence#)
FROM v$archived_log
GROUP BY thread#;

Determine the most recent archived logfile at each of the archive destinations on the primary.

SQL> SELECT destination
,           status
,           archived_thread#
,           archived_seq#
FROM v$archive_dest_status
WHERE status <> 'DEFERRED'
AND status <> 'INACTIVE';

Determine the most recent received and applied archived logfile at each of the archive destinations on the standby

SQL> SELECT thread#
,           sequence#
,           first_time
,           next_time
,           applied
FROM v$archived_log
ORDER BY thread#,sequence#;

Verify standby logs are created and running correctly on the standby. Invoke an archive log switch on the primary

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> Select GROUP#
,           THREAD#
,           SEQUENCE#
,           ARCHIVED
,           STATUS
,           BYTES/1048576
FROM v$standby_log;

Stop the apply on the physical standby to make it into a logical standby

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Find tables that SQL Apply cannot maintain (unsupported data types or no unique identifier) and skip them:

SQL> SELECT owner
,           table_name
FROM dba_logstdby_not_unique
WHERE (owner
,      table_name) NOT IN (SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported)
AND bad_column = 'Y';

EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL',-
schema_name => '<schema_name>', -
object_name => '<object_name>');

EXECUTE DBMS_LOGSTDBY.SKIP('DML','<schema_name>','<object_name>');

Build a dictionary in the redo data on the primary database

SQL> EXECUTE dbms_logstdby.build;

Convert the physical standby to a logical standby

SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY <db_name>;

Create new password file

node 11$ orapwd file=$ORACLE_HOME/dbs/orapw<db_name> force=y entries=10 password=<password>

Open database

SQL> ALTER DATABASE OPEN RESETLOGS;

Open the logical standby database

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Create scripts for reporting database (Optional)

Create a reporting user and their own tablespace on the logical standby

Create a script to refresh the datasets on a nightly basis

#!/bin/bash
export ORACLE_HOME=/u01/oracle/ora102
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=OASREP1
sqlplus '/as sysdba' << EOF
SET SERVEROUTPUT ON
SPOOL /home/oracle/kerri/refresh_mviews.lst
SELECT TO_CHAR(SYSDATE, 'Dy DD-Mon-YYYY HH24:MI:SS') FROM DUAL;
EXEC oasis.refresh_mviews_proc;
SELECT TO_CHAR(SYSDATE, 'Dy DD-Mon-YYYY HH24:MI:SS') FROM DUAL;
SPOOL OFF;
EXIT;
EOF

Update the crontab to run at 5am each night

# Refresh Materialized Views on Reporting
00 05 * * * /home/oracle/kerri/refresh_mviews.sh > /home/oracle/kerri/test.txt

Create a trigger for the new user of the logical standby

CREATE OR REPLACE TRIGGER oasdba_LogStbyTrig
AFTER LOGON
ON database
DECLARE
cmmd VARCHAR2(64);
BEGIN
IF user = '<reporting user>' THEN
cmmd:='ALTER session disable guard';
EXECUTE IMMEDIATE cmmd;
INSERT INTO test VALUES(user
,                       SYSDATE
,               TO_CHAR(SYSDATE, 'hh24:mi:ss')
);
END IF;
END;
/

Create a procedure for refreshing materialized views

CREATE OR REPLACE PROCEDURE refresh_mviews_proc IS
CURSOR c1 IS SELECT owner, mview_name FROM dba_mviews;
dbname varchar2(10);
mview varchar2(250);
owner varchar2(250);
snap varchar2(500);
cmmd1 varchar2(64);
cmmd2 varchar2(64);
BEGIN
SELECT name INTO dbname FROM v$database;
cmmd1:='ALTER SESSION DISABLE GUARD';
cmmd2:='ALTER SESSION ENABLE GUARD';
IF dbname='OASREP'
THEN
EXECUTE IMMEDIATE cmmd1;
OPEN c1;
LOOP
FETCH c1 INTO owner, mview;
-- Test NOTFOUND straight after the fetch so the last row is not refreshed twice
EXIT WHEN c1%NOTFOUND;
snap := owner||'.'||mview;
dbms_mview.refresh(snap, 'C');
END LOOP;
CLOSE c1;
EXECUTE IMMEDIATE cmmd2;
END IF;
END;
/

Appendix A

Physical Standby spfile details for OASSTBY:

OASSTBY1.__db_cache_size=1191182336
OASSTBY.__java_pool_size=16777216
OASSTBY1.__java_pool_size=16777216
OASSTBY.__large_pool_size=16777216
OASSTBY1.__large_pool_size=16777216
OASSTBY.__shared_pool_size=352321536
OASSTBY1.__shared_pool_size=369098752
OASSTBY.__streams_pool_size=0
OASSTBY1.__streams_pool_size=0
*.audit_file_dest='/u01/oracle/admin/OASSTBY/adump'
*.background_dump_dest='/u01/oracle/admin/OASSTBY/bdump'
*.cluster_database=true
*.cluster_database_instances=2
*.compatible='10.2.0.4.0'
*.control_files='+LIVEDATA'
*.core_dump_dest='/u01/oracle/admin/OASSTBY/cdump'
*.db_block_size=8192
*.db_create_file_dest='+LIVEDATA'
*.db_domain='dudleygoh-tr.wmids.nhs.uk'
*.db_file_multiblock_read_count=8
*.db_file_name_convert='OASLIVE','OASSTBY'
*.db_name='OASLIVE'
*.db_unique_name='OASSTBY'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OASSTBYXDB)'
*.fal_client='OASSTBY1'
*.fal_server='OASLIVE1,OASLIVE2'
OASSTBY1.instance_number=1
OASSTBY2.instance_number=2
*.job_queue_processes=1
OASSTBY2.local_listener='LISTENER_OASSTBY2'
OASSTBY1.local_listener='LISTENER_OASSTBY1'
*.log_archive_config='DG_CONFIG=(OASLIVE,OASSTBY,OASREP)'
*.log_archive_dest_1='LOCATION=+LIVEARCH VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OASSTBY'
*.log_archive_dest_2='SERVICE=OASLIVE VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OASLIVE'
*.log_archive_dest_3='SERVICE=OASREP LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OASREP'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.open_cursors=300
*.pga_aggregate_target=2503999488
*.processes=600
*.remote_listener='LISTENERS_OASSTBY'
*.remote_login_passwordfile='exclusive'
*.sga_target=1610612736
*.standby_file_management='AUTO'
OASSTBY2.thread=2
OASSTBY1.thread=1
*.undo_management='AUTO'
OASSTBY1.undo_tablespace='UNDOTBS1'
OASSTBY2.undo_tablespace='UNDOTBS2'
*.user_dump_dest='/u01/oracle/admin/OASSTBY/udump'
*.utl_file_dir='*'

Appendix B

Logical Standby spfile details for OASREP:

OASREP1.__db_cache_size=1073741824
OASREP1.__java_pool_size=33554432
OASREP1.__large_pool_size=16777216
OASREP1.__shared_pool_size=469762048
OASREP1.__streams_pool_size=0
*.archive_lag_target=3600
*.audit_file_dest='/u01/oracle/admin/OASREP/adump'
*.background_dump_dest='/u01/oracle/admin/OASREP/bdump'
*.cluster_database_instances=1
*.cluster_database=FALSE
*.compatible='10.2.0.4.0'
*.control_files='+REPDATA'
*.core_dump_dest='/u01/oracle/admin/OASREP/cdump'
*.db_block_size=8192
*.db_create_file_dest='+REPDATA'
*.db_domain='dudleygoh-tr.wmids.nhs.uk'
*.db_file_multiblock_read_count=8
*.db_file_name_convert='OASLIVE','OASREP'
*.db_name='OASREP'
*.db_unique_name='OASREP'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OASREPXDB)'
*.fal_client='OASREP1'
*.fal_server='OASLIVE1,OASLIVE2'
OASREP1.instance_number=1
*.job_queue_processes=0
OASREP1.local_listener='LISTENER_OASREP1'
*.log_archive_config='DG_CONFIG=(OASLIVE,OASSTBY,OASREP)'
*.log_archive_dest_1='LOCATION=+LIVEARCH VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OASREP'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.open_cursors=300
*.pga_aggregate_target=2503999488
*.processes=600
*.remote_listener='LISTENERS_OASREP'
*.remote_login_passwordfile='exclusive'
*.sga_target=1610612736
*.standby_file_management='AUTO'
OASREP1.thread=1
*.undo_management='AUTO'
OASREP1.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/oracle/admin/OASREP/udump'
*.utl_file_dir='*'

Appendix C

Primary Database pfile details for OASLIVE

OASLIVE1.__db_cache_size=989855744
OASLIVE2.__db_cache_size=1291845632
OASLIVE2.__java_pool_size=16777216
OASLIVE1.__java_pool_size=33554432
OASLIVE2.__large_pool_size=16777216
OASLIVE1.__large_pool_size=16777216
OASLIVE1.__shared_pool_size=553648128
OASLIVE2.__shared_pool_size=268435456
OASLIVE2.__streams_pool_size=0
OASLIVE1.__streams_pool_size=0
*.archive_lag_target=900
*.audit_file_dest='/u01/oracle/admin/OASLIVE/adump'
*.background_dump_dest='/u01/oracle/admin/OASLIVE/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='10.2.0.4.0'
*.control_files='+LIVEDATA/oaslive/controlfile/current.289.724539179','+LIVEARCH/oaslive/controlfile/current.3794.724539179'
*.core_dump_dest='/u01/oracle/admin/OASLIVE/cdump'
*.db_block_size=8192
*.db_create_file_dest='+LIVEDATA'
*.db_domain='dudleygoh-tr.wmids.nhs.uk'
*.db_file_multiblock_read_count=8
*.db_name='OASLIVE'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OASLIVEXDB)'
*.fal_client='OASLIVE1,OASLIVE2'
*.fal_server='OASSTBY'
OASLIVE1.instance_number=1
OASLIVE2.instance_number=2
*.job_queue_processes=1
OASLIVE2.local_listener='LISTENER_OASLIVE2'
OASLIVE1.local_listener='LISTENER_OASLIVE1'
*.log_archive_config='DG_CONFIG=(OASLIVE,OASSTBY,OASREP)'
*.log_archive_dest_1='LOCATION=+LIVEARCH VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OASLIVE'
*.log_archive_dest_2='SERVICE=OASSTBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OASSTBY'
*.log_archive_dest_3='SERVICE=OASREP LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OASREP'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_dest_state_3='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.open_cursors=300
*.pga_aggregate_target=2503999488
*.processes=600
*.remote_listener='LISTENERS_OASLIVE'
*.remote_login_passwordfile='exclusive'
*.sga_target=1610612736
*.standby_file_management='AUTO'
OASLIVE2.thread=2
OASLIVE1.thread=1
*.undo_management='AUTO'
OASLIVE1.undo_tablespace='UNDOTBS1'
OASLIVE2.undo_tablespace='UNDOTBS2'
*.user_dump_dest='/u01/oracle/admin/OASLIVE/udump'
*.utl_file_dir='*'

Oracle – Create Physical Standby Database 10g RAC


Creating a Physical Standby Database OASSTBY

Make sure database is in archivelog mode
(If it’s not then shutdown instance 2, shutdown then startup mount instance 1, enable archivelog mode, startup both instances)

node 1$  ssh node2
node 2$  sqlplus '/as sysdba'
SQL>  shutdown immediate
SQL>  exit
$  exit
node 1$  sqlplus '/as sysdba'
SQL>  shutdown immediate
SQL>  startup mount
SQL>  alter database archivelog;
SQL>  alter database open;
SQL>  exit
node 1$  ssh node2
node 2$  sqlplus '/as sysdba'
SQL>  startup open

Backup primary database OASLIVE

(If using ASM then you must use RMAN to create the backup)

$  export ORACLE_SID=OASLIVE1
$  rman
RMAN> connect target /
connected to target database: OASLIVE (DBID=3404061247)
RMAN> backup database plus archivelog format '/c04/oracle/oraback/OASLIVE/rman/OASLIVE_%U.rbak';

Copy control file from primary database for standby

RMAN> copy current controlfile for standby to '/home/oracle/OASSTBY.stby';

Copy backup file and control file to standby node (Node11)

$  cd /home/oracle/kerri
$  scp OASSTBY.stby node11:/home/oracle/.
$  cd /c04/oracle/oraback/OASLIVE/rman/
$  scp OASLIVE_*.rbak node11:/c04/oracle/oraback/OASLIVE/rman/.

Whilst waiting for the backup to copy across, create standby redo logs in the primary database OASLIVE

(Number of standby log groups per thread = number of redo log groups + 1)

node 1$ sqlplus '/as sysdba'
SQL> select * from v$log;
SQL> select group#, thread#, sequence#, status from v$standby_log;
SQL> alter database add standby logfile thread 1 ('+LIVEDATA') size 256M;
SQL> /
SQL>  /
SQL>  /
SQL>  /
SQL>  alter database add standby logfile thread 2 ('+LIVEDATA') size 256M;
SQL>  /
SQL>  /
SQL>  /
SQL>  /
SQL> select group#, thread#, sequence#, status from v$standby_log;

Copy the server details of the standby node to the primary node tnsnames.ora

node 11$  cd $TNS_ADMIN

Copy the following from OASSTBY tnsnames.ora to OASLIVE tnsnames.ora

OASSTBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dgoh-data-11-vip)(PORT = 1511))
(CONNECT_DATA =
(UR=A)
(SERVER = DEDICATED)
(SERVICE_NAME = OASSTBY.dudleygoh-tr.wmids.nhs.uk)
)
)

Log into ASM and delete all pre-existing OASSTBY files

Check that the rman user has SYSDBA privileges on both OASLIVE nodes; if it doesn’t, run the following:

node 1$  sqlplus '/as sysdba'
SQL>  grant sysdba to rman;

Create spfile for physical standby using modified pfile (see end of post for pfile details)

node 11$  sqlplus '/as sysdba'
SQL>  startup nomount pfile='/home/oracle/kerri/OASSTBYpfile.ora.bkup';
SQL> create spfile from pfile='/home/oracle/kerri/OASSTBYpfile.ora.bkup';
SQL>  shutdown immediate
SQL>  startup nomount

Log into RMAN and recover the standby database

node 11$  rman
RMAN>  connect target rman/o45rm4n@OASLIVE
RMAN>  connect auxiliary rman/o45rm4n@OASSTBY
RMAN>  duplicate target database for standby;

Copy across the password file

$  scp orapwOASLIVE node11:/u01/oracle/ora102/dbs/orapwOASSTBY

Recover database

SQL>  alter database recover managed standby database disconnect from session;

Appendix

Physical Standby spfile details for OASSTBY:

OASSTBY1.__db_cache_size=1191182336
OASSTBY.__java_pool_size=16777216
OASSTBY1.__java_pool_size=16777216
OASSTBY.__large_pool_size=16777216
OASSTBY1.__large_pool_size=16777216
OASSTBY.__shared_pool_size=352321536
OASSTBY1.__shared_pool_size=369098752
OASSTBY.__streams_pool_size=0
OASSTBY1.__streams_pool_size=0
*.audit_file_dest='/u01/oracle/admin/OASSTBY/adump'
*.background_dump_dest='/u01/oracle/admin/OASSTBY/bdump'
*.cluster_database=true
*.cluster_database_instances=2
*.compatible='10.2.0.4.0'
*.control_files='+LIVEDATA'
*.core_dump_dest='/u01/oracle/admin/OASSTBY/cdump'
*.db_block_size=8192
*.db_create_file_dest='+LIVEDATA'
*.db_domain='dudleygoh-tr.wmids.nhs.uk'
*.db_file_multiblock_read_count=8
*.db_file_name_convert='OASLIVE','OASSTBY'
*.db_name='OASLIVE'
*.db_unique_name='OASSTBY'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OASSTBYXDB)'
*.fal_client='OASSTBY1'
*.fal_server='OASLIVE1,OASLIVE2'
OASSTBY1.instance_number=1
OASSTBY2.instance_number=2
*.job_queue_processes=1
OASSTBY2.local_listener='LISTENER_OASSTBY2'
OASSTBY1.local_listener='LISTENER_OASSTBY1'
*.log_archive_config='DG_CONFIG=(OASLIVE,OASSTBY,OASREP)'
*.log_archive_dest_1='LOCATION=+LIVEARCH VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OASSTBY'
*.log_archive_dest_2='SERVICE=OASLIVE VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OASLIVE'
*.log_archive_dest_3='SERVICE=OASREP LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OASREP'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.open_cursors=300
*.pga_aggregate_target=2503999488
*.processes=600
*.remote_listener='LISTENERS_OASSTBY'
*.remote_login_passwordfile='exclusive'
*.sga_target=1610612736
*.standby_file_management='AUTO'
OASSTBY2.thread=2
OASSTBY1.thread=1
*.undo_management='AUTO'
OASSTBY1.undo_tablespace='UNDOTBS1'
OASSTBY2.undo_tablespace='UNDOTBS2'
*.user_dump_dest='/u01/oracle/admin/OASSTBY/udump'
*.utl_file_dir='*'