
Script to tidy archivelogs from ASM and RMAN


We have a pre-production RAC cluster that is kept in archivelog mode to allow true performance monitoring (and to be used for Streams and DataGuard testing). However we do not need the archivelogs for recovery purposes and as we perform high-volume testing the +FRA diskgroup (on solid-state disk) gets full very quickly.

I wrote a script that can be run to quickly free up space. It connects to the ASM instance and removes the logfiles.

It then sets the SID and ORACLE_HOME to those of the RAC instance and runs RMAN to crosscheck and delete the expired archivelogs.

Not particularly complex but efficient.

export ORACLE_SID=+ASM1
export ORACLE_HOME=/u00/app/asm/product/11.1.0/db_1

asmcmd -p << EOF

ls FRA/RACCLUSTER/ARCHIVELOG/*2008*/*
rm -rf FRA/RACCLUSTER/ARCHIVELOG/*
ls FRA/RACCLUSTER/ARCHIVELOG/*
EOF

export ORACLE_SID=RACSID
export ORACLE_HOME=/u00/app/oracle/product/11.1.0/db_2

rman target / catalog username/password@catdb << EOF1

CHANGE ARCHIVELOG ALL VALIDATE;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
EOF1
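If the archivelogs are genuinely disposable, much the same result can come from RMAN alone; a minimal sketch (same placeholder connection details as above, and it assumes you really never need these logs):

export ORACLE_SID=RACSID
export ORACLE_HOME=/u00/app/oracle/product/11.1.0/db_2

rman target / << EOF2
DELETE NOPROMPT ARCHIVELOG ALL;
EOF2

The two-step version above has the advantage that the space in +FRA is released immediately by asmcmd, with the crosscheck then bringing the RMAN repository back in line.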



The ASM script of all ASM scripts!


Here is the ASM information script I use; it gives me everything I think I need in one go.

If there are any queries that others find useful please comment on them and I will add them to the script.

 

Credit where credit is due. I think Alan Cooper wrote the original version, although it has been amended since then.

 

set wrap off
set lines 120
set pages 999
col "Group Name"   form a25
col "Disk Name"    form a30
col "State"        form a15
col "Type"         form a7
col "Free GB"      form 9,999

prompt
prompt ASM Disk Groups
prompt ===============
select group_number  "Group"
,      name          "Group Name"
,      state         "State"
,      type          "Type"
,      total_mb/1024 "Total GB"
,      free_mb/1024  "Free GB"
from   v$asm_diskgroup
/

prompt
prompt ASM Disks
prompt =========

col "Group"          form 999
col "Disk"           form 999
col "Header"         form a9
col "Mode"           form a8
col "Redundancy"     form a10
col "Failure Group"  form a10
col "Path"           form a19

select group_number  "Group"
,      disk_number   "Disk"
,      header_status "Header"
,      mode_status   "Mode"
,      state         "State"
,      redundancy    "Redundancy"
,      total_mb      "Total MB"
,      free_mb       "Free MB"
,      name          "Disk Name"
,      failgroup     "Failure Group"
,      path          "Path"
from   v$asm_disk
order by group_number
,        disk_number
/

prompt
prompt Instances currently accessing these diskgroups
prompt ==============================================
col "Instance" form a8
select c.group_number  "Group"
,      g.name          "Group Name"
,      c.instance_name "Instance"
from   v$asm_client c
,      v$asm_diskgroup g
where  g.group_number=c.group_number
/

prompt
prompt Current ASM disk operations
prompt ===========================
select *
from   v$asm_operation
/

prompt
prompt Free ASM disks and their paths
prompt ==============================
select header_status, mode_status, path
from   v$asm_disk
where  header_status in ('FORMER','CANDIDATE')
/

clear columns


Automatically running sql_advisor tasks from ADDM reports


STOP PRESS – 17 Nov 2009  – updated with latest code which works against both 10g and 11g databases

I am attaching scripts which I wrote a while ago to automatically pick up any sql_ids reported in the latest ADDM run and then invoke the SQL Tuning Advisor to report on any tuning advice. I am not suggesting the information they provide is unavailable from EM, or that every task reported needs resolving, but it can be a good heads-up on a system you don’t know very well.

These are run every hour (the frequency can be adjusted to suit your snapshot interval) and they create a daily file which can be easily reviewed.

I find the real benefit is not on production databases but on dev and test databases that are being used for development prior to production implementation. This is for two reasons: firstly, I hope that the team already has a good handle on what is happening in production and is aware of issues; secondly, we are most likely to be able to add value in development environments before the code is made live.

A couple of ‘issuettes’. The output from the ADDM report differs between 10g and 11g, so I have amended the awk file to cater for both versions. I have also had an ongoing problem with the sql_advisor tuning task timing out on some systems and consequently leaving the task in place for the next run. I have therefore amended the loop to drop the task at several points, which looks untidy in the output file but does seem to resolve the problem.

I hoped to attach a zip file containing the 4 scripts but cannot see how to do it without a plug-in, which is a problem on my work PC, so in the meantime I have pasted the code of each of the 4 files.

tuning_recommendations.ksh, which is the controlling script

#! /bin/ksh
# loop through the file produced from get_addm_report.sql and put the gathered sql_ids into a flat file
# awk the file to get just the SQL_ID
# for each sql_id create a task, execute that task, run the report and then delete the task
#
# The delete tuning task call is made twice more because if the tuning task times out it does not clean up properly
# Better to see a few failures in this job than not run the sql_tuning_advisor at all.
if [ -d /home/oracle/logs ]
then
   rm /home/oracle/logs/tmp*.log
else
   mkdir /home/oracle/logs
   exit
fi
if [ $# -ne 1 ]
then
    echo "No ORACLE SID  - exiting"
    exit
fi
# execute ORACLE's .profile
#
#. ~/.profile
unset ORAENV_ASK
#
# set up environment variables.
#

ORACLE_SID=$1
. /usr/local/bin/oraenv ${ORACLE_SID}
export ORACLE_HOME=`cat /etc/oratab | grep $ORACLE_SID | awk -F: '{print $2}'`
export PATH=$ORACLE_HOME/bin:$PATH
export ORAENV_ASK=NO
today=`date +%d-%b-%Y`; export today
LOGDIR=$HOME/logs
LOGFILE=$LOGDIR/get_addm_${today}.log
REPORTFILE=$LOGDIR/sql_advisor_report_${ORACLE_SID}_${today}.log

#
#
sqlplus -s /nolog  <<SQLEND
connect / as sysdba
     spool $LOGDIR/tmp_${ORACLE_SID}_1.log
     @/shared/oracle/performance/get_addm_report.sql
     spool off
     exit
SQLEND


cat $LOGDIR/tmp_${ORACLE_SID}_1.log|awk -f /shared/oracle/performance/tuning_recommendations.awk |awk '!a[$0]++' > $LOGDIR/tmp_${ORACLE_SID}_2.log
cat  $LOGDIR/tmp_${ORACLE_SID}_2.log | awk '$0!~/^$/ {print $0}' > $LOGDIR/tmp_${ORACLE_SID}_3.log

for PLAN in  `cat $LOGDIR/tmp_${ORACLE_SID}_3.log`
do
sqlplus -s /nolog  <<SQLEND >> $REPORTFILE
connect / as sysdba
        begin
        DBMS_SQLTUNE.drop_tuning_task('test_task1');
        end;
        /
     SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'test_task1';
     @/shared/oracle/performance/sql_advisor.sql $PLAN
     exit
SQLEND
done

# tidy up the report file
# tidy up reports > 14 days old
find  $LOGDIR -name "sql_advisor_repor*.log" -mtime +14 -print -exec rm -f {} \;

get_addm_report.sql, which pulls the report for the latest completed ADDM task from dba_advisor_tasks

set long  10000000
set pagesize 50000
column get_clob format a80

select dbms_advisor.get_task_report (task_name) as ADDM_report
from dba_advisor_tasks
where task_id = (
        select max(t.task_id)
        from dba_advisor_tasks t, dba_advisor_log l
        where t.task_id = l.task_id
        and t.advisor_name = 'ADDM'
        and l.status = 'COMPLETED');

tuning_recommendations.awk is a short awk script used to process the output from get_addm_report.sql

BEGIN{
#start at the first line
#OUTFILE="$HOME/logs/outfile.log"
}
{
        {
                if (($1=="RATIONALE:") && ($2=="SQL")) #10G ADDM format
                {
                        F1=$6
                }
                if (($1=="Run") && ($2=="SQL") && ($3=="Tuning") && ($4=="Advisor") && ($7=="SQL") && ($10=="SQL_ID")) #11G ADDM format
                {
                        F1=$11
                }

        }
VAR1=substr(F1,2,13)
print VAR1
}
END{
}

sql_advisor.sql runs the DBMS_SQLTUNE package against each sql_id found.

DECLARE
my_task_name   VARCHAR2 (30);
my_sqltext     CLOB;
my_sqlid        varchar2(30);

BEGIN
my_sqlid := '&1';
my_task_name := dbms_sqltune.create_tuning_task (sql_id=> my_sqlid,
       scope         => 'COMPREHENSIVE',
       time_limit    => 300,
       task_name     => 'test_task1',
       description   => 'test_task1'
    );
END;
/

BEGIN
dbms_sqltune.execute_tuning_task (task_name => 'test_task1');
END;
/

SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'test_task1';
SET LONG 10000
SET LONGCHUNKSIZE 10000
SET LINESIZE 100
set pages 60
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'test_task1')
FROM DUAL;
begin
DBMS_SQLTUNE.drop_tuning_task('test_task1');
end;
/

We have a read-only NFS-mounted disk available on all database servers; the files are placed there and the script is initiated by a cron entry for each SID on an hourly basis
40 * * * * /shared/oracle/performance/tuning_recommendations.ksh SID >/dev/null 2>&1

Output is created in the folder $HOME/logs and 14 days’ worth of reports are kept.

A sample output report follows (only one task is shown, though on this PeopleSoft database it would typically show many)

DBMS_SQLTUNE.REPORT_TUNING_TASK('TEST_TASK1')
----------------------------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name                  : test_task1
Tuning Task Owner                 : SYS
Scope                             : COMPREHENSIVE
Time Limit(seconds)               : 60
Completion Status                 : COMPLETED
Started at                        : 11/12/2009 07:43:33
Completed at                      : 11/12/2009 07:44:16
Number of Statistic Findings      : 1
Number of Index Findings          : 1

-------------------------------------------------------------------------------
Schema Name: SYSADM
SQL ID     : cy3fmjha2sjnr
SQL Text   : SELECT M.EMPLID, M.EMPL_RCD, M.SCH_PRIM_ALT_IND,
             TO_CHAR(M.DUR,'YYYY-MM-DD'), M.SEQ_NO, M.CHNG_PRIMARY,
             M.SCHEDULE_GRP, M.SETID, M.WRKDAY_ID, M.SHIFT_ID, M.SCHED_HRS,
             M.SCH_CONFIG1, M.SCH_CONFIG2, M.SCH_CONFIG3, M.SCH_CONFIG4,
             TO_CHAR(M.START_DTTM,'YYYY-MM-DD-HH24.MI.SS."000000"'),
             TO_CHAR(M.END_DTTM,'YYYY-MM-DD-HH24.MI.SS."000000"'),
             M.SCHED_SOURCE, M.OFFDAY_IND, A.TIMEZONE, A.SCH_CATEGORY From
             PS_SCH_MNG_SCH_TBL M, PS_SCH_ADHOC_DTL A Where M.EMPLID = :1 and
             M.EMPL_RCD = :2 and M.SCH_PRIM_ALT_IND = :3 and M.DUR between
             TO_DATE(:4,'YYYY-MM-DD') and TO_DATE(:5,'YYYY-MM-DD') and
             A.EMPLID = M.EMPLID and A.EMPL_RCD = M.EMPL_RCD and
             A.SCH_PRIM_ALT_IND = M.SCH_PRIM_ALT_IND and A.DUR = M.DUR and
             A.SEQ_NO = M.SEQ_NO and A.SEQNUM = 1 Order By M.DUR Asc,
             M.SCHED_SOURCE Desc, M.SEQ_NO Desc

-------------------------------------------------------------------------------
FINDINGS SECTION (2 findings)
-------------------------------------------------------------------------------

1- Statistics Finding
---------------------
  Optimizer statistics for index "SYSADM"."PS_SCH_MNG_SCH_TBL" are stale.

  Recommendation
  --------------
  - Consider collecting optimizer statistics for this index.
    execute dbms_stats.gather_index_stats(ownname => 'SYSADM', indname =>
            'PS_SCH_MNG_SCH_TBL', estimate_percent =>
            DBMS_STATS.AUTO_SAMPLE_SIZE);

  Rationale
  ---------
    The optimizer requires up-to-date statistics for the index in order to
    select a good execution plan.

2- Index Finding (see explain plans section below)
--------------------------------------------------
  The execution plan of this statement can be improved by creating one or more
  indices.

  Recommendation (estimated benefit: 94.67%)
  ------------------------------------------

  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.
    create index SYSADM.IDX$$_21C0F0001 on
    SYSADM.PS_SCH_ADHOC_DTL("EMPLID","SEQNUM","EMPL_RCD","SCH_PRIM_ALT_IND","DUR");

  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.
    create index SYSADM.IDX$$_21C0F0002 on
    SYSADM.PS_SCH_MNG_SCH_TBL("EMPLID","DUR");

  Rationale
  ---------
    Creating the recommended indices significantly improves the execution plan
    of this statement. However, it might be preferable to run "Access Advisor"
    using a representative SQL workload as opposed to a single statement. This
    will allow to get comprehensive index recommendations which takes into
    account index maintenance overhead and additional space consumption.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original
-----------
Plan hash value: 2070933151

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                    |     1 |   103 |   387   (1)| 00:00:01 |       |       |
|*  1 |  FILTER                               |                    |       |       |            |          |       |       |
|   2 |   SORT ORDER BY                       |                    |     1 |   103 |   387   (1)| 00:00:01 |       |       |
|   3 |    NESTED LOOPS                       |                    |     1 |   103 |   386   (1)| 00:00:01 |       |       |
|   4 |     PARTITION RANGE ITERATOR          |                    |    10 |   350 |   371   (1)| 00:00:01 |   KEY |   KEY |
|   5 |      TABLE ACCESS BY LOCAL INDEX ROWID| PS_SCH_ADHOC_DTL   |    10 |   350 |   371   (1)| 00:00:01 |   KEY |   KEY |
|*  6 |       INDEX RANGE SCAN                | PS_SCH_ADHOC_DTL   |    10 |       |   369   (1)| 00:00:01 |   KEY |   KEY |
|   7 |     PARTITION RANGE ITERATOR          |                    |     1 |    68 |     2   (0)| 00:00:01 |   KEY |   KEY |
|   8 |      TABLE ACCESS BY LOCAL INDEX ROWID| PS_SCH_MNG_SCH_TBL |     1 |    68 |     2   (0)| 00:00:01 |   KEY |   KEY |
|*  9 |       INDEX UNIQUE SCAN               | PS_SCH_MNG_SCH_TBL |     1 |       |     1   (0)| 00:00:01 |   KEY |   KEY |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_DATE(:4,'YYYY-MM-DD')<=TO_DATE(:5,'YYYY-MM-DD'))
   6 - access("A"."EMPLID"=:1 AND "A"."EMPL_RCD"=TO_NUMBER(:2) AND "A"."SCH_PRIM_ALT_IND"=:3 AND
              "A"."DUR">=TO_DATE(:4,'YYYY-MM-DD') AND "A"."SEQNUM"=1 AND "A"."DUR"<=TO_DATE(:5,'YYYY-MM-DD'))
       filter("A"."SEQNUM"=1)
   9 - access("M"."EMPLID"=:1 AND "M"."EMPL_RCD"=TO_NUMBER(:2) AND "M"."SCH_PRIM_ALT_IND"=:3 AND
              "A"."DUR"="M"."DUR" AND "A"."SEQ_NO"="M"."SEQ_NO")
       filter("M"."DUR">=TO_DATE(:4,'YYYY-MM-DD') AND "M"."DUR"<=TO_DATE(:5,'YYYY-MM-DD'))

2- Using New Indices
--------------------
Plan hash value: 1209469329

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                    |     1 |   103 |    21  (10)| 00:00:01 |       |       |
|*  1 |  FILTER                               |                    |       |       |            |          |       |       |
|   2 |   SORT ORDER BY                       |                    |     1 |   103 |    21  (10)| 00:00:01 |       |       |
|*  3 |    HASH JOIN                          |                    |     1 |   103 |    20   (5)| 00:00:01 |       |       |
|   4 |     TABLE ACCESS BY GLOBAL INDEX ROWID| PS_SCH_ADHOC_DTL   |    10 |   350 |    10   (0)| 00:00:01 | ROWID | ROWID |
|*  5 |      INDEX RANGE SCAN                 | IDX$$_21C0F0001    |    10 |       |     4   (0)| 00:00:01 |       |       |
|*  6 |     TABLE ACCESS BY GLOBAL INDEX ROWID| PS_SCH_MNG_SCH_TBL |    13 |   884 |     9   (0)| 00:00:01 | ROWID | ROWID |
|*  7 |      INDEX RANGE SCAN                 | IDX$$_21C0F0002    |    13 |       |     3   (0)| 00:00:01 |       |       |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_DATE(:4,'YYYY-MM-DD')<=TO_DATE(:5,'YYYY-MM-DD'))
   3 - access("A"."EMPLID"="M"."EMPLID" AND "A"."EMPL_RCD"="M"."EMPL_RCD" AND
              "A"."SCH_PRIM_ALT_IND"="M"."SCH_PRIM_ALT_IND" AND "A"."DUR"="M"."DUR" AND
              SYS_OP_DESCEND("A"."DUR")=SYS_OP_DESCEND("M"."DUR") AND "A"."SEQ_NO"="M"."SEQ_NO")
   5 - access("A"."EMPLID"=:1 AND "A"."SEQNUM"=1 AND "A"."EMPL_RCD"=TO_NUMBER(:2) AND "A"."SCH_PRIM_ALT_IND"=:3 AND
              "A"."DUR">=TO_DATE(:4,'YYYY-MM-DD') AND "A"."DUR"<=TO_DATE(:5,'YYYY-MM-DD'))
   6 - filter("M"."SCH_PRIM_ALT_IND"=:3 AND "M"."EMPL_RCD"=TO_NUMBER(:2))
   7 - access("M"."EMPLID"=:1 AND "M"."DUR">=TO_DATE(:4,'YYYY-MM-DD') AND "M"."DUR"<=TO_DATE(:5,'YYYY-MM-DD'))

-------------------------------------------------------------------------------

PL/SQL procedure successfully completed.

The routine above works well but I am happy to consider any changes or improvements.

PS: if anybody knows how to use a code tag without getting those horrible green wraparound marks, please let me know.


Maxing out CPUs – script


I have long subscribed to the ORACLE-L mailing list and I find it a great source of ideas and views on the management of Oracle databases. As a side note for anybody who used it in the past, the community seems much stronger now than previously, when there was too much RTFM and other flaming-type responses.

In the last couple of days there has been a thread running entitled Stress my CPU’s, started by Lee Robertson, asking for a way to ‘hammer the CPU’s on the box as we want to test dynamically allocating CPU’s from another partition to handle the increased workload’.

The strength of the list is that there were a number of quality responses, but my hat goes off to Tom Dale for producing this gem:

set serveroutput on

declare
l_job_out integer;
l_what dba_jobs.what%type;
l_cpus_to_hog CONSTANT integer := 4;
l_loop_count varchar2(10) := '500000000';
begin
/*
** Create some jobs to load the CPU
*/
for l_job in 1..l_cpus_to_hog loop
dbms_job.submit(
job => l_job_out
, what => 'declare a number := 1; begin for i in 1..'||l_loop_count||' loop a := ( a + i )/11; end loop; end;'
);
commit;
dbms_output.put_line( 'job - '|| l_job_out );
select what into l_what from dba_jobs where job = l_job_out;
dbms_output.put_line( 'what - '|| l_what );
end loop;
end;
/

Short, sweet and very effective. I will certainly be using it when I want to look at using resource management.

PS: if you want to stop the jobs (although they do finish in a few minutes at the default of 500 million iterations), the following SQL generates the removal commands.

select 'execute dbms_job.remove('||job||');' from user_jobs where what like 'declare a number := 1%';
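Alternatively, a minimal PL/SQL sketch that removes the jobs directly instead of generating commands (it simply assumes the jobs were submitted with the WHAT text used above):

begin
  -- find the stress-test jobs by their WHAT text and remove them
  for j in (select job from user_jobs
            where what like 'declare a number := 1%') loop
    dbms_job.remove(j.job);
  end loop;
  commit;
end;
/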

 


The use of functions in a .profile file


My first public presentation is over now and whilst I was very nervous beforehand I felt quite comfortable once I started. To anybody who was there, thanks for putting up with me.

I promised to upload the contents of a .profile we use for the oracle account as that includes a number of useful functions and aliases. This is rolled out to every database server to ensure that we have a similar feel to every server.
We also have the oratab files set up so that the primary database comes first (if there is more than one database), ASM next, and the Grid agent home after that. That way the default SID set when logging in is the main database we are likely to be using.
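As a purely hypothetical illustration (made-up SIDs and paths, not from our servers), an oratab ordered that way might look like this:

PROD1:/app/oracle/product/11.1.0/db_1:Y
+ASM:/app/asm/product/11.1.0/asm_1:Y
agent10g:/app/oracle/agent10g:N

With SETSID_AUTO set to YES, the setsid function in the profile below then defaults ORACLE_SID to the first entry, PROD1.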

#!/bin/sh
# Standard Oracle .profile.
#
#
##################################################################
# Version Control
##################################################################
# Who Date Description
# Initial Version
# 21/12/2009 Added lsum and lsh functions
# 31/12/2009 Added export USER for agent startup
#
##################################################################

###############################################
# Global Variables NOT exported
###############################################
ORATAB=/etc/oratab

###############################################
# Functions
###############################################

###############################################
# Check for ASM and if present set TNS_ADMIN
###############################################
asmcheck()
{
#
# Check for ASM
# Look for + at start of line to indicate ASM instance
#
QUERY_ASM=`awk 'BEGIN {FS=":"} $1 ~ /^[+]/ && $3 ~ /[NY]$/ {print $2}' $ORATAB`
if [ -d $QUERY_ASM/network/admin ]
then
  export TNS_ADMIN=$QUERY_ASM/network/admin
else
  unset TNS_ADMIN
fi
}

########################################################
# Show list of available Oracle SIDs if on a Terminal
########################################################
showsid ()
{
if tty -s
then
  if [ -f $ORATAB ]
  then
    i=1
    echo ""
    awk 'BEGIN {FS=":"} $1 ~ /^[+A-Za-z]/ && $3 ~ /[NY]$/ {print $1 " : " $2}' ${ORATAB} | while read file_line
    do
      printf "%2d. %s\n" $i "$file_line"
      let i=$i+1
    done
    echo ""
  fi
fi
}

########################################################
# Oracle Environment Set Function if it is a terminal
########################################################
setsid ()
{
if tty -s
then
  if [ -f $ORATAB ]
  then
    line_count=`cat $ORATAB | grep -v ^# | sed 's/:.*//' | wc -l`
    # check that the oratab file has some contents
    if [ $line_count -ge 1 ]
    then
      sid_selected=0
      while [ $sid_selected -eq 0 ]
      do
        sid_available=0
        for i in `cat $ORATAB | grep -v ^# | sed 's/:.*//'`
        do
          sid_available=`expr $sid_available + 1`
          sid[$sid_available]=$i
        done
        # get the required SID
        case ${SETSID_AUTO:-""} in
        YES) # Auto set use 1st entry
          sid_selected=1 ;;
        *)
          echo ""
          echo "Please select a SID from the list below by entering"
          echo "the associated number."
          echo ""
          i=1
          while [ $i -le $sid_available ]
          do
            printf "%2d. %10s\n" $i ${sid[$i]}
            i=`expr $i + 1`
          done
          echo ""
          echo "Select the Oracle SID [1]: \c"
          read entry
          if [ -n "$entry" ]
          then
            entry=`echo "$entry" | sed 's/[a-zA-Z]//g'`
            if [ -n "$entry" ]
            then
              entry=`expr $entry`
              if [ $entry -ge 1 ] && [ $entry -le $sid_available ]
              then
                sid_selected=$entry
              fi
            fi
          else
            sid_selected=1
          fi
        esac
      done
      #
      # At this point we have a valid sid reference
      #
      export ORACLE_SID=${sid[$sid_selected]}
      export ORAENV_ASK=NO
      . oraenv
      export ORAENV_ASK=NO
      echo
      echo "Setting ORACLE_SID = $ORACLE_SID"
      echo

      #
      # Amend variables based on environment
      #
      export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
      export PATH=$ORACLE_HOME/OPatch:$PATH
      asmcheck

    else
      echo "No entries in $ORATAB. No environment set"
    fi
  fi
fi
}

########################################################
# Sum the sizes of the files in a directory
########################################################
lsum ()
{
ls -l | awk 'NR == 1 {d=1024 ; z="Kb"}
{sz+=$5}
sz > 1048575 {d=1048576 ; z="Mb"}
sz > 1073741823 {d=1073741824 ; z="Gb"}
END {printf ("%.1f %s\n", sz/d,z)}'
}

########################################################
# Human readable ls output giving the size of each file in Kb, Mb or Gb
# rather than bytes, based on the Solaris ls -lh command.
# (The size comparisons below are reconstructed; the original < and >
# signs were lost to HTML mangling.)
########################################################
lsh ()
{
ls -l | awk '$5 <= 1048575 {printf ("%s %+10s %+6s %10.1f%s %s %+2s %+5s %s\n", $1,$3,$4,$5/1024,"Kb",$6,$7,$8,$9)}
$5 > 1048575 && $5 <= 1073741823 {printf ("%s %+10s %+6s %10.1f%s %s %+2s %+5s %s\n", $1,$3,$4,$5/1048576,"Mb",$6,$7,$8,$9)}
$5 > 1073741823 {printf ("%s %+10s %+6s %10.1f%s %s %+2s %+5s %s\n", $1,$3,$4,$5/1073741824,"Gb",$6,$7,$8,$9)}'
}
########################################################
# Some Unix Environment Defaults
########################################################
umask 022
export TMOUT=0
export HISTFILE=/.sh_history
export EDITOR=vi
export PATH=$PATH:.:/usr/sbin
export LD_LIBRARY_PATH=/usr/lib/hpux64
export UNIX95=""
export USER=$LOGNAME
export PS1="[\${LOGNAME}@`hostname`][\${ORACLE_SID}]\${PWD} $"

########################################################
# Set up some standard Oracle Environment components
########################################################
export SQLPATH=/shared/oracle
export ORACLE_BASE=/app/oracle
export NLS_DATE_FORMAT="YYYY-MM-DD:HH24:MI:SS"

########################################################
# Standard Alias settings (no dependency on environment)
########################################################

alias asm='asmcmd -p'
alias lss='ls -ltr'
alias sysdba='sqlplus "/ as sysdba"'
alias sysasm='sqlplus "/ as sysasm"'
alias rmant='rman target / nocatalog'
alias pmon='ps -fu oracle | grep pmon | grep -v grep'

#############################################################
# The Main profile code starts here
#############################################################

#
# If you are on a terminal then process
#
if tty -s
then

  #
  # Set up the Terminal
  #
  if [ "$TERM" = "" ]
  then
    eval `tset -s -Q -m ':?hp'`
  else
    eval `tset -s -Q`
  fi
  stty erase "^?" kill "^U" intr "^C" eof "^D"
  stty hupcl ixon ixoff
  tabs
  set -u

  #
  # Mail Configuration
  #
  export MAIL=/var/mail/oracle
  export MAILMSG="You have mail!"
  if [ -s "$MAIL" ]
  then
    echo "$MAILMSG"
  fi

  #
  # Set the default Oracle Environment
  #
  SETSID_AUTO="YES" # setsid AUTO function enabled set 1st entry in oratab
  setsid
  SETSID_AUTO="" # setsid AUTO function disabled

  #
  # Environment Dependent Alias Settings
  #
  alias oh='cd $ORACLE_HOME'
  alias vahome='if [ -d /oradata/$ORACLE_SID/VirtualAgent ] ; then cd /oradata/$ORACLE_SID/VirtualAgent;fi'
  alias diagdest='cd $ORACLE_BASE/`adrci exec="show homes" | grep $ORACLE_SID`/trace'
  alias diagdestasm='cd $ORACLE_BASE/`adrci exec="show homes" | grep ASM`/trace'
fi


Producing a grid report


I mentioned that we have a morning check report that we run from OEM in this post. I was asked about it a couple of days ago, so I thought I would post the contents as it may give some ideas on what can be monitored and how we use the report.

We check the status of the following events, and I have shown the code we use for each of the reports

Usable space in FRA < 20%
Filesystems over 90% used
Database not backed up within 1 day and not blacked out and not a physical standby
Data Guard Status (targets not blacked out)
Alert Log Errors
Database Parameter Changes

The report can be seen by editing the report definition / elements (base it on one of the standard reports) and then selecting ‘sql parameters’

The sql code is produced below in the same order as the list above

SELECT TARGET_NAME "Database Name",
NVL2(VALUE, VALUE, 'No Data Available') "Usable Flash Recovery Area (%)",
COLLECTION_TIMESTAMP "Collected"
FROM SYSMAN.MGMT$METRIC_CURRENT
WHERE lower(metric_label) = 'flash recovery' and lower(metric_column) = 'usable_area'
and value < 20
and target_guid not in (select distinct member_target_guid from MGMT$TARGET_FLAT_MEMBERS where aggregate_target_name in ('E-Business','PeopleSoft'))
and TARGET_NAME not like '%ora10g%'

select "Host", "Mount Point", "Size MB", "Used MB", "Used %" from sysman.mor_filesystem_usage where "Used %" > 90
and target_guid not in (select distinct member_target_guid from MGMT$TARGET_FLAT_MEMBERS where aggregate_target_name in ('E-Business','PeopleSoft'))
and "Mount Point" not like '/var'
and "Mount Point" not like '/var/adm/crash'
and "Mount Point" not like '/dm'
and "Mount Point" not like '/staging'
and "Mount Point" not like '/app/oradata/rman_backup_cutover'
and "Mount Point" not like '/home'
and "Mount Point" not like '/DEV_mrdw'
and "Mount Point" not like '/app/peoplesoft/hrws2'

select
host,
database_name,
status,
start_time,
end_time,
input_type,
output_device_type,
collected
from (select * from sysman.mor_all_backups where lower(db_type) <> 'physical standby'
and database_name not like -- excluded list of databases goes here)
where ((status not in ('COMPLETED','RUNNING') or (status is null and end_time < SYSDATE -1) or end_time < SYSDATE -1) or end_time is null)
and target_guid not in (select distinct member_target_guid from MGMT$TARGET_FLAT_MEMBERS where aggregate_target_name in ('E-Business','PeopleSoft'))

select
  mc.collection_timestamp,
  mab.database_name as primary,
  mc.key_value as standby,
  mc.value as Status
from
  sysman.mor_all_backups mab,
  (select collection_timestamp,target_name, value,key_value from sysman.mgmt$metric_current where lower(metric_name) = 'dataguard') mc
where mc.TARGET_NAME = mab.database_name
and lower(mab.db_type) <> 'physical standby'
and mc.value <> '_$_$'
and target_guid not in (select distinct member_target_guid from MGMT$TARGET_FLAT_MEMBERS where aggregate_target_name in ('E-Business','PeopleSoft'))
and database_name not like 'DBAPIT1A'

select
maa.TARGET,maa.CATEGORY,maa.TIME,maa.ERROR
from sysman.MOR_ACTIVE_ALERTS maa
where target_guid not in (select distinct member_target_guid from MGMT$TARGET_FLAT_MEMBERS where aggregate_target_name in ('E-Business','PeopleSoft'))

select
deltatime "Date",
target_name "Database",
hostname "Host",
operation "Operation",
key1 "Parameter",
attribute "Attribute",
oldvalue "Old Value",
newvalue "New Value"
from MGMT$ECM_CONFIG_HISTORY_KEY1
where target_name like '%PRD%'
and deltatime >= sysdate -1
and collectiontype = 'MGMT_DB_INIT_PARAMS_ECM'
and key1 != 'resource_manager_plan'

The power of emdiag


I am currently looking at emdiag and finding it more and more useful as I come to understand its capabilities. To copy a comment from a Metalink note: EMDIAG is a diagnostics and troubleshooting kit which can help with a health assessment of a site. It is a set of scripts developed by Werner De Gruyter, and instructions for download and usage are in Note 421053.1 EMDiagkit download and master index. I will not go into the installation instructions here but will just show a few of the commands that I am finding useful and an example of an issue that it highlighted. Note that I have set my Oracle Home to be the OMS home and repvfy is found in OH/bin. However the output is located under wherever you have installed the emdiag software, which in my case is OH/emdiag/log

repvfy dump health -pwd password 

This gives a very good overview of repository DB-specific information, database performance statistics, installed OMS patchsets and EM monitoring targets.

repvfy -pwd password 

This loops through a list of modules and runs specific tests against each module. The output is a list of tests run and errors found. The first few lines of my current output show the following errors

verifyAGENTS
001. Agents without a monitored host target: 2
101. Active Agents with clock-skew problems: 21
113. Agents not uploading any data: 63
verifyASLM
verifyAVAILABILITY
100. Broken targets marked as UP: 1
verifyBLACKOUTS
verifyCA
verifyCREDENTIALS
700. Orphaned target credentials: 1

The complete list of modules available can be found by using the repvfy -h4 command

repvfy -h4

repvfy 2010.0514 - EMDIAG - Repository verification
Usage:
 repvfy [-{h}] [-i] [-t <trace lvl>] [-zip] <commands>
      [-usr <user>] [-pwd <pwd>] [-tns <tns alias>]
      [-module <module name>] [-test <test number>] [-level <level>] [-detail]
      [-name <obj name>] [-type <obj type>] [-col <obj col>] [-owner <obj owner>] [-guid <obj guid>]
      [-stime <start time>] [-etime <end time>] [-id <obj id>] [-vers <obj version>]
      [/{d|o} <home>] [/log <dir>] [/{sid} <name>] [/u {<env>,...}] [/v {<env>=<var>,...}]

-- Available modules for VERIFY --

  AGENTS           Grid Control Agents
  ASLM             Application Server Level Monitoring
  AVAILABILITY     Availbility sub-system
  BLACKOUTS        Blackout sub-system
  CA               Corrective Actions
  CREDENTIALS      Credentials
  DEVELOPMENT      Development/Test (internal only)
  ECM              Configuration Management
  EVENT            Event sub-system
  JOBS             Job sub-system
  LOADERS          Loader
  METRICS          Metrics
  NOTIFICATIONS    Notification sub-system
  PLUGINS          Plugins and extentions
  POLICIES         Policies and violations
  PROVISIONING     Provisioning setup and configuration
  RCA              Root Cause Analysis Engine
  REPORTS          Reporting framework
  REPOSITORY       Repository
  ROLES            Roles and privileges
  TARGETS          Targets
  TEMPLATES        Templates
  USERS            User sub-system

If we want to focus on a particular test that is indicating problems, we can get more information by running that test in isolation and gathering both the sql used and the problems identified. Test 101 is showing 21 agents whose clocks differ from the OMS server by more than 120 seconds in either direction.

verifyAGENTS
101. Active Agents with clock-skew problems: 21
repvfy verify agents -test 101 -pwd password -detail 

Two files are created, a sql file and a detail file. In my view one of the best features is that the command above produces the sql query that it runs. This is a very good way to find out which tables are being used and where data within the repository is stored. The sql file contains the following.

SELECT agent, timezone_region, difference "seconds",
       DECODE(SIGN(difference),-1,'-','+')||
               TRIM(TO_CHAR(MOD(FLOOR(ABS(difference)/3600),24),'09'))||'h'||
               TRIM(TO_CHAR(MOD(FLOOR(ABS(difference)/60),60),'09'))||'m'||
               TRIM(TO_CHAR(MOD(ABS(difference),60),'09'))||'s' clock_skew
FROM   (SELECT t.target_name agent, t.timezone_region,
               (p.last_heartbeat_utc-(MGMT_GLOBAL.TO_UTC(p.last_heartbeat_ts,t.timezone_region)))*86400 difference
        FROM   mgmt_emd_ping p, mgmt_targets t
        WHERE  p.target_guid = t.target_guid
          AND  p.status = 1
          AND  p.max_inactive_time > 0)
WHERE  difference NOT BETWEEN -120 AND 120
ORDER BY difference
;

The log file details which agents are out of sync with the OMS server, which for us raised an interesting question.

Within OEM there is a pre-built and locked report called “Agents Clock synchronization offset” which we have been using for a long time. That shows that we have no agents that are more than a few seconds out, and yet the emdiag query shows we have 21 targets that differ by between 240 and 500 seconds. The OEM report is a locked-down query, so I have an SR open with Oracle to try and determine why we see the differences. Just for information, the emdiag report is correct and the clocks are out on 21 servers. Might be worth trying out on your environments.

So that is a brief overview of how I am using emdiag and no doubt I will post more as I delve deeper.


Using Grid to display database CPU usage


There was a recent post on the Oracle-L list asking about using Grid Control to report on a particular database’s CPU usage during a certain period of time. A number of answers came in showing the SQL queries that would answer the question, but I saw the question as being ‘how can we display the CPU usage in Grid’, or indeed how can we produce a customised metric report on any database in Grid.

However, for those who are interested in the recommended scripted methods, the answers that were of most use in my view were from Karl Arao, who pointed to a script he has written, and Rich Jesse, who produced the following code

SELECT
mmd.*
FROM
sysman.mgmt$metric_daily mmd
JOIN
sysman.mgmt$target mt
ON mmd.target_name = mt.target_name
AND mmd.target_type = mt.target_type
AND mmd.target_guid = mt.target_guid
WHERE
mmd.metric_column like '%cpu%'
AND mt.target_name = :DB_NAME
AND mt.target_type = 'oracle_database';

My method was to create a report that could be used to report on any instance and this is how I did it.

On OEM select create report and give it a title, category and sub-category. This is where it will be located in the Reports tab. Select a target type of ‘Database Instance’ and select a time period, in my case the last 24 hours.

Now add 2 new elements, as I am going to produce a report with two metric graphs in it. Edit the ‘set parameters’ tab.

Then ensure that the appropriate metrics are selected by choosing the target type ‘Database Instance’ but again inheriting the target. Select whichever metric you are interested in and then repeat the process for the second required graph.

Now all you need to do is to look at the preview, enter a SID and hey presto

Finally, our end report looks like this – all we need to do is run the report and select the instance name

I hope that has proved useful and demonstrated how easy it is to produce a customised report which can be run against any desired instance



Scripts to resize standby redolog files


I have already posted about an issue that required me to drop and recreate standby log files, so I thought I would post the scripts I used.

Resize Standby Redo Logs

1. On primary defer log shipping (dynamic change)

alter system set log_archive_dest_state_2 = defer scope = memory;

2. On standby database cancel managed recovery

alter database recover managed standby database cancel;

3. Drop standby logs on standby database

ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;

ALTER DATABASE DROP STANDBY LOGFILE GROUP 5;

ALTER DATABASE DROP STANDBY LOGFILE GROUP 6;

ALTER DATABASE DROP STANDBY LOGFILE GROUP 7;

4. Recreate the new standby logs (a note on choosing their size follows the sample output below)

alter database add standby logfile THREAD 1 group 4 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 1000M;

alter database add standby logfile THREAD 1 group 5 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 1000M;

alter database add standby logfile THREAD 1 group 6 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 1000M;

alter database add standby logfile THREAD 1 group 7 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 1000M;

5. Enable log shipping on the Primary database

alter system set log_archive_dest_state_2 = enable scope = memory;

6. Enable managed recovery on standby database

alter database recover managed standby database using current logfile disconnect;

7. Check that the standby logs are being used by running the following query:

set lines 155 pages 9999
col thread# for 9999990
col sequence# for 999999990
col grp for 990
col fnm for a50 head "File Name"
col "First SCN Number" for 999999999999990
break on thread# skip 1
select a.thread#
,      a.sequence#
,      a.group# grp
,      a.bytes/1024/1024 Size_MB
,      a.status
,      a.archived
,      a.first_change# "First SCN Number"
,      to_char(FIRST_TIME,'DD-Mon-RR HH24:MI:SS') "First SCN Time"
,      to_char(LAST_TIME,'DD-Mon-RR HH24:MI:SS') "Last SCN Time"
from   v$standby_log a
order by 1,2,3,4
/

It should return something like the following:

THREAD#  SEQUENCE#  GRP    SIZE_MB STATUS     ARC First SCN Number First SCN Time              Last SCN Time
-------- ---------- ---- ---------- ---------- --- ---------------- --------------------------- ---------------------------
       1          0    4        100 UNASSIGNED NO                 0
                  0    6        100 UNASSIGNED YES                0
                  0    7        100 UNASSIGNED YES                0
               7316    5        100 ACTIVE     YES        153517071 04-Feb-11 13:39:32          04-Feb-11 13:40:41
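As mentioned at step 4, it helps to size the standby logs to match the online redo logs on the primary; a quick generic check (a minimal query, nothing site-specific assumed):

select thread#, group#, bytes/1024/1024 size_mb
from   v$log
order by thread#, group#
/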

The Mother of all ASM scripts


Back in 2009 I posted a script which I found very useful for reviewing ASM disks. I gave that post the low-key title of The ASM script of all ASM scripts. Now that the script has been improved I have to go a bit further with the hyperbole, so we have The Mother of all ASM scripts. If it ever gets improved again the next post will just be called ‘Who’s the Daddy’.

I have been using the current script across all our systems for the last 3 years and I find it very useful. A colleague, Allan Webster, has added a couple of improvements and it is now better than before.

The improvements show current disk I/O statistics and a breakdown of the types of files in each disk group, with the total size of each file type. The I/O statistics are useful when you have a lot of databases, many of which are test and development, so you do not look at them that often. It gives a quick overview that allows you to get a feel for whether anything is wrong and to see what the system is actually doing. There are also a few comments at the beginning defining the various ASM views available.

REM ASM views:
REM VIEW            |ASM INSTANCE                                     |DB INSTANCE
REM ----------------------------------------------------------------------------------------------------------
REM V$ASM_DISKGROUP |Describes a disk group (number, name, size       |Contains one row for every open ASM
REM                 |related info, state, and redundancy type)        |disk in the DB instance.
REM V$ASM_CLIENT    |Identifies databases using disk groups           |Contains no rows.
REM                 |managed by the ASM instance.                     |
REM V$ASM_DISK      |Contains one row for every disk discovered       |Contains rows only for disks in the
REM                 |by the ASM instance, including disks that        |disk groups in use by that DB instance.
REM                 |are not part of any disk group.                  |
REM V$ASM_FILE      |Contains one row for every ASM file in every     |Contains rows only for files that are
REM                 |disk group mounted by the ASM instance.          |currently open in the DB instance.
REM V$ASM_TEMPLATE  |Contains one row for every template present in   |Contains no rows.
REM                 |every disk group mounted by the ASM instance.    |
REM V$ASM_ALIAS     |Contains one row for every alias present in      |Contains no rows.
REM                 |every disk group mounted by the ASM instance.    |
REM v$ASM_OPERATION |Contains one row for every active ASM long       |Contains no rows.
REM                 |running operation executing in the ASM instance. |

set wrap off
set lines 155 pages 9999
col "Group Name" for a6    Head "Group|Name"
col "Disk Name"  for a10
col "State"      for a10
col "Type"       for a10   Head "Diskgroup|Redundancy"
col "Total GB"   for 9,990 Head "Total|GB"
col "Free GB"    for 9,990 Head "Free|GB"
col "Imbalance"  for 99.9  Head "Percent|Imbalance"
col "Variance"   for 99.9  Head "Percent|Disk Size|Variance"
col "MinFree"    for 99.9  Head "Minimum|Percent|Free"
col "MaxFree"    for 99.9  Head "Maximum|Percent|Free"
col "DiskCnt"    for 9999  Head "Disk|Count"

prompt
prompt ASM Disk Groups
prompt ===============

SELECT g.group_number  "Group"
,      g.name          "Group Name"
,      g.state         "State"
,      g.type          "Type"
,      g.total_mb/1024 "Total GB"
,      g.free_mb/1024  "Free GB"
,      100*(max((d.total_mb-d.free_mb)/d.total_mb)-min((d.total_mb-d.free_mb)/d.total_mb))/max((d.total_mb-d.free_mb)/d.total_mb) "Imbalance"
,      100*(max(d.total_mb)-min(d.total_mb))/max(d.total_mb) "Variance"
,      100*(min(d.free_mb/d.total_mb)) "MinFree"
,      100*(max(d.free_mb/d.total_mb)) "MaxFree"
,      count(*)        "DiskCnt"
FROM v$asm_disk d, v$asm_diskgroup g
WHERE d.group_number = g.group_number and
d.group_number <> 0 and
d.state = 'NORMAL' and
d.mount_status = 'CACHED'
GROUP BY g.group_number, g.name, g.state, g.type, g.total_mb, g.free_mb
ORDER BY 1;

prompt ASM Disks In Use
prompt ================

col "Group"          for 999
col "Disk"           for 999
col "Header"         for a9
col "Mode"           for a8
col "State"          for a8
col "Created"        for a10          Head "Added To|Diskgroup"
--col "Redundancy"     for a10
--col "Failure Group"  for a10  Head "Failure|Group"
col "Path"           for a19
--col "ReadTime"       for 999999990    Head "Read Time|seconds"
--col "WriteTime"      for 999999990    Head "Write Time|seconds"
--col "BytesRead"      for 999990.00    Head "GigaBytes|Read"
--col "BytesWrite"     for 999990.00    Head "GigaBytes|Written"
col "SecsPerRead"    for 9.000        Head "Seconds|PerRead"
col "SecsPerWrite"   for 9.000        Head "Seconds|PerWrite"

select group_number  "Group"
,      disk_number   "Disk"
,      header_status "Header"
,      mode_status   "Mode"
,      state         "State"
,      create_date   "Created"
--,      redundancy    "Redundancy"
,      total_mb/1024 "Total GB"
,      free_mb/1024  "Free GB"
,      name          "Disk Name"
--,      failgroup     "Failure Group"
,      path          "Path"
--,      read_time     "ReadTime"
--,      write_time    "WriteTime"
--,      bytes_read/1073741824    "BytesRead"
--,      bytes_written/1073741824 "BytesWrite"
,      read_time/reads "SecsPerRead"
,      write_time/writes "SecsPerWrite"
from   v$asm_disk_stat
where header_status not in ('FORMER','CANDIDATE')
order by group_number
,        disk_number
/

Prompt File Types in Diskgroups
Prompt ========================
col "File Type"      for a16
col "Block Size"     for a5    Head "Block|Size"
col "Gb"             for 9990.00
col "Files"          for 99990
break on "Group Name" skip 1 nodup

select g.name                                   "Group Name"
,      f.TYPE                                   "File Type"
,      f.BLOCK_SIZE/1024||'k'                   "Block Size"
,      f.STRIPED
,        count(*)                               "Files"
,      round(sum(f.BYTES)/(1024*1024*1024),2)   "Gb"
from   v$asm_file f,v$asm_diskgroup g
where  f.group_number=g.group_number
group by g.name,f.TYPE,f.BLOCK_SIZE,f.STRIPED
order by 1,2;
clear break

prompt Instances currently accessing these diskgroups
prompt ==============================================
col "Instance" form a8
select c.group_number  "Group"
,      g.name          "Group Name"
,      c.instance_name "Instance"
from   v$asm_client c
,      v$asm_diskgroup g
where  g.group_number=c.group_number
/

prompt Free ASM disks and their paths
prompt ==============================
col "Disk Size"    form a9
select header_status                   "Header"
, mode_status                     "Mode"
, path                            "Path"
, lpad(round(os_mb/1024),7)||'Gb' "Disk Size"
from   v$asm_disk
where header_status in ('FORMER','CANDIDATE')
order by path
/

prompt Current ASM disk operations
prompt ===========================
select *
from   v$asm_operation
/ 

This is how some of the changes look:

Added To    Total   Free                                Seconds  Seconds
Group Disk Header    Mode     State    Diskgroup      GB     GB Disk Name  Path                PerRead PerWrite
----- ---- --------- -------- -------- ---------- ------ ------ ---------- ------------------- ------- --------
    1    0 MEMBER    ONLINE   NORMAL   20-FEB-09      89     88 FRA_0000   /dev/oracle/disk388    .004     .002
    1    1 MEMBER    ONLINE   NORMAL   31-MAY-10      89     88 FRA_0001   /dev/oracle/disk260    .002     .002
    1    2 MEMBER    ONLINE   NORMAL   31-MAY-10      89     88 FRA_0002   /dev/oracle/disk260    .007     .002
    2   15 MEMBER    ONLINE   NORMAL   04-MAR-10      89     29 DATA_0015  /dev/oracle/disk203    .012     .023
    2   16 MEMBER    ONLINE   NORMAL   04-MAR-10      89     29 DATA_0016  /dev/oracle/disk203    .012     .021
    2   17 MEMBER    ONLINE   NORMAL   04-MAR-10      89     29 DATA_0017  /dev/oracle/disk203    .007     .026
    2   27 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0027  /dev/oracle/disk260    .011     .023
    2   28 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0028  /dev/oracle/disk259    .009     .020
    2   38 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0038  /dev/oracle/disk190    .012     .025
    2   39 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0039  /dev/oracle/disk189    .014     .015
    2   40 MEMBER    ONLINE   NORMAL   31-MAY-10      89     30 DATA_0040  /dev/oracle/disk260    .011     .024
    2   41 MEMBER    ONLINE   NORMAL   31-MAY-10      89     30 DATA_0041  /dev/oracle/disk260    .009     .022
    2   42 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0042  /dev/oracle/disk260    .011     .018
    2   43 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0043  /dev/oracle/disk260    .003     .026
    2   44 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0044  /dev/oracle/disk260    .008     .019
    2   45 MEMBER    ONLINE   NORMAL   31-MAY-10      89     30 DATA_0045  /dev/oracle/disk193    .008     .018
    2   46 MEMBER    ONLINE   NORMAL   31-MAY-10      89     30 DATA_0046  /dev/oracle/disk192    .007     .024
    2   47 MEMBER    ONLINE   NORMAL   31-MAY-10      89     30 DATA_0047  /dev/oracle/disk191    .005     .022
    2   48 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0048  /dev/oracle/disk190    .008     .021
    2   49 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0049  /dev/oracle/disk189    .008     .026
    2   50 MEMBER    ONLINE   NORMAL   31-MAY-10      89     29 DATA_0050  /dev/oracle/disk261    .009     .030

56 rows selected.

File Types in Diskgroups
========================

Group                   Block
Name   File Type        Size  STRIPE  Files       Gb
------ ---------------- ----- ------ ------ --------
DATA   CONTROLFILE      16k   FINE        1     0.01
       DATAFILE         16k   COARSE    404  2532.58
       ONLINELOG        1k    FINE        3     6.00
       PARAMETERFILE    1k    COARSE      1     0.00
       TEMPFILE         16k   COARSE     13   440.59

FRA    AUTOBACKUP       16k   COARSE      2     0.02
       CONTROLFILE      16k   FINE        1     0.01
       ONLINELOG        1k    FINE        3     6.00

A setup to test backup and restore options


A couple of years ago, my team put together some procedures and the underlying databases to allow testing of as many RMAN recovery options as we could think of. We had two databases, which gave us a Dataguard capability, and we secured both databases using Commvault backups so we always had a clean starting point. Each team member had a training objective to complete the course, and we all thought it was an excellent refresher process. As I recall, Hamid Ansari did most of the work, so credit to him.

 

I noticed today that Francisco Munoz Alvarez has provided a script which does very much the same thing and I can only applaud the idea. I don’t think his script covers Dataguard but no doubt it could.  His web page Crash Simulator is where the script can be found. I will be looking at this myself in the next few days.
