Thursday, July 30, 2009

howto autostart database & listener under Solaris & Linux

Whenever our office requires maintenance work, they ask us to take the databases down, usually on a weekend, and it has become a headache. Since this happens only (or mostly) with the developers' environment databases, we decided to configure all developer environment databases for autostart!

here is a small howto:

step 1: Modify oratab

For Solaris:

$vi /var/opt/oracle/oratab

For Linux:

$vi /etc/oratab

At the end of the oratab file, each database has a one-line entry ending in an autostart flag (N by default). Change the trailing N to Y for every database that should start automatically, so that dbstart and dbshut pick it up.

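As a sketch (the SID and ORACLE_HOME below are made-up example values, not our real ones), an oratab entry and the flag change look like this:

```shell
# Example oratab entry; the SID (orcl) and ORACLE_HOME path are placeholders.
# The last colon-separated field is the autostart flag read by dbstart/dbshut.
cat > /tmp/oratab.example <<'EOF'
orcl:/u01/app/oracle/product/11.1.0/db_1:N
EOF

# Flip the flag from N to Y so dbstart picks the instance up:
sed 's/:N$/:Y/' /tmp/oratab.example
# → orcl:/u01/app/oracle/product/11.1.0/db_1:Y
```

On a real system you would edit the flag in place with vi; the sed line above only prints the changed entry.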
step 2: Creating script

$vi /etc/init.d/

write the following content in it (the ORACLE_HOME and ORACLE_OWNER values are examples; adjust them to your environment):

#!/bin/sh
# Adjust these two values to your environment
ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
ORACLE_OWNER=oracle

if [ ! -f $ORACLE_HOME/bin/dbstart ]
then
    echo "Oracle startup: cannot start"
    exit 1
fi

case "$1" in
'start') # Start the Oracle databases and listeners
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart"
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl start"
    ;;
'stop') # Stop the Oracle databases and listeners
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl stop"
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut"
    ;;
esac

step 3: configure the O.S. to run the script on bootup/shutdown

For Solaris:

Link the script into the /etc/rc directories:

ln -s /etc/init.d/ /etc/rc0.d/K10dbora
ln -s /etc/init.d/ /etc/rc2.d/S99dbora

If the links do not work, copy the script instead:

cp /etc/init.d/ /etc/rc0.d/K10dbora

cp /etc/init.d/ /etc/rc2.d/S99dbora

Note: on Solaris 9 we had to use cp; we are not sure why, but ln was not working for us.

For Linux:

$chkconfig --level 345 on
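The commands in this step need the script's file name; assuming it was saved as /etc/init.d/dbora (a hypothetical name chosen to match the K10dbora/S99dbora links above), the full sequence would look like the sketch below, which only prints the commands instead of touching /etc. On Linux, chkconfig also expects a header comment inside the script itself:

```shell
# Assumed script name: dbora (hypothetical; use whatever you created in step 2).
# Printed rather than executed, since the real commands modify system directories.
cat <<'EOF'
# Solaris: stop link at run level 0, start link at run level 2
ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
ln -s /etc/init.d/dbora /etc/rc2.d/S99dbora

# Linux: chkconfig reads this header from the top of /etc/init.d/dbora:
#   # chkconfig: 345 99 10
#   # description: start/stop the Oracle databases and listener
chkconfig --add dbora
chkconfig --level 345 dbora on
EOF
```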

step 4: create pfile under $ORACLE_HOME/dbs --- OPTIONAL

If your database does not have a pfile under $ORACLE_HOME/dbs, you need to create one with the following command.

connect as "sysdba"

SQL> create pfile='$ORACLE_HOME/dbs/initSID.ora' from spfile;
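One caveat: SQL*Plus does not expand environment variables such as $ORACLE_HOME inside a quoted file name, so the path has to be written out literally. A small shell sketch (with made-up ORACLE_HOME and SID values) shows the statement with the path already expanded:

```shell
# SQL*Plus will not expand $ORACLE_HOME inside the quoted string,
# so build the literal path in the shell first.
# Placeholder values; substitute your real ORACLE_HOME and SID.
ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
ORACLE_SID=orcl
echo "create pfile='$ORACLE_HOME/dbs/init$ORACLE_SID.ora' from spfile;"
# → create pfile='/u01/app/oracle/product/11.1.0/db_1/dbs/initorcl.ora' from spfile;
```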


Wednesday, July 15, 2009

ORA-00018 maximum number of sessions exceeded

We hit the ORA-00018 problem in one of our data-center databases. The number of application users had suddenly increased and we had not been informed about it, so as soon as the default sessions limit was exhausted the database blocked all new sessions; even when connecting as SYS, it reported something like "database is not available". There was no error in the alert log except a few sessions waiting too long in the queue, presumably because the session queue was overloaded and was holding all new database connections for too long.

I followed this thread to resolve it, though the terrible part of the story is that it can only be solved by bouncing the instance... so we had to wait until working hours :(
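For reference, the underlying fix is raising the sessions/processes limits in the spfile, which is exactly why the bounce is unavoidable. A sketch of the statements involved (the 300/500 values are arbitrary examples, not recommendations; the block below only prints the SQL, to be run in SQL*Plus as SYSDBA):

```shell
# Printed SQL, not executed here; run the statements as SYSDBA.
# The 300/500 values are examples only.
cat <<'EOF'
-- how close each limit is to being hit
select resource_name, current_utilization, max_utilization, limit_value
  from v$resource_limit
 where resource_name in ('sessions', 'processes');

-- raise the limits in the spfile; takes effect only after the bounce
alter system set processes=300 scope=spfile;
alter system set sessions=500 scope=spfile;
EOF
```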

SPECIAL NOTE: if you are hitting this problem in a RAC environment, you should take a look at Riyaj's post


Tuesday, July 14, 2009

Oracle 11g Pre Installation on Solaris 10 x64

Yesterday I was asked to write a pre-installation task guide for Oracle 11g, so I went through several sites to gather details for the document. Although Oracle 11g can be installed on ASM or a local file system, in our case we used the local file system. There isn't any real complexity in the Oracle 11g installation.


References: [1] [2]
Meta Link Note: 743042.1


Tuesday, July 7, 2009

create standby when archive logs are lost/corrupt

I have been in situations where we somehow lost, or (most recently) found corrupted, the archive logs from the production site and had to synchronize our standby/Data Guard site. The terrible fact about such a situation is that we had to recreate the standby database; here is the scenario with a solution :)

Scenario: Your database archives to shared storage (in RAC, of course) or to a local disk (in non-RAC), and either the disk fails and you lose all or some of your archive files, or Oracle writes an archive log onto a corrupt block and that specific archive log file is lost. In either case, you lose archives before they are transferred to the standby/Data Guard database.

Consequences: You have no other way to regenerate that archive file, and you are now at a point where you have to recreate that specific standby database.

Protection: In our office we have a slight protection against such failures: a cron shell script that runs every 5 minutes and copies archive log files to an NFS-mounted disk. This is very handy when we lose an archive log file on disk but find it on the NFS mount. BUT last week we didn't even have those missing archive log files on the NFS-mounted disk.
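A minimal sketch of that kind of safety-net copy, using placeholder paths and temporary directories instead of the real archive destination and NFS mount:

```shell
# Minimal sketch of the 5-minute safety-net copy.
# SRC/DST are temporary stand-ins; in the real cron job SRC is the
# archive log destination and DST the NFS-mounted disk.
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Simulate two archived logs on local disk
touch "$SRC/arch_1_100.arc" "$SRC/arch_1_101.arc"

# Copy anything not already present on the NFS side (cp -p keeps timestamps)
for f in "$SRC"/*.arc; do
    [ -f "$DST/$(basename "$f")" ] || cp -p "$f" "$DST/"
done

ls "$DST"
```

The real cron entry would run a script like this every 5 minutes; note that classic Solaris cron does not understand */5, so the minute field has to be spelled out as 0,5,10,...,55.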

Solution: You have to go through the following procedure.

Step 1: Create a new RMAN-based backup, EXCLUDING archive log files but including a control file for standby.
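Step 1 can be sketched in RMAN roughly as follows (printed rather than executed; channel and format settings are omitted, and the `for standby` clause is what produces a control file usable for standby creation):

```shell
# Printed RMAN commands for step 1; run them inside "rman target /".
# Channel/format settings are deliberately left out of this sketch.
cat <<'EOF'
run {
  # datafiles only; archive logs deliberately excluded
  backup database;
  # control file flagged for standby creation
  backup current controlfile for standby;
}
EOF
```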

Step 2: Copy to the standby site all archives more recent than your last backup, i.e. the archive log files generated after the step 1 backup.

Step 3: recreate your standby database just as normal.

PS: you can take help from my post to accomplish step 3.

Request: if you know any better way than this, don't forget to share it with me :p