Tuesday, April 14, 2015

Disk overhead on Exadata X2

I recently had a discussion about the available storage from the Exadata storage cells, in this case on an X2. Note that the disks are advertised as 2TB in size; however, there is a small overhead at each stage, from the physicaldisk layer down to the celldisk layer, which I show in this posting.


From the example below we can see the physical disk size is 2TB from the makeModel property, but the physicalSize is about 1862.66GB, a drop of 185.34GB from 2048GB.


CellCLI> list physicaldisk 20:0 detail
         name:                   20:0
         deviceId:               19
         diskType:               HardDisk
         enclosureDeviceId:      20
         errMediaCount:          0
         errOtherCount:          0
         foreignState:           false
         luns:                   0_0
         makeModel:              "SEAGATE ST32000SSSUN2.0T"
         physicalFirmware:       061A
         physicalInsertTime:     xxxxxxxxxxxxxxxxxxx
         physicalInterface:      sas
         physicalSerial:         xxxxxxxxxxxxxxxxxxx
         physicalSize:           1862.6559999994934G
         slotNumber:             0
         status:                 normal

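The 185GB gap is mostly a units conversion rather than lost space: the advertised 2TB is a decimal figure (2 × 10^12 bytes), while physicalSize is reported in binary gigabytes. A quick sanity check with bc:

$ echo "scale=3; 2*10^12 / 1024^3" | bc
1862.645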


At the celldisk level the size is reported as about 1832.59GB, a further drop of about 30GB.

CellCLI> list celldisk CD_00_cel01 detail
         name:                   CD_00_cel01
         comment:
         creationTime:           xxxxxxxxxxxxxxxxxxx
         deviceName:             /dev/sda
         devicePartition:        /dev/sda3
         diskType:               HardDisk
         errorCount:             0
         freeSpace:              0
         id:                     xxxxxxxxxxxxxxxxxxx
         interleaving:           none
         lun:                    0_0
         physicalDisk:           xxxxxxxxxxxxxxxxxxx
         raidLevel:              0
         size:                   1832.59375G
         status:                 normal


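The first two celldisks carry this extra ~30GB overhead; the remaining ten do not. Listing the size of every hard-disk celldisk on the cell makes the pattern obvious. An invocation along the following lines (the exact dcli flags and user are assumptions for illustration) produces the output below:

$ dcli -c cel01 -l celladmin "cellcli -e list celldisk detail where diskType=HardDisk" | grep "size:"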
cel01: size:                 1832.59375G
cel01: size:                 1832.59375G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G

Finally, we can see the overhead at each level: starting from a 2TB physical disk, we end up with 1832GB of usable space to present to ASM, and that is before the disk is added to a diskgroup with NORMAL or HIGH redundancy, where ASM mirroring reduces the available space even further. That works out to about 89.5% usable storage per 2TB disk, an overhead of 10.5%. This larger overhead applies only to the first and second disks in the storage cell, which also host the system area; the remaining celldisks report 1861.7GB, a small overhead of about 1GB.
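
To make the percentages explicit, here is the arithmetic with bc (which truncates at the given scale):

$ echo "scale=4; 1832.59375 / 2048 * 100" | bc
89.4800
$ echo "scale=4; 1861.703125 / 2048 * 100" | bc
90.9000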

One more item to note: the grid disk size will match the size reported in V$ASM_DISK, since the grid disks are what get presented to ASM.


CellCLI> list griddisk DATA_CD_00_cel01 detail
         name:                   DATA_CD_00_cel01
         asmDiskgroupName:       DATA
         asmDiskName:            DATA_CD_00_CEL01
         asmFailGroupName:       CEL01
         availableTo:
         cachingPolicy:          default
         cellDisk:               CD_00_cel01
         comment:
         creationTime:           xxxxxxxxxxxxxxxxxxx
         diskType:               HardDisk
         errorCount:             0
         id:                     xxxxxxxxxxx
         offset:                 32M
         size:                   1466G
         status:                 active


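Note the DATA grid disk occupies 1466G of the 1832.59G celldisk, starting at a 32M offset; the remainder is carved into the other grid disks on that celldisk (typically for RECO and DBFS_DG diskgroups, an assumption for this environment). CellCLI can show the full breakdown:

CellCLI> list griddisk attributes name, size, offset where cellDisk=CD_00_cel01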

SQL> select name, total_mb/1024 from v$asm_disk;


NAME                           TOTAL_MB/1024
------------------------------ -------------
...
DATA_CD_01_CEL01                        1466
...
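
V$ASM_DISK also exposes OS_MB alongside TOTAL_MB, so you can confirm that ASM sees the same size the grid disks present to the OS; for example:

select name, os_mb/1024 os_gb, total_mb/1024 total_gb from v$asm_disk order by name;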

Please keep this overhead in mind when sizing diskgroups and planning future capacity for your Exadata storage.

Wednesday, April 8, 2015

Automate Deleting a Database


Deleting a RAC database with DBCA, using both the GUI and the command line.

I wanted to demonstrate a simple way to delete or drop a RAC database using the Database Configuration Assistant (DBCA). I have done this several times manually, and using DBCA makes the task easy, helps automate the process, and saves time.

  1. Set the appropriate database home, then invoke dbca as the oracle user from the command line, as sketched below.

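A minimal sketch of step 1 (the home path here is a placeholder, not taken from this environment):

$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ dbca &
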
  2. Select RAC database.

  3. Select the option to Delete a Database.

  4. Select the database to delete, then click Finish.

  5. Confirm Yes to delete all of the Oracle instances and datafiles for your database.

  6. Monitor the progress.

  7. A final confirmation screen offers the option to perform another operation.

  8. You may also monitor the progress of the database deletion from the alert log; note that the alert log itself is removed once the database is removed.

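To follow along, tail the alert log while DBCA works; the diag path below assumes ORACLE_BASE=/apps/oracle (as in the DBCA log path shown later) and an instance named odsdev21:

$ tail -f /apps/oracle/diag/rdbms/odsdev2/odsdev21/trace/alert_odsdev21.log
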
Wed Apr 08 10:28:07 2015
ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE;
Shutting down instance (immediate)
Stopping background process SMCO
Shutting down instance: further logons disabled
Stopping background process QMN

...... Summarized output

Completed: ALTER DATABASE CLOSE NORMAL

Deleted Oracle managed file +RECO_EXAD/odsdev2/archivelog/2015_04_08/thread_1_seq_21807.49154.876479305
...... Start the database with cluster_database set to false

Completed: ALTER DATABASE   MOUNT
ALTER SYSTEM enable restricted session;
DROP DATABASE
Deleted Oracle managed file +DATA_EXAD/odsdev2/datafile/system.459.791624835
Deleted Oracle managed file +DATA_EXAD/odsdev2/datafile/sysaux.457.791624821
...... Dropping of files from ASM associated with database
You will notice that the directory for the database and its contents are removed from ASM, and all RAC services registered with the Clusterware are removed for you automatically as well!
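
A quick way to verify this from the grid environment is asmcmd, using the diskgroup names from the log above; the database directory should no longer be listed:

$ asmcmd ls +DATA_EXAD/
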
Below is an even easier way to invoke dbca to drop a database, via the command line.
The database must be running when you do this. Thanks to Charles Kim for sharing this with me.

RMANDR - oracle: cat del_DBATOOLS_dbca.txt
dbca -silent -deleteDatabase -sourceDB DBATOOLS -sysDBAUserName sys -sysDBAPassword ChangemeSys!

RMANDR - oracle: ksh del_DBATOOLS_dbca.txt
Connecting to database
4% complete
9% complete
14% complete
19% complete
23% complete
28% complete
47% complete
Updating network configuration files
52% complete
Deleting instance and datafiles
76% complete
100% complete
Look at the log file "/apps/oracle/cfgtoollogs/dbca/DBATOOLS.log" for further details.
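
As a final check after the silent delete, confirm that the Clusterware no longer knows about the database; srvctl should report that the database resource cannot be found:

$ srvctl config database -d DBATOOLS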