CellMetric Tool

Hey guys, it's been a while since my last post, as I've been quite busy in the last months with a personal project that can be used to collect some current metrics from an Exadata Storage Server. I have a beta version now, so it's still under development, but I hope you enjoy this tool. It's a Java app that was built and tested under ESS image version 12.1.2.1.1.150316.2 and Java version 1.7.0_72 (Java 7 is the version deployed with ESS image 12.1.2.1.1).

Well, what this app called CellMetric does is quite simple: it executes CellCLI through SSH (so node equivalency for the cellmonitor user needs to be set up correctly), lists the current metrics from the cell you provide, saves this output to an XML file and then prints the results on the screen. That easy. The only caveat is that the Database I/O Load metric output is ordered by load and lists only the top 15 databases. You execute it with java CellMetric -top -cell <cellHostName>. Below you can see an image taken at execution time:

[Image: CellMetric execution]
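
For reference, the collection CellMetric automates is roughly the same as running CellCLI yourself over SSH. A minimal sketch, assuming node equivalency for cellmonitor is already in place and that the cell hostname exa01cel01 is just an example:

# List every current metric from the cell (this is what CellMetric parses into XML)
$ ssh cellmonitor@exa01cel01 "cellcli -e list metriccurrent"

# Only the database I/O load metric, which CellMetric ranks to show the top 15 databases
$ ssh cellmonitor@exa01cel01 "cellcli -e list metriccurrent DB_IO_LOAD"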

So just download this CellMetric_v1.2.zip, place both files on your Exadata Database Machine and enjoy it. Cya!!

Exadata Storage: Architecture

Hello fellows!! Let's head into a new post about the Exadata Database Machine; following a tip from my friend David Siqueira, I'll write about the storage architecture of this machine. On the X5-2, the disks that serve the databases running on the Exadata Database Machine come in two flavors: High Capacity (HC) and Extreme Flash (EF). The HC configuration has twelve SAS disks of 4 TB each, totaling 48 TB of raw space per cell, but if the diskgroups are built using normal redundancy this value falls to around 20 TB of usable data. The EF configuration is all about write performance: according to Oracle, it writes roughly twice as fast as HC, but its raw capacity is only about a quarter of the HC configuration.
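
If you want to check which flavor a given cell has and the size of its disks, a quick illustrative CellCLI call run on the cell itself does the trick:

# Show each physical disk with its type (HardDisk or FlashDisk), raw size and status
CellCLI> LIST PHYSICALDISK ATTRIBUTES name, diskType, physicalSize, status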

The physical disks are presented by the disk controller of each cell node and are configured as LUNs on the server; based on these, the Exadata Storage Server software builds the cell disks from the LUNs. The physical disks are the lowest level of abstraction for the disk controller, while for the Exadata Storage software the LUNs are the lowest level and the cell disks are the higher-level objects. Once the cell disks are created, one or more grid disks can be built from each cell disk, and after that the grid disks can be used to create a diskgroup on an ASM instance.
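
To give an idea of how this chain is materialized, here is a minimal CellCLI sketch (illustrative only; the DATA prefix and the size are made-up values, and keep in mind the note further down about who is allowed to change this on a deployed machine):

# Build one cell disk on top of every hard-disk-backed LUN
CellCLI> CREATE CELLDISK ALL HARDDISK

# Carve grid disks out of the cell disks; these are the devices ASM will see
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATA, SIZE=2208G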

[Image: zonebit]

Grid disks are always built starting at the lowest offset available on a cell disk (the outer part of the disk) and growing toward the coldest region; the lowest offsets are the hottest, fastest portion of the disk. That's why, when Oracle sets up an Exadata Machine, the diskgroup for data uses the lowest offsets, followed by the RECO and DBFS diskgroups. The picture on the left shows the gray part of the disk as the hottest and the orange one as the coldest.
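
You can check this placement yourself by listing the grid disks together with their offsets; on a stock setup the DATA grid disks start at a lower offset than RECO and DBFS. An illustrative check:

# Show where each grid disk starts on its cell disk (lower offset = outer, faster tracks)
CellCLI> LIST GRIDDISK ATTRIBUTES name, cellDisk, offset, size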

 

 

Only Oracle ACS is authorized to modify the cell disk structure after the Exadata Machine setup (unless you have an SR open with Oracle that allows you to do it). If the company that acquired the Exadata Database Machine modifies this structure, it could lose support for the product, since customers are only allowed to customize the grid disks. Below is a picture that shows the relation from the physical disks all the way to the grid disks:

 

[Image: Exadata Storage - from physical disks to grid disks]

When you need to add a disk to a diskgroup, you must provide all the InfiniBand IPs of the storage server and the names of the grid disks. Each grid disk follows a naming standard composed of <diskgroup_name>_<cell_disk_type>_<cell_disk_number>_<cell_hostname>. Below I listed some commands used to list the physical disks, LUNs, cell disks and grid disks, plus a command to add a grid disk to a diskgroup:

[Image: physical disk listing]

[Image: parted output for a LUN]

[Image: lun listing]

[Image: griddisk listing]
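
In text form, the listing side boils down to the following CellCLI commands (a minimal sketch; run them on the storage cell):

# One command per abstraction layer, from what the controller presents up to what ASM consumes
CellCLI> LIST PHYSICALDISK
CellCLI> LIST LUN
CellCLI> LIST CELLDISK
CellCLI> LIST GRIDDISK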

SYS@SQL> ALTER DISKGROUP <DISKGROUP_NAME> ADD DISK 'o/<IP_IB01>;<IP_IB02>/<GRIDDISK_NAME>';
SYS@SQL> ALTER DISKGROUP DATA ADD DISK 'o/192.168.10.9;192.168.10.10/DATA_CD_00_exa01cell01';

And as you can see, there were two grid disks built on a single cell disk, and then we could add them to the diskgroup on ASM. Well guys, that's all for now and I hope you liked it. See you next time!