Patching Exadata QFSP July 2015

Hey, everyone! I’m here with a short post about patching Exadata with the QFSP July 2015. My teammates and I recently patched our X2-2 Half Rack environment from 11.2.3.3.0.131014.1 to 12.1.2.1.2.150617.1, so I want to THANK them (Vitor Eduardo, Claudio Angerami, Bruno Palma, Anselmo Ribeiro and Edmilson Carmo) for the great job we did. There is no big news; nothing really changed from my previous patching post. The key is to pay attention to the ‘Known Issues’ section and address each issue as it is found. Also, analyze the RPMs that will be removed, in order to guarantee the same functionality as before. After that, if everything is fine, your platform should be ready to patch.

So let’s go for it! Just one point here: we changed the real hostnames and IPs of the servers, cells and switches.
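
Quick note before the logs: the group files passed to patchmgr are plain text lists with one hostname per line. A minimal sketch of what ours looked like (sanitized, as mentioned above):

# /root/ib_switches
exa01sw-ib2
exa01sw-ib3

# /root/cell_group (one line per cell; a half rack has seven)
exa01cel01
exa01cel02
...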

  • IB SWITCHES UPGRADE:

[root@exa01db01 patch_12.1.2.1.2.150617.1]# ./patchmgr -ibswitches /root/ib_switches -upgrade

With arguments: -ibswitches /root/ib_switches -upgrade
2015-08-08 17:01:37 -0300 [INFO] pid file: /var/log/exadatatmp/_EXA_AXE_patchmgr.lock : 98220
PID TTY TIME CMD
2015-08-08 17:01:42 -0300 ++++++++++++++++++ Logs so far begin ++++++++++
2015-08-08 17:01:42 -0300 ++++++++++++++++++ Logs so far end ++++++++++
2015-08-08 17:01:42 -0300 1 of 1 :SUCCESS: DO: Initiate upgrade of InfiniBand switches to 2.1.5-1. Expect up to 15 minutes for each switch
2015-08-08 17:45:23 -0300 ++++++++++++++++++ Logs so far begin ++++++++++
----- InfiniBand switch update process started Sat Aug 8 17:01:42 BRT 2015 -----
[NOTE ] Log file at /var/log/cellos/upgradeIBSwitch.log

[INFO ] List of InfiniBand switches for upgrade: ( exa01sw-ib2 exa01sw-ib3 )
[PROMPT ] Use the default password for all switches? (y/n) [n]:
[PROMPT ] Updating only 2 switch(es). Are you sure you want to continue? (y/n) [n]:
[SUCCESS ] Verifying Network connectivity to exa01sw-ib2
[SUCCESS ] Verifying Network connectivity to exa01sw-ib3
[SUCCESS ] Validating verify-topology output
[INFO ] Proceeding with upgrade of InfiniBand switches to version 2.1.5_1
[INFO ] Master Subnet Manager is set to "exa01sw-ib2" in all Switches

[INFO ] ———- Starting with IBSwitch exa01sw-ib2
[SUCCESS ] Disable Subnet Manager on exa01sw-ib2
[SUCCESS ] Copy firmware packages to exa01sw-ib2
[SUCCESS ] exa01sw-ib2 is at 2.1.3-4. Meets minimal patching level 2.1.3-4
[SUCCESS ] Verifying that /tmp has 120M in exa01sw-ib2, found 138M
[SUCCESS ] Verifying that / has 80M in exa01sw-ib2, found 198M
[SUCCESS ] Verifying that exa01sw-ib2 has 120M free memory, found 239M
[SUCCESS ] Verifying host details in /etc/hosts and /etc/sysconfig/network-scripts/ifcfg-eth[0,1] for exa01sw-ib2
[SUCCESS ] Verifying that exa01sw-ib2 has at least 1 NTP Server, found 1
[INFO ] Manually validate the following entries Date:(YYYY-MM-DD) 2015-08-08 Time:(HH:MM:SS) 17:03:03
[SUCCESS ] Execute plugin check for Patch Check Prereq on exa01sw-ib2
[SUCCESS ] Pre-update validation on exa01sw-ib2
[INFO ] Starting upgrade on exa01sw-ib2 to 2.1.5_1. Please give upto 10 mins for the process to complete. DO NOT INTERRUPT or HIT CTRL+C during the upgrade
[SUCCESS ] Execute plugin check for Patching on exa01sw-ib2
[SUCCESS ] Load firmware 2.1.5_1 onto exa01sw-ib2
[SUCCESS ] Disable Subnet Manager on exa01sw-ib2
[SUCCESS ] Verify that /conf/configvalid is set to 1 on exa01sw-ib2
[SUCCESS ] Set SMPriority to 5 on exa01sw-ib2
[INFO ] Rebooting exa01sw-ib2. Wait for 240 secs before continuing
[SUCCESS ] Reboot exa01sw-ib2
[SUCCESS ] Restart Subnet Manager on exa01sw-ib2
[INFO ] Starting post-update validation on exa01sw-ib2
[SUCCESS ] Inifiniband switch exa01sw-ib2 is at target patching level
[SUCCESS ] Verifying host details in /etc/hosts and /etc/sysconfig/network-scripts/ifcfg-eth[0,1] for exa01sw-ib2
[SUCCESS ] Verifying that exa01sw-ib2 has at least 1 NTP Server, found 1
[INFO ] Manually validate the following entries Date:(YYYY-MM-DD) 2015-08-08 Time:(HH:MM:SS) 17:19:42
[SUCCESS ] Firmware verification on InfiniBand switch exa01sw-ib2
[SUCCESS ] Execute plugin check for Post Patch on exa01sw-ib2
[SUCCESS ] Post-check validation on IBSwitch exa01sw-ib2
[SUCCESS ] Update switch exa01sw-ib2 to 2.1.5_1

[INFO ] ———- Starting with InfiniBand Switch exa01sw-ib3
[SUCCESS ] Disable Subnet Manager on exa01sw-ib3
[SUCCESS ] Copy firmware packages to exa01sw-ib3
[SUCCESS ] exa01sw-ib3 is at 2.1.3-4. Meets minimal patching level 2.1.3-4
[SUCCESS ] Verifying that /tmp has 120M in exa01sw-ib3, found 138M
[SUCCESS ] Verifying that / has 80M in exa01sw-ib3, found 199M
[SUCCESS ] Verifying that exa01sw-ib3 has 120M free memory, found 237M
[SUCCESS ] Verifying host details in /etc/hosts and /etc/sysconfig/network-scripts/ifcfg-eth[0,1] for exa01sw-ib3
[SUCCESS ] Verifying that exa01sw-ib3 has at least 1 NTP Server, found 1
[INFO ] Manually validate the following entries Date:(YYYY-MM-DD) 2015-08-08 Time:(HH:MM:SS) 17:24:40
[SUCCESS ] Execute plugin check for Patch Check Prereq on exa01sw-ib3
[SUCCESS ] Pre-update validation on exa01sw-ib3
[INFO ] Starting upgrade on exa01sw-ib3 to 2.1.5_1. Please give upto 10 mins for the process to complete. DO NOT INTERRUPT or HIT CTRL+C during the upgrade
[SUCCESS ] Execute plugin check for Patching on exa01sw-ib3
[SUCCESS ] Load firmware 2.1.5_1 onto exa01sw-ib3
[SUCCESS ] Disable Subnet Manager on exa01sw-ib3
[SUCCESS ] Verify that /conf/configvalid is set to 1 on exa01sw-ib3
[SUCCESS ] Set SMPriority to 5 on exa01sw-ib3
[INFO ] Rebooting exa01sw-ib3. Wait for 240 secs before continuing
[SUCCESS ] Reboot exa01sw-ib3
[SUCCESS ] Restart Subnet Manager on exa01sw-ib3
[INFO ] Starting post-update validation on exa01sw-ib3
[SUCCESS ] Inifiniband switch exa01sw-ib3 is at target patching level
[SUCCESS ] Verifying host details in /etc/hosts and /etc/sysconfig/network-scripts/ifcfg-eth[0,1] for exa01sw-ib3
[SUCCESS ] Verifying that exa01sw-ib3 has at least 1 NTP Server, found 1
[INFO ] Manually validate the following entries Date:(YYYY-MM-DD) 2015-08-08 Time:(HH:MM:SS) 17:41:09
[SUCCESS ] Firmware verification on InfiniBand switch exa01sw-ib3
[SUCCESS ] Execute plugin check for Post Patch on exa01sw-ib3
[SUCCESS ] Post-check validation on IBSwitch exa01sw-ib3
[SUCCESS ] Update switch exa01sw-ib3 to 2.1.5_1
[INFO ] InfiniBand Switches ( exa01sw-ib2 exa01sw-ib3 ) updated to 2.1.5_1
[SUCCESS ] Overall status

----- InfiniBand switch update process ended Sat Aug 8 17:45:23 BRT 2015 -----
2015-08-08 17:45:23 -0300 ++++++++++++++++++ Logs so far end ++++++++++
2015-08-08 17:45:23 -0300 1 of 1 :SUCCESS: DONE: Upgrade InfiniBand switch(es) to 2.1.5-1.
================PatchMgr run ended 2015-08-08 17:45:23 -0300 ===========


  • CELL NODES UPGRADE:

For this one, you need to execute the -reset_force and -cleanup procedures before the upgrade, as shown below:

[root@exa01db01 patch_12.1.2.1.2.150617.1]# ./patchmgr -cells /root/cell_group -reset_force
================PatchMgr run started 2015-08-08 00:21:13 -0300 ===========
With arguments: -cells /root/cell_group -reset_force
2015-08-08 00:21:13 -0300 [INFO] pid file: /var/log/exadatatmp/_EXA_AXE_patchmgr.lock : 72003
PID TTY TIME CMD
[INFO] Reset force was successful.
2015-08-08 00:21:18 -0300 :DONE: reset_force
================PatchMgr run ended 2015-08-08 00:21:18 -0300 ===========

[root@exa01db01 patch_12.1.2.1.2.150617.1]# ./patchmgr -cells /root/cell_group -cleanup
================PatchMgr run started 2015-08-08 00:21:45 -0300 ===========
With arguments: -cells /root/cell_group -cleanup
2015-08-08 00:22:50 -0300 :Working: DO: Cleanup …
2015-08-08 00:22:51 -0300 ++++++++++++++++++ Logs so far begin ++++++++++
[INFO] Reset force was successful.
2015-08-08 00:22:51 -0300 ++++++++++++++++++ Logs so far end ++++++++++
2015-08-08 00:22:51 -0300 :SUCCESS: DONE: Cleanup
================PatchMgr run ended 2015-08-08 00:22:51 -0300 ===========
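
Before kicking off the patch itself, it is also worth running the built-in prerequisite check in rolling mode:

[root@exa01db01 patch_12.1.2.1.2.150617.1]# ./patchmgr -cells /root/cell_group -patch_check_prereq -rolling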

After that, you are ready to patch the cells in a rolling fashion:

[root@exa01db01 patch_12.1.2.1.2.150617.1]# ./patchmgr -cells /root/cell_group -patch -rolling
================PatchMgr run started 2015-08-08 00:26:02 -0300 ===========
With arguments: -cells /root/cell_group -patch -rolling
2015-08-08 00:27:07 -0300 :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell …
2015-08-08 00:27:07 -0300 ++++++++++++++++++ Logs so far begin ++++++++++
2015-08-08 00:27:08 -0300 ++++++++++++++++++ Logs so far end ++++++++++
2015-08-08 00:27:08 -0300 :SUCCESS: DONE: Check cells have ssh equivalence for root user.
2015-08-08 00:27:11 -0300 :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute …
2015-08-08 00:27:28 -0300 ++++++++++++++++++ Logs so far begin ++++++++++
.
.
.

I didn’t post the whole log as it is a big one, but it is available here, so feel free to take a look at it. And finally we moved on to the database nodes.

  • DB NODES UPGRADE:

[root@exa01db01 5.150701]# ./dbnodeupdate.sh -u -l /u01/install/quarterly_full_jul2015/21339383/Infrastructure/12.1.2.1.2/ExadataDatabaseServer_OL6/p21151982_121212_Linux-x86-64.zip -s

After this procedure (the -s flag shuts the stack down before the update), the server gets rebooted and it takes a while to come back up. If everything is fine with the patching, complete it:

[root@exa01db01 5.150701]# ./dbnodeupdate.sh -c
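
Once both db nodes are done, a quick sanity check across the rack helps; a minimal sketch, assuming the usual dbs_group and cell_group files in /root:

# Confirm every node reports the new image version
dcli -g /root/dbs_group -l root "imageinfo -ver"
dcli -g /root/cell_group -l root "imageinfo -ver"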

That’s it, guys: the upgrade finished successfully and everything went smoothly. Hope you enjoyed it!

Exadata Storage: What is the Secret?

Hello guys!! Today I’ll post about some of the technologies inside the Exadata Storage. IMHO, Oracle scored big when it launched this hardware built for its own database. The newest version is the X5-2, which can be configured with flash disks in the storage cells, a configuration called Extreme Flash. OK, let’s take a look at this machine and the most interesting features behind it.

This hardware is made of database servers, which host the database and clusterware instances. The data lives on the storage nodes (cell nodes), together with the Exadata system software. The communication between these servers goes through two InfiniBand switches, which can handle data transfers of up to 40 Gbps. Besides that, there is a management switch and the PDUs. When we talk about High Availability, this machine is all about it.

OK, but what is the deal with this hardware? If it were only about hardware, anyone could “copy and paste” it. The deal is the I/O avoidance and reduction that the software provides. This can only be achieved because the databases and the storage servers communicate using a protocol called iDB, which allows intelligent I/O. When I/O is requested from the database nodes to the cell nodes, the cell node knows what kind of I/O is occurring and how to deal with it.

Most of the features mentioned ahead belong to the Smart Scan concept. This behaviour can only occur when Direct Path reads are performed by the database, so single-block (“sequential”) reads get no benefit from Smart Scan. Below are some of the features that minimize I/O on Exadata (see the query sketch right after the list):

  1. Column Filtering: As the name implies, columns are filtered in the storage: when your query retrieves only one column from a table that has 10 columns, only the selected column is returned to the database server. In a conventional environment, the storage would return all the columns and the DBMS would do the filtering;
  2. Predicate Filtering: Similar to Column Filtering, but this one happens at the row level. The Exadata Storage returns only the rows that satisfy your query;
  3. Cell Offloading: Whenever possible, work is offloaded to the cells. An example would be a query that counts all the employees of a company (select count(*) from hr.employees): the work is done on the cell nodes and only the result goes back to the db node. There can be cases where, under a high workload on the cells, the work cannot be offloaded and all the rows go back to the database node, as they would in a conventional environment;
  4. Storage Indexes: The cell nodes have the ability to analyze the queries and build Storage Indexes (SI). These structures reside in the memory of the cell nodes and are lost on every cell restart. They record the minimum and maximum values of a column, so Exadata knows exactly which blocks to hit (and, more importantly, which ones to skip). Each table can have a maximum of eight SI columns;
  5. Join Processing: Exadata uses the Bloom Filter technique, a probabilistic method, to efficiently test result sets when you join two tables; this offload can only be used on databases at version 11.2.0.4 and above;
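
A simple way to see whether Smart Scan actually kicked in for a statement is to compare the offload-related columns in V$SQL; a minimal sketch (the &sql_id substitution variable is just a placeholder):

-- Bytes eligible for offload vs. bytes actually shipped back over the interconnect
SELECT sql_id,
       io_cell_offload_eligible_bytes/1024/1024 AS eligible_mb,
       io_cell_offload_returned_bytes/1024/1024 AS returned_mb,
       io_interconnect_bytes/1024/1024          AS interconnect_mb
  FROM v$sql
 WHERE sql_id = '&sql_id';

If eligible_mb stays at zero, the statement was not offloadable at all.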

So, since the Exadata storage technology is aimed at the highest data throughput per transaction, is it not recommended for an OLTP environment? Not exactly; there are three features that I see as well suited to OLTP environments (see the CellCLI sketch after the list):

  1. Exadata Smart Flash Cache: This is not the same as the Database Smart Flash Cache feature. It offers a write method called write-back cache, in which data is first written to the ESFC and only then written asynchronously to the cell disks on the storage nodes. Also, you can choose to compress the data in the ESFC, which gives you a better usable capacity;
  2. Smart Flash Cache Log: A small area carved out of the flash cache on each cell node and dedicated to redo log writes. When a cell node receives a redo write request, it tries to write to both the cell disks and the flash cache, and whichever finishes first triggers the acknowledgement back to the db node, speeding up redo log writes, which is excellent for OLTP environments;
  3. Join Processing: this feature is a good one for both DW and OLTP environments;
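
For reference, this is roughly how write-back mode is enabled at the cell level. A sketch only: the flash cache has to be dropped and cellsrv restarted on each cell (rolling), so check the exact procedure on MOS before touching anything:

# Check the current mode (writethrough is the default)
cellcli -e list cell attributes flashCacheMode

# Rough per-cell sequence to switch to write-back
cellcli -e drop flashcache
cellcli -e alter cell shutdown services cellsrv
cellcli -e alter cell flashCacheMode=writeback
cellcli -e alter cell startup services cellsrv
cellcli -e create flashcache all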

I understand that Smart Flash Cache with write-back enabled can be a good feature for DW environments too, but when we move to a heavy workload with large data volumes, the data probably will not fit into the Smart Flash Cache. Besides that, the Exadata Database Machine has a special feature called HCC (Hybrid Columnar Compression), with which data can be compressed at high ratios, reducing I/O and enhancing the performance of this machine. Well guys, that’s all for now! See you!

Exadata Storage: Architecture

Hello fellows!! Let’s head into a new post about the Exadata Database Machine and, following a tip from my friend David Siqueira, I’ll write about the storage architecture of this machine. On the X5-2, the disks dedicated to the databases come in two flavors: High Capacity (HC) and Extreme Flash (EF). The HC configuration has twelve SAS disks of 4TB each, totaling 48TB of raw data per cell, but if the diskgroups are built using normal redundancy that falls to roughly 20TB of usable data. The EF configuration is all about better write performance for the data: according to Oracle, EF writes at double the speed, while its raw capacity is only about a quarter of what HC offers.

The physical disks are presented by the disk controller of each cell node and configured as LUNs on the server; on top of these, the Exadata system software builds the cell disks. The physical disks are the lowest level of abstraction for the disk controller, while for the Exadata Storage software the LUNs are the lowest level and the cell disks are the highest. Once the cell disks exist, one or more grid disks can be built on each cell disk, and after that the grid disks can be used to create a diskgroup on an ASM instance.
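
A minimal sketch of that chain on one cell, using CellCLI (the prefix names and the size are illustrative; on a real machine Oracle ACS does the initial layout):

# Cell disks are created on top of every available hard-disk LUN
cellcli -e create celldisk all harddisk

# Grid disks carve up the cell disks; the first prefix created gets the hottest tracks
cellcli -e create griddisk all harddisk prefix=DATA, size=423G
cellcli -e create griddisk all harddisk prefix=RECO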

[Image: zone-bit recording, showing the hot outer tracks and the cold inner tracks of a disk]

Grid disks are always built starting from the lowest offset available on a cell disk (the outer part of the platter) toward the coldest region, and that lowest offset is the hottest, fastest portion of the disk. That’s why, when Oracle sets up an Exadata Machine, the diskgroup for data takes the lowest offset, followed by the RECO and DBFS diskgroups. The picture shows the gray part of the disk as the hottest and the orange one as the coldest.

Only Oracle ACS is authorized to modify the cell disk structure after the Exadata Machine setup (unless you have an open SR with Oracle that allows it). If the company that acquired the Exadata Database Machine modifies this structure, it can lose support for the product; customers may only customize the grid disks. Below is a picture that shows the relation from the physical disk up to the grid disks:

[Image: Exadata Storage, from physical disks and LUNs to cell disks and grid disks]

When you need to add a disk to a diskgroup, you must inform all the InfiniBand IPs of the storage server and the name of the grid disk. Each grid disk follows a naming standard composed of <diskgroup_name>_<cell_disk_type>_<cell_disk_number>_<cell_hostname>. Below are some screenshots of the commands used to list the physical disks, LUNs, cell disks and grid disks (a CellCLI sketch of them follows), plus the command to add a grid disk to a diskgroup:

[Images: listings of a physical disk, parted output for a LUN, the LUN itself, and the grid disks]
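
In case the screenshots are hard to read, these are the CellCLI commands typically behind such listings (a sketch; run them on a cell, or from a db node through dcli):

cellcli -e list physicaldisk
cellcli -e list lun
cellcli -e list celldisk
cellcli -e list griddisk attributes name, cellDisk, size, offset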

SYS@SQL> ALTER DISKGROUP <DISKGROUP_NAME> ADD DISK 'o/<IP_IB01>;<IP_IB02>/<GRIDDISK_NAME>';
SYS@SQL> ALTER DISKGROUP DATA ADD DISK 'o/192.168.10.9;192.168.10.10/DATA_CD_00_exa01cell01';

And as you can see, there were two grid disks built on a single cell disk, and then we could add one of them to the diskgroup on ASM. Well guys, that’s all for now, and I hope you liked it. See you next time!

Exadata Storage: What is the Secret?

Hello, folks! Today I’m posting (at a high level) about some of the technologies in the Exadata storage. In my humble opinion, Oracle hit the bullseye when it launched this hardware, designed specifically for its database. The most recent version of this hardware is the X5-2 (available in eighth, quarter, half and full rack versions) and, unlike previous versions, you can opt for flash disks with the Extreme Flash option. By the way, in this version the buyer can also choose to add just one cell and/or one database node, without being tied to the standard configurations.

Well, I’ll briefly go through the architecture of this machine just to better illustrate the software concept. The hardware is composed of database nodes, which host the Oracle Clusterware binaries and the database instances, and the data resides on the storage nodes (cell nodes), which hold the disks and the software. Communication between these servers happens over InfiniBand switches, with transfer rates of up to 40 Gbps. In addition, there is a management switch and the PDUs. In terms of high availability, the machine is a complete solution.

But after all, what is the big deal with this technology, given that any vendor can copy hardware? In short (and for most of its characteristics), I’d risk saying it is the intelligent I/O reduction provided by the software running in its cells. And this only happens because there is communication, over the iDB protocol, between the database and the cells, describing the type of activity being executed. The cells thus work as a data service for the databases on the db nodes.

Keep in mind that the features mentioned below fall under the Smart Scan concept. It can only kick in when there is a Direct Path read in the database (roughly speaking, since there are several rules), so ordered index reads (Index Range Scan / Index Unique Scan / Index Min/Max / Index Skip Scan / Index Full Scan) gain nothing from the resources that only this storage can provide. Some of these I/O-reduction technologies are listed next (see the Storage Index query sketch after the list):

  1. Column Filtering: On Exadata, as the name suggests, filtering happens per column. So if a query fetches only one column of a table that has ten columns, only the data of that column is returned. On a conventional server, the blocks of the whole table would be shipped from the storage to the database, which would then filter out the single requested column;
  2. Predicate Filtering: Similar to the previous feature, but this filter happens at the row level. Note that when a conventional database requests a single row of a table, and that row lives in a single block, the whole block is returned to the database server, which discards the other rows. On Exadata, only the requested row of the block is returned to the database;
  3. Cell Offloading: Whenever possible, the work is performed by the cells. For example, when a query asks for the number of employees in a table (select count(*) from hr.employees), the activity is carried out by the cell and only the final value is returned to the db node. On a conventional server, all the table data would be shipped to the database, which would have to do the counting itself to return the result. There can be extreme cases where, under heavy activity in the cells, the offloading work is pushed back to the database nodes;
  4. Storage Indexes: The cells are smart enough to analyze the predicates of queries against certain tables and build the SIs (Storage Indexes). This structure resides in the memory of the cells (so it is lost every time a cell is powered off) and records the minimum and maximum values of the columns involved in the queries, so the storage knows exactly where the data is. This structure can hold up to eight SI columns per table;
  5. Join Processing: Exadata uses the Bloom Filter technique, a probabilistic method used when a large table is joined to a small one, to efficiently test the result set between both. This technique is available on databases above version 11.2.0.4;
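
To get a feel for how much physical I/O the Storage Indexes are saving, there is a dedicated system statistic; a minimal sketch:

-- Cumulative bytes of physical I/O avoided thanks to Storage Indexes
SELECT name, value/1024/1024 AS mb_saved
  FROM v$sysstat
 WHERE name = 'cell physical IO bytes saved by storage index';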

So, does that mean the Exadata storage is a technology aimed at environments with a very high data volume per transaction (DW / DSS), and not recommended for environments with a high rate of small transactions (OLTP with light writes)? Not exactly. There are three Exadata features whose use I consider beneficial in OLTP environments (see the check right after the list):

  1. Exadata Smart Flash Cache: Do not confuse this technology with the Database Smart Flash Cache. The Exadata one uses the flash disks that live in the cells, while the other uses high-speed disks (SSD) on the database server. On Exadata, this area can also be configured as write-back, where the data is written to the cache first, releasing the transaction, and the cache takes care of writing it to the cell disks. Nowadays you can also choose to compress the flash disks to increase the flash cache capacity;
  2. Smart Flash Cache Log: This technology, which confuses people a bit (I was confused about it myself a few months ago), keeps an area inside the flash cache of each cell used exclusively for writing redo logs. So when the cell receives a request to write redo data to its disks, it automatically writes in parallel to both the disks and the flash cache, acknowledging on whichever completes first. The goal is to optimize redo writes, because in OLTP environments the bottleneck can be exactly there;
  3. Join Processing: mentioned earlier, and something that can serve table joins well;
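
If you want to confirm that the flash log area exists on the cells, a quick check (a sketch, run from a db node with the usual cell_group file):

# Show the Smart Flash Log area configured on every cell
dcli -g /root/cell_group -l root "cellcli -e list flashlog detail"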

I understand that the Smart Flash Cache technology with write-back enabled can be an excellent benefit for DW environments as well, but when we are talking about the high data volumes of such environments, the cache areas may simply not hold the amount of data being processed.

Besides these technologies, Exadata has a table compression feature called HCC (Hybrid Columnar Compression), where the data is stored and compressed at the column level, achieving excellent compression ratios (a small sketch below). That’s all for today; in the next posts I’ll show examples of these I/O-reduction technologies at work, as well as the compression. Big hug!
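
For reference, HCC is exposed as a simple table clause; a minimal sketch (the table names are made up):

-- Warehouse-style compression, still efficient for queries
CREATE TABLE sales_hcc COMPRESS FOR QUERY HIGH AS SELECT * FROM sales;

-- Archival-style compression, highest ratio, for cold data
ALTER TABLE sales_history MOVE COMPRESS FOR ARCHIVE HIGH;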

Applying patch to Exadata

Hi everyone! I’m here to post about patch apply on the Exadata Machine. Following best practices, we will apply the QFSP (Quarterly Full Stack Patch) for Exadata Jan/2014. The patch apply is almost entirely automatic, so if the prereqs were addressed correctly you will have no bad surprises and your Exadata environment will be patched successfully. At my job, our team applied it recently without any issue.

The patch number is 17816100 [Quarterly Full Stack Download Patch For Oracle Exadata (Jan 2014 – 11.2.3.3.0)], which weighs 3.6G. It patches most of the Exadata Database Machine components, which are: databases; dbnodes; storage servers; InfiniBand switches; and PDUs (Power Distribution Units). Our databases are already patched to version 11.2.0.3.21, and at the end of this patching the image version for the db and cell nodes should be 11.2.3.3.0, as we are moving from image 11.2.3.2.1.

You should carefully read the whole README and the notes regarding this patch, as there is a complete list of prereqs and things to analyze. Although the db and cell nodes will all end up with the same image version, in our case the InfiniBand switch upgrade was optional according to the compatibility matrix, but to keep things simple we upgraded them too. The PDU upgrade is optional and is the easiest one.

Now let’s get hands-on, beginning with the PDUs. This upgrade costs you no outage and is as simple as upgrading the firmware of your home network router. Just navigate to your PDU in your browser and hit “Net Configuration”. Scroll down to “Firmware Upgrade” and select the MKAPP_Vx.x.dl file to upgrade. After the PDU firmware is upgraded, it will prompt for the HTML interface to be upgraded, so select the HTML_Vx.x.dl file. Do that on all of the PDUs and you are done with it. Piece of cake.

Now let’s proceed to the cells upgrade. As we use the rolling upgrade strategy (no outage), all of the database homes must have patch 17854520 applied; otherwise, the DBs may hang or crash. The utility used to patch the cells and InfiniBand switches is patchmgr (which should be executed as root). You can also run a precheck for the upgrade with this utility, as shown below:

# ./patchmgr -cells cell_group -patch_check_prereq -rolling
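
A quick way to confirm that 17854520 is really in place on every database home before going rolling (a sketch; adjust ORACLE_HOME as needed):

# The patch should show up in the inventory of each home
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i 17854520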

It is recommended to raise the disk_repair_time attribute of the diskgroups, so the disks are not dropped while each cell is down (see the sketch below). Also, according to the Oracle docs, it is recommended to reset the cells if this is the first time these cells’ image is upgraded. Do this one cell at a time and then initiate the cell upgrade. patchmgr should be executed from the dbnode:
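
A minimal sketch of the disk_repair_time bump (DATA and RECO stand in for your diskgroup names; size the value to your real rolling window):

-- Run on an ASM instance, for each diskgroup, before the rolling patch
ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time' = '8.5h';
ALTER DISKGROUP RECO SET ATTRIBUTE 'disk_repair_time' = '8.5h';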

# ./patchmgr -cells cel01 -reset_force
# ./patchmgr -cells cel02 -reset_force
# ./patchmgr -cells cel03 -reset_force
# ./patchmgr -cells cell_group -patch -rolling

After finishing the cells upgrade successfully, go for the InfiniBand switches upgrade precheck, executing the patchmgr utility as listed below:

# ./patchmgr -ibswitches -upgrade -ibswitch_precheck

To continue with the ib switches upgrade just remove the precheck parameter:

# ./patchmgr -ibswitches -upgrade

When you are done with the InfiniBand switches and the cell nodes, you should move on to upgrading the database nodes. For this upgrade you will use the dbnodeupdate.sh utility, which upgrades the dbnode kernel and all of the dependent packages. Pay attention: if you have any third-party packages installed, you may have to handle them manually after the upgrade. In our environment, the kernel will be upgraded to Oracle Linux 5.9 (kernel-2.6.39-400.126.1.el5uek). dbnodeupdate.sh is fully automatic and will disable and bring down the CRS on the node. You must run it as root and, as a best practice, do it one node at a time.

To perform a precheck, run it with the -v parameter at the end:
# ./dbnodeupdate.sh -u -l $PATCH_17816100/Infrastructure/ExadataStorageServer/11.2.3.3.0/p17809253_112330_Linux-x86-64.zip -v

Now, to start the upgrade on the dbnode, execute it without the -v parameter:
# ./dbnodeupdate.sh -u -l $PATCH_17816100/Infrastructure/ExadataStorageServer/11.2.3.3.0/p17809253_112330_Linux-x86-64.zip

After the machine reboots, confirm the upgrade by executing:
# ./dbnodeupdate.sh -c

Perform these steps on all the remaining dbnodes and you are done. The whole Exadata Machine is patched: run imageinfo on all dbnodes and storage servers to confirm the new image. On the ibswitches, run the version command to confirm it:

# dcli -g all_group -l root imageinfo
db01:
db01: Kernel version: 2.6.39-400.126.1.el5uek #1 SMP Fri Sep 20 10:54:38 PDT 2013 x86_64
db01: Image version: 11.2.3.3.0.131014.1
db01: Image activated: 2014-03-29 10:30:56 -0300
db01: Image status: success
db01: System partition on device: /dev/mapper/VGExaDb-LVDbSys1
db01:

db02:
db02: Kernel version: 2.6.39-400.126.1.el5uek #1 SMP Fri Sep 20 10:54:38 PDT 2013 x86_64
db02: Image version: 11.2.3.3.0.131014.1
db02: Image activated: 2014-03-30 10:23:58 -0300
db02: Image status: success
db02: System partition on device: /dev/mapper/VGExaDb-LVDbSys1
db02:

cel01:
cel01: Kernel version: 2.6.39-400.126.1.el5uek #1 SMP Fri Sep 20 10:54:38 PDT 2013 x86_64
cel01: Cell version: OSS_11.2.3.3.0_LINUX.X64_131014.1
cel01: Cell rpm version: cell-11.2.3.3.0_LINUX.X64_131014.1-1
cel01:
cel01: Active image version: 11.2.3.3.0.131014.1
cel01: Active image activated: 2014-03-28 23:42:33 -0300
cel01: Active image status: success
cel01: Active system partition on device: /dev/md6
cel01: Active software partition on device: /dev/md8
cel01:
cel01: In partition rollback: Impossible
cel01:
cel01: Cell boot usb partition: /dev/sdm1
cel01: Cell boot usb version: 11.2.3.3.0.131014.1
cel01:
cel01: Inactive image version: 11.2.3.1.0.120304
cel01: Inactive image activated: 2012-05-21 18:00:09 -0300
cel01: Inactive image status: success
cel01: Inactive system partition on device: /dev/md5
cel01: Inactive software partition on device: /dev/md7
cel01:
cel01: Boot area has rollback archive for the version: 11.2.3.1.0.120304
cel01: Rollback to the inactive partitions: Possible

cel02:
cel02: Kernel version: 2.6.39-400.126.1.el5uek #1 SMP Fri Sep 20 10:54:38 PDT 2013 x86_64
cel02: Cell version: OSS_11.2.3.3.0_LINUX.X64_131014.1
cel02: Cell rpm version: cell-11.2.3.3.0_LINUX.X64_131014.1-1
cel02:
cel02: Active image version: 11.2.3.3.0.131014.1
cel02: Active image activated: 2014-03-29 00:46:13 -0300
cel02: Active image status: success
cel02: Active system partition on device: /dev/md6
cel02: Active software partition on device: /dev/md8
cel02:
cel02: In partition rollback: Impossible
cel02:
cel02: Cell boot usb partition: /dev/sdm1
cel02: Cell boot usb version: 11.2.3.3.0.131014.1
cel02:
cel02: Inactive image version: 11.2.3.1.0.120304
cel02: Inactive image activated: 2012-05-21 18:01:07 -0300
cel02: Inactive image status: success
cel02: Inactive system partition on device: /dev/md5
cel02: Inactive software partition on device: /dev/md7
cel02:
cel02: Boot area has rollback archive for the version: 11.2.3.1.0.120304
cel02: Rollback to the inactive partitions: Possible

cel03:
cel03: Kernel version: 2.6.39-400.126.1.el5uek #1 SMP Fri Sep 20 10:54:38 PDT 2013 x86_64
cel03: Cell version: OSS_11.2.3.3.0_LINUX.X64_131014.1
cel03: Cell rpm version: cell-11.2.3.3.0_LINUX.X64_131014.1-1
cel03:
cel03: Active image version: 11.2.3.3.0.131014.1
cel03: Active image activated: 2014-03-29 01:51:22 -0300
cel03: Active image status: success
cel03: Active system partition on device: /dev/md6
cel03: Active software partition on device: /dev/md8
cel03:
cel03: In partition rollback: Impossible
cel03:
cel03: Cell boot usb partition: /dev/sdm1
cel03: Cell boot usb version: 11.2.3.3.0.131014.1
cel03:
cel03: Inactive image version: 11.2.3.1.0.120304
cel03: Inactive image activated: 2012-05-21 18:01:28 -0300
cel03: Inactive image status: success
cel03: Inactive system partition on device: /dev/md5
cel03: Inactive software partition on device: /dev/md7
cel03:
cel03: Boot area has rollback archive for the version: 11.2.3.1.0.120304
cel03: Rollback to the inactive partitions: Possible

sw-ib2 # version
SUN DCS 36p version: 2.1.3-4
Build time: Aug 28 2013 16:25:57
SP board info:
Manufacturing Date: 2011.05.08
Serial Number: "NCD6I0106"
Hardware Revision: 0x0006
Firmware Revision: 0x0000
BIOS version: SUN0R100
BIOS date: 06/22/2010

sw-ib3 # version
SUN DCS 36p version: 2.1.3-4
Build time: Aug 28 2013 16:25:57
SP board info:
Manufacturing Date: 2011.05.11
Serial Number: "NCD6Q0110"
Hardware Revision: 0x0006
Firmware Revision: 0x0000
BIOS version: SUN0R100
BIOS date: 06/22/2010

Docs:

• Exadata 11.2.3.3.0 release and patch (16278923) (Doc ID 1487339.1)
• Exadata Database Server Patching using the DB Node Update Utility (Doc ID 1553103.1)
• Exadata Patching Overview and Patch Testing Guidelines (Doc ID 1262380.1)
• Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

That’s it, guys!