Tuesday, April 9, 2013

Key Links for Virtualizing Oracle


Key Links for Best Practices
Many of the best practices for virtualizing Oracle apply to hypervisors in general, whether the hypervisor is VMware or Oracle VM.  There are also best practices specific to features unique to each product.  For example, we need to create separate interfaces on the VM host (ESXi host or Oracle VM Server) to segment off management-related network traffic (i.e. traffic that maintains the network heartbeat or performs live migrations, called vMotion in VMware).  At a minimum, each physical host needs 4 physical network interface cards; 6 NICs are highly recommended.  We will create bonded network interfaces for the following network workloads (a quick sanity-check sketch follows the list):
1.      2 NICs bonded for the public network carrying all Oracle database traffic
2.      2 NICs bonded for the Oracle private interconnect between the RAC cluster nodes
3.      2 NICs bonded for communication between the ESXi or Oracle VM Server host machines
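
As a rough illustration, here is a minimal Python sketch for a Linux-based host (for example an Oracle VM Server dom0) that reads /proc/net/bonding to confirm each bond exists and has two slave NICs.  The bond names and role descriptions are assumptions for illustration; adjust them to your environment.

#!/usr/bin/env python
# Minimal sketch: verify that each expected bond has two slave NICs.
# Bond names and roles below are assumptions; adapt to your environment.
# Relies on the Linux bonding driver exposing status under /proc/net/bonding/.
import os

EXPECTED_BONDS = {
    "bond0": "public / Oracle database traffic",
    "bond1": "RAC private interconnect",
    "bond2": "host management / live migration",
}

def slave_count(bond):
    path = os.path.join("/proc/net/bonding", bond)
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return sum(1 for line in f if line.startswith("Slave Interface:"))

for bond, role in EXPECTED_BONDS.items():
    count = slave_count(bond)
    if count is None:
        print("%s (%s): MISSING" % (bond, role))
    elif count < 2:
        print("%s (%s): only %d slave NIC(s), expected 2" % (bond, role, count))
    else:
        print("%s (%s): OK, %d slave NICs" % (bond, role, count))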

All the best practices that apply at the VM guest level apply to both VMware and Oracle VM.  For example, we want to enable jumbo frames on the guest VM, and we also want to set up hugepages and disable NUMA at the guest VM level.
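
A small check like the following (a sketch only; the interface name eth0 and the 9000-byte MTU threshold are assumptions) can confirm from inside a Linux guest that jumbo frames are in effect and that hugepages are actually reserved.

# Sketch: confirm jumbo frames and hugepages from inside a Linux guest VM.
# The interface name ("eth0") and MTU threshold are assumptions; adapt as needed.

def read_mtu(iface="eth0"):
    with open("/sys/class/net/%s/mtu" % iface) as f:
        return int(f.read().strip())

def hugepages_total():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("HugePages_Total:"):
                return int(line.split()[1])
    return 0

if __name__ == "__main__":
    mtu = read_mtu()
    print("MTU: %d (%s)" % (mtu, "jumbo frames on" if mtu >= 9000 else "standard frames"))
    hp = hugepages_total()
    print("HugePages_Total: %d (%s)" % (hp, "configured" if hp > 0 else "not configured"))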

In general, we also do not want to over-commit memory or CPUs in production environments.  For databases that are good consolidation candidates, we can consider over-committing memory or CPUs.
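
As a back-of-the-envelope illustration, over-commitment can be expressed as a simple allocation ratio: total vCPUs and vRAM granted to guests divided by the host's physical cores and memory.  The Python sketch below uses made-up inventory numbers; the rule of keeping production ratios at or below 1:1 is the guideline described above.

# Sketch: compute CPU and memory allocation ratios for a host.
# All figures here are hypothetical; plug in your own inventory.

def allocation_ratio(allocated, physical):
    return allocated / float(physical)

host_cores = 16          # physical cores on the host (example value)
host_mem_gb = 256        # physical RAM on the host in GB (example value)

guests = [
    # (vCPUs, vRAM in GB) per guest VM - example values only
    (4, 64),
    (4, 64),
    (8, 96),
]

vcpus = sum(g[0] for g in guests)
vram = sum(g[1] for g in guests)

print("vCPU ratio:   %.2f : 1" % allocation_ratio(vcpus, host_cores))
print("memory ratio: %.2f : 1" % allocation_ratio(vram, host_mem_gb))
# For production database VMs, aim to keep both ratios <= 1.0;
# ratios above 1.0 mean the host is over-committed.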

For additional VMware best practices, please read the following articles.

Four key documents for virtualizing Oracle
DBA Best Practices



High Availability Guide


vCloud Suite and vCloud Networking and Security
vCloud Editions

 vCloud Networking and Security


vCenter Operations


VMware Tech Resource Center (Videos, Whitepapers, Docs)

Miscellaneous
A high level whitepaper on virtualizing Business Critical Apps on VMware

Deployment Guide, Reference Architecture, Customer case studies and white papers



VMware Network I/O Control: Architecture, Performance and Best Practices http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf


Esxtop and vscsiStats

Memory Management vSphere 5

Resource Mgmt vSphere 5 
  
Achieving a Million IOPS in a single VM with vSphere5

VMXNET3 was designed with performance in mind.  See VMware KB 1001805: http://kb.vmware.com/selfservice/documentLinkInt.do?micrositeID=null&externalID=1001805


A performance evaluation of the VMXNET3 virtual network device can be found at: http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf



Preferred BIOS settings (always double-check with your hardware vendor): http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.1.pdf




SCSI Queue Depth - Controlling LUN queue depth throttling in VMware ESX/ESXi


Monitor disk latency at three distinct layers: the device or HBA, the kernel or ESX hypervisor, and the guest or virtual machine.
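
In esxtop terms these correspond to DAVG/cmd (device), KAVG/cmd (kernel), and GAVG/cmd (what the guest sees, roughly the sum of the other two).  The sketch below simply interprets a pair of sample readings; the 20 ms and 2 ms thresholds are commonly quoted rules of thumb, not VMware-documented limits, and the input values are made up.

# Sketch: interpret the three esxtop disk latency counters for a device.
# Threshold values are rules of thumb, not hard limits.

def classify_latency(davg_ms, kavg_ms):
    """DAVG/cmd = device/HBA latency, KAVG/cmd = time spent in the
    ESX(i) kernel; GAVG/cmd (guest-visible latency) is roughly their sum."""
    gavg_ms = davg_ms + kavg_ms
    findings = []
    if davg_ms > 20:
        findings.append("device/array latency high (check the storage backend)")
    if kavg_ms > 2:
        findings.append("kernel latency high (check queue depth / contention)")
    if not findings:
        findings.append("latency looks healthy")
    return gavg_ms, findings

gavg, notes = classify_latency(davg_ms=18.0, kavg_ms=0.5)  # example values
print("GAVG ~ %.1f ms" % gavg)
for note in notes:
    print(" - %s" % note)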


PVSCSI Storage Performance 


Snapshot limitations and best practices to minimize problems http://kb.vmware.com/kb/1025279

Jumbo frames VMXNET3 



The vSphere 4 CPU scheduler

   
Some excellent storage links from Chad Sakac (EMC) and Vaughn Stewart (NetApp)
VNX and vSphere Techbook

VMAX and vSphere Techbook

Isilon and vSphere Best Practices Guide
