
Applies to:
Oracle Unified Directory - Version 11.1.2.3.0 to 11.1.2.3.0 Release 11g
Oracle Database - Enterprise Edition - Version 10.2.0.1 to 12.1.0.2 Release 10.2 to 12.1
Oracle Database - Enterprise Edition - Version 12.2.0.1 to 12.2.0.1 Release 12.2
Information in this document applies to any platform.

Purpose
The goal of the Oracle Real Application Clusters (RAC) series of Best Practice and Starter Kit notes is to provide customers with quick knowledge transfer of generic and platform specific best practices for implementing, upgrading and maintaining an Oracle RAC system.

This document is compiled and maintained based on Oracle's experience with its global RAC customer base. This Starter Kit is not meant to replace or supplant the Oracle Documentation set, but rather, it is meant as a supplement to the same. It is imperative that the Oracle Documentation be read, understood, and referenced to provide answers to any questions that may not be clearly addressed by this Starter Kit.

All recommendations should be carefully reviewed by your own operations group and should only be implemented if the potential gain, measured against the associated risk, warrants implementation. Risk assessments can only be made with detailed knowledge of the system, application, and business environment. As every customer environment is unique, the success of any Oracle Database implementation, including implementations of Oracle RAC, is predicated on a successful test environment. It is thus imperative that any recommendations from this Starter Kit be thoroughly tested and validated in a testing environment that replicates the target production environment before being implemented in production, to ensure that there is no negative impact from the recommendations made.

Scope
This article applies to all new and existing RAC implementations as well as RAC upgrades.

Details

RAC Platform Specific Starter Kits and Best Practices
While this note focuses on platform independent (generic) RAC best practices, the following notes contain detailed platform specific best practices, including step-by-step installation cookbooks:

RAC and Oracle Clusterware Best Practices and Starter Kit (Linux)
RAC and Oracle Clusterware Best Practices and Starter Kit (Solaris)
RAC and Oracle Clusterware Best Practices and Starter Kit (Windows)
RAC and Oracle Clusterware Best Practices and Starter Kit (AIX)
RAC and Oracle Clusterware Best Practices and Starter Kit (HP-UX)

RAC Platform Generic Load Testing and System Test Plan Outline
A critical task of any successful implementation, particularly for mission critical Maximum Availability environments, is testing. For a RAC environment, testing should include both load generation and fault injection testing. Load testing allows measurement of how the system reacts under heavy load, while fault injection testing helps ensure that the system reacts as designed when the inevitable hardware and/or software failures occur. The following documents provide guidance in performing this crucial testing:

Click for a White Paper on available RAC System Load Testing Tools
Click for a platform generic RAC System Test Plan Outline for 10gR2 and 11gR1
Click for a platform generic RAC System Test Plan Outline for 11gR2

These documents are to be used to validate your system setup and configuration, and also as a means to practice responses and establish procedures in case of certain types of failures.

ORAchk (Formerly RACcheck) - Oracle Configuration Audit Tool
ORAchk is a configuration audit tool designed to audit various important configuration settings within Real Application Clusters (RAC), Oracle Clusterware (CRS), Automatic Storage Management (ASM) and Grid Infrastructure environments. This utility is used to validate the Best Practices and Success Factors defined in the series of Oracle Real Application Clusters (RAC) Best Practice and Starter Kit notes, which are maintained by the RAC Assurance development and support teams. At present ORAchk supports Linux (x86 and x86_64), Solaris SPARC and AIX (with the bash shell) platforms. Customers running RAC on ORAchk-supported platforms are strongly encouraged to utilize this tool to identify potential configuration issues that could impact the stability of the cluster. ORAchk - Oracle Configuration Audit Tool

Note: Oracle is constantly generating and maintaining Best Practices and Success Factors from the global customer base. As a result, the ORAchk utility is frequently updated with this information. It is therefore recommended that you ensure you are using the latest version of ORAchk prior to execution.
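As a hedged illustration, a typical ORAchk run from the Grid Infrastructure software owner might look like the following; the kit location is an assumption, and the available options vary by ORAchk version, so consult the ORAchk documentation for your release:

    $ cd /opt/oracle.orachk      # directory where the ORAchk kit was unpacked (hypothetical path)
    $ ./orachk -v                # confirm the installed ORAchk version before running it
    $ ./orachk                   # run a full interactive cluster-wide audit; an HTML report is written to the current directory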

Top 11 Things to do NOW to Stabilize your RAC Cluster Environment
As a proactive measure to prevent cluster instability due to commonly known issues, the Oracle RAC Proactive Support team has compiled a list of the top 11 issues that can impact the stability of a RAC cluster. Though all of these recommendations are contained within the series of Best Practice and Starter Kit notes, we strongly recommend that the following note be reviewed, as these are key success factors: Top 11 Things to do NOW to Stabilize your RAC Cluster Environment

Design Considerations
The following design considerations provide guidance and best practice information around the (platform independent) infrastructure needed to support an Oracle RAC implementation. This information not only pertains to new installations and upgrades but also provides useful information for those supporting existing RAC implementations.

General Design Considerations
To simplify the stack and simplify vendor interactions, Oracle recommends avoiding 3rd party clusterware unless absolutely necessary.

Automatic Storage Management (ASM) is recommended for database storage. Additional information regarding ASM can be found in the Storage Considerations (Including ASM) section below.

Check the support/certification matrix to ensure supportability of product, version and platform combinations, and to understand any additional steps that must be completed for some of those combinations. Check with the storage vendor that the number of nodes, OS version, RAC version, CRS version, network fabric, and patches are certified, as some storage/SAN vendors may require special certification for a certain number of nodes. Plan and document capacity requirements. Work with your server vendor to produce a detailed capacity plan and system configuration, keeping the following in mind: use your normal capacity planning process to estimate the number of CPUs required to run the workload. Both SMP systems and RAC clusters have synchronization costs as the number of CPUs increases; SMPs normally scale well for a small number of CPUs, while RAC clusters normally scale better than SMPs for a large number of CPUs.

Eliminate any single points of failure in the architecture. Examples include (but are not limited to): cluster interconnect redundancy (NIC bonding, etc.), multiple access paths to storage using two or more HBAs or initiators and multipathing software, and disk mirroring/RAID. Additional details are found in the subsequent sections. Use proven Maximum Availability strategies. RAC is one component in the overall Maximum Availability Architecture.

Review Oracle's Maximum Availability Architecture (MAA) blueprint, available on the Oracle Technology Network. Having a system test plan to help plan for and practice unplanned outages is crucial. This note has an attached sample System Test Plan Outline to guide your system testing and help you prepare for potential unplanned failures. It is strongly advised that a production RAC instance not share a node with a DEV, TEST, QA or TRAINING instance; these extra instances can introduce unexpected performance changes into a production environment. Along the same lines, it is highly recommended that testing environments mirror production environments as closely as possible.

Having a step-by-step plan for your RAC project implementation is invaluable. The following OTN article contains a sample project outline:

Networking Considerations
For the private network, 10 Gigabit Ethernet is highly recommended; the minimum requirement is 1 Gigabit Ethernet. Underscores must not be used in a host or domain name, per the DoD Internet host table specification (RFC 952).

The same applies to Net, Host, Gateway, or Domain names. The VIPs and SCAN VIPs must be on the same subnet as the public interface. For additional information, see the referenced documentation. The default gateway must be on the same subnet as the VIPs (including SCAN VIPs) to prevent VIP start/stop/failover issues. With 11gR2 this is detected and reported by the OUI; if the check is ignored, the VIPs will fail to start, resulting in failure of the installation itself.

It is recommended that the SCAN name (11gR2 and above) resolve via DNS to a minimum of 3 IP addresses in a round-robin fashion, regardless of the size of the cluster. For additional information, see the referenced documentation.
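As a hedged illustration (the SCAN name below is hypothetical), round-robin DNS resolution of the SCAN and the SCAN VIPs registered with Grid Infrastructure can be checked from any cluster node:

    $ nslookup cluster01-scan.example.com    # should return three (or more) addresses, rotated round-robin by DNS
    $ srvctl config scan                     # shows the SCAN name and SCAN VIP addresses known to Grid Infrastructure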

To avoid name resolution issues, ensure that the HOSTS files and DNS are furnished with both VIP and public host names. The SCAN must NOT be in the HOSTS file, because the HOSTS file can only represent a 1:1 host-to-IP mapping. The network interfaces must have the same name on all nodes (e.g. eth1 - eth1 in support of the VIP and eth2 - eth2 in support of the private interconnect). Network Interface Card (NIC) names must not contain '.'.

Jumbo Frames for the private interconnect are a recommended best practice for enhanced performance of cache fusion operations. Reference: see the referenced note.
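A hedged sketch of verifying Jumbo Frames on Linux; the interface name eth2, the 9000-byte MTU and the remote address are assumptions, and jumbo frame support must also be enabled end to end on the interconnect switches:

    $ ip link show eth2 | grep -o 'mtu [0-9]*'    # confirm the interconnect interface reports mtu 9000
    $ ping -M do -s 8972 -c 3 192.168.10.12       # send un-fragmented ~9000-byte packets to a remote private address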

Use non-routable network addresses for the private interconnect: Class A: 10.0.0.0 to 10.255.255.255, Class B: 172.16.0.0 to 172.31.255.255, Class C: 192.168.0.0 to 192.168.255.255. Refer to the referenced notes for additional information. Make sure network interfaces are configured correctly in terms of speed, duplex, etc. Various tools exist to monitor and test the network: ethtool, iperf, netperf, spray and tcp. To avoid the public network or the private interconnect network becoming a single point of failure, Oracle highly recommends configuring a redundant set of public network interface cards (NICs) and private interconnect NICs on each cluster node. Starting with 11.2.0.2, Oracle Grid Infrastructure can provide redundancy and load balancing for the private interconnect (NOT the public network); this is the preferred method of NIC redundancy for full 11.2.0.2 stacks (an 11.2.0.2 Database must be used). More information can be found in the referenced note.
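As a hedged example of the speed/duplex check mentioned above (Linux; the interface names are assumptions):

    $ ethtool eth1 | egrep 'Speed|Duplex'    # public interface: expect the negotiated speed and Duplex: Full
    $ ethtool eth2 | egrep 'Speed|Duplex'    # private interconnect interface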

NOTE: If using the 11.2.0.2 Redundant Interconnect/HAIP feature, it is currently REQUIRED that all interconnect interfaces be placed on separate subnets. If the interfaces are all on the same subnet and the cable is pulled from the first NIC in the routing table, a rebootless restart or node reboot will occur. See the referenced note for a technical description of this requirement. For more predictable hardware discovery, place HBA and NIC cards in the same corresponding slot on each server in the grid. The use of a switch (or redundant switches) is required for the private network (crossover cables are NOT supported).
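A hedged sketch of inspecting and registering the cluster interconnect interfaces with oifcfg when using HAIP; the interface names and subnets are assumptions, and the Grid Infrastructure documentation should be consulted before changing the interface configuration:

    $ oifcfg getif                                                    # list the interfaces currently registered with the clusterware
    $ oifcfg setif -global eth2/192.168.10.0:cluster_interconnect     # register each private interface on its own subnet
    $ oifcfg setif -global eth3/192.168.11.0:cluster_interconnect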

Dedicated redundant switches are highly recommended for the private interconnect, because deploying the private interconnect on a shared switch (even when using a VLAN) may expose the interconnect links to congestion and instability in the larger IP network topology. If deploying the interconnect on a VLAN, there should be a 1:1 mapping of VLAN to non-routable subnet, and the VLAN should not span multiple VLANs (tagged) or multiple switches. Deployment concerns in this environment include Spanning Tree loops when the larger IP network topology changes, asymmetric routing that may cause packet flooding, and lack of fine-grained monitoring of the VLAN/port. If deploying the cluster interconnect on a VLAN, review the considerations in the referenced white paper. Consider using InfiniBand for the interconnect for workloads that have high volume requirements.

InfiniBand can also improve performance by lowering latency. When InfiniBand is in place, the RDS protocol can be used to further reduce latency. See the referenced note for additional details. Starting with 12.1.0.1, IPv6 is supported for the public network; IPv4 must be used for the private network.

Please see the referenced documentation for details. For Grid Infrastructure 11.2.0.2, multicast traffic must be allowed on the private network for the 230.0.1.0 subnet. A patch (included in GI PSU 11.2.0.2.1 and above) for Oracle Grid Infrastructure 11.2.0.2 enables multicasting on the 224.0.0.251 multicast address on the private network. Multicast must be allowed on the private network for one of these two addresses (assuming the patch has been applied).

Additional information, as well as a program to test multicast functionality, is provided in the referenced note.

Storage Considerations (Including ASM)
Implement multiple access paths to the storage array using two or more HBAs or initiators, with multipathing software over these HBAs. Where possible, use the pseudo devices (multipath I/O) as the diskstring for ASM.

Examples are: EMC PowerPath, Veritas DMP, Sun Traffic Manager, Hitachi HDLM, IBM SDDPCM, Linux 2.6 Device Mapper. This is useful for I/O load balancing and failover. Ensure correct mount options for NFS disks when RAC is used with NFS; the documented mount options are detailed in the referenced note for each platform. ASM is the current and future direction for Oracle Database storage; it is therefore a highly recommended best practice that ASM be used (as opposed to a clustered file system) within a RAC environment. ASM is required for data file storage when using Oracle RAC Standard Edition. Adhere to ASM best practices.

Reference: ASM Technical Best Practices. Though not explicitly stated, the best practices laid forth in this paper are also applicable to 11gR2. 11gR2 ASM best practices are also documented in the referenced note. Ensure that ASM disk discovery times are optimal by customizing the ASM diskstring; see Improving ASM Disk Discovery (ASM_DISKSTRING) Time Best Practices. It is recommended to maintain no more than 2 ASM disk groups, one for the database area and one for the flash recovery area, on separate physical disks. RAID storage array LUNs can be used as ASM disks to minimize the number of LUNs presented to the OS. A minimum of 4 LUNs that are identical in size and performance per ASM disk group (each LUN in a separate RAID group) should be used to ensure optimal performance.
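A hedged sketch of the diskstring and two-disk-group layout described above, run with SQL*Plus against the ASM instance; the device-mapper path pattern and disk names are assumptions for a Linux multipath configuration:

    $ sqlplus / as sysasm <<'EOF'
    -- Restrict discovery to the multipath pseudo devices to keep discovery times short
    ALTER SYSTEM SET asm_diskstring='/dev/mapper/asm*' SCOPE=BOTH;

    -- One disk group for the database area and one for the flash recovery area,
    -- each built from at least four LUNs of identical size and performance
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/mapper/asm_data01', '/dev/mapper/asm_data02',
           '/dev/mapper/asm_data03', '/dev/mapper/asm_data04';

    CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
      DISK '/dev/mapper/asm_fra01', '/dev/mapper/asm_fra02',
           '/dev/mapper/asm_fra03', '/dev/mapper/asm_fra04';
    EOF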

Create external redundancy disk groups when using high-end storage arrays. High-end storage arrays generally provide hardware RAID protection.


Use Oracle ASM mirroring redundancy when not using hardware RAID, or when you need host-based volume management functionality, such as mirroring across storage systems. You can use Oracle ASM mirroring in configurations where you mirror between geographically separated sites (extended clusters). For 11g, Automatic Memory Management (AMM) is enabled by default on an ASM instance, even when the MEMORY_TARGET parameter is not explicitly set. The default value used for MEMORY_TARGET is generally not sufficient and should be increased to 1536 MB. If you are experiencing ORA-4031 errors within your ASM instance(s), adjustments to the AMM settings may be necessary.

Review the referenced note (ASM Instances Are Reporting ORA-04031 Errors) for guidance on proper settings to avoid shared pool exhaustion. For 10g, increase the ASM instance SGA parameter size allocations from their default values. ASM processes = 50 x ( + ) + 10 x.
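A hedged sketch of raising the ASM instance memory target as recommended above (11g with AMM enabled); the change is made in the spfile and takes effect after the ASM instances are restarted:

    $ sqlplus / as sysasm <<'EOF'
    -- Increase the AMM target for all ASM instances in the cluster
    ALTER SYSTEM SET memory_target=1536M SCOPE=SPFILE SID='*';
    EOF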

Choose a hardware RAID stripe size that is a power of 2 and less than or equal to the size of the Oracle ASM allocation unit. ORA-15196 (ASM block corruption) can occur if LUNs larger than 2 TB are presented to an ASM disk group. As a result of the fix, ORA-15099 will be raised if a disk larger than 2 TB is specified.

This is irrespective of the presence of ASMLib. Workaround: do not add disks larger than 2 TB to a disk group. Reference: see the referenced note.

On some platforms, repeated warnings about AIO limits may be seen in the alert log: 'WARNING: Oracle process running out of OS kernel I/O resources.' Apply the referenced patch, which is available on many platforms. This issue affects 10.2.0.3, 10.2.0.4, and 11.1.0.6.

It is fixed in 11.1.0.7. An occurrence of possible metadata corruption has been identified during ASM upgrades from release 10.2 to release 11.1 or 11.2; this bug can only occur for ASM disk groups with an AU of 1 MB created before the ASM upgrade is performed. This bug is not encountered with new disk groups created directly on release 11.1 or 11.2.

In order to prevent any occurrence of this issue, a public alert has been generated and is visible through My Oracle Support. Reference: Alert: Querying v$asmfile Gives ORA-15196 After ASM Was Upgraded From 10gR2 To 11gR2. In short, you would want to run an 'alter diskgroup check all repair' to validate and repair any upgraded disk groups.

Clusterware and Grid Infrastructure Configuration Considerations
For versions prior to 11gR2, it is recommended that voting disks be stored on raw or block devices (depending on the OS and Oracle version) and that Oracle-supplied redundancy be used regardless of the underlying storage configuration. Two OCRs are recommended. Voting disks should be maintained in odd numbers, making the minimum number 3.

Odd numbers of voting disks are recommended because losing half or more of your voting disks will cause nodes to be evicted from the cluster, or to evict themselves from the cluster. With Grid Infrastructure 11gR2, raw (and block) devices have been deprecated, making ASM the recommended method of storing the OCR and voting disks. When storing the OCR and voting disks within ASM in 11gR2 and higher, it is recommended to maintain a separate disk group for the OCR and voting disks (not the same disk group that stores database-related files). If you are utilizing external redundancy for your disk groups (see the ASM considerations above for details on disk group redundancy), this means you will have 1 voting disk and 1 OCR.
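As a hedged illustration, the current voting disk and OCR locations can be confirmed with the following Grid Infrastructure commands (11gR2 and later):

    $ crsctl query css votedisk    # lists each voting disk and the disk group or device it resides on
    $ ocrcheck                     # run as root for the full integrity check; reports the OCR location(s), version and status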

It is recommended to use normal or high redundancy if external redundancy is not in use. With 11.2.0.2, Oracle Grid Infrastructure can provide redundancy and load balancing for the private interconnect (NOT the public network); this is the preferred method of NIC redundancy for full 11.2.0.2 stacks (an 11.2.0.2 Database must be used). More information can be found in the referenced note. For versions 10gR2 and 11gR1, it is a best practice on all platforms to set the CSS diagwait parameter to 13 in order to provide time for dumping diagnostics in case of node evictions. Setting diagwait above 13 is NOT recommended without explicit instruction from Oracle Support. This setting is no longer required in Oracle Clusterware 11g Release 2. See the referenced note for more details on diagwait.
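A hedged sketch of setting diagwait on a 10gR2/11gR1 cluster; this requires a full clusterware outage on all nodes, must be run as root, and should only be performed following the procedure in the referenced note:

    $ crsctl stop crs                       # stop the clusterware on EVERY node first
    $ crsctl set css diagwait 13 -force     # from one node only, set diagwait to 13 seconds
    $ crsctl start crs                      # restart the clusterware on all nodes
    $ crsctl get css diagwait               # verify the new setting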

DO NOT set the ORA_CRS_HOME environment variable (on any platform). Setting this variable will cause problems for various Oracle components, and it is never necessary for CRS programs because they all have wrapper scripts.

Virtualization Considerations
Oracle Clusterware, Grid Infrastructure and RAC are supported on specific virtualization technologies (e.g. Oracle VM) with specific platform, version and patch requirements. When deploying RAC in a virtualized environment, it is essential that the support requirements documented in the referenced note are met to ensure a successful and supported deployment.

Installation Considerations
Execute root.sh/rootupgrade.sh as the 'real' root user, not via sudo. When switching to the root user to execute rootupgrade.sh, 'su -' or 'su - root' provides the full root environment, while sudo, pbrun, 'su root', 'su' or similar facilities do not. It is recommended to execute root.sh/rootupgrade.sh with the full root environment. See the referenced notes for additional details. It is recommended that local file systems on local disks be used for installation of the Oracle RAC software, to allow for rolling patches and to avoid a single point of failure, among other factors. See the referenced white paper on Oracle Homes for additional information.

Do note that 11gR2 Grid Infrastructure is not supported on clustered file systems; see Section 2.5.4 of the referenced guide. Check cluster prerequisites using cluvfy (the Cluster Verification Utility). Use cluvfy at all stages prior to and during installation of the Oracle software. When installing a pre-11gR2 release, it is crucial to download the latest version of cluvfy. The referenced notes contain more relevant information on this topic.
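A hedged example of a pre-installation cluvfy run; the node names are hypothetical, and the exact stage options vary by release, so check cluvfy -help for your version:

    $ cluvfy stage -pre crsinst -n racnode1,racnode2 -verbose    # verify clusterware installation prerequisites on both nodes
    $ cluvfy stage -post hwos -n racnode1,racnode2               # verify hardware and OS setup after cabling or storage changes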

When performing pre-11gR2 installations it is recommended to patch the Clusterware home to the desired level before performing any RDBMS or ASM home installation. For example, install Clusterware 10.2.0.1 and patch it to 10.2.0.4 before installing the 10.2.0.1 RDBMS. In pre-11gR2 environments, install ASM in a separate ORACLE_HOME from the database for maintenance and availability reasons (e.g., to independently patch and upgrade). For ease of upgrades to 11gR2, the ASM software owner should be kept the same as the Clusterware software owner. Starting with 11gR2, all patchsets are fully installable releases. For example, to install 11.2.0.2 (11gR2 Patchset 1) you install directly from the 11.2.0.2 media, as opposed to installing 11.2.0.1 and patching to 11.2.0.2.


With 11gR2 Grid Infrastructure, all patchsets are out-of-place upgrades. With the 11gR2 RDBMS you can perform either an out-of-place or in-place upgrade, with out-of-place being the recommended method. More information can be found in the referenced note. If you are installing Oracle Clusterware as a user that is a member of multiple operating system groups, the installer installs files on all nodes of the cluster with group ownership set to that of the user's current active (primary) group. Therefore, ensure that the first group listed in the /etc/group file is the current active group, OR invoke the Oracle Clusterware installation with the following additional command line option to force the installer to use the proper group when setting group ownership on all files: runInstaller s_usergroup=<current active group>

Patching Considerations
This section is targeted towards developing a proactive patching strategy for new and existing implementations. For new implementations, it is strongly recommended that the latest available patchset and applicable Patch Set Update (PSU) for your platform be applied at the outset of your testing. In cases where the latest version of the RDBMS cannot be used because of lags in internal or 3rd party application certification, or due to other limitations, it is still supported to have the CRS Home and ASM (or Grid Infrastructure) Homes running at a later patch level than the RDBMS Home.

As a best practice (with some exceptions, see the Note in the references section below), Oracle Support recommends that the following be true: the Clusterware (or Grid Infrastructure) MUST be at a patch level or version that is greater than or equal to the patch level or version of the RDBMS home (to the 4th digit of a given release). For pre-11.2, the Clusterware must be at a patch level or version that is greater than or equal to the patch level or version of the ASM and RDBMS homes (to the 4th digit of a given release). Before patching the database, ASM or Clusterware homes using OPatch, check the available space on the file system and use the referenced note to estimate how much space will be needed and how to handle the situation if the file system fills up during the patching process. The referenced note provides a basic overview of patching Oracle Clusterware in a pre-11gR2 environment and clarifies how the Oracle Clusterware components are updated through patching. If patching Grid Infrastructure from 11.2.0.1 to 11.2.0.2, it is essential that 'Things to Consider Before Upgrading to Grid Infrastructure 11.2.0.2' be reviewed. This document states all of the prerequisites and procedures that MUST be followed to ensure a successful upgrade to 11.2.0.2.

Develop a proactive patching strategy to stay ahead of the latest known issues. Keep current with the latest Patch Set Updates (as documented in the referenced note) and be aware of the most current recommended patches (as documented in the referenced note). Plan for periodic (for example, quarterly) maintenance windows to keep current with the latest recommended PSUs and patches.
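As a hedged illustration of the pre-patching checks mentioned above (the staging path is hypothetical; run these from the home being patched):

    $ opatch lsinventory                                          # record the current patch inventory of the home
    $ opatch prereq CheckSystemSpace -ph /stage/psu_patch_dir     # verify there is enough free space before applying the patch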

With 11.2.0.3, GI was enhanced to utilize broadcast or multicast (on the 230.0.1.0 or 224.0.0.251 addresses) to bootstrap. However, the 11.2.0.3.5 GI PSU introduced a new issue that effectively disables the broadcast functionality (Bug 16547309).

Do note that most networks support multicast on the 224.0.0.251 multicast address without any special configuration; therefore the odds of this being an issue for 11.2.0.3.5 - 11.2.0.3.7 and 12.1.0.1 are greatly reduced. See the referenced notes for additional information, available patches and corrective action.

Upgrade Considerations
This section is broken into 2 sub-sections: the first covers Clusterware, ASM and Grid Infrastructure upgrades, and the second covers RDBMS upgrades.

Clusterware, ASM and Grid Infrastructure Upgrade Considerations
For upgrades to 11.2.0.3 and above, utilize the ORAchk Upgrade Readiness Assessment to assist in pre-upgrade requirements planning and post-upgrade validation. See the Upgrade Readiness tab of ORAchk for additional details.


Execute root.sh/rootupgrade.sh as the 'real' root user, not via sudo. When switching to the root user to execute rootupgrade.sh, 'su -' or 'su - root' provides the full root environment, while sudo, pbrun, 'su root', 'su' or similar facilities do not.

It is recommended to execute root.sh/rootupgrade.sh with the full root environment. See the referenced notes for additional detail. Oracle Clusterware and Oracle ASM upgrades to Grid Infrastructure are always out-of-place upgrades. With 11g release 2 (11.2), you cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.

If the existing Oracle Clusterware home is a shared home, note that you can use a non-shared home for Oracle Grid Infrastructure for a Cluster. Prior to beginning an upgrade of or to Grid Infrastructure, it is essential that the following be reviewed (depending on the target version):

Things to Consider Before Upgrading to Grid Infrastructure 11.2.0.2
Things to Consider Before Upgrading to 11.2.0.3/11.2.0.4 Grid Infrastructure/ASM

A few key points to take away from the above documents are as follows (details are found in the notes):

Validate the health of the existing Clusterware and ASM (or Grid Infrastructure) configuration.
Ensure all prerequisite patches are applied to the existing Clusterware/ASM/Grid Infrastructure homes; for example, when upgrading from GI 11.2.0.1 to 11.2.0.2, the referenced patch must be applied to the GI 11.2.0.1 home prior to attempting the upgrade.
For upgrades to 11.2.0.2, validate multicast functionality on the private interconnect network.
Patch 11.2.0.2 upgrades to the latest GI PSU prior to executing rootupgrade.sh (or root.sh); instructions for doing so are found in the referenced note.

To upgrade 10gR2 Clusterware to 11g, you must start with a minimum version of 10.2.0.3, as stated in the Oracle Upgrade Guide 11gR1; similar requirements are also stated in the 11gR2 GI platform-specific documentation.

This document states the following:

Note: A new prerequisite check has been added to ensure that Oracle Clusterware release 10.2.0.x is at release 10.2.0.3 (or higher) before you attempt to upgrade it to Oracle Clusterware 11g release 1 (11.1).

If this check fails, then you are instructed to apply Oracle Clusterware patch set release 10.2.0.3.0 or later to your existing release before it can be upgraded. All other upgrade paths and fresh install cycles are unaffected by this prerequisite check. Use rolling upgrades where appropriate for Oracle Clusterware (CRS). For detailed upgrade assistance, refer to the appropriate Upgrade Companion for your release: 10g Upgrade Companion and Oracle 11gR1 Upgrade Companion.

With 11gR2, the upgrade of the Clusterware itself will be rolling (the old stack MUST be up on all nodes); ASM upgrades will be rolling for ASM 11.1 and above. Pre-11.1 versions of ASM are NOT rolling. If there are plans to run pre-11gR2 databases within an 11gR2 Grid Infrastructure environment, review: Pre 11.2 Database Issues in 11gR2 Grid Infrastructure Environment.
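As a hedged example, 11.2 cluvfy includes a pre-upgrade stage check that can be run before rootupgrade.sh; the source and destination home paths and the target version below are assumptions, so confirm the exact syntax with cluvfy -help for your release:

    $ cluvfy stage -pre crsinst -upgrade -rolling \
        -src_crshome /u01/app/11.2.0.1/grid \
        -dest_crshome /u01/app/11.2.0.3/grid \
        -dest_version 11.2.0.3.0 -verbose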

The 11.2.0.2 HAIP feature will NOT provide NIC redundancy or load balancing for pre-11.2.0.2 databases; if there are plans to run a pre-11.2.0.2 database on 11.2.0.2 Grid Infrastructure, you must use a 3rd party NIC redundancy solution as you would have done in pre-11.2.0.2 releases.

RDBMS Upgrade Considerations
Be sure to review the Upgrade Companion for your target release. When upgrading to 11gR2, be sure to review the referenced presentation. For assistance in deciding on the method by which a database will be upgraded to 11gR2, review the referenced white paper. Also review Upgrading from Oracle Database 10g to 11g: What to Expect from the Optimizer.

For those upgrading a database who require minimal downtime, consider using a transient logical standby; refer to: Oracle11g Data Guard: Database Rolling Upgrade Shell Script.

Database Configuration Considerations for RAC

Database Initialization Parameter Considerations
Observe best practices with regard to large SGAs for RAC databases using a very large SGA (e.g. 100 GB) per instance; see the referenced note: Best Practices and Recommendations for RAC databases using very large SGA (e.g. 100 GB). Set PRE_PAGE_SGA=false. If set to true, it can significantly increase the time required to establish database connections.
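A hedged sketch of the PRE_PAGE_SGA recommendation, together with one possible way (an assumption, via GV$PQ_SYSSTAT) to observe parallel execution server usage for the PARALLEL_MIN_SERVERS point discussed in the next paragraph:

    $ sqlplus / as sysdba <<'EOF'
    -- Avoid pre-paging the entire SGA at connect time (takes effect after restart)
    ALTER SYSTEM SET pre_page_sga=FALSE SCOPE=SPFILE SID='*';

    -- Sample parallel execution server usage over time to derive an average
    -- value for PARALLEL_MIN_SERVERS
    SELECT inst_id, statistic, value
      FROM gv$pq_sysstat
     WHERE statistic IN ('Servers Busy', 'Servers Highwater');
    EOF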

In cases where clients complain that connections to the database are very slow, consider setting this parameter to false; doing so avoids mapping the whole SGA during process startup and thus saves connection time. For 12.1 and above, the PRE_PAGE_SGA behaviour changed; refer to Doc ID 1987975.1. Be sure to monitor the number of active parallel execution servers and calculate the average value to be applied for PARALLEL_MIN_SERVERS; this can be done, for example, by sampling GV$PQ_SYSSTAT as in the sketch above.

_lm_rcvr_hang_check_frequency = 20
_lm_rcvr_hang_allow_time = 70
_lm_rcvr_hang_kill = true

Performance Tuning Considerations
In any database system, RAC or single instance, the most significant performance gains are usually obtained from traditional application tuning techniques.

The benefits of those techniques are even more remarkable in a RAC database. Remove unselective indexes. In RAC environments, unselective index blocks may be subject to inter-instance contention, increasing the frequency of cache transfers for indexes belonging to INSERT-intensive tables. To avoid the performance impact of 'checkpoint not complete' conditions and frequent log switches, it is recommended that a minimum of 3 redo log groups per thread be created and that the redo logs be sized so that log switches occur every 15 - 30 minutes.
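A hedged sketch of checking log switch frequency and adding a redo log group to each thread; the group numbers, sizes and two-thread layout are assumptions:

    $ sqlplus / as sysdba <<'EOF'
    -- How many log switches occur per instance per hour?
    SELECT thread#, TO_CHAR(first_time,'YYYY-MM-DD HH24') hour, COUNT(*) switches
      FROM gv$log_history
     GROUP BY thread#, TO_CHAR(first_time,'YYYY-MM-DD HH24')
     ORDER BY 2, 1;

    -- Add a redo log group to each thread if more (or larger) groups are needed
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 SIZE 2G;
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 12 SIZE 2G;
    EOF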

See the referenced note for details. Use Automatic Segment Space Management (ASSM).

ASSM tablespaces automate freelist management and remove the requirement (and ability) to specify the PCTUSED, FREELISTS and FREELIST GROUPS storage parameters for individual tables and indexes created in these tablespaces. See the referenced note for additional details. Increasing sequence caches for insert-intensive applications improves instance affinity to index keys deriving their values from sequences. Increase the cache for application sequences, and some system sequences, for better performance; use a large cache value of perhaps 10,000 or more.
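A hedged illustration of the sequence cache recommendation; the sequence name is hypothetical, and the NOORDER attribute mentioned in the next paragraph is shown as well:

    $ sqlplus / as sysdba <<'EOF'
    -- A large cache plus NOORDER minimizes cross-instance contention on the sequence
    ALTER SEQUENCE app_owner.order_id_seq CACHE 10000 NOORDER;
    EOF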

Additionally, use of the NOORDER attribute is most effective, though it does not guarantee that sequence numbers are generated in order of request (NOORDER is actually the default). The following notes are often of value when troubleshooting RAC and Oracle Clusterware issues:

RAC: Frequently Asked Questions
11gR2 Clusterware and Grid Home - What You Need to Know
Troubleshooting 11.2 Clusterware Node Evictions (Reboots)
Troubleshooting 10g and 11.1 Clusterware Reboots
Data Gathering for Troubleshooting CRS Issues

Note: Additional information can be found in the Master Note for Real Application Clusters (RAC), Oracle Clusterware and Oracle Grid Infrastructure.

RAC Database Diagnostics

When opening an SR with Oracle Support related to issues with a RAC database, be sure to review the referenced note to ensure the proper diagnostic information is gathered and provided at the time of SR creation. Providing this information up front can decrease the turnaround time of SRs. The following notes are often of value when troubleshooting RAC-related database issues:


RAC: Frequently Asked Questions
GC Lost Blocks Diagnostics
Troubleshoot ORA-29740 errors in a RAC Environment
11g How to Unpack a Package in to ADR
11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support
Data Gathering for Troubleshooting RAC Issues

Note: Additional information can be found in the Master Note for Real Application Clusters (RAC), Oracle Clusterware and Oracle Grid Infrastructure.

Patching Diagnostics (OPatch)
The following notes are often of value when troubleshooting OPatch related issues.