May 2016 updated: 1Z0-058 dumps


Free VCE & PDF File for Oracle 1Z0-058 Real Exam (Full Version!)

Free Instant Download NEW 1Z0-058 Exam Dumps (PDF & VCE):
Available on: http://www.certleader.com/1Z0-058-dumps.html


1Z0-058 Product Description:
Exam Number/Code: 1Z0-058
Exam Name: Oracle Real Application Clusters 11g Release 2 and Grid Infrastructure
Questions: n, with full explanations
Certification: Oracle Certification
Last updated: synchronized globally

Instant Access to Free VCE Files: Oracle 1Z0-058 - Oracle Real Application Clusters 11g Release 2 and Grid Infrastructure


Exam Code: 1Z0-058 (Practice Exam Latest Test Questions VCE PDF)
Exam Name: Oracle Real Application Clusters 11g Release 2 and Grid Infrastructure
Certification Provider: Oracle
Free Today! Guaranteed Training - Pass the 1Z0-058 Exam.

May 2016 1Z0-058 Study Guide Questions:

Q21. You want to reorganize the DATA diskgroup while continuing database operations. The DATA diskgroup was created using normal redundancy, with one disk per failure group. The two disks used are /dev/sda1 and /dev/sda2. 

You plan to drop the existing disks and add the /dev/sdb1 and /dev/sdb2 disks to failure group FG_C and the /dev/sdc1 and /dev/sdc2 disks to failure group FG_D. 

Which procedure would you use to minimize the effect of the I/Os of this reorganization on ongoing database operations? 

A. Set rebalance power to 0 for diskgroup DATA. 

Add failure group FG_C with all the /dev/sdb disks. 

Add failure group FG_D with all the /dev/sdc disks. 

Drop disks /dev/sda1 and /dev/sda2. 

Set rebalance power to 1 for diskgroup DATA. 

B. Set rebalance power to 0 for diskgroup DATA. 

Add failure group FG_C with all the /dev/sdb disks. 

Add failure group FG_D with all the /dev/sdc disks. 

Drop disks /dev/sda1 and /dev/sda2. 

Set rebalance power to 9 for diskgroup DATA. 

C. Set rebalance power to 9 for diskgroup DATA. 

Add failure group FG_C with all the /dev/sdb disks. 

Add failure group FG_D with all the /dev/sdc disks. 

Drop disks /dev/sda1 and /dev/sda2. 

Set rebalance power to 0 for diskgroup DATA. 

D. Set rebalance power to 0 for diskgroup DATA. 

Drop disks /dev/sda1 and /dev/sda2. 

Add failure group FG_C with all the /dev/sdb disks. 

Add failure group FG_D with all the /dev/sdc disks. 

Set rebalance power to 1 for diskgroup DATA. 

Answer: A 

Explanation: To control the speed and resource consumption of the rebalance operation, you can include the REBALANCE POWER clause in statements that add, drop, or resize disks.

The ASM_POWER_LIMIT initialization parameter specifies the default power for disk rebalancing in a disk group. The range of values is 0 to 1024. The default value is 1. A value of 0 disables rebalancing. Higher numeric values enable the rebalancing operation to complete more quickly, but might result in higher I/O overhead and more rebalancing processes.

Failure groups are used to place mirrored copies of data so that each copy is on a disk in a different failure group. The simultaneous failure of all disks in a failure group does not result in data loss.

You define the failure groups for a disk group when you create an Oracle ASM disk group. After a disk group is created, you cannot alter the redundancy level of the disk group. If you omit the failure group specification, then Oracle ASM automatically places each disk into its own failure group, except for disk groups containing disks on Oracle Exadata cells. Normal redundancy disk groups require at least two failure groups. High redundancy disk groups require at least three failure groups. Disk groups with external redundancy do not use failure groups. 

Oracle. Automatic Storage Management Administrator's Guide 11g Release 2 (11.2) 
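To make answer A concrete, here is a minimal sketch of the procedure as SQL*Plus commands run as SYSASM, assuming the device names from the question; the ASM disk names DATA_0000 and DATA_0001 for the disks being dropped are hypothetical (check V$ASM_DISK for the real names):

SQL> ALTER DISKGROUP data REBALANCE POWER 0;
SQL> ALTER DISKGROUP data
       ADD FAILGROUP fg_c DISK '/dev/sdb1', '/dev/sdb2'
       FAILGROUP fg_d DISK '/dev/sdc1', '/dev/sdc2';
SQL> ALTER DISKGROUP data DROP DISK data_0000, data_0001;
SQL> ALTER DISKGROUP data REBALANCE POWER 1;

With power 0, the add and drop are recorded but no extents move; raising the power to 1 then performs the whole reorganization in a single, low-impact rebalance pass.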


Q22. For which two purposes would you recommend an ASM Cluster File System (ACFS)? 

A. a shared home directory for Oracle database executables in a single-instance cluster for cold failover 

B. a shared home directory for Oracle Grid Infrastructure executables 

C. a root file system for the operating system 

D. a shared file system for RAC data files 

E. a general purpose shared file system for OS files 

F. a clustered file system for OCR and voting disk files 

Answer: A,E 

Explanation: 

Overview of Oracle ACFS Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of Oracle Database. Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data. 

Notes: Oracle ASM is the preferred storage manager for all database files. It has been specifically designed and optimized to provide the best performance for database file types. For a list of file types supported by Oracle ASM, see Table 7-1, "File types supported by Oracle ASM". Oracle ACFS is the preferred file manager for non-database files. It is optimized for general purpose files. Oracle ACFS does not support any file type that can be directly stored in Oracle ASM, except where explicitly noted in the documentation. Not supported means Oracle Support Services does not take calls and development does not fix bugs associated with storing unsupported file types in Oracle ACFS.

Starting with Oracle Automatic Storage Management 11g Release 2 (11.2.0.3), Oracle ACFS supports RMAN backups (BACKUPSET file type), archive logs (ARCHIVELOG file type), and Data Pump dumpsets (DUMPSET file type). Note that Oracle ACFS snapshots are not supported with these files. Oracle ACFS does not support files for the Oracle Grid Infrastructure home. Oracle ACFS does not support Oracle Cluster Registry (OCR) and voting files.

Oracle ACFS functionality requires that the disk group compatibility attributes for ASM and ADVM be set to 11.2 or greater. For information about disk group compatibility, refer to "Disk Group Compatibility". 

Oracle. Automatic Storage Management Administrator's Guide 11g Release 2 (11.2) 
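As a sketch of the general-purpose use case in option E, an 11.2 ACFS file system is layered on an ADVM volume roughly as follows; the volume name gpvol, the size, the device name /dev/asm/gpvol-123 (as reported by volinfo), and the mount point are all hypothetical:

ASMCMD> volcreate -G data -s 10G gpvol
ASMCMD> volinfo -G data gpvol
# mkfs -t acfs /dev/asm/gpvol-123
# mkdir -p /u01/app/acfsmounts/gp
# mount -t acfs /dev/asm/gpvol-123 /u01/app/acfsmounts/gp

The mkfs and mount steps are run as root on Linux; note that the file system is created directly on the dynamic volume device, never on a partition of it.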


Q23. Examine the following details from the AWR report for your three-instance RAC database: 

Which inference is correct? 

A. There are a large number of requests for cr blocks or current blocks currently in progress. 

B. Global cache access is optimal without any significant delays. 

C. The log file sync waits are due to cluster interconnect latency. 

D. To determine the frequency of two-way block requests you must examine other events in the report. 

Answer: B 

Explanation: 

Analyzing Cache Fusion Transfer Impact Using GCS Statistics

This section describes how to monitor GCS performance by identifying objects read and modified frequently and the service times imposed by the remote access. Waiting for blocks to arrive may constitute a significant portion of the response time, in the same way that reading from disk could increase the block access delays, except that cache fusion transfers are in most cases faster than disk access latencies. The following wait events indicate that the remotely cached blocks were shipped to the local instance without having been busy, pinned, or requiring a log flush: 

gc current block 2-way
gc current block 3-way
gc cr block 2-way
gc cr block 3-way

The object statistics for gc current blocks received and gc cr blocks received enable quick identification of the indexes and tables which are shared by the active instances. As mentioned earlier, creating an ADDM analysis will, in most cases, point you to the SQL statements and database objects that could be impacted by interinstance contention.

Any increases in the average wait times for the events mentioned in the preceding list could be caused by the following occurrences: High load: CPU shortages, long run queues, scheduling delays. Misconfiguration: using the public instead of the private interconnect for message and block traffic.

If the average wait times are acceptable and no interconnect or load issues can be diagnosed, then the accumulated time waited can usually be attributed to a few SQL statements which need to be tuned to minimize the number of blocks accessed.

Oracle. Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) 
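To check these wait events outside of AWR, a quick sketch against the documented GV$SYSTEM_EVENT view lists the two-way and three-way block transfer waits per instance (column names as in 11.2):

SQL> SELECT inst_id, event, total_waits,
            time_waited_micro / NULLIF(total_waits, 0) AS avg_wait_us
     FROM   gv$system_event
     WHERE  event LIKE 'gc c% block %-way'
     ORDER  BY inst_id, event;

Low and stable average waits for these events correspond to the "global cache access is optimal" conclusion in answer B.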



Most recent Oracle 1Z0-058 certification:

Q24. On the OUI Grid Plug and Play information page, you can configure Grid Naming Service (GNS). What will the SCAN Name field default to if you enter cluster01 in the Cluster Name field and cluster01.example.com in the GNS Sub Domain field? 

A. cluster01.example.com 

B. cluster01-gns.example.com 

C. cluster01-scan.cluster01.example.com 

D. cluster-vip.example.com 

Answer: C 

Explanation: If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start Oracle Grid Infrastructure installation from the server node1, the cluster name is mycluster, and the GNS domain is grid.example.com, then the SCAN Name is mycluster-scan.grid.example.com. 

Oracle Grid Infrastructure Installation Guide 
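To confirm the derived SCAN name after installation, you can query the cluster configuration and DNS directly; a short sketch using the names from the question:

$ srvctl config scan
$ nslookup cluster01-scan.cluster01.example.com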


Q25. Your network administrator informs you that the Internet service provider is being changed in a month's time in conjunction with a data center move. 

You are asked to plan for the changes required in the Oracle Grid Infrastructure, which is set up to use GNS. 

The IP addresses and subnets of the public network are to change. 

Which two must be done in the Oracle Grid Infrastructure network setup to accommodate this change using the command-line interfaces available? 

A. The SCAN VIPs and node VIPs must be reconfigured using srvctl. 

B. The SCAN VIPs and SCAN listener resources must be removed and added to obtain the new SCAN IP addresses from DHCP. 

C. The interconnect must be reconfigured by using oifcfg, crsctl, and ifconfig. 

D. The SCAN VIPs and node VIPs must be reconfigured by using oifcfg. 

E. The interconnect must be reconfigured by using srvctl. 

Answer: C,D 

Explanation: 

How to Modify Public or Private Network Information in Oracle Clusterware [ID 283684.1] (Modified: 14-MAR-2012, Type: HOWTO, Status: PUBLISHED) 

Applies to: 

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.3 - Release: 10.1 to 11.2
Information in this document applies to any platform. 

Goal 

The purpose of this note is to describe how to change or update the cluster_interconnect and/or public interface information that is stored in OCR. It may be necessary to change or update interface names, or subnet associated with an interface if there is a network change affecting the servers, or if the original information that was input during the installation was incorrect. It may also be the case that for some reason, the Oracle Interface Configuration Assistant ('oifcfg') did not succeed during the installation. 

This note is not intended as a means to change the Public or Private Hostname themselves. Public hostname or Private hostname can only be changed by removing/adding nodes, or reinstalling Oracle Clusterware. 

However, node VIP name/IP can be changed, refer to Note 276434.1 for details. 

Refer to note 1386709.1 for basics of IPv4 subnet and Oracle Clusterware 

Instructions for Changing Interfaces/Subnet 

1. Public Network Change 

If the change is only the public IP address and the new addresses are still in the same subnet, nothing needs to be done at the clusterware level (all changes need to be done at the OS level to reflect the change). 

If the change involves a different subnet or interface, then because there is no 'modify' option you will need to delete the interface and add it back with the correct information. So, in the example here, the subnet is being changed from 10.2.156.0 to 10.2.166.0 via two separate commands - first a 'delif' followed by a 'setif':

% $ORA_CRS_HOME/bin/oifcfg delif -global eth0
% $ORA_CRS_HOME/bin/oifcfg setif -global eth0/10.2.166.0:public

Syntax: oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public> 

Note: If public network is changed, it may be necessary to change VIP as well, refer to Note 276434.1 for details; for 11gR2, it may be necessary to change SCAN as well, refer to note 972500.1 for details (This procedure does not apply when GNS is being used). 

2. Private Network Change

2A. For pre-11gR2, if you wish to change the cluster_interconnect information and/or the private IP address, the hosts file needs to be modified on each node to reflect the change while the Oracle Clusterware stack is down on all nodes. After the stack has restarted, to change the cluster_interconnect used by the RDBMS and ASM instances, run oifcfg. In this example:

% $ORA_CRS_HOME/bin/oifcfg delif -global eth1
% $ORA_CRS_HOME/bin/oifcfg setif -global eth1/192.168.1.0:cluster_interconnect

2B. For 11gR2 and higher, refer to note 1073502.1. 

Note: For 11gR2, as the clusterware also uses the cluster_interconnect, the intended private network must be added by "oifcfg setif" before stopping the clusterware for any change. Note: If you are running OCFS2 on Linux and are changing the private IP address for your cluster, you may also need to change the private IP address that OCFS2 is using to communicate with other nodes. For more information on this, please refer to <Note 604958.1> 

3. Verify the correct interface subnet is in use by re-running oifcfg with the 'getif' option:

% $ORA_CRS_HOME/bin/oifcfg getif
eth0 10.2.166.0 global public
eth1 192.168.1.0 global cluster_interconnect

How to Modify Private Network Interface in 11.2 Grid Infrastructure [ID 1073502.1] (Modified: 08-FEB-2012, Type: HOWTO, Status: PUBLISHED) 

Applies to: 

Oracle Server - Enterprise Edition - Version: 11.2.0.1.0 and later [Release: 11.2 and later]
Information in this document applies to any platform. 

Goal 

The purpose of this document is to demonstrate how to change the private network interface configuration stored in the OCR. This may be required if the name of the interface for the private network (cluster interconnect) needs to be changed at the OS level, for example, the private network is configured on a single network interface eth0, now you want to replace it with a bond interface bond0 and eth0 will be part of the bond0 interface. It also includes command for adding/deleting a private network interface. 

Solution 

As of 11.2 Grid Infrastructure, the CRS daemon (crsd.bin) now has a dependency on the private network configuration stored in the gpnp profile and OCR. If the private network is not available or its definition is incorrect, the CRSD process will not start and any subsequent changes to the OCR will be impossible. Therefore care needs to be taken when making modifications to the configuration of the private network. It is important to perform the changes in the correct order.

Note: If only the private network IP is going to be changed and the subnet and network interface remain the same (for example, changing the private IP from 192.168.0.1 to 192.168.0.10), simply shut down the GI stack, make the IP modification at the OS level (/etc/hosts, network config, etc.) for the private network, then restart the GI stack to complete the task. The following procedures apply when the subnet or network interface name also requires change.

Please take a backup of profile.xml on all cluster nodes before proceeding, as the grid user:

$ cd $GRID_HOME/gpnp/<hostname>/profiles/peer/
$ cp -p profile.xml profile.xml.bk 

To modify the private network (cluster_interconnect): 

1. Ensure CRS is running on ALL cluster nodes in the cluster 

2. As grid user, add new interface: 

Find the interface which needs to be removed. For example: 

$ oifcfg getif 

eth1 100.17.10.0 global public 

eth0 192.168.0.0 global cluster_interconnect 

Here the eth0 interface will be replaced by bond0 interface. 

Add new interface bond0: 

$ oifcfg setif -global <interface>/<subnet>:cluster_interconnect 

For example: 

$ oifcfg setif -global bond0/192.168.0.0:cluster_interconnect 

This can be done with the -global option even if the interface is not available yet, but it cannot be done with the -node option if the interface is not available; that would lead to node eviction. 

If the interface is available on the server, subnet address can be identified by command: 

$ oifcfg iflist 

It lists the network interface and its subnet address. This command can be run even if CRS is not up and running. Please note, the subnet address might not be in the format of x.y.z.0. 

For example, it can be: 

$ oifcfg iflist 

lan1 18.1.2.0 

lan2 10.2.3.64 << this is the private network subnet address associated with the private network IP 10.2.3.86 

If the scenario is just to add a 2nd private network, for example a new interface eth3 with subnet address 192.168.1.96, then issue: 

$ oifcfg setif -global eth3/192.168.1.96:cluster_interconnect 

Verify the change: 

$ oifcfg getif 

3. Shut down CRS on all nodes and disable CRS, as the root user:

# crsctl stop crs
# crsctl disable crs 

4. Make the network configuration change at the OS level as required, and ensure the new interface is available on all nodes after the change:

$ ifconfig -a 

$ ping <private hostname> 

5. Enable CRS and restart CRS on all nodes as root user: 

# crsctl enable crs 

# crsctl start crs 

6. Remove the old interface: 

$ oifcfg delif -global eth0 

Note #1: This step is not required for the scenario of adding a 2nd interface.

Note #2: If the new interface is added without removing the old interface, e.g. the old interface is still available when CRS restarts, then after step 6 CRS needs to be stopped and started again to ensure the old interface is no longer in use. 

Workaround: restore the OS network configuration back to the original status and start CRS. Then follow the above steps to make the changes again. Please consult Oracle Support Services if CRS still cannot start after restoring the OS network configuration. 

2. If any one node is down in the cluster, the oifcfg command will fail with an error:

$ oifcfg setif -global bond0/192.168.0.0:cluster_interconnect
PRIF-26: Error in update the profiles in the cluster

Workaround: start CRS on the node where it is not running. Ensure CRS is up on all cluster nodes. 

3. If a user other than the Grid Infrastructure owner issues the above command, it will fail with the same error:

$ oifcfg setif -global bond0/192.168.0.0:cluster_interconnect
PRIF-26: Error in update the profiles in the cluster

Workaround: ensure you are logged in as the Grid Infrastructure owner when performing such commands. 

4. From 11.2.0.2 onwards, if you attempt to delete the last private interface (cluster_interconnect) without adding a new one first, the following error will occur: 

PRIF-31: Failed to delete the specified network interface because it is the last private interface

Workaround: Add the new private interface first, before deleting the old private interface. 

5. If CRS is down on the node, the following error is expected:

$ oifcfg getif
PRIF-10: failed to initialize the cluster registry

Workaround: Start CRS on the node.

My Oracle Support 


Q26. Which two conditions are required for ASM fast mirror resynchronization to track block changes for a set period of time before dropping the disk from the disk group? 

A. Redundancy is normal or high. 

B. compatible.rdbms is set to a value of at least 11.1. 

C. disk_repair_time is set to a nondefault value. 

D. block_change_tracking is enabled. 

E. db_block_checking is enabled. 

F. resumable_timeout is set to a nondefault value. 

Answer: A,B 

Explanation: 

ASM Fast Mirror Resync is enabled when COMPATIBLE.RDBMS >= 11.1.

Whenever ASM is unable to write an extent, ASM takes the associated disk offline. If the corresponding disk group uses ASM mirroring (NORMAL or HIGH redundancy), at least one mirror copy of the same extent exists on another disk in the disk group. Before Oracle Database 11g, ASM assumed that an offline disk contained only stale data and no longer read from such disks. Shortly after a disk is put offline, ASM drops it from the disk group by re-creating the extents allocated to the disk on the remaining disks in the disk group using mirrored extent copies. This process is quite resource intensive and can take hours to complete. If the disk is replaced or the failure is repaired, the disk must be added again and another rebalance operation must take place. 

D60488GC11 Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 8 - 32 
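A minimal sketch of meeting conditions A and B on an existing disk group, together with the related (but defaulted) repair-time attribute, using documented ALTER DISKGROUP syntax; the diskgroup name and the 4.5h value are arbitrary examples (the default disk_repair_time is 3.6h):

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '4.5h';

The disk group itself must have been created with NORMAL or HIGH redundancy; as noted in the Q21 explanation, redundancy cannot be altered after creation.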



Actual 1Z0-058 exam dumps:

Q27. Some new non-ASM shared storage has been made available by the storage administrator, and the Oracle Grid Infrastructure administrator decides to move the voting disks, which do not reside in ASM, to this new non-ASM location. How can this be done? 

A. by running crsctl add css votedisk <path_to_new_location> followed by crsctl delete css votedisk <path_to_old_location> 

B. by running crsctl replace css votedisk <path_to_old_location,path_to_new_location> 

C. by running srvctl replace css votedisk <path_to_old_location, path_to_new_location> 

D. by running srvctl add css votedisk <path_to_new_location> followed by srvctl delete css votedisk <path_to_old_location> 

Answer: A 

Explanation: 

Adding, Deleting, or Migrating Voting Disks 

Modifying voting disks that are stored in Oracle ASM:

To migrate voting disks from Oracle ASM to an alternative storage device, specify the path to the non-Oracle ASM storage device with which you want to replace the Oracle ASM disk group using the following command:

$ crsctl replace votedisk path_to_voting_disk

You can run this command on any node in the cluster. 

To replace all voting disks not stored in Oracle ASM with voting disks managed by Oracle ASM in an Oracle ASM disk group, run the following command:

$ crsctl replace votedisk +asm_disk_group 

Modifying voting disks that are not stored on Oracle ASM:

To add one or more voting disks, run the following command, replacing the path_to_voting_disk variable with one or more space-delimited, complete paths to the voting disks you want to add:

$ crsctl add css votedisk path_to_voting_disk [...]

To replace voting disk A with voting disk B, you must add voting disk B, and then delete voting disk A. To add a new disk and remove the existing disk, run the following command, replacing the path_to_voting_diskB variable with the fully qualified path name of voting disk B:

$ crsctl add css votedisk path_to_voting_diskB -purge 

The -purge option deletes existing voting disks. 

To remove a voting disk, run the following command, specifying one or more space-delimited voting disk FUIDs or comma-delimited directory paths to the voting disks you want to remove: 

$ crsctl delete css votedisk {FUID | path_to_voting_disk[...]} 

Oracle. Clusterware Administration and Deployment Guide 11g Release 2 (11.2) 


Q28. Which four statements are true about ADVM interoperability? 

A. Using fdisk or similar disk utilities to partition ADVM-managed volumes is not supported. 

B. On Linux platforms, the raw utility can be used to map ADVM volume block devices to raw volume devices. 

C. The creation of multipath devices over ADVM devices is not supported. 

D. You may create ASMLIB devices over ADVM devices to simplify volume management. 

E. ADVM does not support ASM storage contained in Exadata. 

F. ADVM volumes cannot be used as a boot device or a root file system. 

Answer: A,C,E,F 

Explanation: Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (Oracle ADVM) extend Oracle ASM support to include database and application executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.

Because Oracle ADVM volumes are, technically speaking, ASM files located in ASM disk groups, and because dynamic volumes do not use traditional device partitioning, Oracle can extend some ASM features, such as dynamic resizing and dynamically adding volumes, to the ASM Cluster File Systems created inside these ADVM volumes. This makes ADVM and ACFS a far more flexible solution than traditional physical devices. 

Important Notes: 

Partitioning of dynamic volumes (using fdisk or similar) is not supported.
Do not use raw to map ADVM volume block devices into raw volume devices.
Do not create multipath devices over ADVM devices.
Do not create ASMLIB devices over ADVM devices.
Oracle ADVM supports all storage solutions supported for Oracle ASM, with the exception of NFS and Exadata storage.
ADVM volumes cannot be used as a boot device or a root file system. 
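Since ADVM volumes cannot be partitioned or remapped, the dynamic volume device is always used directly. A prerequisite sketch for creating ADVM volumes at all is raising the disk group compatibility attributes (attribute names per the 11.2 documentation; the diskgroup name is hypothetical):

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.2';
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '11.2';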


Q29. You are ready to add two new nodes called RACNODE5 and RACNODE6 to your existing four-node cluster by using addNode.sh. 

You have run cluvfy -peer to check the new nodes against a reference node. 

When you originally created the cluster, the network administrators chose to statically define the SCAN VIP addresses in the corporate DNS server, and you installed the Oracle Grid Infrastructure without using GNS. 

What is the correct way to silently add the nodes? 

A. addNode.sh -silent "CLUSTER_NEW_NODES={RACNODE5,RACNODE6}" 

B. addNode.sh -silent "CLUSTER_NEW_VIRTUAL_HOSTNAMES={RACNODE5-VIP,RACNODE6-VIP}" 

C. addNode.sh -silent "CLUSTER_NEW_NODES={RACNODE5,RACNODE6}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={RACNODE5-VIP,RACNODE6-VIP}" 

D. addNode.sh -silent -responseFile mynewnodes.txt, with the response file containing only "CLUSTER_NEW_NODES={RACNODE5,RACNODE6}" 

E. addNode.sh -silent -responseFile mynewnodes.txt, with the response file containing "CLUSTER_NEW_VIRTUAL_HOSTNAMES={RACNODE3-VIP,RACNODE4-VIP}" 

Answer: C 

Explanation: 

Adding a Cluster Node on Linux and UNIX Systems

If you are not using GNS, run the following command:

$ ./addNode.sh "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" 

Oracle. Clusterware Administration and Deployment Guide 11g Release 2 (11.2) 


Q30. Which three fragments will complete this statement correctly? In a cluster environment, an ACFS volume _____________. 

A. Will be automatically mounted by a node on reboot by default 

B. Must be manually mounted after a node reboot 

C. Will be automatically mounted by a node at cluster stack startup if it is included in the ACFS mount registry. 

D. Will be automatically mounted on all nodes, if it is defined as a cluster resource, when dependent cluster resources require access. 

E. Will be automatically mounted on all nodes in the cluster when the file system is registered. 

F. Must be mounted before it can be registered 

Answer: A,C,E 

Explanation: The Oracle ACFS registry resource actions are designed to automatically mount a file system only one time for each Oracle Grid Infrastructure initialization to avoid potential conflicts with administrative actions to dismount a given file system. 

Reference: Oracle Automatic Storage Management Administrator's Guide 
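For the registry-based auto-mount in options C and E, registration is done with the acfsutil command; a sketch with a hypothetical volume device and mount point, run as root on Linux:

# /sbin/acfsutil registry -a /dev/asm/gpvol-123 /u01/app/acfsmounts/gp
# /sbin/acfsutil registry

The -a flag adds the file system to the ACFS mount registry so that it is mounted automatically at cluster stack startup; acfsutil registry with no options lists the current registrations.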


