Introduction
Starting in Oracle 12.2, the grid installation experience changed in two major ways:
- In the past, you unzipped the binaries into a download folder and executed runInstaller; Oracle Universal Installer (OUI) copied the binaries to your grid Oracle home. In the new procedure, you unzip the grid binaries directly into the grid home and then run gridSetup.sh to configure the software in place.
- Instead of configuring ASMLib, you initialize the disks using ASM Filter Driver (AFD).
These changes raise new organizational issues and introduce a new, critical step.
Scope
The scope of this article is:
- Installation to cloud virtual machine (Azure or AWS EC2)
- Oracle Restart (Not RAC)
- x86_64 hardware
- Linux 7
- udev is not considered
- New install, not an upgrade
- No preexisting asmlib
Grid infrastructure in the cloud
RAC
The main purpose of Oracle Grid Infrastructure was to support Real Application Clusters (RAC). RAC requires sharing storage volumes across hosts, which most cloud providers disallow. An iSCSI server can be used to share drives, but at the cost of added complexity. RAC deployments are, therefore, less common in the cloud.
ASM
Oracle Grid Infrastructure is extremely useful, even without RAC. Automatic Storage Management (ASM) is a core component of Grid Infrastructure. Some benefits of ASM are:
- Convenient, consolidated, managed storage
- Efficient RMAN and Data Guard administration
- Monitoring and managing space in ASM using Oracle Enterprise Manager
restart
An Oracle standalone (non-RAC) database running on Grid Infrastructure is known as a “restart”. You use Grid Infrastructure to manage Oracle startup and shutdown, and to monitor the database instance. Oracle Restart works very well in the cloud.
Organizational issues
Previously, during the operating system setup, ASMLib could be configured along with the other root steps, prior to grid installation. However, AFD labeling must wait until the grid installation has begun. In an organization with segregation of SA and DBA duties, the DBA is going to need to schedule three separate tasks for the SA:
- Set up the operating system
- Label ASM disks using AFD
- Run root.sh during the grid setup
Instead of personal handoffs, you can do one of these:
- Have one person with root and grid access perform all steps.
- Use an automated deployment tool.
- Run a script as root with some steps su’d to grid.
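The third option can be sketched in Python. This is a minimal illustration, not a complete deployment script; the only assumptions carried over from the text are a grid user named grid and the use of su from a root session.

```python
import subprocess

def as_grid(command):
    """Build the argv that runs a shell command as the grid user via su.
    The calling script is assumed to run as root."""
    return ["su", "-", "grid", "-c", command]

def run_steps(root_steps, grid_steps):
    """Run the root-owned steps directly, then the grid-owned steps via su."""
    for step in root_steps:
        subprocess.run(step, shell=True, check=True)
    for step in grid_steps:
        subprocess.run(as_grid(step), check=True)
```

A DBA could hand run_steps the OS-setup and labeling commands for root, and the unzip and gridSetup.sh commands for grid.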
This blog article covers manual command-line steps as root or grid, and use of the gridSetup.sh X Window System GUI.
Steps leading up to AFD
The steps leading up to ASM disk labeling are, briefly:
- Select a compatible machine image (Linux on x86_64)
- Create the virtual machine (e.g., 2 CPUs, 8 GB RAM, 32 GB operating system disk)
- Update and install packages
- Configure the operating system (swap, sysctl.conf)
- Create the grid user and directories
- Download and unzip the software
Now you are ready to configure your ASM disks.
Disk allocation
In the cloud, you can allocate new disks from the web console. You can also use a command-line tool. Finally, you can write a script to do the job, in Python for example. You can select from among the available types (hard drive or SSD), you can specify the size, and in some cases, you can specify the IOPS.
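As a sketch of the scripted approach with boto3, the helper below only assembles the keyword arguments for an EC2 create_volume call; the size, zone, and volume type values are illustrative, not recommendations.

```python
def volume_spec(size_gib, zone, volume_type="gp2", iops=None):
    """Assemble keyword arguments for an EC2 create_volume call.
    Iops is only meaningful for volume types that support provisioning it."""
    spec = {"Size": size_gib, "AvailabilityZone": zone, "VolumeType": volume_type}
    if iops is not None:
        spec["Iops"] = iops
    return spec

# With boto3 installed and credentials configured, the call would look like:
# from boto3 import resource
# resource('ec2').create_volume(**volume_spec(32, 'us-east-1a'))
```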
You can attach a disk to a live system, and you can detach a disk from a live system. Before allocating new disks, list the existing disks:
[root@grid19c ~]# df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 33025276 12824288 20200988 39% /
[root@grid19c ~]# swapon
NAME TYPE SIZE USED PRIO
/mnt/resource/swapfile file 2G 0B -2
[root@grid19c ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 0 Aug 23 12:40 /dev/sda
brw-rw----. 1 root disk 8, 1 Aug 23 12:40 /dev/sda1
brw-rw----. 1 root disk 8, 2 Aug 23 12:40 /dev/sda2
brw-rw----. 1 root disk 8, 16 Aug 23 12:40 /dev/sdb
brw-rw----. 1 root disk 8, 17 Aug 23 12:40 /dev/sdb1
AWS
We are mainly interested in Linux systems that are available in the AWS Marketplace. The virtualization type in the AWS Marketplace right now is predominantly Hardware Virtual Machine (HVM). In the systems that I have checked, the possible device names are /dev/xvd[a-z], or only 26 devices, which is not very flexible. The allowable device size range is 1 GiB to 16384 GiB. The largest SSD devices are burstable up to 64000 IOPS. In AWS, an attached device can be resized.

AWS lets you choose the Linux device node name. You can retrieve the node name from the console, the command line, or the API. For example:
>>> from boto3 import resource
>>> resource('ec2').Volume('vol-0679cfe2a209db2ed').attachments[0]['Device']
'/dev/sdg'
Azure
In Azure, disks range in size from 32 GiB to 32767 GiB. A disk can be resized, and the type (HDD or SSD) may be modified only when the disk is unattached. The largest premium SSD offers IOPS up to 20000.

Azure does not identify the new disk names for you. I have not found a way to retrieve the operating system node name from the Azure cloud. The closest I can get is to display the attachment status.
>>> from azure.common.credentials import get_azure_cli_credentials
>>> from azure.mgmt.compute import ComputeManagementClient
>>> cli = ComputeManagementClient(
... *get_azure_cli_credentials()
... )
>>> cli.disks.get(
... resource_group_name='ora',
... disk_name='asm-0'
... ).disk_state
'Attached'
This seems like a gap to me. You want a reliable method to distinguish new disks from existing disks, and a positive method to identify each disk.
Identify the new disks
For example, after attaching two new disks, ls -l /dev/sd* shows the additions:
brw-rw----. 1 root disk 8, 32 Aug 23 12:40 /dev/sdc
brw-rw----. 1 root disk 8, 48 Aug 23 12:40 /dev/sdd
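Since the cloud side may not tell you the operating system device name, one workaround is to snapshot the device nodes before attaching and diff afterward. A minimal sketch (the /dev/sd* pattern matches the Azure examples here; use /dev/xvd* on AWS):

```python
import glob

def snapshot(pattern="/dev/sd*"):
    """Set of device nodes currently matching the pattern."""
    return set(glob.glob(pattern))

def new_disks(before, after):
    """Device nodes present in the second snapshot but not the first."""
    return sorted(after - before)

# before = snapshot()            # take this prior to attaching the disks
# ...attach the new disks...
# print(new_disks(before, snapshot()))
```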
AFD label
Prior to starting the grid install, you need to label the disk or disks that you intend to use for the first ASM disk group. We will use asmcmd afd_label. This is a critical step.
afd_label
To label your disks, sudo to root and execute:
asmcmd afd_label <label> <devicenode> --init
For example:
[root@grid19c ~]# export ORACLE_HOME=/u01/app/19.3.0/grid
[root@grid19c ~]# export ORACLE_BASE=/tmp
[root@grid19c ~]# cd /u01/app/19.3.0/grid/bin
[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
- ORACLE_BASE=/tmp avoids creating root-owned files under the grid Oracle base.
- Exercise care to follow this instruction exactly because an incorrect invocation may not print an error message at all.
separation of duties
If root steps are not done by the DBA, then the DBA should prepare a script for the AFD labeling. For example:
#!/bin/bash
export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_BASE=/tmp
cd /u01/app/19.3.0/grid/bin
./asmcmd afd_label DATA01_00001 /dev/sdc --init
afd_lslbl
If your disk discovery string is the default value ‘/dev/sd*’, as in Azure, you will be able to list your ASM disks with this simple command:
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc
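Because the labeling commands can fail silently, it is worth checking afd_lslbl output programmatically rather than trusting exit codes. Here is a small parser for the output format shown above; the format is inferred from these samples, not from a documented interface:

```python
def labels_from_lslbl(output):
    """Parse (label, path) pairs from asmcmd afd_lslbl output.
    Data rows follow the ===== separator line; any 'Could not open pfile'
    noise before the separator is ignored."""
    pairs = []
    past_separator = False
    for line in output.splitlines():
        stripped = line.strip()
        if stripped and set(stripped) == {"="}:
            past_separator = True
            continue
        if past_separator and stripped:
            fields = stripped.split()
            pairs.append((fields[0], fields[-1]))
    return pairs
```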
On AWS, your disk string might be ‘/dev/xvd*’, so this might not work:
[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.
In that case, you need to specify the disk string on the command line:
[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl '/dev/xvd*'
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc
After you run root.sh, you do not need to specify the disk discovery string.
[root@ip-172-31-25-179 bin]# ./asmcmd afd_lslbl
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc
After root.sh, afd_lslbl gets its disk string from file /etc/oracleafd.conf:
# cat /etc/oracleafd.conf
afd_diskstring='/dev/xvd*'
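Given that file format, a script can recover the configured disk string instead of hard-coding it. A sketch that parses the single line shown above:

```python
def afd_diskstring(conf_text):
    """Extract the afd_diskstring value from /etc/oracleafd.conf contents.
    Returns None if the setting is absent (e.g., before root.sh has run)."""
    for line in conf_text.splitlines():
        if line.startswith("afd_diskstring="):
            return line.split("=", 1)[1].strip().strip("'\"")
    return None
```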
afd_unlabel
You can erase the AFD label with:
[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.
Bugs and limitations
the afd_label --init option
If you leave out the --init option, afd_label might not write a label to your disk, but it will also print no error message and will return a success code.
[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.
You can use dd to verify that the disk is empty:
[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 00000000
*
0020000
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.032763 s, 25.0 MB/s
the afd_unlabel --init option
Likewise, if you run afd_unlabel without the --init option, asmcmd may silently fail to unlabel your disk.
[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc
[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc
You can also use dd to view the label:
[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 1f75bc96
0000020 00000000 00000000 00000000 00000000
0000040 4c43524f 4b534944 41544144 305f3130
0000060 31303030 00000000 00000000 00000000
0000100 00000000 00000000 00000000 00000000
*
0000440 00000000 00000000 0500000a 5d5ee7d3
0000460 00000000 00000000 00000000 00000000
*
0020000
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied
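The od dump decodes as the ASCII magic ORCLDISK at byte offset 32, followed immediately by the label DATA01_00001. A sketch that extracts the label from a raw header read with Python instead of dd; the offsets come from this dump and are an observation, not a documented AFD on-disk format:

```python
def read_afd_label(header):
    """Return the AFD label from a raw device header, or None if unlabeled.
    Offsets observed in the dump above: b'ORCLDISK' at byte 32, the
    NUL-padded label immediately after it at byte 40."""
    if len(header) < 72 or header[32:40] != b"ORCLDISK":
        return None
    return header[40:72].split(b"\x00", 1)[0].decode("ascii")

# Reading the header directly (requires root):
# with open("/dev/sdc", "rb") as dev:
#     print(read_afd_label(dev.read(8192)))
```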
other utilities
In this context, other AFD commands might run, produce no output, do nothing, and exit with a success code.
[root@grid19c bin]# ./asmcmd afd_state
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_configure
[root@grid19c bin]# echo $?
0
At this stage in the grid installation, only afd_label is needed. You can also run afd_unlabel and afd_lslbl. If you experiment with other commands or options, you may end up in a state that is difficult to diagnose.
grid installation directory
Oracle grid can be installed in virtually any directory. As shipped, however, the path "/u01/app/19.0.0/grid" is hard-coded into scripts such as kfod. You cannot run kfod and other such scripts at this early stage. Later in the installation procedure, root.sh replaces these files with scripts that can run out of any installation directory.
[root@grid19c bin]# ./asmcmd
/u01/app/19.3.0/grid/bin/kfod: line 22: /u01/app/19.0.0/grid/bin/kfod.bin: No such file or directory
Use of uninitialized value $result[0] in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5982.
Use of uninitialized value $result[0] in split at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5985.
Use of uninitialized value $clus_mode in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5987.
Use of uninitialized value $clus_mode in concatenation (.) or string at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5988.
Use of uninitialized value $clus_mode in string eq at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5993.
...
ASMCMD> exit
conclusion
Most utilities are designed to run with full functionality after gridSetup.sh and root.sh have completed. One notable exception is the subset of asmcmd AFD subcommands that must run earlier, with very specific options.
Because of these bugs and limitations, it is critical to understand what the AFD commands can and cannot do at this stage, and to get the AFD disk labeling correct the first time.
Grid setup
This section describes the grid setup, with emphasis on the Create ASM Disk Group screen.
- Start an X server (e.g., Xming) on Windows
- On Linux, sudo to grid and cd to the grid home
- Start gridSetup.sh
- At the Configuration Options screen, select Standalone (Oracle Restart).
- Click Next to go to the Create ASM Disk Group screen.
Here is where you are going to create your first ASM disk group. gridSetup.sh scans the operating system for labeled disks. Labeled disks matching “/dev/sd*” will appear in your display.

On some AWS systems, you need to search for “/dev/xvd*”: click “Change Disk Discovery Path” and enter the new discovery string.
You can change the disk group name, allocation unit size, and redundancy. Select the disk or disks that you want in your first disk group. Verify that “Configure Oracle ASM Filter Driver” is checked, then press Next.
Continuing the grid install
After the Create ASM Disk Group screen, the installation proceeds smoothly:
- Complete the remaining screens
- When prompted, run root.sh
After root.sh is finished, all utilities in grid $ORACLE_HOME/bin should operate with full functionality.
Summary
Oracle Grid Infrastructure substantially improves manageability. Grid is less complex to install than Oracle RAC, but includes a disk labeling step that demands attention to detail. The Oracle 19c grid installation requires configuration of the ASM Filter Driver prior to running gridSetup.sh. You can use cloud and Linux administrative tools to allocate storage and then use asmcmd to label the disks.