Dual stack listeners on Linux

By Brian Fitzgerald

Question

$ netstat -ntl | grep :1521
tcp6 0 0 :::1521 :::* LISTEN

Q: Does the netstat output shown here mean that the listener accepts only IPv6 connections?

A: No. By default, an AF_INET6 listening socket on Linux is dual stack: it accepts both IPv4 and IPv6 connections.

Oracle listener trace

Here is an strace of the Oracle TNS listener socket binding.

32649 socket(AF_INET6, SOCK_STREAM, IPPROTO_IP) = 8
32649 setsockopt(8, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
32649 bind(8, {sa_family=AF_INET6, sin6_port=htons(1521), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
32649 listen(8, 128)

In the bind call, notice that the socket address family is AF_INET6, and the IP address shown is “::”, meaning listen on all IP addresses on the local host. The netstat output looks like this:

[grid@ip-172-32-10-34 ~]$ netstat -ntl | grep :1521
tcp6 0 0 :::1521 :::* LISTEN

However, the listener will accept either IPv4 or IPv6. You can check this by testing IPv4 and IPv6 one at a time:

[ec2-user@ip-172-32-10-34 ~]$ nc -v -4 localhost 1521 < /dev/null
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:1521.
Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.
[ec2-user@ip-172-32-10-34 ~]$ nc -v -6 localhost 1521 < /dev/null
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to ::1:1521.
Ncat: 0 bytes sent, 0 bytes received in 0.03 seconds.

The connect calls were:

connect(3, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
connect(3, {sa_family=AF_INET6, sin6_port=htons(1521), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EINPROGRESS (Operation now in progress)

You can compare the Oracle listener's single dual-stack socket to other listeners that open separate IPv4 and IPv6 sockets.

Linux sshd trace

By comparison, here is a trace of the sshd listener socket bindings.

1142 socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
1142 fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
1142 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
1142 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
1142 bind(3, {sa_family=AF_INET, sin_port=htons(22), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
1142 listen(3, 128) = 0
...
1142 socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP) = 4
1142 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR)
1142 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
1142 setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
1142 setsockopt(4, SOL_IPV6, IPV6_V6ONLY, [1], 4) = 0
1142 bind(4, {sa_family=AF_INET6, sin6_port=htons(22), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
1142 listen(4, 128)

Notice that for socket 3, the address family is AF_INET and the listener IP address is given as “0.0.0.0”, again meaning listen on all IP addresses. Examining the socket 4 trace carefully, we see that before the bind call, the socket option IPV6_V6ONLY is set. The netstat output looks like this:

[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN

netstat displays one output line per socket. sshd has two listener sockets, one for IPv4 and one for IPv6.
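Both behaviors can be reproduced with a short Python sketch: clearing IPV6_V6ONLY gives an Oracle-style dual stack socket, where an IPv4 peer shows up in the v4-mapped ::ffff: form, while setting it gives an sshd-style IPv6-only socket.

```python
import socket

# Dual stack: one AF_INET6 socket with IPV6_V6ONLY cleared (the Linux default)
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))                      # "::" = all addresses, port 0 = ephemeral
srv.listen(5)
port = srv.getsockname()[1]

# An IPv4 client connects successfully to the tcp6 wildcard socket
client = socket.create_connection(("127.0.0.1", port))
conn, peer = srv.accept()
print(peer[0])                           # IPv4 peer appears as ::ffff:127.0.0.1

# sshd-style: a socket with IPV6_V6ONLY set accepts IPv6 only
v6only = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
v6only.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
v6only.bind(("::", 0))
v6only.listen(5)
try:
    socket.create_connection(("127.0.0.1", v6only.getsockname()[1]), timeout=2)
    v4_reached = True
except OSError:                          # connection refused, as with nc -6 -l
    v4_reached = False

for s in (client, conn, srv, v6only):
    s.close()
```

The ::ffff: mapped address is how a dual stack socket presents IPv4 peers, which is also why such a listener shows up only on the tcp6 line in netstat.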

IPv6-only listener

You can demonstrate an IPv6-only listener:

[ec2-user@ip-172-32-10-34 ~]$ nc -6 -l 6666
[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :6666
tcp6       0      0 :::6666                 :::*                    LISTEN

An IPv4 connection fails:

[ec2-user@ip-172-32-10-34 ~]$ nc -4 localhost 6666
Ncat: Connection refused.

The nc utility makes two connection attempts:

socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
fcntl(3, F_GETFL)                       = 0x2 (flags O_RDWR)
fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK)    = 0
connect(3, {sa_family=AF_INET, sin_port=htons(6666), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
...
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
fcntl(4, F_GETFL)                       = 0x2 (flags O_RDWR)
fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK)    = 0
connect(4, {sa_family=AF_INET, sin_port=htons(6666), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)

IPv4-only listener

Likewise, you can demonstrate an IPv4-only listener.

[ec2-user@ip-172-32-10-34 ~]$ nc -4 -l 4444

Netstat:

[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :4444
tcp 0 0 0.0.0.0:4444 0.0.0.0:* LISTEN

IPv6 connection fails:

[ec2-user@ip-172-32-10-34 ~]$ nc -6 localhost 4444
Ncat: Connection refused.

The connect call:

connect(3, {sa_family=AF_INET6, sin6_port=htons(4444), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EINPROGRESS (Operation now in progress)

Conclusion

In the netstat output,

[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :1521
tcp6 0 0 :::1521 :::* LISTEN

The lack of a line such as

tcp 0 0 0.0.0.0:1521 0.0.0.0:* LISTEN

does not mean that the listener does not accept IPv4 connections. It could mean that the listener implements a dual stack socket.

Grid 19c install with ASM Filter Driver

Introduction

Starting in Oracle 12.2, the grid installation experience changed in two major ways:

  1. In the past, you unzipped the binaries into a download folder and executed runInstaller. Oracle Universal Installer (OUI) copied the binaries to your grid oracle home. In the new procedure, you unzip the grid binaries into the grid home and then run gridSetup.sh to configure the software in-place.
  2. Instead of configuring asmlib, you are going to initialize the disks using ASM Filter Driver (AFD).

These changes raise new organizational issues and introduce a new, critical step.

Scope

The scope of this article is:

  • Installation to cloud virtual machine (Azure or AWS EC2)
  • Oracle Restart (Not RAC)
  • x86_64 hardware
  • Linux 7
  • udev is not considered
  • New install, not an upgrade
  • No preexisting asmlib

Grid infrastructure in the cloud

RAC

The main purpose of Oracle Grid Infrastructure was to support Real Application Clusters (RAC). RAC requires sharing storage volumes across hosts, which most cloud providers disallow. An iSCSI server can be used to share drives, but at the cost of added complexity. RAC deployments are, therefore, less common in the cloud.

ASM

Oracle Grid Infrastructure is extremely useful, even without RAC. Automatic Storage Management is a main component of grid. Some benefits of ASM are:

  • Convenient, consolidated, managed storage
  • Efficient RMAN and Data Guard administration
  • Monitoring and managing space in ASM using Oracle Enterprise Manager

Restart

An Oracle standalone (non-RAC) database that is running on grid infrastructure is known as a “restart”. You use grid to manage Oracle startup and shutdown, and to monitor the database instance. Oracle Restart works very well in the cloud.

Organizational issues

Previously, during the operating system setup, asmlib could be configured along with other root steps, prior to grid installation. However, AFD labeling must wait until grid installation has begun. In an organization with segregation of SA and DBA duties, the DBA is going to need to schedule three separate tasks for the SA.

  1. Set up the operating system
  2. Label ASM disks using AFD
  3. Run root.sh during the grid setup

Instead of personal handoffs, you can do one of these:

  1. Have one person with root and grid access perform all steps.
  2. Use an automated deployment tool.
  3. Run a script as root with some steps su’d to grid.

This blog article covers manual command line steps as root or grid, and use of the gridSetup.sh X Window System GUI.

Steps leading up to AFD

The steps leading up to ASM disk labeling are, briefly:

  • Select a compatible machine image (Linux on x86_64)
  • Create the OS (ex: 2 CPU, 8G RAM, 32G operating system disk)
  • Update and install packages
  • Configure the operating system (swap, sysctl.conf)
  • Create the grid user and directories
  • Download and unzip the software

Now you are ready to configure your ASM disks.

Disk allocation

In the cloud, you can allocate new disks from the web console. You can also use a command line tool. Finally, you can write a script to do the job, in Python for example. You can select from among the available types (hard drive or SSD), specify the size, and in some cases, specify the IOPS.

You can attach a disk to a live system, and you can detach a disk from a live system. Before allocating new disks, list the existing disks:

[root@grid19c ~]# df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 33025276 12824288 20200988 39% /
[root@grid19c ~]# swapon
NAME TYPE SIZE USED PRIO
/mnt/resource/swapfile file 2G 0B -2
[root@grid19c ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 0 Aug 23 12:40 /dev/sda
brw-rw----. 1 root disk 8, 1 Aug 23 12:40 /dev/sda1
brw-rw----. 1 root disk 8, 2 Aug 23 12:40 /dev/sda2
brw-rw----. 1 root disk 8, 16 Aug 23 12:40 /dev/sdb
brw-rw----. 1 root disk 8, 17 Aug 23 12:40 /dev/sdb1

AWS

We are mainly interested in Linux systems that are available in the AWS Marketplace. The virtualization type in the AWS Marketplace right now is predominantly Hardware Virtual Machine (HVM). In the systems that I have checked, the possible device names are /dev/xvd[a-z], or only 26 devices, which is not very flexible. The allowable device size range is 1 GiB to 16384 GiB. The largest SSD devices are burstable up to 64000 IOPS. In AWS, an attached device can be resized.

[screenshot: aws.disks]

AWS lets you choose the Linux device node name. You can retrieve the node name from the console, the command line, or the API. For example:

>>> from boto3 import resource
>>> resource('ec2').Volume('vol-0679cfe2a209db2ed').attachments[0]['Device']
'/dev/sdg'

Azure

In Azure, disks range in size from 32 GiB to 32767 GiB. A disk can be resized, and the type (HDD or SSD) may be modified only when the disk is unattached. The largest premium SSD offers IOPS up to 20000.

[screenshot: azure.disks]

Azure does not identify the new disk names for you. I have not found a way to retrieve the operating system node name from the Azure cloud. The closest I can get is to display the attachment status.

>>> from azure.common.credentials import get_azure_cli_credentials
>>> from azure.mgmt.compute import ComputeManagementClient
>>> cli = ComputeManagementClient(
... *get_azure_cli_credentials()
... )
>>> cli.disks.get(
... resource_group_name='ora',
... disk_name='asm-0'
... ).disk_state
'Attached'

This seems like a gap to me. You want a certain method to distinguish new disks from existing disks, and you want a positive method to identify each disk.

Identify the new disks

Example:

brw-rw----. 1 root disk 8, 32 Aug 23 12:40 /dev/sdc
brw-rw----. 1 root disk 8, 48 Aug 23 12:40 /dev/sdd
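One way to make the identification positive is to snapshot the device list before attaching the disks, snapshot it again after, and diff the two sets. A minimal sketch (the helper name is mine, not a cloud API):

```python
import glob

def new_devices(before, after):
    """Return device nodes present in `after` but not in `before`, sorted."""
    return sorted(set(after) - set(before))

# Typical use: snapshot, attach the disks in the cloud console, snapshot again
before = glob.glob("/dev/sd[a-z]")
# ... attach the new disks here ...
after = glob.glob("/dev/sd[a-z]")
print(new_devices(before, after))
```

With the listings shown above, the diff would report /dev/sdc and /dev/sdd as the new disks.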

AFD label

Prior to starting the grid install, you need to label the disk or disks that you intend to use for the first ASM disk group. We will use asmcmd afd_label. This is a critical step.

afd_label

To label your disks, sudo to root and execute:

asmcmd afd_label <label> <devicenode> --init

For example:

[root@grid19c ~]# export ORACLE_HOME=/u01/app/19.3.0/grid
[root@grid19c ~]# export ORACLE_BASE=/tmp
[root@grid19c ~]# cd /u01/app/19.3.0/grid/bin
[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
  • ORACLE_BASE=/tmp avoids creating root-owned files under the grid oracle base.
  • Exercise care to follow this instruction exactly because an incorrect invocation may not print an error message at all.

Separation of duties

If root steps are not done by the DBA, then the DBA should prepare a script for the AFD labeling. For example:

#!/bin/bash

export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_BASE=/tmp
cd /u01/app/19.3.0/grid/bin
./asmcmd afd_label DATA01_00001 /dev/sdc --init

afd_lslbl

If your disk discovery string is the default value ‘/dev/sd*’, as in Azure, you will be able to list your ASM disks with this simple command:

[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

On AWS, your disk string might be ‘/dev/xvd*’, so this might not work:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

In that case, you need to specify the disk string on the command line:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl '/dev/xvd*'
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After you run root.sh, you do not need to specify the disk discovery string.

[root@ip-172-31-25-179 bin]# ./asmcmd afd_lslbl
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After root.sh, afd_lslbl gets its disk string from file /etc/oracleafd.conf:

# cat /etc/oracleafd.conf
afd_diskstring='/dev/xvd*'
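For scripting, the disk string can be pulled out of that file. A minimal sketch; the parsing is an assumption based on the one-line file shown above, not a documented format:

```python
import re

def afd_diskstring(conf_text):
    """Extract the afd_diskstring value from /etc/oracleafd.conf content."""
    m = re.search(r"afd_diskstring='([^']*)'", conf_text)
    return m.group(1) if m else None

# Example, using the file content shown above:
print(afd_diskstring("afd_diskstring='/dev/xvd*'\n"))  # /dev/xvd*
```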

afd_unlabel

You can erase the AFD label with:

[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

Bugs and limitations

The afd_label --init option

If you leave out the --init option, afd_label might not write a label to your disk, but it will print no error message and will return a success code.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

You can use dd to verify that the disk is empty:

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 00000000
*
0020000
1+0 records in
1+0 records out
8192 bytes (8,2 kB) copied, 0.032763 s, 25.0 MB/s

The afd_unlabel --init option

Likewise, if you run afd_unlabel without the --init option, asmcmd may silently fail to unlabel your disk.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc
[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

You can also use dd to view the label:

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 1f75bc96
0000020 00000000 00000000 00000000 00000000
0000040 4c43524f 4b534944 41544144 305f3130
0000060 31303030 00000000 00000000 00000000
0000100 00000000 00000000 00000000 00000000
*
0000440 00000000 00000000 0500000a 5d5ee7d3
0000460 00000000 00000000 00000000 00000000
*
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied*
0020000
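od -X prints each 32-bit word in host (little-endian) order, so repacking the words from offsets 0000040 through 0000060 recovers the ASCII label text. A quick decoding sketch:

```python
import struct

# 32-bit words from the od -X dump above, offsets 0000040-0000060
words = [0x4c43524f, 0x4b534944, 0x41544144, 0x305f3130, 0x31303030]

# Repack each word as little-endian to recover the on-disk byte stream
raw = b"".join(struct.pack("<I", w) for w in words)
print(raw.decode("ascii"))  # ORCLDISKDATA01_00001
```

The ORCLDISK marker followed by the label DATA01_00001 confirms that afd_label wrote the label to the device.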

Other utilities

In this context, other afd commands might run, produce no output, do nothing, and exit with a success code.

[root@grid19c bin]# ./asmcmd afd_state
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_configure
[root@grid19c bin]# echo $?
0

At this stage of the grid installation, only the afd_label command is needed. You can also run afd_unlabel and afd_lslbl. If you experiment with other commands or options, you may end up in a state that is difficult to diagnose.

Grid installation directory

Oracle grid can be installed in virtually any directory. As shipped, however, the path “/u01/app/19.0.0/grid” is hard-coded into wrapper scripts such as kfod. You cannot run kfod and other such scripts at this early stage if your grid home differs. Later in the installation procedure, root.sh replaces such files with scripts that can run out of any installation directory.

[root@grid19c bin]# ./asmcmd
/u01/app/19.3.0/grid/bin/kfod: line 22: /u01/app/19.0.0/grid/bin/kfod.bin: No such file or directory
Use of uninitialized value $result[0] in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5982.
Use of uninitialized value $result[0] in split at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5985.
Use of uninitialized value $clus_mode in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5987.
Use of uninitialized value $clus_mode in concatenation (.) or string at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5988.
Use of uninitialized value $clus_mode in string eq at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5993.
...
ASMCMD> exit

Conclusion

Most utilities are designed to run with full functionality only after gridSetup.sh and root.sh have completed. One notable exception is the subset of asmcmd afd subcommands that work earlier, and only with very specific options.

Because of these bugs and limitations of the asmcmd AFD options, it is critical to be aware of them and to get the AFD disk labeling correct the first time.

Grid setup

This section describes the grid setup, with emphasis on the ASM Disk Group screen.

  • Start an X server (such as Xming) on Windows
  • On Linux, sudo to grid and cd to the grid home
  • Start gridSetup.sh
  • At the Configuration Options screen, select Standalone (Oracle Restart).
  • Click Next to go to the Create ASM Disk Group screen.

Here is where you are going to create your first ASM disk group. gridSetup.sh scans the operating system for labeled disks. Labeled disks matching “/dev/sd*” will appear in your display.

[screenshot: cr.asm.dg.azure]

On some AWS systems, you need to search for “/dev/xvd*”. Click “Change Disk Discovery Path”. For example:

[screenshot: ch.dsk.dsc.path]

[screenshot: cr.asm.dg]

You can change the disk group name, allocation unit size, and redundancy. Select the disk or disks that you want in your first disk group. Verify that Configure Oracle ASM Filter Driver is checked. Press Next.

Continuing the grid install

After the Create ASM Disk Group screen, installation can proceed smoothly.

  • Complete the remaining screens
  • When prompted, run root.sh

After root.sh is finished, all utilities in grid $ORACLE_HOME/bin should operate with full functionality.

Summary

Oracle Grid Infrastructure substantially improves manageability. Grid is less complex to install than Oracle RAC, but includes a disk labeling step that demands attention to detail. The Oracle 19c grid installation requires configuration of the ASM Filter Driver prior to running gridSetup.sh. You can use cloud and Linux administrative tools to allocate storage and then use asmcmd to label the disks.

Oracle binaries or patch download from Linux command line

By Brian Fitzgerald

Introduction

Our objective is to download Oracle binaries or patches directly to Linux using the bash shell. For binaries, we will log in to Oracle and accept the license agreement in a Windows browser, and then perform the download in Linux. For patches, we will log in to MOS. The technique is useful for cloud or on-prem.

Linux preparations

Complete the following preparations.

Check your Linux version

$ uname -r
3.10.0-1062.9.1.el7.x86_64

wget download from Oracle may fail on Red Hat Enterprise Linux el5 or lower.

Check your ssl version

$ openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017

wget download from Oracle may fail on OpenSSL version 0.9 or lower.

Install wget

If necessary, install wget:

sudo -i -u root
yum -y install wget

cd to your download area and redirect cat standard output to a file:

[grid@ora19c ~]$ mkdir download
[grid@ora19c ~]$ cd download/
[grid@ora19c download]$ cat > cook.txt

Let the cat session wait there.

Windows preparations

browser

Chrome is used in this example.

cookie plugin

  • Install the EditThisCookie plugin.
  • Right click EditThisCookie and navigate to Options
  • For export format, select “Netscape HTTP Cookie File”

[screenshot: cook]

other browsers

Export methods exist for other browsers. For example, in Firefox, you can use the “Export Cookies” add-on by Rotem Dan. You can also try your browser’s built-in inspection tools.

binaries download page

  • Log in to oracle.com with your ID and password.
  • Navigate to the download page. For example:

https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html

Export cookies

Complete the cookie export and start the download within 30 minutes:

  • From Chrome click the download icon.
  • When prompted, accept the license agreement.
  • Click the large download button.

[screenshot: accept]

  • Interrupt the download.
  • Click the EditThisCookie icon
  • In EditThisCookie, clear the search box,
  • Enter “oracle.com”
  • Using the scrollbar, check that these cookies are present:
domain                  name
.oracle.com             oraclelicense
edelivery.oracle.com    OAMAuthnCookie_edelivery.oracle.com:443

[screenshots: export, oamauth]

  • If OAMAuthnCookie_edelivery.oracle.com:443 is missing, it could be because:
    • you did not start the download
    • you did not search oracle.com
  • Click the export button. Note the confirmation message:

[screenshot: exported]

  • Paste the cookies into your Linux cat session.
  • Type ctrl-d (end of file) into the cat session.
  • Check Linux cookies file cook.txt with egrep:
[grid@ora19c download]$ egrep 'oraclelicense|OAMAuthnCookie_edelivery' cook.txt
.oracle.com     TRUE    /       FALSE   1566398899      oraclelicense   152015
edelivery.oracle.com    FALSE   /       TRUE    0       OAMAuthnCookie_edelivery.oracle.com:443 0ba9acafe65e911747b1cc43daf2bb6269988c0b%7EbOV%2F2lmNMEtBQ0AQibaanasshtknkgx1dWreuPjqkvXEFXKl3n3wj1zmKsG1lkM56uv%2F2cWPSSrH3HRWYEsWaDh2eGTSS8fizlEZF01NQdIF6hEG8rp5qkdvQv3twKXCO%2BpAgxiixhlxb61xlMvNDiLiv8JpDY97yvTSf1QyDxuY7Fjs03Qb0Zaq6i88NPsqkq0gu1C8fEbwEZbgW55YAUVQQdKfOpN4yR2iXenEJfiiKmHTCISZRvPNzgPf0JxPK3fVnvLsAO9HEFBsu%2Fwx0uUsYqe%2FHPJJwkFjBLudqkD8bO84t7HTCED80o%2BrxzcOJ%2BE%2BuvlaqItiidQ7asFuHQanPUS10CeTEXXWzK3h4Lni4Gp29wKtnQ6ADJfPET%2FCnKehcnEsB9lYeQliSvdhWkRD9v3Azr5SYkhEdsDCF4z%2FWXP1dOT6gB9oQVn1352FcJbnA8Zo6ulnXPhKdwJE21SiskmOAelR0GVvA9M3DNU%2FokD1s2Msy293j9goob%2Fpj0IvtsTyuxOdRc1p4gV1XyZbulYn8q1q6Fnu7SpEq7hxiKh6TjY1737uh2w%2FTO3G%2F1QviFlrG%2BK1Ts76KNp45lYwefYA7U29v9W4H4gJ4Cs%3D

Download

In Linux, start the download with wget:

[grid@ora19c download]$ wget --load-cookies=cook.txt 'https://download.oracle.com/otn/linux/oracle19c/190000/LINUX.X64_193000_grid_home.zip'

Downloading patches from MOS

The procedure for downloading patches from My Oracle Support is simpler because you don’t have to acknowledge the license agreement and then start and interrupt the download. However, MOS download links contain the “&” and “?” characters, so you need to quote the URL.

To download a patch from MOS:

  • Log in to MOS
  • open EditThisCookie
  • clear the URL window and enter “oracle.com”
  • click the export button
  • In your shell, issue cat > /tmp/cook.txt
  • Paste the cookies into the cat session
  • Type ctrl-D
  • navigate to the patch download link. For example:

[screenshot: dl.patch]

  • hover your mouse over the download button and right-click
  • select “Copy link address”
  • paste the link into your shell between quotes, such as in this example:
wget --load-cookies=/tmp/cook.txt ''
  • after paste:
 wget --load-cookies /tmp/cook.txt 'https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip'
  • You can specify the output file using the “-O” option, e.g.:
-O p30501926_180000_Linux-x86-64.zip
  • Putting it all together, you get:
wget --load-cookies=/tmp/cook.txt 'https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip' -O p30501926_180000_Linux-x86-64.zip
  • Press enter to start the download

Cookies file check

For MOS downloads, there is only one required cookie, namely Oracle_updates_auth:

grep Oracle_updates_auth  /tmp/cook.txt
.updates.oracle.com     TRUE    /       FALSE   1582699648      Oracle_updates_auth     ZDA1YzU4OThkODUyM2Y3NjgzZWUxYmNhNGQzMDFjN2UtNTM2MTZjNzQ2NTY0NWY1ZmY5ZDVkMDZmZGZkMWQyMzBkY2FmZGI5NDdlOWE3NWVkMWY1NWVjYjYzZmE4ZDc3ZjAwN2JkYjMzNzViYjAyMmZkZjUzNmRkYWE3OWRiYzg4NDIzY2JlY2JkMzcwMjk5OWQyZDNlMWYzYTAzZTFhYjU

Special characters in the URL

You should quote the URL. Otherwise:

  • If ‘&’ appears in the URL, the shell will interpret the string to the right of the ‘&’ as a separate command.
  • If ‘?’ appears in the URL, the shell may interpret it as a metacharacter.

An example of this problem while downloading a Release Update from MOS:

wget --load-cookies /tmp/cook.txt https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip -O p30501926_180000_Linux-x86-64.zip
[1] 13367
--2020-02-25 14:15:12--  https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022
bash: -O: command not found...
Resolving updates.oracle.com (updates.oracle.com)... 141.146.44.51
Connecting to updates.oracle.com (updates.oracle.com)|141.146.44.51|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://aru-akam.oracle.com/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip?FilePath=/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip&File=p30501926_180000_Linux-x86-64.zip&params=UGw0Nnppc2RUZnBGTkx2aUEraURsUTphcnU9MjMyNDIwMjImZW1haWw9YmYyMTkwQG55dS5lZHUmZmlsZV9pZD0xMDY0NzI4MTYmcGF0Y2hfZmlsZT1wMzA1MDE5MjZfMTgwMDAwX0xpbnV4LXg4Ni02NC56aXAmdXNlcmlkPW8tYmYyMTkwQG55dS5lZHUmc2l6ZT0xMTQ4Mjg4MjMmY29udGV4dD1BQDEwK0hAYWFydXZtdHAwNy5vcmFjbGUuY29tK1BAJmRvd25sb2FkX2lkPTQxMjI3Mzc2Mw@@&AuthParam=1582658233_7e9698372721c6852f3ca053d3a8e443 [following]
--2020-02-25 14:15:13--  https://aru-akam.oracle.com/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip?FilePath=/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip&File=p30501926_180000_Linux-x86-64.zip&params=UGw0Nnppc2RUZnBGTkx2aUEraURsUTphcnU9MjMyNDIwMjImZW1haWw9YmYyMTkwQG55dS5lZHUmZmlsZV9pZD0xMDY0NzI4MTYmcGF0Y2hfZmlsZT1wMzA1MDE5MjZfMTgwMDAwX0xpbnV4LXg4Ni02NC56aXAmdXNlcmlkPW8tYmYyMTkwQG55dS5lZHUmc2l6ZT0xMTQ4Mjg4MjMmY29udGV4dD1BQDEwK0hAYWFydXZtdHAwNy5vcmFjbGUuY29tK1BAJmRvd25sb2FkX2lkPTQxMjI3Mzc2Mw@@&AuthParam=1582658233_7e9698372721c6852f3ca053d3a8e443
Resolving aru-akam.oracle.com (aru-akam.oracle.com)... 104.124.100.114
Connecting to aru-akam.oracle.com (aru-akam.oracle.com)|104.124.100.114|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 114828823 (110M) [application/zip]
Saving to: ‘p30501926_180000_Linux-x86-64.zip?aru=23242022’

100%[=========================================================================================================================================================================>] 114,828,823 42.1MB/s   in 2.6s

2020-02-25 14:15:15 (42.1 MB/s) - ‘p30501926_180000_Linux-x86-64.zip?aru=23242022’ saved [114828823/114828823]

Notice that:

  • the shell interpreted
patch_file=p30501926_180000_Linux-x86-64.zip -O

as a command and printed this error:

bash: -O: command not found...
  • The file got saved as ‘p30501926_180000_Linux-x86-64.zip?aru=23242022’

Solution:
Quote the URL.

 wget --load-cookies /tmp/cook.txt 'https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip'

Special characters in the output file name

You may find that wget saves the file to an unexpected name. For example:

ls
p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip

Solution:

Specify the output file using “-O”, as previously explained.

Summary

Using Chrome, you can log in to Oracle and accept the license agreement. Next, using EditThisCookie, you can export the needed cookies. Finally, using wget and the --load-cookies option, you can start your download.

The possible benefit is that the download will run faster on the Linux server than on your Windows host. Also, direct download saves you an additional transfer step from Windows.

ssh to postgres issue solved

by Brian Fitzgerald

Scenario

You have double checked everything, but ssh to postgres does not work.

Conditions

  1. You did a postgres package install
  2. ssh to other accounts works
  3. ssh to postgres using a key does not work
  4. Directory .ssh mode is 700
  5. File authorized_keys mode is 600

Selinux

Check whether selinux is enforcing access controls.

[postgres@test ~]$ getenforce
Enforcing

Note in /var/log/audit/audit.log, “avc: denied…scontext…sshd…tcontext…postgresql_db_t”

type=AVC msg=audit(1562467696.927:316): avc:  denied  { read } for  pid=2282 comm="sshd" name="authorized_keys" dev="sda2" ino=56806 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:postgresql_db_t:s0 tclass=file permissive=0

Note that the inode is 56806. Check ls -i:

[postgres@test ~]$ cd .ssh

[postgres@test .ssh]$ ls -i authorized_keys
56806 authorized_keys

Check the authorized_keys selinux type:

[postgres@test .ssh]$ ls -Z authorized_keys
-rw-------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 authorized_keys

The type is “postgresql_db_t”. In order for ssh to work, the type needs to be “ssh_home_t”.
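The context that ls -Z prints is stored in the security.selinux extended attribute, so you can also read it programmatically. A sketch (the helper name is mine; it returns None where no selinux label is present):

```python
import os

def selinux_type(path):
    """Return the selinux type (third field of the context) or None."""
    try:
        ctx = os.getxattr(path, "security.selinux").decode().rstrip("\x00")
    except OSError:
        return None               # no selinux label, or path missing
    fields = ctx.split(":")
    return fields[2] if len(fields) >= 3 else None

# e.g. selinux_type("/home/postgres/.ssh/authorized_keys")
# should return "ssh_home_t" once the fix below has been applied
```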

Solution

Issue restorecon -R .ssh

[postgres@test ~]$ restorecon -R .ssh
[postgres@test ~]$ ls -RZ .ssh
.ssh:
-rw-------. postgres postgres unconfined_u:object_r:ssh_home_t:s0 authorized_keys

Check:

[postgres@pgstby .ssh]$ ssh test date
Sun Jul  7 02:57:21 UTC 2019

ssh works!

Note that new files created under directory .ssh will inherit the necessary ssh_home_t type.

chcon

If the home directory is an incompatible subtype, then restorecon will not work. In that case, run chcon, e.g.:

[postgres@test ~]$ chcon -R unconfined_u:object_r:ssh_home_t:s0 ~/.ssh

or

[postgres@test ~]$ chcon -R system_u:object_r:usr_t:s0 ~/.ssh

semanage

If restorecon did not work, you can use semanage to configure it to work in the future:

[root@test ~]# semanage fcontext -a -t ssh_home_t ~postgres/'.ssh(/.+)?'

Cause

The Linux system was built from the Azure image Red Hat Enterprise Linux 7.6, which has selinux enabled by default.

The postgres package install (rh-postgresql10) created the user postgres, and then set the selinux type of most files and directories to postgresql_db_t.

The .ssh directory inherited its type from its parent.

Normal home directory

The normal type of a home directory is user_home_dir_t. Subdirectory .ssh gets created as type ssh_home_t:

[normal@test ~]$ ls -dZ
drwx------. normal normal unconfined_u:object_r:user_home_dir_t:s0 .
[normal@test ~]$ mkdir .ssh
[normal@test ~]$ ls -dZ .ssh
drwxrwxr-x. normal normal unconfined_u:object_r:ssh_home_t:s0 .ssh

Affected systems

In this example, the system was Red Hat Linux on Azure, but this can happen on any system running selinux. The issue was noticed after a PostgreSQL package install. The same problem could appear with other package installs that create home directories.

Recommended practice

On selinux-enabled systems, create the .ssh directory this way:

[postgres@test ~]$ mkdir .ssh
[postgres@test ~]$ chmod 700 .ssh
[postgres@test ~]$ restorecon -R .ssh

Gremlin Graph database statements

By Brian Fitzgerald

Introduction

This blog post covers some basic Gremlin data operations on a graph database. The database platforms tested are TinkerPop implementations in Azure Cosmos DB and AWS Neptune.

Initialization

Azure

Create a Cosmos DB account. For API, select “Gremlin (graph)”.

In data explorer, create a new graph.

AWS

Create a Neptune database and an EC2 instance. Set up your security group. Connect via gremlin.sh as described in the manual.

Insert Data

All statements in this blog post run equally well on Azure and AWS, but there are differences in the output display.

Add vertices.

g.addV('person').property('firstName', 'Marko')
g.addV('person').property('firstName', 'Stephen')
g.addV('person').property('firstName', 'Ben')
g.addV('person').property('firstName', 'Mary')

Add edges.

g.V().has('firstName','Marko').addE('knows').to(g.V().has('firstName','Stephen'))
g.V().has('firstName','Marko').addE('knows').to(g.V().has('firstName','Ben'))
g.V().has('firstName','Marko').addE('knows').to(g.V().has('firstName','Mary'))
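
As an aside, the structure these statements build can be sketched with plain Python data structures. This is an illustration of the graph data model only, not the Gremlin API; the function names and generated ids are invented here:

```python
import uuid

vertices = {}   # vertex id -> {'label': ..., 'properties': {...}}
edges = []      # each edge: {'id', 'label', 'outV', 'inV', 'properties'}

def add_vertex(label, **props):
    vid = str(uuid.uuid4())
    vertices[vid] = {'label': label, 'properties': dict(props)}
    return vid

def add_edge(label, out_v, in_v, **props):
    edges.append({'id': str(uuid.uuid4()), 'label': label,
                  'outV': out_v, 'inV': in_v, 'properties': dict(props)})

# Mirror the four addV and three addE statements above
marko = add_vertex('person', firstName='Marko')
for name in ('Stephen', 'Ben', 'Mary'):
    add_edge('knows', marko, add_vertex('person', firstName=name))
```

Each edge records its out-vertex and in-vertex ids, which is exactly what the outV and inV fields in the JSON output later in this post show.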

Update data (set properties)

Set vertex properties

g.V().has('person','firstName','Marko').property('lastName','Rodriguez')

The output in this section is from Azure Cosmos DB Data Explorer:

[
  {
    "id": "7cf848d1-0b87-412c-87fe-267e6f259a86",
    "label": "person",
    "type": "vertex",
    "properties": {
      "firstName": [
        {
          "id": "b45eb2be-a0b7-4c55-a956-85b985b88b35",
          "value": "Marko"
        }
      ],
      "lastName": [
        {
          "id": "a28f96e7-8e17-4be7-8a15-a3a0d3aeba87",
          "value": "Rodriguez"
        }
      ]
    }
  }
]

Notice that each vertex property has an “id” field. In this example, the “lastName” “id” is “a28f96e7-8e17-4be7-8a15-a3a0d3aeba87”. If you update a property, even by setting it to the same value, the “id” changes.

g.V().has('person','firstName','Marko').property('lastName','Rodriguez')
[
  {
    "id": "7cf848d1-0b87-412c-87fe-267e6f259a86",
    "label": "person",
    "type": "vertex",
    "properties": {
      "firstName": [
        {
          "id": "b45eb2be-a0b7-4c55-a956-85b985b88b35",
          "value": "Marko"
        }
      ],
      "lastName": [
        {
          "id": "0d5e4855-4d78-45d9-85c2-de774ca21e87",
          "value": "Rodriguez"
        }
      ]
    }
  }
]

Now the “lastName” “id” is “0d5e4855-4d78-45d9-85c2-de774ca21e87”.

Set edge properties

Add a weight property to each edge.

g.V().has('person','firstName','Marko').outE('knows').as('e').inV().has('firstName','Stephen').select('e').property('weight', 0.055)
g.V().has('person','firstName','Marko').outE('knows').as('e').inV().has('firstName','Ben').select('e').property('weight', 0.075)
g.V().has('person','firstName','Marko').outE('knows').as('e').inV().has('firstName','Mary').select('e').property('weight', 0.075)

Notice that edge properties do not have an “id” attribute:

g.V().has('person','firstName','Marko').outE('knows').as('e').inV().has('firstName','Stephen').select('e')
[
  {
    "id": "a37cdd6d-9e72-44f9-b36a-256f824eba64",
    "label": "knows",
    "type": "edge",
    "inVLabel": "person",
    "outVLabel": "person",
    "inV": "a9067638-e56d-4fd0-9522-c955c4062fa3",
    "outV": "7cf848d1-0b87-412c-87fe-267e6f259a86",
    "properties": {
      "weight": 0.055
    }
  }
]

Search

Search for Stephen

g.V().has('person','firstName','Stephen')

Azure Cosmos DB

Here is what the output looks like in Azure Cosmos DB Data Explorer.

[
  {
    "id": "a9067638-e56d-4fd0-9522-c955c4062fa3",
    "label": "person",
    "type": "vertex",
    "properties": {
      "firstName": [
        {
          "id": "3ac1eb02-6255-4787-a32c-2b61359f127c",
          "value": "Stephen"
        }
      ]
    }
  }
]

AWS

By comparison, here is the Gremlin console display.

gremlin> g.V().has('person','firstName','Stephen')
==>v[32b57f9f-703e-722a-ac4c-2cca662a7bd9]

Only the vertex id is displayed.

Here are some more search queries. Find one edge from Marko with the highest weight.

g.V().has('person','firstName','Marko').outE('knows').order().by('weight', decr).limit(1)

[
  {
    "id": "7c13d3a5-efca-4fc6-8ef4-38924e80c5eb",
    "label": "knows",
    "type": "edge",
    "inVLabel": "person",
    "outVLabel": "person",
    "inV": "1bcbdc44-6b0d-47c6-bbdb-96665b034847",
    "outV": "7cf848d1-0b87-412c-87fe-267e6f259a86",
    "properties": {
      "weight": 0.075
    }
  }
]

Find one vertex adjacent to Marko connected by an edge with the highest weight.

g.V().has('person','firstName','Marko').outE('knows').order().by('weight', decr).limit(1).inV().properties('firstName').value()
[
  {
    "id": "1bcbdc44-6b0d-47c6-bbdb-96665b034847",
    "label": "person",
    "type": "vertex",
    "properties": {
      "firstName": [
        {
          "id": "ebdf5e09-c511-472f-8b14-405fad956146",
          "value": "Ben"
        }
      ]
    }
  }
]

Notice that the vertex “Ben” id “1bcbdc44-6b0d-47c6-bbdb-96665b034847” matches the edge inV.
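
The order-by-weight traversal can be mirrored in plain Python over a list of edges. This is a self-contained illustration with invented ids, not the Gremlin API:

```python
# Edges out of one vertex, with the weights set earlier in this post
# (vertex ids shortened and invented for the illustration).
edges = [
    {'inV': 'stephen-id', 'label': 'knows', 'properties': {'weight': 0.055}},
    {'inV': 'ben-id',     'label': 'knows', 'properties': {'weight': 0.075}},
    {'inV': 'mary-id',    'label': 'knows', 'properties': {'weight': 0.075}},
]

# Mirrors outE('knows').order().by('weight', decr).limit(1).inV()
top = max(edges, key=lambda e: e['properties']['weight'])
print(top['inV'])  # → ben-id
```

Like the Gremlin query, max with tied weights returns one winner; here it is the first of the two 0.075 edges, matching the “Ben” result above.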

Drop vertices

Drop one vertex.

g.V().has('firstName','Mary').drop()

Drop all vertices

g.V().drop()

All edges get dropped as well.

Serverless function learning environments across Amazon, Microsoft, and Google clouds

By Brian Fitzgerald

Introduction

If you want to dip your toe into serverless function programming, you will want to try it out in a simple web-based environment with all the needed scaffolding set up for you. That way, you can at least get to “Hello World!” without delay or error.

Across three cloud providers, Amazon, Microsoft, and Google, online-editor availability varies by language and operating system. Here is a brief summary.

Amazon Web Services

AWS serverless functions (Lambda) are available in seven languages: C#, Go, Java, JavaScript, PowerShell, Python, and Ruby. You can experiment with some simple coding by entering your choice of JavaScript, Python, or Ruby code into the online code editor. If you want to use C#, Go, Java, or PowerShell, you will have to develop and test your files outside Lambda, put them in a zip file, and upload the zip file. The Lambda console also accepts a jar file for upload. A Lambda Java upload needs class and jar files, not Java source files. Also, a jar file can contain bytecode compiled from other languages that run in a JRE, so, for example, you can write a Lambda in Scala or Clojure.

Saving code changes from the AWS Console is quick, usually under one second. Python code is saved without syntax checking. There is one quirk: tabs in source files get copied to the clipboard as spaces. (This refers to the Lambda Management Console in Chrome on Windows.)

You can export your function to get your source files out after you have tested them.

A Python Lambda function can return any data type that is JSON serializable, such as dict, list, tuple, bool, scalars, None, and hierarchies of these, but not, for example, set, date, datetime, class, or object.
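
The JSON-serializability rule is easy to check locally with the standard json module. This is a local illustration only; the handler name and event shape here are arbitrary, not a real Lambda invocation:

```python
import json

def handler(event, context=None):
    # dict, list, bool, str, and None all serialize cleanly
    return {'message': 'Hello World!', 'tags': ['a', 'b'], 'ok': True}

body = json.dumps(handler({}))  # works

try:
    json.dumps({'members': {1, 2}})  # a set is not JSON serializable
except TypeError as exc:
    print('rejected:', exc)
```

Lambda performs the equivalent serialization on your return value, which is why set, datetime, and arbitrary objects fail there too.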

Azure

Azure Functions are offered in five languages: C#, Java, JavaScript, PowerShell, and Python. Azure Functions can be administered online in the Azure portal. Azure offers a choice of Windows or Linux for your function, but online edit is only available for Windows Function Apps. Python runs on Linux only, which rules out online edit. Creating Java functions is supported only by upload. Online edit, therefore, is available for C#, JavaScript, and PowerShell.

An Azure function sits inside a FunctionApp. FunctionApp names must be unique across all Azure. You cannot name your Azure FunctionApp “spam” or “eggs”, and you cannot name your Azure FunctionApp “SpamAndEggs” unless I delete my Azure FunctionApp “SpamAndEggs”.

FunctionApp creation can take more than a minute. The function list displayed in the portal does not refresh automatically when the function is ready, so you could miss the notification. Saving your code from the portal is almost instantaneous. Compile and run takes less than one second. You can zip and download your finished code by pressing Download app content.

Pressing Tab in the online code editor saves space characters, which will be less of a problem here, since you won’t be editing Python source online.

Google

In Google Cloud Platform, you can create a Google Cloud Function. The language choices are Go, JavaScript, and Python, and you can enter all code using the online editor.

When you finish editing, you press “Deploy”, which can run for up to a minute. Syntax errors lead to a failed deployment. While testing the code, you can view it read-only. If you want to make a change, you have to go back to the edit screen. You may download your finished code as a zip file.

The return type of a Google Cloud Function is limited to a string, a tuple, a Response instance, or a WSGI callable.

Summary

Here is a summary of programming languages across cloud providers.

Language     AWS           Azure          Google
C#           upload only   online edit    not available
Go           upload only   not available  online edit
Java         upload only   upload only    not available
JavaScript   online edit   online edit    online edit
PowerShell   upload only   online edit    not available
Python       online edit   upload only    online edit
Ruby         online edit   not available  not available

JavaScript is universally available for learning: you can quickly create a Hello World serverless function using an online editor on any of the three cloud platforms. On the other hand, if you are a hard-core Java programmer, you are going to need to work out how to upload your code, for example from your IDE. If you want to learn C# or PowerShell cloud programming, Azure is the place to be. If you want to explore Go, then go to Google.

Azure PostgreSQL command line create, connect, delete

By Brian Fitzgerald

Introduction

This blog post demonstrates creating, connecting, and deleting an Azure PostgreSQL database using command line tools.

Create

Configure

C:\>az configure --defaults location=eastus

C:\>az group create --name dflgrp

C:\>az configure --defaults group=dflgrp

Unique name

The postgres server name must be unique across Azure, within the public DNS domain postgres.database.azure.com. Common names, like “test”, are already taken. The name must be “available”.

When you enter the name in the Azure Portal, your browser checks availability by making AJAX calls as you type.

This command line script accomplishes the same name check.

Script pg_name_avail.py:

from argparse import ArgumentParser
from azure.common.credentials import get_azure_cli_credentials
from azure.mgmt.rdbms.postgresql.postgre_sql_management_client import PostgreSQLManagementClient

ap = ArgumentParser()
ap.add_argument('--name', '-n')
args = ap.parse_args()

cli = PostgreSQLManagementClient(
    *get_azure_cli_credentials()
)
na = cli.check_name_availability.execute(args.name).as_dict()
print(
    'Name "%s" %s' %
    (
        args.name,
        'is available.' if na['name_available']
        else 'is not available. %s.' % na['message']
    )
)

Examples:

C:\>python pg_name_avail.py -n test
Name "test" is not available. Specified server name is already used.

C:\>python pg_name_avail.py -n azpgsrv
Name "azpgsrv" is available.

Create server

C:\>az postgres server create -n azpgsrv --sku-name B_Gen5_2 --admin-user azpgusr --admin-password azpgpw..212

Output is JSON:

{
  "administratorLogin": "azpgusr",
  "earliestRestoreDate": "2019-05-07T15:56:16.430000+00:00",
  "fullyQualifiedDomainName": "azpgsrv.postgres.database.azure.com",
  "id": "/subscriptions/7fe8a9c3-0812-42e2-9733-3f567308a0d0/resourceGroups/dflgrp/providers/Microsoft.DBforPostgreSQL/servers/azpgsrv",
  "location": "eastus",
  "masterServerId": "",
  "name": "azpgsrv",
  "replicaCapacity": 5,
  "replicationRole": "None",
  "resourceGroup": "dflgrp",
  "sku": {
    "capacity": 2,
    "family": "Gen5",
    "name": "B_Gen5_2",
    "size": null,
    "tier": "Basic"
  },
  "sslEnforcement": "Enabled",
  "storageProfile": {
    "backupRetentionDays": 7,
    "geoRedundantBackup": "Disabled",
    "storageAutoGrow": "Disabled",
    "storageMb": 5120
  },
  "tags": null,
  "type": "Microsoft.DBforPostgreSQL/servers",
  "userVisibleState": "Ready",
  "version": "9.6"
}

DNS Name

Microsoft Azure publishes a public, unique, fully qualified domain name for your server.

C:\>nslookup azpgsrv.postgres.database.azure.com
Server:  Wireless_Broadband_Router.home
Address:  192.168.1.1

Non-authoritative answer:
Name:    cr1.eastus1-a.control.database.windows.net
Address:  191.238.6.43
Aliases:  azpgsrv.postgres.database.azure.com
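
The same lookup can be scripted with the Python standard library. The hostname here is the example server created above, so it resolves only while that server exists:

```python
import socket

host = "azpgsrv.postgres.database.azure.com"  # example server from this post
try:
    # Resolve the FQDN the same way a client library would before connecting
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            host, 5432, proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])
except socket.gaierror as exc:
    print("lookup failed:", exc)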

Connect

Firewall issue

C:\>psql -h azpgsrv.postgres.database.azure.com -p 5432 -U azpgusr@azpgsrv -d postgres
psql: FATAL:  no pg_hba.conf entry for host "100.12.249.90", user "azpgusr", database "postgres", SSL on
FATAL:  SSL connection is required. Please specify SSL options and retry.

C:\>az postgres server firewall-rule create -n fwrule1 -s azpgsrv --start-ip-address 100.12.249.90 --end-ip-address 100.12.249.90
{
  "endIpAddress": "100.12.249.90",
  "id": "/subscriptions/7fe8a9c3-0812-42e2-9733-3f567308a0d0/resourceGroups/dflgrp/providers/Microsoft.DBforPostgreSQL/servers/azpgsrv/firewallRules/fwrule1",
  "name": "fwrule1",
  "resourceGroup": "dflgrp",
  "startIpAddress": "100.12.249.90",
  "type": "Microsoft.DBforPostgreSQL/servers/firewallRules"
}

Code page issue

C:\>psql -h azpgsrv.postgres.database.azure.com -p 5432 -U azpgusr@azpgsrv -d postgres
Password for user azpgusr@azpgsrv:
psql (10.4, server 9.6.12)
WARNING: Console code page (437) differs from Windows code page (1252)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> \q

C:\>chcp 1252
Active code page: 1252

Success

C:\>psql -h azpgsrv.postgres.database.azure.com -p 5432 -U azpgusr@azpgsrv -d postgres
Password for user azpgusr@azpgsrv:
psql (10.4, server 9.6.12)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

Python connection

C:\Users\Brian Fitzgerald\python\blog>type pgcon.py
import psycopg2

conn = psycopg2.connect(
    dbname='postgres',
    user='azpgusr@azpgsrv',
    host='azpgsrv.postgres.database.azure.com',
    password='azpgpw..212',
    port=5432,
    sslmode='require'
)
print('connected')

C:\>python pgcon.py
connected

Delete

C:\>az postgres server firewall-rule delete -y -n fwrule1 -s azpgsrv

C:\>az postgres server delete -y -n azpgsrv