Dual stack listeners on Linux

By Brian Fitzgerald

Question

$ netstat -ntl | grep :1521
tcp6 0 0 :::1521 :::* LISTEN

Q: Does the netstat output shown here mean that the listener accepts only IPv6 connections?

A: No. By default, a Linux listener uses a dual stack socket.

Oracle listener trace

Here is an strace of the Oracle TNS listener socket binding.

32649 socket(AF_INET6, SOCK_STREAM, IPPROTO_IP) = 8
32649 setsockopt(8, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
32649 bind(8, {sa_family=AF_INET6, sin6_port=htons(1521), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
32649 listen(8, 128)

In the bind call, notice that the socket address family is AF_INET6, and the IP address shown is “::”, meaning listen on all IP addresses on the local host. The netstat output looks like this:

[grid@ip-172-32-10-34 ~]$ netstat -ntl | grep :1521
tcp6 0 0 :::1521 :::* LISTEN

However, the listener will accept either IPv4 or IPv6. You can check this by testing IPv4 and IPv6 one at a time:

[ec2-user@ip-172-32-10-34 ~]$ nc -v -4 localhost 1521 < /dev/null
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:1521.
Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.
[ec2-user@ip-172-32-10-34 ~]$ nc -v -6 localhost 1521 < /dev/null
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to ::1:1521.
Ncat: 0 bytes sent, 0 bytes received in 0.03 seconds.

The connect calls were:

connect(3, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
connect(3, {sa_family=AF_INET6, sin6_port=htons(1521), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EINPROGRESS (Operation now in progress)
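
The dual-stack behavior can be sketched in a few lines of Python, assuming a Linux host with the default net.ipv6.bindv6only=0 setting:

```python
import socket

# An AF_INET6 TCP socket, as in the Oracle listener strace above.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# On Linux, IPV6_V6ONLY defaults to 0, so the socket is dual stack.
print(srv.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY))  # 0

srv.bind(("::", 0))              # port 0: let the kernel pick a free port
port = srv.getsockname()[1]
srv.listen(5)

# An IPv4 client connects to the same port ...
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))

# ... and is accepted, arriving as an IPv4-mapped IPv6 address.
conn, peer = srv.accept()
print(peer[0])                   # ::ffff:127.0.0.1
```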

You can compare the Oracle listener, which uses a single dual-stack socket, to other listeners that use separate sockets.

Linux sshd trace

By comparison, here is a trace of the sshd listener socket bindings.

1142 socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
1142 fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
1142 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
1142 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
1142 bind(3, {sa_family=AF_INET, sin_port=htons(22), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
1142 listen(3, 128) = 0
...
1142 socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP) = 4
1142 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR)
1142 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
1142 setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
1142 setsockopt(4, SOL_IPV6, IPV6_V6ONLY, [1], 4) = 0
1142 bind(4, {sa_family=AF_INET6, sin6_port=htons(22), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
1142 listen(4, 128)

Notice that for socket 3, the address family is AF_INET and the listener IP address is given as “0.0.0.0”, again meaning listen on all IP addresses. Examining the socket 4 trace carefully, we see that before the bind call, the socket option IPV6_V6ONLY is set. The netstat output looks like this:

[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN

netstat displays one output line per socket. sshd has two listener sockets, one for IPv4 and a separate IPv6 socket.
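
This two-socket pattern can be sketched in Python. Setting IPV6_V6ONLY before bind is what lets both sockets bind the same port, one per protocol (a sketch, not sshd's actual code):

```python
import socket

# IPv4 socket bound to 0.0.0.0, as in the sshd strace above.
s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s4.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s4.bind(("0.0.0.0", 0))          # port 0: let the kernel pick a free port
port = s4.getsockname()[1]
s4.listen(128)

# IPv6 socket: setting IPV6_V6ONLY before bind makes it IPv6-only,
# so binding the same port does not collide with the IPv4 socket.
s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s6.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
s6.bind(("::", port))
s6.listen(128)

print("IPv4 and IPv6 listeners on port", port)
```

Without the IPV6_V6ONLY call, the second bind would fail with EADDRINUSE, because a dual-stack "::" socket would claim the IPv4 port as well.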

IPv6-only listener

You can demonstrate an IPv6-only listener:

[ec2-user@ip-172-32-10-34 ~]$ nc -6 -l 6666
[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :6666
tcp6       0      0 :::6666                 :::*                    LISTEN

An IPv4 connection fails:

[ec2-user@ip-172-32-10-34 ~]$ nc -4 localhost 6666
Ncat: Connection refused.

The nc utility makes two connection attempts:

socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
fcntl(3, F_GETFL)                       = 0x2 (flags O_RDWR)
fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK)    = 0
connect(3, {sa_family=AF_INET, sin_port=htons(6666), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
...
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
fcntl(4, F_GETFL)                       = 0x2 (flags O_RDWR)
fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK)    = 0
connect(4, {sa_family=AF_INET, sin_port=htons(6666), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)

IPv4-only listener

Likewise, you can demonstrate an IPv4-only listener.

[ec2-user@ip-172-32-10-34 ~]$ nc -4 -l 4444

Netstat:

[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :4444
tcp 0 0 0.0.0.0:4444 0.0.0.0:* LISTEN

IPv6 connection fails:

[ec2-user@ip-172-32-10-34 ~]$ nc -6 localhost 4444
Ncat: Connection refused.

The connect call:

connect(3, {sa_family=AF_INET6, sin6_port=htons(4444), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EINPROGRESS (Operation now in progress)

Conclusion

In the netstat output,

[ec2-user@ip-172-32-10-34 ~]$ netstat -ntl | grep :1521
tcp6 0 0 :::1521 :::* LISTEN

The lack of a line such as

tcp 0 0 0.0.0.0:1521 0.0.0.0:* LISTEN

does not mean that the listener does not accept IPv4 connections. It could mean that the listener implements a dual stack socket.

Grid 19c install with ASM filter driver

Introduction

Starting in Oracle 12.2, the grid installation experience changed in two major ways:

  1. In the past, you unzipped the binaries into a download folder and executed runInstaller. Oracle Universal Installer (OUI) copied the binaries to your grid oracle home. In the new procedure, you unzip the grid binaries into the grid home and then run gridSetup.sh to configure the software in-place.
  2. Instead of configuring asmlib, you are going to initialize the disks using ASM Filter Driver (AFD).

These changes raise new organizational issues and introduce a new, critical step.

Scope

The scope of this article is:

  • Installation to cloud virtual machine (Azure or AWS EC2)
  • Oracle Restart (Not RAC)
  • x86_64 hardware
  • Linux 7
  • udev is not considered
  • New install, not an upgrade
  • No preexisting asmlib

Grid infrastructure in the cloud

RAC

The main purpose of Oracle Grid Infrastructure was to support Real Application Clusters (RAC). RAC requires sharing storage volumes across hosts, which most cloud providers disallow. An iSCSI server can be used to share drives, but at the cost of added complexity. RAC deployments are, therefore, less common in the cloud.

ASM

Oracle Grid Infrastructure is extremely useful, even without RAC. Automatic Storage Management (ASM) is a main component of grid. Some benefits of ASM are:

  • Convenient, consolidated, managed storage
  • Efficient RMAN and Data Guard administration
  • Monitoring and managing space in ASM using Oracle Enterprise Manager

Restart

An Oracle standalone (non-RAC) database running on grid infrastructure is known as a “restart”. You use grid to manage Oracle startup and shutdown, and to monitor the database instance. Oracle Restart works very well in the cloud.

Organizational issues

Previously, during the operating system setup, asmlib could be configured along with other root steps, prior to grid installation. However, AFD labeling must wait until grid installation has begun. In an organization with segregation of SA and DBA duties, the DBA is going to need to schedule three separate tasks for the SA.

  1. Set up the operating system
  2. Label ASM disks using AFD
  3. Run root.sh during the grid setup

Instead of personal handoffs, you can do one of these:

  1. Have one person with root and grid access perform all steps.
  2. Use an automated deployment tool.
  3. Run a script as root with some steps su’d to grid.

This blog article covers manual command line steps as root or grid, and use of the gridSetup.sh X windows GUI.

Steps leading up to AFD

The steps leading up to ASM disk labeling are, briefly:

  • Select a compatible machine image (Linux on x86_64)
  • Create the OS (ex: 2 CPU, 8G RAM, 32G operating system disk)
  • Update and install packages
  • Configure the operating system (swap, sysctl.conf)
  • Create the grid user and directories
  • Download and unzip the software

Now you are ready to configure your ASM disks.

Disk allocation

In the cloud, you can allocate new disks from the web console. You can also use a command line tool. Finally, you can write a script to do the job, in Python for example. You can select from among the available types (hard drive or SSD), you can specify the size, and in some cases, you can specify the IOPS.

You can attach a disk to a live system, and you can detach a disk from a live system. Before allocating new disks, list the existing disks:

[root@grid19c ~]# df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 33025276 12824288 20200988 39% /
[root@grid19c ~]# swapon
NAME TYPE SIZE USED PRIO
/mnt/resource/swapfile file 2G 0B -2
[root@grid19c ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 0 Aug 23 12:40 /dev/sda
brw-rw----. 1 root disk 8, 1 Aug 23 12:40 /dev/sda1
brw-rw----. 1 root disk 8, 2 Aug 23 12:40 /dev/sda2
brw-rw----. 1 root disk 8, 16 Aug 23 12:40 /dev/sdb
brw-rw----. 1 root disk 8, 17 Aug 23 12:40 /dev/sdb1

AWS

We are mainly interested in Linux systems that are available in the AWS Marketplace. The virtualization type in the AWS Marketplace right now is predominantly Hardware Virtual Machine (HVM). In the systems that I have checked, the possible device names are /dev/xvd[a-z], or only 26 devices, which is not very flexible. The allowable device size range is 1 GiB to 16384 GiB. The largest SSD devices are burstable up to 64000 IOPS. In AWS, an attached device can be resized.

[Screenshot: aws.disks]

AWS lets you choose the Linux device node name. You can retrieve the node name from the console, the command line, or the API. For example:

>>> from boto3 import resource
>>> resource('ec2').Volume('vol-0679cfe2a209db2ed').attachments[0]['Device']
'/dev/sdg'

Azure

In Azure, disks range in size from 32 GiB to 32767 GiB. A disk can be resized, and the type (HDD or SSD) may be modified only when the disk is unattached. The largest premium SSD offers IOPS up to 20000.

[Screenshot: azure.disks]

Azure does not identify the new disk names for you. I have not found a way to retrieve the operating system node name from the Azure cloud. The closest I can get is to display the attachment status.

>>> from azure.common.credentials import get_azure_cli_credentials
>>> from azure.mgmt.compute import ComputeManagementClient
>>> cli = ComputeManagementClient(
... *get_azure_cli_credentials()
... )
>>> cli.disks.get(
... resource_group_name='ora',
... disk_name='asm-0'
... ).disk_state
'Attached'

This seems like a gap to me. You want a certain method to distinguish new disks from existing disks, and you want a positive method to identify each disk.

Identify the new disks

Example:

brw-rw----. 1 root disk 8, 32 Aug 23 12:40 /dev/sdc
brw-rw----. 1 root disk 8, 48 Aug 23 12:40 /dev/sdd
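
One way to spot the new device nodes is to snapshot the block devices before attaching the disks and again afterward, then diff. A hedged sketch in Python, reading device sizes from sysfs (the function name is mine):

```python
import os

def block_devices(sysfs="/sys/block"):
    """Return (name, size in GiB) for each block device listed in sysfs."""
    devs = []
    for dev in sorted(os.listdir(sysfs)):
        with open(os.path.join(sysfs, dev, "size")) as f:
            sectors = int(f.read())          # size is in 512-byte sectors
        devs.append((dev, sectors * 512 / 2**30))
    return devs

# Snapshot before attaching the new cloud disks, again after, and compare.
if os.path.isdir("/sys/block"):
    print(block_devices())
```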

AFD label

Prior to starting the grid install, you need to label the disk or disks that you intend to use for the first ASM disk group. We will use asmcmd afd_label. This is a critical step.

afd_label

To label your disks, sudo to root, and execute

asmcmd afd_label label devicenode --init

For example:

[root@grid19c ~]# export ORACLE_HOME=/u01/app/19.3.0/grid
[root@grid19c ~]# export ORACLE_BASE=/tmp
[root@grid19c ~]# cd /u01/app/19.3.0/grid/bin
[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
  • ORACLE_BASE=/tmp avoids creating root-owned files under the grid oracle base.
  • Exercise care to follow this instruction exactly because an incorrect invocation may not print an error message at all.

separation of duties

If root steps are not done by the DBA, then the DBA should prepare a script for the AFD labeling. For example:

#!/bin/bash

export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_BASE=/tmp
cd /u01/app/19.3.0/grid/bin
./asmcmd afd_label DATA01_00001 /dev/sdc --init

afd_lslbl

If your disk discovery string is the default value ‘/dev/sd*’, as in Azure, you will be able to list your ASM disks with this simple command:

[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

On AWS, your disk string might be ‘/dev/xvd*’, so this might not work:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

In that case, you need to specify the disk string on the command line:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl '/dev/xvd*'
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After you run root.sh, you do not need to specify the disk discovery string.

[root@ip-172-31-25-179 bin]# ./asmcmd afd_lslbl
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After root.sh, afd_lslbl gets its disk string from file /etc/oracleafd.conf:

# cat /etc/oracleafd.conf
afd_diskstring='/dev/xvd*'

afd_unlabel

You can erase the AFD label with:

[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

Bugs and limitations

the afd_label --init option

If you leave out the --init option, afd_label might not write a label to your disk, but it will also print no error message, and will return a success code.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

You can use dd to verify that the disk is empty

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 00000000
*
0020000
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.032763 s, 25.0 MB/s

the afd_unlabel --init option

Likewise, if you run afd_unlabel without the --init option, asmcmd may silently fail to unlabel your disk.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc
[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

You can also use dd to view the label

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 1f75bc96
0000020 00000000 00000000 00000000 00000000
0000040 4c43524f 4b534944 41544144 305f3130
0000060 31303030 00000000 00000000 00000000
0000100 00000000 00000000 00000000 00000000
*
0000440 00000000 00000000 0500000a 5d5ee7d3
0000460 00000000 00000000 00000000 00000000
*
0020000
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied
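
The label is plain ASCII inside the disk header. Since od -X prints each 32-bit word little endian, you can recover the text by reversing the bytes in each group (a quick sketch against the dump above):

```python
# The five words from the od dump that hold the label text.
words = ["4c43524f", "4b534944", "41544144", "305f3130", "31303030"]

# od -X prints each 32-bit word little endian, so reverse each group
# of four bytes before decoding.
label = b"".join(bytes.fromhex(w)[::-1] for w in words).decode("ascii")
print(label)  # ORCLDISKDATA01_00001
```

The header starts with the magic string ORCLDISK, followed by the label DATA01_00001 written by afd_label.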

other utilities

In this context, other afd commands might run, produce no output, do nothing, and exit with a success code.

[root@grid19c bin]# ./asmcmd afd_state
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_configure
[root@grid19c bin]# echo $?
0

At this stage of the grid installation, only the afd_label command is needed. You can also run afd_unlabel and afd_lslbl. If you experiment with other commands or options, you may end up in a state that is difficult to diagnose.

grid installation directory

Oracle grid can be installed in virtually any directory. As shipped, however, the path "/u01/app/19.0.0/grid" is hard-coded into scripts such as kfod. You cannot run kfod and other such scripts at this early stage. Later in the installation procedure, root.sh replaces such files with scripts that can run from any installation directory.

[root@grid19c bin]# ./asmcmd
/u01/app/19.3.0/grid/bin/kfod: line 22: /u01/app/19.0.0/grid/bin/kfod.bin: No such file or directory
Use of uninitialized value $result[0] in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5982.
Use of uninitialized value $result[0] in split at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5985.
Use of uninitialized value $clus_mode in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5987.
Use of uninitialized value $clus_mode in concatenation (.) or string at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5988.
Use of uninitialized value $clus_mode in string eq at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5993.
...
ASMCMD> exit

conclusion

Most utilities are designed to run with full functionality only after gridSetup.sh and root.sh have completed. One notable exception is the subset of asmcmd afd subcommands run with very specific options.

Because of these bugs and limitations, it is critical to get the AFD disk labeling correct the first time.

Grid setup

This section describes the grid setup, with emphasis on the Create ASM Disk Group screen.

  • Start the X server (Xming) on Windows.
  • On Linux, sudo to grid and cd to the grid home.
  • Start gridSetup.sh
  • At the Configuration Options screen, select Standalone (Oracle Restart).
  • Click Next to go to the Create ASM Disk Group screen.

Here is where you are going to create your first ASM disk group. gridSetup.sh scans the operating system for labeled disks. Labeled disks matching “/dev/sd*” will appear in your display.

[Screenshot: cr.asm.dg.azure]

On some AWS systems, you need to search for “/dev/xvd*”. Click “Change Disk Discovery Path”. For example:

[Screenshot: ch.dsk.dsc.path]

[Screenshot: cr.asm.dg]

You can change the disk group name, allocation unit size, and redundancy. Select the disk or disks that you want in your first disk group. Check that Configure Oracle ASM Filter Driver is checked. Press Next.

Continuing the grid install

After the Create ASM Disk Group screen, installation can proceed smoothly:

  • Complete the remaining screens
  • When prompted, run root.sh

After root.sh is finished, all utilities in grid $ORACLE_HOME/bin should operate with full functionality.

Summary

Oracle Grid Infrastructure substantially improves manageability. Grid is less complex to install than Oracle RAC, but requires a disk installation step which requires attention to detail. The Oracle 19c grid installation requires configuration of the ASM Filter Driver prior to running gridSetup.sh. You can use cloud and Linux administrative tools to allocate storage and then use asmcmd to label the disks.

Oracle binaries or patch download from Linux command line

By Brian Fitzgerald

Introduction

Our objective is to download Oracle binaries or patches directly to Linux using the bash shell. For binaries, we will log in to Oracle and accept the license agreement in a Windows browser, and then perform the download in Linux. For patches, we will log in to MOS. The technique is useful for cloud or on-prem.

Linux preparations

Complete the following preparations.

Check your Linux version

$ uname -r
3.10.0-1062.9.1.el7.x86_64

wget download from Oracle may fail on Red Hat Enterprise Linux el5 or lower.

Check your ssl version

$ openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017

wget download from Oracle may fail on OpenSSL version 0.9 or lower.

Install wget

If necessary, install wget

sudo -i -u root
yum -y install wget

cd to your download area, and redirect cat standard output to a file:

[grid@ora19c ~]$ mkdir download
[grid@ora19c ~]$ cd download/
[grid@ora19c download]$ cat > cook.txt

Let the cat session wait there.

Windows preparations

browser

Chrome is used in this example.

cookie plugin

  • Install the EditThisCookie plugin.
  • Right click EditThisCookie and navigate to Options
  • For export format, select “Netscape HTTP Cookie File”

[Screenshot: cook]

other browsers

Export methods exist for other browsers. For example, in Firefox, you may use “Export Cookies” by Rotem Dan. Also, you may try your browser’s built-in inspection tools.

binaries download page

  • Log in to oracle.com with your ID and password.
  • Navigate to the download page. For example:

https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html

Export cookies

Complete the cookie export and start the download within 30 minutes:

  • From Chrome click the download icon.
  • When prompted, accept the license agreement.
  • Click the large download button.

[Screenshot: accept]

  • Interrupt the download.
  • Click the EditThisCookie icon
  • In the EditThisCookie, clear the search box,
  • Enter “oracle.com”
  • Using the scrollbar, check that these cookies are present:
domain                 name
.oracle.com            oraclelicense
edelivery.oracle.com   OAMAuthnCookie_edelivery.oracle.com:443

[Screenshots: export, oamauth]

  • If OAMAuthnCookie_edelivery.oracle.com:443 is missing, it could be because:
    • you did not start the download
    • you did not search for “oracle.com”
  • Click the export button. Note the confirmation message:

[Screenshot: exported]

  • Paste the cookies into your Linux cat session.
  • Type ctrl-d (end of file) into the cat session.
  • Check Linux cookies file cook.txt with egrep:
[grid@ora19c download]$ egrep 'oraclelicense|OAMAuthnCookie_edelivery' cook.txt
.oracle.com     TRUE    /       FALSE   1566398899      oraclelicense   152015
edelivery.oracle.com    FALSE   /       TRUE    0       OAMAuthnCookie_edelivery.oracle.com:443 0ba9acafe65e911747b1cc43daf2bb6269988c0b%7EbOV%2F2lmNMEtBQ0AQibaanasshtknkgx1dWreuPjqkvXEFXKl3n3wj1zmKsG1lkM56uv%2F2cWPSSrH3HRWYEsWaDh2eGTSS8fizlEZF01NQdIF6hEG8rp5qkdvQv3twKXCO%2BpAgxiixhlxb61xlMvNDiLiv8JpDY97yvTSf1QyDxuY7Fjs03Qb0Zaq6i88NPsqkq0gu1C8fEbwEZbgW55YAUVQQdKfOpN4yR2iXenEJfiiKmHTCISZRvPNzgPf0JxPK3fVnvLsAO9HEFBsu%2Fwx0uUsYqe%2FHPJJwkFjBLudqkD8bO84t7HTCED80o%2BrxzcOJ%2BE%2BuvlaqItiidQ7asFuHQanPUS10CeTEXXWzK3h4Lni4Gp29wKtnQ6ADJfPET%2FCnKehcnEsB9lYeQliSvdhWkRD9v3Azr5SYkhEdsDCF4z%2FWXP1dOT6gB9oQVn1352FcJbnA8Zo6ulnXPhKdwJE21SiskmOAelR0GVvA9M3DNU%2FokD1s2Msy293j9goob%2Fpj0IvtsTyuxOdRc1p4gV1XyZbulYn8q1q6Fnu7SpEq7hxiKh6TjY1737uh2w%2FTO3G%2F1QviFlrG%2BK1Ts76KNp45lYwefYA7U29v9W4H4gJ4Cs%3D
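
Python's http.cookiejar module can parse Netscape-format cookie files, so you can also script this check. In this sketch, a one-cookie sample file with a dummy value stands in for the real cook.txt:

```python
from http.cookiejar import MozillaCookieJar

# Sample Netscape-format cookie file; the value is a dummy placeholder.
# The fields are: domain, domain_specified, path, secure, expires,
# name, value, separated by tabs.
sample = (
    "# Netscape HTTP Cookie File\n"
    ".oracle.com\tTRUE\t/\tFALSE\t1566398899\toraclelicense\t152015\n"
)
with open("cook.txt", "w") as f:
    f.write(sample)

# Load the file and list the cookie names it contains.
jar = MozillaCookieJar("cook.txt")
jar.load(ignore_discard=True, ignore_expires=True)
print({c.name for c in jar})  # {'oraclelicense'}
```

Against your real cook.txt, you would check that both oraclelicense and OAMAuthnCookie_edelivery.oracle.com:443 appear in the set.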

Download

In Linux, start the download with wget:

[grid@ora19c download]$ wget --load-cookies=cook.txt 'https://download.oracle.com/otn/linux/oracle19c/190000/LINUX.X64_193000_grid_home.zip'

Downloading patches from MOS

The procedure for downloading patches from My Oracle Support is simpler because you don’t have to acknowledge the license agreement and then start and interrupt the download. However, MOS download links contain the “&” and “?” characters, so you need to quote the URL.

To download a patch from MOS:

  • Log in to MOS
  • open EditThisCookie
  • clear the URL window and enter “oracle.com”
  • click the export button
  • In your shell, issue cat > /tmp/cook.txt
  • Paste the cookies into the cat session
  • Type ctrl-D
  • navigate to the patch download link. For example:

[Screenshot: dl.patch]

  • hover your mouse over the download button and right-click
  • select “Copy link address”
  • paste the link into your shell between quotes, such as in this example:
wget --load-cookies=/tmp/cook.txt ''
  • after paste:
 wget --load-cookies /tmp/cook.txt 'https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip'
  • You can specify the output file using the “-O” option, for example:
-O p30501926_180000_Linux-x86-64.zip
  • Putting it all together, you get:
wget --load-cookies=/tmp/cook.txt 'https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip' -O p30501926_180000_Linux-x86-64.zip
  • Press enter to start the download

Cookies file check

For MOS downloads, there is only one required cookie, namely Oracle_updates_auth:

grep Oracle_updates_auth  /tmp/cook.txt
.updates.oracle.com     TRUE    /       FALSE   1582699648      Oracle_updates_auth     ZDA1YzU4OThkODUyM2Y3NjgzZWUxYmNhNGQzMDFjN2UtNTM2MTZjNzQ2NTY0NWY1ZmY5ZDVkMDZmZGZkMWQyMzBkY2FmZGI5NDdlOWE3NWVkMWY1NWVjYjYzZmE4ZDc3ZjAwN2JkYjMzNzViYjAyMmZkZjUzNmRkYWE3OWRiYzg4NDIzY2JlY2JkMzcwMjk5OWQyZDNlMWYzYTAzZTFhYjU

Special characters in the URL

You should quote the URL. Otherwise:

  • If ‘&’ appears in an unquoted URL, the shell puts the command so far into the background and interprets the string to the right of the ‘&’ as a separate command.
  • If ‘?’ appears in the URL, the shell may interpret it as a metacharacter.

An example of this problem while downloading a Release Update from MOS:

wget --load-cookies /tmp/cook.txt https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip -O p30501926_180000_Linux-x86-64.zip
[1] 13367
--2020-02-25 14:15:12--  https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022
bash: -O: command not found...
Resolving updates.oracle.com (updates.oracle.com)... oracle:/ora_local/download/oracle/PSU_PATCHES
=>141.146.44.51
Connecting to updates.oracle.com (updates.oracle.com)|141.146.44.51|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://aru-akam.oracle.com/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip?FilePath=/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip&File=p30501926_180000_Linux-x86-64.zip&params=UGw0Nnppc2RUZnBGTkx2aUEraURsUTphcnU9MjMyNDIwMjImZW1haWw9YmYyMTkwQG55dS5lZHUmZmlsZV9pZD0xMDY0NzI4MTYmcGF0Y2hfZmlsZT1wMzA1MDE5MjZfMTgwMDAwX0xpbnV4LXg4Ni02NC56aXAmdXNlcmlkPW8tYmYyMTkwQG55dS5lZHUmc2l6ZT0xMTQ4Mjg4MjMmY29udGV4dD1BQDEwK0hAYWFydXZtdHAwNy5vcmFjbGUuY29tK1BAJmRvd25sb2FkX2lkPTQxMjI3Mzc2Mw@@&AuthParam=1582658233_7e9698372721c6852f3ca053d3a8e443 [following]
--2020-02-25 14:15:13--  https://aru-akam.oracle.com/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip?FilePath=/adcarurepos/vol/patch14/PLATFORM/CORE/Linux-x86-64/R600000000071008/p30501926_180000_Linux-x86-64.zip&File=p30501926_180000_Linux-x86-64.zip&params=UGw0Nnppc2RUZnBGTkx2aUEraURsUTphcnU9MjMyNDIwMjImZW1haWw9YmYyMTkwQG55dS5lZHUmZmlsZV9pZD0xMDY0NzI4MTYmcGF0Y2hfZmlsZT1wMzA1MDE5MjZfMTgwMDAwX0xpbnV4LXg4Ni02NC56aXAmdXNlcmlkPW8tYmYyMTkwQG55dS5lZHUmc2l6ZT0xMTQ4Mjg4MjMmY29udGV4dD1BQDEwK0hAYWFydXZtdHAwNy5vcmFjbGUuY29tK1BAJmRvd25sb2FkX2lkPTQxMjI3Mzc2Mw@@&AuthParam=1582658233_7e9698372721c6852f3ca053d3a8e443
Resolving aru-akam.oracle.com (aru-akam.oracle.com)... 104.124.100.114
Connecting to aru-akam.oracle.com (aru-akam.oracle.com)|104.124.100.114|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 114828823 (110M) [application/zip]
Saving to: ‘p30501926_180000_Linux-x86-64.zip?aru=23242022’

100%[=========================================================================================================================================================================>] 114,828,823 42.1MB/s   in 2.6s

2020-02-25 14:15:15 (42.1 MB/s) - ‘p30501926_180000_Linux-x86-64.zip?aru=23242022’ saved [114828823/114828823]

Notice that:

  • the shell interpreted
patch_file=p30501926_180000_Linux-x86-64.zip -O

as a command and printed this error:

bash: -O: command not found...
  • The file got saved as ‘p30501926_180000_Linux-x86-64.zip?aru=23242022’

Solution:
Quote the URL.

 wget --load-cookies /tmp/cook.txt 'https://updates.oracle.com/Orion/Services/download/p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip'
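
If you build the wget command from a script, Python's shlex.quote adds the single quotes for you whenever the string contains shell metacharacters:

```python
from shlex import quote

# A MOS-style URL containing '&' and '?', taken from the example above.
url = ("https://updates.oracle.com/Orion/Services/download/"
       "p30501926_180000_Linux-x86-64.zip"
       "?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip")

# quote() wraps the string in single quotes, so the shell passes the
# '&' and '?' through to wget untouched.
cmd = "wget --load-cookies /tmp/cook.txt " + quote(url)
print(cmd)
```

Strings with no unsafe characters are returned unchanged, so quote() is safe to apply unconditionally.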

Special characters in the output file name

You may find that wget saves the file to an unexpected name. For example:

ls
p30501926_180000_Linux-x86-64.zip?aru=23242022&patch_file=p30501926_180000_Linux-x86-64.zip

Solution:

Specify the output file using “-O”, as previously explained.

Summary

Using Chrome, you can log in to Oracle and accept the license agreement. Next, using EditThisCookie, you can export the needed cookies. Finally, using wget and the --load-cookies option, you can start your download.

The possible benefit is that the download will run faster on the Linux server than on your Windows host. Also, direct download saves you an additional transfer step from Windows.

Install Oracle on Red Hat 8 attempt

Introduction

Here are a few issues you can run across if you attempt to install Oracle 19c on Red Hat Linux 8.

Issues

No python

python -V
-bash: python: command not found

Solution:

yum -y install python3
alternatives --set python /usr/bin/python3

No xorg-x11-apps available

# yum install xorg-x11-apps
No match for argument: xorg-x11-apps

Solution:

Subscribe to Red Hat

.vimrc not read

When you start up vim, the settings in your .vimrc do not get read.

Solution:

mv .vimrc .virc

libnsl missing

/u01/app/19.0.0/grid/perl/bin/perl: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory

Solution:

yum -y install libnsl

strings missing

strings < /dev/null

-bash: /usr/bin/strings: No such file or directory

Solution:

yum -y install binutils

make missing

# make
-bash: make: command not found

Solution: install make

# yum install make

Fail supported OS check

grid install

./gridSetup.sh

[WARNING] [INS-08101] Unexpected error while executing the action at state: ‘supportedOSCheck’

[Screenshot: supportedos]

database install

[Screenshot: supportedosdb]

strace shows that the underlying command was:

/bin/rpm -q --whatprovides --qf %{V} redhat-release

Return:

8.0

Red Hat 8.0 is not supported. If you click “yes”, OUI does not proceed; it just waits at a blank screen until you exit.

Stopping here.

Conclusion

Red Hat 8 was released May 7, 2019. Oracle Grid 19c was developed and tested before Red Hat 8 was available. The Oracle 19c Grid installation manual does not mention Red Hat 8 as an allowable operating system. One should expect that an Oracle Grid 19c installation on Red Hat 8 will be troublesome and not supportable.

X windows for Oracle DBAs

By Brian Fitzgerald

Introduction

This is for DBAs who want to use Oracle X-windows based tools such as OUI or DBCA. The techniques described here can lead to a smooth installation experience. Beyond being just a cookbook, this blog article also demonstrates the principles behind X forwarding. Red Hat Linux on AWS EC2 is presented here as an example platform.

In a hurry? Skip to the tl;dr at the end.

X-windows

The X windows system consists of a server, a display, and clients. The X server accepts requests for graphical output from the clients and renders those requests on the display.  In older usage, the server and the client processes usually ran on the same host, a desktop Unix system. In the present context, the X server runs on your Windows PC and displays to your screen. The X server usually listens on TCP port 6000.

In X-windows lingo, applications and utilities are called clients. In the present context, clients run on the Oracle host system, an instance of AWS EC2. Examples of clients include xterm and Oracle Universal Installer (OUI), which run on separate server hardware. Our objective, therefore, is to have an Oracle installation tool, such as OUI, connect back to the X server on your PC so that you can view and control the graphical user interface (GUI). Paradoxically, the X server does not run on “server” hardware, but on your Windows PC. This table should clear up any confusion.

Description                    X Server              X Client
Hardware                       PC                    server
Operating system               Windows               Linux
Has display monitor            yes                   no
X server port (default 6000)   binds, listens,       connects to
                               accepts connections
Software                       Xming                 xterm, dbca, etc.

The client application connects to the X server via sshd on Linux and via PuTTY on your PC.

AWS EC2

Red Hat Linux on Amazon EC2 is presented as an example platform. AWS EC2 instances are built from an available Amazon Machine Image (AMI). For an Oracle Database installation, be sure to choose an AMI with a supported processor type, such as Intel x86_64 (not ARM), and a supported operating system, such as Red Hat (not Ubuntu or Amazon Linux).

The final step in EC2 launch is to select an existing key pair or create and download a new key pair. A key pair file from AWS has extension .pem.

Linux packages

Linux package installs are required before you can run X clients. In addition, it is helpful to install basic troubleshooting tools.

After you launch a new EC2 instance, run yum update.

# yum -y update

Running yum update may get you a few new tools, such as lsipc and lsmem.

Depending on the AMI, you may find that basic Linux tools are missing. Installing those tools makes troubleshooting easier. For example:

# yum -y install nc lsof unzip strace traceroute

You want to set up a working X windows environment, but before you do, notice three things from the login shell:

[ec2-user@ip-172-31-89-75 ~]$ echo $DISPLAY

[ec2-user@ip-172-31-89-75 ~]$ netstat -ntl | grep :60..
  1. The DISPLAY environment variable is not set
  2. There is no listener in the 6000 range
  3. No file .Xauthority is present

Now install the X windows packages.

# yum -y install motif xorg-x11-utils xterm xauth

xterm here is for testing X windows.

In /etc/ssh/sshd_config, check:

X11Forwarding yes
X11UseLocalhost no

If necessary, edit /etc/ssh/sshd_config and then issue:

service sshd restart

After installing xauth, log in with a new PuTTY session. On the first login, the following output will appear:

/usr/bin/xauth: file /home/ec2-user/.Xauthority does not exist

Now recheck:

[ec2-user@ip-172-31-89-75 ~]$ echo $DISPLAY
localhost:10.0
[ec2-user@ip-172-31-89-75 ~]$ netstat -ntl | grep :60..
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN
tcp6       0      0 ::1:6010                :::*                    LISTEN
  1. The DISPLAY environment variable is set
  2. There is a new listener in the 6000 range.
  3. A new .Xauthority file exists

The “10” in DISPLAY refers to the offset from 6000. When you set your DISPLAY to localhost:10.0, all X clients will connect to port 6010 by default.
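The mapping from DISPLAY to TCP port can be sketched with shell parameter expansion, assuming the usual host:display.screen form:

```shell
# Derive the forwarded TCP port from a DISPLAY value (sketch).
DISPLAY=localhost:10.0
disp=${DISPLAY#*:}        # drop the host part  -> "10.0"
disp=${disp%%.*}          # drop the screen     -> "10"
port=$((6000 + disp))     # X port = 6000 + display number
echo "$port"              # -> 6010
```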

Notice which process is listening:

[root@ip-172-31-88-44 ~]# lsof -Pi :6010
COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd    1351 ec2-user    9u  IPv6  20649      0t0  TCP localhost:6010 (LISTEN)
sshd    1351 ec2-user   10u  IPv4  20650      0t0  TCP localhost:6010 (LISTEN)

The sshd process is listening on port 6010. You are going to run X client applications on the Linux box. The clients will connect to port 6010, and sshd will forward the X requests to your PC.

X server

There are several MS Windows-based X-servers available, Xming, for example. By default, the X server listens on port 6000. Confirmation:

C:\>netstat -na | findstr :6000 | findstr LISTEN
TCP 0.0.0.0:6000 0.0.0.0:0 LISTENING

X server issues

Can’t load font

A message such as this appears:

xterm: cannot load font -misc-fixed-medium-r-semicondensed--13-120-75-75-c-60-iso10646-1

Solution:

If you are using Xming, then install Xming fonts. For example: Xming-fonts-7-7-0-10-setup.exe

xdpyinfo

Behind the scenes, OUI runs xdpyinfo. If xdpyinfo is missing, this message will appear:

ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable.

To get xdpyinfo, install package xorg-x11-utils. An example of normal OUI output is:

Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed

Warning: No xauth data; using fake authentication data for X11 forwarding.

Symptom:

You run ssh -X and observe a warning message such as:

brian@PQHKEA * xauth $ ssh -X -i ~/.ssh/acme-key.pem ec2-user@3.84.232.118
Warning: No xauth data; using fake authentication data for X11 forwarding.
Last login: Sat Nov 7 18:31:30 2020 from pool-98-113-87-97.nycmny.fios.verizon.net

The message is from the local ssh client, not the remote host. You can confirm this in the openssh-portable source (https://github.com/openssh/openssh-portable), file clientloop.c.

Fix:

xauth add :${DISPLAY#*:} . `mcookie`

Explanation:

:${DISPLAY#*:} gives you the DISPLAY value without the hostname part.

mcookie generates a 128-bit random hexadecimal number for use with the X authority system.
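mcookie ships with util-linux, so it is usually present. If it is not, a minimal stand-in can be built from /dev/urandom; this sketch assumes a Linux-like system with od available:

```shell
# Generate a 128-bit (32 hex digit) cookie, like mcookie does.
cookie=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
echo "${#cookie}"   # -> 32
```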

xterm issue

message:

Warning: locale not supported by C library, locale unchanged

Solution:

export LC_ALL=C

X libraries

The manual may instruct you to install certain X windows runtime libraries, or OUI may so instruct you. The library list depends on the Oracle product and the version. Example:

yum -y install libX11 libXau libXi libXtst libXrender

If you have installed xterm, then you already have these dependent libraries.

mwm

mwm is the Motif Window Manager. It lets you move and resize windows. Without a window manager, you may encounter situations where a dialog box is waiting for input but is inaccessible because it is behind another window. Here is the X root window after dragging three windows to non-overlapping locations.

mwm

The shell commands leading up to this were:

xterm & disown
mwm & disown
unzip -q ~/download/LINUX.X64_193000_db_home.zip
./runInstaller

Network

The X windows display technique described here assumes that you can connect via the secure shell (ssh) from your PC directly to the Oracle server host. The AWS EC2 instance has a Public IP address that is reachable from your PC.

The X windows protocol is very “chatty”: clients make a large number of small requests to the server. Over a slow or high-latency network, chattiness makes every button press on the GUI respond slowly, so a fast network connection is essential. One might give some thought to enabling SSH compression; however, SSH compression does not mitigate the chattiness.

putty.compression

Enabling SSH compression will usually not result in a noticeable improvement in responsiveness.

Running an X windows client across sudo

In a prior section, xterm worked fine from the login shell (ec2-user). In practice, you will sudo to the oracle account to execute runInstaller or dbca. You may also sudo to the grid account to run gridSetup.sh. First, test xterm from the login account:

[ec2-user@ip-172-31-82-194 ~]$ echo $DISPLAY
localhost:11.0
[ec2-user@ip-172-31-82-194 ~]$ xterm
(it works)

Notice that the offset in the DISPLAY variable is 11 in this case. The offset is not always 10; it depends on which port is available for X forwarding when you log in.

[ec2-user@ip-172-31-82-194 ~]$ netstat -ntl | grep :60..
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN
tcp6       0      0 ::1:6010                :::*                    LISTEN
tcp6       0      0 ::1:6011                :::*                    LISTEN

In this example, port 6010 was already in use when I logged in, so sshd started listening on port 6011 and set the offset to 11.

If you try to run an X client across a sudo, you will get this error:

[ec2-user@ip-172-31-82-194 ~]$ sudo -i -u oracle
[oracle@ip-172-31-82-194 ~]$ xterm
xterm: Xt error: Can't open display:
xterm: DISPLAY is not set
[oracle@ip-172-31-82-194 ~]$ echo $DISPLAY

The DISPLAY environment setting did not carry over to the sudo shell. You can try setting DISPLAY, but you will get a different error:

[oracle@ip-172-31-82-194 ~]$ export DISPLAY=localhost:11.0
[oracle@ip-172-31-82-194 ~]$ xterm
PuTTY X11 proxy: wrong authorisation protocol attemptedError: Can't open display: localhost:11.0

To fix the authorization error, copy .Xauthority from the login account to the oracle account:

[ec2-user@ip-172-31-82-194 ~]$ sudo cp -up ~/.Xauthority ~oracle/.Xauthority
[ec2-user@ip-172-31-82-194 ~]$ sudo chown oracle:$(id -gn oracle) ~oracle/.Xauthority
[ec2-user@ip-172-31-82-194 ~]$ sudo -i -u oracle
[oracle@ip-172-31-82-194 ~]$ xterm

To automate the copying, I added this code to the login user .bash_profile:

if [ -z "$SUDO_USER" -a -f ~/.Xauthority ] ; then
    for u in oracle grid ; do
        if id $u > /dev/null 2>&1 ; then
            eval hd=~$u
            sudo cp -up ~/.Xauthority $hd/.Xauthority
            sudo chown $u:$(id -gn $u) $hd/.Xauthority
        fi
    done
fi

To automate setting DISPLAY, I added this code to the oracle .bash_profile:

if [ -z "$DISPLAY" ] ; then
    for port in {6010..6019} ; do
        if netstat -ntl | grep -q :$port ; then
            offset=$(( port - 6000 ))
            export DISPLAY=localhost:$offset.0
            break
        fi
    done
fi

To summarize, before sudo, copy .Xauthority from the login account to oracle. After sudo, set the DISPLAY environment variable. Automate those settings in .bash_profile.

puttygen

To connect PuTTY from Windows to AWS EC2, you need to generate a PuTTY Private Key (PPK) file. Start with the PEM (Privacy Enhanced Mail) file that you downloaded when you created your EC2 instance. Start PuTTY Key Generator (puttygen). Load the PEM file. Click Save private key. Provide a name. Example:

us-east-1-key.pem
us-east-1-key.ppk

puttygen

X forwarding with PuTTY

To establish a session with PuTTY, navigate the PuTTY screens and make these settings.

Screen                  Item                                 Example
Session                 Host Name (or IP address)            3.86.243.42
Connection->Data        Auto-login username                  ec2-user
Connection->SSH->Auth   Private key file for authentication  C:\Users\Brian Fitzgerald\.aws\us-east-1-key.ppk
Connection->SSH->X11    Enable X11 forwarding                checked
Connection->SSH->X11    X display location                   localhost:0.0

putty.host

You may save your settings for future reuse.

putty.user

ppk.png

The private key file is the ppk file that you converted from the pem file.

putty.x11

X display location refers to the X server on your PC, and is usually localhost:0.0.

A simple command line example

Instead of using PuTTY, you could use ssh. This example uses bash on Windows Subsystem for Linux. The hypothetical hosts are lcl and rem.

Generate your key

lcl $ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bf/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/bf/.ssh/id_rsa.
Your public key has been saved in /home/bf/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Jb9cfmEZI8r1FNe611gM0dBLYInq+/CXs35NPpSlB88 bf@lcl
The key's randomart image is:
+---[RSA 2048]----+
|            .o*=o|
|           ....++|
|        . o o =+.|
|         * o +o=+|
|        S + . =O+|
|         o + .++E|
|         .+ . +=.|
|         .o  = oo|
|          .ooo+ .|
+----[SHA256]-----+

Copy your key to remote

lcl $ ssh-copy-id rem
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/bf/.ssh/id_rsa.pub"
The authenticity of host 'rem (10.130.33.179)' can't be established.
ECDSA key fingerprint is SHA256:Gr8YlEJrgTQdsNMGXOZoU+6ugg4TPyX5B9lCLTE0g/8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys


bf@rem's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'rem'"
and check to make sure that only the key(s) you wanted were added.

Set your DISPLAY

Startup Xming, then run:

lcl $ export DISPLAY=127.0.0.1:0.0

ssh using the -X flag

lcl $ ssh -X rem
/usr/bin/xauth:  file /home/bf/.Xauthority does not exist
rem $

test

rem $ xdpyinfo | wc
    287    1136    9544

permit your .Xauthority file

rem $ chmod 750 ~
rem $ chmod 640 ~/.Xauthority

sudo to oracle

rem $ sudo su - oracle
[sudo] password for bf:

copy .Xauthority

oracle@rem $ cp -p ~bf/.Xauthority .Xauthority

set your DISPLAY

oracle@rem $ export DISPLAY=10.130.33.179:10.0

test

oracle@rem $ xdpyinfo | wc
    287    1136    9544

Oracle Universal Installer

oracle@rem $ cd /u01/download/oracle/em13400/
oracle@rem $ ./em13400_linux64.bin -J-Djava.io.tmpdir=/u01/tmp

oui.em

Notes

Display

rem $ echo $DISPLAY
10.130.33.179:10.0

Notice :10.0, not :0.0

Ports

rem $ netstat -nat | grep :60
tcp        0      0 0.0.0.0:6010            0.0.0.0:*               LISTEN
tcp6       0      0 :::6010                 :::*

Notice 6010, not 6000

File

rem $ ls -l .Xauthority
-rw-------. 1 bf dba 100 Jul  6 17:41 .Xauthority

X forwarding via bastion host

You can use ssh from the Windows command line to set up X forwarding via a tunnel at the bastion. Identify the following items:

Item                            Example
bastion key file                us-east-1-key.pem
arbitrary Windows local port    3333
oracle box private IP address   172.31.95.14
ssh port                        22
bastion host user               brian
bastion public IP address       52.90.61.241

and use them as follows:

ssh -i keyfile -C -N -L localport:oraclebox:22 bastionuser@bastionhost

For example:

C:\>ssh -i us-east-1-key.pem -N -L 3333:172.31.95.14:22 ec2-user@52.90.61.241
  • The bastion key file is the pem file that you downloaded when you created the EC2 instance
  • The local port is a free port on your PC that you will choose arbitrarily.
  • oraclebox is the private IP address of the oracle box that you want to ssh to.
  • bastionuser is your login on the bastion host, e.g. ec2-user.
  • bastionhost is the public IP address of your bastion host.
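Substituting the items into the template is mechanical; this sketch builds the example command from the values in the table above:

```shell
# Fill the tunnel template with the example values from this article.
keyfile="us-east-1-key.pem"
localport=3333
oraclebox="172.31.95.14"
bastion="ec2-user@52.90.61.241"
cmd="ssh -i $keyfile -N -L $localport:$oraclebox:22 $bastion"
echo "$cmd"
```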

Next, in a second cmd window:

set DISPLAY=localhost:0.0
ssh -i us-east-1-key.pem -Y -p 3333 ec2-user@localhost
Warning: No xauth data; using fake authentication data for X11 forwarding.
Last login: Tue Nov 19 02:50:02 2019 from ip-172-31-88-110.ec2.internal
ec2-user@ip-172-31-95-14 ~$ xterm

The requirements for the bastion host are different from the oracle box.

  • Establishing ssh trust from the bastion to the oracle box is not required.
  • The tunneling session is not responsible for X forwarding. Option -X is not required.
  • Disabling X11Forwarding at the bastion does not disable X forwarding as described here, because the X11 forwarding happens at the Oracle host and in your local ssh client.
  • The bastion login user does not have to be the same as the oracle box login user.
  • The bastion host does not have to be the same hardware or operating system architecture as the oracle box. Amazon Linux works.
  • In some bastion EC2 systems, no configuration is required. ssh tunneling works out of the box.
  • If the bastion /etc/ssh/sshd_config has “AllowTcpForwarding no”, then ssh tunneling is disabled.

You do not need to log on to the bastion with an interactive shell. After the tunneling session is established, you will not interact with it.

X windows without X11 forwarding

If you have a connection from Linux to the X11 port, usually 6000, on your PC, then you can attempt X windows without X11 forwarding. You are going to set DISPLAY=pc.dnsname.or.ipaddress:0.0 (display number 0 corresponds to port 6000).

Verify port 6000 with:

C:\>netstat -na | findstr LISTEN | findstr :6000
TCP 0.0.0.0:6000 0.0.0.0:0 LISTENING

“0.0.0.0:6000” refers to listening on port 6000 on all IP addresses on all network interfaces. If you see

 TCP 127.0.0.1:6000 0.0.0.0:0 LISTENING

then your X server will not accept connections from outside your PC.
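The distinction between a wildcard bind and a loopback-only bind can be captured in a small shell sketch over netstat lines like the ones above:

```shell
# Decide from a netstat line whether the X server accepts remote clients.
reachable() {
  case "$1" in
    *"127.0.0.1:6000"*) echo no  ;;  # loopback only
    *"0.0.0.0:6000"*)   echo yes ;;  # all interfaces (firewall permitting)
  esac
}
reachable "TCP    0.0.0.0:6000      0.0.0.0:0    LISTENING"   # -> yes
reachable "TCP    127.0.0.1:6000    0.0.0.0:0    LISTENING"   # -> no
```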

You can find your IP address with:

C:\>ipconfig | findstr Address
Link-local IPv6 Address . . . . . : fe80::f9f0:5830:cd8e:7502%24
IPv4 Address. . . . . . . . . . . : 192.168.1.7
Link-local IPv6 Address . . . . . : fe80::85d7:7339:ff49:e5e6%51
IPv4 Address. . . . . . . . . . . : 192.168.74.113

The addresses shown in this example are private IP addresses. If the oracle box and the PC are not on the same private network, you cannot refer to this address.

To get the NATted IP address you can use netstat:

[ec2-user@ip-172-31-86-22 ~]$ netstat -nat | grep :22 | grep ESTABLISHED
tcp 0 64 172.31.86.22:22 74.101.107.146:49459 ESTABLISHED
tcp 0 0 172.31.86.22:22 74.101.107.146:54036 ESTABLISHED

or last:

[ec2-user@ip-172-31-86-22 ~]$ last -1ai ec2-user
ec2-user pts/1 Tue Aug 27 09:05 still logged in 74.101.107.146

The PC IP address is, therefore, 74.101.107.146. Instead of the IP address, you may use the DNS name. The DNS name appears when you log in.

Using username "ec2-user".
Authenticating with public key "imported-openssh-key"
Last login: Mon Aug 26 15:57:51 2019 from pool-74-101-107-146.nycmny.fios.verizon.net

or from last:

[ec2-user@ip-172-31-86-22 ~]$ last -1a ec2-user
ec2-user pts/1        Tue Aug 27 09:05   still logged in    pool-74-101-107-146.nycmny.fios.verizon.net

Before attempting runInstaller, test connectivity using nc

[ec2-user@ip-172-31-86-22 ~]$ time nc -v 74.101.107.146 6000
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.

real    0m10.029s
user    0m0.012s
sys     0m0.004s

By default, nc times out in 10 seconds.

If the nc connection times out, then an X-windows client will also time out.

[ec2-user@ip-172-31-86-22 ~]$ export DISPLAY=74.101.107.146:0.0
[ec2-user@ip-172-31-86-22 ~]$ time xterm
xterm: Xt error: Can't open display: 74.101.107.146:0.0

real    2m7.366s
user    0m0.021s
sys     0m0.016s

The TCP timeout depends on tcp_syn_retries

[root@ip-172-31-86-22 ~]# cat /proc/sys/net/ipv4/tcp_syn_retries
6

If tcp_syn_retries = 6, then the TCP timeout works out to 127 seconds: the initial SYN is retransmitted six times, with the retransmission interval doubling from 1 second, for a total of 1 + 2 + 4 + 8 + 16 + 32 + 64 = 127 seconds.
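The timeout arithmetic can be checked with a quick loop: the initial SYN plus six retransmissions, with the wait interval doubling from 1 second.

```shell
# Sum the SYN retransmission schedule: 1+2+4+8+16+32+64 = 127 seconds.
retries=6
total=0
delay=1
for i in $(seq 0 "$retries"); do
    total=$((total + delay))
    delay=$((delay * 2))
done
echo "$total"   # -> 127
```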

You can monitor the connection in a separate window.

[ec2-user@ip-172-31-86-22 ~]$ netstat -nat | grep :6000
tcp        0      1 172.31.86.22:36310      74.101.107.146:6000     SYN_SENT

SYN_SENT lasting for a long time means that the connection is about to time out. runInstaller will take more than twice as long as xterm does to time out.

[oracle@ip-172-31-86-22 dbhome_1]$ export DISPLAY=74.101.107.146:0.0
[oracle@ip-172-31-86-22 dbhome_1]$ time ./runInstaller
ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable.

Can't connect to X11 window server using '74.101.107.146:0.0' as the value of the DISPLAY variable.

real    4m17.524s
user    0m2.835s
sys     0m0.303s

First, xdpyinfo times out:

[ec2-user@ip-172-31-86-22 ~]$ ps -t pts/1
  PID TTY          TIME CMD
 3324 pts/1    00:00:00 bash
11279 pts/1    00:00:00 sudo
11281 pts/1    00:00:00 bash
11311 pts/1    00:00:00 runInstaller
11315 pts/1    00:00:00 xdpyinfo

Then OUI (java) times out:

[ec2-user@ip-172-31-86-22 ~]$ ps -t pts/1
  PID TTY          TIME CMD
 3324 pts/1    00:00:00 bash
11279 pts/1    00:00:00 sudo
11281 pts/1    00:00:00 bash
11311 pts/1    00:00:00 runInstaller
11436 pts/1    00:00:00 perl
11455 pts/1    00:00:01 java

To save time, test your connection using nc, before attempting runInstaller.

dbca issues

dbca NoClassDefFoundError: sun.awt.X11.XToolkit

The DISPLAY environment variable is set, but this message appears:

[oracle@ip-172-31-88-246 ~]$ dbca
Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class sun.awt.X11.XToolkit
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at java.awt.Toolkit$2.run(Toolkit.java:860)
        at java.awt.Toolkit$2.run(Toolkit.java:855)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:854)
        at com.jgoodies.looks.LookUtils.isLowResolution(LookUtils.java:484)
        at com.jgoodies.looks.LookUtils.<clinit>(LookUtils.java:249)
        at com.jgoodies.looks.plastic.PlasticLookAndFeel.<clinit>(PlasticLookAndFeel.java:135)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1874)
        at javax.swing.UIManager.setLookAndFeel(UIManager.java:582)
        at oracle.install.commons.util.Application.startup(Application.java:940)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:198)
        at oracle.assistants.common.base.driver.AssistantApplication.startup(AssistantApplication.java:328)
        at oracle.assistants.dbca.driver.DBConfigurator.startup(DBConfigurator.java:378)
        at oracle.assistants.dbca.driver.DBConfigurator.main(DBConfigurator.java:513)

Cause: The X server is down or not reachable. Troubleshoot as you would an xterm issue, such as:

[oracle@ip-172-31-88-246 ~]$ xterm
xterm: Xt error: Can't open display: localhost:10.0

Client startup step by step

If you follow these steps one at a time and resolve issues as you go, it is less likely that you will encounter difficulty when you try to run an Oracle tool.

  • In PuTTY, configure X11 forwarding.
  • Connect PuTTY and log in.
  • Check the DISPLAY environment variable.
  • Test xterm.
  • Test xdpyinfo.
xdpyinfo > /dev/null
  • Copy .Xauthority to the oracle account.
  • sudo to oracle.
  • Set the DISPLAY environment variable.
  • Test xterm again.
  • Start mwm.
  • Finally, start your Oracle tool, e.g.:
./runInstaller

Summary

It is extremely helpful to configure a proper Linux and X windows environment before attempting to run an Oracle tool such as OUI. The commands nc, lsof, strace, traceroute, lsipc, and lsmem were used while writing this blog article. From xorg-x11-utils, xlsclients and xev were used. From motif, mwm was used. xauth is required for X windows. An Oracle installation may require unzip and xdpyinfo. Not all needed utilities are installed by default; you must install them yourself.

If you Google “x windows forwarding not working” you will find cases where advice that works for one person does not work for another. Trying random tips and tricks without understanding can lead to solutions that work only sometimes and to enabling unnecessary options. To improve understanding, I have added diagnostic output from netstat and lsof.

Be sure to understand that the X server runs on your PC and listens on port 6000 by default. On the oracle box, sshd listens on port 6000 + an offset, often 10. Setting DISPLAY=localhost:10.0 will lead X clients to connect to the sshd listening on port 6010 on the local host. sshd will forward the X traffic back to the X server on your PC.

It is best to break the setup and testing into multiple, simple steps, rather than attempting to start runInstaller in one go.

tl;dr

  • Install needed packages
  • Copy .Xauthority to the oracle account
  • Set the DISPLAY environment variable

 

ssh to postgres issue solved

by Brian Fitzgerald

Scenario

You have double checked everything, but ssh to postgres does not work.

Conditions

  1. You did a postgres package install
  2. ssh to other accounts works
  3. ssh to postgres using a key does not work
  4. Directory .ssh mode is 700
  5. File authorized_keys mode is 600

Selinux

Check whether selinux is enforcing access controls.

[postgres@test ~]$ getenforce
Enforcing

Note in /var/log/audit/audit.log, “avc: denied…scontext…sshd…tcontext…postgresql_db_t”

type=AVC msg=audit(1562467696.927:316): avc:  denied  { read } for  pid=2282 comm="sshd" name="authorized_keys" dev="sda2" ino=56806 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:postgresql_db_t:s0 tclass=file permissive=0

Note that the inode is 56806. Check ls -i:

[postgres@test ~]$ cd .ssh

[postgres@test .ssh]$ ls -i authorized_keys
56806 authorized_keys
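The inode can also be pulled out of the audit record mechanically; a minimal sed sketch over the AVC line shown above:

```shell
# Extract the ino= field from an AVC denial record.
line='type=AVC msg=audit(1562467696.927:316): avc: denied { read } for pid=2282 comm="sshd" name="authorized_keys" dev="sda2" ino=56806 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:postgresql_db_t:s0 tclass=file permissive=0'
ino=$(printf '%s\n' "$line" | sed -n 's/.*ino=\([0-9]*\).*/\1/p')
echo "$ino"   # -> 56806
```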

Check the authorized_keys selinux type:

[postgres@test .ssh]$ ls -Z authorized_keys
-rw-------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 authorized_keys

The type is “postgresql_db_t”. In order for ssh to work, the type needs to be “ssh_home_t”.

Solution

Issue restorecon -R .ssh

[postgres@test ~]$ restorecon -R .ssh
[postgres@test ~]$ ls -RZ .ssh
.ssh:
-rw-------. postgres postgres unconfined_u:object_r:ssh_home_t:s0 authorized_keys

Check:

[postgres@pgstby .ssh]$ ssh test date
Sun Jul  7 02:57:21 UTC 2019

ssh works!

Note that new files created under directory .ssh will inherit the necessary ssh_home_t type.

chcon

If the home directory has an incompatible type, then restorecon will not work. In that case, run chcon, e.g.:

[postgres@test ~]$ chcon -R unconfined_u:object_r:ssh_home_t:s0 ~/.ssh

or

[postgres@test ~]$ chcon -R system_u:object_r:usr_t:s0 ~/.ssh

semanage

If restorecon did not work, you can configure it to work in the future:

[root@test ~]# semanage fcontext -a -t ssh_home_t ~postgres/'.ssh(/.+)?'

Cause

The linux system was built from Azure image Red Hat Enterprise Linux 7.6, which has selinux enabled by default.

The rh-postgresql10 package install created the user postgres and set the selinux type of most of its files and directories to postgresql_db_t.

The .ssh directory inherited its type from its parent.

Normal home directory

The normal type of a home directory is user_home_dir_t. Subdirectory .ssh gets created as type ssh_home_t

[normal@test ~]$ ls -dZ
drwx------. normal normal unconfined_u:object_r:user_home_dir_t:s0 .
[normal@test ~]$ mkdir .ssh
[normal@test ~]$ ls -dZ .ssh
drwxrwxr-x. normal normal unconfined_u:object_r:ssh_home_t:s0 .ssh

Affected systems

In this example, the system was Redhat Linux on Azure, but it can happen on any system running selinux. The issue was noticed after a PostgreSQL package install. The same problem could appear with other package installs that create home directories.

Recommended practice

On selinux-enabled systems, create the .ssh directory this way:

[postgres@test ~]$ mkdir .ssh
[postgres@test ~]$ chmod 700 .ssh
[postgres@test ~]$ restorecon -R .ssh

Expanding the EC2 root file system

By Brian Fitzgerald

Introduction

Expanding the AWS EC2 Linux root file system size for Red Hat version 7.1 and up can be handled by a few simple AWS EC2 console or CLI steps. For Red Hat version 7.0, additional Linux command line steps are required.

Initial Conditions

We’ll start out with a small root file system size, 8G.

[ec2-user@ip-10-0-1-244 ~]$ df -H /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      8.6G  1.3G  7.4G  15% /

Review the instance and the volume in the AWS console:

ebs-id

If you want to use command line tools, then note these facts:

Instance ID: i-0be13d6ba7d191ebe
EBS ID: vol-0aeecfd5a36a070a1

Procedure

To resize to 100G, for example, from the console, navigate to EC2, select the EC2 instance. Select the root block device.

inst.png

resizvol

or from the CLI, issue:

C:\>aws ec2 modify-volume --volume-id vol-0aeecfd5a36a070a1 --size 100
VOLUMEMODIFICATION      modifying       100     10      gp2     0       2019-06-17T22:42:37.000Z    300 100     gp2     vol-0aeecfd5a36a070a1

Optionally, check on the status:

C:\>aws ec2 describe-volumes-modifications --volume-id vol-0aeecfd5a36a070a1
VOLUMESMODIFICATIONS    optimizing      100     10      gp2     0       2019-06-17T22:42:37.000Z    300 100     gp2     vol-0aeecfd5a36a070a1

If your system is Amazon Linux, or Red Hat Linux version 7.1 and up, reboot the instance and you are done.

Fix 7.0 with parted

On some systems, Red Hat Linux 7.0 for example, resizing the volume is not enough. You must fix the partition table with parted and then adjust the root partition size. Start parted on your root device:

[root@ip-10-0-1-244 ~]# parted /dev/xvda
GNU Parted 3.1
Using /dev/xvda
Welcome to GNU Parted! Type 'help' to view a list of commands.

Print the partition table with “p”:

(parted) p

Parted will find problems and offer to fix them. Respond “f”:

Error: The backup GPT table is not at the end of the disk, as it should be.  This might mean that another operating
system believes the disk is smaller.  Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? f
Warning: Not all of the space available to /dev/xvda appears to be used, you can fix the GPT to use all of the space
(an extra 188743680 blocks) or continue with the current setting?
Fix/Ignore? f
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  6445MB  6442MB  xfs

Next, quit parted and start fdisk:

(parted) q
[root@ip-10-0-1-244 ~]# fdisk /dev/xvda
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Print the partition table:

Command (m for help): p

Disk /dev/xvda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: C43F888F-F4D2-422F-9DE9-3755F19BB874
 #     Start       End   Size  Type             Name
 1      2048      4095     1M  BIOS boot
 2      4096  12587007     6G  Microsoft basic

Delete partition 2 by entering “d”. On the next line, accept the default partition number.

Command (m for help): d
Partition number (1,2, default 2):
Partition 2 is deleted

Re-create partition 2 by entering “n”. Accept the defaults for partition number, first sector, and last sector.

Command (m for help): n
Partition number (2-128, default 2):
First sector (34-209715166, default 4096):
Last sector, +sectors or +size{K,M,G,T,P} (4096-209715166, default 209715166):
Created partition 2

Write out the partition table:

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
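As a sanity check, accepting the default last sector gives partition 2 almost the whole 100 G volume; the sector arithmetic, assuming the values fdisk printed above:

```shell
# Partition 2: first sector 4096, last sector 209715166, 512-byte sectors.
start=4096
end=209715166
bytes=$(( (end - start + 1) * 512 ))
echo "$bytes"   # -> 107372068352 (~100 GiB)
```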

Proceed to the next section and reboot.

Reboot

Reboot from the AWS EC2 console, the AWS CLI, or the Linux shell:

C:\>aws ec2 reboot-instances --instance-ids i-0be13d6ba7d191ebe

or

[root@ip-10-0-1-244 ~]# reboot

Login and check:

[ec2-user@ip-10-0-1-244 ~]$ df -H /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      108G  1.4G  106G   2% /

That’s it!

Conclusion

The procedure for resizing the root file system depends on the operating system version. In Red Hat 7.1 and up, simply resize the EBS volume and reboot.

In Red Hat 7.0:

  • Resize the EBS volume.
  • Fix the partition table with parted.
  • Adjust the partition table with fdisk.
  • Reboot.

Connect AWS Lambda to RDS SQL Server with pyodbc

By Brian Fitzgerald

Introduction

We want to connect Lambda to Microsoft SQL Server RDS using the python ODBC connector pyodbc. pyodbc calls the Microsoft SQL Server driver, which sits on top of unixODBC. Installing ODBC drivers into AWS Lambda has frustrated some users in the past. This blog outlines a simple approach.

Staging on EC2

We’re going to create a complete set of files for uploading to Lambda. We’ll stage those files on EC2, zip them, and upload the zip to Lambda.

Create RDS

For this article, I created a Microsoft SQL Server RDS instance, as described in this table.

Parameter Value
Instance name odbcblog
Engine SQL Server Express Edition
Engine version 14.00.3049.1.v1
Class db.t2.micro
security group sg-04ed8240
Endpoint IP address odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com
Endpoint Port 1433
Master user odbcuser
Master password odbcuser

connectivity

Testing from EC2, I get:

[ec2-user@ip-172-251-80-17 ~]$ nc -v odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com 1433
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.

RDS instance odbcblog is in security group sg-04ed8240. After associating security group sg-04ed8240 to our EC2, we are good to go:

[ec2-user@ip-172-251-80-17 ~]$ nc -v odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com 1433
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 172.251.58.192:1433.
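The same reachability check can be scripted with Python's standard library; a sketch (the function name tcp_reachable is ours):

```python
# Sketch: a TCP reachability probe equivalent to "nc -v host port",
# using only the standard library.
import socket

def tcp_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g.: tcp_reachable("odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com", 1433)
```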

Install SQL Server ODBC

[ec2-user@ip-172-251-80-17 ~]$ sudo bash

[root@ip-172-251-80-17 download]# curl packages.microsoft.com/config/rhel/6/prod.repo > /etc/yum.repos.d/mssql-release.repo
[root@ip-172-251-80-17 download]# yum -y install msodbcsql17

Review the output and notice that the dependent package unixODBC also gets installed.

 Installing : unixODBC-2.3.1-11.amzn2.0.1.x86_64

We’ll use that fact later.

Notice the /etc/odbcinst.ini entry:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
UsageCount=1
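odbcinst.ini uses standard INI syntax, so the registered driver path can also be read programmatically; a sketch using configparser (the helper name driver_path is ours):

```python
# Sketch: read the shared-library path that odbcinst.ini registers
# for a given driver name. The helper name driver_path is ours.
import configparser

def driver_path(ini_text, driver_name):
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    return cp[driver_name]["Driver"]

# e.g.: driver_path(open("/etc/odbcinst.ini").read(), "ODBC Driver 17 for SQL Server")
```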

Install pyodbc

[root@ip-172-251-80-17 download]# yum -y install gcc-c++
[root@ip-172-251-80-17 download]# yum -y install python3-devel
[root@ip-172-251-80-17 download]# yum -y install unixODBC-devel
[root@ip-172-251-80-17 download]# pip3 install pyodbc
WARNING: Running pip install with root privileges is generally not a good idea. 
 Try `pip3 install --user` instead.

Test pyodbc from EC2

testodbc.py:

import pyodbc

con = pyodbc.connect(
    driver = 'ODBC Driver 17 for SQL Server',
    server = 'odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com',
    port = 1433,
    user = 'odbcuser',
    password = 'odbcuser',
    timeout = 5
)
sql = 'select @@version'
crsr = con.cursor()
crsr.execute(sql)
row = crsr.fetchone()
print (row[0])

Execute:

[ec2-user@ip-172-251-80-17 test]$ python3 testodbc.py

Output:

Microsoft SQL Server 2017 (RTM-CU13-OD) (KB4483666) - 14.0.3049.1 (X64)
Dec 15 2018 11:16:42
Copyright (C) 2017 Microsoft Corporation
Express Edition (64-bit) on Windows Server 2016 Datacenter 10.0  
  (Build 14393: ) (Hypervisor)

cool.

Stage Lambda code on EC2

Download packages

Let’s start over and download the packages:

[ec2-user@ip-172-251-80-17 download]$ yumdownloader unixODBC.x86_64
[ec2-user@ip-172-251-80-17 download]$ yumdownloader msodbcsql17
[ec2-user@ip-172-251-80-17 download]$ pip3 download pyodbc
[ec2-user@ip-172-251-80-17 download]$ ls -1
msodbcsql17-17.3.1.1-1.x86_64.rpm
pyodbc-4.0.26.tar.gz
unixODBC-2.3.1-11.amzn2.0.1.x86_64.rpm

Identify a Lambda staging directory on EC2

[ec2-user@ip-172-251-80-17 testodbc]$ mkdir -p /home/ec2-user/lambdas/testodbc
[ec2-user@ip-172-251-80-17 testodbc]$ cd /home/ec2-user/lambdas/testodbc

Install the RPMs

[ec2-user@ip-172-251-80-17 testodbc]$ rpm2cpio /home/ec2-user/lambdas/download/unixODBC-2.3.1-11.amzn2.0.1.x86_64.rpm | cpio -id
2504 blocks
[ec2-user@ip-172-251-80-17 testodbc]$ rpm2cpio /home/ec2-user/lambdas/download/msodbcsql17-17.3.1.1-1.x86_64.rpm | cpio -id
4486 blocks

Install pyodbc

Create a python library directory

[ec2-user@ip-172-251-80-17 lib]$ mkdir -p /home/ec2-user/lambdas/testodbc/python/lib

Install

[ec2-user@ip-172-251-80-17 ~]$ pip3 install --target /home/ec2-user/lambdas/testodbc/python/lib /home/ec2-user/lambdas/download/pyodbc-4.0.26.tar.gz

Directory structure

Observe the directory structure so far:

[ec2-user@ip-172-251-80-17 ~]$ cd /home/ec2-user/lambdas/testodbc
[ec2-user@ip-172-251-80-17 testodbc]$ ls -1F
etc/
opt/
python/
usr/
[ec2-user@ip-172-251-80-17 testodbc]$ cd usr/
[ec2-user@ip-172-251-80-17 usr]$ ls -1F
bin/
lib64/
share/

Library directory

The Lambda function is going to load pyodbc, and pyodbc is going to look for libodbc.so.2, but it will not search usr/lib64. It does no good to set LD_LIBRARY_PATH in the Lambda configuration, because the runtime starts before the Lambda’s environment variables are applied. Lambda will search lib, so move the library directory there:

[ec2-user@ip-172-251-80-17 testodbc]$ mv usr/lib64 lib

odbcinst.ini

AWS will install our Lambda code in a virtual machine under /var/task. Edit odbcinst.ini:

[ec2-user@ip-172-251-80-17 testodbc]$ vi etc/odbcinst.ini

Replace the contents:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/var/task/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
UsageCount=1
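The manual edit above only prefixes the Driver path with /var/task, a transform that can also be scripted. A sketch (the helper name relocate_driver is ours):

```python
# Sketch: rewrite each Driver= line in odbcinst.ini so its path
# resolves under the Lambda task root. The helper name is ours.

def relocate_driver(ini_text, prefix="/var/task"):
    out = []
    for line in ini_text.splitlines():
        if line.startswith("Driver="):
            line = "Driver=" + prefix + line[len("Driver="):]
        out.append(line)
    return "\n".join(out)
```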

Python application code directory

To avoid clutter, we will put our own application code in a subdirectory.

[ec2-user@ip-172-251-80-17 testodbc]$ mkdir py

Summary

The directory structure at the top level is now:

[ec2-user@ip-172-251-80-17 testodbc]$ ls -1F
etc/
lib/
opt/
py/
python/
usr/

Create the Lambda code

File:

[ec2-user@ip-172-251-80-17 testodbc]$ vi py/testodbc.py

Contents:

import pyodbc
from json import dumps

def lam(ev, cx):
    con = pyodbc.connect(
        driver = 'ODBC Driver 17 for SQL Server',
        server = 'odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com',
        port = 1433,
        user = 'odbcuser',
        password = 'odbcuser',
        timeout = 5
    )
    sql = 'select @@version'
    crsr = con.cursor()
    crsr.execute(sql)
    row = crsr.fetchone()
    version = row[0]
    ret = {
        'version': version
    }
    return dumps(ret)

The handler will therefore be testodbc.lam.

Fun fact: You cannot name an AWS Lambda python handler “lambda”.

Create the Lambda

Initial creation

Create a basic lambda function by any method. For example, use the Lambda console.

Configuration Value
Name testOdbc
Runtime python 3.7
Timeout 5 minutes
Handler testodbc.lam

Set two environment variables:

Variable Value
ODBCSYSINI /var/task/etc
PYTHONPATH /var/runtime:/var/task/py:/var/task/python/lib

Code upload

Zip all libraries, configuration files, and code:

[ec2-user@ip-172-251-80-17 testodbc]$ zip -rq ../testodbc.zip *
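The same archive can be built programmatically with Python's zipfile module, walking the staging tree and storing entry names relative to its root; a sketch (the helper name zip_tree is ours):

```python
# Sketch: zip the staging tree with entry names relative to its root,
# matching what "zip -rq ../testodbc.zip *" produces.
import os
import zipfile

def zip_tree(staging_dir, zip_path):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(staging_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, staging_dir))
```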

Upload the file:

[ec2-user@ip-172-251-80-17 testodbc]$ aws lambda update-function-code \
   --function-name testOdbc \
   --zip-file fileb://../testodbc.zip

Networking

This section must be handled with care. Otherwise, you are going to get ODBC driver timeouts. For Lambda to successfully connect to RDS, two conditions must be in place.

Elastic Network Interface

Lambda needs a basic execution role for Cloudwatch access. In addition, your Lambda needs to be able to bind to an Elastic Network Interface.

In the IAM Console, create a new role with these policies attached, e.g. odbcLamRole:

  • AWSLambdaBasicExecutionRole
  • AWSLambdaENIManagementAccess

In Lambda Console, assign the role to the Lambda.

VPC

In the Lambda console, in the Network pane, if you see “No VPC”, switch to your VPC, select two or more subnets, and select your security group.


Review

You may review the configuration from the CLI.

[ec2-user@ip-172-251-80-17 ~]$ aws lambda get-function-configuration --function-name testOdbc

Output:

{
    "FunctionName": "testOdbc",
    "LastModified": "2019-05-02T20:29:51.551+0000",
    "RevisionId": "df676edb-e545-42dc-90c3-0cf5dc16ed81",
    "MemorySize": 128,
    "Environment": {
        "Variables": {
            "PYTHONPATH": "/var/runtime:/var/task/py:/var/task/python/lib",
            "ODBCSYSINI": "/var/task/etc"
        }
    },
    "Version": "$LATEST",
    "Role": "arn:aws:iam::665575760545:role/odbcLamRole",
    "Timeout": 300,
    "Runtime": "python3.7",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "VV3g7pLL1G+y3PoEPyX+UcbwMn40KIiOUbCu5ApYowM=",
    "Description": "",
    "VpcConfig": {
        "SubnetIds": [
            "subnet-8c036bd0",
            "subnet-b7214ed0",
            "subnet-aa197384",
            "subnet-364a757c",
            "subnet-af9a2991",
            "subnet-0f476600"
        ],
        "VpcId": "vpc-0d398177",
        "SecurityGroupIds": [
            "sg-04ed8240"
        ]
    },
    "CodeSize": 2322598,
    "FunctionArn": "arn:aws:lambda:us-east-1:665575760545:function:testOdbc",
    "Handler": "testodbc.lam"
}

Run the Lambda

Command:

[ec2-user@ip-172-251-80-17 ~]$ aws lambda invoke --function-name testOdbc out.json

CLI output:

{
    "ExecutedVersion": "$LATEST",
    "StatusCode": 200
}

Lambda return:

[ec2-user@ip-172-251-80-17 ~]$ cat out.json
"{\"version\": \"Microsoft SQL Server 2017 (RTM-CU13-OD) (KB4483666) - 14.0.3049.1 (X64) \\n\\tDec 15 2018 11:16:42 \\n\\tCopyright (C) 2017 Microsoft Corporation\\n\\tExpress Edition (64-bit) on Windows Server 2016 Datacenter 10.0  (Build 14393: ) (Hypervisor)\\n\"}"
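Because the handler itself returned dumps(ret), the payload in out.json is JSON twice over: a JSON string whose value is itself JSON. Decoding takes two passes; a sketch:

```python
# Sketch: unwrap the doubly-encoded payload that the handler produced
# by returning dumps(ret); two json.loads calls recover the dict.
import json

def decode_payload(raw):
    return json.loads(json.loads(raw))

# e.g.: decode_payload(open("out.json").read())["version"]
```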

The Lambda performed these steps:

  • Load all application code and dependent libraries
  • Load Unix ODBC driver
  • In the handler, load the MS ODBC driver
  • Connect to the RDS SQL Server
  • Allocate a cursor
  • Execute a SQL statement
  • Retrieve the result set
  • Parse the result set as a version
  • Return the version as JSON from the lambda handler

Summary

We accomplished these items:

  • Set up EC2, RDS, and Lambda in a VPC
  • Install pyodbc and underlying drivers on EC2
  • Test python code by connecting from EC2 to RDS
  • Stage all needed drivers, configuration files, python libraries, and python application code on EC2
  • Upload the code to Lambda
  • Run the Lambda

We have therefore established connectivity from a Python Lambda to a SQL Server RDS.

netcat on AWS EC2

By Brian Fitzgerald

Issue

You want to run netcat on AWS EC2:

[ec2-user@ip-172-31-80-17 ~]$ nc odbcblog.crzco2g2glv6.us-east-1.rds.amazonaws.com 1433
-bash: nc: command not found

Solution

Install the nmap-ncat RPM:

[ec2-user@ip-172-31-80-17 ~]$ sudo yum -y install nmap-ncat
[ec2-user@ip-172-31-80-17 ~]$ nc -v odbcblog.crzco2g2glv6.us-east-1.rds.amazonaws.com 1433
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.


Upload AWS Lambda code from command line or from python

By Brian Fitzgerald

Introduction

For many first-time users, creating the Lambda function is done in the AWS Management Console. The offered code entry choices are “Edit code inline”, “Upload a .zip file” (from your PC, or wherever you are running your browser), or “Upload a file from Amazon S3”.


You can, however, save a few steps by uploading your code directly from its development, staging, or testing location. That way, you don’t need to log on to the console and work the menus, and you don’t need to copy the zip file to your PC, or to S3.

The AWS API

Tasks done from the AWS Management Console are communicated to AWS using an API library, which talks to AWS via JSON. Amazon also supplies a command line interface (CLI) built on top of the same library, so you can accomplish your tasks without using the browser. You can also use the API to write your own code to accomplish the same task.

Upload Lambda from EC2 using the CLI

If your Lambda code is in EC2, then it is convenient to upload directly from EC2 using the AWS CLI.

Configure

The AWS CLI command is “aws”, and it is already installed on EC2. If you have not done so already, run aws configure, a one-time setup. Generate an AWS access key and use the actual values in place of “AKI…” and “Gtj…”. Choose your own region. In the beginning, you are better off keeping all your code in a single region.

[ec2-user@ip-172-31-80-17 ~]$ aws configure
AWS Access Key ID [None]: AKI...
AWS Secret Access Key [None]: Gtj...
Default region name [us-east-1]:
Default output format [None]:

Let’s assume that you have some code to upload:

[ec2-user@ip-172-31-80-17 lambdas]$ unzip -l trc.zip
Archive: trc.zip
Length Date Time Name
--------- ---------- ----- ----
0 04-29-2019 01:37 bin/
854664 07-30-2018 20:05 bin/strace
119 05-01-2019 01:28 lg.py
84 04-30-2019 01:54 sllg.py
254 04-30-2019 02:07 trcp.py
119 05-01-2019 01:28 trc.py
--------- -------
855240 6 files
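The same listing can be produced from Python; a sketch with the zipfile module (the helper name list_zip is ours):

```python
# Sketch: reproduce the "unzip -l" listing with the zipfile module,
# returning (name, uncompressed size) pairs.
import zipfile

def list_zip(path):
    with zipfile.ZipFile(path) as zf:
        return [(i.filename, i.file_size) for i in zf.infolist()]

# e.g.: for name, size in list_zip("trc.zip"): print(size, name)
```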

Assume, for example, that the destination Lambda is called “lamLocal”. The upload command is (showing lines folded):

[ec2-user@ip-172-31-80-17 lambdas]$ aws lambda update-function-code 
  --function-name lamLocal 
  --zip-file fileb://trc.zip

The response is in JSON. A normal response looks like this:

{
  "FunctionName": "lamLocal",
  "LastModified": "2019-05-01T19:13:47.011+0000",
  "RevisionId": "a139363f-2f31-4aa2-818f-ec20057a981b",
  "MemorySize": 128,
  "Version": "$LATEST",
  "Role": "arn:aws:iam::549357536367:role/service-role/lamLocal-role-gdmmx0de",
  "Timeout": 3,
  "Runtime": "python3.6",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "CodeSha256": "45ELCm6smYw5Q/fFxMR+756GwfvSEGeLxVIF0kyFhac=",
  "Description": "",
  "VpcConfig": {
    "SubnetIds": [],
    "VpcId": "",
    "SecurityGroupIds": []
  },
  "CodeSize": 314844,
  "FunctionArn": "arn:aws:lambda:us-east-1:549357536367:function:lamLocal",
  "Handler": "trcp.lam"
}

If an error occurs, the CLI displays no JSON, only an exception message (folded):

[ec2-user@ip-172-31-80-17 lambdas]$ aws lambda update-function-code
  --function-name zamLocal --zip-file fileb://trc.zip

An error occurred (ResourceNotFoundException) 
  when calling the UpdateFunctionCode operation: 
  Function not found: arn:aws:lambda:us-east-1:549357536367:function:zamLocal

This is a basic example of updating code on an existing Lambda that was previously created in the AWS Management Console. It is also possible to create, invoke, and delete a Lambda from the AWS CLI.

Upload Lambda from EC2 from code

Now we’ll upload the zip to Lambda by making a python call. First, some setup:

[ec2-user@ip-172-31-80-17 lambdas]$ sudo bash
[root@ip-172-31-80-17 lambdas]# yum -y update
[root@ip-172-31-80-17 lambdas]# pip3 install boto3
WARNING: Running pip install with root privileges 
  is generally not a good idea. 
  Try `pip3 install --user` instead.

File uplam.py:

from boto3 import client
from json import dumps

awskey = 'AKI...'
awskeysec = 'Gtj...'
lam = 'lamLocal'
zf = 'trc.zip'

cli = client(
    'lambda',
    aws_access_key_id= awskey,
    aws_secret_access_key= awskeysec
)

with open(zf, 'rb') as f:
    ret = cli.update_function_code(
        FunctionName = lam,
        ZipFile = f.read()
    )
    print(dumps(ret, indent=4, sort_keys=True))

Execution:

[ec2-user@ip-172-31-80-17 lambdas]$ python3 uplam.py

Output:

{
    "CodeSha256": "45ELCm6smYw5Q/fFxMR+756GwfvSEGeLxVIF0kyFhac=",
    "CodeSize": 314844,
    "Description": "",
    "FunctionArn": "arn:aws:lambda:us-east-1:549357536367:function:lamLocal",
    "FunctionName": "lamLocal",
    "Handler": "trcp.lam",
    "LastModified": "2019-05-01T22:46:44.426+0000",
    "MemorySize": 128,
    "ResponseMetadata": {
        "HTTPHeaders": {
            "connection": "keep-alive",
            "content-length": "675",
            "content-type": "application/json",
            "date": "Wed, 01 May 2019 22:46:44 GMT",
            "x-amzn-requestid": "fe6d9520-6c62-11e9-a5fb-27bb7e5a8a89"
        },
        "HTTPStatusCode": 200,
        "RequestId": "fe6d9520-6c62-11e9-a5fb-27bb7e5a8a89",
        "RetryAttempts": 0
    },
    "RevisionId": "08722516-76c6-4b96-a9ab-dd6f89fadf1f",
    "Role": "arn:aws:iam::549357536367:role/service-role/lamLocal-role-gdmmx0de",
    "Runtime": "python3.6",
    "Timeout": 3,
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "Version": "$LATEST",
    "VpcConfig": {
        "SecurityGroupIds": [],
        "SubnetIds": [],
        "VpcId": ""
    }
}

In case of error, update_function_code throws an exception and does not return the JSON value.

[ec2-user@ip-172-31-80-17 lambdas]$ python3 uplam.py
Traceback (most recent call last):
  File "uplam.py", line 18, in 
    ZipFile = f.read()
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357,
    in _api_call return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661,
    in _make_api_call raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred 
    (ResourceNotFoundException) 
    when calling the UpdateFunctionCode operation: 
    Function not found: 
    arn:aws:lambda:us-east-1:549357536367:function:zamLocal

Other Languages

I have demonstrated Lambda upload using python3 and boto3. The upload program could just as well have been written in any of these languages:

  • Java
  • .NET
  • Node.js
  • PHP
  • Ruby
  • Go
  • C++

Each of these APIs has access to the Lambda service and provides a method corresponding to UpdateFunctionCode.

Conclusion

An introduction to Lambda functions will lead the user to the AWS Management Console. From there, code entry methods are inline editor, upload .zip file from PC, and upload .zip file from S3. Instead, you may upload your code by using the AWS CLI. Finally, you may upload a Lambda function from within a python script, or a program written in any of seven other languages.