Trace only one logon on AWS RDS Oracle

By Brian Fitzgerald

Introduction

Given an Oracle RDS database, multiple application servers and multiple connections, the task at hand is to

  • Create a logon trigger that will …
  • Run when the first connection arrives and not again after that, and …
  • Run trace 10046 …
  • Up to 10 MB only.
  • Identify the trace file
  • Download the trace file
  • Run tkprof

Background

  • A logon trigger fires for every login that matches the firing condition. If we trace every session, we could fill the file system. We only want one example trace file.
  • In RDS, you do not have direct access to the operating system. You must somehow copy out the trace file.

Let’s go!

Grant

The first thing you need to get right is the grant. Trace requires “alter session” rights, which must be granted directly, not through a role. Suppose the user is “inuser”

grant alter session to inuser;

Create a sequence

create sequence inuser.user_trace_trg_seq;

Create the trigger

The trigger must be owned by the user that needs to be traced.

create or replace trigger inuser.user_trace_trg
after logon on database
when ( user = 'INUSER' )
declare
  l_val number;
begin
  l_val := inuser.user_trace_trg_seq.nextval;
  if l_val = 1
  then
    execute immediate
    q'{alter session set max_dump_file_size = '10m'}';
    execute immediate
    q'{alter session set tracefile_identifier = '}'||user||q'{_user_trace_trg'}';
    execute immediate
    q'{alter session set events '10046 trace name context forever, level 12'}';
  end if;
end;
/

Now wait for the user to log in. The first time the trigger fires, nextval equals 1, so the trigger body runs. After that, the trigger body does not run.

Identify the trace file

select * from table(
  rdsadmin.rds_file_util.listdir('BDUMP')
)
where filename like '%INUSER_user_trace_trg.trc'
order by mtime;
FILENAME TYPE FILESIZE MTIME
ORCL_ora_9022_INUSER_user_trace_trg.trc file 10485856 2022-04-17 00:17:04

“Download” the trace file

You could transfer the file from RDS to S3, and then from S3 to a local system. The other way is to read it directly:

select text from table(
  rdsadmin.rds_file_util.read_text_file(
   :dir,
   :filename
  )
)

Using sqlalchemy and boto3, I have integrated this statement with AWS Secrets Manager in a convenient python script.

$ download-rds-ora-dir-files.py --secret-id $sec --dir BDUMP --pat ORCL_ora_9022_INUSER_user_trace_trg.trc
number of files: 1
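
For readers who want to roll their own, here is a minimal sketch of the same idea (not the actual download-rds-ora-dir-files.py script). It assumes a Secrets Manager secret named my-rds-secret holding username, password, host, port, and dbname keys; both the secret id and that layout are assumptions.

from json import loads
from boto3 import client
from sqlalchemy import create_engine, text

# Hypothetical secret id and secret layout.
sec = loads(
    client('secretsmanager').get_secret_value(SecretId='my-rds-secret')['SecretString']
)

eng = create_engine(
    'oracle+cx_oracle://%s:%s@%s:%s/?service_name=%s' % (
        sec['username'], sec['password'], sec['host'], sec['port'], sec['dbname']
    )
)

sql = text(
    "select text from table(rdsadmin.rds_file_util.read_text_file(:dir, :filename))"
)

fname = 'ORCL_ora_9022_INUSER_user_trace_trg.trc'
with eng.connect() as conn, open(fname, 'w') as out:
    # Each returned row is one line of the trace file.
    for row in conn.execute(sql, dir='BDUMP', filename=fname):
        out.write(row[0] + '\n')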

You could also run sqlplus and spool the output to a file.

set feedback off
set linesize 32767
set trimspool on
set pagesize 0

set termout off
spool ORCL_ora_9022_INUSER_user_trace_trg.trc

select text from table(
    rdsadmin.rds_file_util.read_text_file(
        'BDUMP',
        'ORCL_ora_9022_INUSER_user_trace_trg.trc'
    )
)
;

Run tkprof

tkprof ORCL_ora_9022_INUSER_user_trace_trg.trc ORCL_ora_9022_INUSER_user_trace_trg.tkp

Done!

Conclusion

The following concepts were covered

  • How to make a logon trigger body that runs only once.
  • Permissions and object ownership for a logon trigger.
  • How to download a trace file from RDS.

Session objects in python APIs

By Brian Fitzgerald

Introduction

Database administrators understand “session” to mean an established database connection. It may come as a surprise that “session”, in several database-related libraries, refers to a data structure in the absence of any established connection.

Database session

A database session is an instance memory structure that holds an established connection’s state. The session contains facts such as a unique session id and the user name. The database session exists only after the transport (network connection) is established, the user has authenticated, and the memory structures have been allocated. The client application uses an API which holds a handle to the server side session. Application code uses the database API handle to issue commands and to receive responses and query results. The API is layered on top of a database driver, which associates the API handle with an operating system or network connection. In this way, the client application connects to a server session via an API and a database driver. In databases, then, a client “handle” connects to a server “session”.

Other meanings of “session”

DBAs work with software APIs, such as python libraries, that do not establish a connection while allocating what they call a “session”. I will mention three APIs: requests, sqlalchemy, and boto3, in which “session” refers to a data structure that is not always associated with a connection.

Requests

Requests is a python HTTP library. HTTP is a stateless protocol. Each request to the HTTP server can be implemented by a separate, new network connection that is disconnected after each call. In the following example, the call makes a TCP connection to the server, sends a request, receives a response, and closes the connection.

>>> import requests
>>> from json import loads
>>> r = requests.get('https://httpbin.org/get?q=Hello')
>>> type(r)
<class 'requests.models.Response'>

Let’s verify that no connection remains. Suspend your python repl by pressing ctrl-Z.

>>>
[1]+ Stopped python3
$

Is requests still connected?

$ lsof -n -p $(pgrep python3) | grep https

No output here means that requests is no longer connected.

Requests offers an advanced option that preserves state and a connection across calls.

>>> from requests import Session
>>> ses = Session()
>>> type(ses)
<class 'requests.sessions.Session'>
>>> ses.get('https://httpbin.org/get?q=Hello')
<Response [200]>

Is ses still connected? Let’s see.

$ lsof -n -p $(pgrep python3) | grep https
python3 3387 bf2190 3u IPv4 125425570 0t0 TCP 10.130.99.84:36878->52.86.136.68:https (ESTABLISHED)

Yes. The Session is connected to the https server. If you make another call within a short time, requests will make that call on the same socket. If the connection is idle for approximately 2.5 minutes, then requests will close the connection. If you make another call after that, requests will open a new socket.

To summarize, a requests Session is a client side handle that is not connected when it is created, but gets connected on first use. The session remains connected during use, and disconnects after an idle timeout.
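
As a hedged aside, a requests Session also supports the context manager protocol. This minimal sketch (reusing the httpbin.org endpoint from above) closes the pooled connections explicitly on exit from the with block instead of waiting for the idle timeout.

from requests import Session

with Session() as ses:
    r1 = ses.get('https://httpbin.org/get?q=Hello')
    r2 = ses.get('https://httpbin.org/get?q=World')  # reuses the pooled connection
    print(r1.status_code, r2.status_code)
# Here the underlying sockets have been closed by Session.close().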

SQL Alchemy

SQL Alchemy is a python SQL toolkit and object relational mapper. Two objects used for establishing a connection are the Engine and the Session. The Engine contains information about the driver and the connection string.

from sqlalchemy import create_engine

eng = create_engine(
    connection_string,
    max_identifier_length=128
)
print(type(eng))
<class 'sqlalchemy.engine.base.Engine'>

The Session is an object that will eventually hold an established connection.

from sqlalchemy.orm import sessionmaker
db_sess = sessionmaker(bind=eng)()
print(type(db_sess))
<class 'sqlalchemy.orm.session.Session'>

So far, the Session is not connected, but will be after a call to connection()

conn = db_sess.connection()
print(type(conn))
<class 'sqlalchemy.engine.base.Connection'>

conn holds an established connection. The other way to connect is to simply use the session.

from sqlalchemy import text

for row in db_sess.execute(text('select dummy from dual')):
    print('%s' % row['dummy'])
X

Now db_sess holds an established connection. In conclusion, the SQLAlchemy Session is an object that is not initially connected. The Session connects by a call to connection() or on first use.

boto3

boto3 is the AWS python library. It is used to manage AWS services. One such service is Relational Database Service (RDS). In boto3, you make API calls to manage your environment. For example, you can list all RDS db instance details by calling describe_db_instances. To make a call, you must first allocate a boto3 client. The constructor for the RDS service is client('rds'). You can call the constructor and run describe directly like so:

>>> from boto3 import client
>>> len(client('rds').describe_db_instances()['DBInstances'])
30

The output from describe_db_instances is verbose, but the results are not needed for this example, so len is used here to summarize the result (30 db instances). Because describe was called directly from the constructor, the client object goes out of scope immediately, and the connection is dropped:

>>> 
[1]+ Stopped python3 
$ lsof -n -p $(pgrep python3) | grep https 
$

In the next example, the boto3 client is saved to variable rds

>>> from boto3 import client
>>> rds = client('rds')
>>> type(rds)
<class 'botocore.client.RDS'>
>>> len(rds.describe_db_instances()['DBInstances'])
30
>>>
[1]+ Stopped python3
$ lsof -n -p $(pgrep python3) | grep https
python3 7128 bf2190 3u IPv4 125436458 0t0 TCP 10.130.99.84:60276->52.46.159.85:https (ESTABLISHED)

In this case, the client remains connected after the results are returned. After approximately 2 minutes of idle time, boto3 closes the connection. If you make another call after that, boto3 opens a new connection. In the next case, the client is within the scope of a function. When the function returns, the client goes out of scope, and the connection gets dropped.

>>> from boto3 import client
>>>
>>> def desc():
...     rds = client('rds')
...     rds.describe_db_instances()['DBInstances']
...
>>> desc()
>>>
[1]+ Stopped python3
$ lsof -n -p $(pgrep python3) | grep https
$

In the final example, the client rds is created in the outer scope. When the function returns, the client remains in scope and connected to the AWS server.

>>> from boto3 import client
>>>
>>> rds = client('rds')
>>> def desc():
...     rds.describe_db_instances()['DBInstances']
...
>>> desc()
>>>
[1]+ Stopped python3
$ lsof -n -p $(pgrep python3) | grep https
python3 7631 bf2190 3u IPv4 125437676 0t0 TCP 10.130.99.84:42598->54.239.31.3:https (ESTABLISHED)

The boto3 session

The boto3 Session is not a connection at all. It is an object that holds access credentials. The Session creates objects that connect to AWS, particularly the Client. You can use the session to manage across multiple regions in an account, multiple accounts, multiple users, multiple roles, or across tokens acquired at different times. You can use the session to override the access keys in your profile, in your environment, or in your service account.

In this example, we loop over two regions and count the number of option groups. We allocate a list of sessions:

>>> from boto3.session import Session
>>>
>>> AWS_ACCESS_KEY_ID="ASIAW3NNBKAMBK2CUGNE"
>>> AWS_SECRET_ACCESS_KEY="8V7Qaama1Yvl6ADRfrFI0vNTegx+V56S3sY5/uX0"
>>> AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjENz//////////wEaCXVzLWVhc3QtMSJIMEYCIQDOnKeV4UGWCVxuSvmD+O1FPUJHbxZOCPa/njhGlxBo1gIhAIRky8kx0jUNl40dg0NKGHjn1aOjYKe9Hdhs6msAlmq7KokDCFQQARoMNDcxMTk4NjgzMTYwIgzXqUo5F7yN89PdSxkq5gL1HhFXOOjVa8Irz8fVwUazE4VAKVtYthND8bXkqTLm1BE3x0hbD54rzpae80UxmcCnPv2Ec3WH8L6N2a7AvqSfhAJa34pzqJ+AcLy4blcdlbwE8I7d8ds/rvoNFiowEf5EhOaP41ZOdtngzdIUN1byE01Vsij8l9Ai74iE5DUnFqmIajtbr2T0Ah9a5ovQmBBB5nyjhY0DBcV9Bj0B6R8swoDnd6NBDcdcxQNvPqzJfwHsqrOYNJVzn1/FQLj86sgwTumHa1WyEugKWO3tlgEuOb+4Y3SqaMQ/vgC7OV+cdKfRzCrK/H/XYkWliVt7Ct1fQ8sGkEyxBlxyemgfvXybJnrXmEx5YisIO4LFMAkpzqsKYNTNybEwDKZmOAN3geP2c2QxoGNGHZ0NcT9nXDmtwHFuO0x2OlPf/qYmdZmlFKIRqBbGNCOPaJ5YgwjuGtx0TqULwKpNlYuF9WkZz5VGvFo6C1MSMLnpupEGOqUBmaqzHmvTykPbaH03SR5J1pKIvHI/XVMcvaThT2UEl7xQhwY0OE/AwAgP0+dtm0G6Mt7I6F2oZ5x6S2gnFHlxb5ctrgC8CRavVkks+ehHM8iAeBIm8CMk4Cb9jFBWj6jXldhHZDOCnOaqVAFSOlTaqXD3ZPlSRW1iyN1vOrmHIMJsGKeb6PiaGz0a9omCE/HCewQkcqYJXBhaTHSRUF1oJL/+GCvw"
>>>
>>>
>>> sess = []
>>> for region in 'us-east-1 us-west-1'.split():
... ses = Session(
... aws_access_key_id=AWS_ACCESS_KEY_ID,
... aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
... aws_session_token=AWS_SESSION_TOKEN,
... region_name=region
... )
... print(type(ses))
... sess.append(ses)
...
<class 'boto3.session.Session'>
<class 'boto3.session.Session'>

Next, we loop over the sessions and run describe off a client created from each session.

>>> for ses in sess:
...     res = ses.client('rds').describe_option_groups()['OptionGroupsList']
...     print(len(res))
...
4
0
>>>
[1]+ Stopped python3
$ cat ../pgrep.bash
#!/bin/bash
lsof -n -p $(pgrep python3) | grep https
$ ../pgrep.bash
$

There are 4 option groups in us-east-1 and 0 in us-west-1. The describe ran directly off the client constructor. The client went out of scope once the describe finished, so no connection remains. Instead, let’s allocate a list of clients.

>>> from boto3.session import Session
>>>
>>> AWS_ACCESS_KEY_ID="ASIAW3NNBKAMBK2CUGNE"
>>> AWS_SECRET_ACCESS_KEY="8V7Qaama1Yvl6ADRfrFI0vNTegx+V56S3sY5/uX0"
>>> AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjENz//////////wEaCXVzLWVhc3QtMSJIMEYCIQDOnKeV4UGWCVxuSvmD+O1FPUJHbxZOCPa/njhGlxBo1gIhAIRky8kx0jUNl40dg0NKGHjn1aOjYKe9Hdhs6msAlmq7KokDCFQQARoMNDcxMTk4NjgzMTYwIgzXqUo5F7yN89PdSxkq5gL1HhFXOOjVa8Irz8fVwUazE4VAKVtYthND8bXkqTLm1BE3x0hbD54rzpae80UxmcCnPv2Ec3WH8L6N2a7AvqSfhAJa34pzqJ+AcLy4blcdlbwE8I7d8ds/rvoNFiowEf5EhOaP41ZOdtngzdIUN1byE01Vsij8l9Ai74iE5DUnFqmIajtbr2T0Ah9a5ovQmBBB5nyjhY0DBcV9Bj0B6R8swoDnd6NBDcdcxQNvPqzJfwHsqrOYNJVzn1/FQLj86sgwTumHa1WyEugKWO3tlgEuOb+4Y3SqaMQ/vgC7OV+cdKfRzCrK/H/XYkWliVt7Ct1fQ8sGkEyxBlxyemgfvXybJnrXmEx5YisIO4LFMAkpzqsKYNTNybEwDKZmOAN3geP2c2QxoGNGHZ0NcT9nXDmtwHFuO0x2OlPf/qYmdZmlFKIRqBbGNCOPaJ5YgwjuGtx0TqULwKpNlYuF9WkZz5VGvFo6C1MSMLnpupEGOqUBmaqzHmvTykPbaH03SR5J1pKIvHI/XVMcvaThT2UEl7xQhwY0OE/AwAgP0+dtm0G6Mt7I6F2oZ5x6S2gnFHlxb5ctrgC8CRavVkks+ehHM8iAeBIm8CMk4Cb9jFBWj6jXldhHZDOCnOaqVAFSOlTaqXD3ZPlSRW1iyN1vOrmHIMJsGKeb6PiaGz0a9omCE/HCewQkcqYJXBhaTHSRUF1oJL/+GCvw"
>>>
>>> rdss = []
>>> for region in 'us-east-1 us-west-1'.split():
... rds = Session(
... aws_access_key_id=AWS_ACCESS_KEY_ID,
... aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
... aws_session_token=AWS_SESSION_TOKEN,
... region_name=region
... ).client('rds')
... rdss.append(rds)
...

Now loop over the clients.

>>> for rds in rdss:
...     res = rds.describe_option_groups()['OptionGroupsList']
...     print(len(res))
...
4
0
>>>
[1]+ Stopped python3
$ ../pgrep.bash
python3 6587 ec2-user 3u IPv4 26002 0t0 TCP 10.0.16.52:40794->52.119.197.147:https (ESTABLISHED)
python3 6587 ec2-user 4u IPv4 26004 0t0 TCP 10.0.16.52:39852->176.32.118.192:https (ESTABLISHED)

Because the list is in the outer scope, the clients remain in place after the describe finishes, and the clients remain connected to the AWS server.

One Session maps to one account and region, but that Session can create multiple Clients, and each Client can connect to AWS. In conclusion, the boto3 Session is an object containing the credentials needed to create client objects that can connect to AWS. The boto3 Session itself does not connect to the AWS server.
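
To make that concrete, here is a minimal sketch (credentials taken from the environment or profile rather than explicit keys) of one Session handing out two different clients, each of which opens its own connection to AWS on first use.

from boto3.session import Session

# One Session: one set of credentials, one region.
ses = Session(region_name='us-east-1')

rds = ses.client('rds')   # each client connects to AWS on first use
ec2 = ses.client('ec2')

print(len(rds.describe_db_instances()['DBInstances']))
print(len(ec2.describe_instances()['Reservations']))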

Conclusion

When I started applying python to my database administration job, I ran across various APIs that used a “session” that was not a connection to the server. In the three APIs mentioned here, requests, SQL Alchemy, and boto3, the Session is not itself a connection, but it can connect and remain connected for a time. The SQL Alchemy Session can be used to create a connection to a database. The boto3 Session can create a client that connects to the AWS server.

pagination and python 3.8 assignment expressions in AWS boto3

By Brian Fitzgerald

Introduction

AWS query operations return their results all at once, or in smaller, manageable pages. boto3 provides the programmer with two ways to manage the retrieval of paginated results.

Python version 3.8 introduces assignment expressions, a way to assign to variables within an expression using the notation := expr. PEP-572 explains how assignment expressions are useful in control statements, such as if and while, and in comprehensions. Assignment expressions can be used to improve coding style in pagination code.
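
As a minimal, standalone illustration (unrelated to boto3 and not from the original post), an assignment expression lets a loop assign and test a value in one step; the file name here is arbitrary.

total = 0
with open('/etc/hostname', 'rb') as f:
    # Assign chunk and test it for emptiness in a single expression.
    while (chunk := f.read(8192)):
        total += len(chunk)
print(total)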

describe

To query an Amazon AWS service, one calls an API using a procedure with a name beginning with “describe”, or sometimes “list” or “get”. The result is a dictionary with at least two elements, namely the payload, and ResponseMetadata. If more results are pending, NextToken will appear. The payload is your query result and consists of an array of objects. Examples of payload data include EC2 instance Reservations, EC2 EBS Volumes, and RDS DBInstances.

Using python API boto3, you can write:

import boto3
ec2 = boto3.client('ec2')
resp = { 'NextToken' : '' }
while 'NextToken' in resp:
    resp = ec2.describe_instances(
        NextToken = resp['NextToken']
    )
    for resv in resp['Reservations']:
        for inst in resv['Instances']:
            print ( inst['InstanceId'])

resp is a dict, and appears four times in the above code:

  • initialization with a dummy NextToken
  • returned from describe_instances
  • used to get describe_instances argument NextToken
  • used to get Reservations

The dictionary keys are Reservations, ResponseMetadata, and when needed, NextToken.

python 3.8 assignment expression

In python 3.8, we can assign to a variable, and then use the assignment as an expression. Specifically, we can assign resp, and access element Reservations.

import boto3
ec2 = boto3.client('ec2')
resp = { 'NextToken' : '' }
while 'NextToken' in resp:
    for resv in (resp := ec2.describe_instances(
        NextToken = resp['NextToken']
    ))['Reservations']:
        for inst in resv['Instances']:
            print ( inst['InstanceId'])

There is one fewer line. Some may find the revised code appealing.

paginator

AWS boto3 provides paginators, python iterators, that handle NextToken for you.

import boto3
ec2 = boto3.client('ec2')
pagr = ec2.get_paginator('describe_instances')
for page in pagr.paginate():
    for resv in page['Reservations']:
        for inst in resv['Instances']:
            print( inst['InstanceId'])

There are two fewer lines. NextToken does not appear in the code.

Paginate returns the same results as the prior “describe” type methods. The returned value is a dict, again with keys Reservations, ResponseMetadata, and when needed, NextToken. However, the returned NextToken has no role in the new code.
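
As a follow-up sketch, because every page has the same shape, the pages can be flattened into a single list in one comprehension.

import boto3

ec2 = boto3.client('ec2')
pagr = ec2.get_paginator('describe_instances')

# Collect every InstanceId across all pages into one list.
instance_ids = [
    inst['InstanceId']
    for page in pagr.paginate()
    for resv in page['Reservations']
    for inst in resv['Instances']
]
print(len(instance_ids))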

paginator relationship to API call

A debug trace of describe_instances() shows an underlying call to client.py, line 357, function _api_call().

(Pdb) where
  /usr/lib64/python3.7/bdb.py(585)run()
-> exec(cmd, globals, locals)
  <string>(1)<module>()
  /home/ec2-user/git/aws/insts.desc.3.7.py(8)<module>()
-> NextToken = resp['NextToken']
  /usr/local/lib/python3.7/site-packages/botocore/client.py(357)_api_call()
-> return self._make_api_call(operation_name, kwargs)

Debug trace of paginate approach shows a call to the same point via paginate.py.

(Pdb) where
  /usr/lib64/python3.7/bdb.py(585)run()
-> exec(cmd, globals, locals)
  <string>(1)<module>()
  /home/ec2-user/git/aws/insts.page.3.7.py(6)<module>()
-> for page in pagr.paginate():
  /usr/local/lib/python3.7/site-packages/botocore/paginate.py(255)__iter__()
-> response = self._make_request(current_kwargs)
  /usr/local/lib/python3.7/site-packages/botocore/paginate.py(332)_make_request()
-> return self._method(**current_kwargs)
  /usr/local/lib/python3.7/site-packages/botocore/client.py(357)_api_call()

You can see where paginate is handling NextToken for you. For example, this code:

NextToken = resp['NextToken']

is handled behind the scenes as:

> /usr/local/lib/python3.7/site-packages/botocore/paginate.py(303)__iter__()
-> previous_next_token = next_token
(Pdb) p next_token
{'NextToken': 'eyJ2IjoiMiIsImMiOiJ5eHV0K011N2crQlBBaFhoSWU2SUpad0c3V3VaUFBvKzBPbDRIWFAvaXJSb3poeDFDNks3TkxGMkU5R1UxRjk4UlVnNFViRzNjSUlWWXhMbHk3ejU1Qjd1ZGhERHNBVktCR1g0cW5RZk9FdStZckViM0NjOXljV1p0SWplckhkV2ZISkNvc0NXdjhnMXA4RVBMWDFiVzNkS3k1NW5CdlZmUlhWUEpzeUZNbnhMS3VzdEo4eHFIWHRYNytpcEdWbHJKMFRqTlNLQ3A0Rk9VaEZGckdBMTVOYU44WGhvYkYyZVBBYjRrMVVaYXNFTCIsInMiOiIxIn0='}

The describe and paginate approaches are functionally equivalent. Neither one has a performance advantage. Using paginate leads to cleaner code.

limiting the results (cloud side)

You can limit the number of items returned thus:

...
for page in pagr.paginate(PaginationConfig={'MaxItems': 250}):
...

In that case, the total number of items returned will be at most 250. The limiting is done on the cloud side. The items could be returned across more than one page.
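
Here is a complete, minimal sketch of the MaxItems configuration. Note that for describe_instances the counted items are Reservations, so the loop below tallies reservations rather than instances.

import boto3

ec2 = boto3.client('ec2')
pagr = ec2.get_paginator('describe_instances')

# The service stops returning results after roughly 250 items, which may
# still arrive spread across several pages.
count = 0
for page in pagr.paginate(PaginationConfig={'MaxItems': 250}):
    count += len(page['Reservations'])
print('reservations returned: %s' % count)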

limiting the results (client side)

Suppose you want all the ec2 instance types that have memory less than or equal to 8 GB. Two problems:

  1. AWS pricing has no filter for memory.
  2. The AWS pricing filter has no “less than” operator.

In that case, you need to retrieve your products, and apply additional filtering in client side code.

#!/home/ec2-user/sw/python/3.8/bin/python3.8

from boto3 import client
from pfilt import Pfilt
from json import dumps, loads


class ResultsLimitExceededError(Exception):
    def __init__(self, limit):
        fmt = format('results limit %s exceeded')
        msg = (fmt % limit)
        super(ResultsLimitExceededError, self).__init__(msg)


class Prices:

    @classmethod
    def instancetypes(cls):
        cli = client('pricing')
        pag = cli.get_paginator('get_products')

        max_products_in = 1000
        max_products_out = 100
        num_products_in = 0
        num_products_out = 0
        for page in pag.paginate(
                ServiceCode='AmazonEC2',
                Filters=Pfilt.filters,
                PaginationConfig={'MaxItems': max_products_in}
        ):
            print('page size %s' % len(page['PriceList']))
            for skitem in [loads(itm) for itm in page['PriceList']]:
                num_products_in += 1
                if (filtered_skitem := cls.client_filter(cls.enrich(skitem))) is not None:
                    if (num_products_out := num_products_out + 1) > max_products_out:
                        raise ResultsLimitExceededError(max_products_out)
                    cls.processitm(filtered_skitem)

        print('number of products in %s' % num_products_in)
        print('number of products out %s' % num_products_out)

    @classmethod
    def enrich(cls, skitem):
        eskitem = skitem
        product = skitem['product']
        memGB = float(product['attributes']['memory'].split()[0].replace(',', ''))
        eskitem['memGB'] = memGB
        return eskitem

    @classmethod
    def client_filter(cls, eskitem):
        maxmem = 8
        return eskitem if eskitem['memGB'] <= maxmem else None

    @classmethod
    def processitm(cls, skitem):
        pass


if __name__ == '__main__':
    Prices.instancetypes()

In this case, we set a cloud-side limit of 1000 records, and a client-side limit of 100 records.

Example output:

$ ./prices.pag.py
page size 100
page size 100
page size 39
number of products in 239
number of products out 33

You could manage your client-side memory by reducing the page size:

PaginationConfig={'MaxItems': max_products_in, 'PageSize': 50}

Setting PageSize does not affect the results. The output, then, is:

page size 50
page size 50
page size 50
page size 50
page size 39
number of products in 239
number of products out 33

To demonstrate the exception, we can change:

max_products_out = 20

in that case, we get:

$ ./prices.pag.py
Traceback (most recent call last):
  File "./prices.pag.py", line 62, in 
    Prices.instancetypes()
  File "./prices.pag.py", line 36, in instancetypes
    raise ResultsLimitExceededError(max_products_out)
__main__.ResultsLimitExceededError: results limit 20 exceeded

We used the new Python 3.8 assignment expression in two places:

if (filtered_skitem := cls.client_filter(cls.enrich(skitem))) is not None:
    if (num_products_out := num_products_out + 1) > max_products_out:

We assign filtered_skitem and then test it. Likewise, we assign num_products_out and test it.

pagination availability

Not all AWS operations can be paginated — it depends on the service. If an operation cannot be paginated, it will not return NextToken. If an operation cannot be paginated, you may have issues managing the results. You can use this script to find out if an operation can be paginated.

#!/usr/bin/python

from boto3 import client
from argparse import ArgumentParser


class CanPag:
    args = None

    @classmethod
    def prs(cls):
        ap = ArgumentParser(
            description='Check whether an operation can paginate'
        )
        ap.add_argument(
            '--service', '-s', required=True,
            help='AWS service name (ec2, s3, pricing, etc.)'
        )
        ap.add_argument(
            '--operation', '-o', required=True,
            help='AWS service operation (describe_instances, etc.)'
        )

        cls.args = ap.parse_args()

    @classmethod
    def canpage(cls):
        cli = client(cls.args.service)
        cp = cli.can_paginate(cls.args.operation)
        print(
            '%s %s %s paginate.' % (
                cls.args.service,
                cls.args.operation,
                'can' if cp else 'cannot')
        )


if __name__ == '__main__':
    CanPag.prs()
    CanPag.canpage()

Examples:

$ ./can.paginate.py -s ec2 -o describe_instances
ec2 describe_instances can paginate.
$ ./can.paginate.py -s ec2 -o describe_volumes
ec2 describe_volumes can paginate.
$ ./can.paginate.py -s ec2 -o describe_images
ec2 describe_images cannot paginate.
$ ./can.paginate.py -s pricing -o get_products
pricing get_products can paginate.

Conclusion

I wrote this article for two reasons: To identify a good paginator coding practice, and to try out the new Python 3.8 assignment expression.

AWS queries are often returned in chunks, or pages. You could write your own code to manage the retrieval, but you are better off using the provided paginator. You can configure the paginator with MaxItems and PageSize. Use can_paginate() to find out which operations can be paginated.

Python version 3.8 assignment expression was demonstrated. Three cases were presented. In each case, the pattern was “assign and test”. The assigned variables in the examples were resp, filtered_skitem, and num_products_out. The assigned value was needed elsewhere in the routine.

grid 19c install with ASM filter driver

Introduction

Starting in Oracle 12.2, the grid installation experience changed in two major ways:

  1. In the past, you unzipped the binaries into a download folder and executed runInstaller. Oracle Universal Installer (OUI) copied the binaries to your grid oracle home. In the new procedure, you unzip the grid binaries into the grid home and then run gridSetup.sh to configure the software in-place.
  2. Instead of configuring asmlib, you are going to initialize the disks using ASM Filter Driver (AFD).

These changes raise new organizational issues and introduce a new, critical step.

Scope

The scope of this article is:

  • Installation to cloud virtual machine (Azure or AWS EC2)
  • Oracle Restart (Not RAC)
  • x86_64 hardware
  • Linux 7
  • udev is not considered
  • New install, not an upgrade
  • No preexisting asmlib

Grid infrastructure in the cloud

RAC

The main purpose of Oracle Grid Infrastructure was to support Real Application Clusters (RAC). RAC requires sharing storage volumes across hosts, which most cloud providers disallow. An iSCSI server can be used to share drives, but at the cost of added complexity. RAC deployments are, therefore, less common in the cloud.

ASM

Oracle Grid Infrastructure is extremely useful, even without RAC. Automatic Storage Management is a main component of grid. Some benefits of ASM are:

  • Convenient, consolidated, managed storage
  • Efficient RMAN and Data Guard administration
  • Monitoring and managing space in ASM using Oracle Enterprise Manager

restart

An Oracle standalone (non-RAC) database that is running on grid infrastructure is known as a “restart”. You use grid to manage oracle startup and shutdown, and to monitor the database instance. Oracle restart works very well in the cloud.

Organizational issues

Previously, during the operating system setup, asmlib could be configured along with other root steps, prior to grid installation. However, AFD labeling must wait until grid installation has begun. In an organization with segregation of SA and DBA duties, the DBA is going to need to schedule three separate tasks for the SA.

  1. Set up the operating system
  2. Label ASM disks using AFD
  3. Run root.sh during the grid setup

Instead of personal handoffs, you can do one of these:

  1. Have one person with root and grid access perform all steps.
  2. Use an automated deployment tool.
  3. Run a script as root with some steps su’d to grid.

This blog article covers manual command line steps as root or grid, and use of the gridSetup.sh X windows GUI.

Steps leading up to AFD

The steps leading up to ASM disk labeling are, briefly:

  • Select a compatible machine image (Linux on x86_64)
  • Create the OS (ex: 2 CPU, 8G RAM, 32G operating system disk)
  • Update and install packages
  • Configure the operating system (swap, sysctl.conf)
  • Create the grid user and directories
  • Download and unzip the software

Now you are ready to configure your ASM disks.

Disk allocation

In the cloud, you can allocate new disks from the web console. You can also use a command line tool. Finally, you can write a script to do the job, in python for example, as sketched below. You can select from among the available types (hard drive or SSD), you can specify the size, and in some cases, you can specify the IOPS.
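
As a hedged sketch of the scripted approach on AWS, boto3 can allocate a volume and attach it to a running instance. The availability zone, size, instance id, and device name below are placeholders, not values from this install.

from boto3 import client

ec2 = client('ec2')

# Allocate a 64 GiB gp2 volume in a chosen availability zone.
vol = ec2.create_volume(
    AvailabilityZone='us-east-1a',
    Size=64,
    VolumeType='gp2'
)
ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])

# Attach it to a running instance under a chosen device name.
ec2.attach_volume(
    VolumeId=vol['VolumeId'],
    InstanceId='i-0123456789abcdef0',
    Device='/dev/sdc'
)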

You can attach a disk to a live system, and you can detach a disk from a live system. Before allocating new disks, list the existing disks

[root@grid19c ~]# df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 33025276 12824288 20200988 39% /
[root@grid19c ~]# swapon
NAME TYPE SIZE USED PRIO
/mnt/resource/swapfile file 2G 0B -2
[root@grid19c ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 0 Aug 23 12:40 /dev/sda
brw-rw----. 1 root disk 8, 1 Aug 23 12:40 /dev/sda1
brw-rw----. 1 root disk 8, 2 Aug 23 12:40 /dev/sda2
brw-rw----. 1 root disk 8, 16 Aug 23 12:40 /dev/sdb
brw-rw----. 1 root disk 8, 17 Aug 23 12:40 /dev/sdb1

AWS

We are mainly interested in Linux systems that are available in the AWS Marketplace. The virtualization type in the AWS Marketplace right now is predominantly Hardware Virtual Machine (HVM). In the systems that I have checked, the possible device names are /dev/xvd[a-z], or only 26 devices, which is not very flexible. The allowable device size range is 1 GiB to 16384 GiB. The largest SSD devices are burstable up to 64000 IOPS. In AWS, an attached device can be resized.

[screenshot: aws.disks]

AWS lets you choose the Linux device node name. You can retrieve the node name from the console, the command line, or the API. For example:

>>> from boto3 import resource
>>> resource('ec2').Volume('vol-0679cfe2a209db2ed').attachments[0]['Device']
'/dev/sdg'

Azure

In Azure, disks range in size from 32 GiB to 32767 GiB. A disk can be resized, and the type (HDD or SSD) may be modified only when the disk is unattached. The largest premium SSD offers IOPS up to 20000.

[screenshot: azure.disks]

Azure does not identify the new disk names for you. I have not found a way to retrieve the operating system node name from the Azure cloud. The closest I can get is to display the attachment status.

>>> from azure.common.credentials import get_azure_cli_credentials
>>> from azure.mgmt.compute import ComputeManagementClient
>>> cli = ComputeManagementClient(
... *get_azure_cli_credentials()
... )
>>> cli.disks.get(
... resource_group_name='ora',
... disk_name='asm-0'
... ).disk_state
'Attached'

This seems like a gap to me. You want a certain method to distinguish new disks from existing disks, and you want a positive method to identify each disk.

Identify the new disks

Example:

brw-rw----. 1 root disk 8, 32 Aug 23 12:40 /dev/sdc
brw-rw----. 1 root disk 8, 48 Aug 23 12:40 /dev/sdd
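
One simple, hedged way to make the identification positive is to snapshot the device set before attaching the new disks and diff it afterwards. A minimal sketch:

from glob import glob

def devices():
    # Single-letter device nodes only (sda, xvdc, ...), not partitions.
    return set(glob('/dev/sd?') + glob('/dev/xvd?'))

before = devices()
input('Attach the new disks now, then press Enter... ')
print('new devices:', sorted(devices() - before))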

AFD label

Prior to starting the grid install, you need to label the disk or disks that you intend to use for the first ASM disk group. We will use asmcmd afd_label. This is a critical step.

afd_label

To label your disks, sudo to root, and execute

asmcmd afd_label label devicenode --init

For example:

[root@grid19c ~]# export ORACLE_HOME=/u01/app/19.3.0/grid
[root@grid19c ~]# export ORACLE_BASE=/tmp
[root@grid19c ~]# cd /u01/app/19.3.0/grid/bin
[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init

  • ORACLE_BASE=/tmp avoids creating root-owned files under the grid oracle base.
  • Exercise care to follow this instruction exactly because an incorrect invocation may not print an error message at all.

separation of duties

If root steps are not done by the DBA, then the DBA should prepare a script for the AFD labeling. For example:

#!/bin/bash

export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_BASE=/tmp
cd /u01/app/19.3.0/grid/bin
./asmcmd afd_label DATA01_00001 /dev/sdc --init

afd_lslbl

If your disk discovery string is the default value ‘/dev/sd*’, as in Azure, you will be able to list your ASM disks with this simple command:

[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

On AWS, your disk string might be ‘/dev/xvd*’, so this might not work:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

In that case, you need to specify the disk string on the command line:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl '/dev/xvd*'
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After you run root.sh, you do not need to specify the disk discovery string.

[root@ip-172-31-25-179 bin]# ./asmcmd afd_lslbl
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After root.sh, afd_lslbl gets its disk string from file /etc/oracleafd.conf:

# cat /etc/oracleafd.conf
afd_diskstring='/dev/xvd*'

afd_unlabel

You can erase the AFD label with:

[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

Bugs and limitations

the afd_label --init option

If you leave out the --init option, afd_label might not write a label to your disk, but it will also print no error message, and will return a success code.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

You can use dd to verify that the disk is empty

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 00000000
*
0020000
1+0 records in
1+0 records out
8192 bytes (8,2 kB) copied, 0.032763 s, 25.0 MB/s

the afd_unlabel --init option

Likewise, if you run afd_unlabel without the --init option, asmcmd may silently not unlabel your disk.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc
[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

You can also use dd to view the label

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 1f75bc96
0000020 00000000 00000000 00000000 00000000
0000040 4c43524f 4b534944 41544144 305f3130
0000060 31303030 00000000 00000000 00000000
0000100 00000000 00000000 00000000 00000000
*
0000440 00000000 00000000 0500000a 5d5ee7d3
0000460 00000000 00000000 00000000 00000000
*
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied*
0020000

other utilities

In this context, other afd commands might run, produce no output, do nothing, and exit with a success code.

[root@grid19c bin]# ./asmcmd afd_state
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_configure
[root@grid19c bin]# echo $?
0

At this stage in the grid installation, only the afd_label command is needed. You can also run afd_unlabel and afd_lslbl. If you experiment with other commands or options, you may end up in a state that is difficult to diagnose.

grid installation directory

Oracle grid can be installed in virtually any directory. Initially, however, the path “/u01/app/19.0.0/grid” is hard-coded into scripts such as kfod. You cannot run kfod and other such scripts at this early stage. Later in the installation procedure, root.sh replaces such files with scripts that can run out of any installation directory.

[root@grid19c bin]# ./asmcmd
/u01/app/19.3.0/grid/bin/kfod: line 22: /u01/app/19.0.0/grid/bin/kfod.bin: No such file or directory
Use of uninitialized value $result[0] in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5982.
Use of uninitialized value $result[0] in split at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5985.
Use of uninitialized value $clus_mode in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5987.
Use of uninitialized value $clus_mode in concatenation (.) or string at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5988.
Use of uninitialized value $clus_mode in string eq at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5993.
...
ASMCMD> exit

conclusion

Most utilities are designed to run with full functionality after gridSetup.sh and root.sh have completed. One notable exception is the subset of asmcmd afd subcommands with very specific options.

Because of the bugs and limitations of the asmcmd AFD options, it is critical to be aware of them and, by all means, to get the AFD disk labeling correct the first time.

Grid setup

This section describes the grid setup, with emphasis on the ASM Disk Group screen

  • start X server (Xming) on Windows
  • On Linux, sudo to grid and cd to the grid home
  • Start gridSetup.sh
  • At the Configuration Options screen, select Standalone (Oracle Restart).
  • Click Next to go to the Create ASM Disk Group screen.

Here is where you are going to create your first ASM disk group. gridSetup.sh scans the operating system for labeled disks. Labeled disks matching “/dev/sd*” will appear in your display.

[screenshot: cr.asm.dg.azure]

On some AWS systems, you need to search for “/dev/xvd*”. Click “Change Disk Discovery Path”. For example:

[screenshot: ch.dsk.dsc.path]

[screenshot: cr.asm.dg]

You can change the disk group name, allocation unit size, and redundancy. Select the disk or disks that you want in your first disk group. Make sure Configure Oracle ASM Filter Driver is checked. Press Next.

Continuing the grid install

After the Create ASM diskgroup screen, installation can proceed smoothly

  • Complete the remaining screens
  • When prompted, run root.sh

After root.sh is finished, all utilities in grid $ORACLE_HOME/bin should operate with full functionality.

Summary

Oracle Grid Infrastructure substantially improves manageability. Grid is less complex to install than Oracle RAC, but includes a disk labeling step that requires attention to detail. The Oracle 19c grid installation requires configuration of the ASM Filter Driver prior to running gridSetup.sh. You can use cloud and Linux administrative tools to allocate storage and then use asmcmd to label the disks.

Install Oracle on Red Hat 8 attempt

Introduction

Here are a few issues you can run across if you attempt to install Oracle 19c on Red Hat Linux 8.

Issues

No python

python -V
-bash: python: command not found

solution:

yum -y install python3
alternatives --set python /usr/bin/python3

No xorg-x11-apps available

# yum install xorg-x11-apps
No match for argument: xorg-x11-apps

Solution:

Subscribe to Red Hat

.vimrc not read

When you start up vim, the settings in your .vimrc do not get read.

Solution:

mv .vimrc .virc

libnsl missing

/u01/app/19.0.0/grid/perl/bin/perl: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory

Solution:

yum -y install libnsl

strings missing

strings < /dev/null

-bash: /usr/bin/strings: No such file or directory

solution

yum -y install binutils

make missing

# make
-bash: make: command not found

Solution: install make

# yum install make

Fail supported OS check

grid install

./gridSetup.sh

[WARNING] [INS-08101] Unexpected error while executing the action at state: ‘supportedOSCheck’

[screenshot: supportedos]

database install

[screenshot: supportedosdb]

strace shows that the underlying command was:

/bin/rpm -q --whatprovides --qf %{V} redhat-release

Return:

8.0

Red Hat 8.0 is not supported. If you click “yes”, OUI does not proceed. It just waits at a blank screen until you exit.

Stopping here.

Conclusion

Red Hat 8 was released May 7, 2019. Oracle Grid 19c was developed and tested prior to 2019 and before Red Hat 8 was available. The Oracle 19c Grid installation manual does not mention Red Hat 8 as an allowable operating system. One should expect that Oracle Grid 19c Installation on Red Hat 8 will be troublesome and not supportable.

Serverless function learning environments across Amazon, Microsoft, and Google clouds

by Brian Fitzgerald

Introduction

If you want to dip your toe into serverless function programming, you will want to try it out in a simple web-based environment with all the needed syntax setup for you. That way, you can at least get to “Hello World!” without delay or error.

Across three cloud providers, Amazon, Microsoft, and Google, online edit availability varies across languages and operating systems. Here is a brief summary.

Amazon Web Services

AWS serverless functions, Lambda, are available in seven languages, C#, Go, Java, JavaScript, Powershell, Python, and Ruby. You can experiment with some simple coding by entering your choice of JavaScript, Python or Ruby code into the online code editor. If you want to use C#, Go, Java, or Powershell, you will have to develop and test your files outside Lambda, put them in a zip file, and upload the zip file. The Lambda console also accepts a jar file for upload. A Lambda java upload needs class and jar files, not java source files. Also, a jar file can contain bytecode compiled from other languages that run in a JRE, so, for example, you can write a Lambda in scala or clojure.

Saving code changes from the AWS Console is quick, usually under one second. Python code is saved without syntax checking. There is one quirk. Tabs in sources get copied to the clipboard as spaces. I refer to the Lambda Management console in Chrome on Windows.

You can export your function, and in that way, get your source files out after you have tested them.

A python Lambda function can return any data type that is JSON serializable, such as  dict, list, tuple, Boolean, scalars, None, and hierarchies of these, but not, for example, set, date, datetime, class, or object.
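
As a minimal sketch of what that means in practice, a Python Lambda handler can simply return a dict of JSON-serializable values; anything like a datetime has to be converted first.

from datetime import datetime

def lambda_handler(event, context):
    # A dict of scalars and lists is JSON serializable; a raw datetime is not,
    # so it is converted to a string first.
    return {
        'message': 'Hello World!',
        'when': str(datetime.now()),
        'numbers': [1, 2, 3]
    }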

Azure

Azure Functions are offered in five languages: C#, Java, JavaScript, Powershell, and Python. Azure functions can be administered online in the Azure portal. Azure offers a choice of Windows or Linux for your function, but online edit is only available for the Windows Function Apps. Python runs on Linux only, which rules out online edit. Creating Java functions is supported only by upload. Online edit, therefore, is available for C#, JavaScript, and Powershell.

An Azure function sits inside a FunctionApp. FunctionApp names must be unique across all Azure. You cannot name your Azure FunctionApp “spam” or “eggs”, and you cannot name your Azure FunctionApp “SpamAndEggs” unless I delete my Azure FunctionApp “SpamAndEggs”.

[screenshot: spamandeggs]

FunctionApp creation can take more than 1 minute. The function list displayed in the portal does not refresh when the function is ready, so you could miss the notification. Saving your code from the portal is almost instantaneous. Compile and run takes less than 1 second. You can zip and download your finished code by pressing Download app content.

Pressing tab in the online code editor inserts space characters, which will be less of a problem since you won’t be editing python source online.

Google

In Google Cloud Platform, you can create a Google Cloud Function. The language choices are Go, JavaScript, and Python, and you can enter all code using the online editor.

When you finish editing, you press “Deploy”, which can run for up to 1 minute. Syntax errors lead to failed deployment. While testing the code, you can view it read-only. If you want to make a change, you have to go back to the edit screen. You may download your finished code as a zip file.

Google Cloud function return type is limited to string, tuple, Response instance, or WSGI callable.
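
A minimal sketch of a Python HTTP Cloud Function that stays within those return types (the framework passes a Flask request object, and a plain string is an acceptable return value):

def hello_world(request):
    name = request.args.get('name', 'World')
    return 'Hello, %s!' % name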

Summary

Here is a summary of programming languages across cloud providers.

Language     AWS           Azure           Google
C#           upload only   online edit     not available
Go           upload only   not available   online edit
Java         upload only   upload only     not available
JavaScript   online edit   online edit     online edit
Powershell   upload only   online edit     not available
Python       online edit   upload only     online edit
Ruby         online edit   not available   not available

JavaScript is universally available for learning: You can quickly create a Hello World serverless function using an online editor on any cloud platform. On the other hand, if you are a hard-core java programmer, you are going to need to work out how to upload your code. You could upload code from your IDE, for example. If you want to learn C# or Powershell cloud programming, Azure is the place to be. If you want to explore Go, then go to Google.

 

Azure PostgreSQL command line create, connect, delete

By Brian Fitzgerald

Introduction

This blog post demonstrates creating, connecting, and deleting an Azure PostgreSQL database using command line tools.

Create

Configure

C:\>az configure --defaults location=eastus

C:\>az group create --name dflgrp

C:\>az configure --defaults group=dflgrp

Unique name

The postgres server name must be unique across Azure and across the public DNS domain postgres.database.azure.com. Common names, like “test”, are already taken. The name must be “available”.

When you enter the name in Azure Portal, your browser checks availability by making AJAX calls as you type.

[screenshot: check.name.avail]

This command line script accomplishes the same name check.

Script pg_name_avail.py:

from argparse import ArgumentParser
from azure.common.credentials import get_azure_cli_credentials
from azure.mgmt.rdbms.postgresql.postgre_sql_management_client import PostgreSQLManagementClient

ap = ArgumentParser()
ap.add_argument('--name', '-n')
args = ap.parse_args()

cli = PostgreSQLManagementClient(
    *get_azure_cli_credentials()
)
na = cli.check_name_availability.execute(args.name).as_dict()
print(
    'Name "%s" %s' %
    (
        args.name,
        'is available.' if na['name_available']
        else 'is not available. %s.' % na['message']
    )
)

examples:

C:\>python pg_name_avail.py -n test
Name "test" is not available. Specified server name is already used.

C:\>python pg_name_avail.py -n azpgsrv
Name "azpgsrv" is available.

Create server

C:\>az postgres server create -n azpgsrv --sku-name B_Gen5_2 --admin-user azpgusr --admin-password azpgpw..212

Output is JSON:

{
  "administratorLogin": "azpgusr",
  "earliestRestoreDate": "2019-05-07T15:56:16.430000+00:00",
  "fullyQualifiedDomainName": "azpgsrv.postgres.database.azure.com",
  "id": "/subscriptions/7fe8a9c3-0812-42e2-9733-3f567308a0d0/resourceGroups/dflgrp/providers/Microsoft.DBforPostgreSQL/servers/azpgsrv",
  "location": "eastus",
  "masterServerId": "",
  "name": "azpgsrv",
  "replicaCapacity": 5,
  "replicationRole": "None",
  "resourceGroup": "dflgrp",
  "sku": {
    "capacity": 2,
    "family": "Gen5",
    "name": "B_Gen5_2",
    "size": null,
    "tier": "Basic"
  },
  "sslEnforcement": "Enabled",
  "storageProfile": {
    "backupRetentionDays": 7,
    "geoRedundantBackup": "Disabled",
    "storageAutoGrow": "Disabled",
    "storageMb": 5120
  },
  "tags": null,
  "type": "Microsoft.DBforPostgreSQL/servers",
  "userVisibleState": "Ready",
  "version": "9.6"
}

DNS Name

Microsoft Azure pushes a public, unique, fully qualified domain name for your server.

C:\>nslookup azpgsrv.postgres.database.azure.com
Server:  Wireless_Broadband_Router.home
Address:  192.168.1.1

Non-authoritative answer:
Name:    cr1.eastus1-a.control.database.windows.net
Address:  191.238.6.43
Aliases:  azpgsrv.postgres.database.azure.com

Connect

Firewall issue

C:\>psql -h azpgsrv.postgres.database.azure.com -p 5432 -U azpgusr@azpgsrv -d postgres
psql: FATAL:  no pg_hba.conf entry for host "100.12.249.90", user "azpgusr", database "postgres", SSL on
FATAL:  SSL connection is required. Please specify SSL options and retry.

C:\>az postgres server firewall-rule create -n fwrule1 -s azpgsrv --start-ip-address 100.12.249.90 --end-ip-address 100.12.249.90
{
  "endIpAddress": "100.12.249.90",
  "id": "/subscriptions/7fe8a9c3-0812-42e2-9733-3f567308a0d0/resourceGroups/dflgrp/providers/Microsoft.DBforPostgreSQL/servers/azpgsrv/firewallRules/fwrule1",
  "name": "fwrule1",
  "resourceGroup": "dflgrp",
  "startIpAddress": "100.12.249.90",
  "type": "Microsoft.DBforPostgreSQL/servers/firewallRules"
}

Code page issue

C:\>psql -h azpgsrv.postgres.database.azure.com -p 5432 -U azpgusr@azpgsrv -d postgres
Password for user azpgusr@azpgsrv:
psql (10.4, server 9.6.12)
WARNING: Console code page (437) differs from Windows code page (1252)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> \q


C:\>chcp 1252
Active code page: 1252

Success

C:\>psql -h azpgsrv.postgres.database.azure.com -p 5432 -U azpgusr@azpgsrv -d postgres
Password for user azpgusr@azpgsrv:
psql (10.4, server 9.6.12)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

Python connection

C:\Users\Brian Fitzgerald\python\blog>type pgcon.py
import psycopg2

conn = psycopg2.connect(
    dbname='postgres',
    user='azpgusr@azpgsrv',
    host='azpgsrv.postgres.database.azure.com',
    password='azpgpw..212',
    port=5432,
    sslmode='require'
)
print('connected')
C:\>python pgcon.py
connected

Delete

C:\>az postgres server firewall-rule delete -y -n fwrule1 -s azpgsrv

C:\>az postgres server delete -y -n azpgsrv

AWS Simple Queue Service implementation

By Brian Fitzgerald

Introduction

This is an AWS Simple Queue Service (SQS) python implementation with Lambda enqueue and dequeue functions. Some may find this procedure more straightforward than the techniques found in the manual, or in other blogs.

In this implementation, we’re not going to use public IPs, the public internet, internet gateway, Network Address Translation (NAT) instance, VPN connection, or AWS Direct Connect connection. However, we’re also not going to use CloudFormation or EC2. We are only going to use Endpoints, SQS, and Lambda.

Overview

We are going to attack the problem in this order:

  1. Create an endpoint
  2. Create the queue
  3. Create enqueue and dequeue Lambda functions

Create SQS endpoint

Navigate like this:

  • AWS Management Console
  • Services
  • In the left navigation bar, under the “Virtual Private Cloud”, click “Endpoints”
  • Click “Create Endpoint”
  • Select the sqs service for your region, e.g. com.amazonaws.us-east-1.sqs
  • Select your VPC
  • Select two or more subnets
  • Enable Private DNS Name: Leave checked (important)
  • Select your security group
  • Click “Create endpoint”

Note the output:

VPC Endpoint ID vpce-02534f0e3cac4a30d
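
If you prefer the command line, the same endpoint can be created with boto3. This is a hedged sketch; the VPC, subnet, and security group ids below are placeholders, not values from this walkthrough.

from boto3 import client

ec2 = client('ec2')

resp = ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',                               # placeholder ids
    ServiceName='com.amazonaws.us-east-1.sqs',
    SubnetIds=['subnet-0aaa1111bbbb2222c', 'subnet-0ddd3333eee44445f'],
    SecurityGroupIds=['sg-0ccc5555ddd6666e'],
    PrivateDnsEnabled=True                                       # keep private DNS enabled (important)
)
print(resp['VpcEndpoint']['VpcEndpointId'])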

Caution!

Endpoints are not free! If you are experimenting, then delete your endpoint when you are through. Endpoint charges will accrue on endpoints even if you are not actively using them. $1.44 per day is an example charge.

Create queue

command:

C:\>aws sqs create-queue --queue-name blogQ

output:

https://queue.amazonaws.com/394755372005/blogQ

Observe that the URL is internet facing.

C:\>curl https://queue.amazonaws.com/394755372005/blogQ

The queue could still be secure, because queue access still requires authentication. The queue can be secured further by limiting network access.

Edit permissions

The easiest way is to start with the management console, create a starting policy document, and then edit the document. Navigate:

  • Services
  • Simple Queue Service
  • Select your queue
  • At bottom, click the Permissions tab
  • Click Add a Permission
  • Select Effect: Allow
  • Principal: Click Everybody
  • Actions: Select
    • DeleteMessage
    • ReceiveMessage
    • SendMessage
  • Click Add Permission

[screenshot: edit.queue]

Click Edit Policy Document (Advanced)

Put a comma at the end of the “Resource” line and add a condition, such as the Condition block shown below.

{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:394755372005:blogQ/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "SQS:DeleteMessage",
        "SQS:ReceiveMessage",
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:394755372005:blogQ",
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-02534f0e3cac4a30d"
        }
      }
    }
  ]
}

  • Review Policy
  • Save Changes

Create Lambda functions

Enqueue

Create a Lambda, noting these details:

  • VPC Access
  • Select your VPC, Subnets and Security Group
  • Execution Role. Make sure your role has:
    • AWSLambdaBasicExecutionRole
    • AWSLambdaENIManagementAccess
  • Python 3.7

Message

For this blog, each queue message will be a dict with a timestamp and a four-character random string, like this:

{
  'rnd': 'LDOR',
  'ts': '2019-05-07 04:03:24.145866'
}

Files

Create two files:

enq.py

from random import choice
from string import ascii_uppercase
from json import dumps
from datetime import datetime
from sqs import Sqs


def lam(ev, c):
    cli = Sqs.cli()
    rsp = cli.send_message(
        QueueUrl=Sqs.url(),
        DelaySeconds=1,
        MessageBody=dumps(bod())
    )
    return {}


def bod():
    stringLength = 4
    rnd = ''.join(choice(ascii_uppercase) for i in range(stringLength))
    return {
        'rnd': rnd,
        'ts': str(datetime.now())
    }

sqs.py

A class to hide SQS details, common to enq and deq

from boto3 import client


class Sqs:

    @staticmethod
    def cli():
        epurl = 'https://sqs.us-east-1.amazonaws.com/'
        return client(
            service_name='sqs',
            endpoint_url=epurl
        )

    @staticmethod
    def url():
        return 'https://sqs.us-east-1.amazonaws.com/394755372005/blogQ'

The file arrangement looks like this:

[screenshot: enq.lam]

In the Handler box, enter “enq.lam”, click “Save”, then click “Test”. Check for “Succeeded”. Click “Test” a few times to enqueue some messages.

Dequeue

Create file deq.py:

from sqs import Sqs


def lam(ev, cx):
    cli = Sqs.cli()
    numdeq = 0
    while True:
        rsp = cli.receive_message(
            QueueUrl=Sqs.url(),
            MaxNumberOfMessages=10,
            WaitTimeSeconds=1
        )

        if 'Messages' not in rsp:
            break
        msgs = rsp['Messages']
        for msg in msgs:
            cli.delete_message(
                QueueUrl=Sqs.url(),
                ReceiptHandle=msg['ReceiptHandle']
            )
            print(msg['Body'])
            numdeq += 1

    print('numdeq = %s' % numdeq)
    ret = {
        'numdeq': numdeq
    }
    return ret

Also, create file sqs.py as before

[screenshot: deq.lam]

 

In the Handler box, enter “deq.lam”, click “Save”, then click “Test”. Check for “Succeeded”. Check in the output that all your messages got dequeued.

Summary

We implemented an AWS queue using the most basic tools available, namely VPC Endpoint, SQS, and Lambda.

Connect AWS Lambda to RDS SQL Server with pyodbc

By Brian Fitzgerald

Introduction

We want to connect Lambda to Microsoft SQL Server RDS using the python ODBC connector pyodbc. pyodbc calls the Microsoft SQL Server driver, which sits on top of unixODBC. Installing ODBC drivers into AWS Lambda has frustrated some users in the past. This blog outlines a simple approach.

Staging on EC2

We’re going to create a complete set of files for uploading to Lambda. We’ll stage those files on EC2, zip them, and upload the zip to Lambda.

Create RDS

For this article, I created a Microsoft SQL Server RDS instance, as described in this table.

Parameter           Value
Instance name       odbcblog
Engine              SQL Server Express Edition
Engine version      14.00.3049.1.v1
Class               db.t2.micro
Security group      sg-04ed8240
Endpoint address    odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com
Endpoint port       1433
Master user         odbcuser
Master password     odbcuser
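
For reference, a roughly equivalent instance could be created with boto3. This is a sketch only; AllocatedStorage and anything not listed in the table are assumptions:

# Hedged sketch: create a comparable SQL Server Express RDS instance with boto3.
from boto3 import client

rds = client('rds')
rds.create_db_instance(
    DBInstanceIdentifier='odbcblog',
    Engine='sqlserver-ex',                 # SQL Server Express Edition
    EngineVersion='14.00.3049.1.v1',
    DBInstanceClass='db.t2.micro',
    MasterUsername='odbcuser',
    MasterUserPassword='odbcuser',
    VpcSecurityGroupIds=['sg-04ed8240'],
    AllocatedStorage=20                    # assumption, not from the table
)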

connectivity

Testing from EC2, I get:

[ec2-user@ip-172-251-80-17 ~]$ nc -v odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com 1433
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.

RDS instance odbcblog is in security group sg-04ed8240. After associating security group sg-04ed8240 with our EC2 instance as well, we are good to go:

[ec2-user@ip-172-251-80-17 ~]$ nc -v odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com 1433
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 172.251.58.192:1433.

install SQL Server ODBC

[ec2-user@ip-172-251-80-17 ~]$ sudo bash

[root@ip-172-251-80-17 download]# curl packages.microsoft.com/config/rhel/6/prod.repo > /etc/yum.repos.d/mssql-release.repo
[root@ip-172-251-80-17 download]# yum -y install msodbcsql17

Review the output and notice that dependent package unixODBC also gets installed.

 Installing : unixODBC-2.3.1-11.amzn2.0.1.x86_64

We’ll use that fact later.

Notice the /etc/odbcinst.ini entry:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
UsageCount=1

install pyodbc

[root@ip-172-251-80-17 download]# yum -y install gcc-c++
[root@ip-172-251-80-17 download]# yum -y install python3-devel
[root@ip-172-251-80-17 download]# yum -y install unixODBC-devel
[root@ip-172-251-80-17 download]# pip3 install pyodbc
WARNING: Running pip install with root privileges is generally not a good idea. 
 Try `pip3 install --user` instead.

test pyodbc from EC2

testodbc.py:

import pyodbc

con = pyodbc.connect(
    driver = 'ODBC Driver 17 for SQL Server',
    server = 'odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com',
    port = 1433,
    user = 'odbcuser',
    password = 'odbcuser',
    timeout = 5
)
sql = 'select @@version'
crsr = con.cursor()
crsr.execute(sql)
row = crsr.fetchone()
print (row[0])

Execute:

[ec2-user@ip-172-251-80-17 test]$ python3 testodbc.py

Output:

Microsoft SQL Server 2017 (RTM-CU13-OD) (KB4483666) - 14.0.3049.1 (X64)
Dec 15 2018 11:16:42
Copyright (C) 2017 Microsoft Corporation
Express Edition (64-bit) on Windows Server 2016 Datacenter 10.0  
  (Build 14393: ) (Hypervisor)

cool.

Stage Lambda code on EC2

Download packages

Let’s start over and download the packages

[ec2-user@ip-172-251-80-17 download]$ yumdownloader unixODBC.x86_64
[ec2-user@ip-172-251-80-17 download]$ yumdownloader msodbcsql17
[ec2-user@ip-172-251-80-17 download]$ pip3 download pyodbc
[ec2-user@ip-172-251-80-17 download]$ ls -1
msodbcsql17-17.3.1.1-1.x86_64.rpm
pyodbc-4.0.26.tar.gz
unixODBC-2.3.1-11.amzn2.0.1.x86_64.rpm

Identify a Lambda staging directory on EC2

[ec2-user@ip-172-251-80-17 testodbc]$ mkdir -p /home/ec2-user/lambdas/testodbc
[ec2-user@ip-172-251-80-17 testodbc]$ cd /home/ec2-user/lambdas/testodbc

Install the rpms

[ec2-user@ip-172-251-80-17 testodbc]$ rpm2cpio /home/ec2-user/lambdas/download/unixODBC-2.3.1-11.amzn2.0.1.x86_64.rpm | cpio -id
2504 blocks
[ec2-user@ip-172-251-80-17 testodbc]$ rpm2cpio /home/ec2-user/lambdas/download/msodbcsql17-17.3.1.1-1.x86_64.rpm | cpio -id
4486 blocks

Install pyodbc

Create a python library directory

[ec2-user@ip-172-251-80-17 lib]$ mkdir -p /home/ec2-user/lambdas/testodbc/python/lib

Install

[ec2-user@ip-172-251-80-17 ~]$ pip3 install --target /home/ec2-user/lambdas/testodbc/python/lib /home/ec2-user/lambdas/download/pyodbc-4.0.26.tar.gz

Directory structure

Observe the directory structure so far

[ec2-user@ip-172-251-80-17 ~]$ cd /home/ec2-user/lambdas/testodbc
[ec2-user@ip-172-251-80-17 testodbc]$ ls -1F
etc/
opt/
python/
usr/
[ec2-user@ip-172-251-80-17 testodbc]$ cd usr/
[ec2-user@ip-172-251-80-17 usr]$ ls -1F
bin/
lib64/
share/

Library directory

The Lambda function is going to load pyodbc, and pyodbc is going to look for libodbc.so.2, but it will not search usr/lib64. Setting LD_LIBRARY_PATH as a Lambda environment variable does no good, because the runtime that hosts the Lambda starts before those variables take effect. Lambda does search lib, so move the library directory there:

[ec2-user@ip-172-251-80-17 testodbc]$ mv usr/lib64 lib
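
To verify the loader can actually find the ODBC library once the code is deployed, a throwaway handler along these lines can be used. It is a sketch that assumes the runtime's default library search path includes /var/task/lib, which is exactly what the move above relies on:

# Hedged sketch: confirm libodbc.so.2 is loadable inside the Lambda.
# CDLL raises OSError if the loader cannot find the library.
from ctypes import CDLL

def lam(ev, cx):
    CDLL('libodbc.so.2')
    return 'libodbc.so.2 loaded'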

odbcinst.ini

AWS will install our Lambda code in a virtual machine under /var/task. Edit odbcinst.ini

[ec2-user@ip-172-251-80-17 testodbc]$ vi etc/odbcinst.ini

Replace the contents:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/var/task/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
UsageCount=1

Python application code directory

To avoid clutter, we will put our own application code in a subdirectory.

[ec2-user@ip-172-251-80-17 testodbc]$ mkdir py

Summary

The directory structure at the top level is now:

[ec2-user@ip-172-251-80-17 testodbc]$ ls -1F
etc/
lib/
opt/
py/
python/
usr/

Create the lambda code

File:

[ec2-user@ip-172-251-80-17 testodbc]$ vi py/testodbc.py

Contents:

import pyodbc
from json import dumps

def lam(ev, cx):
    # Connect through the bundled MS ODBC driver and return the server version.
    con = pyodbc.connect(
        driver = 'ODBC Driver 17 for SQL Server',
        server = 'odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com',
        port = 1433,
        user = 'odbcuser',
        password = 'odbcuser',
        timeout = 5
    )
    sql = 'select @@version'
    crsr = con.cursor()
    crsr.execute(sql)
    row = crsr.fetchone()
    version = row[0]
    ret = {
        'version': version
    }
    return dumps(ret)

The handler will therefore be testodbc.lam.

Fun fact: You cannot name an AWS Lambda python handler “lambda”, because lambda is a reserved word in Python and cannot be used as a function name.

Create the Lambda

Initial creation

Create a basic lambda function by any method. For example, use the Lambda console.

Configuration    Value
Name             testOdbc
Runtime          python 3.7
Timeout          5 minutes
Handler          testodbc.lam

Set two environment variables:

Variable     Value
ODBCSYSINI   /var/task/etc
PYTHONPATH   /var/runtime:/var/task/py:/var/task/python/lib
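
The same variables can be set without the console; a minimal boto3 sketch:

# Hedged sketch: set the two environment variables on the function with boto3.
from boto3 import client

lam = client('lambda')
lam.update_function_configuration(
    FunctionName='testOdbc',
    Environment={
        'Variables': {
            'ODBCSYSINI': '/var/task/etc',
            'PYTHONPATH': '/var/runtime:/var/task/py:/var/task/python/lib'
        }
    }
)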

Code upload

Zip all libraries, configuration files, and code:

[ec2-user@ip-172-251-80-17 testodbc]$ zip -rq ../testodbc.zip *

Upload the zip file (command shown with lines folded):

[ec2-user@ip-172-251-80-17 testodbc]$ aws lambda update-function-code
   --function-name testOdbc
   --zip-file fileb://../testodbc.zip

Networking

This section must be handled with care. Otherwise, you are going to get ODBC driver timeouts. For Lambda to successfully connect to RDS, two conditions must be in place.

Elastic Network Interface

Lambda needs the basic execution role for CloudWatch access. In addition, your Lambda needs to be able to bind to an Elastic Network Interface.

In the IAM Console, create a new role, for example odbcLamRole, and attach these managed policies:

  • AWSLambdaBasicExecutionRole
  • AWSLambdaENIManagementAccess

In Lambda Console, assign the role to the Lambda.
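
If you prefer to script this step, a boto3 sketch along these lines should do it; the trust policy document below is an assumption added for illustration:

# Hedged sketch: create the execution role and attach the two managed policies.
from json import dumps
from boto3 import client

trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam = client('iam')
iam.create_role(RoleName='odbcLamRole', AssumeRolePolicyDocument=dumps(trust))
for policy in ('AWSLambdaBasicExecutionRole', 'AWSLambdaENIManagementAccess'):
    iam.attach_role_policy(
        RoleName='odbcLamRole',
        PolicyArn='arn:aws:iam::aws:policy/service-role/' + policy
    )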

VPC

In the Lambda console’s Network pane, if you see “No VPC”, switch to your VPC, select two or more subnets, and select your security group.
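
This, too, can be scripted. A sketch using the security group from this article and two of the subnets that appear in the configuration review below:

# Hedged sketch: attach the function to the VPC from code rather than the console.
from boto3 import client

lam = client('lambda')
lam.update_function_configuration(
    FunctionName='testOdbc',
    VpcConfig={
        'SubnetIds': ['subnet-8c036bd0', 'subnet-b7214ed0'],
        'SecurityGroupIds': ['sg-04ed8240']
    }
)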

[screenshot: network]

Review

You may review the configuration from the CLI.

[ec2-user@ip-172-251-80-17 ~]$ aws lambda get-function-configuration --function-name testOdbc

Output:

{
    "FunctionName": "testOdbc",
    "LastModified": "2019-05-02T20:29:51.551+0000",
    "RevisionId": "df676edb-e545-42dc-90c3-0cf5dc16ed81",
    "MemorySize": 128,
    "Environment": {
        "Variables": {
            "PYTHONPATH": "/var/runtime:/var/task/py:/var/task/python/lib",
            "ODBCSYSINI": "/var/task/etc"
        }
    },
    "Version": "$LATEST",
    "Role": "arn:aws:iam::665575760545:role/odbcLamRole",
    "Timeout": 300,
    "Runtime": "python3.7",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "VV3g7pLL1G+y3PoEPyX+UcbwMn40KIiOUbCu5ApYowM=",
    "Description": "",
    "VpcConfig": {
        "SubnetIds": [
            "subnet-8c036bd0",
            "subnet-b7214ed0",
            "subnet-aa197384",
            "subnet-364a757c",
            "subnet-af9a2991",
            "subnet-0f476600"
        ],
        "VpcId": "vpc-0d398177",
        "SecurityGroupIds": [
            "sg-04ed8240"
        ]
    },
    "CodeSize": 2322598,
    "FunctionArn": "arn:aws:lambda:us-east-1:665575760545:function:testOdbc",
    "Handler": "testodbc.lam"
}

run the Lambda

command:

[ec2-user@ip-172-251-80-17 ~]$ aws lambda invoke --function-name testOdbc out.json

CLI output:

{
    "ExecutedVersion": "$LATEST",
    "StatusCode": 200
}

Lambda return:

[ec2-user@ip-172-251-80-17 ~]$ cat out.json
"{\"version\": \"Microsoft SQL Server 2017 (RTM-CU13-OD) (KB4483666) - 14.0.3049.1 (X64) \\n\\tDec 15 2018 11:16:42 \\n\\tCopyright (C) 2017 Microsoft Corporation\\n\\tExpress Edition (64-bit) on Windows Server 2016 Datacenter 10.0  (Build 14393: ) (Hypervisor)\\n\"}"

The Lambda performed these steps

  • Load all application code and dependent libraries
  • Load Unix ODBC driver
  • In the handler, load the MS ODBC driver
  • Connect to the RDS SQL Server
  • Allocate a cursor
  • Execute a SQL statement
  • Retrieve the result set
  • Parse the result set as a version
  • Return the version as JSON from the lambda handler

Summary

We accomplished these items

  • setup EC2, RDS, and Lambda in a VPC
  • install pyodbc and underlying drivers in EC2
  • test python code by connecting from EC2 to RDS
  • stage all needed drivers, configuration files, python libraries, and python application code on EC2
  • upload the code to Lambda
  • run the Lambda

We have therefore established connectivity from a Python Lambda to a SQL Server RDS.

Upload AWS Lambda code from command line or from python

By Brian Fitzgerald

Introduction

For many first-time users, creating the Lambda function is done in the AWS Management Console. The offered code entry choices are “Edit code inline”, “Upload a .zip file” (from your PC, or wherever you are running your browser), or “Upload a file from Amazon S3”.

[screenshot: code.entry.type]

You can, however, save a few steps by uploading your code directly from its development, staging, or testing location. That way, you don’t need to log on to the console and work the menus, and you don’t need to copy the zip file to your PC, or to S3.

The AWS API

Tasks done from the AWS Management Console are sent to AWS through an API library, which talks to AWS via JSON. However, Amazon supplies a command line interface (CLI) built on top of the same library, so you can accomplish your tasks without using the browser. You can also use the API to write your own code to accomplish the same task.

Upload Lambda from EC2 using the CLI

If your Lambda code is in EC2, then it is convenient to upload directly from EC2 using the AWS CLI.

Configure

The AWS CLI command is “aws”, and it is already installed on EC2. If you have not done so already, run aws configure, a one-time setup: generate an AWS access key and use the actual values in place of “AKI…” and “Gtj…”. Choose your own region. In the beginning, you are better off keeping all your code in a single region.

[ec2-user@ip-172-31-80-17 ~]$ aws configure
AWS Access Key ID [None]: AKI...
AWS Secret Access Key [None]: Gtj...
Default region name [us-east-1]:
Default output format [None]:

Let’s assume that you have some code to upload:

[ec2-user@ip-172-31-80-17 lambdas]$ unzip -l trc.zip
Archive: trc.zip
Length Date Time Name
--------- ---------- ----- ----
0 04-29-2019 01:37 bin/
854664 07-30-2018 20:05 bin/strace
119 05-01-2019 01:28 lg.py
84 04-30-2019 01:54 sllg.py
254 04-30-2019 02:07 trcp.py
119 05-01-2019 01:28 trc.py
--------- -------
855240 6 files

Assume, for example, that the destination Lambda is called “lamLocal”. The upload command is (showing lines folded):

[ec2-user@ip-172-31-80-17 lambdas]$ aws lambda update-function-code 
  --function-name lamLocal 
  --zip-file fileb://trc.zip

The response is in JSON. A normal response looks like this:

{
  "FunctionName": "lamLocal",
  "LastModified": "2019-05-01T19:13:47.011+0000",
  "RevisionId": "a139363f-2f31-4aa2-818f-ec20057a981b",
  "MemorySize": 128,
  "Version": "$LATEST",
  "Role": "arn:aws:iam::549357536367:role/service-role/lamLocal-role-gdmmx0de",
  "Timeout": 3,
  "Runtime": "python3.6",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "CodeSha256": "45ELCm6smYw5Q/fFxMR+756GwfvSEGeLxVIF0kyFhac=",
  "Description": "",
  "VpcConfig": {
    "SubnetIds": [],
    "VpcId": "",
    "SecurityGroupIds": []
  },
  "CodeSize": 314844,
  "FunctionArn": "arn:aws:lambda:us-east-1:549357536367:function:lamLocal",
  "Handler": "trcp.lam"
}

If an error occurs, the CLI displays no JSON, only an error message (folded):

[ec2-user@ip-172-31-80-17 lambdas]$ aws lambda update-function-code
  --function-name zamLocal --zip-file fileb://trc.zip

An error occurred (ResourceNotFoundException) 
  when calling the UpdateFunctionCode operation: 
  Function not found: arn:aws:lambda:us-east-1:549357536367:function:zamLocal

This is a basic example of updating code on an existing Lambda that was previously created in the AWS Management Console. It is also possible to create, invoke, and delete a Lambda from the AWS CLI.

Upload Lambda from EC2 from code

Now we’ll upload the zip to Lambda by making a python call. First, some setup:

[ec2-user@ip-172-31-80-17 lambdas]$ sudo bash
[root@ip-172-31-80-17 lambdas]# yum -y update
[root@ip-172-31-80-17 lambdas]# pip3 install boto3
WARNING: Running pip install with root privileges 
  is generally not a good idea. 
  Try `pip3 install --user` instead.

File uplam.py:

from boto3 import client
from json import dumps

# Placeholders: substitute your own access key pair, Lambda name, and zip file.
awskey = 'AKI...'
awskeysec = 'Gtj...'
lam = 'lamLocal'
zf = 'trc.zip'

cli = client(
    'lambda',
    aws_access_key_id= awskey,
    aws_secret_access_key= awskeysec
)

# Read the zip into memory and push it as the function's new code.
with open(zf, 'rb') as f:
    ret = cli.update_function_code(
        FunctionName = lam,
        ZipFile = f.read()
    )
    print(dumps(ret, indent=4, sort_keys=True))

Execution:

[ec2-user@ip-172-31-80-17 lambdas]$ python3 uplam.py

Output:

{
    "CodeSha256": "45ELCm6smYw5Q/fFxMR+756GwfvSEGeLxVIF0kyFhac=",
    "CodeSize": 314844,
    "Description": "",
    "FunctionArn": "arn:aws:lambda:us-east-1:549357536367:function:lamLocal",
    "FunctionName": "lamLocal",
    "Handler": "trcp.lam",
    "LastModified": "2019-05-01T22:46:44.426+0000",
    "MemorySize": 128,
    "ResponseMetadata": {
        "HTTPHeaders": {
            "connection": "keep-alive",
            "content-length": "675",
            "content-type": "application/json",
            "date": "Wed, 01 May 2019 22:46:44 GMT",
            "x-amzn-requestid": "fe6d9520-6c62-11e9-a5fb-27bb7e5a8a89"
        },
        "HTTPStatusCode": 200,
        "RequestId": "fe6d9520-6c62-11e9-a5fb-27bb7e5a8a89",
        "RetryAttempts": 0
    },
    "RevisionId": "08722516-76c6-4b96-a9ab-dd6f89fadf1f",
    "Role": "arn:aws:iam::549357536367:role/service-role/lamLocal-role-gdmmx0de",
    "Runtime": "python3.6",
    "Timeout": 3,
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "Version": "$LATEST",
    "VpcConfig": {
        "SecurityGroupIds": [],
        "SubnetIds": [],
        "VpcId": ""
    }
}

In case of error, update_function_code throws an exception and does not return the JSON value.

[ec2-user@ip-172-31-80-17 lambdas]$ python3 uplam.py
Traceback (most recent call last):
  File "uplam.py", line 18, in 
    ZipFile = f.read()
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357,
    in _api_call return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661,
    in _make_api_call raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred 
    (ResourceNotFoundException) 
    when calling the UpdateFunctionCode operation: 
    Function not found: 
    arn:aws:lambda:us-east-1:549357536367:function:zamLocal
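
If you would rather handle that case than let the traceback surface, boto3 exposes the error class on the client; a minimal sketch:

# Hedged sketch: handle a missing function name without a traceback.
from boto3 import client

cli = client('lambda')
try:
    with open('trc.zip', 'rb') as f:
        cli.update_function_code(FunctionName='zamLocal', ZipFile=f.read())
except cli.exceptions.ResourceNotFoundException as e:
    print('no such function:', e)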

Other Languages

I have demonstrated Lambda upload using python3 and boto3. The upload program could just as well have been written in any of these languages:

  • Java
  • .NET
  • Node.js
  • PHP
  • Ruby
  • Go
  • C++

Each of these APIs has access to the Lambda service and a method for updateFunctionCode.

Conclusion

An introduction to Lambda functions will lead the user to the AWS Management Console. From there, code entry methods are inline editor, upload .zip file from PC, and upload .zip file from S3. Instead, you may upload your code by using the AWS CLI. Finally, you may upload a Lambda function from within a python script, or a program written in any of seven other languages.