aws lambda publish-layer-version fails silently

By Brian Fitzgerald

Introduction

aws lambda publish-layer-version fails silently. The root cause is an out-of-memory condition, and changing the instance type fixes the problem.

Symptoms

aws lambda publish-layer-version produces no output. The exit status is nonzero.

[root@ip-172-31-62-89 layers]# aws lambda publish-layer-version --layer-name oracle-instant-client-layer --zip-file fileb://oracle-instant-client-layer.zip  --compatible-runtimes python3.7


[root@ip-172-31-62-89 layers]# echo $?
255

Investigation

Investigation using strace reveals an out-of-memory condition.

[root@ip-172-31-62-89 layers]# uname -a
Linux ip-172-31-62-89.ec2.internal 4.14.186-146.268.amzn2.x86_64 #1 SMP Tue Jul 14 18:16:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@ip-172-31-62-89 layers]# strace -f -o tr aws lambda publish-layer-version --layer-name oracle-instant-client-layer --zip-file fileb://oracle-instant-client-layer.zip  --compatible-runtimes python3.7
[root@ip-172-31-62-89 layers]# grep ENOMEM tr
3576  mmap(NULL, 272371712, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
3576  mmap(NULL, 272502784, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)

The instance type is t2.micro. Physical memory is 983 MB.

[root@ip-172-31-62-89 layers]# curl -X GET http://169.254.169.254/latest/meta-data/instance-type
t2.micro
[root@ip-172-31-62-89 layers]# free -m
              total        used        free      shared  buff/cache   available
Mem:            983          63         742           0         177         784
Swap:             0           0           0
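The failed mmap calls above each requested roughly 272 MB. Before re-running a large upload, it can help to check headroom programmatically. Here is a small sketch (my own helper, not part of the original session) that parses MemAvailable out of /proc/meminfo text:

```python
def mem_available_kb(meminfo_text):
    """Return the MemAvailable figure (in kB) from /proc/meminfo content."""
    for line in meminfo_text.splitlines():
        if line.startswith('MemAvailable:'):
            return int(line.split()[1])  # fields: label, value, unit
    raise ValueError('MemAvailable not found')

# On a live Linux host:
#   with open('/proc/meminfo') as f:
#       print(mem_available_kb(f.read()) // 1024, 'MB available')
```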

Solution

Change the instance type to t2.small and re-run the command. The normal JSON output appears, and the exit status is 0.

[root@ip-172-31-62-89 layers]# aws lambda publish-layer-version --layer-name oracle-instant-client-layer --zip-file fileb://oracle-instant-client-layer.zip  --compatible-runtimes python3.7
{
    "LayerVersionArn": "arn:aws:lambda:us-east-1:999999999999:layer:oracle-instant-client-layer:2",
    "Description": "",
    "CreatedDate": "2020-08-02T21:03:38.787+0000",
    "LayerArn": "arn:aws:lambda:us-east-1:999999999999:layer:oracle-instant-client-layer",
    "Content": {
        "CodeSize": 51069060,
        "CodeSha256": "B1DGnA385aL50A8mrKoq1FOsIsEtMerbhdYCwd485YA=",
        "Location": "https://prod-04-2014-layers. etc."
    },
    "Version": 2,
    "CompatibleRuntimes": [
        "python3.7"
    ]
}
[root@ip-172-31-62-89 layers]# echo $?
0

Physical memory is 1991 MB.

[root@ip-172-31-62-89 layers]# curl -X GET http://169.254.169.254/latest/meta-data/instance-type
t2.small
[root@ip-172-31-62-89 layers]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1991          63        1712           0         215        1788
Swap:             0           0           0

Conclusion

An AWS CLI command on EC2 produced no output and exited with a nonzero status. strace investigation uncovered an out-of-memory condition, which was fixed by upgrading the instance type.

Scala AWS Lambda function

By Brian Fitzgerald

Introduction

This is a step-by-step procedure for creating a Scala AWS Lambda implementation in Eclipse with Maven. The compiled classes will run in the cloud in AWS Lambda’s Java runtime environment (JRE).

Eclipse

To begin with, install these packages into your Eclipse, or check that they are installed:

  • AWS Toolkit for Eclipse
  • Scala IDE for Eclipse

Begin with Hello from Lambda!

The instructions in this section are based on the AWS manual. Set up the AWS Toolkit for Eclipse.

Setup the AWS Lambda project

Click AWS Toolkit for Eclipse,

aws.toolkit.for.eclipse

and select “New AWS Lambda Java project…” from the dropdown. In the dialog box, enter the project name, e.g. “FiboLam”, change “Input Type” from “S3 Event” to “Custom”, and press “Finish”.

new.aws.lam.proj

Test the existing java function

Right click the project and select “Amazon Web Services” -> “Upload Function to AWS Lambda…”. In the dialog box, select “Create a new Lambda function:”. Enter a name and press “Next”. If you have not done so previously, create your role and S3 bucket. Press “Finish”.

upload.func

Again, right click the project and select “Amazon Web Services” -> “Run function on AWS Lambda…”. Note that the Lambda Handler is in the form “package.class”, e.g. “com.amazonaws.lambda.demo.LambdaFunctionHandler”. Press “Invoke”. Check the console:

Skip uploading function code since no local change is found...
Invoking function...
==================== FUNCTION OUTPUT ====================
"Hello from Lambda!"
==================== FUNCTION LOG OUTPUT ====================
START RequestId: 45a20261-f9aa-4104-a0a3-b3a65cc37c07 Version: $LATEST
Input: {}END RequestId: 45a20261-f9aa-4104-a0a3-b3a65cc37c07
REPORT RequestId: 45a20261-f9aa-4104-a0a3-b3a65cc37c07	Duration: 0.74 ms	Billed Duration: 100 ms 	Memory Size: 512 MB	Max Memory Used: 82 MB	

Up to here, we have a working AWS Lambda Java project.

Delete the existing java code

Now wipe out the Java code.

Expand src/main/java. Delete file LambdaFunctionHandler.java, then delete package com.amazonaws.lambda.demo.

Expand src/test/java. You may delete file LambdaFunctionHandlerTest.java. Fixing it is out of scope for now.

Expand Maven Dependencies. You will see no scala runtime libraries, so far.

Scala

Add scala runtime library

Right click on the project, select “Configure->Add Scala Nature”

Refresh Maven Dependencies and take note of the Scala version number, e.g. 2.12.3.

Right click on the project and select “Maven”->“Add Dependency”. In the dialog box enter:

Group Id: org.scala-lang

Artifact Id: scala-library

Version: the version noted above, e.g. 2.12.3

Press “OK”

add.maven.dependency

Expand Maven Dependencies. Note that the scala-library jar appears. A screenshot of the updated Maven tree appears later in this blog.

Create Scala Sources

Set scala perspective

Select “Window->Perspective->Open Perspective->Other…”. Scroll down and select “Scala”. Press “Open”.

Create Scala classes

At “src/main/java”, you can create a new package, let’s say “com.yourcompany.fibo”. Then, create your Scala source file. Select “New->Scala Class”, and enter the class name, e.g. “com.yourcompany.fibo.Fibo”.

Replace the code with:

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler

class YourClass extends RequestHandler[Object, String] {

  def handleRequest(o: Object, cx: Context): String =
    yourCode

}

Method “handleRequest” is mandatory. In practice, the input Object will be JSON serializable, and the return must be of type String. For example, Fibo.scala:

package com.yourcompany.fibo

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler
import com.yourcompany.fibo.FibTailRec.fib

class Fibo extends RequestHandler[Object, String] {

  def handleRequest(o: Object, cx: Context): String =
    fibTailRec(o.toString.toInt).toString

  def fibTailRec(n: Int): Int =
    fib(n, 0, 1)

}

Press Ctrl-S to save.

FibTailRec.scala

package com.yourcompany.fibo

import scala.annotation.tailrec

object FibTailRec {

  @tailrec def fib(i: Int, p: Int, f: Int): Int = i match {
    case 0 => p
    case _ => fib(i - 1, f, p + f)
  }
}

By the way, this function demonstrates Scala tail call optimization, and is examined in more detail here.
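For comparison, the same accumulator idea can be sketched in Python; since Python lacks tail-call optimization, the recursion becomes a loop (illustration only, the deployed function is the Scala version above):

```python
def fib(i, p=0, f=1):
    """Accumulator-style Fibonacci: p and f carry the previous and current values."""
    while i > 0:          # the Scala @tailrec recursion, written as iteration
        i, p, f = i - 1, f, p + f
    return p
```

fib(7) returns 13, matching the Lambda run shown later in this post.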

The project tree

Examine the Package Explorer. Expand src/main/java. Note that there is only your package, your sources, and no demo code. Expand Maven Dependencies. Note the scala library jar. Note that no folders or files are flagged with errors.

explorer

Upload the Lambda function

Right click the project and select “Amazon Web Services”->“Upload Function to AWS Lambda…”. Select the correct handler (there should be only one), e.g. “com.yourcompany.fibo.Fibo”. That way, AWS Lambda will automatically look to run method handleRequest. Choose existing Lambda Function: FiboLam.

upload.scala

Run the Lambda function

Again, right click the project and select “Amazon Web Services” -> “Run function on AWS Lambda…”. Select your handler, e.g. “com.yourcompany.fibo.Fibo”. For the sake of this blog, we’re not going to delve into Scala JSON parsing, so in the text pane, enter a nonnegative integer, e.g. 7. Press “Invoke”.

run.dialog

Output:

Skip uploading function code since no local change is found...
Invoking function...
==================== FUNCTION OUTPUT ====================
"13"
==================== FUNCTION LOG OUTPUT ====================
START RequestId: 0f0990fe-497d-4f78-9709-5c67085d7a78 Version: $LATEST
END RequestId: 0f0990fe-497d-4f78-9709-5c67085d7a78
REPORT RequestId: 0f0990fe-497d-4f78-9709-5c67085d7a78	Duration: 0.54 ms	Billed Duration: 100 ms 	Memory Size: 512 MB	Max Memory Used: 93 MB	

Interpretation: 13 is the 7th Fibonacci number.
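You can also invoke the deployed function outside Eclipse. Below is a hedged boto3 sketch; FiboLam is the function name from this walkthrough, and it assumes AWS credentials are already configured:

```python
from json import dumps, loads


def build_payload(n):
    """Encode the test event exactly as the Invoke dialog does: a bare JSON integer."""
    return dumps(n)


def invoke_fibo(n, function_name='FiboLam'):
    """Invoke the deployed Lambda synchronously and decode its string result."""
    import boto3  # deferred so the sketch imports without boto3/credentials present
    cli = boto3.client('lambda')
    rsp = cli.invoke(FunctionName=function_name, Payload=build_payload(n))
    return loads(rsp['Payload'].read())
```

With the function deployed, invoke_fibo(7) should return the string "13".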

The upload file

You can go to your S3 bucket and find your file there (FiboLam.zip). You can download and explore the zip file and find your class files and the library jar files.

Summary

I have implemented a Scala AWS Lambda Function using these tools:

  • Eclipse
    • AWS Toolkit
    • Scala IDE
  • AWS
    • Lambda Function
    • IAM role
    • S3 bucket
  • Maven

The finished Lambda is made from only Scala classes. Now you can replace the Fibonacci code with your own code, and replace the test event with events from other AWS components in your own project.

Serverless function learning environments across Amazon, Microsoft, and Google clouds

by Brian Fitzgerald

Introduction

If you want to dip your toe into serverless function programming, you will want to try it out in a simple web-based environment with all the needed syntax set up for you. That way, you can at least get to “Hello World!” without delay or error.

Across three cloud providers, Amazon, Microsoft, and Google, online edit availability varies across languages and operating systems. Here is a brief summary.

Amazon Web Services

AWS serverless functions, Lambda, are available in seven languages: C#, Go, Java, JavaScript, Powershell, Python, and Ruby. You can experiment with some simple coding by entering your choice of JavaScript, Python, or Ruby code into the online code editor. If you want to use C#, Go, Java, or Powershell, you will have to develop and test your files outside Lambda, put them in a zip file, and upload the zip file. The Lambda console also accepts a jar file for upload. A Lambda Java upload needs class and jar files, not Java source files. Also, a jar file can contain bytecode compiled from other languages that run in a JRE, so, for example, you can write a Lambda in Scala or Clojure.

Saving code changes from the AWS Console is quick, usually under one second. Python code is saved without syntax checking. There is one quirk: tabs in sources get copied to the clipboard as spaces. (I refer to the Lambda Management Console in Chrome on Windows.)

You can export your function, and in that way, get your source files out after you have tested them.

A python Lambda function can return any data type that is JSON serializable, such as dict, list, tuple, Boolean, scalars, None, and hierarchies of these, but not, for example, set, date, datetime, class, or object.
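Since the runtime's serializer is plain JSON, you can check a candidate return value locally with json.dumps before deploying. A small helper sketch:

```python
from json import dumps


def is_lambda_returnable(value):
    """True if json.dumps can serialize the value, which the python runtime requires."""
    try:
        dumps(value)
        return True
    except TypeError:
        return False
```

For example, a dict of lists and scalars passes, while a set or datetime does not.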

Azure

Azure Functions are offered in five languages: C#, Java, JavaScript, Powershell, and Python. Azure functions can be administered online in the Azure portal. Azure offers a choice of Windows or Linux for your function, but online edit is only available for the Windows Function Apps. Python runs on Linux only, which rules out online edit. Creating Java functions is supported only by upload. Online edit, therefore, is available for C#, JavaScript, and Powershell.

An Azure function sits inside a FunctionApp. FunctionApp names must be unique across all Azure. You cannot name your Azure FunctionApp “spam” or “eggs”, and you cannot name your Azure FunctionApp “SpamAndEggs” unless I delete my Azure FunctionApp “SpamAndEggs”.

spamandeggs

FunctionApp creation can take more than 1 minute. When creation finishes, the function list displayed in the portal does not refresh when the function is ready, and you could miss the notification. Saving your code from the portal is almost instantaneous. Compile and run takes less than 1 second. You can zip and download your finished code by pressing Download app content.

Pressing Tab in the online editor inserts space characters, which will be less of a problem here, since you won’t be editing Python source online.

Google

In Google Cloud Platform, you can create a Google Cloud Function. The language choices are Go, JavaScript, and Python, and you can enter all code using the online editor.

When you finish editing, you press “Deploy”, which can run for up to 1 minute. Syntax errors lead to a failed deployment. While testing the code, you can view it read-only. If you want to make a change, you have to go back to the edit screen. You may download your finished code as a zip file.

Google Cloud function return type is limited to string, tuple, Response instance, or WSGI callable.

Summary

Here is a summary of programming languages across cloud providers.

Language     AWS           Azure          Google
C#           upload only   online edit    not available
Go           upload only   not available  online edit
Java         upload only   upload only    not available
JavaScript   online edit   online edit    online edit
Powershell   upload only   online edit    not available
Python       online edit   upload only    online edit
Ruby         online edit   not available  not available

JavaScript is universally available for learning: You can quickly create a Hello World serverless function using an online editor on any cloud platform. On the other hand, if you are a hard-core java programmer, you are going to need to work out how to upload your code. You could upload code from your IDE, for example. If you want to learn C# or Powershell cloud programming, Azure is the place to be. If you want to explore Go, then go to Google.

 

AWS Simple Queue Service implementation

By Brian Fitzgerald

Introduction

This is an AWS Simple Queue Service (SQS) python implementation with Lambda enqueue and dequeue functions. Some may find this procedure more straightforward than the techniques found in the manual, or in other blogs.

In this implementation, we’re not going to use public IPs, the public internet, internet gateway, Network Address Translation (NAT) instance, VPN connection, or AWS Direct Connect connection. However, we’re also not going to use CloudFormation or EC2. We are only going to use Endpoints, SQS, and Lambda.

Overview

We are going to attack the problem in this order:

  1. Create an endpoint
  2. Create the queue
  3. Create enqueue and dequeue Lambda functions

Create SQS endpoint

Navigate like this:

  • AWS Management Console
  • Services
  • In the left navigation bar, under the “Virtual Private Cloud”, click “Endpoints”
  • Click “Create Endpoint”
  • Select the SQS service for your region, e.g. com.amazonaws.us-east-1.sqs
  • Select your VPC
  • Select two or more subnets
  • Enable Private DNS Name: Leave checked (important)
  • Select your security group
  • Click “Create endpoint”

Note the output:

VPC Endpoint ID vpce-02534f0e3cac4a30d

Caution!

Endpoints are not free! If you are experimenting, then delete your endpoint when you are through. Endpoint charges accrue even when the endpoint is idle; $1.44 per day is an example charge.
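Cleanup can be scripted. This sketch is my own addition: delete_vpc_endpoints is the EC2 API call behind the console's delete action, and the 0.06 hourly rate is merely back-computed from the $1.44/day example, not an official price:

```python
def daily_cost(hourly_rate, hours=24):
    """Endpoint charges accrue hourly; e.g. an assumed 0.06/hour comes to 1.44/day."""
    return round(hourly_rate * hours, 2)


def delete_endpoint(vpce_id):
    """Delete a VPC endpoint so charges stop accruing."""
    import boto3  # deferred so the sketch imports without boto3 installed
    ec2 = boto3.client('ec2')
    rsp = ec2.delete_vpc_endpoints(VpcEndpointIds=[vpce_id])
    return not rsp.get('Unsuccessful')  # an empty Unsuccessful list means the delete went through


# delete_endpoint('vpce-02534f0e3cac4a30d')
```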

Create queue

command:

C:\>aws sqs create-queue --queue-name blogQ

output:

https://queue.amazonaws.com/394755372005/blogQ

Observe that the URL is internet facing.

C:\>curl https://queue.amazonaws.com/394755372005/blogQ

The queue could still be secure, because queue access still requires authentication. The queue can be secured further by limiting network access.

Edit permissions

The easiest way is to start in the management console, create a starting policy document, and then edit that document. Navigate:

  • Services
  • Simple Queue Service
  • Select your queue
  • At bottom, click the Permissions tab
  • Click Add a Permission
  • Select Effect: Allow
  • Principal: Click Everybody
  • Actions: Select
    • DeleteMessage
    • ReceiveMessage
    • SendMessage
  • Click Add Permission

edit.queue

Click Edit Policy Document (Advanced)

Put a comma at the end of the “Resource” line and add a “Condition” block like the one shown below.

{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:394755372005:blogQ/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "SQS:DeleteMessage",
        "SQS:ReceiveMessage",
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:394755372005:blogQ",
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-02534f0e3cac4a30d"
        }
      }
    }
  ]
}
  • Review Policy
  • Save Changes
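The same policy can be built and applied programmatically with boto3's set_queue_attributes. A sketch using the ARN and endpoint ID from the examples above:

```python
from json import dumps


def vpce_only_policy(queue_arn, vpce_id):
    """Allow send/receive/delete only for traffic arriving via the given VPC endpoint."""
    return {
        'Version': '2012-10-17',
        'Statement': [{
            'Sid': '1',
            'Effect': 'Allow',
            'Principal': '*',
            'Action': ['SQS:DeleteMessage', 'SQS:ReceiveMessage', 'SQS:SendMessage'],
            'Resource': queue_arn,
            'Condition': {'StringEquals': {'aws:sourceVpce': vpce_id}}
        }]
    }


def apply_policy(queue_url, policy):
    """Equivalent of Save Changes in the console."""
    import boto3  # deferred so the sketch imports without boto3 installed
    boto3.client('sqs').set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={'Policy': dumps(policy)}
    )
```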

Create Lambda functions

Enqueue

Create a Lambda, noting these details:

  • VPC Access
  • Select your VPC, Subnets and Security Group
  • Execution Role. Make sure your role has:
    • AWSLambdaBasicExecutionRole
    • AWSLambdaENIManagementAccess
  • Python 3.7

Message

For this blog, each queue message will be a dict with a timestamp and a four-character random string, like this:

{
  'rnd': 'LDOR',
  'ts': '2019-05-07 04:03:24.145866'
}

Files

Create two files:

enq.py

from random import choice
from string import ascii_uppercase
from json import dumps
from datetime import datetime
from sqs import Sqs


def lam(ev, c):
    cli = Sqs.cli()
    rsp = cli.send_message(
        QueueUrl=Sqs.url(),
        DelaySeconds=1,
        MessageBody=dumps(bod())
    )
    return {}


def bod():
    stringLength = 4
    rnd = ''.join(choice(ascii_uppercase) for i in range(stringLength))
    return {
        'rnd': rnd,
        'ts': str(datetime.now())
    }

sqs.py

A class to hide SQS details, common to enq and deq:

from boto3 import client


class Sqs:

    @staticmethod
    def cli():
        epurl = 'https://sqs.us-east-1.amazonaws.com/'
        return client(
            service_name='sqs',
            endpoint_url=epurl
        )

    @staticmethod
    def url():
        return 'https://sqs.us-east-1.amazonaws.com/394755372005/blogQ'

The file arrangement looks like this:

enq.lam

In the Handler box, enter “enq.lam”, click “Save”, then click “Test”. Check for “Succeeded”. Click “Test” a few times to enqueue some messages.

Dequeue

Create file deq.py:

from sqs import Sqs


def lam(ev, cx):
    cli = Sqs.cli()
    numdeq = 0
    while True:
        rsp = cli.receive_message(
            QueueUrl=Sqs.url(),
            MaxNumberOfMessages=10,
            WaitTimeSeconds=1
        )

        if 'Messages' not in rsp:
            break
        msgs = rsp['Messages']
        for msg in msgs:
            cli.delete_message(
                QueueUrl=Sqs.url(),
                ReceiptHandle=msg['ReceiptHandle']
            )
            print(msg['Body'])
            numdeq += 1

    print('numdeq = %s' % numdeq)
    ret = {
        'numdeq': numdeq
    }
    return ret

Also, create file sqs.py as before.

deq.lam

 

In the Handler box, enter “deq.lam”, click “Save”, then click “Test”. Check for “Succeeded”. Check in the output that all your messages got dequeued.

Summary

We implemented an AWS queue using the most basic tools available, namely VPC Endpoint, SQS, and Lambda.

AWS Lambda python directory Easter Egg

By Brian Fitzgerald

This could be a bug or an Easter Egg in AWS Lambda. When you create directories “py” and “python”, directory “thon” appears in the Lambda Console code tree. Directory “thon” does not actually exist.

Setup

[ec2-user@ip-172-31-80-17 EasterEgg]$ mkdir py python
[ec2-user@ip-172-31-80-17 EasterEgg]$ cat > lam.py
from os import system

def lam(e,c):
    system('du')
    return {}
[ec2-user@ip-172-31-80-17 EasterEgg]$ zip -rq ../EasterEgg.zip *
[ec2-user@ip-172-31-80-17 EasterEgg]$ aws lambda update-function-code \
  --function-name EasterEgg \
  --zip-file fileb://../EasterEgg.zip

Result

Directory “thon” appears in the code tree. The du output shows that directory “thon” does not exist.

eegg

Connect AWS Lambda to RDS SQL Server with pyodbc

By Brian Fitzgerald

Introduction

We want to connect Lambda to Microsoft SQL Server RDS using the python ODBC connector pyodbc. pyodbc calls the Microsoft SQL Server driver, which sits on top of unixODBC. Installing ODBC drivers into AWS Lambda has frustrated some users in the past. This blog outlines a simple approach.

Staging on EC2

We’re going to create a complete set of files for uploading to Lambda. We’ll stage those files on EC2, zip them, and upload the zip to Lambda.

Create RDS

For this article, I created a Microsoft SQL Server RDS instance, as described in this table.

Parameter            Value
Instance name        odbcblog
Engine               SQL Server Express Edition
Engine version       14.00.3049.1.v1
Class                db.t2.micro
Security group       sg-04ed8240
Endpoint IP address  odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com
Endpoint Port        1433
Master user          odbcuser
Master password      odbcuser

connectivity

Testing from EC2, I get:

[ec2-user@ip-172-251-80-17 ~]$ nc -v odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com 1433
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.

RDS instance odbcblog is in security group sg-04ed8240. After associating security group sg-04ed8240 to our EC2, we are good to go:

[ec2-user@ip-172-251-80-17 ~]$ nc -v odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com 1433
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 172.251.58.192:1433.

install SQL Server ODBC

[ec2-user@ip-172-251-80-17 ~]$ sudo bash

[root@ip-172-251-80-17 download]# curl packages.microsoft.com/config/rhel/6/prod.repo > /etc/yum.repos.d/mssql-release.repo
[root@ip-172-251-80-17 download]# yum -y install msodbcsql17

Review the output and notice that dependent package unixODBC also gets installed.

 Installing : unixODBC-2.3.1-11.amzn2.0.1.x86_64

We’ll use that fact later.

Notice the /etc/odbcinst.ini entry:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
UsageCount=1

install pyodbc

[root@ip-172-251-80-17 download]# yum -y install gcc-c++
[root@ip-172-251-80-17 download]# yum -y install python3-devel
[root@ip-172-251-80-17 download]# yum -y install unixODBC-devel
[root@ip-172-251-80-17 download]# pip3 install pyodbc
WARNING: Running pip install with root privileges is generally not a good idea. 
 Try `pip3 install --user` instead.

test pyodbc from EC2

testodbc.py:

import pyodbc

con = pyodbc.connect(
    driver = 'ODBC Driver 17 for SQL Server',
    server = 'odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com',
    port = 1433,
    user = 'odbcuser',
    password = 'odbcuser',
    timeout = 5
)
sql = 'select @@version'
crsr = con.cursor()
crsr.execute(sql)
row = crsr.fetchone()
print (row[0])

Execute:

[ec2-user@ip-172-251-80-17 test]$ python3 testodbc.py

Output:

Microsoft SQL Server 2017 (RTM-CU13-OD) (KB4483666) - 14.0.3049.1 (X64)
Dec 15 2018 11:16:42
Copyright (C) 2017 Microsoft Corporation
Express Edition (64-bit) on Windows Server 2016 Datacenter 10.0  
  (Build 14393: ) (Hypervisor)

cool.

Stage Lambda code on EC2

Download packages

Let’s start over and download the packages:

[ec2-user@ip-172-251-80-17 download]$ yumdownloader unixODBC.x86_64
[ec2-user@ip-172-251-80-17 download]$ yumdownloader msodbcsql17
[ec2-user@ip-172-251-80-17 download]$ pip3 download pyodbc
[ec2-user@ip-172-251-80-17 download]$ ls -1
msodbcsql17-17.3.1.1-1.x86_64.rpm
pyodbc-4.0.26.tar.gz
unixODBC-2.3.1-11.amzn2.0.1.x86_64.rpm

Identify a Lambda staging directory on EC2

[ec2-user@ip-172-251-80-17 testodbc]$ mkdir -p /home/ec2-user/lambdas/testodbc
[ec2-user@ip-172-251-80-17 testodbc]$ cd /home/ec2-user/lambdas/testodbc

Install the rpms

[ec2-user@ip-172-251-80-17 testodbc]$ rpm2cpio /home/ec2-user/lambdas/download/unixODBC-2.3.1-11.amzn2.0.1.x86_64.rpm | cpio -id
2504 blocks
[ec2-user@ip-172-251-80-17 testodbc]$ rpm2cpio /home/ec2-user/lambdas/download/msodbcsql17-17.3.1.1-1.x86_64.rpm | cpio -id
4486 blocks

Install pyodbc

Create a python library directory

[ec2-user@ip-172-251-80-17 lib]$ mkdir -p /home/ec2-user/lambdas/testodbc/python/lib

Install

[ec2-user@ip-172-251-80-17 ~]$ pip3 install --target /home/ec2-user/lambdas/testodbc/python/lib /home/ec2-user/lambdas/download/pyodbc-4.0.26.tar.gz

Directory structure

Observe the directory structure so far

[ec2-user@ip-172-251-80-17 ~]$ cd /home/ec2-user/lambdas/testodbc
[ec2-user@ip-172-251-80-17 testodbc]$ ls -1F
etc/
opt/
python/
usr/
[ec2-user@ip-172-251-80-17 testodbc]$ cd usr/
[ec2-user@ip-172-251-80-17 usr]$ ls -1F
bin/
lib64/
share/

Library directory

The Lambda function is going to load pyodbc. pyodbc is going to look for libodbc.so.2, but it is not going to search usr/lib64. It will do you no good to set Lambda’s LD_LIBRARY_PATH because the Lambda’s containing runtime starts before the Lambda’s environment gets set. Lambda will search lib, so move the library directory there:

[ec2-user@ip-172-251-80-17 testodbc]$ mv usr/lib64 lib

odbcinst.ini

AWS will install our Lambda code in a virtual machine under /var/task. Edit odbcinst.ini:

[ec2-user@ip-172-251-80-17 testodbc]$ vi etc/odbcinst.ini

Replace the contents:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/var/task/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
UsageCount=1

Python application code directory

To avoid clutter, we will put our own application code in a subdirectory.

[ec2-user@ip-172-251-80-17 testodbc]$ mkdir py

Summary

The directory structure at the top level is now:

[ec2-user@ip-172-251-80-17 testodbc]$ ls -1F
etc/
lib/
opt/
py/
python/
usr/

Create the lambda code

File:

[ec2-user@ip-172-251-80-17 testodbc]$ vi py/testodbc.py

Contents:

import pyodbc
from json import dumps

def lam(ev, cx):
    con = pyodbc.connect(
        driver = 'ODBC Driver 17 for SQL Server',
        server = 'odbcblog.p0p3rwmlj3hf.us-east-1.rds.amazonaws.com',
        port = 1433,
        user = 'odbcuser',
        password = 'odbcuser',
        timeout = 5
    )
    sql = 'select @@version'
    crsr = con.cursor()
    crsr.execute(sql)
    row = crsr.fetchone()
    version = row[0]
    ret = {
        'version': version
    }
    return dumps(ret)

The handler will, therefore, be testodbc.lam.

Fun fact: You cannot name an AWS Lambda python handler “lambda”.

Create the Lambda

Initial creation

Create a basic lambda function by any method. For example, use the Lambda console.

Configuration  Value
Name           testOdbc
Runtime        python 3.7
Timeout        5 minutes
Handler        testodbc.lam

Set two environment variables:

Variable    Value
ODBCSYSINI  /var/task/etc
PYTHONPATH  /var/runtime:/var/task/py:/var/task/python/lib
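These PYTHONPATH entries simply become sys.path entries in the runtime, which is how the handler finds pyodbc under python/lib. You can observe the mechanism locally with a child interpreter (a sketch of my own, unrelated to AWS):

```python
import os
import subprocess
import sys


def sys_path_with(pythonpath):
    """Start a child interpreter with PYTHONPATH set and return its sys.path entries."""
    env = dict(os.environ, PYTHONPATH=pythonpath)
    out = subprocess.check_output(
        [sys.executable, '-c', 'import sys; print("\\n".join(sys.path))'],
        env=env
    )
    return out.decode().splitlines()


# sys_path_with('/var/task/py:/var/task/python/lib') lists both directories
```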

Code upload

Zip all libraries, configuration files, and code:

[ec2-user@ip-172-251-80-17 testodbc]$ zip -rq ../testodbc.zip *

Upload the files

[ec2-user@ip-172-251-80-17 testodbc]$ aws lambda update-function-code \
   --function-name testOdbc \
   --zip-file fileb://../testodbc.zip

Networking

This section must be handled with care. Otherwise, you are going to get ODBC driver timeouts. For Lambda to successfully connect to RDS, two conditions must be in place.

Elastic Network Interface

Lambda needs a basic execution role for basic CloudWatch access. In addition, your Lambda needs to be able to bind to an Elastic Network Interface.

In the IAM Console, create a new role with these policies attached, e.g. odbcLamRole:

  • AWSLambdaBasicExecutionRole
  • AWSLambdaENIManagementAccess

In Lambda Console, assign the role to the Lambda.

VPC

In the Lambda console, in the Network pane, if you see “No VPC”, switch to your VPC, select two or more subnets, and select your security group.

network

Review

You may review the configuration from the CLI.

[ec2-user@ip-172-251-80-17 ~]$ aws lambda get-function-configuration --function-name testOdbc

Output:

{
    "FunctionName": "testOdbc",
    "LastModified": "2019-05-02T20:29:51.551+0000",
    "RevisionId": "df676edb-e545-42dc-90c3-0cf5dc16ed81",
    "MemorySize": 128,
    "Environment": {
        "Variables": {
            "PYTHONPATH": "/var/runtime:/var/task/py:/var/task/python/lib",
            "ODBCSYSINI": "/var/task/etc"
        }
    },
    "Version": "$LATEST",
    "Role": "arn:aws:iam::665575760545:role/odbcLamRole",
    "Timeout": 300,
    "Runtime": "python3.7",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "VV3g7pLL1G+y3PoEPyX+UcbwMn40KIiOUbCu5ApYowM=",
    "Description": "",
    "VpcConfig": {
        "SubnetIds": [
            "subnet-8c036bd0",
            "subnet-b7214ed0",
            "subnet-aa197384",
            "subnet-364a757c",
            "subnet-af9a2991",
            "subnet-0f476600"
        ],
        "VpcId": "vpc-0d398177",
        "SecurityGroupIds": [
            "sg-04ed8240"
        ]
    },
    "CodeSize": 2322598,
    "FunctionArn": "arn:aws:lambda:us-east-1:665575760545:function:testOdbc",
    "Handler": "testodbc.lam"
}

run the Lambda

command:

[ec2-user@ip-172-251-80-17 ~]$ aws lambda invoke --function-name testOdbc out.json

cli output:

{
    "ExecutedVersion": "$LATEST",
    "StatusCode": 200
}

Lambda return:

[ec2-user@ip-172-251-80-17 ~]$ cat out.json
"{\"version\": \"Microsoft SQL Server 2017 (RTM-CU13-OD) (KB4483666) - 14.0.3049.1 (X64) \\n\\tDec 15 2018 11:16:42 \\n\\tCopyright (C) 2017 Microsoft Corporation\\n\\tExpress Edition (64-bit) on Windows Server 2016 Datacenter 10.0  (Build 14393: ) (Hypervisor)\\n\"}"

The Lambda performed these steps

  • Load all application code and dependent libraries
  • Load Unix ODBC driver
  • In the handler, load the MS ODBC driver
  • Connect to the RDS SQL Server
  • Allocate a cursor
  • Execute a SQL statement
  • Retrieve the result set
  • Parse the result set as a version
  • Return the version as JSON from the lambda handler

Summary

We accomplished these items

  • setup EC2, RDS, and Lambda in a VPC
  • install pyodbc and underlying drivers in EC2
  • test python code by connecting from EC2 to RDS
  • stage all needed drivers, configuration files python libraries, and python application code on EC2
  • upload the code to Lambda
  • run the Lambda

We have therefore established connectivity from a Python Lambda to a SQL Server RDS.

Upload AWS Lambda code from command line or from python

By Brian Fitzgerald

Introduction

For many first-time users, creating the Lambda function is done in the AWS Management Console. The offered code entry choices are “Edit code inline”, “Upload a .zip file” (from your PC, or wherever you are running your browser), or “Upload a file from Amazon S3”.

code.entry.type

You can, however, save a few steps by uploading your code directly from its development, staging, or testing location. That way, you don’t need to log on to the console and work the menus, and you don’t need to copy the zip file to your PC, or to S3.

The AWS API

Tasks done from the AWS Management Console are communicated to AWS using an API library, which talks to AWS via JSON. Amazon also supplies a command line interpreter (CLI) built on top of the same library, so you can accomplish your tasks without using the browser. You can also use the API directly to write your own code to accomplish the same task.

Upload Lambda from EC2 using the CLI

If your Lambda code is in EC2, then it is convenient to upload directly from EC2 using the AWS CLI.

Configure

The AWS CLI command is “aws”, and it is already installed in EC2. If you have not done so already, run aws configure, a one-time setup. Generate an AWS access key and use the actual values in place of “AKI…” and “Gtj…”. Choose your own region; in the beginning, you are better off keeping all your code in a single region.

[ec2-user@ip-172-31-80-17 ~]$ aws configure
AWS Access Key ID [None]: AKI...
AWS Secret Access Key [None]: Gtj...
Default region name [us-east-1]:
Default output format [None]:
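Behind the scenes, aws configure writes these values into two small INI files under ~/.aws, which is worth knowing when you later need to script or rotate credentials. With the same placeholder values, they look like this:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKI...
aws_secret_access_key = Gtj...

# ~/.aws/config
[default]
region = us-east-1
```

The boto3 library used later in this article reads the same files, so one aws configure serves both the CLI and your Python code.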

Let’s assume that you have some code to upload:

[ec2-user@ip-172-31-80-17 lambdas]$ unzip -l trc.zip
Archive:  trc.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  04-29-2019 01:37   bin/
   854664  07-30-2018 20:05   bin/strace
      119  05-01-2019 01:28   lg.py
       84  04-30-2019 01:54   sllg.py
      254  04-30-2019 02:07   trcp.py
      119  05-01-2019 01:28   trc.py
---------                     -------
   855240                     6 files

Assume, for example, that the destination Lambda is called “lamLocal”. The upload command is (showing lines folded):

[ec2-user@ip-172-31-80-17 lambdas]$ aws lambda update-function-code \
  --function-name lamLocal \
  --zip-file fileb://trc.zip

The response is in JSON. A normal response looks like this:

{
  "FunctionName": "lamLocal",
  "LastModified": "2019-05-01T19:13:47.011+0000",
  "RevisionId": "a139363f-2f31-4aa2-818f-ec20057a981b",
  "MemorySize": 128,
  "Version": "$LATEST",
  "Role": "arn:aws:iam::549357536367:role/service-role/lamLocal-role-gdmmx0de",
  "Timeout": 3,
  "Runtime": "python3.6",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "CodeSha256": "45ELCm6smYw5Q/fFxMR+756GwfvSEGeLxVIF0kyFhac=",
  "Description": "",
  "VpcConfig": {
    "SubnetIds": [],
    "VpcId": "",
    "SecurityGroupIds": []
  },
  "CodeSize": 314844,
  "FunctionArn": "arn:aws:lambda:us-east-1:549357536367:function:lamLocal",
  "Handler": "trcp.lam"
}
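The CodeSha256 field gives you a way to verify the upload: it is the base64-encoded SHA-256 digest of the deployment package, so you can recompute it locally and compare. A minimal sketch (the helper name code_sha256 is mine):

```python
# Sketch: recompute Lambda's CodeSha256 locally.
# CodeSha256 is the base64-encoded SHA-256 digest of the zip file.
import base64
import hashlib

def code_sha256(zip_path):
    with open(zip_path, 'rb') as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode('ascii')
```

If code_sha256('trc.zip') matches the CodeSha256 in the response, the code AWS stored is byte-for-byte what you sent.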

If an error occurs, the CLI displays no JSON, only an error message (folded):

[ec2-user@ip-172-31-80-17 lambdas]$ aws lambda update-function-code \
  --function-name zamLocal --zip-file fileb://trc.zip

An error occurred (ResourceNotFoundException) 
  when calling the UpdateFunctionCode operation: 
  Function not found: arn:aws:lambda:us-east-1:549357536367:function:zamLocal

This is a basic example of updating code on an existing Lambda that was previously created in the AWS Management Console. It is also possible to create, invoke, and delete a Lambda from the AWS CLI.

Upload Lambda from EC2 using code

Now we’ll upload the zip to Lambda with a Python call. First, some setup:

[ec2-user@ip-172-31-80-17 lambdas]$ sudo bash
[root@ip-172-31-80-17 lambdas]# yum -y update
[root@ip-172-31-80-17 lambdas]# pip3 install boto3
WARNING: Running pip install with root privileges 
  is generally not a good idea. 
  Try `pip3 install --user` instead.

File uplam.py:

from boto3 import client
from json import dumps

awskey = 'AKI...'
awskeysec = 'Gtj...'
lam = 'lamLocal'
zf = 'trc.zip'

cli = client(
    'lambda',
    aws_access_key_id= awskey,
    aws_secret_access_key= awskeysec
)

with open(zf, 'rb') as f:
    ret = cli.update_function_code(
        FunctionName = lam,
        ZipFile = f.read()
    )
    print(dumps(ret, indent=4, sort_keys=True))

Execution:

[ec2-user@ip-172-31-80-17 lambdas]$ python3 uplam.py

Output:

{
    "CodeSha256": "45ELCm6smYw5Q/fFxMR+756GwfvSEGeLxVIF0kyFhac=",
    "CodeSize": 314844,
    "Description": "",
    "FunctionArn": "arn:aws:lambda:us-east-1:549357536367:function:lamLocal",
    "FunctionName": "lamLocal",
    "Handler": "trcp.lam",
    "LastModified": "2019-05-01T22:46:44.426+0000",
    "MemorySize": 128,
    "ResponseMetadata": {
        "HTTPHeaders": {
            "connection": "keep-alive",
            "content-length": "675",
            "content-type": "application/json",
            "date": "Wed, 01 May 2019 22:46:44 GMT",
            "x-amzn-requestid": "fe6d9520-6c62-11e9-a5fb-27bb7e5a8a89"
        },
        "HTTPStatusCode": 200,
        "RequestId": "fe6d9520-6c62-11e9-a5fb-27bb7e5a8a89",
        "RetryAttempts": 0
    },
    "RevisionId": "08722516-76c6-4b96-a9ab-dd6f89fadf1f",
    "Role": "arn:aws:iam::549357536367:role/service-role/lamLocal-role-gdmmx0de",
    "Runtime": "python3.6",
    "Timeout": 3,
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "Version": "$LATEST",
    "VpcConfig": {
        "SecurityGroupIds": [],
        "SubnetIds": [],
        "VpcId": ""
    }
}

In case of error, update_function_code raises an exception and does not return the JSON value.

[ec2-user@ip-172-31-80-17 lambdas]$ python3 uplam.py
Traceback (most recent call last):
  File "uplam.py", line 18, in <module>
    ZipFile = f.read()
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred 
    (ResourceNotFoundException) 
    when calling the UpdateFunctionCode operation: 
    Function not found: 
    arn:aws:lambda:us-east-1:549357536367:function:zamLocal

Other Languages

I have demonstrated Lambda upload using python3 and boto3. The upload program could just as well have been written in any of these languages:

  • Java
  • .NET
  • Node.js
  • PHP
  • Ruby
  • Go
  • C++

Each of these APIs has access to the Lambda service and a method equivalent to updateFunctionCode.

Conclusion

An introduction to Lambda functions will lead the user to the AWS Management Console. From there, the code entry methods are the inline editor, uploading a .zip file from your PC, and uploading a .zip file from S3. Instead, you may upload your code using the AWS CLI. Finally, you may upload a Lambda function from within a Python script, or from a program written in any of seven other languages.

Attempt to trace a process in AWS lambda

By Brian Fitzgerald

Introduction

On the surface, AWS Lambda appears to be a serverless resource that runs your code. However, Lambda users will quickly notice that the code runs on an EC2-like Linux container. There are times when a system-related error appears, and you want to trace the code to find out the cause, or the point of failure.

Scenario

We set up the Lambda. File lg.py:

from os import getlogin
from json import dumps

def lam(ev, cx):
    ret = {
        'login': dumps(getlogin())
    }
    return ret
    
if __name__ == '__main__':
    lam(None, None)

The output is:

START RequestId: bc81ac8d-5c3b-421a-9432-82a6ab279767 Version: $LATEST
[Errno 6] No such device or address: OSError
Traceback (most recent call last):
  File "/var/task/lg.py", line 6, in lam
    'login': dumps(getlogin())
OSError: [Errno 6] No such device or address
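As an aside, the Errno 6 (ENXIO) comes from os.getlogin() itself: it asks for the name of the user logged in on the controlling terminal, and a Lambda container has no controlling terminal. If all you needed were a user name, getpass.getuser() is a more forgiving lookup; a sketch of that alternative (not the code we are about to trace):

```python
# Sketch: getpass.getuser() consults the LOGNAME, USER, LNAME and
# USERNAME environment variables, then the password database, so it
# works without a controlling terminal (unlike os.getlogin()).
from getpass import getuser
from json import dumps

def lam(ev, cx):
    return {'login': dumps(getuser())}
```

That workaround sidesteps the error; the rest of this article instead chases the error down to its system call.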

Suppose we want to know more about the error message. In Linux, strace will tell you which system call produced it, so let’s try strace. File trc.py:

from os import system

def lam(ev, cx):
    system('strace python lg.py')
    return {}

However, strace is not present in Lambda:

sh: strace: command not found

You can search:

     system('find / -name strace -ls')

No file is found. You can, however, copy in strace yourself. Start from EC2. Stage the strace binary alongside your Python files:

[ec2-user@ip-172-31-80-17 trc]$ mkdir -p bin
[ec2-user@ip-172-31-80-17 trc]$ cp -p /usr/bin/strace bin/strace
[ec2-user@ip-172-31-80-17 trc]$ find * -type f
bin/strace
lg.py
trc.py
[ec2-user@ip-172-31-80-17 trc]$ zip -rq ../trc.zip * 
[ec2-user@ip-172-31-80-17 trc]$ aws s3 cp ../trc.zip s3://test.bucket/lambda/trc.zip
upload: ../trc.zip to s3://test.bucket/lambda/trc.zip
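One detail about that zip step: the execute bit on bin/strace must survive the archive, which the Info-ZIP zip -r command preserves. If you ever build the archive from Python instead, you have to store each file’s Unix mode yourself; a sketch under that assumption (zip_tree is a hypothetical helper):

```python
# Sketch: build a Lambda deployment zip in Python, preserving each
# file's Unix permission bits (e.g. the execute bit on bin/strace)
# by storing them in the ZipInfo external attributes.
import os
import zipfile

def zip_tree(root, zip_path):
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as z:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                arcname = os.path.relpath(full, root)
                info = zipfile.ZipInfo(arcname)
                # the high 16 bits of external_attr hold the Unix mode
                info.external_attr = (os.stat(full).st_mode & 0xFFFF) << 16
                with open(full, 'rb') as f:
                    z.writestr(info, f.read())
```

A plain zipfile.ZipFile.write() loop would also work here, since write() copies the file mode itself; the ZipInfo form matters when you generate entries from data rather than from files on disk.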

Upload the zip file:

[Image: upload]

Now your Lambda function has three files: the two Python files and the strace binary. Change the call to system:

    system('/var/task/bin/strace python lg.py')

Run it and you get this message:

/var/task/bin/strace: ptrace(PTRACE_TRACEME, ...): Operation not permitted

That did not work. The next thing you can try is:

  1. Start a second process, and get the pid
  2. The second process will sleep for 1s and then run the failing statement.
  3. In the first process, trace the second process.

trcp.py

from subprocess import Popen
from os import system


def lam(ev, cx):
    # start the target process, then attach strace to its pid
    proc = Popen('python sllg.py', shell=True)
    cmd = 'bin/strace -p %s' % proc.pid
    print(cmd)
    system(cmd)
    return {}


if __name__ == '__main__':
    lam(None, None)

sllg.py:

from time import sleep
from os import getlogin

sleep(1)
lg = getlogin()

In that case, the error message is:

bin/strace: attach: ptrace(PTRACE_ATTACH, 5): Operation not permitted

Discussion

We tried two different invocations of strace inside Lambda. In the first attempt, we ran “strace command”. Internally, strace forks a child that calls ptrace(PTRACE_TRACEME, …) and then execs the command. In the second case, we asked strace to attach to a running process with ptrace(PTRACE_ATTACH, …). AWS Lambda permits neither call.

Conclusion

The traditional approach to tracing Linux processes is not permitted in AWS Lambda.