Using Lambda as an S3 event processor

Amazon S3 is used primarily for storing any kind of data. S3 also supports an event notification feature, through which it can send an event whenever an object is created, updated, or deleted in a bucket.

In this tutorial, we will learn how to use AWS Lambda to process S3 events.

[Diagram: web application uploads to the S3 source bucket, which invokes the Lambda function to write to the target bucket]

I will be using the AWS CLI for this tutorial so that you can get a better grasp of it. For real projects, I would suggest using CloudFormation or higher-level abstractions such as SAM (Serverless Application Model) or Terraform.

Use case

Suppose you have a web application that stores user documents in S3 and you want to process those documents once they are uploaded. You also want to store the processed documents in a separate S3 bucket.

The high-level flow is as follows:

  • A user, via the web application, uploads an object to the source bucket in Amazon S3.
  • Amazon S3 publishes the object-created event to AWS Lambda by invoking the Lambda function.
  • The Lambda function processes the input event and executes the business logic.

Prerequisites

Make sure you have completed the following steps before you start this tutorial:

  • Sign up for an AWS account and create an administrator user in the account.
  • Install and set up the AWS CLI. For instructions, see Step 1: Set Up an AWS Account and the AWS CLI.
  • Install Gradle.
  • Once installed, add the git, gradle, and aws executables to your system path.
  • Run the following commands to verify that the tools have been installed:
git --version

gradle --version

aws --version

Tutorial

If you run into any errors, please refer to the Common Errors section to see if it helps.

Create source and target buckets

Create the source and target buckets in S3

# create source bucket
aws s3 mb s3://polyglotdeveloper-user-bucket --region us-west-2

# create target bucket
aws s3 mb s3://polyglotdeveloper-user-processed-bucket --region us-west-2

If you run into an error, it is likely because bucket names are globally unique: add a random string to your source and target bucket names, and use those names throughout the rest of this tutorial.
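
For example, one option (a sketch; BUCKET_SUFFIX is just an illustrative variable) is to append a timestamp:

# Append a timestamp so the bucket names are globally unique
export BUCKET_SUFFIX=$(date +%s)
aws s3 mb s3://polyglotdeveloper-user-bucket-$BUCKET_SUFFIX --region us-west-2
aws s3 mb s3://polyglotdeveloper-user-processed-bucket-$BUCKET_SUFFIX --region us-west-2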

Upload a file to the source bucket using the s3 cp command

In reality, your web application would upload this file. For the sake of this tutorial, we will upload it using the s3 cp command.

# Create project directory
mkdir lambda-s3; cd lambda-s3

# Create config directory to store config files such as s3, AWS IAM policies
mkdir config;
mkdir config/polyglotdeveloper-user-bucket;

# Create sample user file to add to source bucket
touch config/polyglotdeveloper-user-bucket/sunny-college-fees.txt;

# Add some content to file
echo "Sunny's college details" > config/polyglotdeveloper-user-bucket/sunny-college-fees.txt

# Validate file is created
cat config/polyglotdeveloper-user-bucket/sunny-college-fees.txt

# Upload sunny-college-fees.txt to source bucket
aws s3 cp config/polyglotdeveloper-user-bucket/sunny-college-fees.txt s3://polyglotdeveloper-user-bucket/

Create Lambda code using Java

# Create Java project directory
mkdir documentProcessor;cd documentProcessor
gradle init --type java-library

gradle build

# Remove default files created by project
rm -rf src/main/java/Library.java;rm -rf src/test/java/LibraryTest.java

Add the below code under the dependencies section in the build.gradle file. These libraries help us read (deserialize) the events sent by S3.

compile 'com.amazonaws:aws-lambda-java-core:1.1.0'
compile 'com.amazonaws:aws-lambda-java-events:1.1.0'

Add the below task at the end of build.gradle so that we can create a fat zip which contains all the dependencies for the project. This is required as AWS Lambda mandates uploading a single zip/jar file containing all the dependencies.

/*
** Task to create zip file bundling all the dependencies
*/
task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip
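
Note: this build script targets the Gradle versions current at the time of writing. On newer Gradle releases, where the compile configuration and configurations.runtime have been removed, the equivalents would look roughly like the below (an assumption; check your Gradle version's documentation):

dependencies {
    implementation 'com.amazonaws:aws-lambda-java-core:1.1.0'
    implementation 'com.amazonaws:aws-lambda-java-events:1.1.0'
}

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtimeClasspath
    }
}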

Add S3 DocumentProcessor class

Tip: You can open the project as a Gradle project in your favorite IDE, such as Eclipse or IntelliJ IDEA

touch src/main/java/DocumentProcessorLambda.java

Add the following code to the DocumentProcessorLambda.java class

import java.io.IOException;
import java.io.InputStream;
import java.net.URLDecoder;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;

public class DocumentProcessorLambda implements RequestHandler<S3Event, String> {

  private static final String TARGET_BUCKET = "polyglotdeveloper-user-processed-bucket";

  public String handleRequest(S3Event s3event, Context context) {

    try {

      // STEP1: Read the input event and extract details of the file added to the source bucket
      S3EventNotificationRecord record = s3event.getRecords().get(0);
      String srcBucket = record.getS3().getBucket().getName();
      // Object keys arrive URL-encoded (spaces become '+'), so decode the key
      // before using it to fetch the object.
      String srcKey = record.getS3().getObject().getKey().replace('+', ' ');
      srcKey = URLDecoder.decode(srcKey, "UTF-8");
      String dstKey = srcKey;
      System.out.println("srcBucket=" + srcBucket + " srcKey=" + srcKey);
      System.out.println("targetBucket=" + TARGET_BUCKET + " destKey=" + dstKey);

      // STEP2: Create S3 client and read the object as a stream
      AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
      S3Object s3Object = s3Client.getObject(new GetObjectRequest(srcBucket, srcKey));
      InputStream objectData = s3Object.getObjectContent();

      // STEP3: Do some processing on the bucket here
      //...

      // STEP4: Upload to the S3 target bucket
      System.out.println("Writing to: " + TARGET_BUCKET + "/" + dstKey);
      ObjectMetadata meta = new ObjectMetadata();
      meta.setContentType(s3Object.getObjectMetadata().getContentType());
      // Carry over the content length so the SDK can stream the upload instead
      // of buffering the whole object in memory.
      meta.setContentLength(s3Object.getObjectMetadata().getContentLength());
      s3Client.putObject(TARGET_BUCKET, dstKey, objectData, meta);
      System.out.println("Successfully processed " + srcBucket + "/" + srcKey + " and uploaded to " + TARGET_BUCKET +
          "/" + dstKey);
      return "Ok";
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}

Build the project now to make sure there are no errors.

gradle clean build

Verify that the fat zip has been created in the distributions folder

ls build/distributions

Create AWS roles and policies

[Diagram: Lambda execution role with its CloudWatch and S3 permissions policies]

Create the Lambda Execution role

When our code runs as a Lambda function, it will need permission to write logs to CloudWatch and permission on S3 to read from the source bucket and write to the target bucket.

# move to parent project directory lambda-s3
cd ../
mkdir config/policies
touch config/policies/DocumentProcessor-lambda-trustpolicy.json

Add the following contents to DocumentProcessor-lambda-trustpolicy.json. This trust policy allows the Lambda service to assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create role

aws iam create-role --role-name DocumentProcessor-lambda-execution-role --assume-role-policy-document  file://config/policies/DocumentProcessor-lambda-trustpolicy.json

You will see output similar to the following:

{
    "Role": {
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "lambda.amazonaws.com"
                    }
                }
            ]
        },
        "RoleId": "AROAIDJWHJOWVF74LE65E",
        "CreateDate": "2017-07-09T03:19:14.840Z",
        "RoleName": "DocumentProcessor-lambda-execution-role",
        "Path": "/",
        "Arn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/DocumentProcessor-lambda-execution-role"
    }
}

Note down your account ID (shown as YOUR_ACCOUNT_ID in the ARN above) and substitute it in the line below

export AWS_ACCOUNT_ID=YOUR_ACCOUNT_ID

# Validate that it has been set properly
echo $AWS_ACCOUNT_ID
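
Alternatively, you can derive the account ID from your current CLI credentials instead of copying it by hand:

# Fetch the account ID via STS
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)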

Create CloudWatch Permissions Policy

touch config/policies/lambda-cloudwatch-permissionspolicy.json

Add the following contents to lambda-cloudwatch-permissionspolicy.json

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}

Create S3 Permissions Policy

touch config/policies/lambda-s3-all-permissionspolicy.json

Add the following contents to lambda-s3-all-permissionspolicy.json

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
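
Note that s3:* on every resource is broader than this tutorial strictly needs. A tighter variant (a sketch, scoped to our two buckets) would be:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::polyglotdeveloper-user-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::polyglotdeveloper-user-processed-bucket/*"
    }
  ]
}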

Attach CloudWatch and S3 policies to the role

Embed the permissions policies (inline policies, in this example) in the role to specify what it is allowed to do.

# Attach cloudwatch write policy
aws iam put-role-policy --role-name DocumentProcessor-lambda-execution-role --policy-name documentprocessor-cloudwatch-permissions-Policy --policy-document file://config/policies/lambda-cloudwatch-permissionspolicy.json

# Attach S3 read/write policy
aws iam put-role-policy --role-name DocumentProcessor-lambda-execution-role --policy-name documentprocessor-s3-permissions-Policy --policy-document file://config/policies/lambda-s3-all-permissionspolicy.json

Create Lambda function

Create the Lambda function by referencing the local fat zip and the Lambda execution role we just created.

The command below uses the AWS_ACCOUNT_ID environment variable exported earlier.

aws lambda create-function \
--region us-west-2 \
--function-name DocumentProcessor \
--zip-file fileb://documentProcessor/build/distributions/documentProcessor.zip \
--role arn:aws:iam::$AWS_ACCOUNT_ID:role/DocumentProcessor-lambda-execution-role \
--handler DocumentProcessorLambda::handleRequest \
--runtime java8

# Output

    {
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "CodeSha256": "6B2x4RMSEJkTJV6fDMAxQEhALW4I6cMTVx1yWJ0cGnI=",
        "FunctionName": "DocumentProcessor",
        "CodeSize": 8083703,
        "MemorySize": 128,
        "FunctionArn": "arn:aws:lambda:us-west-2:YOUR_ACCOUNT_ID:function:DocumentProcessor",
        "Version": "$LATEST",
        "Role": "arn:aws:iam::YOUR_ACCOUNT_ID:role/DocumentProcessor-lambda-execution-role",
        "Timeout": 3,
        "LastModified": "2017-07-09T03:27:53.289+0000",
        "Handler": "DocumentProcessorLambda::handleRequest",
        "Runtime": "java8",
        "Description": ""
    }
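
To double-check that the function exists, you can fetch it back:

aws lambda get-function --function-name DocumentProcessor --region us-west-2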

Subscribe Lambda to S3 events

Let's configure our source bucket to send an event to our Lambda function whenever a new file is created.

Add Permissions to the Lambda Function’s Access Permissions Policy

aws lambda add-permission \
--function-name DocumentProcessor \
--region us-west-2 \
--statement-id allow-s3-to-invoke-lambda123456789 \
--action "lambda:InvokeFunction" \
--principal s3.amazonaws.com \
--source-arn arn:aws:s3:::polyglotdeveloper-user-bucket \
--source-account $AWS_ACCOUNT_ID

# Output
{
    "Statement": "{\"Sid\":\"allow-s3-to-invoke-lambda123456789\",\"Resource\":\"arn:aws:lambda:us-west-2:YOUR_ACCOUNT_ID:function:DocumentProcessor\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"s3.amazonaws.com\"},\"Action\":[\"lambda:InvokeFunction\"],\"Condition\":{\"StringEquals\":{\"AWS:SourceAccount\":\"YOUR_ACCOUNT_ID\"},\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:s3:::polyglotdeveloper-user-bucket\"}}}"
}
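
You can confirm that the resource policy was recorded on the function:

aws lambda get-policy --function-name DocumentProcessor --region us-west-2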

Configure Notification on the Bucket

touch config/policies/s3-lambda-notification-configuration.json

Add the following content to the file, replacing YOUR_ACCOUNT_ID with your account ID:

{
    "LambdaFunctionConfigurations": [
        {
          "Id": "1234567890",
          "LambdaFunctionArn": "arn:aws:lambda:us-west-2:YOUR_ACCOUNT_ID:function:DocumentProcessor",
          "Events": ["s3:ObjectCreated:*"]
        }
      ]
}

aws s3api put-bucket-notification-configuration --bucket polyglotdeveloper-user-bucket --notification-configuration file://config/policies/s3-lambda-notification-configuration.json
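
To verify that the configuration took effect:

aws s3api get-bucket-notification-configuration --bucket polyglotdeveloper-user-bucket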

Add a new file so that S3 sends a notification to the Lambda function

# Create sample user file to add to source bucket
touch config/polyglotdeveloper-user-bucket/sunny-college-course.txt;

# Add some content to file
echo "Sunny's college course details" > config/polyglotdeveloper-user-bucket/sunny-college-course.txt

# Upload sunny-college-course.txt to source bucket
aws s3 cp config/polyglotdeveloper-user-bucket/sunny-college-course.txt s3://polyglotdeveloper-user-bucket/
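
After a few seconds, you can check whether the upload triggered the function, either by listing the target bucket or by searching the function's CloudWatch logs (note: with the default 3-second timeout this first invocation may fail; we will fix that shortly):

# Check the target bucket for the processed copy
aws s3 ls s3://polyglotdeveloper-user-processed-bucket --region us-west-2

# Inspect the Lambda's CloudWatch logs
aws logs filter-log-events --log-group-name /aws/lambda/DocumentProcessor --region us-west-2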

Test Lambda invocation using S3 mock event

Create a sample S3 input event file

mkdir config/test
touch config/test/lambda-s3input.json

Add the following to lambda-s3input.json

{
  "Records":[
    {
      "eventVersion":"2.0",
      "eventSource":"aws:s3",
      "awsRegion":"us-west-2",
      "eventTime":"1970-01-01T00:00:00.000Z",
      "eventName":"ObjectCreated:Put",
      "userIdentity":{
        "principalId":"AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters":{
        "sourceIPAddress":"127.0.0.1"
      },
      "responseElements":{
        "x-amz-request-id":"C3D13FE58DE4C810",
        "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3":{
        "s3SchemaVersion":"1.0",
        "configurationId":"testConfigRule",
        "bucket":{
          "name":"polyglotdeveloper-user-bucket",
          "ownerIdentity":{
            "principalId":"A3NL1KOZZKExample"
          },
          "arn":"arn:aws:s3:::polyglotdeveloper-user-bucket"
        },
        "object":{
          "key":"sunny-college-fees.txt",
          "size":1024,
          "eTag":"d41d8cd98f00b204e9800998ecf8427e",
          "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
        }
      }
    }
  ]
}

Invoke Lambda

aws lambda invoke \
--invocation-type RequestResponse \
--function-name DocumentProcessor \
--region us-west-2 \
--log-type Tail \
--payload file://config/test/lambda-s3input.json \
outputfile.txt

Output

{
    "LogResult": "U1RBUlQgUmVxdWVzdElkOiAxZWY4YTIxNi02NDU4LTExZTctOTAyYy1kM2UwMzhhYmQ1ODUgVmVyc2lvbjogJExBVEVTVApzcmNCdWNrZXQ9cG9seWdsb3RkZXZlbG9wZXItdXNlci1idWNrZXQgc3JjS2V5PXNhbXBsZS50eHQKdGFyZ2V0QnVja2V0PXBvbHlnbG90ZGV2ZWxvcGVyLXVzZXItcHJvY2Vzc2VkLWJ1Y2tldCBkZXN0S2V5PXNhbXBsZS50eHQKRU5EIFJlcXVlc3RJZDogMWVmOGEyMTYtNjQ1OC0xMWU3LTkwMmMtZDNlMDM4YWJkNTg1ClJFUE9SVCBSZXF1ZXN0SWQ6IDFlZjhhMjE2LTY0NTgtMTFlNy05MDJjLWQzZTAzOGFiZDU4NQlEdXJhdGlvbjogMzAwMS4xMiBtcwlCaWxsZWQgRHVyYXRpb246IDMwMDAgbXMgCU1lbW9yeSBTaXplOiAxMjggTUIJTWF4IE1lbW9yeSBVc2VkOiA1MiBNQgkKMjAxNy0wNy0wOVQwMzozOTowMC4zMjlaIDFlZjhhMjE2LTY0NTgtMTFlNy05MDJjLWQzZTAzOGFiZDU4NSBUYXNrIHRpbWVkIG91dCBhZnRlciAzLjAwIHNlY29uZHMKCg==",
    "FunctionError": "Unhandled",
    "StatusCode": 200
}

The LogResult above is the base64-encoded Lambda log output. If you are on Linux/Mac, you can use the below command to see the logs. If you are on Windows, you can use another base64 decoder or an online decoder to see the log messages.

echo U1RBUlQgUmVxdWVzdElkOiAxZWY4YTIxNi02NDU4LTExZTctOTAyYy1kM2UwMzhhYmQ1ODUgVmVyc2lvbjogJExBVEVTVApzcmNCdWNrZXQ9cG9seWdsb3RkZXZlbG9wZXItdXNlci1idWNrZXQgc3JjS2V5PXNhbXBsZS50eHQKdGFyZ2V0QnVja2V0PXBvbHlnbG90ZGV2ZWxvcGVyLXVzZXItcHJvY2Vzc2VkLWJ1Y2tldCBkZXN0S2V5PXNhbXBsZS50eHQKRU5EIFJlcXVlc3RJZDogMWVmOGEyMTYtNjQ1OC0xMWU3LTkwMmMtZDNlMDM4YWJkNTg1ClJFUE9SVCBSZXF1ZXN0SWQ6IDFlZjhhMjE2LTY0NTgtMTFlNy05MDJjLWQzZTAzOGFiZDU4NQlEdXJhdGlvbjogMzAwMS4xMiBtcwlCaWxsZWQgRHVyYXRpb246IDMwMDAgbXMgCU1lbW9yeSBTaXplOiAxMjggTUIJTWF4IE1lbW9yeSBVc2VkOiA1MiBNQgkKMjAxNy0wNy0wOVQwMzozOTowMC4zMjlaIDFlZjhhMjE2LTY0NTgtMTFlNy05MDJjLWQzZTAzOGFiZDU4NSBUYXNrIHRpbWVkIG91dCBhZnRlciAzLjAwIHNlY29uZHMKCg== | base64 --decode

# Output

    START RequestId: 188348c5-4a77-11e7-bc54-85724e6d0d0e Version: $LATEST
    END RequestId: 188348c5-4a77-11e7-bc54-85724e6d0d0e
    REPORT RequestId: 188348c5-4a77-11e7-bc54-85724e6d0d0e  Duration: 3003.60 ms    Billed Duration: 3000 ms        Memory Size: 128 MB     Max Memory Used: 69 MB
    2017-06-06T05:15:08.572Z 188348c5-4a77-11e7-bc54-85724e6d0d0e Task timed out after 3.00 seconds

The request failed because the task timed out after 3 seconds (the default Lambda timeout). Okay, no problem! Let's increase the timeout and memory:

aws lambda update-function-configuration \
    --function-name DocumentProcessor  \
    --region us-west-2 \
    --timeout 30 \
    --memory-size 1024

Output

{
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "6B2x4RMSEJkTJV6fDMAxQEhALW4I6cMTVx1yWJ0cGnI=",
    "FunctionName": "DocumentProcessor",
    "CodeSize": 8083703,
    "MemorySize": 1024,
    "FunctionArn": "arn:aws:lambda:us-west-2:YOUR_ACCOUNT_ID:function:DocumentProcessor",
    "Version": "$LATEST",
    "Role": "arn:aws:iam::YOUR_ACCOUNT_ID:role/DocumentProcessor-lambda-execution-role",
    "Timeout": 30,
    "LastModified": "2017-07-09T03:41:54.789+0000",
    "Handler": "DocumentProcessorLambda::handleRequest",
    "Runtime": "java8",
    "Description": ""
}

Let’s try testing again

aws lambda invoke \
--invocation-type RequestResponse \
--function-name DocumentProcessor \
--region us-west-2 \
--log-type Tail \
--payload file://config/test/lambda-s3input.json \
outputfile.txt

This time you should see a success response

{
    "LogResult": "U1RBUlQgUmVxdWVzdElkOiBlYzk3N2Q1OC02NDU4LTExZTctODMzMC1jZmY5NjhkZTdiODYgVmVyc2lvbjogJExBVEVTVApzcmNCdWNrZXQ9cG9seWdsb3RkZXZlbG9wZXItdXNlci1idWNrZXQgc3JjS2V5PXNhbXBsZS50eHQKdGFyZ2V0QnVja2V0PXBvbHlnbG90ZGV2ZWxvcGVyLXVzZXItcHJvY2Vzc2VkLWJ1Y2tldCBkZXN0S2V5PXNhbXBsZS50eHQKV3JpdGluZyB0bzogcG9seWdsb3RkZXZlbG9wZXItdXNlci1wcm9jZXNzZWQtYnVja2V0L3NhbXBsZS50eHQKSnVsIDA5LCAyMDE3IDM6NDQ6MzkgQU0gY29tLmFtYXpvbmF3cy5zZXJ2aWNlcy5zMy5BbWF6b25TM0NsaWVudCBwdXRPYmplY3QKV0FSTklORzogTm8gY29udGVudCBsZW5ndGggc3BlY2lmaWVkIGZvciBzdHJlYW0gZGF0YS4gIFN0cmVhbSBjb250ZW50cyB3aWxsIGJlIGJ1ZmZlcmVkIGluIG1lbW9yeSBhbmQgY291bGQgcmVzdWx0IGluIG91dCBvZiBtZW1vcnkgZXJyb3JzLgpTdWNjZXNzZnVsbHkgcHJvY2Vzc2VkIHBvbHlnbG90ZGV2ZWxvcGVyLXVzZXItYnVja2V0L3NhbXBsZS50eHQgYW5kIHVwbG9hZGVkIHRvIHBvbHlnbG90ZGV2ZWxvcGVyLXVzZXItcHJvY2Vzc2VkLWJ1Y2tldC9zYW1wbGUudHh0CkVORCBSZXF1ZXN0SWQ6IGVjOTc3ZDU4LTY0NTgtMTFlNy04MzMwLWNmZjk2OGRlN2I4NgpSRVBPUlQgUmVxdWVzdElkOiBlYzk3N2Q1OC02NDU4LTExZTctODMzMC1jZmY5NjhkZTdiODYJRHVyYXRpb246IDM0NDMuNDggbXMJQmlsbGVkIER1cmF0aW9uOiAzNTAwIG1zIAlNZW1vcnkgU2l6ZTogMTAyNCBNQglNYXggTWVtb3J5IFVzZWQ6IDEwMSBNQgkK",
    "StatusCode": 200
}

Verify that a file with the same name has been created in the target bucket

aws s3 ls s3://polyglotdeveloper-user-processed-bucket --region us-west-2

Test Lambda invocation using S3 event

Now that the timeout has been increased, test the real S3 trigger end to end: upload a fresh file to the source bucket and validate that it lands in the processed bucket.
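
Re-upload the sample file (any new object in the source bucket will do):

aws s3 cp config/polyglotdeveloper-user-bucket/sunny-college-course.txt s3://polyglotdeveloper-user-bucket/

Then list the processed bucket and confirm the new file appears: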

aws s3 ls s3://polyglotdeveloper-user-processed-bucket --region us-west-2

Great if everything has worked so far! Time to wrap up and delete the resources.

Conclusion

In this tutorial, you learned the following:

  • Lambda can be used as an S3 event consumer.
  • To allow Lambda to call S3, we created an execution role and granted it permission to access S3.
  • To allow Lambda to write logs to CloudWatch, we attached a policy to the role that allows write access to CloudWatch.
  • To allow S3 to invoke Lambda, we added a permission to the Lambda function's access policy granting the source bucket the right to invoke it.

Resource Cleanup

Now it's time to clean up the resources so that you don't get charged unnecessarily.

Delete Lambda Function

aws lambda delete-function \
 --function-name DocumentProcessor \
 --region us-west-2

Delete Roles and policies

aws iam delete-role-policy --role-name DocumentProcessor-lambda-execution-role --policy-name documentprocessor-cloudwatch-permissions-Policy
aws iam delete-role-policy --role-name DocumentProcessor-lambda-execution-role --policy-name documentprocessor-s3-permissions-Policy
aws iam delete-role --role-name DocumentProcessor-lambda-execution-role

Delete Source and Target bucket

aws s3 rm s3://polyglotdeveloper-user-bucket --region us-west-2 --recursive
aws s3 rm s3://polyglotdeveloper-user-processed-bucket --region us-west-2 --recursive

aws s3 rb s3://polyglotdeveloper-user-bucket --region us-west-2
aws s3 rb s3://polyglotdeveloper-user-processed-bucket --region us-west-2
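
Alternatively, rb with the --force flag empties and removes each bucket in one step:

aws s3 rb s3://polyglotdeveloper-user-bucket --region us-west-2 --force
aws s3 rb s3://polyglotdeveloper-user-processed-bucket --region us-west-2 --force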

Common Errors

S3 bucket creation failed with make_bucket failed error

If you get the below error, please change the bucket name to make it unique. If you change the bucket name, remember to use your new name in the subsequent commands.

make_bucket failed: s3://polyglotdeveloper-user-bucket An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Lambda function failing due to timeout

You can increase the timeout (and memory) by running the following command.

aws lambda update-function-configuration \
    --function-name DocumentProcessor  \
    --region us-west-2 \
    --timeout 30 \
    --memory-size 1024

How do I upload the code after further modifications?

aws lambda update-function-code \
--function-name DocumentProcessor \
--region us-west-2 \
--zip-file fileb://documentProcessor/build/distributions/documentProcessor.zip

References

AWS Lambda Developer Guide, "Using AWS Lambda with Amazon S3": http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html

Version History

Date          Description
2017-07-04    Initial Version