AWS SNS Retry Policy – New Feature – Push Based Messaging Instead of SQS Polling

The good guys at Amazon Web Services just announced a new feature in Simple Notification Service (SNS) which allows setting a retry policy for SNS notifications.

Until now SNS had an unknown publish retry policy (and possibly none at all). I always suspected it had some logic for the different subscription types (Email, HTTP, Text, SQS), but it was never mentioned anywhere.

[Photo: “PUSH” by Ed Russel]

The new retry policy feature allows you to define the number of retries and the wait period between them (including linear or exponential backoff!), as well as set a throttling policy so that if your server is currently down it won’t get flooded with notifications once it’s back up.
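As a rough sketch of what that looks like in practice (using today’s boto3 SDK; the topic ARN and the numbers are placeholders, and the exact policy field names should be double-checked against the SNS documentation), the retry and throttle settings are written as a JSON DeliveryPolicy attribute on the topic:

```python
import json
import boto3  # assumes AWS credentials are already configured

sns = boto3.client("sns", region_name="us-east-1")

# A hypothetical policy: 5 retries with exponential backoff between 20s and
# 600s, plus a throttle of at most 2 deliveries/second to a recovering endpoint.
delivery_policy = {
    "http": {
        "defaultHealthyRetryPolicy": {
            "numRetries": 5,
            "minDelayTarget": 20,
            "maxDelayTarget": 600,
            "backoffFunction": "exponential",
        },
        "defaultThrottlePolicy": {
            "maxReceivesPerSecond": 2,
        },
    }
}

sns.set_topic_attributes(
    TopicArn="arn:aws:sns:us-east-1:123456789012:async-tasks",  # placeholder ARN
    AttributeName="DeliveryPolicy",
    AttributeValue=json.dumps(delivery_policy),
)
```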

This allows for some very interesting patterns. Most notable is a push-based messaging mechanism: instead of writing a dedicated process to poll an SQS queue, you can use SNS as a sort of ad-hoc push queue that will post the messages to an HTTP/S URL. Setting a reasonable retry policy and throttling policy will also ensure that if your server is down, messages won’t get lost.
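To make that concrete, here is a minimal sketch (boto3, with a placeholder topic name and endpoint URL) of “SNS as the queue”: subscribe your HTTP/S handler once, then publish wherever you would previously have sent an SQS message.

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# "async-tasks" and the endpoint URL are placeholders for your own names.
topic_arn = sns.create_topic(Name="async-tasks")["TopicArn"]

# SNS will POST a SubscriptionConfirmation to this URL; the endpoint must
# visit the SubscribeURL it contains before notifications start flowing.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://workers.example.com/sns/tasks",
)

# "Enqueueing" a task is now just a publish call.
sns.publish(
    TopicArn=topic_arn,
    Message='{"task": "resize_image", "key": "uploads/42.jpg"}',
)
```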

A while back I posted a suggestion for a hack which utilized SNS as a notification mechanism to start polling SQS; now that SNS has a retry policy, it’s a good candidate for handling your async tasks with your regular HTTP servers, with all the goodies of logging, multi-threading, debugging, etc.

Before you rush off to implement push-based messaging (or re-implement Google App Engine’s Task Queue push queue API), there are certain things which are still unknown and/or require further consideration:

  • SQS has a 15-day storage policy, so you have up to 15 days to fix a bug or set up a system that will empty a queue. The new SNS retry policy may let you reach a similarly long period; however, the maximum values you can set in the policy are not yet known and may pose a limit.
  • I am not aware of, and couldn’t find, any documentation relating to the HTTP status code of a message pushed to an HTTP/S subscriber (other than for answering the subscription request). How can the subscriber tell SNS that a pushed message failed due to an HTTP error? Will SNS consider a non-200 HTTP status a failed delivery and apply the retry policy?
  • What happens if a message pushed to an HTTP/S subscriber takes a long time to process due to load or any other reason? When does SNS decide that the request failed due to a timeout? (One defensive way to sidestep both questions is sketched right after this list.)
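Until those details are documented, a defensive pattern is to confirm the subscription, acknowledge every notification with a fast 200, and do the real work out of band. A minimal sketch, assuming Flask (the route and handler names are placeholders; the header and field names are the ones SNS documents for HTTP/S delivery):

```python
import json
import threading
import urllib.request

from flask import Flask, request

app = Flask(__name__)

def handle_task(message):
    # Placeholder for the actual work; it runs outside the request cycle so the
    # 200 response goes back to SNS immediately, however long the task takes.
    pass

@app.route("/sns/tasks", methods=["POST"])
def sns_endpoint():
    body = json.loads(request.data)
    message_type = request.headers.get("x-amz-sns-message-type")

    if message_type == "SubscriptionConfirmation":
        # Visiting the SubscribeURL is how an HTTP/S endpoint confirms itself.
        urllib.request.urlopen(body["SubscribeURL"])
    elif message_type == "Notification":
        # Hand off and return right away, so a slow task isn't mistaken for a
        # failed delivery by whatever timeout SNS applies.
        threading.Thread(target=handle_task, args=(body["Message"],)).start()

    return "", 200
```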
I hope some of these questions will get cleared up so that SNS can become a viable push-based messaging mechanism.
I previously recommended always publishing via SNS (even if it just feeds an SQS queue), simply to get the added benefits of easier debugging (subscribe an SQS queue AND send an email or HTTP request to debug, etc.). The new feature only proves that SNS is becoming a very interesting building block to use inside and/or outside of AWS.


The cool example of SaaS for developers by @mza and @jeffbarr

In a recent post on the AWS blog, Jeff Barr and Matt Wood showed the architecture and code they wrote to list the most interesting AWS-related jobs from the Amazon Jobs site.

It serves as a rather good example of how service components such as the ones AWS provides (SNS, SQS, and S3, to name a few that can be used even outside of AWS) make a great set of building blocks that can easily help you focus on writing the code you really need to write.

At first I found the auto scaling policy for spinning machines up and down just to tweet a bit of an overkill (Jeff could have easily run the code on the same instance running the cron job); however, after thinking about it a bit more and considering the various pricing strategies, it actually makes a lot of sense.

Membase Cluster instead of ElastiCache in 5 minutes

Want to have an ElastiCache-like service in all regions, not just US-EAST?
Want to utilize the new reserved instance utilization model to lower your costs?
Want to have your cache persistent and easily back it up and restore it?
Want to add (or remove) servers from your cache cluster with no downtime?

We have just the solution for you: a simple CloudFormation template to create a Membase cluster which gives you all of the ElastiCache benefits and a lot more, including:

  • Support for ALL regions, not just US-EAST (apparently Amazon beat me to the punch, with ElastiCache support in US West (N. California), EU West (Dublin), Asia Pacific (Singapore), and Asia Pacific (Tokyo))
  • Support for reserved instances including the new utilization based model
  • Supports adding and removing servers from the cluster with no downtime, with automatic rebalancing of keys among the cluster’s servers
  • Supports persistence (if you wish)
  • Supports multi-tenancy and SASL authentication
  • Supports micro instances (not recommended, but good for low volume environments and testing environments)
  • Easily backup and restore
  • Install the cluster in multiple availability zones and regions for maximum survivability and have all of them talk to each other
  • When using a vBucket-aware memcached client or running through a Moxi proxy, changes in topology are communicated automatically with no need for code or configuration changes!
  • No need for a single address (the CNAME you get from ElastiCache), because a vBucket-aware client (or the Moxi proxy) is told about topology changes directly (a short client sketch appears just below).
Based on the CloudFormation scripts created by Couchbase Labs, this script is more generic and utilizes most of the available RAM (80%) on each instance type.
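For illustration, this is roughly what the application side looks like when going through a local Moxi proxy (the host, port, key, and value here are assumptions; any standard memcached client behaves the same way):

```python
import memcache  # python-memcached; any standard memcached client will do

# The application only ever talks to one address -- the local Moxi proxy --
# and Moxi keeps track of the cluster topology behind it.
client = memcache.Client(["127.0.0.1:11211"])

client.set("session:42", "some cached value", time=300)  # 5-minute TTL
print(client.get("session:42"))
```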

Notes:

  • All instances run the latest Amazon Linux (2011-09)
  • 64-bit instances use instance-store for storage
  • t1.micro uses a 32-bit AMI to utilize the maximum amount of RAM (we don’t want those 64-bit pointers eating our RAM)
  • m1.small is 32-bit, and while it does have instance-store support, we wanted one CloudFormation script to rule them all, so it uses an EBS-backed AMI
  • The CloudFormation script changes the names of the instances to their public DNS names, so they are reachable from anywhere in the world and from any Availability Zone and Region in AWS, letting you set up replication between clusters

Security (VERY IMPORTANT!):

  • The default script will create a security group for you which allows access to ALL of the servers.
  • If you created a default bucket via the script, that bucket, which uses port 11211, is open to everyone. Make sure to protect it, or delete it and create a bucket with SASL protection.
  • In general, if you are creating a non-SASL-protected bucket, make sure to protect the port by updating the security group (a short sketch follows)!
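As a sketch of that last point (the security group name and CIDR below are placeholders; use the group the template created for you and your own application subnet), locking the bucket port down to your servers looks roughly like this with boto3:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

GROUP = "membase-cluster-sg"   # placeholder: the group created by the template
APP_CIDR = "10.0.0.0/16"       # placeholder: your application servers' range
BUCKET_PORT = {"IpProtocol": "tcp", "FromPort": 11211, "ToPort": 11211}

# Drop the wide-open rule on the bucket port (if the template created one)...
ec2.revoke_security_group_ingress(
    GroupName=GROUP,
    IpPermissions=[{**BUCKET_PORT, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# ...and allow it only from your own machines instead.
ec2.authorize_security_group_ingress(
    GroupName=GROUP,
    IpPermissions=[{**BUCKET_PORT, "IpRanges": [{"CidrIp": APP_CIDR}]}],
)
```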

Download:

Instructions:

  1. Grab one of the existing templates from the GitHub repository, or run gen-pack.py to generate a template with any number of instances in it (if you’d rather script the whole thing instead of clicking through the console, see the API sketch after these steps).
  2. Go to the CloudFormation Console
  3. Click on “Create New Stack”
  4. Name your stack
  5. Select “Upload a Template File”
  6. Click “Browse” and select the template file (use one of the defaults named “membase-pack-X”)
  7. Click “Continue”
  8. In the “Specify Parameters” step, the minimum parameters you are required to fill in are:
    • RESTPassword – the password for the REST API and management console
    • KeyName – the name of the KeyPair you are using when creating instances (usually the name of the .pem file without the “.pem” suffix)
    • Instance Type – choose the size of the instance (t1.micro by default)
  9. Click “Continue”
  10. Sit back and relax. CloudFormation will do the rest of the work for you
  11. Select the “Outputs” tab (click refresh if it doesn’t show anything). It should contain the URL of the Membase management console
  12. Clap your hands with joy!
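If you prefer to script it rather than click through the console, the equivalent of steps 2–11 is roughly the following boto3 call (the template file name, stack name, and the “InstanceType” parameter key are assumptions; check the template’s Parameters section for the exact names):

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Placeholder: one of the membase-pack-X template files from the repository.
with open("membase-pack-2.template") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="membase-cluster",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "RESTPassword", "ParameterValue": "change-me"},
        {"ParameterKey": "KeyName",      "ParameterValue": "my-keypair"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t1.micro"},
    ],
)

# Once the stack reaches CREATE_COMPLETE, the management console URL shows up
# in its outputs (the same "Outputs" tab mentioned in step 11).
stack = cfn.describe_stacks(StackName="membase-cluster")["Stacks"][0]
for output in stack.get("Outputs", []):
    print(output["OutputKey"], output["OutputValue"])
```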