AWS Reserved Instances Marketplace Should Also Give Back AWS Credits Not Just Cash

Amazon recently introduced the AWS Reserved Instances Marketplace. The idea is great – allow people to sell reserved instances they no longer need, for whatever reason, instead of losing the reservation money (or, for heavy utilization reservations, the complete running cost of the instance 24 x 7 for the number of years reserved).

Before you can sell a reserved instance you need to set up various details for the bank account to which Amazon will wire the money – however, if you are not located in the US or don’t have a US bank account, you are out of luck. Unfortunately for me, I’m located in Israel with no US bank account.

Instead of messing with various tax issues, I would like to suggest that AWS simply give back AWS credits. That is, if I sell my reserved instance for $100 I should have the option of crediting my AWS account directly with $100, which I can then use on various AWS services.

I know AWS has a mechanism for such a thing, since they give out gift/trial credits all the time. I also know that the Amazon Associates program for referring customers to Amazon can pay you in Amazon gift certificates instead of actual money.

Just a thought that would keep the money inside the AWS ecosystem while making non-US customers happy.

Quick & Dirty API for Accessing Amazon Web Services (AWS) EC2 Pricing Data

Continuing my post about the JSON files used on the Amazon EC2 page, I’ve created a small Python library that also acts as a command line interface to get the data.
The data in the JSON files does not use the same values as the EC2 API for things like region names and instance types, so the library/CLI translates these values to their corresponding values in the EC2 API.
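At its core the translation is just a pair of lookup tables. The entries below are illustrative, based on the values I’ve seen in the JSON files; the library’s actual tables are more complete:

    # Illustrative mapping from pricing-JSON names to EC2 API names.
    # These keys/values are examples only; the real tables cover more
    # regions and instance families.
    JSON_TO_API_REGION = {
        "us-east": "us-east-1",
        "us-west": "us-west-1",
        "eu-ireland": "eu-west-1",
        "apac-sin": "ap-southeast-1",
        "apac-tokyo": "ap-northeast-1",
    }

    JSON_TO_API_INSTANCE = {
        ("stdODI", "sm"): "m1.small",
        ("stdODI", "lg"): "m1.large",
        ("stdODI", "xl"): "m1.xlarge",
        ("uODI", "u"): "t1.micro",
    }

    def to_api_names(json_region, json_type, json_size):
        """Translate pricing-JSON names into their EC2 API equivalents."""
        return (JSON_TO_API_REGION[json_region],
                JSON_TO_API_INSTANCE[(json_type, json_size)])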

You can filter the output by region, instance type and OS type.

The command line output supports a human-readable table format, JSON and CSV.

To use the command line you’ll need to install the following Python libraries:

  • argparse – only if you are using Python < 2.7 (argparse is included in Python 2.7 and 3.x)
  • prettytable – if you want to print the human-readable, good-looking ASCII table output

Both libraries can be installed using pip.
Grab the code from GitHub.

Don’t forget to send feedback! Enjoy!


Amazon Web Services (AWS) EC2 Pricing Data

Have you ever wanted a way to access the Amazon Web Services EC2 pricing data from code?

It seems Amazon uses predefined JSON files on the EC2 page to display the pricing per region, per instance type and per utilization type.

You can easily access these JSON files, load them and use them in your own apps (at least until Amazon changes these URLs); a loading sketch follows the list of links below.

The naming in these JSON files sometimes differs from the naming used in the API. For example, a small instance (m1.small) appears as “size” : “sm” and its type is “stdODI” (“stdResI” for reserved instances).

Below are the links to the relevant files:

  • On Demand Instances
  • Reserved Instances
  • Data Transfer Pricing
  • CloudWatch Pricing
  • Elastic IPs Pricing
  • Elastic Load Balancer (ELB) Pricing
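If you want a quick start, a sketch along these lines loads one of the files and walks its structure. The URL is a placeholder – substitute one of the actual links above – and the structure shown is what the files looked like at the time of writing:

    import json
    import urllib2

    # Placeholder URL -- substitute one of the actual links listed above.
    PRICING_URL = "http://aws.amazon.com/ec2/pricing/pricing-on-demand-instances.json"

    data = json.loads(urllib2.urlopen(PRICING_URL).read())

    # Walk regions -> instance type families -> sizes, printing the Linux price.
    for region in data["config"]["regions"]:
        for instance_type in region["instanceTypes"]:
            for size in instance_type["sizes"]:
                for column in size["valueColumns"]:
                    if column["name"] == "linux":
                        print("%s %s/%s $%s/hr" % (region["region"],
                                                   instance_type["type"],
                                                   size["size"],
                                                   column["prices"]["USD"]))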


Crunch these files and enjoy them while they’re there :-)

My ideal monitoring system

The server/application monitoring field is filled with lots of options these days. The solutions vary greatly in feature set and management approach.

There are various parameters distinguishing between these systems:

  • Hosted (CloudKick, ServerDensity, CloudWatch, RevelCloud and others) vs. Installed (Nagios, Munin, Ganglia, Cacti)
  • Hosted solutions’ pricing plans use varied parameters such as price per server, price per metric, retention policy, number of metrics tracked, real-time-ness, etc.
  • Poll-based – where a collecting server polls the other servers/services – vs. push-based – where a client on each server pushes locally collected data to the logging/monitoring server
  • Allowing custom metrics – not all systems allow monitoring, plotting, sending and alerting on custom data (at least not in an easy manner)

Some of these systems are better suited to some tasks than others, but in general none of them provides a (good) solution for today’s monitoring needs, which span from the operational to the applicative.

My ideal monitoring system for any application that has servers running in the background should have the following features:

  • Hosted – for when it doesn’t make sense for me to run the operational side of this myself
  • Open Source – for when the sweet spot leans towards me taking control of the operations and management of collecting my own statistics with a CLEAR path of migration between the hosted solution and my installed one
  • Suitable for a cloud / virtual server environment – where servers go up and down and each server simply reports its data to the monitoring system without the need to pre-register it with the system. This suggests a small client running on each machine collecting local stats and relaying it to a central system
  • Supports custom metrics – allowing me to add whatever stats I want be it operational (CPU, network bandwidth, disk space) or application related (such as number of sign ups or a specific function run time in milliseconds)
  • Understand more than numbers – not all stats are equal. Some are counters which I just want to say “increment” or “decrement”. Others are single data points that I need to simply store. Others are special data points with a unit of measure (such as “milliseconds”, “Bytes/second”, etc)
  • Locally installed client must handle network failures – if there is a network failure or collecting-server downtime, stats will be stored locally and relayed to the collecting server when it’s available again
  • Locally installed client should collect custom metrics – if I want to send some custom metrics from my app – say, when a user signs up – my code would talk to the locally installed client and that client would relay the data to the collecting server (see the sketch after this list). This ensures minimum configuration, and my app code can assume that there is always a locally installed client which can communicate with the collecting server, be it via UNIX sockets, UDP datagrams, shared memory or anything else that is suitable for the job
  • Data should be query-able – that is, I really want to query and filter on more than just the timeframes of the data and general statistics on it (e.g. group by server, show specific servers, show values higher than X, etc.)
  • Reporting Console – somewhere to plot all these statistics, with embeddable graphs (for those who like building their own dashboards)
  • Built-in near real-time alerts – I want to be able to set alerts that go out in near real time as the data is collected, to a diverse set of outlets, be it Email, Text Messages, Push Notifications, WebHooks (for automating some handling of failures or problems), etc.
  • API – because everything needs it :-)
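To make the local client idea concrete, here is a minimal sketch (every name, port and wire format in it is made up, and a real client would need batching and daemonization): the app fires metrics at a local UDP socket and the client relays them over TCP, buffering while the collector is unreachable:

    import socket
    from collections import deque

    COLLECTOR_ADDR = ("collector.example.com", 9999)  # hypothetical central collector
    LOCAL_ADDR = ("127.0.0.1", 8125)                  # hypothetical local endpoint for the app

    backlog = deque(maxlen=100000)  # bounded buffer to survive collector downtime

    local_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    local_sock.bind(LOCAL_ADDR)
    local_sock.settimeout(1.0)

    def flush_backlog():
        """Relay everything buffered; stop at the first failure and retry later."""
        while backlog:
            try:
                # One connection per item is wasteful but keeps the sketch short.
                conn = socket.create_connection(COLLECTOR_ADDR, timeout=2)
                conn.sendall(backlog[0] + b"\n")
                conn.close()
                backlog.popleft()
            except socket.error:
                return  # collector still down; the data stays in the backlog

    while True:
        try:
            # The app sends e.g. {"name": "signups", "kind": "counter", "value": 1}
            datagram, _ = local_sock.recvfrom(65535)
            backlog.append(datagram)
        except socket.timeout:
            pass
        flush_backlog()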

It is very important to me in almost any hosted SaaS (Software-as-a-Service) solution I use that I will have a clear migration path if (or when) the time comes and I need to host a certain sub-system on my own. Sometimes I do have to compromise and use a system that I may not have the ability to migrate (or at least not easily) but the decision is made consciously.

From an architecture point of view, I would like to see these main building blocks:

  • Storage – Reliable, scalable, can handle lots of writes fast and handle any size of dataset for any reasonable retention period
  • Collectors – clients push data to these collectors, which receive it and pass it on to the processors (a toy sketch follows this list)
  • Processors – Handle incoming data to be written. Aggregate data for quicker reporting.
  • Reporting – something that will enable easy querying and filtering of the data
  • Real-time alert monitoring – handle preconfigured alerts and determine in near real time whether certain conditions are met, issuing the relevant alerts/actions
  • Web Console – for configuration, querying and real-time plotting of data
  • API for querying
  • API for real time plotting – to be used for integration with other apps, embeddable chunks of code, etc.
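As a toy illustration of the collector/processor pair, here is a sketch of a UDP collector that aggregates counter metrics per interval before handing them to storage (the wire format and the print-as-storage stand-in are, of course, made up):

    import json
    import socket
    import time
    from collections import defaultdict

    FLUSH_INTERVAL = 10  # seconds; aggregating keeps the storage write rate low

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))
    sock.settimeout(1.0)

    counters = defaultdict(int)
    last_flush = time.time()

    while True:
        try:
            datagram, addr = sock.recvfrom(65535)
            metric = json.loads(datagram)
            if metric.get("kind") == "counter":
                # Key by (server, metric name) so data stays query-able per server
                counters[(addr[0], metric["name"])] += metric.get("value", 1)
        except (socket.timeout, ValueError):
            pass
        if time.time() - last_flush >= FLUSH_INTERVAL:
            for (server, name), value in counters.items():
                # Stand-in for a write to the storage layer
                print("%d %s %s=%d" % (time.time(), server, name, value))
            counters.clear()
            last_flush = time.time()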

While I’m sure that with a little more thought more requirements could be added, or some of these merged and minimized, this set of features would create a system a lot of people would love to use and feel comfortable using.

Would you use such a system? Do you have anything else to add to the feature set?

AWS SNS Retry Policy – New Feature – Push Based Messaging Instead of SQS Polling

The good guys at Amazon Web Services just announced a new feature in Simple Notifications Service (SNS) which allows setting a retry policy for SNS notifications.

Up until now SNS had an unknown publish retry policy (maybe even a nonexistent one). I always suspected it had some logic for different subscription types (Email, HTTP, Text, SQS), but it was never mentioned anywhere.


The new retry policy feature allows you to define the number of retries and the wait period between them (even whether it’s a linear or exponential wait!) as well as set a throttling policy, so that if your server is currently down it won’t get flooded with notifications once it’s back up.
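With boto, setting such a policy on a topic would look roughly like this. The attribute fields follow the SNS DeliveryPolicy documentation, but the values and the topic ARN are just illustrative:

    import json
    import boto.sns

    policy = {
        "http": {
            "defaultHealthyRetryPolicy": {
                "minDelayTarget": 20,       # seconds before the first retry
                "maxDelayTarget": 600,      # cap on the delay between retries
                "numRetries": 10,
                "backoffFunction": "exponential",
            },
            "defaultThrottlePolicy": {
                "maxReceivesPerSecond": 5   # don't flood the server once it's back up
            },
        }
    }

    conn = boto.sns.connect_to_region("us-east-1")
    conn.set_topic_attributes(
        "arn:aws:sns:us-east-1:123456789012:my-topic",  # placeholder ARN
        "DeliveryPolicy",
        json.dumps(policy),
    )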

This allows for some very interesting patterns. Most notable is a push-based messaging mechanism in which, instead of writing a dedicated process to poll an SQS queue, you use SNS as a sort of ad-hoc push queue that will post the messages to an HTTP/S URL. Setting a reasonable retry policy and throttling policy will also ensure that if your server is down, messages won’t get lost.

I posted a while back a suggestion for a hack which utilized SNS as a notification mechanism to start polling SQS; however, now that SNS has a retry policy it’s a good candidate for letting you handle your async tasks using your regular HTTP servers, with all the goodies of logging, multi-threading, debugging, etc.

Before you run to start implementing push-based messaging (or re-implement Google App Engine’s Task Queue push queue API), there are certain things which are yet unknown and/or require further consideration:

  • SQS has a 15-day storage policy, so you have up to 15 days to fix a bug or set up a system that will empty a queue. With the new SNS retry policy you may be able to reach a similarly long period; however, the maximum values you can set in the policy are not yet known and may pose a limit.
  • I am not aware of, and couldn’t find, any documentation relating to the HTTP status code of a message pushed to an HTTP/S subscriber (other than for answering the subscription request). How do you tell SNS that a push to an HTTP subscriber failed due to an HTTP error? Will SNS consider a non-200 HTTP status a failed request and apply the retry policy?
  • What happens if a message pushed to an HTTP/S subscriber takes a long time to process due to load or any other reason? When will SNS decide that the request failed due to a timeout?

I hope some of these questions get cleared up so that SNS can become a viable push-based messaging mechanism.

I previously recommended always publishing via SNS (even if it’s just to an SQS queue) to get the added benefits of easier debugging (subscribe an SQS queue AND an email address or HTTP endpoint to debug, etc.). The new feature only proves that SNS is becoming a very interesting building block to use inside and/or outside of AWS.


The cool example of SaaS for developers by @mza and @jeffbarr

In a recent post on the AWS blog, Jeff Barr and Matt Wood showed the architecture and code they wrote to list the most interesting AWS-related jobs from the Amazon Jobs site.

It serves as a rather good example of how service components such as the ones AWS provides (SNS, SQS and S3, to name a few that can be used even from outside AWS) make a great set of building blocks that let you focus on writing the code you really need to write.

I found the auto scaling policy for spinning machines up and down just to tweet a bit of overkill at first (and Jeff could have easily added the code on the same instance running the cron job), but thinking about it a bit more and considering the various pricing strategies, it actually makes a lot of sense.

Membase Cluster instead of ElastiCache in 5 minutes

Want to have an ElastiCache-like service in all regions, not just US-EAST?
Want to utilize the new reserved instances utilization model to lower your costs?
Want to have your cache persistent and easily backup and restore it?
Want to add (or remove) servers from your cache cluster with no down time?

We have just the solution for you: a simple CloudFormation template that creates a Membase cluster and gives you all of the ElastiCache benefits and a lot more, including:

  • Support for ALL regions (not just US-EAST – apparently Amazon beat me to the punch, adding support in US West (N. California), EU West (Dublin), Asia Pacific (Singapore) and Asia Pacific (Tokyo))
  • Support for reserved instances including the new utilization based model
  • Supports adding and removing servers to/from the cluster with no downtime and automatic rebalancing of keys among the cluster’s servers
  • Supports persistence (if you wish)
  • Supports multi-tenancy and SASL authentication
  • Supports micro instances (not recommended, but good for low volume environments and testing environments)
  • Easily backup and restore
  • Install the cluster in multiple availability zones and regions for maximum survivability and have all of them talk to each other
  • When using a vBucket-aware memcached client or running through a Moxi proxy, changes in topology are communicated automatically with no need for code or configuration changes!
  • No need for a single address (the CNAME you get from ElastiCache), because if you are using a vBucket-aware client (or going through a Moxi proxy) topology changes are communicated to your client.

Based on the CloudFormation scripts created by Couchbase Labs, this script is more generic and utilizes most of the available RAM (80%) for each instance type.

Notes:

  • All instances run the latest Amazon Linux (2011-09)
  • 64-bit instances use instance-store for storage
  • t1.micro uses a 32-bit AMI to maximize usable RAM (we don’t want those 64-bit pointers eating our RAM)
  • m1.small is 32-bit and, while it does have instance-store support, we wanted one formation script to rule them all, so it uses an EBS-backed AMI.
  • The CloudFormation script changes the names of the instances to their public DNS names so they are available from anywhere in the world, and from any Availability Zone and Region in AWS, so you can have replication across zones and regions

Security (VERY IMPORTANT!):

  • The default script will create a security group for you which allows access to ALL of the servers.
  • If you created a default bucket via the script, that bucket, which uses port 11211, is open to everyone. Make sure to protect it, or delete it and create a bucket with SASL protection.
  • In general, if you are creating a non-SASL-protected bucket, make sure to protect the port by updating the security group!

Download:

Instructions:

  1. Grab one of the existing templates from the GitHub repository or run gen-pack.py to generate a template with any number of instances in it.
  2. Go to the CloudFormation Console (or create the stack from code – see the sketch after these steps)
  3. Click on “Create New Stack”
  4. Name your stack
  5. Select “Upload a Template File”
  6. Click “Browse” and select the template file (use one of the defaults named “membase-pack-X”)
  7. Click “Continue”
  8. In the “Specify Parameters” step, the minimum parameters you are required to fill in are:
    • RESTPassword – the password for the REST API and management console
    • KeyName – the name of the KeyPair you are using when creating instances (usually the name of the .pem file without the “.pem” suffix)
    • Instance Type – choose the size of the instance (t1.micro by default)
  9. Click “Continue”
  10. Sit back and relax. CloudFormation will do the rest of the work for you
  11. Select the “Outputs” tab (click refresh if it doesn’t show anything). It should contain the URL of the Membase management console
  12. Clap your hands with joy!
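If you’d rather skip the console clicking, something like this boto sketch should create the same stack. The template file name and parameter keys mirror the ones above, but double-check them against the actual template (in particular, the “InstanceType” key is assumed from the console label):

    import boto.cloudformation

    conn = boto.cloudformation.connect_to_region("us-east-1")

    conn.create_stack(
        "my-membase-cluster",                                  # stack name
        template_body=open("membase-pack-4.template").read(),  # one of the default templates
        parameters=[
            ("RESTPassword", "s3cr3t"),    # REST API / management console password
            ("KeyName", "my-keypair"),     # your .pem name without the suffix
            ("InstanceType", "t1.micro"),  # parameter key assumed from the console label
        ],
    )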

Script to update Beanstalkd work queue statistics in CloudWatch

I’ve written a small Python script which uses the excellent boto library to monitor Beanstalkd server statistics as well as the statistics of a specific beanstalkd tube (queue).

You can get the code here: https://github.com/erans/beanstalkdcloudwatch

The easiest way to run it is via a cron job. I run it every minute to monitor the “reserved” and “buried” states of a few of the tubes I use (if you want to read more about how beanstalkd works, I suggest reading the protocol document).
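The gist of the script, sketched here with the beanstalkc client and boto (the tube name and CloudWatch namespace are illustrative; the real script in the repository above is more complete):

    import beanstalkc
    import boto

    beanstalk = beanstalkc.Connection(host="localhost", port=11300)
    stats = beanstalk.stats_tube("my-tube")  # illustrative tube name

    cloudwatch = boto.connect_cloudwatch()
    for key in ("current-jobs-reserved", "current-jobs-buried"):
        # Push each tube statistic as a custom CloudWatch metric
        cloudwatch.put_metric_data(
            namespace="Beanstalkd",       # illustrative custom namespace
            name=key,
            value=stats[key],
            unit="Count",
            dimensions={"Tube": "my-tube"},
        )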

I highly recommend checking beanstalkd out if you need a queue for offloading tasks to other processes/machines. It’s really geared towards that task and has a couple of dedicated features, such as the buried state, time-to-live and time-to-run, which make managing this really painless.

AWS Elastic (accidental) Load Balancer Man-in-the-middle Attack

I just read a post on Slashdot about a poor guy getting a huge chunk of Netflix traffic to his server.

The problem seems to have been caused by the nature of IP addresses in EC2, which are quite fluid and get reassigned as machines spin up and down. The same goes for Elastic Load Balancers (ELB), which are managed by Amazon and may switch IP addresses as well (that’s why you are asked to map a CNAME record to the ELB’s hostname instead of to its IP).

In the Slashdot post, there is a link to this article, which describes the problem and lists some possible implications and possible ways of avoiding leaking data such as passwords and session ids when such a problem occurs.

The article mostly talks about what happens if someone hijacks your ELB, but the original problem reported was accidentally receiving someone else’s traffic. This can lead to some other severe consequences:

  • Your servers crashing (in which case you should probably notice that rather quickly. Duh!)
  • If you are running some kind of content site that depends on SEO and crawlers pick up the wrong IP, you might end up with a HUGE SEO penalty because another site’s content will be crawled on your domain

There is a very simple and quick solution for the problem I am describing above: make sure you configure your web server to answer only to YOUR hostnames. Your servers will return responses ONLY for a preconfigured set of hostnames, so if you get Netflix traffic, which carries a netflix.com Host header, your server will reject it immediately.

You can easily configure that in Nginx or Apache, or in a caching proxy such as Varnish or Squid.
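The right place for this check is your web server or proxy configuration, but to show the idea in code, here is a tiny WSGI middleware sketch that rejects requests whose Host header isn’t in your preconfigured set (the hostnames are placeholders):

    ALLOWED_HOSTS = set(["example.com", "www.example.com"])  # YOUR hostnames

    def host_filter(app):
        """Wrap a WSGI app so it answers only for known Host headers."""
        def wrapped(environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0].lower()
            if host not in ALLOWED_HOSTS:
                # Someone else's traffic (e.g. a Host of netflix.com) is rejected
                start_response("403 Forbidden", [("Content-Length", "0")])
                return []
            return app(environ, start_response)
        return wrapped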

A better solution for this problem is to add hostname checks support to ELB itself. I’ve posted a feature request on the AWS EC2 forum with the hopes that it will get implemented.

URL Considerations When Using Amazon CloudFront Origin Pull

CloudFront is a great, cost-effective Content Delivery Network (CDN). When it first started it only supported files located on Amazon’s Simple Storage Service (S3); in November 2010 Amazon released the “Origin Pull” feature, which allows defining a CDN distribution that pulls content directly from a preconfigured site (a preconfigured hostname) instead of from S3.

The benefits of using the Origin Pull feature include:

  • No need to sync an S3 bucket with your static resources (CSS, images, JavaScript files)
  • You can serve dynamically generated content (like modified images or text files) via the CDN without pre-generating it and putting it inside an S3 bucket.

One of the problems that may occur when introducing any caching mechanism is the need to invalidate all or part of the data. CloudFront provides an invalidation API; however, it has various limitations:
  • You need to call it on each object
  • First 1,000 requests are free, each additional one will cost $0.005.
  • It may take up to 15 minutes for the cache to actually clear from all edge locations

There are some techniques to avoid calling the invalidation API by using versioned URLs.

What are versioned URLs?

A versioned URL contains a version part, e.g. “http://cdn.example.com/1.0/myimage.jpg”. The version part doesn’t affect the content of the URL, but since the URL to the resource is different, systems using the URL as a cache key will treat URLs with two different versions as two different resources.
It’s a nice trick to use when you want to quickly invalidate URLs and make a client pull a different/modified version of a resource.

Versioned URLs granularity

You can determine the granularity of the version value to suit your needs. The granularity determines whether you can invalidate as little as one file or as much as every file served via the origin pull in your application.

Common granularity levels are:
  • A value determined by the build version (i.e. invalidate all static CSS, JS and images on every new build deployed)
  • A value in the configuration, updated automatically or manually, to invalidate part or all of the objects
  • An automatically generated value per file, determined from the file content using a hash function (see the sketch below)
  • An automatically generated value per file, determined by its last modification date
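For the content-hash option, deriving the version token can be as simple as this sketch. It places the version right after the host; the examples in the next section place it deeper in the path, and either layout works as long as your rewrite rule strips it:

    import hashlib

    def versioned_url(base_url, url_path, local_file):
        """Build a URL whose version part is derived from the file's content."""
        digest = hashlib.md5(open(local_file, "rb").read()).hexdigest()[:8]
        return "%s/v%s%s" % (base_url, digest, url_path)

    # versioned_url("http://cdn.example.com", "/css/myfile.css", "static/css/myfile.css")
    # -> e.g. http://cdn.example.com/v1a2b3c4d/css/myfile.css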

CloudFront will disregard URL query string versioning

Amazon CloudFront (and quite a few other CDN providers) disregards the query string of a URL (the part after the question mark), whether the content is served from an S3 bucket or via an origin pull. This means you will have to rewrite your URLs to contain the version part inside the URL path itself. For example:
  • CloudFront will disregard a versioned URL of the following format and consider both URLs the same resource:
    • http://cdn.example.com/css/myfile.css?v123
    • http://cdn.example.com/css/myfile.css?v333
  • CloudFront will consider these 2 URLs 2 different resources:
    • http://cdn.example.com/css/v123/myfile.css
    • http://cdn.example.com/css/v333/myfile.css

You can easily use the Apache rewrite module or Nginx URL rewriting to rewrite the URL http://cdn.example.com/css/v123/myfile.css to http://cdn.example.com/css/myfile.css.
Some common web frameworks put the versioning part in the query string. Be mindful of that, and change the code appropriately to place the version part somewhere in the URL path (before the question mark).

I would recommend using CloudFront, or any other CDN supporting origin pull, in any project, as it will significantly reduce the loading time of your pages at minimal cost and reduce the load on your servers. It’s a great, quick and easy way to make your site (or even your API) work much better.