The easiest way to run it is via a cron job. I run it every minute to monitor the “reserved” and “buried” states of a few of the tubes I use (if you want to read more about how beanstalkd works, I suggest reading the protocol document).
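A monitoring script can issue the stats-tube command and parse the YAML-like response beanstalkd returns. A minimal sketch in Python (the tube name and threshold are illustrative, and the raw response text is assumed to have already been read from the socket or a client library):

```python
def parse_stats(yaml_text):
    """Parse beanstalkd's simple YAML stats response into a dict of strings."""
    stats = {}
    for line in yaml_text.splitlines():
        if ':' not in line:
            continue  # skips the leading "---" document marker
        key, _, value = line.partition(':')
        stats[key.strip()] = value.strip()
    return stats

def tube_needs_attention(stats, buried_threshold=0):
    """Alert when buried jobs pile up beyond the threshold."""
    buried = int(stats.get('current-jobs-buried', 0))
    return buried > buried_threshold

# Example response for a hypothetical tube:
response = "---\nname: images\ncurrent-jobs-reserved: 1\ncurrent-jobs-buried: 2\n"
stats = parse_stats(response)
print(tube_needs_attention(stats))  # prints: True
```

From cron, the script would run every minute and send an alert (email, pager, etc.) whenever it returns an unhealthy state.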
I highly recommend checking beanstalkd out if you need a queue for offloading tasks to other processes/machines. It’s really geared towards that task and has a couple of dedicated features, such as the buried state, time-to-live and time-to-run, which make managing this really painless.
CloudFront is a great, cost-effective Content Delivery Network (CDN). When it first started it only supported files located on Amazon’s Simple Storage Service (S3), and in November 2010 Amazon released the “Origin Pull” feature. Origin Pull allows defining a CDN distribution that pulls content directly from a preconfigured site (preconfigured hostname) instead of pulling the content from S3.
The benefits of using the Origin Pull feature include:
You can serve dynamically generated content (like modified images or text files) via the CDN without pre-generating it and putting it inside an S3 bucket.
One of the problems that may occur when introducing any caching mechanism is the need to invalidate all or parts of the data. CloudFront provides an invalidation API; however, it has various limitations, such as:
You need to call it on each object
The first 1,000 requests are free; each additional one will cost $0.005.
It may take up to 15 minutes for the cache to actually clear from all edge locations
There are some techniques to avoid calling the invalidation API by using versioned URLs.
What are versioned URLs?
A versioned URL contains a version part in it, i.e. “http://cdn.example.com/1.0/myimage.jpg”. The version part doesn’t affect the content of the URL, but since the URL to the resource is different, systems using the URL as a key for caching will treat URLs with two different versions as two different resources.
It’s a nice trick to use when you want to quickly invalidate URLs and make a client pull a different/modified version of a resource.
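Building such a URL is trivial; a minimal sketch (the function name and arguments are illustrative):

```python
def versioned_url(base, path, version):
    """Insert a version segment between the CDN host and the resource path."""
    return "%s/%s/%s" % (base.rstrip('/'), version, path.lstrip('/'))

print(versioned_url("http://cdn.example.com", "myimage.jpg", "1.0"))
# prints: http://cdn.example.com/1.0/myimage.jpg
```

Bumping the version string to "1.1" produces a different URL, so every CDN edge location treats it as a brand-new resource and pulls a fresh copy from the origin.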
Versioned URLs granularity
You can determine the granularity of the version value to suit your needs. The granularity will allow you to invalidate as little as one file, or every file served via the origin pull in your application.
Common granularity levels are:
A value determined by the build version (i.e. invalidate all static CSS, JS and images on every new build deployed)
A value in the configuration, updated automatically or manually to invalidate parts or all of the objects
An automatically generated value per file determined by the file content by utilizing a hash function
An automatically generated value per file determined by its last modification date
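The last two granularity levels can be computed automatically. A minimal sketch, assuming the static files are readable from the local disk (the function names are mine):

```python
import hashlib
import os

def version_from_content(path):
    """Per-file version derived from the file's content via a hash function.
    Changes only when the file's bytes change."""
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()[:8]

def version_from_mtime(path):
    """Per-file version derived from the file's last modification time.
    Changes whenever the file is rewritten, even with identical content."""
    return str(int(os.path.getmtime(path)))
```

The content-hash variant is the safer of the two: redeploying an unchanged file keeps the same URL, so caches stay warm, while any actual change invalidates it immediately.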
CloudFront will disregard URL query string versioning
Amazon CloudFront (and quite a few other CDN providers) disregard the query string value of a URL (the part after the question mark), whether it is served from an S3 bucket or via an origin pull. This means you will have to rewrite your URLs to contain the version part inside the URL itself. For example:
CloudFront will disregard a version placed in the query string and consider both of these URLs the same resource:

http://cdn.example.com/css/myfile.css?v=123
http://cdn.example.com/css/myfile.css?v=124

CloudFront will consider these two URLs two different resources:

http://cdn.example.com/css/v123/myfile.css
http://cdn.example.com/css/v124/myfile.css
You can easily use Apache Rewrite module or Nginx URL rewriting to quickly rewrite the URL http://cdn.example.com/css/v123/myfile.css to http://cdn.example.com/css/myfile.css.
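For Nginx, a rewrite along these lines would do it (a sketch, assuming version segments look like “v123”; adjust the pattern and location to your own layout):

```nginx
# Strip the version segment so the origin serves the real, unversioned file.
# /css/v123/myfile.css -> /css/myfile.css
location /css/ {
    rewrite ^/css/v\d+/(.*)$ /css/$1 break;
}
```

The version segment only needs to exist in the URL the client (and CloudFront) sees; the origin throws it away before hitting the filesystem.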
Some common web frameworks put the versioning part in the query string. Be mindful of that and change the code appropriately to place the version part somewhere in the URL path (before the question mark).
I would recommend using CloudFront or any other CDN supporting origin pull in any project as it will significantly reduce the loading time of your pages with minimal cost and reduce the load on your servers. It’s a great, quick and easy way to make your site (or even API) work much better.
Thinking a bit more about my last post, I’ve come to the conclusion that unless there is a really good excuse, one should always submit items to Amazon’s Simple Queue Service (SQS) by publishing to a topic in Simple Notification Service (SNS).
In the simplest case, you’ll publish to a topic and that topic will have a single subscriber that will post the message to the queue.
However, since SNS allows multiple subscribers you can get a couple of features free of charge without changing a single line of code. For example you can:
Temporarily add an email address to get the messages going to the queue via email for easy debugging (you can easily and quickly unsubscribe via the link at the end of the email)
Add additional logging by adding an HTTP/S subscriber that receives the message and performs some logging on it
Notify other monitoring systems that a certain process has started
I know that from now on I’ll think really hard about whether I really need to publish directly to a queue instead of going through SNS.