Using Google Cloud SQL from Go with GORM in Google Container Engine and Google Compute Engine

One of the things I find a bit difficult with Cloud SQL is, surprisingly, the very first thing you need to do – connect.

Being able to connect securely to your Cloud SQL instance can be a bit challenging – not just in Go, but in Python as well.

In Go, I found conflicting suggestions and examples, and after much trial and error I arrived at a reasonable solution that works well.

After much searching I found the Cloud SQL Proxy, which takes all of the IP whitelisting and SSL mess away by using OAuth APIs directly to obtain access to the instance.

While my example works specifically with GORM, which expects a single database connection string, there are also examples of using plain database/sql syntax with the Cloud SQL Proxy.

It looks simple enough, but without importing cloudsql-proxy it wouldn’t have been possible to use the “user:password@cloudsql(project-id:region:instance-name)/db” connection string style that is scattered around the net.
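As a minimal sketch (project, user and database names are placeholders), the connection boils down to building that DSN and opening GORM with it; the cloudsql-proxy dialer import is what teaches the MySQL driver the “cloudsql” network:

```go
package main

import "fmt"

// A real connection additionally needs these imports (not compiled in this
// sketch):
//
//   _ "github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql"
//   "github.com/jinzhu/gorm"
//
// Importing the cloudsql-proxy MySQL dialer registers the "cloudsql" network
// with the MySQL driver, which is what makes the DSN below valid.

// buildCloudSQLDSN assembles the DSN in the style the post refers to:
// user:password@cloudsql(project-id:region:instance-name)/db
func buildCloudSQLDSN(user, password, instance, db string) string {
	return fmt.Sprintf("%s:%s@cloudsql(%s)/%s", user, password, instance, db)
}

func main() {
	dsn := buildCloudSQLDSN("appuser", "secret", "my-project:us-central1:my-db", "mydb")
	fmt.Println(dsn)
	// With the dialer import in place you would then open GORM as usual:
	// db, err := gorm.Open("mysql", dsn)
}
```

The nice part is that GORM itself stays completely unaware of the proxy – it just sees a MySQL connection string.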

3 Ways to Avoid Google AppEngine Lock-In


Google AppEngine is a Platform-as-a-Service (PaaS) offering that’s been around since April 2008.

It’s very easy to build a web app on it or use it as a backend for a mobile native app helping you focus on the code itself instead of the infrastructure to run it.

AppEngine’s (almost) infinite scalability is gained, among other things, by using AppEngine in combination with other services such as the DataStore – a schemaless NoSQL document database built for high performance.

All of this makes AppEngine very compelling and easy to use. However, it comes with a price – vendor lock-in. If you are locked into the various AppEngine platform APIs and services, it’s going to be very hard to leave.

There are some options such as AppScale – an open source implementation of the AppEngine stack – but it’s still limited to running your apps in a certain way – the AppEngine way.

If you are starting a new app or service and are considering AppEngine, there are a few tips that can help you migrate off of AppEngine, if you choose to, with relative ease.

1. Choose a Web Platform That Can Run Outside AppEngine Easily

Regardless of the language you choose to develop in, AppEngine encourages you to use its web platform. While in most newer languages that HTTP platform is fairly app agnostic, in some – like Python – it’s better to use something more robust.

Instead of using webapp2 you can use Flask to develop on AppEngine. Flask is a very active web application framework with lots of modules and add-ons that do anything from database admin scaffolding to authentication.

If your web framework is platform agnostic, you can always run your app outside AppEngine on your own servers (such as Google Compute Engine – GCE – or Amazon’s Elastic Compute Cloud – EC2).
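One reason Flask travels well is that a Flask app is, underneath, a plain WSGI application, and WSGI is the portable part. A minimal stdlib-only sketch (names here are illustrative, not any particular framework’s API):

```python
def app(environ, start_response):
    # A bare WSGI app: the same callable runs on AppEngine, GCE, EC2, or
    # under any WSGI server (gunicorn, uWSGI, the stdlib server, ...).
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from a portable app\n"]

# To serve it locally with the standard library:
# from wsgiref.simple_server import make_server
# make_server("", 8080, app).serve_forever()
```

Because the hosting interface is just this callable, moving between platforms is a deployment change rather than a code change.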

2. Encapsulate AppEngine or Google only APIs

Try to avoid using AppEngine-specific or Google-only APIs – they will be hard to replicate exactly elsewhere.

If you have to, make sure to encapsulate the Google API code with your own so you will only have to update code in very specific places. For example, encapsulate the use of Memcache in AppEngine so you can easily move to your own memcached servers if needed.
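A sketch of what that encapsulation might look like in Python (all names here are hypothetical – the point is that only the backend class knows which cache is underneath):

```python
class DictCacheBackend:
    """In-memory stand-in; swap in google.appengine.api.memcache or a
    memcached client without touching the application code."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


class Cache:
    """The only cache interface the application code ever sees."""

    def __init__(self, backend):
        self._backend = backend

    def get(self, key):
        return self._backend.get(key)

    def set(self, key, value):
        self._backend.set(key, value)


# Application code depends on Cache, never on the concrete backend.
cache = Cache(DictCacheBackend())
cache.set("user:42", {"name": "Alice"})
```

Migrating off AppEngine then means writing one new backend class, not hunting down every cache call in the app.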

3. Avoid Using Google Cloud DataStore

While Google DataStore is a very stable and useful schemaless NoSQL database, it is also AppEngine’s biggest vendor lock-in, since there is no exact replacement for its API.

Instead of using DataStore, consider using Google Cloud SQL – Google’s hosted MySQL database. If you like the way data is mapped to objects in Google’s Cloud DataStore, you can use various ORM solutions in multiple languages, such as SQLAlchemy in Python or GORM in Go.

Another option would be to host your own database, such as Postgres or MongoDB, on Google Compute Engine (GCE). You can use the Google Cloud Launcher for a one-click installation of Postgres, MongoDB, Cassandra, etc.

Using a 3rd party common database will allow you to easily migrate off of AppEngine if needed.



Google Cloud Network Load Balancer Thinks Server Is Unhealthy

Recently I bumped twice into an issue with the Google Cloud Network Load Balancer in which it decided a server was unhealthy, regardless of whether a health check was configured or not.
The first time I cloned the machine and started a new one – which obviously fixed the issue.

The second time, I decided to try a bit harder.

I’ve bumped into this old bug detailing a race condition in google-address-manager.

google-address-manager is part of the Google Daemon, a set of processes that run inside images running on Google Compute Engine.

If google-address-manager is down, Network Load Balancing may not work at all, marking your instance as unhealthy even though it is working well.

If that happens, you can restart the service by issuing:

sudo service google-address-manager restart

or:

sudo /etc/init.d/google-address-manager restart

This should make sure Network Load Balancing will work as it should. Also note that without this service, SSH keys will not get provisioned.

While I do not know why the process crashed (I couldn’t find any reasonable log), I suspect it had something to do with low disk space on that drive.

Google Compute Engine Email Forwarding Solution

Have an app that sends Email and runs on Google Compute Engine (GCE)? Can’t use a locally installed mail server to send emails because Google is blocking sending Emails through the GCE infrastructure?

The solution is simple. Use a 3rd party Email service such as Mandrill, Mailgun, Sendgrid, etc.

Don’t want to change your code and still use a locally installed mail server to forward emails but it doesn’t work?

The solution is simple:

  • Install postfix
  • Configure postfix to forward using port 2525
  • Make sure to enable SASL authentication if the Email service you are using supports it (most do)

I have a server running Celery and I wanted to configure it to send Emails on errors. Celery can only work with a regular SMTP server, so I had to install a local one and make sure it forwards to Mandrill, our Email service of choice.

I configured it to forward to port 2525 and configured SASL authentication with our username and password.

# enable SASL authentication
smtp_sasl_auth_enable = yes
# tell Postfix where the credentials are stored
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
# use STARTTLS for encryption
smtp_use_tls = yes
relayhost = []:2525
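The sasl_passwd map referenced above pairs the relay with its credentials; a minimal version looks like this (relay host and credentials are placeholders – use the ones from your Email service):

```
# /etc/postfix/sasl_passwd
# format: [relayhost]:port    username:password
[smtp.example.com]:2525    myusername:mypassword
```

After editing the file, run postmap /etc/postfix/sasl_passwd to rebuild the hash map referenced by smtp_sasl_password_maps, then restart postfix so it picks up the change.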

Works like a charm!

MongoDB ReplicaSet Backup in the Cloud

A MongoDB replica set is a great way to handle scalability and redundancy. In the age of the cloud, nodes are added to and removed from a replica set easily and quickly, and in most cases all are created from the same image.

So how can we make sure that we are always running the backup from a non-master replica set node?

Below is a small script that will only run the backup on a non-master replica set node.

It will also archive and compress the backup and upload it to a Google Cloud Storage bucket. You can easily modify the last part to upload the file to an AWS S3 bucket using s3cp or s3cmd.

This is a template that works best for a typical small replica set – two nodes and an arbiter. You install it on both nodes and schedule it with cron, and it will only run on the non-master one. Even if you flip the master role between the servers, the script will still work without changing a thing.
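A minimal sketch of that script (the bucket name and temp paths are assumptions – adjust them for your setup): it asks the local mongod whether it is the master, and only dumps, compresses and uploads when it isn’t.

```shell
#!/bin/sh
# Replica-set-aware MongoDB backup sketch.
# Bucket name and paths below are assumptions - change them as needed.

BUCKET="gs://my-backup-bucket"
STAMP=$(date +%Y%m%d-%H%M%S)
DUMP_DIR="/tmp/mongodump-$STAMP"
ARCHIVE="/tmp/mongodb-backup-$STAMP.tar.gz"

# Succeeds (returns 0) only when the local mongod is NOT the master.
not_master() {
    [ "$(mongo --quiet --eval 'db.isMaster().ismaster')" != "true" ]
}

run_backup() {
    not_master || { echo "master node - skipping backup"; return 0; }
    mongodump --out "$DUMP_DIR" &&
        tar -czf "$ARCHIVE" -C "$DUMP_DIR" . &&
        gsutil cp "$ARCHIVE" "$BUCKET/" &&
        rm -rf "$DUMP_DIR" "$ARCHIVE"
}

# Only attempt the backup where the mongo tools are actually installed.
if command -v mongo >/dev/null 2>&1; then
    run_backup || echo "backup attempt failed"
fi
```

Drop it into cron on both nodes; the master check makes it a no-op on whichever node currently holds the master role. Swapping the gsutil line for s3cp or s3cmd gives you the S3 variant.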

A simple and elegant solution if I may say so myself 🙂

SSL Termination for Google Compute Engine (GCE) Load Balancer

I’ve recently been working on moving some apps that I have from Amazon Web Services (AWS) to Google Compute Engine (GCE) to test the service as well as learn the differences.

One of the things that I had to use was SSL termination on the load balancer. AWS’s Elastic Load Balancer (ELB) has supported SSL termination on the load balancer side for quite a while now.

Out of the box, GCE’s load balancer does not support SSL termination at the load balancer level; however, you can forward TCP port 443 (the port used by HTTPS) to the instances and have each instance do the SSL termination itself.

While it will add some extra CPU load to decrypt the traffic, it’s a reasonable solution that is relatively easy to deploy via any of the popular web servers (Nginx, Apache, etc.).
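As a sketch, per-instance termination with Nginx looks roughly like this (certificate paths, server name and backend port are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Certificate and key paths are placeholders
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Hand the decrypted traffic to the local app server
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The load balancer keeps forwarding raw TCP on port 443, and each instance decrypts the traffic and proxies it to the application locally.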



AWS Reserved Instances Marketplace Should Also Give Back AWS Credits Not Just Cash

Amazon recently introduced the AWS Reserved Instances Marketplace. The idea is great – allow people to sell reserved instances they don’t need, for whatever reason, instead of losing the reservation money (or, if you are on heavy utilization, the complete running cost of the instance – 24 x 7 x the number of years you reserved).

Before you can sell a reserved instance you need to set up the various details of the account to which Amazon will wire the money – however, if you are not located in the US or don’t have a US bank account, you are out of luck. Unfortunately for me, I’m located in Israel with no US bank account.

Instead of messing with various taxation issues, I would like to suggest that AWS simply give back AWS credits. That is, if I sell my reserved instance for $100, I should have the option of directly crediting my AWS account with $100, which I can then use on various AWS services.

I know AWS has a mechanism for such a thing, since they give out gift/trial credits all the time. I also know that the Amazon Associates program for referring customers to Amazon can give you back Amazon gift certificates instead of actual money.

Just a thought that would keep the money inside the AWS ecosystem while making non-US customers happy.

Quick & Dirty API for Accessing Amazon Web Services (AWS) EC2 Pricing Data

Continuing my post about the JSON files used on the Amazon EC2 page, I’ve created a small Python library that also acts as a command line interface for getting the data.
The data in the JSON files does not use the same values as the EC2 API for things like region names and instance types, so the library/CLI translates these values to their corresponding EC2 API values.

You can filter the output by region, instance type and OS type.

The command line output supports a human readable table format, JSON, and CSV.

To use the command line you’ll need to install the following Python libraries:

  • argparse – only if you are using Python < 2.7 (argparse is included in Python 2.7 and 3.x)
  • prettytable – if you want to print the human readable and pretty good looking ASCII table output

Both libraries can be installed using pip.
Grab the code from GitHub.

Don’t forget to send feedback! Enjoy!


Amazon Web Services (AWS) EC2 Pricing Data

Have you ever wanted a way to access the Amazon Web Services EC2 pricing data from code?

It seems Amazon uses predefined JSON files on the EC2 page to display the pricing per region, per instance type, and per type of utilization.

You can easily access these JSON files, load them, and use them in your own apps (at least until Amazon changes these URLs).

The naming in these JSON files is sometimes different from the naming used in the API. For example, a small instance (m1.small) appears as “size” : “sm”, and its type is “stdODI” (or “stdResI” for reserved instances).
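A tiny sketch of the translation this implies (only the “sm” mapping comes from the example above; a real table would cover every size):

```python
# Map pricing-JSON "size" values to EC2 API instance types.
# Only "sm" is taken from the example in the post; extend for other sizes.
SIZE_TO_API = {
    "sm": "m1.small",
}

def to_api_instance_type(size):
    """Return the EC2 API instance type for a pricing-JSON size value.

    Unknown values pass through unchanged so new sizes don't break callers.
    """
    return SIZE_TO_API.get(size, size)

print(to_api_instance_type("sm"))  # m1.small
```

Keeping the mapping in one dictionary means that when Amazon adds or renames a size, only this table needs updating.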

Below are the links to the relevant files:

On Demand Instances

Reserved Instances

Data Transfer Pricing

Cloud Watch Pricing

Elastic IPs Pricing

Elastic Load Balancer (ELB) Pricing


Crunch these files and enjoy them while they’re there 🙂

My ideal monitoring system

The server/application monitoring field is filled with lots of options these days. The solutions vary greatly in feature set and management approach.

There are various parameters distinguishing between these systems:

  • Hosted (CloudKick, ServerDensity, CloudWatch, RevelCloud and others) vs. Installed (Nagios, Munin, Ganglia, Cacti)
  • Hosted solutions’ pricing plans use varied parameters such as price/server, price/metric, retention policy, number of metrics tracked, real-time-ness, etc.
  • Poll based – where a collecting server polls the other servers/services – vs. push based – where a client on each server pushes locally collected data to the logging/monitoring server
  • Allowing custom metrics – not all systems allow monitoring, plotting, sending and alerting on custom data (at least not in an easy manner)

Some of these systems are better suited to certain tasks than others, but in general none of them provides a (good) solution for today’s monitoring needs, which span from the operational to the applicative.

My ideal monitoring system for any application that has servers running in the background should have the following features:

  • Hosted – for when it doesn’t make sense for me to run the operations of this
  • Open Source – for when the sweet spot leans towards me taking control of the operations and management of collecting my own statistics with a CLEAR path of migration between the hosted solution and my installed one
  • Suitable for a cloud / virtual server environment – where servers go up and down and each server simply reports its data to the monitoring system without the need to pre-register it with the system. This suggests a small client running on each machine collecting local stats and relaying it to a central system
  • Supports custom metrics – allowing me to add whatever stats I want, be they operational (CPU, network bandwidth, disk space) or application related (such as the number of sign ups or a specific function’s run time in milliseconds)
  • Understands more than numbers – not all stats are equal. Some are counters which I just want to “increment” or “decrement”. Others are single data points that I simply need to store. Others are special data points with a unit of measure (such as “milliseconds”, “Bytes/second”, etc.)
  • Locally installed client must handle network failures – if there is a network failure or collecting server downtime, stats will be stored locally and relayed to the collecting server when it’s available again
  • Locally installed client should collect custom metrics – if I want to send some custom metrics from my app – say when a user signs up – my code would talk with the locally installed client and that client will relay the data to the collecting server. This ensures minimum configuration and my app code can assume that there is always a locally installed client which can communicate with the collecting server be it via UNIX sockets, UDP datagram, shared memory or anything else that is suitable for the job
  • Data should be query-able – that is, I really want to query and filter more than just the timeframes of the data and general statistics on it (i.e. group by server, show specific servers, show values higher than X, etc)
  • Reporting console – somewhere to plot all these statistics, with embeddable graphs (for those who like building their own dashboards)
  • Built-in near real-time alerts – I want to be able to set alerts that go out near real time when collecting the data to a diverse set of outlets be it Email, Text Messages, Push Notifications, WebHook (for automating some auto handling of failures or problems), etc.
  • API – because everything needs it 🙂
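The locally installed client described in the points above could be sketched like this (all names are hypothetical – the interesting part is that metrics survive a collector outage):

```python
class BufferingMetricsClient:
    """Buffers metric points locally and relays them to a collector,
    keeping unsent data across network failures."""

    def __init__(self, send):
        self._send = send      # callable relaying a batch; may raise OSError
        self._buffer = []

    def record(self, name, value, unit=None):
        # Single data point, optionally with a unit of measure.
        self._buffer.append({"name": name, "value": value, "unit": unit})

    def increment(self, name):
        # Counter-style metric.
        self.record(name, 1, unit="count")

    def flush(self):
        """Relay buffered points; keep them locally if the network is down."""
        if not self._buffer:
            return True
        try:
            self._send(list(self._buffer))
        except OSError:
            return False       # collector unreachable - retry on next flush
        self._buffer = []
        return True
```

Application code only ever talks to this local client; whether the relay to the collector happens over UNIX sockets, UDP or anything else is hidden behind the send callable.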

It is very important to me in almost any hosted SaaS (Software-as-a-Service) solution I use that I will have a clear migration path if (or when) the time comes and I need to host a certain sub-system on my own. Sometimes I do have to compromise and use a system that I may not have the ability to migrate (or at least not easily) but the decision is made consciously.

From an architecture point of view, I would like to see these main building blocks:

  • Storage – Reliable, scalable, can handle lots of writes fast and handle any size of dataset for any reasonable retention period
  • Collectors – clients push data to these collectors, which receive it and pass it on to the processors
  • Processors – Handle incoming data to be written. Aggregate data for quicker reporting.
  • Reporting – something that will enable easy querying and filtering of the data
  • Real time alert monitoring – handle preconfigured alerts, determining in near real time whether certain conditions are met, and issue the relevant alerts/actions
  • Web Console – for configuration, querying and real-time plotting of data
  • API for querying
  • API for real time plotting – to be used for integration with other apps, embeddable chunks of code, etc.

While I’m sure that with a little more thought more requirements could be added, or some of these merged and minimized, this feature set would create a system a lot of people would love to use and feel comfortable using.

Would you use such a system? Do you have anything else to add to the feature set?