Go Cloud Development Kit

In this post, I will talk about an exciting project maintained by the team that develops the Go language: the Go Cloud Development Kit, also known as the Go CDK.

The Go CDK provides a set of abstractions over features commonly used by applications that run in the cloud, such as databases, storage, messaging, and secrets. The project’s primary goal in creating these abstractions is to make your code cloud-vendor independent. Rather than tying your code to one solution, say, AWS S3, with the Go CDK you could easily switch to another vendor like Google Cloud Storage.

But you might be wondering something like:

OK, nice. But in practice, I will hardly ever change providers. So why is it worth using something like this?

I can see some advantages of using the Go CDK:

  • Test writing. With these abstractions, it is effortless to use in-memory storage in tests while using the actual cloud provider in production.
  • Different environments. We can use a cheaper provider in a test/staging environment and a more robust and expensive one in production.
  • Evolution. Your application may start with a simpler solution, say SQS for pub/sub, and as load and complexity grow, you can revisit that decision and move to Kafka.

For example, let’s look at the following code:

package main

import (
	"context"
	"fmt"
	"log"

	"gocloud.dev/blob"
	"gocloud.dev/blob/memblob"
)

func main() {
	ctx := context.Background()
	// Create an in-memory bucket.
	bucket := memblob.OpenBucket(nil)
	defer bucket.Close()

	key := "my-key"
	text := []byte("hello world")

	// Now we can use bucket to read or write files to the bucket.
	err := write(ctx, bucket, key, text)
	if err != nil {
		log.Fatal(err)
	}

	data, err := read(ctx, bucket, key)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}

func write(ctx context.Context, bucket *blob.Bucket, key string, text []byte) error {
	return bucket.WriteAll(ctx, key, text, nil)
}

func read(ctx context.Context, bucket *blob.Bucket, key string) ([]byte, error) {
	return bucket.ReadAll(ctx, key)
}

In the code, we are writing to and reading from a document stored in an in-memory bucket. To switch to S3 instead, we only need to change the snippet below in the main function:

// Create a default AWS session...
sess, err := session.NewSession(&aws.Config{
	Region: aws.String("us-west-1"),
})

// ...or create one from a profile configured on your machine.
sess, err = session.NewSessionWithOptions(session.Options{
	Profile: "profile_name_configured_in_your_machine",
	Config: aws.Config{
		Region: aws.String("us-east-1"),
	},
})
if err != nil {
	log.Fatal(err)
}

bucket, err := s3blob.OpenBucket(ctx, sess, "post-go-cdk", nil)
if err != nil {
	log.Fatal(err)
}
defer bucket.Close()

We are setting up the S3 session and opening a bucket on that provider. The rest of the code doesn’t change: we keep using the same read and write functions.

Currently, the project has abstractions for:

  • Blob, i.e., file storage. It supports Google Cloud Storage, S3, Azure Blob Storage, and local storage.
  • Docstore, i.e., document databases, with support for Google Cloud Firestore, Amazon DynamoDB, Azure Cosmos DB, MongoDB, and in-memory storage.
  • MySQL/PostgreSQL, supporting local databases, GCP Cloud SQL, AWS RDS, and Azure Database.
  • Pub/Sub. Perhaps the most complete, with support for Google Cloud Pub/Sub, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Azure Service Bus, RabbitMQ, NATS, Kafka, and an in-memory implementation.
  • And the list goes on.

And in addition to code, the official website has an area with some important concepts.

Despite still being in its early stages (the latest release at the time of writing is 0.28), the project is active and evolving, in addition to being maintained by the Go team itself. That’s why it’s worth investing in it to abstract away the complexities I mentioned in this post.