DonaldRauscher.com

A Blog About D4T4 & M47H

Doc2Vec + Dask + K8s for the Toxic Comment Classification Challenge

22 March ’18

The goal of this Kaggle challenge was to build a model to flag toxic Wikipedia comments. The training dataset included 159,571 Wikipedia comments which were labeled by human raters. Each comment was evaluated on 6 dimensions: toxic, severe toxic, obscene, threat, insult, and identity hate.

Model Approach

This challenge is a great application for Doc2Vec, where we treat each of the toxicity dimensions as a label. For Doc2Vec, I used the gensim package. I also used gensim's Phraser for combining words into common phrases. To put everything in a sklearn pipeline, I needed to create sklearn transformers/estimators for each step.

My final model was a two-model blend of Doc2Vec and TF-IDF + LR. For the LR model, I used the nifty OneVsRestClassifier to build models for each of the 6 y-variables.
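Conceptually, the blend standardizes each model's predicted probabilities and then takes a weighted average. Here's a minimal sketch of that idea (the shapes, weights, and random inputs are purely illustrative; the actual implementation is the Blender class in the notebook below):

import numpy as np
from sklearn.preprocessing import StandardScaler

# stand-ins for each model's predicted probabilities (n_comments x 6 labels)
p_d2v = np.random.rand(5, 6)
p_lr = np.random.rand(5, 6)

# put the two models' scores on a comparable scale, then take a weighted average
blend = 0.3 * StandardScaler().fit_transform(p_d2v) + 0.7 * StandardScaler().fit_transform(p_lr)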

Hyperparameter Tuning

I tuned each input model individually and then tuned the blend. I used Dask.distributed, specifically the dask-searchcv package, to parallelize my hyperparameter tuning step. One of the big advantages of the dask-searchcv implementations of GridSearchCV and RandomizedSearchCV is that they avoid repeated work: estimators with identical parameters and inputs will only be fit once! In my example, I tested the following grid for my TF-IDF + LR model:

param_grid = {
  'cv__lowercase': [True, False],
  'cv__ngram_range': [(1, 1), (1, 2)],
  'tfidf__norm': ['l1', 'l2', None],
  'lr__estimator__C': [0.01, 0.1],
  'lr__estimator__penalty': ['l1', 'l2']
}

Even though this parameter grid has 48 combinations (2 × 2 × 3 × 2 × 2), GridSearchCV will only run the CountVectorizer step 4 times (it depends only on the 2 × 2 cv__ parameters), the TF-IDF step 12 times (4 CountVectorizer fits × 3 values of tfidf__norm), etc. Much more efficient!
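For reference, kicking off the search on the Dask cluster looks roughly like this (a trimmed-down version of the hyperparameter_tune wrapper in the notebook below; it assumes the lr_pipeline, param_grid, and training data defined there):

import distributed
from dask_ml.model_selection import GridSearchCV

# the client picks up the scheduler from the DASK_SCHEDULER_ADDRESS env variable
client_dask = distributed.Client()

tuner = GridSearchCV(lr_pipeline, param_grid, scheduler=client_dask,
                     scoring='roc_auc', cv=3, refit=False)
tuner.fit(xdata_train, ydata_train)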

Here's a snapshot of the Dask web UI during hyperparameter tuning:

Dask Cluster

I set up my Dask cluster using Kubernetes. And, of course, there was a very useful Helm chart for this already. This Helm chart sets up a Dask scheduler + web UI, Dask worker(s), and a Jupyter Notebook instance. When installing the Helm chart, you can use an accompanying values.yaml file to specify which Python packages you need to install. I also used Terraform to create/scale my K8s cluster.
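As an illustration of what that values.yaml might look like (the key names here are assumptions and depend on the chart version; the Dask images typically pick up extra packages via an EXTRA_PIP_PACKAGES environment variable):

# values.yaml (illustrative keys)
worker:
  replicas: 3
  env:
    - name: EXTRA_PIP_PACKAGES
      value: gensim dask-ml google-cloud-storage

jupyter:
  env:
    - name: EXTRA_PIP_PACKAGES
      value: gensim dask-ml google-cloud-storage

You would then point helm install at the chart with -f values.yaml.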

I created a modified version of this Dask Helm chart which adds a nodeSelector option for each of the deployments. In K8s, we can create two node pools: one for the worker pods and one for the Jupyter/scheduler pods. That way, when we want to add/remove workers, we can do so without taking down Jupyter!
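The added option is just a standard Kubernetes nodeSelector on each Deployment's pod template, keyed on the GKE node pool label. A sketch of what it looks like for the worker Deployment (the pool name is illustrative):

# dask-worker deployment (excerpt)
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: dask-workers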

I set up three scripts for initializing the cluster, scaling up the number of nodes/workers, and destroying the cluster when we're done.
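The scale-up script boils down to a Terraform apply that adds nodes to the worker pool, followed by a kubectl scale on the worker Deployment. A rough sketch (the variable, pool, and deployment names are illustrative):

# scale.sh (sketch)
terraform apply -var project=${PROJECT_ID} -var worker_nodes=5
kubectl scale deployment dask-worker --replicas=10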

Note: The helm init --wait command will wait until Tiller is running and ready to receive requests, which is very useful for CI/CD workflows. You will need to be running Helm v2.8.2 (the most recent release as of this post) to use this.

Notebook

import pandas as pd
import numpy as np
import yaml, re

from google.cloud import storage
from io import BytesIO

from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.models.phrases import Phrases, Phraser

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import TransformerMixin, BaseEstimator, clone
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer, strip_tags
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS

from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.utils.validation import check_is_fitted

import distributed
from dask_ml.model_selection import GridSearchCV as GridSearchCVBase
# load the data
client_gcs = storage.Client()
bucket = client_gcs.get_bucket('djr-data')

def gcs_to_df(f):
    blob = bucket.blob(f)
    buf = BytesIO()
    blob.download_to_file(buf)
    buf.seek(0)
    return pd.read_csv(buf, encoding = "utf-8")

df_train = gcs_to_df("kaggle-jigsaw/train.csv")
df_test = gcs_to_df("kaggle-jigsaw/test.csv")
yvar = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
# initialize client for interacting with dask
# DASK_SCHEDULER_ADDRESS env variable specifies scheduler ip
client_dask = distributed.Client()
# correlation matrix
df_train[yvar].corr()
               toxic     severe_toxic  obscene   threat    insult    identity_hate
toxic          1.000000  0.308619      0.676515  0.157058  0.647518  0.266009
severe_toxic   0.308619  1.000000      0.403014  0.123601  0.375807  0.201600
obscene        0.676515  0.403014      1.000000  0.141179  0.741272  0.286867
threat         0.157058  0.123601      0.141179  1.000000  0.150022  0.115128
insult         0.647518  0.375807      0.741272  0.150022  1.000000  0.337736
identity_hate  0.266009  0.201600      0.286867  0.115128  0.337736  1.000000
df_train[yvar].apply(np.mean, axis=0)
toxic            0.095844
severe_toxic     0.009996
obscene          0.052948
threat           0.002996
insult           0.049364
identity_hate    0.008805
dtype: float64
# train/test split
xdata = df_train.comment_text
ydata = df_train[yvar]
xdata_train, xdata_eval, ydata_train, ydata_eval = train_test_split(xdata, ydata, test_size = 0.2, random_state = 1)
# return words from corpus
# TODO: also try r"([\w][\w']*\w)"
def tokenize(doc, token=r"(?u)\b\w\w+\b"):
    doc = strip_tags(doc.lower())
    doc = re.compile(r"\s\s+").sub(" ", doc)
    words = re.compile(token).findall(doc)
    return words


# remove stop words
def remove_stop_words(x, stop_words=ENGLISH_STOP_WORDS):
    return [i for i in x if i not in stop_words]
# wrapper for gensim Phraser
COMMON_TERMS = ["of", "with", "without", "and", "or", "the", "a"]
class PhraseTransformer(TransformerMixin, BaseEstimator):

    def __init__(self, common_terms=COMMON_TERMS):
        self.phraser = None
        self.common_terms = common_terms

    def fit(self, X, y=None):
        phrases = Phrases(X, common_terms=self.common_terms)
        self.phraser = Phraser(phrases)
        return self

    def transform(self, X):
        return X.apply(lambda x: self.phraser[x])
# for making tagged documents
# NOTE: can't use FunctionTransformer since TransformerMixin doesn't pass y to fit_transform anymore
class MakeTaggedDocuments(BaseEstimator):

    def fit(self, X, y):
        return self

    def transform(self, X, y=None):
        if y is not None:
            yvar = list(y.columns)
            tags = y.apply(lambda row: [i for i,j in zip(yvar, row) if j == 1], axis=1)
            return [TaggedDocument(words=w, tags=t) for w,t in zip(X, tags)]
        else:
            return [TaggedDocument(words=w, tags=[]) for w in X]

    def fit_transform(self, X, y):
        return self.transform(X, y)
# wrapper for gensim Doc2Vec
class D2VEstimator(BaseEstimator):

    def __init__(self, min_count=10, alpha=0.025, min_alpha=0.0001, vector_size=200, dm=0, epochs=20):
        self.min_count = min_count
        self.alpha = alpha
        self.min_alpha = min_alpha
        self.vector_size = vector_size
        self.dm = dm
        self.epochs = epochs
        self.yvar = None
        self.model = Doc2Vec(seed=1, hs=1, negative=0, dbow_words=0,
                             min_count=self.min_count, alpha=self.alpha, min_alpha=self.min_alpha,
                             vector_size=self.vector_size, dm=self.dm, epochs=self.epochs)

    def get_tags(self, doc):
        vec = self.model.infer_vector(doc.words, self.model.alpha, self.model.min_alpha, self.model.epochs)
        return dict(self.model.docvecs.most_similar([vec]))

    def fit(self, X, y=None):
        self.model.build_vocab(X)
        self.model.train(X, epochs=self.model.epochs, total_examples=self.model.corpus_count)
        self.model.delete_temporary_training_data()
        self.yvar = list(y.columns)
        return self

    def predict_proba(self, X):
        pred = [self.get_tags(d) for d in X]
        pred = pd.DataFrame.from_records(data=pred)
        return pred[self.yvar]
# blend predictions from multiple models
class Blender(FeatureUnion):

    def __init__(self, transformer_list, n_jobs=1, transformer_weights=None):
        self.transformer_list = transformer_list
        self.scaler_list = [(t, StandardScaler()) for t, _ in transformer_list]
        self.n_jobs = n_jobs
        default_transformer_weights = list(np.ones(len(transformer_list)) / len(transformer_list))
        self.transformer_weights = transformer_weights if transformer_weights else default_transformer_weights

    @property
    def transformer_weights(self):
        return self._transformer_weights

    @transformer_weights.setter
    def transformer_weights(self, values):
        self._transformer_weights = {t[0]:v for t,v in zip(self.transformer_list, values)}

    # don't need to check for fit and transform
    def _validate_transformers(self):
        pass

    # iterator with scalers
    def _iter_ss(self):
        get_weight = (self.transformer_weights or {}).get
        return [(t[0], t[1], s[1], get_weight(t[0])) for t, s in zip(self.transformer_list, self.scaler_list)]

    # also fit scalers
    def fit(self, X, y):
        super(Blender, self).fit(X, y)
        self.scaler_list = [(name, ss.fit(trans.predict_proba(X))) for name, trans, ss, _ in self._iter_ss()]
        return self

    # generate probabilities
    def predict_proba(self, X):
        Xs = [ss.transform(trans.predict_proba(X))*weight for name, trans, ss, weight in self._iter_ss()]
        return np.sum(Xs, axis=0)
# create pipeline
d2v_pipeline = Pipeline(steps=[
    ('tk', FunctionTransformer(func=lambda x: x.apply(tokenize), validate=False)),
    ('ph', PhraseTransformer()),
    ('sw', FunctionTransformer(func=lambda x: x.apply(remove_stop_words), validate=False)),
    ('doc', MakeTaggedDocuments()),
    ('d2v', D2VEstimator())
])

lr_pipeline = Pipeline(steps=[
    ('cv', CountVectorizer(min_df=5, max_features=50000, strip_accents='unicode',
                           stop_words='english', analyzer='word')),
    ('tfidf', TfidfTransformer(sublinear_tf=True, use_idf=True)),
    ('lr', OneVsRestClassifier(LogisticRegression(class_weight="balanced")))
])

pipeline = Blender(transformer_list=[('d2v', d2v_pipeline), ('lr', lr_pipeline)])
# for non-multimetric, don't require refit = True for best_params_ / best_score_
class GridSearchCV(GridSearchCVBase):

    # For multiple metric evaluation, refit is a string denoting the scorer that should be
    # used to find the best parameters for refitting the estimator
    @property
    def scorer_key(self):
        return self.refit if self.multimetric_ else 'score'

    @property
    def best_index(self):
        check_is_fitted(self, 'cv_results_')
        return np.flatnonzero(self.cv_results_['rank_test_{}'.format(self.scorer_key)] == 1)[0]

    @property
    def best_params_(self):
        return self.cv_results_['params'][self.best_index]

    @property
    def best_score_(self):
        return self.cv_results_['mean_test_{}'.format(self.scorer_key)][self.best_index]
# some functions for dealing with parameter grids
def add_prefix(prefix, x):
    return {'{}__{}'.format(prefix, k):v for k,v in x.items()}

def flatten_dict(x):
    temp = {}
    for k,v in x.items():
        if isinstance(v, dict):
            temp.update(add_prefix(k, flatten_dict(v.copy())))
        else:
            temp.update({k: v})
    return temp
# hyperparameter tuning
param_grid = {
    'd2v': {
        'd2v__min_count': [10, 25],
        'd2v__alpha': [0.025, 0.05],
        'd2v__epochs': [10, 20, 30],
        'd2v__vector_size': [200, 300]        
    },
    'lr': {
        'cv__lowercase': [True, False],
        'cv__ngram_range': [(1, 1), (1, 2)],
        'tfidf__norm': ['l1', 'l2', None],
        'lr__estimator__C': [0.01, 0.1],
        'lr__estimator__penalty': ['l1', 'l2']        
    },
    'blender': {
        'transformer_weights': [(0.3, 0.7), (0.4, 0.6), (0.5, 0.5), (0.6, 0.4), (0.7, 0.3)]        
    }
}

# wrapper for hyperparameter tuning
def hyperparameter_tune(pipeline, param_grid):
    # create tuner
    tuner = GridSearchCV(pipeline, param_grid, scheduler=client_dask, scoring='roc_auc',
                         cv=3, refit=False, return_train_score=False)

    # determine optimal hyperparameters
    tuner.fit(xdata_train, ydata_train)
    print('Best params: %s' % (str(tuner.best_params_)))
    print('Best params score: %s' % (str(tuner.best_score_)))

    return tuner.best_params_

# load saved hyperparameters if available; o.w. tune
try:
    with open('model_param_d2v.yaml', 'r') as f:
        param_optimal = yaml.load(f)

except IOError:
    param_optimal = {}

    # tune each model
    param_optimal['d2v'] = hyperparameter_tune(d2v_pipeline, param_grid['d2v'])
    param_optimal['lr'] = hyperparameter_tune(lr_pipeline, param_grid['lr'])

    # tune blender
    d2v_pipeline.set_params(**param_optimal['d2v'])
    lr_pipeline.set_params(**param_optimal['lr'])
    param_optimal.update(hyperparameter_tune(pipeline, param_grid['blender']))

    # flatten
    param_optimal = flatten_dict(param_optimal)

    # save best params
    with open('model_param_d2v.yaml', 'w') as f:
        yaml.dump(param_optimal, f)
Best params: {'d2v__alpha': 0.025, 'd2v__epochs': 30, 'd2v__min_count': 10, 'd2v__vector_size': 200}
Best params score: 0.9520673206887134
Best params: {'cv__lowercase': True, 'cv__ngram_range': (1, 1), 'lr__estimator__C': 0.1, 'lr__estimator__penalty': 'l2', 'tfidf__norm': 'l2'}
Best params score: 0.9764642394949188
Best params: {'transformer_weights': (0.3, 0.7)}
Best params score: 0.9774035665175447
# build model with optimal param
pipeline.set_params(**param_optimal)
pipeline.fit(xdata_train, ydata_train)
Blender(n_jobs=1,
    transformer_list=[('d2v', Pipeline(memory=None,
     steps=[('tk', FunctionTransformer(accept_sparse=False,
          func=<function <lambda> at 0x7f39416d12f0>, inv_kw_args=None,
          inverse_func=None, kw_args=None, pass_y='deprecated',
          validate=False)), ('ph', PhraseTransformer(com...ne,
          solver='liblinear', tol=0.0001, verbose=0, warm_start=False),
          n_jobs=1))]))],
    transformer_weights={'d2v': 0.3, 'lr': 0.7})
# apply to eval set
ydata_eval_pred = pipeline.predict_proba(xdata_eval)
# calculate auc
auc = [roc_auc_score(ydata_eval[y], ydata_eval_pred[:,i]) for i,y in enumerate(yvar)]
print('Model AUCs: %s' % auc)
print('Avg AUC: %s' % np.mean(auc))
Model AUCs: [0.9662283198414882, 0.9857095145804597, 0.982421955124849, 0.9849362663053255, 0.9757783792333873, 0.9768901227451926]
Avg AUC: 0.9786607596384505
# generate final model
pipeline_final = clone(pipeline)
pipeline_final.set_params(**param_optimal)
pipeline_final.fit(xdata, ydata)
Blender(n_jobs=1,
    transformer_list=[('d2v', Pipeline(memory=None,
     steps=[('tk', FunctionTransformer(accept_sparse=False,
          func=<function <lambda> at 0x7f39416d12f0>, inv_kw_args=None,
          inverse_func=None, kw_args=None, pass_y='deprecated',
          validate=False)), ('ph', PhraseTransformer(com...ne,
          solver='liblinear', tol=0.0001, verbose=0, warm_start=False),
          n_jobs=1))]))],
    transformer_weights={'d2v': 0.3, 'lr': 0.7})
# generate output
xdata_test = df_test.comment_text
ydata_test_pred = pipeline_final.predict_proba(xdata_test)
ydata_test_pred = pd.DataFrame(data=ydata_test_pred, columns=yvar)
ydata_test_pred['id'] = df_test.id
ydata_test_pred.to_csv('submission.csv', index=False)

===

Pretty good! With more time, I definitely would have focused on adding more models to the stack, e.g. Naive Bayes and RF/XGBoost. My full code is in my repo on GitHub.

Setting up Apache Airflow on GKE

06 February ’18

Historically, I have used Luigi for a lot of my data pipelining. Recently, however, I have started experimenting with Airflow for a variety of reasons. Some things I really like about Airflow:

  • Easier to parallelize - Luigi can only be scaled locally. You can create multiple worker threads by passing --workers N when kicking off a job, but you cannot parallelize Luigi jobs across multiple machines! Airflow parallelizes quite well. For instance, you can use Celery to scale out your workers.
  • Superior scheduler - The Luigi "central scheduler" is a bit of a misnomer; it doesn't actually schedule anything! Its main purpose is to prevent worker threads from running the same task concurrently. That's it. You still need to initiate Luigi jobs with a cronjob. The Airflow scheduler is much more useful. You can use it to set up a cronjob-like schedule for a DAG and even initiate retries following errors (a minimal example DAG is sketched after this list).
  • Connection management - Airflow has a nice mechanism for organizing connections to your resources. This is really useful, especially in a multiuser environment. It allows you to avoid storing secrets in .gitignore'd config files all over the place.
  • Better ongoing support - Luigi, originally open sourced at Spotify, is currently maintained on a "for fun basis" by Arash Rouhani, who works at Google. Meanwhile, Airflow, originally open sourced at Airbnb, is being incubated by Apache.
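To make the scheduler comparison concrete, here is a minimal sketch of an Airflow DAG with a cron-like schedule and automatic retries (the DAG and task are purely illustrative and not part of the pipeline below):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2018, 1, 1),
    'retries': 2,                          # retry failed tasks automatically
    'retry_delay': timedelta(minutes=5)
}

# runs once a day; no external cronjob needed
dag = DAG('example_dag', default_args=default_args, schedule_interval='@daily')

hello = BashOperator(task_id='say_hello', bash_command='echo hello', dag=dag)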

Given that I have been on a Docker/Kubernetes kick of late, I decided to spend some time setting up Airflow on GKE. I leveraged an awesome Airflow Docker image from Matthieu Roisil. I used a Postgres instance on CloudSQL for the Airflow meta database and Redis as the Celery backend. I also used a git-sync sidecar container to continuously sync DAGs and plugins on the running cluster, so you only need to rebuild the Docker image when changing the Python environment! Finally, I used Terraform for managing all my GCP infrastructure.

Terraform Configuration

# infrastructure.tf

variable "project" {}

variable "postgres_user" {
  default = "airflow"
}
variable "postgres_pw" {
  default = "airflow"
}

variable "region" {
  default = "us-central1"
}

variable "zone" {
  default = "us-central1-f"
}

provider "google" {
  version = "~> 1.5"
  project = "${var.project}"
  region = "${var.region}"
}

resource "google_compute_global_address" "airflow-static-ip" {
  name = "airflow-static-ip"
}

resource "google_compute_disk" "airflow-redis-disk" {
  name  = "airflow-redis-disk"
  type  = "pd-ssd"
  size = "200"
  zone  = "${var.zone}"
}

resource "google_sql_database_instance" "airflow-db" {
  name = "airflow-db"
  database_version = "POSTGRES_9_6"
  region = "${var.region}"
  settings {
    tier = "db-g1-small"
  }
}

resource "google_sql_database" "airflow-schema" {
  name = "airflow"
  instance = "${google_sql_database_instance.airflow-db.name}"
}

resource "google_sql_user" "proxyuser" {
  name = "${var.postgres_user}"
  password = "${var.postgres_pw}"
  instance = "${google_sql_database_instance.airflow-db.name}"
  host = "cloudsqlproxy~%"
}

resource "google_container_cluster" "airflow-cluster" {
  name = "airflow-cluster"
  zone = "${var.zone}"
  initial_node_count = "1"
  node_config {
    machine_type = "n1-standard-4"
    oauth_scopes = ["https://www.googleapis.com/auth/devstorage.read_only"]
  }
}

Kubernetes Manifest

Note: I packaged all Kubernetes resources in a Helm chart. Helm has several features (e.g. named templates, value substitutions) that allow you to write your Kubernetes manifests in a more DRY way.

# config.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-airflow
data:
  EXECUTOR: Celery
  POSTGRES_USER: airflow
  POSTGRES_DB: airflow
  POSTGRES_HOST: postgres
  POSTGRES_PORT: "5432"
  REDIS_HOST: redis
  REDIS_PORT: "6379"
  FLOWER_PORT: "5555"
  {{- if .Values.fernetKey }}
  FERNET_KEY: {{ .Values.fernetKey }}
  {{- end }}
  AIRFLOW__CORE__DAGS_FOLDER: "/git/git/dags/"
  AIRFLOW__CORE__PLUGINS_FOLDER: "/git/git/plugins/"
  AIRFLOW__CORE__LOAD_EXAMPLES: "0"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-git-sync
data:
  GIT_SYNC_REPO: {{ .Values.dagRepo }}
  GIT_SYNC_DEST: git
# db.yaml
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: airflow
        tier: postgres
    spec:
      restartPolicy: Always
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances={{ .Values.projectId }}:us-central1:airflow-db=tcp:0.0.0.0:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: airflow
        tier: redis
    spec:
      restartPolicy: Always
      containers:
        - name: redis
          image: redis:3.0-alpine
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - name: redis-disk
              mountPath: /data/redis
      volumes:
        - name: redis-disk
          gcePersistentDisk:
            pdName: airflow-redis-disk
            fsType: ext4
# ingress.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: airflow-static-ip
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - secretName: airflow-tls
    hosts:
    - web.{{ .Values.domain }}
    - flower.{{ .Values.domain }}
  rules:
  - host: web.{{ .Values.domain }}
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 8080
  - host: flower.{{ .Values.domain }}
    http:
      paths:
      - backend:
          serviceName: flower
          servicePort: 5555
# service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: airflow
    tier: web
  ports:
    - name: web
      port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: flower
spec:
  type: NodePort
  selector:
    app: airflow
    tier: flower
  ports:
    - name: flower
      port: 5555
---
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ClusterIP
  selector:
    app: airflow
    tier: postgres
  ports:
    - name: postgres
      port: 5432
      protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: airflow
    tier: redis
  ports:
    - name: redis
      port: 6379
# deploy.yaml
{{- define "airflow" -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .name }}
spec:
  replicas: {{ .replicas | default 1 }}
  template:
    metadata:
      labels:
        app: airflow
        tier: {{ .name }}
    spec:
      restartPolicy: Always
      containers:
        - name: web
          image: gcr.io/{{ .projectId }}/airflow-gke:latest
          ports:
            - name: web
              containerPort: 8080
          volumeMounts:
          - name: dagbag
            mountPath: /git
          envFrom:
          - configMapRef:
              name: config-airflow
          {{- if eq .name "web" }}
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 30
          {{- end }}
          command: ["/entrypoint.sh"]
          args:  {{ .commandArgs }}
        - name: git-sync
          image: gcr.io/google_containers/git-sync:v2.0.4
          volumeMounts:
          - name: dagbag
            mountPath: /git
          envFrom:
          - configMapRef:
              name: config-git-sync
      volumes:
        - name: dagbag
          emptyDir: {}
{{- end -}}

---
{{- $_ := set .Values.web "projectId" .Values.projectId }}
{{- template "airflow" .Values.web }}
---
{{- $_ := set .Values.scheduler "projectId" .Values.projectId }}
{{- template "airflow" .Values.scheduler }}
---
{{- $_ := set .Values.workers "projectId" .Values.projectId }}
{{- template "airflow" .Values.workers }}
---
{{- $_ := set .Values.flower "projectId" .Values.projectId }}
{{- template "airflow" .Values.flower }}

Deploy Instructions

(1) Store project id and Fernet key as env variables; create SSL cert / key

export PROJECT_ID=$(gcloud config get-value project -q)

if [ ! -f '.keys/fernet.key' ]; then
  export FERNET_KEY=$(python -c "from cryptography.fernet import Fernet; FERNET_KEY = Fernet.generate_key().decode(); print(FERNET_KEY)")
  echo $FERNET_KEY > .keys/fernet.key
else
  export FERNET_KEY=$(cat .keys/fernet.key)
fi

(2) Create Docker image and upload to Google Container Registry

docker build -t airflow-gke:latest .
docker tag airflow-gke gcr.io/${PROJECT_ID}/airflow-gke:latest
gcloud docker -- push gcr.io/${PROJECT_ID}/airflow-gke

(3) Create infrastructure with Terraform

Note: You will also need to create a Service Account for the CloudSQL proxy in Kubernetes. Create it (Role = "Cloud SQL Client"), download the JSON key, and attach it as a secret. It is stored in .keys/airflow-cloudsql.json in this example.
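If you prefer the CLI, creating that Service Account and key looks roughly like this (the account name is illustrative):

gcloud iam service-accounts create airflow-cloudsql
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:airflow-cloudsql@${PROJECT_ID}.iam.gserviceaccount.com \
  --role roles/cloudsql.client
gcloud iam service-accounts keys create .keys/airflow-cloudsql.json \
  --iam-account airflow-cloudsql@${PROJECT_ID}.iam.gserviceaccount.com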

terraform apply -var project=${PROJECT_ID}

gcloud container clusters get-credentials airflow-cluster
gcloud config set container/cluster airflow-cluster

kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=.keys/airflow-cloudsql.json

(4) Set up Helm / Kube-Lego for TLS

Note: You only need to set up kube-lego if you want to set up TLS using Let's Encrypt. I only set up HTTPS because I secured my instance with Cloud IAP, which requires an HTTPS load balancer.

kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
helm init --service-account tiller

kubectl create namespace kube-lego

helm install \
  --namespace kube-lego \
  --set config.LEGO_EMAIL=donald.rauscher@gmail.com \
  --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
  --set config.LEGO_DEFAULT_INGRESS_CLASS=gce \
  stable/kube-lego

(5) Deploy with Kubernetes

helm install . \
  --set projectId=${PROJECT_ID} \
  --set fernetKey=${FERNET_KEY}

Test Pipeline

The example pipeline (citibike.py) streams data from the Citibike API into Google BigQuery. I had a lot of issues with the GCP contrib classes in Airflow (the BQ hook did not support BQ streaming, and the base GCP hook was built on the now-deprecated oauth2client library instead of google-auth), so I built my own plugin!
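For context, the streaming piece of such a plugin boils down to something like the following with a recent google-cloud-bigquery client (a generic sketch, not the actual plugin code; the table and row values are hypothetical):

from google.cloud import bigquery

client = bigquery.Client()

# hypothetical table and payload
rows = [{'station_id': 72, 'num_bikes_available': 10}]
errors = client.insert_rows_json('my-project.citibike.station_status', rows)
if errors:
    raise RuntimeError(errors)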

Note: To run the Citibike example pipeline, you will need to create a Service Account with BigQuery access and add it to the google_cloud_default connection in the Airflow UI.

---

Overall, I'm really excited to start using Airflow for more of my data pipelining. Here is a link to all my code on GitHub. Cheers!

Quick and Easy BI: Setting up Redash on GKE

31 December ’17

Professionally, I have worked quite a lot with Looker and Tableau. They are great BI platforms for an organization, though probably too heavy (and too expensive) for a small project or a bootstrapping startup. Sometimes you just need something where you can write queries and dump them into a visualization. Recently, I was looking to implement a lightweight BI tool for a personal project. I chose Redash, which you can self-host on your own infrastructure. This post documents how to set up Redash on Google Cloud using GKE. Because I am using CloudSQL as the Postgres backend and a persistent disk for Redis, I can delete the cluster when I'm not using it and spin it back up as needed, without losing any data!

Infrastructure Setup

We will use the following Google Cloud components to set up Redash:

  • Postgres DB (via CloudSQL)
  • Persistent disk for Redis instance
  • Kubernetes cluster for Redash Docker image

Here is a Terraform configuration which defines all the necessary infrastructure:

# infrastructure.tf

variable "project" {}

variable "postgres_user" {
  default = "redash"
}
variable "postgres_pw" {
  default = "hsader"
}

variable "region" {
  default = "us-central1"
}

variable "zone" {
  default = "us-central1-f"
}

provider "google" {
  version = "~> 1.4"
  project = "${var.project}"
  region = "${var.region}"
}

resource "google_compute_global_address" "redash-static-ip" {
  name = "redash-static-ip"
}

resource "google_compute_disk" "redash-redis-disk" {
  name  = "redash-redis-disk"
  type  = "pd-ssd"
  size = "200"
  zone  = "${var.zone}"
}

resource "google_sql_database_instance" "redash-db" {
  name = "redash-db"
  database_version = "POSTGRES_9_6"
  region = "${var.region}"
  settings {
    tier = "db-f1-micro"
  }
}

resource "google_sql_database" "redash-schema" {
  name = "redash"
  instance = "${google_sql_database_instance.redash-db.name}"
}

resource "google_sql_user" "proxyuser" {
  name = "${var.postgres_user}"
  password = "${var.postgres_pw}"
  instance = "${google_sql_database_instance.redash-db.name}"
  host = "cloudsqlproxy~%"
}

resource "google_container_cluster" "redash-cluster" {
  name = "redash-cluster"
  zone = "${var.zone}"
  initial_node_count = "1"
  node_config {
    machine_type = "n1-standard-4"
  }
}

Create the infrastructure with Terraform and install the Helm Tiller on the Kubernetes cluster. You will also need to create a service account for the CloudSQL proxy on Kubernetes to use. Create it (Role = "Cloud SQL Client"), download the JSON key, and attach the key as a secret.

export PROJECT_ID=$(gcloud config get-value project -q)
terraform apply -var project=${PROJECT_ID}

gcloud container clusters get-credentials redash-cluster
gcloud config set container/cluster redash-cluster

helm init

kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=[PROXY_KEY_FILE_PATH]

Redash Deployment

Next, we need to deploy Redash on our Kubernetes cluster. I packaged my Kubernetes resources in a Helm chart, which lets you inject values/variables via template directives (e.g. {{ ... }}). I used Helm hooks to set up the configuration and database resources (CloudSQL proxy + Redis) and to run a job that initializes the Redash schema before the app deploys.

helm install . --set projectId=${PROJECT_ID}

Redash resources:

# config.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "1"
data:
  REDASH_DATABASE_URL: postgresql://{{ .Values.postgres.user }}:{{ .Values.postgres.pw }}@postgres:5432/redash
  REDASH_REDIS_URL: "redis://redis:6379/0"
  PYTHONUNBUFFERED: "0"
  REDASH_LOG_LEVEL: "INFO"
# db.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: postgres
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "2"
spec:
  type: ClusterIP
  selector:
    app: redash
    tier: postgres
  ports:
    - name: postgres
      port: 5432
---
kind: Service
apiVersion: v1
metadata:
  name: redis
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "2"
spec:
  type: ClusterIP
  selector:
    app: redash
    tier: redis
  ports:
    - name: redis
      port: 6379
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: postgres
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "2"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redash
        tier: postgres
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances={{ .Values.projectId }}:us-central1:redash-db=tcp:0.0.0.0:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: redis
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "2"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redash
        tier: redis
    spec:
      containers:
        - name: redis
          image: redis:3.0-alpine
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - name: redis-disk
              mountPath: /data/redis
      volumes:
        - name: redis-disk
          gcePersistentDisk:
            pdName: redash-redis-disk
            fsType: ext4

Redash DB initialization job:

# init.yaml
---
kind: Job
apiVersion: batch/v1
metadata:
  name: init-db
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "3"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: redash-init
          image: redash/redash:latest
          resources:
            requests:
              memory: 1Gi
          envFrom:
            - configMapRef:
                name: config
          args: ["create_db"]

Redash deployment:

# app.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: redash
  annotations:
    kubernetes.io/ingress.global-static-ip-name: redash-static-ip
spec:
  type: LoadBalancer
  selector:
    app: redash
    tier: app
  ports:
    - port: 80
      targetPort: 5000
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: redash
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redash
        tier: app
    spec:
      containers:
        - name: server
          image: redash/redash:latest
          resources:
            requests:
              memory: 1Gi
          ports:
            - containerPort: 5000
          envFrom:
            - configMapRef:
                name: config
          env:
            - name: REDASH_COOKIE_SECRET
              value: {{ .Values.cookieSecret }}
          args: ["server"]
        - name: workers
          image: redash/redash:latest
          resources:
            requests:
              memory: 1Gi
          envFrom:
            - configMapRef:
                name: config
          env:
            - name: WORKERS_COUNT
              value: "{{ .Values.numWorkers }}"
            - name: QUEUES
              value: "queries,scheduled_queries,celery"
          args: ["scheduler"]

---

You can find all of my code up on my GitHub here. Cheers!