Software Engineering Interview Mistakes - Why "Homework" Tasks Are Silly

Recruitment is hard and interviews are complex

Recruitment is a complex and difficult matter, and interviews are far from perfect and exhausting for companies and candidates alike.

Engineers looking for new opportunities have a great dislike for the process. It includes lots of calls, interviews, tests, and more. It’s a common feeling for many that they just took on a second job: looking for work.

Companies, on the other hand, don’t have an easier time with it. They have to sift through loads of resumes, read an insane amount of emails, and answer tons of calls. All in order to decide who they will actually interview in person. There are so many candidates that sound and look the part while many are barely qualified to make coffee in real life.

Common mistakes in interviews

I’ve been involved in hundreds—if not thousands—of these processes and worn multiple hats while doing so, and I’d like to make a few observations and important notes to candidates and companies alike.

First you have to remember: Great interviews don’t mean great hires! Both sides have to remember this, as it’s a critical point! There are many things that you will not know:

  • Will that person be hard-working?
  • Will they not give up when confronted with hard tasks?
  • Will they be able to find creative solutions?
  • Will they be a good coder or not?

There are many other things you won’t know; you’ll only know if the person has potential and how well they do at interviews!

When we interview, as candidates and as companies, we get very excited about certain opportunities (great cultural fit, amazing performance on interview tasks, everyone seems so nice, the unexpected feeling of a strong work connection, etc.). No matter how logical, measured, or subjective your personal reasoning is about that candidate or company, you won't really know what it means to work together until you actually work together.

None of the big guys do it

Google, Facebook, Twitter, and many of the other big players could easily send homework tasks to all candidates, but they don't! They spend a day or more with a candidate, run through code together, and get a sense of what that person is like. So why try to re-invent the wheel? You're not going to write your own front-end framework; you'll use React or Angular. So why not also recruit the way they do?

Why are homework tasks silly, and what should we do instead?

Way too many companies send people take-home tasks or, worse yet, some silly HackerRank test that asks them to solve a problem under severe time constraints.

I’m not sure who in The Valley started this and made everyone follow this detached-from-reality practice.

You’re well-funded? That’s no reason to assign candidates a homework task. Feel free to offer one, as some candidates like it, but your best bet is to spend time with a person: solve a problem, code together, etc.

Since we agreed good interviews != good hires, then why not do your best to simulate the environment of solving a real task at work? Isn’t that what you’d want that person to do anyhow?
Run through some code problem together and get a sense for what it is like to work together.
You’ll see how a person thinks, how he/she tackles hard problems, and gain much more insight than you would from a random test or take-home task.

What is the logic behind sending some obscure test or asking someone to build software for you for free? Are you trying to miss out on good candidates? Should a busy person spend half a day, a day, or even more writing free code to prove that they are worthy of employment? Maybe that’s okay for recent graduates, but what about people with 5–10 years’ experience or more? What other profession in the world does that?

I’m a big believer in fairness, and if you ask someone to invest time then be willing to invest the same time yourself as well. While it will be more time-consuming, you will both have the chance to work through a task together and you’ll get a good sense for working with each other.

When do homework tasks make sense, and how should you give them?

Personally, I say only if you’re willing to pay that person for their time and show that you value it. Say you’re a starving startup: pick a small task, offer it as a stand-alone project, and, assuming the code is good, have the candidate sign over the rights so you might even use it. Then pay them for their time, unless of course the code is bad and they do not pass.

In the next part I’ll talk about more interviewing tips and suggestions. Stay tuned!

Cost Effective Docker Jobs on Google Cloud

Recently I wanted to run some jobs. I'm a huge advocate of using Docker, so naturally I was going to build a Docker image to run my Python scripts, then schedule said job to run once in a while. Doing so on AWS is pretty easy using Lambda and Step Functions; however, since this wasn't a paid gig, I couldn't get someone to foot the bill. Enter Google Cloud!

Google Cloud Platform (GCP) is, in a way, the newer kid on the block. AWS has a long history as a cloud platform and excellent customer support, whereas Google customer service is a bit like Bigfoot: you've heard of it, some people say they've seen it, but it doesn't really exist... BUT Google is still an amazing technology company; they release early, then improve things until they rock (i.e. Android). And best of all, they offer $300 in free credits. So I decided to go for Google. How bad can it be?

In this post I'll talk about how I set up Google Cloud to work for me, in a rather cool way. It took lots of blood, sweat and tears, but I got it working: on a schedule, I spin up a cluster of instances, run the job, then shut it all down! Not only is that cool (yes, I'm a geek), it's also quite cost-effective.

I will outline what I did and even share my code with you.
Here goes:

Step 1 - Build docker image and push to google cloud private registry

The first step was the easiest and most trivial. It is pretty much the same as on AWS.

Create a build docker image

Let's start with creating a build image. GitLab CI allows you to use your own image as your build machine, which is cool. If you're using a different CI, I leave it to you to adjust this to your own system.

FROM docker:latest

RUN apk add --no-cache python py2-pip curl bash
# install the Google Cloud SDK (the one-line install script)
RUN curl -sSL | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin

RUN pip install docker-compose

This is the Dockerfile for the build machine. It is based on the docker image, installs Python and pip, and installs gcloud.

Then I push this build image to Docker Hub. If you haven't done this before, you need to:

1) Sign up to Docker Hub and remember your username.
2) In the build machine folder, run docker build . -t <username>/build-machine
3) Run:

$ docker login
$ docker push <username>/build-machine:latest

Create a GCP service account

You have to create a service account, give it access to the registry, then export the key file as JSON. This is a very simple step. If you're unsure how to do it, just click through IAM & Admin: you need to create a user, give it an IAM role, and export the key. Very easy.

Customize the CI script to push to the private registry

Once this is all done and you have your build machine, we can work on your CI script. I will show you how to do this on GitLab CI, but you can adapt this to your own environment. First create a build environment variable called CLOUDSDK_JSON and paste the contents of the JSON key you created in the previous step as its value. Then add the following .gitlab-ci.yml file to your project.

image: <username>/build-machine

services:
  - docker:dind

stages:
  - build
  - test
  - deploy

before_script:
  - apk add --no-cache python py2-pip
  - pip install --no-cache-dir docker-compose
  - docker version
  - docker-compose version
  - gcloud version

build:
  stage: build
  only:
    - develop
    - master
  script:
    - docker build -t <job-image-name>:latest .

deploy:
  stage: deploy
  only:
    - develop
    - master
  script:
    - docker build -t <job-image-name>:latest .
    - echo $CLOUDSDK_JSON > key.json
    - gcloud auth activate-service-account <service_account_name> --key-file=key.json
    - docker tag <job-image-name>:latest $PRIVATE_REGISTERY/<job-image-name>:latest
    - gcloud docker -- push $PRIVATE_REGISTERY/<job-image-name>:latest
    - gcloud auth revoke

Please adjust <job-image-name> to your job's docker image name, <service_account_name> to the service account name you created, and the build image to the image you pushed to Docker Hub. This yaml file is directed at a Python job, but you can change it to any other language.

I have 3 stages: build, test and deploy.
I build and test on all branches, but only deploy on master. GitLab CI has a quirk: each stage can run on a different machine, so the image from my build stage isn't carried over to the deploy stage, which forced me to re-build in the deploy stage.

Once this is done, your CI system should be pushing your image to your Google private registry. Well done!

Step 2 - Running Jobs in a Temp Cluster

Here comes the tricky part. Since jobs only need to run every so often, and only for a limited period, they would ideally run as a Google Cloud Function. However, those are limited to one hour and can only be written in JavaScript (AWS supports multiple languages with Lambda, plus state machines with Step Functions). And since I didn't want to pay for a cluster running full-time, I had to develop my own way to run jobs.

Kubernetes Services

Controlling jobs in a cluster, and controlling the cluster itself, can be achieved using Kubernetes. This is one part of GCP that really shines: it lets you define services, jobs, and pods (collections of containers), and run them.

To do this, I wrote a KubernetesService class in python that will:

- Spin up / create a cluster.
- Launch docker containers on the cluster.
- Once jobs finish, shutdown the cluster.

import datetime
import logging

import kubernetes.client
from kubernetes.client.rest import ApiException
from googleapiclient.discovery import build


class KubernetesService:

    def __init__(self, namespace='default'):
        self.api_instance = kubernetes.client.BatchV1Api()
        service = build('container', 'v1')
        self.nodes = service.projects().zones().clusters().nodePools()
        self.namespace = namespace

This is the class and constructor. The full code for this class has more configuration and env variables, as it is part of the App Engine cron project. I will include the repo if you want full details on how to achieve this.

    def setClusterSize(self, newSize):"resizing cluster {} to {}".format(CLUSTER_ID, newSize))
        self.nodes.setSize(projectId=PROJECT_ID, zone=ZONE,
                           clusterId=CLUSTER_ID, nodePoolId=NODE_POOL_ID,
                           body={"nodeCount": newSize}).execute()

This function can control the cluster size. It can spin it up, before jobs need to be run, then shut it down after:

    def kubernetes_job(self, containers_info, job_name='default_job', shutdown_on_finish=True):
        # Scale the Kubernetes cluster to 3 nodes
        self.setClusterSize(3)
        timestampped_job_name = "{}-{:%Y-%m-%d-%H-%M-%S}".format(
        # Adding the containers to a pod definition
        pod = kubernetes.client.V1PodSpec()
        pod.containers = self.create_containers(containers_info)
        pod.restart_policy = 'OnFailure'
        # Adding the pod to a Job template
        template = kubernetes.client.V1PodTemplateSpec()
        template_metadata = kubernetes.client.V1ObjectMeta() = "tpl-{}".format(timestampped_job_name)
        template.metadata = template_metadata
        template.spec = pod
        # Adding the Job template to the Job spec
        spec = kubernetes.client.V1JobSpec()
        spec.template = template
        # Adding the final Job spec to the top-level Job object
        body = kubernetes.client.V1Job()
        body.api_version = "batch/v1"
        body.kind = "Job"
        metadata = kubernetes.client.V1ObjectMeta() = timestampped_job_name
        body.metadata = metadata
        body.spec = spec
        try:
            # Creating the job
            api_response = self.api_instance.create_namespaced_job(self.namespace, body)
  'job creation result: {}'.format(api_response))
        except ApiException as e:
            logging.exception("Exception when calling BatchV1Api->create_namespaced_job: %s", e)

The kubernetes_job function creates containers (via an additional function that builds container objects with env variables). The containers are then part of a pod, the pod is part of a job template, and that in turn is part of a job spec. You can read more about it in the Kubernetes docs.

    def shutdown_cluster_on_jobs_complete(self):
        api_response = self.api_instance.list_namespaced_job(self.namespace)
        if next((item for item in api_response.items if item.status.succeeded != 1), None) is None:
  "no running jobs found, shutting down cluster")
            self.setClusterSize(0)
        else:
  "found running jobs, keeping cluster up")

If you don't want the code to sit and wait for the jobs, you can poll for completion; that is what shutdown_cluster_on_jobs_complete is for. It will shut down the cluster once there are no running jobs.

This class successfully handles all the job scheduling and execution, and it is part of an App Engine app (though it can be used independently). Next we need to have this script scheduled or triggered, and that is our cron scheduler task.

Cron scheduler appengine service

Sadly, Google doesn't give you an easy way to run code in the cloud; you actually have to write more code to run code (silly, right?).

The concept is that App Engine provides a cron web scheduler that calls your own app's endpoints at given intervals.

First you add a cron.yaml file to your project and configure which endpoints to hit and at what intervals:

cron:
- description: task to kick off all updates
  url: /events/run-jobs
  schedule: every 2 hours
- description: task to shutdown jobs when finished
  url: /events/shutdown-jobs
  schedule: every 5 minutes

Then we can add handlers to kick off the jobs and to shut them down when finished.

kuberService = KubernetesService()


class RunJobsHandler(webapp2.RequestHandler):
    def get(self):"running jobs")
        try:
            jobs_list = Settings.get("JOBS_LIST").split()
            for job_name in jobs_list:
                job_name = job_name.replace("_", "-")  # names cannot have underscores
      'about to publish job {}'.format(job_name))

                containers_info = [{
                    "image": Settings.get("IMAGE_NAME"),
                    "name": job_name,
                    "env_vars": [
                        {"name": "SOME_ENV_VAR", "value": "some value"},
                    ],
                }]

                job_env_vars = Settings.get('JOB_ENV_VARS').split()
                for env_var in job_env_vars:
          'adding container var {}'.format(env_var))
                    containers_info[0]["env_vars"].append({
                        "name": env_var,
                        "value": Settings.get(env_var),
                    })
                kuberService.kubernetes_job(containers_info, job_name, False)
            self.response.status = 204
            self.response.write("jobs published successfully")
        except Exception as e:
            logging.exception(e)
            self.response.status = 500
            self.response.write("error running jobs, check logs for more details.")

Last, we want to add a Settings class to load env-like variables from the datastore:

import os
from google.appengine.ext import ndb

if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/'):
    PROD = True
else:
    PROD = False

NOT_SET_VALUE = "NOT SET"


class Settings(ndb.Model):
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        retval = Settings.query( == name).get()
        if not retval:
            retval = Settings()
   = name
            retval.value = NOT_SET_VALUE
            retval.put()
        if retval.value == NOT_SET_VALUE:
            raise Exception(('Setting %s not found in the database. A placeholder '
                             'record has been created. Go to the Developers Console for your app '
                             'in App Engine, look up the Settings record with name=%s and enter '
                             'its value in that record\'s value field.') % (name, name))
        return retval.value

Note that most of the app depends on the datastore. Sadly, Google doesn't let you define env variables easily, but you can keep them in the datastore, and that is what the Settings class is for.

Then we just bind the route handler:

import webapp2

app = webapp2.WSGIApplication([('/events/run-jobs', RunJobsHandler)])

This should allow our app to spin up a cluster, launch containers, and then shut down the cluster. In my code I also added a handler for the shutdown.

Then make sure you have gcloud installed, deploy the app using the gcloud app deploy command, and you should be good to go.

While my example runs the same docker image with different operations via different env variables, you can easily adjust this code to suit whatever need you might have.

Here is the full git repo:

Hope you find it useful!


JS Testing Survival (Mocha, Chai, Sinon)

This post is a simple guide to JS testing with Mocha, Chai and Sinon on CircleCI. It will show you how to set up for testing, plus some great tips for good coverage and more.
I'll cover some best practices I use for testing JS code. They're not official best practices, but I use these concepts because I've found they make it easy to get readable tests with full coverage and a very flexible setup.

This post will walk through a unit test file to illustrate the different points I've found helpful when composing unit test files.


Mocha is a testing framework for JS that lets you use any assertion library you'd like; it is most commonly paired with Chai, an assertion library. The Mocha and Chai docs explain how they work, how to use them, and more.
One of Chai's strong points is that you can easily extend it with support libraries and plugins. We will use a few of them, so let's first set up our dependencies in our project:

npm install mocha chai chai-http chai-as-promised co-mocha sinon --save-dev

We are installing a few libraries:

  • mocha - a JS testing framework.
  • chai - the Chai library; its docs have a good reference for how to use chai to assert or expect values, plus a plugin directory - a valuable resource!
  • chai-http - a chai extension that allows us to hit http endpoints during a test.
  • chai-as-promised - adds support for tests / setup that return a promise, letting us assert / expect on what the result of the promise will be. We will see this in action shortly.
  • co-mocha - a mocha extension that allows us to use generator functions inside mocha setup / tests. If you skip this and try to use a generator function, the test will finish without running the yields in the test code correctly. This means you will get twilight-zone-like results: tests passing when they should fail!
  • sinon - test mocks, spies and stubs for any JS framework. Works really well and is very extensive.

After we install all the packages, let's create a new file, and add all the required libraries to it as follows:

//demo test file
const chai = require('chai');
const chaiHttp = require('chai-http');
const chaiAsPromised = require('chai-as-promised');
const sinon = require('sinon');

const TestUtils = require('./utils/TestUtils'); //explained later on
const server = require('../server'); //explained later on

In this example I'm testing an express server, but you can use any type of node http server (assuming you are testing a server). Just make sure you export the server from your main or server file; then you can require it from your test files.

We will see how we use the server later on in the test.

const express = require('express');
const server = express();
//all server route and setup code.
module.exports = server;

Grouping tests using 'describe'

Mocha does a great job at grouping tests. To group tests together under a subject, use the following statement:

describe('Test Group Description', () => {
  // test cases
});

'describes' are also easily nest-able, which is great. So the following will also work:

describe('Test Endpoint', () => {
  describe('GET tests', () => {
    // GET test cases
  });
  describe('POST tests', () => {
    // POST test cases
  });
  describe('PUT tests', () => {
    // PUT test cases
  });
  describe('DELETE tests', () => {
    // DELETE test cases
  });
});

This groups them together, and if you're using something like IntelliJ or WebStorm, the output is displayed very nicely in a collapsible window.

Test hooks

When running tests, we often need to do setup before each test or before a whole test suite. The way to do that is to use the testing hooks before, after, beforeEach and afterEach:

describe('hooks', function() {

  before(function() {
    // runs before all tests in this block
  });

  after(function() {
    // runs after all tests in this block
  });

  beforeEach(function() {
    // runs before each test in this block
  });

  afterEach(function() {
    // runs after each test in this block
  });

  // test cases
});

These hooks can also return a promise; the test framework will not continue until the promise is resolved, and will fail if it is rejected:

before(() => {
  // do some work, return a promise / promise chain
  return new Promise((resolve) => resolve(true));
});

after(() => functionThatReturnsPromise());

Also, since we have required co-mocha, our hooks can run a generator function:

let stuffINeedInTests = null;
before(function* () {
  const result = yield functionThatReturnsPromise();
  const resultFromGen = yield* generatorFunction();
  stuffINeedInTests = { promiseResult: result, genResult: resultFromGen };
});

I can then use stuffINeedInTests in my tests. You can also do this setup using promises, as shown above.

Hook on root level

Test hooks are awesome, but sometimes we want some hooks to run not once per test file, but once for all our tests. Mocha does expose root-level hooks, so to achieve that we create a new hooks file, root-level-hooks.js, and put our hooks in it with no describe block around them:


require('co-mocha'); //enable use of generators

before(() => {
  //global hook to run once before all tests
});

after(function* () {
  // global after hook that can
  // call generators / promises
  // using yield / yield*
});

Then at the top of each test file we will require this file in:

//demo test file
require('./root-level-hooks');

//demo test file 2
require('./root-level-hooks');

This way our hooks run once for the whole test run. This is the perfect place to load a test DB, run root-level setup, authenticate to the system, etc.

External System Mocking

Some systems / modules call other systems internally. For example, think of a function that processes a payment for an order. That function might need to call a payment gateway or, after the order is processed, send the shipping information to another system (for example a logistics system, or upload a file to S3). Unit tests are intended to be stand-alone and not depend on external systems. Therefore we need a way to mock those external systems, so that when the tested code reaches out to them, the test case can respond on their behalf.

In our tests we will use sinon.
Basically, we mock the calls with a test object whose mocked calls read a response file and send it back.
This makes the mock straightforward:

const requestMock = {
    get: sinon.spy((input) => {
      switch (input.url) {
        case 'http://externalSystemUrl': {
          const campaignsResponse = fs.readFileSync(path.join(__dirname, '../files/testData.json'), 'utf8');
          return Promise.resolve(campaignsResponse.trim());
        }
        case 'http://anotherExternalSystemUrl':
          return Promise.resolve(JSON.stringify({}));
        default:
          throw new Error(`unmocked ${input.url} url request, error in test setup`);
      }
    }),
    post: sinon.spy((input) => {
      switch (input.url) {
        case 'http://someServer/check-if-items-invalid':
          return Promise.resolve( => false));
        default:
          throw new Error(`unmocked ${input.url} url request, error in test setup`);
      }
    }),
};

What we are doing here is creating a mock object; in this case we are mocking axios, as my server code uses it, but you can use the same construct to mock any external system.
Our request mock provides get and post methods, just like the axios library does. I'm using sinon.spy so I can check which URL the module code requests, and a switch statement to handle the different URLs. Our mock can return urls, json, promises, files, or whatever is needed to successfully mock the external system.

const axios = require('axios');

before(() => {
  sinon.stub(axios, 'get').callsFake(requestMock.get);
  sinon.stub(axios, 'post').callsFake(;
});

after(() => {
  sinon.restore();
});
I'm then using the before hook to register the mock on axios, so when the module that required('axios') makes a call, it reaches my mock and not the node_module that actually does the http request.

Then I'm using the after hook, to disable the mock and return to normal.

Test Cases

Mocha lets us create tests very easily: you use the 'it' keyword to create a test.

it('Unit test description and expected output', () => {
  // return a value or return a promise
});

Or using generators

it('Unit test description and expected output', function* () {
  // yield a generator or promise
});

You can also use the done callback, but I prefer not to use it.
I like to keep code as small as possible and without any distractions.
However, it's here if you need it:

it('Unit test description and expected output', (done) => {
  // call done() when the async operation finishes
});


Each test case is composed of two parts:
1) The test itself
2) The expected result

Test themselves

Since we have added the mock for the external system, we can safely use our test code to hit a function, or, if we are testing a REST endpoint, call that endpoint:

chai.request(server)
  .get('/endpoint')
  .then(function (response) {
    // process response
  });

Or, using generators:

const response = yield chai.request(server)
  .post('/endpoint')
  .send({ testObject: { name: 'test' } });

In this example we are testing an endpoint, but calling a function would have been even easier.

Expected Result

The second part involves looking at the results of our test runs, and we will be using chai to inspect the responses. chai provides a long list of ways to examine responses using expect, should or assert, whichever you prefer.
I try to use expect as it doesn't change the Object.prototype. Here is a discussion of the differences: expect vs should vs assert.

assert.isOk(res.statusCode === 201, 'Bad status code');
TestUtils.testForSuccessAndBody(res, expect, 201);

Failing these will trigger the test to fail.
I normally use a test helper class with a few standard ways to test for a correct response and to compare the returned object to the expected object.

Test for failures

Using promises, I can also quickly test for failures, to ensure our code doesn't just work for valid input but also handles invalid input properly.

I can test to see that code will fail with bad input:

it('GET /endpoint/BADID should return 400 bad request', () =>
  expect(chai.request(server).get('/endpoint/BADID'))'Bad Request'));
//or missing field

it('PUT /endpoint/:id with missing property name should return 400', () =>
  TestUtils.testMissingField(server, 'put', chai, expect,
    `/endpoint/${}`, inputObjectWithoutNameProperty, 'name'));

TestUtils class

TestUtils is a utility class I created with some common expectations: it lets me easily test for missing fields, iterate the body for all the fields I expect, or check for a simple 200 and body.

const moment = require('moment');

class TestUtils {
  static testMissingField(server, command, chai, expect, url,
    baseObject, fieldToCheck, sendAsArray) {
    const missingNameObj = JSON.parse(JSON.stringify(baseObject));
    delete missingNameObj[fieldToCheck];
    return expect(chai.request(server)[command](url)
        .send((sendAsArray) ? [missingNameObj] : missingNameObj)
    )'Bad Request');
  }

  static testAllPropertiesInSrcExistInTarget(expect, assert, srcObj, targetObj) {
    Object.getOwnPropertyNames(srcObj).forEach((propName) => {
      if (Array.isArray(srcObj[propName])) {
        return; // arrays are compared elsewhere
      }
      if (moment(srcObj[propName], ['YYYY-MM-DD', 'moment.ISO_8601'], true).isValid()) {
        assert.isOk(moment.utc(targetObj[propName])
            .isSame(moment.utc(srcObj[propName])),
          `expected ${srcObj[propName]}, got ${targetObj[propName]} when comparing ${propName}`);
        return;
      }
      assert.isOk(targetObj[propName] == srcObj[propName], // eslint-disable-line eqeqeq
        `expected ${srcObj[propName]}, got ${targetObj[propName]} when comparing ${propName}`);
    });
  }

  static testForSuccessAndBody(res, expect, code = 200) {
    expect(res)'statusCode', code);
    expect(res);
  }
}

module.exports = TestUtils;
I then require the TestUtils class in my test file, and I can use it to quickly expect or assert different conditions.

Mocha tests on circle

When using CircleCI, it's great to get the test output into the $CIRCLE_TEST_REPORTS folder: Circle will then read the output and present you with the results, rather than you digging through the logs each time to figure out what went right and what went wrong. The CircleCI folks have written a whole document about this; see CircleCI Test Artifacts.

Here we will focus on using mocha and getting the reports parsed. To do so, we need mocha to output the results in JUnit XML format. This is easily achieved with mocha-junit-reporter, a lib that lets mocha run our tests and output the results in the correct format.

So the first step is to run

npm install mocha-junit-reporter --save-dev

And add scripts to package.json that output in JUnit format:

{
  "scripts": {
    "lint": "node_modules/.bin/eslint .",
    "test": "NODE_ENV=test npm run lint && npm run migrate && npm run test:mocha",
    "test:mocha": "NODE_ENV=test ./node_modules/.bin/mocha --timeout=5000 tests/*.test.js",
    "test:circle-ci-junit-output": "npm run lint -- --format=junit --output-file=junit/eslint.xml && MOCHA_FILE=junit/mocha.xml npm run test:mocha -- --reporter mocha-junit-reporter"
  }
}

This outputs the results into the junit folder, for both eslint (if you are using it) and mocha.

Now all that is needed is to create a link between your junit folder and $CIRCLE_TEST_REPORTS, which can be done by editing the circle.yml file and adding the following to the pre step for test:

test:
  pre:
    - mkdir -p $CIRCLE_TEST_REPORTS/junit

If you aren't using docker, you can also add a symbolic link after the creation of the folder - ln -s $CIRCLE_TEST_REPORTS/junit ~/yourProjectRoot/junit

However, if you are using docker-compose or docker run to execute your tests inside a container, you will also need to add a volume that maps your test output to $CIRCLE_TEST_REPORTS.
For docker compose:

volumes:
  - $CIRCLE_TEST_REPORTS/junit://junit

For docker run you can do the same using the -v flag.
Once that is done, you'll get the report output in Circle after the build finishes.

Good luck!


REST Endpoints Design Pattern

In this post I'll present a suggested design pattern, and an implementation of it as a Node + Express REST API with ES6 classes. Personally, I hate writing the same code again and again: it violates the DRY principle, and I hate to waste my time and my customers' time. Coming from a C++ background, I love a nice class design.

In today's world of microservices and the web, REST endpoints have become somewhat of the de-facto way to connect services and web applications. There are loads of examples of how to create REST endpoints and servers using Node.js and Express 4.0. SOAP, which was popular a while back, has given way to JSON, and new technologies like GraphQL have not hit mainstream yet, so for now we are stuck with REST and JSON.

I haven't found a tutorial that discusses how to do this using ES6 classes and a good class design. This is what we will cover today.

Rather than building REST endpoints over and over, my concept is to have a base router implement base behavior for the REST endpoint, then have derived classes override such behavior if needed.

We create an abstract base class, with all the default route handlers as static methods. Those will take a request, process it (most likely read / write / delete / update the DB) and return the results. Then setupRoutes will be the glue that binds the static methods to the actual routes. In addition, our constructor will take a route name, which will be the route path being processed.

Then derived classes can either disable certain routes, or override routes as need be, while maintaining the base behaviour, if that is what is needed (for example when wrapping a service, or doing simple DB operations).


Now let's implement this in JavaScript using Node.js, Express and ES Classes. I'm going to implement this example using MongoDB and Mongoose, but you can use any other DB or service you wish. The Mongoose in this code sample is pretty meaningless, it's just for the sake of the example.

Create a new project folder, and call npm init inside it.
Then install express and required libs: npm install express body-parser cors bluebird mongoose change-case require-dir --save

Then I'll create the server.js main file (we won't discuss it in detail, as it's mostly a standard node/express server; the one important line to note is require('./routes/index')(server,db); as this creates all the routes for our application).

// Far from perfect, but a good base example for a server.
// Should also change console.log to some logger.
// server.js
'use strict';

const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const process = require('process');
const mongoose = require('mongoose');
const server = express();

server.use(bodyParser.urlencoded({ extended: true }));
server.use((req, res, next) => {
  console.log(`${req.method} request on ${req.url}`);
  return next();
});

const db = mongoose.connect('mongodb://localhost');

// support for cross origin requests
server.use(cors({
  origin: '*',
  credentials: true,
  methods: ['GET', 'PUT', 'DELETE', 'POST', 'OPTIONS'],
  allowedHeaders: ['X-Requested-With'],
}));

// health check path
server.get('/status', (req, res, next) => {
  res.sendStatus(200);
});

require('./routes/index')(server, db); // <===== includes all routes.

server.get('*', (req, res, next) => {
  res.status(404).send('route not defined');
  return next();
});

// error-handling middleware, registered last so it catches route errors
server.use((err, req, res, next) => {
  console.log(`Internal Server Error ${err}`);
  res.status(500).send({ message: 'Internal Server Error' });
});

const PORT = 8080;
server.listen(PORT, () => console.log(`REST server listening on ${PORT}`));

function cleanup() {
  // do server cleanup.
}

// listen for TERM signal e.g. kill
process.on('SIGTERM', cleanup);

// listen for INT signal e.g. Ctrl-C
process.on('SIGINT', cleanup);

// export for testing
module.exports = server;

I'm including a single route file, which will build up all our routes. So let's look into that index.js file, to see what's going on in there:

// routes/index.js
'use strict';

const routeHandlers = require('require-dir')('./route-handlers');
const changeCase = require('change-case');

const BASEURL = '/api/';

function setupRoutes(server, db) {
  // Initialize all routes by iterating the keys of the require-dir
  Object.keys(routeHandlers).forEach((routeName) => {
    // connect routes to the server base url
    const newRouteHandlerClass = new routeHandlers[routeName](db);
    server.use(`${BASEURL}${changeCase.paramCase(routeName)}`, newRouteHandlerClass.setupRoutes());
  });
}

module.exports = setupRoutes;

I like to use automatic glue code, rather than re-typing or building a static array. This way the system detects new routes and adds them automatically, just by adding a file to a folder.

  1. I'm using require-dir, which will include all route handlers. I wanted each route to handle its own paths, and not the global paths (I like encapsulation). So as a design decision I made the filename the subroute path.
  2. I then create an instance of the route handler class, passing it a reference to the db (so it can do its thing).
  3. setupRoutes() returns a router, which I then connect to our server. I'm building on server.use of the express router to bind routes to the base URL. If you adopt this implementation you can always use your own structure.
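To make the filename-to-path mapping concrete, here is a minimal sketch of what the param-casing does (the real change-case package handles many more edge cases than this illustration):

```javascript
// Minimal sketch of param-casing a route filename into a URL segment.
function paramCase(name) {
  return name
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2') // split camelCase boundaries
    .replace(/[\s_]+/g, '-')                // normalize other separators
    .toLowerCase();
}

console.log(paramCase('CompanyUsers')); // -> company-users
// so a file named companyUsers.js would be served under /api/company-users
```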

Next let's look at the base-route-handler, which is the base for all route handlers. It will contain most of the code for any endpoint:

// routes/base-route-handler.js
'use strict';

const express = require('express');
const coWrapper = require('../utils/expressCoWrapper');

class BaseRouteHandler {
  constructor(collectionName, db) {
    this.db = db;
    this.router = new express.Router();
    this.collectionName = collectionName;
    this.collection = this.db[this.collectionName];
  }

  static validateOkResponse(res, foundItems) {
    if (!foundItems || !foundItems.length) {
      res.status(404).send('item not found');
      return false;
    }
    return true;
  }

  setupMiddleware() {
    // attach any middleware you might need on a per-route basis; can be overridden in subclasses
  }

  static* getSingle(req, res, next) {
    try {
      const foundItems = yield this.collection.find({ id: req.params.id });
      if (BaseRouteHandler.validateOkResponse(res, foundItems)) res.json(foundItems[0]);
    } catch (err) {
      res.status(500).send('Internal Error');
      throw err;
    }
  }

  static* putSingle(req, res, next) {
    try {
      const result = yield this.collection.insert([req.body]);
      if (BaseRouteHandler.validateOkResponse(res, result)) res.json(result[0]);
    } catch (err) {
      res.status(500).send('Internal Error');
      throw err;
    }
  }

  static* deleteSingle(req, res, next) {
    try {
      const result = yield this.collection.remove({ id: req.params.id });
      res.json(result);
    } catch (err) {
      res.status(500).send('Internal Error');
      throw err;
    }
  }

  static* getMultiple(req, res, next) {
    try {
      res.connection.setTimeout(0); // disable server timeout - this may take a while
      const result = yield this.collection.find({});
      res.json(result);
    } catch (err) {
      res.status(500).send('Internal Error');
      throw err;
    }
  }

  static* postMultiple(req, res, next) {
    try {
      const result = yield this.collection.update([req.body]);
      res.json(result);
    } catch (err) {
      res.status(500).send('Internal Error');
      throw err;
    }
  }

  // eslint-disable-next-line require-yield
  static* notImplemented(req, res, next) {
    res.status(501).send('Not implemented');
  }

  setupRoutes() {
    const self = this;
    this.setupMiddleware();
    // coWrapper turns a generator into an express handler bound to the
    // instance, so `this.collection` resolves correctly inside the statics.
    // Using self.constructor (not BaseRouteHandler) means derived-class
    // overrides of the static handlers are picked up automatically.
    this.router.get('/', coWrapper(self.constructor.getMultiple, self));
    this.router.get('/:id', coWrapper(self.constructor.getSingle, self));
    this.router.put('/', coWrapper(self.constructor.putSingle, self));
    this.router.delete('/:id', coWrapper(self.constructor.deleteSingle, self));'/', coWrapper(self.constructor.postMultiple, self));
    return this.router;
  }
}

module.exports = BaseRouteHandler;

I wanted to use generators, as I like their async / await-like structure. So I wrote a co-wrapper file that handles errors and the generator routes correctly, including wrapping them with a promise. I don't wish to go in depth explaining it, as it's not the point of this post, but you can see the file in the git repo.

Next we create the base constructor, which takes the collection name and a db reference. It creates the binding to a collection / table / service / anything else you want. It also calls the middleware setup; if you wish to bind your route-based middleware, you can override this function in derived classes.

Next I go through and create static route handlers for each route. As you can see the route handlers are pretty simple: take JSON in, perform some DB operation and return the result. In other examples you might have more complex behaviour. The nice thing is the base creates a default behaviour, but by overriding the static methods in derived classes we can do whatever we wish to do.

Once the base class is ready we can create a real route that will do something!
Let's create a 'route-handlers' folder inside the 'routes' folder and add a file called companies.js.

// routes/route-handlers/companies.js
'use strict';

const BaseRouteHandler = require('../base-route-handler');

class CompaniesRouter extends BaseRouteHandler {
  constructor(db) {
    super('companies', db);
  }

  static* putSingle(req, res, next) {
    yield* super.notImplemented(req, res, next);
  }

  static* deleteSingle(req, res, next) {
    yield* super.notImplemented(req, res, next);
  }

  static* postSingle(req, res, next) {
    // do some code to send an email to the admin, to ask to create multiple new companies
  }
}

module.exports = CompaniesRouter;

First, look at how easy it was to create a new route. We didn't even need to write this much code; we could just create the constructor and be done with it, if we wanted the same behaviour as the base class.

I did want to show, though, how easy it is to override the code without much work. The base class provides us with a notImplemented handler, which makes it easy to disable routes.

Even adding a route is easy: just add a handler implementation of your own. This makes it easy to test just the functionality, without re-writing the same code over and over.

That's all for now!

Hope you enjoyed this, or found it useful.


Concurrency - Watch out for globals in node.js AMD modules!

Globals, or global variables, are known to be risky.
However, the ‘var’ keyword should ensure file-level scope.
As such, shouldn’t it be safe to use module-level variables?

The answer is no, and it should be avoided at all costs.

why are module-level variables bad?

Node require will wrap your module with a function as follows:

~ $ node
> require('module').wrapper
[ '(function (exports, require, module, __filename, __dirname) { ',
'\n});' ]

Node assigns these arguments when it invokes the wrapper function.
This is what makes them look as if they are globals in the scope of your node module.
So it only seems we have globals in our module:
- exports is defined as a reference to module.exports prior to the call.
- require and module are defined by the invoking function.
- __filename and __dirname are the filename and folder of your current module.

caching - a double-edged sword

Node will then cache this module, so the next time you require the file you won’t actually get a fresh copy; you’ll be getting the same object as before.
This means you’ll be using the same module-level variables in multiple places, which means danger!

Here is a code example that illustrates the problem:

// moduletest.js
'use strict';
var x = 0;

module.exports = function (val) {
  console.log(`val : ${val}, x: ${x}`);
  if (val !== x && x !== 0) throw new Error(`failure!!! ${x} != ${val}`);
  x = val;
};

// main.js
const fn1 = require('./moduletest');
const fn2 = require('./moduletest');

setInterval(function () {
  fn1(1); // keeps writing 1 into the shared x
}, 100);

setInterval(function () {
  fn2(2); // keeps writing 2 into the very same x -> boom
}, 150);
I’m running two intervals calling the same function, with a small delay between each call. After a few runs we will notice that the two callers run over each other’s variable, which is exactly the module-global issue.

How to solve the globals problem?

There are multiple potential solutions to this issue; I'll present two of them.

Solution 1 - Functional

If we define a local scope inside our module, we can return a new set of variables for each run.
We will use the 'let' keyword, along with a factory function (not strictly needed, but nicer, with better scope control).

// testmodule.js
'use strict';
module.exports = function () {
  let x = 0;

  return function (val) {
    console.log(`val : ${val}, x: ${x}`);
    if (val !== x && x !== 0)
      throw new Error(`failure!!! ${x} != ${val}`);
    x = val;
  };
};

// main.js
const fn1 = require('./testmodule')(); // <--- calling the factory each time
const fn2 = require('./testmodule')();

// fn1 and fn2 are new functions with new variables, we busted the cache !! :)
// notice I also use let, to ensure block-scoped variables, and not hoisted vars.

Solution 2 - use Classes

We can just define a class, then create a new instance for each run.
This way each variable is a private member of that instance, ensuring proper encapsulation.

// testmoduleclass.js
'use strict';

class FunctionRunner {
  constructor() {
    this.x = 0;
  }

  fn(val) {
    console.log(`val : ${val}, x: ${this.x}`);
    if (val !== this.x && this.x !== 0) throw new Error(`failure!!! ${this.x} != ${val}`);
    this.x = val;
  }
}

module.exports = FunctionRunner;

// main.js
const FunctionRunner = require('./testmoduleclass.js');

const fn1 = new FunctionRunner();
const fn2 = new FunctionRunner();
// now each instance holds its own set of variables.
// no risk at all :)

For the complete code, have a look at this repository:

Async JS Clock

Waiting for things the JavaScript way...

JavaScript is filled with an abundance of libraries, frameworks, and acronyms that would make any conversation between two web developers sound like they are about to fly a spaceship to colonize Mars.
If you don't believe me, check out this funny post:
How it feels to learn JavaScript in 2016
Writing async JS is no different, and no less confusing.

In this post I'll try to bring clarity to asynchronous code in JavaScript. I'll focus on back-end node.js code, but a lot of it also applies to the front-end.
Let's first cover async JS mechanisms we have in Node:

  • Callbacks
  • Promises
  • Generators
  • Async / Await

I have not included things like observers, async.js and events, as they are not exactly core JS. For example, events rely on an async mechanism (such as callbacks). Many observer mechanisms are used mainly in front-end patterns today, and async.js is an external library which I stopped using. If you want to learn more, I suggest looking these up.


Callbacks

Callback functions are the most basic type of async code, and are common not only to JavaScript but to many other languages.
Callbacks are simple to understand: a callback is a function passed as an argument, to be called when the invoked function finishes.

function callMeWhenDone() {
  // called once the long operation has finished
}

function doLongProcessWithCallback(param1, param2, callback) {
  // some long operation...
  // finished - invoke the callback
  callback();
}

doLongProcessWithCallback("stringInput", 34, callMeWhenDone);

Very simple and straightforward. The main problem with callbacks is that when they are all chained together, as so many async operations are, you end up with loads of nested callbacks that are a nightmare to read, manage or follow. This is called callback hell.
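The nesting looks something like this (a sketch with hypothetical helpers, simulated here with setTimeout; each step needs the previous result, so the callbacks nest):

```javascript
// Hypothetical async helpers, simulated with setTimeout.
function getUser(id, cb) { setTimeout(() => cb(null, { id: id }), 10); }
function getOrders(user, cb) { setTimeout(() => cb(null, ['order1', 'order2']), 10); }

// Each step needs the previous result, so the callbacks nest.
// A few more levels of this becomes very hard to follow.
getUser(1, function (err, user) {
  if (err) return console.error(err);
  getOrders(user, function (err, orders) {
    if (err) return console.error(err);
    console.log(orders); // [ 'order1', 'order2' ]
  });
});
```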


Promises

Promises are a different way to handle asynchronous code. They allow easier management of async code and a cleaner code flow, use exceptions for errors, and have uniform signatures and easy composition, meaning we can chain promises together!

Promises are a bit like real-life promises. Imagine your boss promising you a promotion next quarter: you don't know whether you'll get it, and you'll only find out in the future. Promises have three states: pending, resolved (fulfilled) and rejected.

The Promise constructor takes an executor function, which receives two parameters, resolve and reject; these are called when the promise finishes, and the constructor returns a chainable promise object.

const doLongProcessWithPromise = new Promise(function (resolve, reject) {
  // some long operation...
  // call resolve(result) on success, or reject(err) on failure.
});

This might look more complex, and for very simple situations you might be right. But let's look at the chainable .then and .catch (for success and failure of a promise).

doLongProcessWithPromise
  .then(function (result) {
    // this is called after the promise resolves,
    // and the input parameter is the resolved value
  })
  .catch(function (err) {
    // this is called if the promise rejects
  });
As you can see this allows for chaining of promises, which creates sequential code. Sweet!
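A runnable sketch of that chaining: each .then receives the previous step's resolved value, so async steps read sequentially:

```javascript
// Each .then receives the value resolved by the previous step.
Promise.resolve(2)
  .then(function (n) { return n * 3; })
  .then(function (n) {
    console.log(n); // 6
  })
  .catch(function (err) { console.error(err); });
```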

Prior to ES6, promises were supported using external libraries such as Bluebird, Q, RSVP and many others. However, they are now also a part of the language itself, as promises are that important.

Promises deserve a post of their own so here is some more reading if you want to dive in and understand them better:


Generators

Generators are not designed to be an asynchronous mechanism per se. Their intent was to bring iterator-like functionality to the language; however, they are often used to create cleaner-looking, synchronous-like code. This builds on the fact that generators can be paused and resumed. Once again, generators deserve a post of their own, so I will add additional reading links at the bottom of this section.

Generators landed in ES6, and can be created by adding a '*' after the function keyword (or before the name, in class members):

function* generatorFunction() {
  yield 'a'; // once yield is called, the function is paused until next() is called again.
  yield 'b';
  yield 'c';
}

var g = generatorFunction();
console.log(; // output: a
console.log(; // output: b
console.log(; // output: c

The nice thing about generators is that inside a generator function you can pass control to another generator with yield*, or to a promise / value with yield:

function* generatorFunction() {
  const userInfo = yield getUserReturningPromise();
  const orderInfo = yield* getOrdersForUserGenerator(userInfo);
  return orderInfo;
}

// wraps the generator with a promise so it can now be used as a promise
// (Promise.coroutine here is Bluebird's, not native).
const generatorFunctionTurnedIntoPromise = Promise.coroutine(generatorFunction);

As you can see, the code becomes simpler. You can even wrap a generator into a promise easily with a coroutine (Bluebird has one, for example).
As you can see, promises and generators co-exist nicely!

Here is some further reading, if generators are still not clear:

Async / Await

Async/await is sadly not part of ES6, only ES7. The use of generators and promises, while nice, is not very clean: it requires a lot of wrapping, and the intent of generators was to provide iterators, not an async mechanism. This is where async / await shines, as it is a cleaner way to handle promises and asynchronous code in a sequential manner.

All you have to do is define an async function (with the async keyword), then await your promises, much like the generator yield but with less mess:

async function doProcess() {
  const userInfo = await getUserReturningPromise();
  const orderInfo = await getOrdersForUserPromise(userInfo);
  return orderInfo;
}
As you can see the code is clean, and didn't require any wrapping or generators. Adding just two keywords allows us to use promises everywhere (and promises tend to be faster than generators).

Further reading:

Lecture About JavaScript and ES6 Features

Last week we had so much fun teaching people about JavaScript's history and future, and showing off some of the new ES6 features.
Check out the images from the talk, and we've also attached the presentation.


Here is the presentation in PDF and in PPT:

JavaScript Presentation in PowerPoint Format
JavaScript Presentation in PDF Format

How To Hire Awesome Engineers?

I get approached daily by people wanting to hire me, on LinkedIn, email, and through various other means.
I really don't mind it. While I've been in software for quite a while now, I'm not claiming to be an expert on anything; but I do believe I understand the engineer / geek mindset, and having successfully hired many people and been hired many times, I think there are a few key elements and success factors to hiring engineers that so many companies and people miss completely. I'm writing this post in the hopes of helping others improve their hiring process for the good of all of us out there. :)

The Wrong Cold Call Email

Personally I don't mind getting cold call emails / messages. Everyone is doing their job and that is actually good! However, I know many people that hate this. I think the main problem here is that people don't invest much time when they're about to send an email to someone they do not know. Before you contact anyone you don't know, spend some time making sure you understand why you're contacting them. Here are two examples of horrible methods that will rarely get a response from anyone and if anything, might even get you tagged as spam and blocked:



It's fine that you have a template part; many times your message has some core information that doesn't change, so leave that in. But show the other person you know why you're contacting them. These guys didn't even bother; they are probably sending emails to everyone. The first person was impressed by my LinkedIn profile, which is great, but do I really believe her? What is she impressed by? My background, my Ember skills or my pretty blue eyes (they are really brown)? You get the picture. I had no doubt she hadn't even read my profile or found anything impressive on it - she is machine-gun emailing. And the second guy was doing so much copy and pasting he didn't even get my name right in the template. However, it's not always this clear. Sometimes I get emails with a lengthy paragraph or two about who they are and what their company is, etc. Why do I care? Why does anyone at all care about that?

The Right Cold Call Email - 80%+ Response

If you're going to address anyone, not just in regards to hiring but any cold call email, you need to spend time and construct it properly:

  • Pre-approach - This will take you some time. Use Google, LinkedIn and Facebook to research the person you're contacting. Look for their personal site. Read about the companies they've worked for. Try to get a mental image of who they are before you make contact. Make sure you actually do want to contact that person, and that they are the right person you should be talking to! Don't just copy and paste stuff. Spend time on what is called the pre-approach. It will pay dividends, ensure you're actually contacting someone you want to talk to, and show the other person you care about them. In the same way you're asking them to invest time, you're investing time too!
  • The opening paragraph - I always open my emails with highly personalized content. But not just any content - I try to find the reason and the basis to try to reach out to the other person on a personal basis! I try and connect on a personal and professional level. I try to understand the mindset and why what I'm offering suits their mindset and persona. For example, if I see someone that is an Ember fan I would talk about why I think Ember is great and why I'd love to contact them. If I'm looking to hire a developer I look at their Github and their stack overflow and see what they have been doing, do a little code review for them, and only then address them: "I checked out your Github, and I loved your angular auto-complete directive." I always close this paragraph with a clear indication of why I'm contacting them. People have little time, so be precise and direct. People will breeze through your email / message. If they understand what you want - great! If they feel it's spam they will mentally tag it and will not continue to read, but press delete instead.
  • The info paragraph - This is where you are allowed to provide copy and paste info. If you're looking for work, write about your background, provide links etc. If you're looking to hire, explain about yourself, your company and what you're looking for. If you're looking for customers explain what you can provide, what other customers you've worked with, etc. Include links and information but try to keep it short and sweet. Too lengthy becomes lecture-like, and people don't like that, they tend to skip it as spam.
  • End with why now and a call to action - I like to end these emails explaining why I'd like to talk now (currently hiring, just finished a job and looking for new things, currently in town for two weeks, etc). Don't make this generic; explain why the time-frame is real, as it creates urgency and authenticity. Again, don't make this up, really explain why!
  • Language - Long gone are the days of writing fancy emails with fancy language. They quickly sound too hyped up and too pretentious. I've been using a tip I got from one of my co-founders: imagine you're having a normal conversation with a friend over a beer or lunch, and write your email in the same language. Be modest and confident, stick to facts and talk at eye level. These emails tend to make people feel like they've received a message from a human and not an email-sending machine. However, please do be passionate and alive; explain why and show that you care. People tend to respond to that. They see and read your effort and tend to appreciate your energy.
  • Follow ups - If after 3-5 days you get no response, feel free to send a quick 2-3 lines follow up email. If that doesn't work try again after 3-5 days. Most of the time people have either gotten it in a spam folder, or just have been too busy with other things, don't take it personally. And if still no response, just let it go, you probably don't want to do business with them anyhow, as they aren't really mindful of you or your time. ;)

The Next Step - Initial Call

After you do get some interest in response to your email, I suggest you set up an initial call. This should be an intro call with someone that has some technical knowledge. Do not send out a test task right away! You want to understand the other person better, and you do not want to put them off.

During this call, let them talk about themselves: what they are doing, who they are, what they want to do. Ask open-ended questions and listen. It's the first time they are talking to you, so let them feel at ease. Get a sense of who they are; only after that should you spend a little time talking about you, your company and what you're looking for. After about a 30 minute introduction, try to do a 30-45 minute tech phone interview, just to make sure the person you're interviewing understands the basics.

The Joel on Software blog describes this call very well. Try to get in a few questions. Ask them to describe some algorithms. Talk about the technology the person uses, and try to probe. If you don't know the tech the candidate uses, try to get someone else on the call to probe about it. It's not critical whether the person is the right tech fit; it's important that they really understand the tech stack they use. If it's JavaScript, they should understand why == is not the same as ===, or what prototypal inheritance is, and be able to explain certain gotchas! Feel free to also ask them to do something simple, like write a function that reverses a string.
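For instance, these are the kinds of gotchas a JavaScript candidate should be able to explain, plus the classic warm-up exercise mentioned above:

```javascript
// == coerces types before comparing, while === compares type and value.
console.log(0 == '0');           // true  (string coerced to number)
console.log(0 === '0');          // false (different types)
console.log(null == undefined);  // true
console.log(null === undefined); // false

// The classic warm-up exercise: reverse a string.
function reverseString(s) {
  return s.split('').reverse().join('');
}
console.log(reverseString('hello')); // 'olleh'
```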

You can also ask them what's the difference between pass by value and pass by reference. I really like to say something that is totally wrong and see how they respond: "in C++ you cannot pass by reference." It shows how they respond to conflict. Try to get a sense of whether they understand the tools they use and the basics of software. If this goes well, set up an in-person interview. If not, move on. So many times you'll find that candidates that seem amazing end up being total duds, and vice versa. So try to get this right; the person on the other end will appreciate you for it too! It's also a good time to ask about salary expectations, to make sure you're on the same page, or you can wait until after the interview / test task. This depends on whether you have a set budget, and whether you mind paying market rates.

The In-Person Interview

The interview is a stressful time for most candidates: they want to impress, but it's not a natural environment for most of them. So start with some casual conversation. Try to find a quiet location and make the other person feel comfortable. It's not about stressing them; they are probably doing that job very well by themselves. It's about making them feel as natural as possible so they can really perform at their best, and so you can understand what their best is!

This shouldn't be a one-sided discussion, but it's not an informal chat either; you should have an agenda. Talk things through, watch their body language and listen. They should be talking as much as you are. Ask them about their work history. About projects that didn't work, about conflicts and how they resolved them. Get a sense of what their everyday job looks like. If they manage people, how do they deal with failures? What is their outlook on failures? Be sure to tell them about yourself, your company and the role.

Then you should get into the technical side of the interview. Do not ask questions with a light bulb moment! You might love them and think they are clever, but they prove nothing to be honest, except that the person can solve your riddle. It is the same with complex algorithm challenges with one solution. Again they provide you with very little insight into how good that person will be as an engineer, and that's the goal right? To find someone intelligent that can deliver results! So good questions are:

  • Ask them to complete a simple code exercise, not too trivial but not too complex. Chances are the other person won't write optimal code and it might contain bugs. This is a great chance. Tell the person there are bugs and wait to see if they find and fix them. Look to see whether they wrote code to handle invalid input. They should ask you about it, or at least notice it. Then ask them to optimize their code. Many times the code they write isn't DRY or optimal. Iterating together will show you a lot about the way that person handles not only code, but also whether they persevere or give up. You want people that don't give up! This is a great test for anyone you work with!
  • Ask them to design a system. Any OO design question would be good here (Deck of Cards, Elevator System, etc). These are good questions as they show how that person approaches development design, how they architect, and how they think in terms of objects. Again, this is a good question as you can ask them why they did it this way or that way and gain insight into their thinking.
  • Then I always try to find some challenging task I had and give it to the candidates. Something I have already solved but found complex to solve myself. This is another great exercise, as I've had people solve a complex problem faster than I did! They aren't supposed to come up with a complete solution, just the concept behind it, and maybe explain how it would work. This is very insightful!

I try to wrap up the interview by thanking the person and telling them I'll be in touch. Try to send an email to everyone who interviewed with you, showing that you value them and their time, and that you respect them regardless of the outcome.

The Test Task

This is something that is so misused, in my mind. On one hand it's a great indicator and a way to learn what it's like to work with someone. You can see if they understand instructions and how it would be to work with them. But so many companies abuse this. Never send out a test task right away; it shows you don't really care about the other person or value their time. I've done a few test tasks, some as a first point of contact, and while many did like my work, I never ended up taking work with companies that do this.

The proper way to do a test task is as the last step, to see how it would really be to work with that person. I like to find some block I really need, and to give that to the other person as a paid task! Yes, paid. Many times engineers will even agree to do it for free, but always offer a paid task. Just as you wouldn't go to the doctor and ask for a free checkup, just to make sure he is the right doctor, don't expect a good engineer to work for free, they have too many options. But with a paid task, I can see that the other people value my time and respect me, and I also do these very gladly. Yes there is a risk that the other person will write crappy code, but it's better to pay a few wrong people than hire the wrong person. It shows me people are motivated to complete the task and it's a great way to mini-test your working relationship.

Final Notes

I find that following these steps makes people feel at ease, not only with leaving a current position, but also with moving to work for you. It ensures you find great candidates and hire only the best people that mesh well with you and your team. It's not rocket science, but it is a craft, and so many companies have such bad hiring processes that they frustrate the very people they are trying to hire. So even if you don't follow my suggested steps, please be mindful of the other person, show respect for their time, and try to treat them the way you'd like to be treated!


What is the secret to successful remote software engineering?

My recent experience is that many companies insist on having engineers on site. When they hear "remote" or "not in the office," many people react negatively: they either assume it means cheap labor, or they believe people must come into the office every day to get good results. While I understand the bad experiences many companies have had, this is not always the case. Many companies are highly successful with distributed remote engineers, or even a fully remote team. There are highly talented engineers all over the world, yet I see companies again and again insisting on hiring only from the local ecosystem. It's true that certain skills exist mainly in Silicon Valley, Tel Aviv, NYC, and other places where people have successfully built large companies. However, a large percentage of the work can still be done elsewhere, where the talent is more loyal and costs less, without sacrificing the skill-set of the people. Hiring engineers in SF, NYC, or TLV is difficult and expensive, and since talented engineers there receive so many offers, retention becomes just as hard as recruiting.

I've been highly successful at finding and retaining talent worldwide, and I've been working remotely with companies for around 6 years, whether on my own start-up or providing development services to others. I'd like to share my thoughts on the secrets to making such an environment flourish.

My experience with remote teams

Today my time is split between the US, Israel and Eastern Europe. I've been working for the past 6 years or so in and with remote environments and teams. I've used remote teams to build a complex password manager running on multiple web and mobile platforms, and in 4 years it has reached over 70,000 paying customers. I've also been successful at building products for US companies with teams in Eastern Europe and getting results using the latest front-end and back-end technologies.

Working in a remote team as an individual

When I first started out, I had doubts: how does this remote thing even work, if at all? I'd heard of companies doing it, but until then I was used to waking up in the morning and going into an office. At the time I'd just started working with my new co-founder, whose company had sold over 3 million dollars' worth of mobile software products and had worked with over 20 developers from all around the world. I was fascinated by this. Slowly but surely I saw how he worked with them and why he was so successful at it. It actually took a lot of effort to get him to meet regularly (even though we lived 2 blocks apart), and we ended up meeting in person once every 3-4 weeks. We worked night and day and communicated via Skype, email, and other channels. We built an amazing product together and got some great offers for partnerships and acquisitions.

Working with a mixture of remote and local teams

For the past 2 years I've been working with US-based companies, where most of the development work is done either by me or by teams in Eastern Europe and the US. Having built products both ways, I know there is a clear difference between a remote single contributor and a remote team. Remote teams are very similar to regular teams, except that your co-developers, product managers, or product owners may be in other countries, and you must manage that process. There are many similarities between being in a remote team and being a remote single contributor; I won't go over the differences here, as I want to focus on the core elements common to making any remote environment work.

The secrets to making remote work

Finding good engineers is hard, no doubt. But using good engineers remotely requires the remote team, or the lead person on that team, to have additional skills to make it work.

  1. Be Proactive & Driven - This is the single most important quality for any remote engineer or remote team manager. When someone is sitting in the office, you can instantly see if they are disengaged or stuck. You can just tap them on the shoulder and ask: what's up, buddy? Anything I can do to help? What are you working on? In remote teams that is not possible, so you need to ensure the person on the other side, possibly in another time zone, is proactive. They will get on a call at strange local hours, they will email you when something isn't working, they will flag that they've finished their tasks and need more work, or even let you know that, contrary to the plan, they are finishing early. They will be the type of person who taps themselves on the shoulder and never needs to be chased. EVER! This type of person will make or break your remote / outsourced / out-of-office work environment.
  2. Resourceful - Resourcefulness goes hand in hand with being proactive. When working in a remote team, you will often face integration issues, and integration issues are the ones that eat up time. The back-end REST API that is supposed to return X returns Y. Break. Your mobile or front-end app cannot read or write the data and the work cannot continue, or can it? While a proactive person would raise the issue, a resourceful one would also find a creative way to continue working. For example, I will often create mock data or a mock server when I can't get the back-end to work. This can mean the difference between a 24-48 hour delay and zero downtime, or just 1-2 hours to fix a bug. A resourceful person will find an alternate path to continue their work, create a solution to the problem, or just move on to another task. Resourcefulness is highly important for any engineer, but in remote teams it is vital: it can be the difference between making the remote team work and concluding that remote teams do not work.
  3. Understand Product - Finding a good engineer who also understands product is very difficult, but when working remotely this is not just a nice-to-have, it becomes vital. Understanding product means thinking in terms of user experience: what is the easiest and most intuitive way to use the application? Many talented engineers can produce great code to a requirement or spec, but do not think in terms of what the user needs. When this happens in house, the product lead can course-correct very quickly: "Hey, I thought that would work, but on second thought let's scratch it and move this button over here." With remote teams these iterations take more time, so it's important to have someone you trust to adjust course themselves: someone who understands what the real, functional requirements are and builds the right usability for the user. Even if the result isn't perfect, the product person will have a much smaller adjustment to make. Understanding product is not simple, but once you find the right person who can do it, you're setting yourself up for success with remote teams and engineers.
  4. Result Oriented - Most people hate micromanagement, and while management sometimes does need to intervene, in a remote environment this becomes almost impossible. That is why, in remote environments, your engineer or lead must be result oriented: not focused on completing a feature or ticking off a "workload," but on making sure your business goals are achieved and that their part plays its role in the bigger picture. A result-oriented person will ask about your business deadlines, when things need to be done, and why. That person is not just counting the hours worked, but making sure they help you get to where you need to be.

TimeZone Issues

I've worked with teams in many time zones, and when I meet new customers they always raise that concern, so I'd like to use the end of this post to put time-zone worries to rest. Is having developers in different time zones a challenge? Sure it is! Does it mean it won't work? Not necessarily. If you've found an engineer or engineers with the skills I've listed, you won't suffer from time-zone issues. People with these skills are leaders: they will work hours that overlap with yours, answer emails at 2am their time, and jump on a call at strange hours because they are committed to your success. Besides, how often do you really need to talk to your engineer 8 hours a day? Most of the time you'd rather not, and if you do, you might be hurting your own performance at the same time...

I'm a big believer in remote teams; done right, they are a wonderful asset. The right team or person can build you amazing software that works very well. It's all a matter of understanding how to make it work and what to look for. I hope this helps, and feel free to contact me if you have any questions about creating a successful remote software team.

How to Remote Debug Node.js

Finding and fixing bugs is not always easy, especially if someone else wrote the code!

I know that engineers in general have NIH (Not Invented Here) syndrome, but I'm one who doesn't share that view. Technology is an enabler: it's not an end goal, it's there to provide a service (or at least it is most of the time).

As such, we must sometimes fix our own code, or other people's code, and that requires debugging. I've seen many people use console.log/logger/printf; heck, sometimes they even suggested I do it that way. But as much as I enjoy waterboarding myself, I'd much rather use a debugger whenever I can. Debugging a node.js project isn't complex; it just requires a little setup, after which you can debug a local app or even a remote production/staging/test environment.
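
As a small illustration of why a debugger beats scattered print statements: Node honors `debugger;` statements whenever a debugger is attached (and ignores them otherwise), so you can pin a pause exactly where you suspect the bug. The function below is a made-up example:

```javascript
// Hypothetical function we want to inspect. With a debugger attached,
// execution pauses at the `debugger;` statement on every iteration;
// without one, the statement is a no-op.
function computeTotal(items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    debugger; // inspect `total` and `items[i]` here instead of console.log-ing them
    total += items[i].price;
  }
  return total;
}

console.log(computeTotal([{ price: 2 }, { price: 3 }])); // prints 5
```

One pause point replaces a dozen console.log calls you'd otherwise have to add and later remove.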

The first step is to run node.js with the special debug flag and an optional port (note that on newer Node versions, --inspect replaces these legacy flags):

node --debug        # listen for a debugger on the default port (5858)
node --debug=4455   # listen on a custom port
node --debug-brk    # additionally break on the first line until a debugger attaches

If you're using gulp/nodemon etc., be sure to include those flags in a separate task and/or pass the relevant params to your node app. For example:

// Gulpfile: a nodemon task that starts the app with the debug port open
gulp.task('remote_debug', function () {
  return plugins.nodemon({
    script: 'server.js',
    nodeArgs: ['--harmony', '--debug=5577'],
    ext: 'js,html'
  });
});
Then launch your app directly or via the task; your node.js app will be running and allowing any debugger to connect to it.

You can use any node.js debugger you choose. I personally use PhpStorm/WebStorm. It's not a perfect product and has some issues, but I've had very successful debugging sessions with it, and I'll outline how to set it up.

First, install WebStorm or PhpStorm. Both IDEs are great and very similar, except that PhpStorm also lets you edit and work on PHP files, whereas WebStorm concentrates mainly on JS and web files.

After the install, launch the app and go to the plugin settings:

Go to File->Settings and in that screen click on the plugins menu item.


Then click on the "Install JetBrains plugin..." button, and in the new window either scroll down or use the top search box to find the NodeJS plugin.

Once the install is finished, you should have the NodeJS plugin installed, and you can go ahead and open your project's directory in the IDE. (File->Open Directory; obvious, I know, but still... ;) )

In the last step, we need to set up the remote-debug configuration for our node project.

Click on the Run -> Edit Configurations... menu.

Then click on the + button and select Node.js Remote Debug.


Then, in the main window, just set up the server address and port (this works for debugging either a remote or a local machine), and you're all set to start debugging your server!


Then click OK, select the configuration from the menu at the top right, and click on the little bug icon:


At this stage you're up and running. If you look at the debug tab at the bottom, you should see that you're connected, and you can put a breakpoint anywhere in your code and solve any bug you come across like a hero (at least in theory! :) ).

***** Important note *****

While PhpStorm/WebStorm is wonderful, I've had some performance issues while debugging. This comes down to a setting in the software, so to make sure you don't get frustrated waiting for the first breakpoint to hit, I suggest configuring PhpStorm/WebStorm as follows:

1) Click on Help -> "Find Action..." (Ctrl + Shift + A)
2) In the search box, type: Registry.
3) Scroll down (or start typing) to find js.debugger.v8.use.any.breakpoint and turn it off.

Happy Hunting!