REST Endpoints Design Pattern

In this post I'll present a suggested design pattern, and an implementation of it using a Node + Express REST API with ES6 classes. Personally, I hate writing the same code again and again: it violates the DRY principle, and I hate wasting my time and my customers' time. Having a C++ background, I love a nice class design.

In today's world of microservices and the web, REST endpoints have become the de-facto way to connect services and web applications. There are loads of examples of how to create REST endpoints and servers using Node.js and Express 4.0. SOAP, which was popular a while back, has given way to JSON. Newer technologies like GraphQL have not gone mainstream yet, so for now we are stuck with REST and JSON.

I haven't found a tutorial that discusses how to do this using ES6 classes and a good class design. This is what we will cover today.

Rather than building REST endpoints over and over, my concept is to have a base router implement base behavior for the REST endpoint, then have derived classes override such behavior if needed.

We create an abstract base class with all the default route handlers as static methods. These take a request, process it (most likely read / write / update / delete the DB) and return the result. Then setupRoutes will be the glue that binds the static methods to the actual routes. In addition, our constructor will take a route name, which will be the route path that gets processed.

Derived classes can then either disable certain routes, or override routes as needed, while keeping the base behaviour where that is what's wanted (for example when wrapping a service, or doing simple DB operations).
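Stripped of express for a moment, the core idea can be sketched with plain ES6 classes (the class and method names here are illustrative):

```javascript
// Base class supplies default behaviour as static methods.
class BaseHandler {
  static getSingle() { return 'base getSingle'; }
  static deleteSingle() { return 'base deleteSingle'; }
}

// Derived class disables one route and inherits the rest untouched.
class ReadOnlyHandler extends BaseHandler {
  static deleteSingle() { return 'not implemented'; }
}

console.log(ReadOnlyHandler.getSingle());    // 'base getSingle' - inherited
console.log(ReadOnlyHandler.deleteSingle()); // 'not implemented' - overridden
```

Static methods are inherited through `extends`, which is what lets the derived class pick and choose what to change.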


Now let's implement this in JavaScript using Node.js, Express and ES6 classes. I'm going to implement this example using MongoDB and Mongoose, but you can use any other DB or service you wish. The Mongoose code in this sample is pretty meaningless; it's just there for the sake of the example.

Create a new project folder and run npm init inside it.
Then install express and the required libs: npm install express body-parser cors bluebird mongoose change-case require-dir --save

Then I'll create the server.js main file. (We won't discuss this in detail, as it's mostly a standard node/express server. The one line that's important to note is require('./routes/index')(server,db); as this will create all the routes for our application.)

// Far from perfect, but a good base example for a server.
// Should also change console.log to some logger.
// server.js
'use strict';

const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const process = require('process');
const mongoose = require('mongoose');
const server = express();

const db = mongoose.connect('mongodb://localhost');

server.use(bodyParser.json());
server.use(bodyParser.urlencoded({ extended: true }));

// simple request logger
server.use((req, res, next) => {
 console.log(`${req.method} request on ${req.url}`);
 return next();
});

// support for cross origin requests
server.use(cors({
 origin: '*',
 credentials: true,
 methods: ['GET', 'PUT', 'DELETE', 'POST', 'OPTIONS'],
 allowedHeaders: ['X-Requested-With', 'Content-Type'],
}));

// health check path
server.get('/status', (req, res, next) => {
 res.status(200).send('ok');
});

require('./routes/index')(server, db); // <===== includes all routes

// catch-all for undefined routes
server.get('*', (req, res, next) => {
 res.status(404).send('route not defined');
 return next();
});

// error handler - must be registered last
server.use((err, req, res, next) => {
 console.log(`Internal Server Error ${err}`);
 res.status(500).send({ message: 'Internal Server Error' });
});

const PORT = 8080;
server.listen(PORT, () => console.log(`REST server listening on ${PORT}`));

function cleanup() {
 // do server cleanup.
 process.exit(0);
}

// listen for TERM signal e.g. kill
process.on('SIGTERM', cleanup);

// listen for INT signal e.g. Ctrl-C
process.on('SIGINT', cleanup);

// export for testing
module.exports = server;

I'm including a single route file, which will build up all our routes. So let's look into that index.js file, to see what's going on in there:

// routes/index.js
'use strict';

const routeHandlers = require('require-dir')('./route-handlers');
const changeCase = require('change-case');

const BASEURL = '/api/';

function setupRoutes(server, db) {
 // Initialize all routes by iterating the keys of the require-dir
 Object.keys(routeHandlers).forEach((routeName) => {
  // connect routes to the server base url
  const newRouteHandler = new routeHandlers[routeName](db);
  server.use(`${BASEURL}${changeCase.paramCase(routeName)}`, newRouteHandler.setupRoutes());
 });
}

module.exports = setupRoutes;

I like to use automatic glue code, rather than re-typing things or building a static array. This way the system detects new routes and adds them automatically, just by adding a file to a folder.

  1. I'm using require-dir, which will include all route handlers. I wanted each route to handle its own paths, and not the global paths (I like encapsulation). So as a design decision I made the filename the subroute path.
  2. I then create an instance of the route handler class, passing it a reference to the db (so it can do its thing).
  3. setupRoutes() returns a router, which I then connect to our server. I'm building on server.use of the express router to bind routes to the base URL. If you adopt this implementation you can always use your own structure.
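To make the filename-to-path mapping concrete, here is roughly what change-case's paramCase does to a route-handler filename (a hand-rolled approximation for illustration; use the library in real code):

```javascript
// Convert a camelCase / snake_case filename into a kebab-case URL segment.
function paramCase(name) {
  return name
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2') // split camelCase word boundaries
    .replace(/[_\s]+/g, '-')                // normalize underscores and spaces
    .toLowerCase();
}

console.log(paramCase('companyOrders')); // 'company-orders'
console.log(paramCase('companies'));     // 'companies'
```

So a file named companyOrders.js would end up served under /api/company-orders.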

Next let's look at base-route-handler, which is the base for all route handlers. It will contain most of the code for any endpoint:

// routes/base-route-handler.js
'use strict';

const express = require('express');
const coWrapper = require('../utils/expressCoWrapper');

class BaseRouteHandler {
 constructor(collectionName, db) {
  this.db = db;
  this.router = new express.Router();
  this.collectionName = collectionName;
  this.collection = this.db[this.collectionName];
  this.setupMiddleware();
 }

 static validateOkResponse(res, foundItems) {
  if (!foundItems || !foundItems.length) {
   res.status(404).send('item not found');
   return false;
  }
  return true;
 }

 setupMiddleware() {
  // attach any middleware you might need on a per-route basis; can be overridden in subclasses
 }

 static* getSingle(req, res, next) {
  try {
   const foundItems = yield this.collection.find({ id: req.params.id });
   if (BaseRouteHandler.validateOkResponse(res, foundItems)) res.json(foundItems[0]);
  } catch (err) {
   res.status(500).send('Internal Error');
   throw err;
  }
 }

 static* putSingle(req, res, next) {
  try {
   const result = yield this.collection.insert([req.body]);
   if (BaseRouteHandler.validateOkResponse(res, result)) res.json(result[0]);
  } catch (err) {
   res.status(500).send('Internal Error');
   throw err;
  }
 }

 static* deleteSingle(req, res, next) {
  try {
   const result = yield this.collection.remove({ id: req.params.id });
   res.json(result);
  } catch (err) {
   res.status(500).send('Internal Error');
   throw err;
  }
 }

 static* getMultiple(req, res, next) {
  try {
   res.connection.setTimeout(0); // disable server timeout - this may take a while
   const result = yield this.collection.find({});
   res.json(result);
  } catch (err) {
   res.status(500).send('Internal Error');
   throw err;
  }
 }

 static* postMultiple(req, res, next) {
  try {
   const result = yield this.collection.update([req.body]);
   res.json(result);
  } catch (err) {
   res.status(500).send('Internal Error');
   throw err;
  }
 }

 // eslint-disable-next-line require-yield
 static* notImplemented(req, res, next) {
  res.status(501).send('Not implemented');
 }

 setupRoutes() {
  const self = this;
  // bind each static handler to this instance (so `this.collection` works inside it),
  // via self.constructor so that overrides in derived classes are picked up
  this.router.get('/', coWrapper(self.constructor.getMultiple.bind(self)));
  this.router.get('/:id', coWrapper(self.constructor.getSingle.bind(self)));
  this.router.put('/:id', coWrapper(self.constructor.putSingle.bind(self)));
  this.router.delete('/:id', coWrapper(self.constructor.deleteSingle.bind(self)));
  this.router.post('/', coWrapper(self.constructor.postMultiple.bind(self)));
  return this.router;
 }
}

module.exports = BaseRouteHandler;

I wanted to use generators, as I like their async / await-like structure. So I wrote a co-wrapper file that will handle errors and the generator routes correctly, including wrapping with a promise. I don't wish to go into depth explaining it, as it's not the point of this post, but you can see the file in the git repo.
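For the curious, here's a minimal sketch of what such a co-wrapper could look like. This is an illustrative approximation, not the repo's actual file: it drives a generator by resolving each yielded value as a promise, and funnels unhandled errors to express's next:

```javascript
function coWrapper(generatorFn) {
  return function (req, res, next) {
    const gen = generatorFn(req, res, next);

    function step(result) {
      if (result.done) return Promise.resolve(result.value);
      return Promise.resolve(result.value).then(
        (val) => step(gen.next(val)),  // resume the generator with the resolved value
        (err) => step(gen.throw(err))  // throw rejections back into the generator
      );
    }

    return step(gen.next()).catch(next); // anything uncaught reaches express
  };
}

// demo with a fake req / res:
const fakeRes = { json: (v) => console.log('sent:', v) };
coWrapper(function* (req, res) {
  const v = yield Promise.resolve(41);
  res.json(v + 1);
})({}, fakeRes, (err) => console.log('error:', err));
// prints: sent: 42
```

Libraries like co or Bluebird's Promise.coroutine do the same job with more edge cases covered.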

Next we create the base constructor, which takes the route name and a db reference. It creates the binding to a collection / table / service / anything else you want. It also calls the middleware setup; if you wish to bind your route-based middleware, you can override this function in derived classes.

Next I go through and create static route handlers for each route. As you can see, the route handlers are pretty simple: take JSON in, perform some DB operation and return the result. In other cases you might have more complex behaviour. The nice thing is the base creates a default behaviour, but by overriding the static methods in derived classes we can do whatever we wish.

Once the base class is ready, we can create a real route that will do something!
Let's create a 'route-handlers' folder inside the 'routes' folder and add a file called companies.js.

'use strict';

const BaseRouteHandler = require('../base-route-handler');

class CompaniesRouter extends BaseRouteHandler {
  constructor(db) {
    super('companies', db);
  }

  static* putSingle(req, res, next) {
    yield* super.notImplemented(req, res, next);
  }

  static* deleteSingle(req, res, next) {
    yield* super.notImplemented(req, res, next);
  }

  static* postSingle(req, res, next) {
    // do some code to send an email to the admin, to ask to create multiple new companies
  }
}

module.exports = CompaniesRouter;

First, look at how easy it was to create a new route. We didn't even need to write this much code: we could just create the constructor and be done with it, if we wanted the same behaviour as the base class.

I did want to show, though, how easy it is to override the code without much work. The base class provided us with a default implementation for notImplemented, which makes it easy to disable routes.

Even adding a route is easy: just add a handler implementation of your own. This makes it easy to test just the functionality, without re-writing the same code over and over.

That's all for now!

Hope you enjoyed this, or found it useful.


Concurrency - Watch out for globals in Node.js modules!

Globals, or global variables, are known to be risky.
However, using the 'var' keyword should ensure file-level definition.
As such, shouldn't it be safe to use module-level variables?

The answer is no, and it should be avoided at all costs.

Why are module-level variables bad?

Node require will wrap your module with a function as follows:

~ $ node
> require('module').wrapper
[ '(function (exports, require, module, __filename, __dirname) { ',
'\n});' ]

Node assigns these arguments when it invokes the wrapper function.
This is what makes them look as if they were globals in the scope of your node module.
So it seems we have globals in our module; however:
- exports is defined as a reference to module.exports.
- require and module are defined by the executing wrapper function.
- __filename and __dirname are the filename and folder of your current module.

Caching - a double-edged sword

Node will then cache this module, so the next time you require the file you won't get a fresh copy; you'll be getting the same object as before.
This means you'll be using the same module-global variables in multiple places, which means danger!

Here is a code example that illustrates the problem:

// moduletest.js
'use strict';
var x = 0;

module.exports = function (val) {
  console.log(`val : ${val}, x: ${x}`);
  if (val !== x && x !== 0) throw new Error(`failure!!! ${x} != ${val}`);
  x = val;
};

// main.js
const fn1 = require('./moduletest');
const fn2 = require('./moduletest');
setInterval(function () {
  fn1(1);
}, 100);

setInterval(function () {
  fn2(2);
}, 150);
I’m running here two calls to the same function, with a small delay between each call, after a few runs we will notice that the function will run over each others variables. Which is an example of a module global issue.

How to avoid module globals?

There are multiple potential solutions to this issue; I'll present two.

Solution 1 - Functional

If we define a local scope inside our module, we can return a new set of variables for each run.
We'll use the 'let' keyword, along with a scoped function (not strictly needed, but it gives nicer and tighter scope control).

// testmodule.js
'use strict';

module.exports = function () {
  let x = 0;

  return function (val) {
    console.log(`val : ${val}, x: ${x}`);
    if (val !== x && x !== 0)
      throw new Error(`failure!!! ${x} != ${val}`);
    x = val;
  };
};

// main.js
const fn1 = require('./testmodule')(); // <--- calling the factory each time
const fn2 = require('./testmodule')();

// fn1 and fn2 are new functions with new variables, we busted the cache !! :)
// notice I also use let, to ensure scoped variables, and not hoisted vars.

Solution 2 - use Classes

We can just define a class, then create a new instance for each run.
This way each variable is a private member of that instance, ensuring proper encapsulation.

// testmoduleclass.js
'use strict';

class FunctionRunner {
  constructor() {
    this.x = 0;
  }

  fn(val) {
    console.log(`val : ${val}, x: ${this.x}`);
    if (val !== this.x && this.x !== 0) throw new Error(`failure!!! ${this.x} != ${val}`);
    this.x = val;
  }
}

module.exports = FunctionRunner;

// main.js
const FunctionRunner = require('./testmoduleclass.js');

const fn1 = new FunctionRunner();
const fn2 = new FunctionRunner();
// now each instance holds its own set of variables.
// no risk at all :)

For the complete code, have a look at this repository:


Async JS - Waiting for things the JavaScript way...

JavaScript is filled with an abundance of libraries, frameworks, and acronyms that would make any conversation between two web developers sound like they are about to fly a spaceship to colonize Mars.
If you don't believe me, check out this funny post:
How it feels to learn JavaScript in 2016
Writing async JS is no different, and no less confusing.

In this post I'll try to bring some clarity to asynchronous code in JavaScript. I'll focus on back-end Node.js code, but a lot of it applies to the front-end too.
Let's first cover async JS mechanisms we have in Node:

  • Callbacks
  • Promises
  • Generators
  • Async / Await

I have not included things like observers, async.js and events, as they are not exactly the core of JS. For example, events rely on an async mechanism (such as callback). Many of the observer mechanisms are used mainly in front-end patterns today, and async.js is an external library which I stopped using. However if you want to learn more I suggest you look these up.


Callbacks

Callback functions are the most basic kind of async code, and are common not only to JavaScript but to many other languages.
Callbacks are simple to understand: they are plain functions passed as arguments, that are called when the called function finishes.
JavaScript provides out-of-the-box functions that are async, and in Node.js there is a lot more support for non-blocking async callbacks.

function callMeWhenDone() {
  console.log('done!');
}

setTimeout(callMeWhenDone, 1000); // will be called when 1 second passes
window.addEventListener('click', callMeWhenDone, false); // will be called on window click

Node is non-blocking IO by design: rather than have a process stand idle waiting for IO operations to finish (read from disk, make an HTTP call, etc.), it allows other code to run, and once the operation finishes it calls your callback function:

const fs = require('fs');

function doLongProcessWithCallback(param1, param2, callback) {
  fs.readFile('/path/to/file', (err, fileInput) => {
    // process file
    // finished - report back
    callback(err, fileInput);
  });
}

doLongProcessWithCallback('stringInput', 34, callMeWhenDone);

Very simple and straightforward. The main problem with callbacks is that when many async operations are chained together, you end up with loads of nested callbacks, which is a nightmare to read, manage or follow. This is called callback hell.
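A contrived sketch of how the nesting piles up (the step function here is made up for illustration - each call needs the previous result before it can start):

```javascript
// fakes an async operation that takes the previous result and adds to it
function step(name, input, callback) {
  setTimeout(() => callback(null, input + 1), 10);
}

step('first', 0, (err, r1) => {
  step('second', r1, (err2, r2) => {
    step('third', r2, (err3, r3) => {
      console.log(r3); // 3 - and we're three indents deep after only three steps
    });
  });
});
```

Add error checks at every level, and this gets ugly fast.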


Promises

JavaScript promises are a different way to handle asynchronous code. They allow easier management of async code, yield an easier code flow, use exceptions for errors, and have uniform signatures and easy composition, meaning we can chain promises together!

They are a bit like real-life promises. Imagine your boss promising you a promotion next quarter: you don't know if you'll get it or not, and you'll only know in the future. A promise is in one of three states: pending, resolved (fulfilled) or rejected.

The Promise constructor takes an executor function, which receives two parameters, resolve and reject; these are called when the promise finishes. The constructor returns a chain-able promise object.

const doLongProcessWithPromise = new Promise(function (resolve, reject) {
  // do the long process, then call:
  // resolve(result) on success, or reject(err) on failure
});

This might look more complex, and for very simple situations you might be right. But look at the chain-able .then and .catch (for success and failure of a promise):

doLongProcessWithPromise
  .then(function (result) {
    // this is called after the promise resolves,
    // and the input parameter is the resolved value
    return doAnotherLongProcessWithPromise(result); // returning a promise chains it
  })
  .then(function (anotherResult) {
    // runs after the second promise resolves
  })
  .catch(function (err) {
    // any rejection along the chain lands here
  });

As you can see this allows for chaining of promises, which creates sequential code. Sweet!

Prior to ES6, promises were supported via external libraries such as Bluebird, Q, RSVP and many others. They are now part of the language itself, as promises are that important.

Promises deserve a post of their own, so here is some more reading if you want to dive in and understand them better:


Generators

Generators were not designed to be an asynchronous mechanism per se. Their intent was to bring iterator-like functionality into the language; however, they are often used to create cleaner-looking, synchronous-like code. This builds on the fact that generators can be paused and resumed. Once again, generators deserve a post of their own, so I'll add additional reading links at the bottom of this section.

Generators landed in ES6, and can be created by adding a '*' after the function keyword (or before, in class members):

function* generatorFunction() {
  yield 'a'; // once yield is called, the function is paused until next() is called again
  yield 'b';
  yield 'c';
}

var g = generatorFunction();
console.log(g.next().value); // output: a
console.log(g.next().value); // output: b
console.log(g.next().value); // output: c

The nice thing about generators is that inside a generator function you can pass control to another generator with yield*, or to a promise / value with yield:

const Promise = require('bluebird');

function* generatorFunction() {
  const userInfo = yield getUserReturningPromise();
  const orderInfo = yield* getOrdersForUserGenerator(userInfo);
  return orderInfo;
}

// wraps the generator with a promise, so it can now be used as a promise
// (Promise.coroutine comes from Bluebird)
const generatorFunctionTurnedIntoPromise = Promise.coroutine(generatorFunction);

As you can see, the code becomes simpler. You can even wrap a generator into a promise easily with a co-routine (Bluebird has one, for example).
Promises and generators co-exist nicely!

Here is some further reading, if generators are still not clear:

Async / Await

Sadly, async/await is not part of ES6; it only arrived later, in ES2017. The use of generators with promises, while nice, is not very clean: it requires a lot of wrapping, and the intent of generators was to provide iterators, not an async mechanism. This is where async / await shines, as it is a cleaner way to handle promises and asynchronous code in a sequential manner.

All you have to do is define an async function (with the async keyword), then await your promises, much like the generator yield, but with less mess:

async function doProcess() {
  const userInfo = await getUserReturningPromise();
  const orderInfo = await getOrdersForUserPromise(userInfo);
  return orderInfo;
}
As you can see the code is clean, and didn't require any wrapping or generators. Adding just two keywords lets us use promises everywhere (and promises tend to be faster than generators).
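Error handling falls out naturally too: a rejected promise surfaces inside the async function as a plain exception, caught with an ordinary try / catch (getUserReturningPromise below is a stand-in stub for any promise-returning call):

```javascript
// stub standing in for a real async lookup
function getUserReturningPromise() {
  return Promise.reject(new Error('user not found'));
}

async function doProcessSafely() {
  try {
    return await getUserReturningPromise();
  } catch (err) {
    // the rejection lands here, just like a synchronous throw would
    return `handled: ${err.message}`;
  }
}

doProcessSafely().then((result) => console.log(result)); // 'handled: user not found'
```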

Further reading:

Lecture About JavaScript and ES6 Features

Last week we had so much fun teaching people about JavaScript's history and future, and showing off some of the new ES6 features.
Check out the images from the talk; we've also attached the presentation.


Here is the presentation in PDF and in PPT:

JavaScript Presentation in PowerPoint Format
JavaScript Presentation in PDF Format

How To Hire Awesome Engineers? (Good vs Bad Cold Calls)

I get approached daily by people wanting to hire me, on LinkedIn, email, and through various other means.
I really don't mind it. While I've been in software for quite a while now, I'm not claiming to be an expert on anything. I do believe I understand the engineer / geek mindset, though, and having successfully hired many people and been hired many times, I think there are a few key success factors to hiring engineers that so many companies and people miss completely. I'm writing this post in the hopes of helping others improve their hiring process, for the good of all of us out there. :)

The Wrong Cold Call Email

Personally I don't mind getting cold call emails / messages. Everyone is doing their job and that is actually good! However, I know many people that hate this. I think the main problem here is that people don't invest much time when they're about to send an email to someone they do not know. Before you contact anyone you don't know, spend some time making sure you understand why you're contacting them. Here are two examples of horrible methods that will rarely get a response from anyone and if anything, might even get you tagged as spam and blocked:



It's fine to have a template part; often your message has some core information that doesn't change, so leave that in. But show the other person you know why you're contacting them. These people didn't even bother; they are probably sending emails to everyone. The first person was impressed by my LinkedIn profile, which is great, but do I really believe her? What is she impressed by? My background, my Ember skills or my pretty blue eyes (they are really brown)? You get the picture. I had no doubt she hadn't even read my profile or found anything impressive on it - she is machine-gun emailing. And the second guy was doing so much copy and pasting he didn't even get my name right in the template. However, it's not always this clear. Sometimes I get emails with one or two lengthy paragraphs talking about who they are and what their company is, etc. Why do I care? Why does anyone care about that?

The Right Cold Call Email - 80%+ Response

If you're going to address anyone, not just in regards to hiring but any cold call email, you need to spend time and construct it properly:

  • Pre-approach - This will take you some time. Use Google, LinkedIn, Facebook, and research the person you're contacting. Look for their personal site. Read about the companies they've worked for. Try to get a mental image of who they are before you make contact. Make sure you actually do want to contact that person, and that they are the right person to be talking to! Don't just copy and paste stuff. Spend time on what is called the pre-approach. It will pay dividends, ensure you're actually contacting someone you want to talk to, and show the other person you care about them. In the same way you're asking them to invest time, you're investing time too!
  • The opening paragraph - I always open my emails with highly personalized content. But not just any content: I try to find a genuine basis for reaching out to the other person. I try to connect on a personal and professional level, and to understand why what I'm offering suits their mindset and persona. For example, if I see someone that is an Ember fan, I would talk about why I think Ember is great and why I'd love to contact them. If I'm looking to hire a developer, I look at their GitHub and their Stack Overflow to see what they have been doing, do a little code review for them, and only then address them: "I checked out your GitHub, and I loved your angular auto-complete directive." I always close this paragraph with a clear indication of why I'm contacting them. People have little time, so be precise and direct. People will breeze through your email / message. If they understand what you want - great! If they feel it's spam, they will mentally tag it, stop reading, and press delete instead.
  • The info paragraph - This is where you're allowed to use copy-and-paste info. If you're looking for work, write about your background, provide links, etc. If you're looking to hire, explain about yourself, your company and what you're looking for. If you're looking for customers, explain what you can provide, what other customers you've worked with, etc. Include links and information, but keep it short and sweet. Too lengthy becomes lecture-like, and people don't like that; they tend to skip it as spam.
  • End with why now and a call to action - I like to end these emails explaining why I'd like to connect now (currently hiring, just finished a job and looking for a new thing, currently in town for two weeks, etc). Don't make this generic; explain why the time-frame is real, as it creates urgency and authenticity. And don't make it up, really explain why!
  • Language - Long gone are the days of writing fancy emails with fancy language. They very quickly sound too hyped-up and pretentious. I've been using a tip I got from one of my co-founders: imagine you're having a normal conversation with a friend over a beer or lunch, and write your email in the same language. Be modest and confident, stick to facts and talk at eye level. These emails tend to make people feel like they've received a message from a human and not an email-sending machine. Do be passionate and alive, though; explain why, and show that you care. People tend to respond to that. They see and read your effort and appreciate your energy.
  • Follow ups - If after 3-5 days you get no response, feel free to send a quick 2-3 line follow-up email. If that doesn't work, try again after another 3-5 days. Most of the time the email has either landed in a spam folder, or people have just been too busy with other things; don't take it personally. If there's still no response, just let it go. You probably don't want to do business with them anyhow, as they aren't really mindful of you or your time. ;)

The Next Step - Initial Call

Once you get some interest in your email, I suggest you set up an initial call. This should be an intro call with someone that has some technical knowledge. Do not send out a test task right away! You want to understand the other person better, and you do not want to put them off.

During this call, let them talk about themselves: what they are doing, who they are, what they want to do. Ask open-ended questions and listen. It's the first time they are talking to you, so let them feel at ease. Get a sense of who they are; only after that should you spend a little time talking about yourself, your company and what you're looking for. After about a 30 minute introduction, try to do a 30-45 minute tech phone interview, just to make sure the person you're interviewing understands the basics.

The Joel on Software blog describes this call very well. Try to get in a few questions. Ask them to describe some algorithms. Talk about the technology the person uses, and probe. If you don't know the tech the candidate uses, try to get someone else on the call who can probe about it. It's not critical that the person is the exact tech fit; it's important that they really understand the tech stack they use. If it's JavaScript, they should understand why == is not the same as ===, or what prototypal inheritance is, and be able to explain certain gotchas! Feel free to also ask them to do something simple, like writing a function that reverses a string.
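For example, the == vs === question and the string-reversal warm-up boil down to this:

```javascript
// == coerces types before comparing; === compares type and value
console.log(1 == '1');           // true  - the string is coerced to a number
console.log(1 === '1');          // false - different types
console.log(null == undefined);  // true  - a classic gotcha
console.log(null === undefined); // false

// the warm-up exercise:
function reverseString(s) {
  return s.split('').reverse().join('');
}
console.log(reverseString('interview')); // 'weivretni'
```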

You can also ask them about the difference between pass by value and pass by reference. I really like to say something that is totally wrong and see how they respond, e.g. "in C++ you cannot pass by reference". It shows how they respond to conflict. Try to get a sense of whether they understand the tools they use and the basics of software. If this goes well, set up an in-person interview. If not, move on. So many times you'll find that candidates that seem amazing end up being total duds, and vice versa. So try to get this right; the person on the other end will appreciate you for it too! It's also a good time to ask about salary expectations, to make sure you're on the same page, or you can wait until after the interview / test task. This depends on whether you have a set budget, and whether you mind paying market rates.

The In-Person Interview

The interview is a stressful time for most candidates: they want to impress, but it's not a natural environment for most of them. So start again with some casual conversation. Try to find a quiet location and make the other person feel comfortable. It's not about stressing them; they are probably doing that job very well by themselves. It's about making them feel as natural as possible so they can really perform at their best, and so you can understand what their best is!

This shouldn't be a one-sided discussion or an informal chat; you should have an agenda. Talk things through, watch their body language and listen. They should be talking as much as you are. Ask them about their work history, about projects that didn't work, about conflicts and how they resolved them. Get a sense of what their everyday job looks like. If they manage people, how do they deal with failures? What is their outlook on failures? Be sure to tell them about yourself, your company and the role.

Then you should get into the technical side of the interview. Do not ask questions with a light-bulb moment! You might love them and think they are clever, but honestly they prove nothing, except that the person can solve your riddle. The same goes for complex algorithm challenges with a single solution. They provide very little insight into how good that person will be as an engineer, and that's the goal, right? To find someone intelligent who can deliver results! So good questions are:

  • Ask them to complete a simple code exercise, not too trivial but not too complex. Chances are the other person won't write optimal code and it might contain bugs. This is a great chance: tell the person there are bugs and wait to see if they find and fix them. Look to see whether they wrote code to handle invalid input; they should ask you about it, or at least notice it. Then ask them to optimize their code. Many times the code they write isn't DRY or optimal. Iterating together will show you a lot about how that person handles not only code, but also whether they persevere or give up. You want people that don't give up! This is a great test for anyone you work with!
  • Ask them to design a system. Any OO design question would be good here (deck of cards, elevator system, etc.). These are good questions as they show how that person approaches design, how they architect and how they think in terms of objects. Again, this is a good question because you can ask them why they did it this way or that way, and gain insight into their thinking.
  • Then I always try to find some challenging task I've had and give it to the candidates - something I have already solved but found complex to solve myself. This is another great exercise, as I've had people solve a complex problem faster than I did! They aren't supposed to come up with a complete solution, just the concept behind it, and maybe explain how it would work. This is very insightful!

I try to wrap up the interview by thanking the person and telling them I'll be in touch. Try to send emails to everyone that interviewed with you, showing that you value them and their time, and that you respect them regardless of the outcome.

The Test Task

This is something that is so misused, in my mind. On one hand it's a great indicator and a way to learn what it is like to work with someone: you can see if they understand instructions and how it would be to work with them. But so many companies abuse this. Never send out a test task right away; it shows you don't really care about the other person or value their time. I've done a few test tasks myself, some as a first point of contact, and while many companies liked my work, I never ended up working with the ones that do this.

The proper way to do a test task is as the last step, to see how it would really be to work with that person. I like to find some block of work I really need, and give that to the other person as a paid task! Yes, paid. Many engineers will even agree to do it for free, but always offer payment. Just as you wouldn't ask a doctor for a free checkup just to make sure they're the right doctor, don't expect a good engineer to work for free; they have too many options. With a paid task, I can see that the other side values my time and respects me, and as a candidate I do these very gladly. Yes, there is a risk the other person will write crappy code, but it's better to pay a few wrong people than to hire the wrong person. It shows me people are motivated to complete the task, and it's a great way to mini-test your working relationship.

Final Notes

I find that following these steps makes people feel at ease not only with leaving their current position to work for you, but also with you. It ensures you find great candidates and hire only the best people who mesh well with you and your team. It's not rocket science, but it is a craft, and so many companies have such bad hiring processes that they frustrate the very people they are trying to hire. So even if you don't follow my suggested steps, please be mindful of the other person, show respect for their time, and treat them the way you'd like to be treated!


What is the secret to successful remote software engineering?

My recent experience is that many companies insist on having engineers on site. When they hear "remote" or "not in the office", many people have a very negative perception: they either believe it's cheap labor, or they believe people must come into the office each day in order to produce good results. While I do understand the bad experiences many companies have had, this is not always the case. Many are highly successful with distributed remote engineers, or even an entirely remote team. There are highly talented engineers all over the world, yet I see again and again companies that insist on hiring only from the local eco-system. It's true that certain skills exist mainly in Silicon Valley / Tel Aviv / NYC and other places where people have successfully built large companies. However, a large percentage of the work can still be done elsewhere, where the talent is more loyal and costs less, without sacrificing the skill-set of the people. It's very difficult and expensive to hire engineers in SF, NYC or TLV, and with so many offers for talented engineers there, retention becomes just as hard as recruiting.

I've been highly successful at finding and retaining talent world-wide, and I've been working remotely with companies for around 6 years, either on my own start-up or providing development services to others. I'd like to share my thoughts on the secrets to making such an environment flourish.

My experience with remote teams

Today my time is split between the US, Israel and Eastern Europe. I've been working for the past 6 years or so in and with remote environments and teams. I've used remote teams to build a complex password manager running on multiple web and mobile platforms, and in 4 years it has reached over 70,000 paying customers. I've also been successful at building products for US companies with teams in Eastern Europe and getting results using the latest front-end and back-end technologies.

Working in a remote team as an individual

When I first started out, I had doubts: how does this remote thing even work, if at all? While I'd heard of companies doing it, up until then I was used to waking up in the morning and going into an office. At the time I'd just started working with my new co-founder, whose company had sold over 3 million dollars of mobile software products and had worked with over 20 developers from all around the world. I was fascinated by this. Slowly but surely I saw the way he worked with them and why he was so successful at it. It actually took me a lot of effort to get him to start meeting regularly (we lived 2 blocks apart), and we ended up meeting in person once every 3-4 weeks. We worked night and day and communicated via Skype, email and other methods. We built an amazing product together and got some great offers for partnerships and acquisitions.

Working with a mixture of remote and local teams

For the past 2 years I've been working with US-based companies, where most of my development work is done either by me, or by using teams of people in Eastern Europe & the US. I've built products and I know that there is a clear difference between a remote single contributor and a remote team. Remote teams are very similar to regular teams, except you might have other people in other countries as your co-developers, product managers, or product owners, and you must manage this process. There are many similarities to being in a remote team and being a remote single contributor. I am not going to go over the differences as I want to focus on the core elements of working with remote teams / single contributors and what is common to making any remote environment work.

The secrets to making remote work

Finding good engineers is hard, no doubt. But working with good engineers remotely requires the remote team, or the lead person on that team, to have additional skills in order to make it work.

  1. Be Proactive & Driven - This is the single most important quality for any remote engineer or remote team manager. When someone is sitting in the office, you can instantly see if they are disengaged or stuck; you can just tap them on the shoulder and ask: what's up? Is there anything I can do to help? What are you working on? In remote teams that is not possible, so you need to ensure the person on the other side, possibly in another time-zone, is proactive. They will get on a call at strange local hours, email you that something isn't working, flag that they finished their tasks and need more work, or even let you know that, against the original plan, they are finishing early. This is the type of person who taps themselves on the shoulder and never needs anyone to chase them. EVER! This type of person will make or break your remote / outsourced / out-of-office work environment.
  2. Resourceful - Resourcefulness goes hand in hand with being proactive. When working in a remote team, you will often face integration issues, and integration issues are the ones that eat up time. The back-end REST API that is supposed to return X returns Y. Break. Your mobile or front-end app cannot read or write the data and the work cannot continue - or can it? While the proactive person would raise the issue, a resourceful one would also find a creative way to continue working. For example, I will often create mock data or a mock server when I can't get the back-end to work. This can mean the difference between a 24-48 hour delay and zero down time, or just 1-2 hours to fix a bug. A resourceful person will find an alternate path to continue their work, create a solution to a problem, or just move to another task. Resourcefulness is highly important for any engineer, but in remote teams it is vital, as it can be the difference between making the remote team work and concluding that remote teams do not work.
  3. Understand Product - Finding a good engineer who also understands product is very difficult, but when working remotely this goes from nice-to-have to vital. Understanding product means thinking in terms of user experience and the easiest, most intuitive way to use the application. Many talented engineers can produce great code to a requirement or spec, but do not think in terms of what the user needs. When this happens in house, the product lead can very quickly make a course adjustment: "Hey, I thought that would work, but on second thought let's scratch it and move this button over here." With remote teams these iterations take more time, so it's important to have someone you trust to adjust course themselves - someone who understands what the real, functional requirements are and builds the right usability for the user. Even if it's not perfect, the product person will then have a much smaller adjustment to make. Understanding product is not simple, but once you find the right person who can do it, you're setting yourself up for success with remote teams and engineers.
  4. Result Oriented - Most people hate micromanagement, and while sometimes management does need to intervene, in a remote environment this becomes almost impossible. That is why in remote environments your engineer or lead must be result oriented: not focused on completing a feature or ticking off a "workload", but on making sure your business goals are achieved and that their part is playing its role in the global scheme of things. A result-oriented person will ask about your business deadlines, when things need to be done by, and why. That person is not just counting the hours worked, but making sure they are helping you get to where you need to be.

TimeZone Issues

I've worked with teams in many time-zones, and when I meet new customers they always raise that concern. I would like to use the end of this post to crush any time-zone concerns people have. Is having developers in different time-zones a challenge? Sure it is! Does it mean it won't work? Not necessarily. If you've found good engineers with the skills I've listed, you won't suffer from time-zone issues. These types of people are leaders. They will work hours that overlap with yours, answer emails at 2am their time, and jump on calls at strange hours because they are committed to your success. Furthermore, how often do you really need to talk to your engineer 8 hours a day? Most of the time you'd rather not, because if you are, you might be hurting your own performance at the same time...

I'm a big believer in remote teams; when done right they are a wonderful asset. The right team or person can build you amazing software that works very well. It's all a matter of understanding how to make it work and what to look for. I hope this helps, and feel free to contact me if you have any questions about creating a successful remote software team.

How to Remote Debug Node.js

Finding and fixing bugs is not always easy, especially if someone else wrote the code! Now imagine you need to remote debug Node.js code with bugs in code you didn't write.

I know that engineers in general have NIH (not-invented-here) syndrome, but I'm one who doesn't share that view. Technology is an enabler: it's not an end goal, it's there to provide a service (or at least that is how it is most of the time).

As such, we must sometimes make fixes to our code, or to other people's code, and that requires debugging. I've seen many people use console.log/logger/printf - heck, sometimes they even suggested that I do it that way. But as much as I enjoy waterboarding myself, I'd much rather use a debugger whenever I can. Debugging a Node.js project isn't complex; it just requires a little bit of setup, after which you can debug a local app or even a remote production/staging/test environment.
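A small aside on why a debugger beats console.log: you can pause inside a loop and inspect all the state at once, instead of adding one print per variable. The function below is a made-up illustration; the `debugger;` statement is a breakpoint when a debugger is attached and a harmless no-op otherwise.

```javascript
// contrived example: pause inside the loop to inspect each item
function computeTotal(items) {
  var total = 0;
  items.forEach(function (item) {
    debugger; // pauses here when running under a debugger; no-op otherwise
    total += item.price;
  });
  return total;
}

console.log(computeTotal([{ price: 2 }, { price: 3 }])); // prints 5
```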

The first step is to run Node.js with the special debug flag and an optional port:

node --debug          # start the debugger on the default port (5858)
node --debug=4455     # start the debugger on port 4455
node --debug-brk      # start the debugger and break on the first line of the app

If you're using gulp/nodemon etc, be sure to include those flags in a separate task and/or pass the relevant params to your node app.

// Nodemon task that passes the debug flags to the node process
gulp.task('remote_debug', function () {
  return plugins.nodemon({
    script: 'server.js',
    nodeArgs: ['--harmony', '--debug=5577'],
    ext: 'js,html'
  });
});
Then you can launch your app directly or via the task, and your Node.js app will be running, allowing any debugger to connect to it.

You can use any Node.js debugger you choose. I personally use PhpStorm/WebStorm. While it's not a perfect product and has some issues, I've had very successful debugging sessions with it, and I'll try to outline how to set that up.

First install WebStorm or PhpStorm. Both IDEs are great and very similar, except PhpStorm also lets you edit and work on PHP files, whereas WebStorm concentrates mainly on JS and web files.

After the install, launch the app and go into the plugin settings:

Go to File->Settings and in that screen click on the plugins menu item.


Then click on the "Install JetBrains plugin..." button, and in the new window either scroll down or search the top search box for the NodeJS plugin.

Once the install is finished, you should have the NodeJS plugin installed, and you can go ahead and open your project's directory in the IDE. (File->Open Directory - obvious, I know, but still... ;) )

The last step is to configure a remote debug configuration for our node project.

Click on the Run -> Edit Configurations... menu.

Then click on the + button and select Node.js Remote Debug.


Then in the main window set up the server address and port (this can be used to debug a remote machine or a local one), and you're all set up to start debugging your server!


Then click OK, select the configuration from the top right-hand menu, and click on the little bug icon button:


At this stage you're up and running. If you look at the bottom debug tab you should see you're connected and then you can put a breakpoint anywhere in your code and solve any bug you come across like a hero (at least in theory! :) ).

***** Important note *****

While PhpStorm/WebStorm is wonderful, I've had some issues with debugging performance. The issue relates to a setting in the software, so to make sure you don't get frustrated waiting for the first breakpoint to hit, I suggest you configure PhpStorm/WebStorm as follows:

1) Click on Help -> "Find Action..." (Ctrl + Shift + A)
2) In the search box type: Registry.
3) Start typing (or scroll down) to find js.debugger.v8.use.any.breakpoint and turn it off.

Happy Hunting!

Authenticating with Ember-Simple-Auth

Recently I've set up an Ember project and I needed authentication. There is an excellent library / CLI plug-in called ember-simple-auth. While the guides are great, the most recent versions of Ember (1.13 / 2.0) don't play nicely with the latest ember-simple-auth.

Ember-Simple-Auth provides a special branch called jjAbrams (PR 602). While it is an awesome library, getting it to work can be somewhat tricky, as not everything is as documented and it requires some tweaks here and there. I'll outline what I did, in the hope of saving time for others, and of sparing other devs from banging their heads against their keyboards, posting issues on git repositories or IRC channels, or reading the simple-auth source code (as I did) to understand why things don't work the way they are supposed to. Especially if you're running against a custom server, like I did.

Here are the steps to get it working.



  • First create an ember app using ember-cli
    ember new app-name
  • Then follow the instructions on ember-simple-auth for getting this special build. It's rather simple, but you still need to make sure you don't have any of the ember-simple-auth / simple-auth packages in your package.json / bower.json, and also delete them from your node_modules / bower_components directories. Here is the pull request for it (make sure to read it, as it explains how to get set up).
  • Next create an application route (if you don't have one). Open the command prompt and type:
    ember g route application
  • Then add the application route mixin to the route:

// app/routes/application.js
import Ember from 'ember';
import ApplicationRouteMixin from 'ember-simple-auth/mixins/application-route-mixin';
export default Ember.Route.extend(ApplicationRouteMixin, {
  // ... rest of application route code
});
  • Next add a login page, a login controller, and a protected page. (Notice I'm not using the LoginMixin as in many Ember-Auth examples, since it's stated to be deprecated, and I've also chosen not to use the Unauthenticated mixin, because I'd rather leave the login page always accessible.)
    ember g route login 
    ember g controller login
    ember g route protected
  • Edit the login controller file:

    // app/controllers/login.js
    import Ember from 'ember';
    export default Ember.Controller.extend({
      session: Ember.inject.service('session'),
      actions: {
        // called when the login button is pressed;
        // authenticates against the ember-simple-auth infrastructure
        authenticate: function() {
          this.set('errorText', null); // in case you want to display an error
          var credentials = {
            identification: this.get('model.identification'),
            password: this.get('model.password')
          };
          var that = this;
          // first invalidate, so we clear past data
          new Ember.RSVP.Promise(function (resolve) {
            resolve(that.get('session.isAuthenticated') ? that.get('session').invalidate() : true);
          })
          .then(function () {
            return that.get('session').authenticate('authenticator:custom', credentials);
          })
          .then(function authSuccess() {
            // this hook is not called automatically and seems to have changed,
            // so trigger your post-login action / transition yourself here
          }, function authFailed() {
            that.set('errorText', 'Login Failed!');
          });
        }
      }
    });

    and the protected page route:

    // app/routes/protected.js
    import Ember from 'ember';
    import AuthenticatedRouteMixin from 'ember-simple-auth/mixins/authenticated-route-mixin';
    export default Ember.Route.extend(AuthenticatedRouteMixin, {
    });

    then edit the login template (I have some bootstrap styling here):

    {{! app/templates/login.hbs }}
    <form {{action 'authenticate' on='submit'}}>
      {{input value=model.identification class="form-control" placeholder='Enter Login'}}
      {{input value=model.password class="form-control" placeholder='Enter Password' type='password'}}
      <button type="submit" class="btn btn-primary">Login</button>
      {{#if errorText}}<div class="alert alert-danger">{{errorText}}</div>{{/if}}
    </form>

  • Next create an authenticators directory and a custom.js (it authenticates login requests and restores the session on refresh).
    Notice I use the same name everywhere (access_token): whatever you resolve with when authenticating is stored in the session.

    // app/authenticators/custom.js
    import Ember from 'ember';
    import Base from 'ember-simple-auth/authenticators/base';
    export default Base.extend({
      // restores the session after a page refresh
      restore: function(data) {
        return new Ember.RSVP.Promise(function(resolve, reject) {
          if (!Ember.isEmpty(data.access_token)) {
            resolve(data);
          } else {
            reject();
          }
        });
      },
      authenticate: function(credentials) {
        return new Ember.RSVP.Promise(function (resolve, reject) {
          // make the request to authenticate the user at the /authpath endpoint
          Ember.$.ajax({
            url: '/authpath',
            type: 'POST',
            contentType: 'application/json',
            dataType: 'json',
            data: JSON.stringify({ "email": credentials.identification, "password": credentials.password })
          }).then(function (response) {
            // resolving saves this object to the session
            resolve({ access_token: response.token, account_id: response.account_id });
          }, function (xhr, status, error) {
            reject(error);
          });
        });
      },
      invalidate: function(data) {
        // does nothing for now; add code here to clear the session and send a logout to the server
        return Ember.RSVP.resolve(true);
      }
    });
  • Then add an initializer for the authenticator:
    ember g initializer custom-authenticator

    and the file itself:

    // app/initializers/custom-authenticator.js
    import CustomAuthenticator from '../authenticators/custom';
    export function initialize(container, application) {
      application.register('authenticator:custom', CustomAuthenticator);
    }
    export default {
      name: 'custom-authenticator',
      before: 'ember-simple-auth',
      initialize: initialize
    };
  • And now create an authorizers directory and a custom authorizer.
    Notice that the info is accessible via this.get('session.authenticated') (not as in other documentation!).

    // app/authorizers/custom.js
    import Ember from 'ember';
    import Base from 'ember-simple-auth/authorizers/base';
    export default Base.extend({
      // adds an authorization header; in this case I'm using basic HTTP authentication
      authorize: function(addHeaderFunction, requestOptions) {
        if (this.get('session.isAuthenticated') && !Ember.isEmpty(this.get('session.authenticated.access_token'))) {
          var basicHTTPToken = btoa(this.get('session.authenticated.account_id') + ":" + this.get('session.authenticated.access_token'));
          addHeaderFunction('Authorization', 'Basic ' + basicHTTPToken);
        }
      }
    });
  • Then add an initializer for the custom authorizer:
    ember g initializer custom-authorizer

    // app/initializers/custom-authorizer.js
    import CustomAuthorizer from '../authorizers/custom';
    export function initialize(container, application) {
      application.register('authorizer:custom', CustomAuthorizer);
    }
    export default {
      name:       'custom-authorizer',
      before:     'ember-simple-auth',
      initialize: initialize
    };

Here is a Git Repo with a working example

Hopefully this will get you up and running! Enjoy :)

Choosing an MVC framework: Why I picked Ember & why it was an awesome decision

When I started working with Originate on a new project, I was given the freedom to choose my own stack. The CEO's requirements were simple: create an MVP to replace the current resource-planning process - in other words, a web app to replace their Excel-based workflow. I told the CEO: don't worry, I'll get something out to you in two to three weeks.
So there I was, staring at my computer, having absolute freedom to choose my tools. I should've been happy.

I wasn’t. I was stressed. I really wanted to impress them, to use the latest technologies.

I knew JavaScript very well, and I’d done a few node projects, so I picked that. I didn’t have too much relational data, so I picked Mongo.

But which front-end framework should I use to make a snappy single-page app?

The Decision process:

I really wanted to pick the best one, but as getting hands-on with a framework can take anywhere from a few days to weeks, I just didn’t have that time to spare.

So which one should I pick? Angular? Knockout? Backbone? Ember? Meteor? I was in a bit of a pickle.

I’ve used many technologies over the years but I hadn’t really experimented with Angular, Knockout, or Ember. I’ve used Backbone, but it’s just too messy for my taste and not really structured.

So I started reading about all the technologies.

The instant 'No's:

  • Meteor seemed amazing - really smart people building a great concept. The idea of front-end and back-end in the same system is incredible; I'd actually written similar code in a past project to move models from the back-end to the front-end, so I instantly connected with the concept. However, it was still very early days for Meteor and no big production environment was running it at the time, so it was out. Checking on them recently, they seem to have made great strides and are doing very well. That's one technology I'd personally keep my eye on.
  • Backbone – I never liked Backbone that much and I'm not a huge Marionette fan either. I just felt it’s too messy for my taste.

Now there was the tossup between the ones that were left:

  • Angular – the most popular one at the time, with lots of support due to being a Google product. However, I'd read complaints about it using its own idioms and engineering concepts that one must learn in order to use it. Still, it was the most popular and most commonly used MVC framework, so I was not going to dismiss it easily.
  • Ember – As I was reading about Ember and doing my background investigation, I learnt that out of all the frameworks Ember is the most opinionated: using it requires following a very particular structure, whereas Angular is more configuration based. I also learnt that Ember is stable and used by quite a few serious companies.

My decision wasn’t simple, but ultimately I picked Ember for two main reasons:

  • Ember and Angular seemed to me the most stable, and the ones that have some serious customers using them in production.
  • Ember’s convention over configuration. Ember forces you to follow a particular structure, which can be a very good thing as it solves many of the engineering decisions for you. Plus, no one has to be the “code police” that makes sure you put files in particular directories, or call classes with certain names. I liked that, and since I’m not only the engineer but also the project manager and team lead, and needed to put more developers from various time-zones on the project, I thought that would be awesome. Boy, was I right.

Getting Started With Ember:

Now, once I'd chosen my tools it was time to start getting some hands on experience in building my MVP.

Naturally I started with the official Ember tutorial. However, their tutorial was based on using Ember-Data. Ember-Data is an ORM-like layer that is separate from Ember: it provides the front-end with models and all related operations, and uses dependency injection to inject an object called the "store" into all parts of an Ember app. While the concept is wonderful, at the time I was starting out Ember-Data was separate from Ember, with a stated plan for it to soon make its way into the main branch. I decided it was too big a risk to rely on it, and developed my own data layer using my own store. At the time of writing it still hasn't made its way in, and it has gone through many breaking changes, so this decision was spot on.
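For flavor, here is a minimal sketch of what a hand-rolled store can look like: an identity map keyed by type and id. The names and shape are purely illustrative - the real data layer was far more involved - but the core idea of caching at most one object per record is the same.

```javascript
// a tiny identity-map store: at most one cached object per type + id
function Store() {
  this.cache = {};
}

// push a record into the cache (e.g. after an ajax load)
Store.prototype.push = function (type, record) {
  this.cache[type] = this.cache[type] || {};
  this.cache[type][record.id] = record;
  return record;
};

// look a record up synchronously; a fuller store would fall back to the server here
Store.prototype.find = function (type, id) {
  var byType = this.cache[type] || {};
  return byType[id] || null;
};

var store = new Store();
store.push('user', { id: 1, name: 'Dory' });
console.log(store.find('user', 1).name); // prints Dory
```

Because every part of the app asks the store rather than holding its own copy, an update to a record is immediately visible everywhere - which is the main thing an identity map buys you.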

The first week with Ember was painful. Convention over configuration means there is the Ember way, and only the Ember way. Trying to do things in a non-Ember manner just creates frustration and will get you nowhere, so the best approach is to understand what Ember expects and do it that way. For example, say you'd like to update certain elements, or hide and show them: while in traditional jQuery you'd just bind to an event, the Ember way is to add an action to the element.

Ember a couple of months in:

Once you do understand how Ember works, and what to do, you become super productive and you can crank out code in a fraction of the time it would otherwise take you.

Ember has a very particular way for all objects in the system to interact. It gives you an excellent router, where each route has a very particular point at which it loads the model (either a stand-alone model object, a POJO, or an Ember-Data model); a controller object that wraps your model and can provide computed fields, connect models together, handle actions and more; and the view, which wraps the templates, written in HTMLBars (formerly Handlebars). Here is an anatomy diagram:


Ember really gives you a framework to do great stuff in a very easy-to-use manner (once you understand that manner as Ember perceives it); I found myself completing whole pages and complex UI components at lightning-fast speed, with everything re-usable elsewhere later on. Once you're up to speed with Ember, you're going to love it (or at least I hope so).

Ember Decision Epilogue:

My Ember journey has been very exciting. It started with Ember 1.3, and now I'm migrating to Ember 1.10 with Ember-cli. I have written a complete data layer for my Ember app, and the application today is 150,000 lines of code, used by a significant company in production. I have gone through performance optimization and made a few interesting observations about Ember:

  • Us geeks tend to love technologies for technological reasons, but the reason I love Ember is a business one: convention over configuration. You can move new engineers on and off a project, and as long as they understand Ember they can become productive very quickly.
  • During my work on this project, I assisted the company under a new consulting agreement. Having the customer use Ember made it very easy for me to detect design flaws and do isolated re-factoring, rather than a complete throw-away!
  • The company had another codebase that everyone said was a throw-away. The code was indeed poor in many ways, but since it was Ember it was easy to salvage: we could re-factor and shift snippets around to make it usable again, saving the company months of work.
  • I have hired a few engineers to work on this project, from multiple places around the world. I was amazed at how fast they have picked up Ember, and even though they are junior, Ember has allowed them to be highly productive from day one.
  • The Ember-Cli build system is an awesome addition that takes care of so many things; no other MVC framework provides anything like it.

I'm finding more and more companies using Ember, and I'm a big advocate of convention over configuration along with Ember. True, there is the downside that Ember has forced its users into constant re-factors of their code-bases, but I suspect this is still better than Angular 2.0's throw-everything-away approach! When using the latest and greatest you should expect things to change, and overall Ember gives developers and product owners a lot of bang for their buck. I think it's the most complete option and my #1 choice for CRUD-like, front-end MVC applications.


Thanks for reading and I hope you've found my experience helpful.
Dory :)