
The Broken State of Software Developer Mentorship

Software developer mentorship today is in a sorry state. Hiring junior software developers and mentoring them into mid-level and senior roles is an important goal for most, if not all, companies. In an ideal world, we would see a lot more mentorship happening in practice. However, we do not live in an ideal world.

In the real world, many software developers get limited mentorship and support in their workplaces. The sad reality is that most junior software developers must figure things out on their own. Today I'll strive to identify the main constraints that hinder software developer mentorship and how to tackle them.

Why is Software Developer Mentorship Broken?

For one, junior, mid, and senior software developers must complete tasks each sprint, and that is how they move up. This creates time constraints and limits the time available for mentorship and learning. Today we'll discuss this conflict. We'll also explore the current state of software developer mentorship, why it's broken, and how to fix it.

As you read this post, most junior software developers are out there figuring things out on their own. Either that, or they get very limited software developer mentorship and support in their workplaces.

Mentoring junior developers is hardly a novel idea, and I wish a lot more of it were happening. Companies want to invest in developer mentorship. Sadly, there are many reasons preventing that from happening in practice.
There are some very serious and common issues. While some have to do with the individual software developer’s personality, most can be fixed.

I'm very confident that it’s often the companies themselves that are preventing software developer mentorship from happening. Here are some common constraints or limitations that hinder successful mentorship from taking place:

  • Company culture – Short-term business goals
  • No manager – The senior software developer predicament
  • Developer personality/ego

Not only does this create frustration, it is also a great way to create more technical debt.

Company Culture – Short-term Business Goals

The biggest hurdle in software developer mentoring is the company itself: its culture and its support for setting up a mentoring framework. While most places, when hiring, will ask senior software engineers how open they are to mentoring junior developers, or to tell them more about how they mentor juniors, very few companies actually create a supportive culture of learning. Most companies don't intend to hinder mentorship; the main reason for its absence is short-term business goals.

The Software Developer Mentorship Killers

Scrum vs. Kanban

  • Scrum & Velocity - Many companies today work using Scrum, which means each developer needs to complete certain tasks or work items in a given sprint. While this allows companies to align business objectives in dedicated sprints and measure their team velocity, it also creates a downside. It directs junior, mid, and senior software engineers to focus on their short-term sprint-related tickets and on getting things working. They are measured against that, and the focus shifts to completing the task rather than on longer-term goals. This means that they push teaching (helping other developers) and learning (re-writing your code) aside to a lower priority. Even when teaching and learning do happen, I've found many junior or mid developers to be less open to feedback or re-factoring of their code. They're more set on wrapping up their work and moving on to the next priority.
  • Kanban with Sizing - While it's more apparent in Scrum, even in Kanban when using measured tickets, the software developer is focused on the estimated time for that ticket. Little time is left to guide someone else as a result. Yet again, focus shifts towards delivery rather than towards learning and re-factoring.


How to Fix It

The biggest and hardest fix for this situation is a shift in company culture and thinking. The company has to compromise and set up a framework for software developer mentorship success.

There are a few ways to achieve this. They all require change, compromise, and a focus on long-term individual software engineer success, not just on features and instant gains.

I’m not saying that the company shouldn't produce, be agile, and iterate fast, but it must also invest in its developers.

Starting is Easy!

The good thing is that you don't need to introduce this company-wide from day one. You can try it with a small team or even a sub-team. Try to experiment with even two developers, a junior and a senior. It's very simple to begin. Here are a few tangible ways to do it:

  1. Mentor Tickets - Create specialized mentorship ticket tasks. Just as there are "story", "epic", "bug", "spike", and "task" items, I suggest we also add "mentor" type tickets. These are work items that we expect a junior developer to take on and learn together with a senior developer. They should be real tickets of work that needs to be done. However, these tickets would be different in many ways:

    • These are tickets for two people. So, if you're using Scrum, you can have the developers split them into mentor and mentee tickets.
    • The time estimates of these tickets are flexible and might require double or triple the time of a regular ticket.
    • You can easily expect these tickets to include one or more re-factors during the ticket.
    • Seniors can break these tickets down further by assigning "homework", such as tasks to investigate and learn.
    • The delivery of these tickets is just as important as the learning process and the growth achieved through these tickets.
      Later, I'll discuss how to approach a mentorship ticket, what to do in one, and how to conduct a successful mentorship.
  2. 10% Improvement Ticket/Time - For those who prefer a more casual approach, allocate 10% of each sprint to improvement/growth time, as a ticket, a task, or simply blocked-out time. This allocation allows developers to learn, read, and mentor each other. It might include things like reading about a specific design pattern and trying to implement it along with a senior member. Using this approach, you essentially allow your team to decide how to spend this time and with whom.
  3. Good Team Mixture – Another factor is your team mixture. If you have five senior software developers and one mid, that might not be the ideal setup for mentorship success. Such a team would be wonderful for working together on tight timeframes and urgent, complex tasks. However, to facilitate learning, you want the biggest knowledge gaps you can get. Have a ratio of at least two seniors to one junior software developer, and avoid too many mids (ideally any mids) in this group.

    Ideally remove mids from the mixture to start with

    Mids tend to feel they are just like seniors and might resist mentorship. At the same time, their relative lack of experience or knowledge might misguide juniors. The best thing to do is to have a large enough group of seniors that can still produce results, plus juniors who are eager to learn and do some joint tickets together. This will create a harmonious team mixture while still allowing you to get business results from that team.

  4. Start Small - Grow Big - Any huge company culture shift is almost always doomed to fail. People tend to be resistant to change, especially when said change has yet to happen. The great thing about this is that you can start with a group as small as two people, a junior and a senior, and assign them 1-2 mentor tickets. See how they manage through those. You might find that the senior developer is still free to do his work while the junior is happy and excited to learn and grow.
  5. Either way, experiment -  Remember that during this experiment you must remove all burden of deadlines from the equation. You can reintroduce them as the mentor and mentee become more comfortable with these types of tickets. Also, you want them to get "wins" on the board to feel comfortable and confident. If the experiment goes well, bring more team members on to do mentorship tickets.

No Manager - The Senior Software Developer Predicament

The second issue I've come across is when the senior software engineer and junior software engineer are on the same team as peers. While the junior might respect and admire the senior, and while the senior has every desire to teach and mentor the junior, there still might be a gap.

As a peer, the senior cannot ask the junior to re-factor, re-do, or follow his guidance. All of this is subject to the deadlines, desires, and wishes of the junior developer.

Understand your junior engineer better

The junior may sometimes ask the senior for help, but only in cases when he ends up completely blocked or unable to perform. While that might seem like a good way to do the mentoring, it’s not.

You don't want your junior software engineer going wild and only asking for help when he becomes blocked. I faced this issue myself many times. In these cases, I would see bad code and would try to help a young developer, only to find that it's easier just to take that bad code and re-write it myself.

Understand your senior engineer better

It's all because people and companies align their goals toward delivering features and completing tasks rather than learning, creating good code, and avoiding technical debt. As a senior software engineer, I can say that when you're someone's direct manager, it's simple to mentor. It’s much harder when you’re just their peer.

How to Fix It

First off, it's important to get company support. Just as I outlined before, if dedicated software developer mentorship tickets and a proper framework existed, perhaps some of the focus would shift.

It can be the goal of a junior software engineer to learn and produce good code, and that can align with the company's goals. I cannot imagine how much companies pay later for bad code.

Company support is crucial

If the company sets up targets for junior developers to learn and produce better code, allocating the time and resources to do so, the output of the team will be better. The company can follow the suggestion of trying out mentorship tickets, which focus on learning with outcomes, or it can set up any other framework it feels works for it.

I believe that focusing on aligning personal goals to company goals can help resolve this issue as well.

Developer Personality/Ego

Software developers come in all sorts of personalities.
It's common to come across a very intelligent, bright, and promising junior developer.
It's also very common to come across confident developers with 1-2 years of experience who have gained some traction. While these are just examples, I've seen many types of developers who are closed off to feedback, especially in feature/velocity-focused companies.

I've met various developers who just want to produce something. Their ability to accept feedback is limited, as is their willingness to hear other opinions.

This creates a problem, as you have people producing hard-to-read, hard-to-maintain code. Since the organization is chasing features/velocity, no one stops to say that's not how things are supposed to run.

If the person noticing this is also a peer of the junior or a mid developer, he has limited authoritative power, aside from "telling the boss".

How to Fix It

Fixing the organization and its approach to code would address this issue as well. You should empower your developers to think about code quality. Dedicate time to improvement, or choose mentorship tickets or other routes to improving quality.

Helping everyone on the team work together towards quality and becoming better software engineers and developers will communicate to everyone that you measure how you produce, not just what you produce.

Review PR to understand openness

Review PRs, see how open people are to feedback, and have those who are resistant to change work with people they appreciate on the team. You can craft your culture to help people move past their mindsets.

It's not an easy task, and it requires out-of-the-box thinking. If, after all you do, someone still doesn't react positively to feedback, I suggest re-thinking his place on your team. Negative and uncooperative people are toxic to your whole team. I'm not suggesting letting someone go right away; I'm saying your team must constantly strive to be better.

People must be open and have the right framework to learn or develop. Just like a business must grow and expand to survive, so must truly great engineers do the same.

Final Thoughts on Software Developer Mentorship

I tried to outline some issues I've seen in the workplace across multiple companies. Personally, I enjoy mentoring.

Sense of satisfaction

There is satisfaction in working with a junior or mid-level software engineer or developer and helping him simplify complex code.

A sense of mastery and accomplishment is highly important for people to feel good and be creative, as well as to keep developer retention high. Good developers won't leave a company that has personal development as part of its corporate strategy.

In future posts

In one of my future posts, I'll talk in more depth about how to handle mentorship tickets and improvement time. I'll try to help senior engineers and managers to think in terms of how to get work done and mentor at the same time. We will discuss techniques to implement and ideas for how to approach software developer mentorship on the micro level.


Nomad Ergonomics. Traveling Workstation. The Ideal Setup.

Nomad ergonomics addresses how to set up a proper workstation on the road.
If you're a software engineer, designer, salesperson, entrepreneur, or any other road warrior or digital nomad, you're on the road a lot and need to work from multiple places.
That often means slouching over a laptop at a coffee shop or hotel desk.
This creates stress on your whole body, from your neck and back to your wrists.
As such, you need to think about nomad ergonomics.

The Proper Nomad Ergonomics

Understanding the recommended seating and setup when working as a software developer is very important.
There are many good resources and videos that explain how to set up properly.
Here is an example of how to set up correctly:

ergonomics for software developer

If you wish to read more, here is a suggestion from Microsoft on how to set up your office.

However here are the basics:

  • Monitor at eye level
  • Armrest support at 90 degrees
  • Height-adjustable chair
  • Legs at 90-110 degrees

All very easily done at home or an office, but what about when traveling?

The search for the travel workstation setup

The item I miss the most from my home setup is not the overall ergonomics but the keyboard.
At home and in the office I use the Microsoft Sculpt keyboard.
It's a wonderful keyboard, and once you get used to its natural layout, it's very comfortable and much softer on the wrists.
So I decided that when traveling I'd like the same setup.

Initially, I had a pretty decent-sized backpack, so I tried to travel with the whole keyboard in it. That didn't fare well.
The keyboard soon started misbehaving, with keys pressed constantly, draining the batteries of both the laptop and the keyboard.
In the end the keyboard itself failed, sending me back to the nomad ergonomics drawing board.

The Keyboard - GoldTouch Go!2

After some research I found the GoldTouch Go!2 Keyboard.

This keyboard can act as a regular keyboard:

simple travel keyboard

It can also act as an ergonomic keyboard with an unlimited number of configurations. Just pull the lever and adjust it to what's most comfortable for you.

flat split travel keyboard
ergonomic travel keyboard

I found it super handy. It folds nicely to half its size, which means it can be stored in a backpack of almost any size.

folding travel keyboard

The Stand - The Roost

Once I'd solved the ergonomic keyboard, I still faced an issue:
how can I position the small laptop monitor at, or close to, eye level?
I tried various stands, but those were either uncomfortable or didn't fit in a bag.
I even experimented with some cardboard boxes.

That is, until I came across the Roost Stand.
The Roost folds into a stick-sized package, making it ideal for travel, and when set up it props your laptop up at eye level.
Their version 2.0 also has multiple height adjustments:

Please note that the Roost is quite expensive, and a few of my friends have ordered Roost-like stands from AliExpress:
NextStand on AliExpress

However, I would still opt for the Roost; it was the original and I believe it's better made. I'd rather put my laptop on the best stand possible.

Nomad Ergonomics - Last Tips

This, along with any wireless mouse, lets me set up shop anywhere with a pretty ergonomic setup.
Many cafes don't have armrests, but I check with hotels prior to booking, and many do offer height-adjustable seats with armrests.
Other than that, try to sit as close as possible to the table, which also prevents slouching.

Overall, I'm very happy with this nomad ergonomics setup; it's light and easy to use.
The only downside is that people might look at you funny when you set up a workstation area in a cafe!
Hope you have found this useful!



Managing Technical Debt

Managing technical debt is a big issue for most companies.
In the previous post I outlined exactly what technical debt is, how it's created, and why it's worse than financial debt.
Today I'll talk about managing technical debt from the viewpoint of solutions and practical steps.
We will outline and suggest practical examples and ideas you can implement today so you can start managing your technical debt.
Let's dive into it.

Managing Technical Debt - Identify and Prioritize

The first step in managing technical debt is to identify where the biggest issues are. While this might sound obvious, I'm not talking about where the worst code lives. You should find the issues that are most impactful to the business. Technical people often look from a tech perspective rather than a business perspective.
This means you need to ask questions, ideally of a business stakeholder or the product owner, or of yourself if you know a lot about the business:

  • What is slowing us down the most today?
  • Where are our biggest bottlenecks and slowdowns today?
  • How can we impact the business the most? Sales? Improving customer service?
  • What about your interactions with Tech & IT frustrates you the most?

From the answers, identify which system would make the most significant impact.
Review all the current tech issues you have and rank them according to priority.
Make a list of every issue while thinking about business impact first.

Tech Debt Issue Lists

When you're writing out the list, try to be as detailed as possible. Entries like "change architecture" or "replace infrastructure" are not detailed enough.
You want to strive for a detailed list that can easily be converted into work items / Jira items / etc. Something like:

"Re-factor / re-write the order processing module. services.py is 3,000 lines long and should be broken down and re-written. Many functions, such as calculate_price, have too many flows and are over-complex. Remove duplicate code and clean up."

is much better.

Cost Evaluation

As you're writing out your debt list (either while writing or after), review it and find a way to attach a cost to each item.
You can use any scale you want: Jira points, days of work, dollar cost, etc.
This matters because cost is the second part of the tech prioritization.

After you've compiled the list, you should have a business-impact-first, cost-second analysis.
This is your tech "re-payment" analysis, and at this stage we are ready to tackle the tech debt head on.
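
To make the impact-first, cost-second idea concrete, here is a minimal sketch in Python. The item fields, the scales, and the example entries are illustrative assumptions, not part of any particular tool:

```python
# A minimal sketch of the impact-first, cost-second ranking described above.
from dataclasses import dataclass

@dataclass
class DebtItem:
    title: str
    business_impact: int  # 1 (low) to 5 (critical), judged with stakeholders
    cost: int             # repayment cost in story points / days / dollars

def repayment_order(items):
    """Sort by business impact (highest first), breaking ties by lowest cost."""
    return sorted(items, key=lambda i: (-i.business_impact, i.cost))

backlog = [
    DebtItem("Re-write order processing module (services.py)", 5, 13),
    DebtItem("Remove duplicate pricing code", 5, 5),
    DebtItem("Upgrade logging library", 2, 2),
]

for item in repayment_order(backlog):
    print(item.title)
```

High-impact items surface first; among equally impactful items, the cheaper "repayment" wins, which is exactly the ordering you want for your debt list.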

Automated testing - respecting the contract

I'm not a big fan of throwing out code just because it's badly written.
Don't get me wrong, sometimes it does make sense to chuck a whole piece of code away.
However, in most cases code can be re-factored into something that's nice, cohesive, and maintainable.

It doesn't matter whether you're a Python developer using unittest, a Node.js developer using Mocha and Chai, or a user of any other framework or language.
What does matter is that before you start re-factoring code, you set up unit tests, and perhaps also integration and end-to-end tests.
Maybe your code base already has those; most "tech debt" code doesn't.
We will discuss unit testing strategies in another post.
You can read about unit testing in Node here.
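
As a sketch of what "respecting the contract" looks like in practice, here is a minimal characterization test using Python's unittest. `calculate_price` is a hypothetical stand-in for whatever legacy function you're about to re-factor; the point is to pin down its current behavior before touching it:

```python
import unittest

def calculate_price(quantity, unit_price, vip=False):
    # Stand-in for the tangled legacy implementation you inherited.
    price = quantity * unit_price
    if vip:
        price *= 0.9  # long-standing VIP discount
    return round(price, 2)

class TestCalculatePrice(unittest.TestCase):
    """Characterization tests: lock in today's outputs before re-factoring."""

    def test_regular_customer(self):
        self.assertEqual(calculate_price(3, 9.99), 29.97)

    def test_vip_discount(self):
        self.assertEqual(calculate_price(3, 9.99, vip=True), 26.97)
```

Run these (e.g. with `python -m unittest`) before and after each re-factoring step; if they keep passing, the contract is respected.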

Re-factor, Re-factor

At this stage, you're ready to begin the clean-up. You need to re-factor your code, ideally with someone very experienced doing it.
This is not for the faint of heart; you'll be making lots of changes and potentially introducing regressions.
However, it's a cost you must pay. Here are some tips to focus on:

  • DRY - Find code duplication and merge it into shared functions.
  • Review complex, long functions; break them into small ones; strive to make code read like English.
  • Split complex functions with switch-like params into several stand-alone functions.
  • When a complex function is needed, make it private and expose simple interface functions that call it.
  • If you can't understand what a piece of code does, try to break it down or partially re-write it.
  • Make sure function names make sense; if not, change them, preserving the old names during the transition with proxy functions if needed.
  • Make sure the code is now easy to understand. Imagine you're seeing it for the first time: does it make sense?
  • Manage complexity by breaking code down into small bite-sized chunks, either new classes or functions.
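
Here is a small before/after sketch of two of the tips above: splitting a switch-like function and keeping a proxy for old callers. The shipping example and its numbers are hypothetical:

```python
# Before: one function whose behavior forks on a switch-like "mode" param.
def shipping_cost(weight_kg, mode):
    if mode == "standard":
        return 5.0 + 0.5 * weight_kg
    elif mode == "express":
        return 12.0 + 1.5 * weight_kg
    raise ValueError(f"unknown mode: {mode}")

# After: one small, clearly named function per flow, each easy to read,
# test, and re-factor independently.
def standard_shipping_cost(weight_kg):
    return 5.0 + 0.5 * weight_kg

def express_shipping_cost(weight_kg):
    return 12.0 + 1.5 * weight_kg

# A thin proxy preserves the old entry point during the transition,
# so existing callers keep working while you migrate them.
def shipping_cost_v2(weight_kg, mode):
    flows = {
        "standard": standard_shipping_cost,
        "express": express_shipping_cost,
    }
    return flows[mode](weight_kg)
```

The behavior is unchanged, which is exactly what your characterization tests should confirm.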

Summary

We hope this helps in setting up your strategy for managing technical debt. We will be doing a follow-up post focused solely on re-factoring.
In the meantime, if you need any assistance with managing your technical debt, feel free to reach out to us.
Thanks and until the next time!
The CoreTeam.io crew.



Understanding Technical Debt And Why It's Worse than Debt

It doesn't matter whether you’re a Node.js developer, a Python engineer, a React / front-end wizard, a product manager, or the CEO: chances are good that you’ve heard of technical debt. And if you haven't, or you lack any understanding of technical debt, I guarantee you’ve been on its receiving end without realizing it.

In this post I’ll try to give you an understanding of technical debt: what it is, and why it’s actually worse than financial debt. In our next post we'll deal with how to manage it.

What is Technical Debt?

A quick way to understand technical debt is to look it up.
According to Wikipedia, the definition is:

 a concept in software development that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer.

However, technical debt, in my view, is much broader. It's any code that has issues such as, but not limited to:

  • Is not easy to maintain
  • Hard to read
  • Has code duplications
  • Lacks testing
  • Not well engineered
  • Has very long functions doing many different things
  • Is not modular or compartmentalized, whether functional or OO
  • Many others...

Types of Technical Debt

While it's possible to categorize many types of technical debt, I'd like to bucket them into the two types you most likely have in your systems today.

Intentional

The Wikipedia definition captures this type of debt. It's when we make a deliberate decision to under-engineer or hack a solution together, rather than build software that is well crafted, well designed, and easy to maintain.
There are often good reasons to do so (e.g., experimenting with a new feature, temporary solutions, proofs of concept, deadlines, and more).

Accidental

Wikipedia does dive deeper and shows a quadrant of types, which is nice and a bit more helpful. But I believe the underlying assumption, that technical debt is a choice, is flawed.
Much technical debt, in the best-case scenario, is a result of knowing better afterwards, as identified in the "inadvertent" quadrant. We always know better in retrospect
(i.e., we have a much better understanding of how a piece of code should be used after we have written it and sent it out into the wild, a bit like the "Lean Startup").

However, I think even that makes a graceful assumption. A lot of "accidental" debt is generated by bad software developers (experienced as they might be); by a lack of good software design practices, such as rushed code reviews, not following the DRY principle, or inventing rather than using existing design patterns; by junior developers writing code that "works" but is never properly reviewed; by multiple contractors working on the same code; by technical decisions made by managers who lack technical knowledge and don't listen to their technical team; and many more reasons.

Why is it worse than financial debt?

Ward Cunningham coined the term technical debt back in 1992. The term draws a parallel to financial debt:
you're "borrowing" time by implementing quick/bad code today, which will need to be repaid in future re-work.
While this all sounds lovely and novel, it's far from reality.
In fact, once you have a better understanding of technical debt, you'll realize that it's considerably different from, and worse than, financial debt.
Financial debt is very well defined; technical debt is not:

| | Technical Debt | Financial Debt |
|---|---|---|
| How accumulated | Intentional / accidental | Only intentional |
| Is avoidable | No | Yes, you can live without debt |
| Amount "borrowed" | Not clear | Exact, documented in contract |
| Repayment schedule | Not clear, not defined, not easily quantifiable | Exact, documented in contract |
| Typical interest rates | Not defined; can be 10x or more! | Well defined in contract, normally 5%-10% |
| Non-payment consequences | No one knows; can be huge / loss of business! | Well defined in contract |

This table is not complete; it's not hard to think of many more ways tech debt differs from financial debt, but that's not the point.

Is it avoidable?

What you have to remember is that technical debt is unavoidable and exists in every company in the world! Let me repeat that once more:
you cannot avoid technical debt. You just can't, even if you have the best software developers in the world, by the mere fact that you will know how to do things better once you've done them.
So accept that you'll accumulate technical debt; that's a money-back guarantee!

eBay Case Study

Let's look back at eBay in the early 2000s. They were world leaders, in a perfect position to be the online market to buy and sell everything.
And today? They are still in business and work well, but they have lost their edge. People use Amazon and many other websites a lot more instead.
So what happened? Technical debt did. Their systems were complex, rigid, and filled with issues.
Here is an excerpt from a WSJ article:

eBay’s system, which involved 25 million lines of inflexible code, soon became a liability. The company, for example, couldn’t figure out which of its hundreds of thousands of ‘iPod’ listings were for a given model or for iPod accessories. EBay’s challenges with outdated technology are common for Web pioneers, whose systems were built with custom software that is now too old and rigid to adapt to a competitive and fast-moving market.

So in reality, their system prevented them from moving quickly; by the time they had to "re-write many of their systems", they had lost their competitive edge.

So What Should I Do?

Chances are your systems are in a much worse state than eBay's, and chances are you have lots of technical debt that needs to be paid.
Does that mean all is lost and you're doomed? No, there are many paths forward. One thing is sure: you have to start thinking about technical debt and how to address it today.
I believe at this stage you have a much better understanding of technical debt, which is a great start.
In the next part, we will discuss how to tackle technical debt and how to prevent your company from hitting the same walls that eBay did.

Until next time.


DeployBot - Simplified DevOps - A Kubernetes SlackBot

Why are production auto-deployments a bad idea?

Most companies' DevOps teams have set up multiple environments: dev, test, stage, and production. If yours hasn't, you should be doing that today!
These environments enable you to reduce risk and ensure software engineering ships quality code.
You might even have hired a DevOps developer who set up a CI/CD pipeline, helped Dockerize your apps, and ran them in a Kubernetes cluster.
This is a very common setup these days; it ensures developers can easily make code changes, which can then be ready to test on the cloud in a matter of minutes.
This automation is a blessing for most environments: when a developer wants to test a new feature, when someone needs to QA or approve the feature, etc.

DevOps - DeployBot

I don't like the idea of auto-deploying to production, though. Many times we want to control and decide how and when we deploy to production.
Maybe we want to group together a bunch of features; maybe we want to deploy new features only on Mondays; etc.

Enter DeployBot. This Slack chatbot can deploy your application when you send a message on a secure Slack channel. As simple as that.
You don't even need technical skills. It's developed in Python, and if you don't have those skills in-house, you can hire a Python developer to configure my code very quickly for you.

DeployBot Slack Configuration

The first step is to set up the SlackBot application and channel.
Please note that this configuration is correct as of March 2020, so future changes to the API might require adjustments:

  • First go to your Slack apps page https://api.slack.com/apps?new_classic_app=1 and sign in.
  • Make sure you create a Classic App, as in the link attached above. Give it your DeployBot name (something like DeployBot) and choose the workspace you would like it to run in.
  • On the App Configuration screen, under Features and Functionality, select Bots.
  • Add a Legacy Bot and give it a name and a user, something like DeployBot and deploy_bot for example.
  • Click on OAuth & Permissions and scroll down to "Scopes".
  • DO NOT CLICK UPDATE SCOPES!
  • Use "Add an OAuth Scope" to add the following permissions: app_mentions:read, chat:write, im:write, incoming-webhook
  • After that, click "Install App to Workspace" (under OAuth & Permissions) and install your Slack bot app.
  • Take note of the OAuth token and the Slack bot token (we will need them later).
  • Then go to your Slack client and click on Apps (bottom left-hand side, below the list of contacts). You should see your Slack bot there.
  • Click on it, hover over the bot's name, and make a note of the URL; the last part is your bot's ID.
    For example, https://hexanow.slack.com/team/U012GGEU0HJ would mean that U012GGEU0HJ is your bot's user ID!

Now we are ready to configure the bot and install it into the workspace.

Bot Configuration and Installation

Now we need to set up the DeployBot installation in your organization.
This might require some simple DevOps skills and perhaps some very basic Python skills. If you do not have these skills in-house, it's easy to hire a DevOps engineer or a Python developer to do this step for you. Feel free to contact us for help getting set up.

Steps for setup

  • Clone the project:

    git clone git@github.com:DoryZi/SlackKubeDeployBot.git

  • Edit your setup.yaml

    This file contains all your definitions. First we need to update the secrets YAML.
    In this section, update the AWS secret, your Slack bot OAuth token from the Slack bot installation, and the Kubernetes cluster Token and Token2.
    The KUBE_TOKEN values are not mandatory; the Kubernetes setup will first try in-cluster authorization. If that fails, or if your Slack bot is running in a different cluster, the auth tokens are used instead. The first one is the default auth token for your cluster; the second one is used if you need to override it for a particular app, for example if you have 2 different clusters you wish to run DeployBot on. You can also easily extend this setup to support multiple clusters. If you need to hire a Python developer to help you set this up, feel free to reach out to us.
    You need to base64 encode your secrets:
    echo "" | base64 -w 0
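If you'd rather do the encoding in Python (equivalent to the shell command above; the secret value here is just a placeholder):

```python
import base64

def encode_secret(value):
    """Base64-encode a secret string for a Kubernetes Secret manifest."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

print(encode_secret("my-secret"))  # → bXktc2VjcmV0
```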

    
    apiVersion: v1
    kind: Secret
    metadata:
      name: slackbot-secrets
      namespace: slackbot
    type: Opaque
    data:
      AWS_SECRET_ACCESS_KEY: <base64 encoded secret access key>
      SLACKBOT_API_TOKEN: <base64 encoded copied slackbot oauth token>
      KUBE_TOKEN: <base64 encoded kubetoken>
      KUBE_TOKEN2: <base64 encoded kubetoken>
    

    More info on Kubernetes secrets

  • Configure environment variables and App Information

    This step involves setting up your app configuration, which is contained in the ConfigMap part of setup.yaml.
    The environment variables below are pretty simple and straightforward; I will try to explain how the app config works.
    For each app you wish DeployBot to recognize, you must add an APP_CONFIG entry.
    Each entry consists of:

    • "app-name" - the name of your app; this is also the base name used when looking for new builds in ECR
    • "deployment" - the name of the Kubernetes deployment running this application
    • "container-name" - the container name within that Kubernetes deployment
    • "cluster-token" - optional; a cluster token if this deployment runs in a different cluster to the default one. This is taken from KUBE_TOKEN2.
    • "cluster-endpoint" - optional; a second cluster endpoint, if this deployment runs in a different cluster to the default one. This is taken from KUBE_ENDPOINT2.
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: slackbot-config
      namespace: slackbot
    data:
      APP_CONFIG: |
        {
          "app-name": {
            "deployment": "",
            "namespace": "",
            "container-name": "",
            "cluster-token": "NEW_CLUSTER_TOKEN",
            "cluster-endpoint": "NEW_CLUSTER_ENDPOINT"
          }
        }
      ECR_REGISTRY: ""
      DEPLOYBOT_USER_ID: ""
      AWS_ACCOUNT_ID: ""
      AWS_DEFAULT_REGION: ""
      AWS_ACCESS_KEY_ID : ""
      KUBE_CLUSTER_ENDPOINT: ""
      KUBE_CLUSTER_ENDPOINT2: ""
    
  • Apply your configuration

    kubectl apply -f setup.yaml

  • After all of this, your Slack bot is ready to be used.
    Feel free to create a secure channel, or message your chat bot directly.
    It should be able to run a few commands:
    You can check for a new image in the registry (compared to what your deployment is running), check your running image, or deploy the latest image to production.
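As a rough illustration of those three commands (this is a made-up sketch, not DeployBot's actual parser; the bot ID and command names are assumptions):

```python
def parse_command(text, bot_id="U012GGEU0HJ"):
    """Map an @-mention like '<@U012GGEU0HJ> deploy shop-api' to (command, app).

    The bot ID and command vocabulary here are illustrative only.
    """
    words = text.replace("<@{}>".format(bot_id), "").split()
    if not words:
        return (None, None)
    command = words[0]
    app = words[1] if len(words) > 1 else None
    if command in ("check", "status", "deploy"):
        return (command, app)
    return (None, None)

print(parse_command("<@U012GGEU0HJ> deploy shop-api"))  # → ('deploy', 'shop-api')
```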

    Extending - a DevOps engineer or Python Developer task

    This app uses the Kubernetes API and the AWS API. It can easily be extended to run on GCP; however, this would require Python and DevOps skills.
    If you do not have them in-house, feel free to reach out to us to hire a Python developer.
    We hope this helps, and that you find DeployBot useful.


How to Hire a Full-Stack Developer (Interviewing Guidelines)

Finding good developers and software engineers is always hard. I've said that multiple times.
This is another installment in a series on how to hire a full-stack developer, top software engineers, or pretty much any other type of programmer.
We've previously talked about outreach when hiring top software developers.

The Broken Software Developer Interview

Today I'll address another pain point: the interview process. My main grief is that the correlation between interview performance and job performance is very weak: candidates that seem amazing during the interview / screening process can turn out to be duds, while those that seemed average turn out to be stars. It's just so hard to find the right candidates to hire.

As such, I'd like to outline some points that can help in making the right decision and reduce the margin of error. It's important to mention that no matter how good your interview / screening process is, when you hire a full-stack or any type of software engineer, you will never know for 100%. The only way to know if someone is a good fit is to work with them. As such, if you can take someone on for one week as a contractor to try them out, that is the best option!

In this post today I’ll talk about the key points and methods that can help you mitigate some of the risk when you hire a software developer.

Structured Interview and Process

One of the key mistakes that people make when hiring a full-stack developer, or any software engineer for that matter, is not having a repeatable and structured process. I cannot emphasize this enough: have a repeatable, consistent, well-thought-out hiring process.

Technical Test Points to Consider

  • Think of your phone screen questions and prepare them.
  • You need to outline what technical questions you’ll ask in your coding or tech assessment and why.
  • Prepare what general HR questions you’ll ask the candidate.
  • You’ll need to figure out how many people would need to talk to the person.
  • Be able to communicate the process to the candidate early on.

These points are very important. Without them you're really taking a guess based on feelings and emotions, and you have no way to measure all candidates fairly. Here is an excellent article from the New York Times on the topic.

Technical test should simulate work

The technical test / screen serves as a quick filter to help find people that can potentially have the skills to work as a successful engineer in your company. It's only "potentially", as most of these tests are not a reflection of what real work would be like. As such, I would urge you to construct your tech test with that in mind. Try your best to simulate work and test relevant skills. Also, when you're hiring a full-stack developer you're looking for people that have back-end and front-end experience, so keep that in mind too. Here are a few ideas of what your tests / screens can include:

Ideas for Testing Software Engineers

  • Code a simple algo problem – choose something simple, not too complex, and let the person run through it. See that the person can code. So many can't code despite their resumes looking mighty impressive!
  • Go through bug tracing and problem solving – set up either a theoretical or a practical exercise (meaning a bug they have to figure out) and see how the candidate solves, or at least tries to solve, the issue.
  • Test some front-end abilities – code, HTML and CSS, something simple. You can even set up a React project and ask the candidate to go through it, try to make changes, add layouts, etc.
  • Ask about their work history and focus on the technical aspects. Then try to drill down and see how much they really understand. Hiring a Node.js developer, React.js developer, or any JavaScript developer? Ask about prototypal inheritance: what it is and how it works. If you are about to hire a Python developer, ask what algorithm .sort() runs and how it works, etc.
  • Ask some architecture questions and see how they think. Cover things like memory limitations, time complexity limitations, and handling large-scale throughput. See how that person thinks and how they approach different situations.
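For instance, a simple screening problem (my own made-up example, not from any particular interview) could be: given a string, return its most frequent word. A candidate's Python solution might look like:

```python
from collections import Counter

def most_frequent_word(text):
    """Return the most common word in `text`, ignoring case."""
    words = text.lower().split()
    if not words:
        return None
    return Counter(words).most_common(1)[0][0]

print(most_frequent_word("the cat sat on the mat"))  # → the
```

Something this size takes minutes to discuss, yet quickly shows whether the person can actually write and explain code.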

This should give you a clearer indication of someone's abilities; either way, use a consistent and repeatable process. It's not always critical that they solve everything; it's important that they understand, write clean code, and know what they are doing. After all, at work they will have a lot more time to work on problems than in a 60-minute test.

Focus on the ability to deliver business results

Good engineers are able to bang out code that works. However, a great full-stack developer will also understand the business goal behind what they are developing. A great software engineer will not only require minimal supervision, but will also build software with business needs in mind, enhancing the software or adding benefits they suspect will be needed.

Good vs Great Full-Stack Developer Example:

Imagine you're building a shopping app, and you have a quick search that lets you scan all the products in your shop. You've asked your developer to add a few more product categories to a drop-down. A good developer would just add those. A great one might notice that this list keeps growing and suggest thinking about pagination, or limiting the number of results returned, or adding debounce (sending the search only when you finish typing), etc. They would improve on said feature, or maybe just create additional todos and discuss them with you. Either way they would raise those points. You can try to structure questions to see how a person might help in this way!
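To make the debounce idea concrete, here is a minimal sketch in Python (the 0.1-second delay and the search function are placeholders; real search UIs do this in the front end):

```python
import threading
import time

def debounce(wait):
    """Decorator: delay calling `fn` until `wait` seconds pass without a new call."""
    def decorator(fn):
        timer = [None]
        def debounced(*args, **kwargs):
            if timer[0] is not None:
                timer[0].cancel()  # a new keystroke cancels the pending call
            timer[0] = threading.Timer(wait, fn, args, kwargs)
            timer[0].start()
        return debounced
    return decorator

results = []

@debounce(0.1)
def search(term):
    results.append(term)

for keystroke in ["s", "sh", "sho", "shoe"]:
    search(keystroke)  # rapid typing: only the last call survives
time.sleep(0.3)
print(results)  # → ['shoe']
```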

Until Next Time!

This should give you some important points to note when you’re looking to hire a full-stack developer, or any other type of senior software engineer. I hope this has helped.

Till the next time!


Hire a Top Software Developer (Outreach Considerations)

Pre-approach when Hiring a Top Software Developer

I've spoken before on how great software developers are hard to recruit and even harder to retain.
And I've given examples of bad pre-approach vs. good pre-approach. Feel free to reach out about that.

However, today I want to talk about the pre-approach in particular. In today's world, top software developers and engineers get new opportunities daily from multiple recruiters and companies reaching out to them. Hence, if you're looking for a programmer, you're up against every company out there, from Google and Facebook to the next self-driving-car startup.

  • So how do you stand out?
  • What can make you attract and retain the best software developers and talent out there?
  • And more importantly, can you even compete and hire great talent?

You can stand out and reach great engineers

The good news is that you can hire a top software developer; you just need to focus on the key points that resonate with developers. In this post I'll try to help you find the correct approach to hiring people and how to be successful at it.

Factors to Consider before Hiring a Top Software Developer

Let's now break down a few important points, based on actual research and numbers, to help us hone our hiring process and make our developer search a success. Here are the factors to consider when building your outreach to potential top software developers:

  • Speak to a person. Make connections. Don't fill job vacancies.
    First, you must PERSONALIZE your outreach to each individual based on multiple factors. You must know who you're reaching out to and why. If you focus on creating a connection, rather than filling a job need, you'll be ten steps ahead of most companies out there. Look at their LinkedIn, blog, and GitHub. Are they passionate about your technologies? Do they have relevant experience? When was their last role? Etc.
  • Languages, Frameworks and Technologies
    The most important factor for software developers and engineers when looking for their next job is what languages, frameworks, and technologies they will be working with; everything else is second to that. This means you not only need to know and understand what languages and technologies your company is using (i.e. Python, JavaScript, React, AWS, GCP, Kubernetes, etc.), you must also understand the candidate's potential to fill those needs. Have they worked with Angular and Vue? Tell them how that experience can be a springboard into mastering React, and how you enable people to learn. They already have 3 Git repos contributing to React? Focus on their achievements, and how you can use their libraries in your project (if possible). You use PHP? Find PHP experts, people that are passionate about PHP; don't approach Node.js enthusiasts. These are just some examples; modify them to suit your needs.
  • Environment and Culture
    This is the second most important factor, and where you can really shine. It's not about free pizza or pretty offices. It's more about the people, team, and environment. Do you encourage growth? How do you handle failures? Do you allow people to learn? What makes your environment healthy and exciting? Why will that person feel that your company is a good place for them? Again, think of growth, learning, and recognition, adjust your outreach to take that into consideration, and you'll be on your way to hiring a top software developer.
  • Timing with Candidates
    32.4% of software developers changed their job in the last year alone! This means your email to those developers should be different than, say, to the 14.5% of software developers that changed jobs 3-4 years ago. When approaching the first group you should aim for an introduction more than a potential hire; the second group, however, could be happy or ready for a change. There is no magic formula here; you just have to create connections with great developers. This has worked wonders for me: sometimes the email reached them at the perfect time, other times they were happily employed or had just started a new job. Find out what you can about timing and adjust your approach accordingly.
  • Flexibility and Remote Work
    Another point you can really win on is focusing your company on results, not attendance. This means emphasizing to potential software developers how flexible you are with time, offering work-from-home opportunities, or better yet allowing for remote work. If you can move away from thinking you need to see developers in their seats from 8 to 5 and start focusing on managing outcomes, you'll not only attract amazing talent, you'll also get outstanding productivity. I'd even claim that if you only get results when you have people in their seats next to you for 8, 9, or more hours per day, your management is the problem. People love to be recognized for their achievements; focus on that, not working time.
  • Salary, Price and Compensation
    While how much money you'll be paying your software developer or engineer is not the first consideration for potential candidates, it does of course come into play, and varies from person to person. First off, pay too little and almost no one will want to work for you. Top software developers and engineers want a company that values them, so the money does matter. Your best bet is to pay as high as you can, while keeping your budget in mind, of course. The business reason is that a great engineer will make your project super successful, while a bad one will cause you 10x the damage. So think twice before trying to save here.

In Closing

There you have it! Considering these factors will surely come in handy when hiring a top software developer for your next project. Customize your approach, focus on the key points, and prioritize quality over everything to boost your business the right way.


Software Engineering Interviews Mistakes - Homework Tasks

Recruitment is hard and Software Engineering Interviews are complex

Recruitment is a complex and difficult matter, just like Software Engineering Interviews are far from perfect and are very exhausting both for companies and candidates alike.

Engineers looking for new opportunities have a great dislike for the process. It includes lots of calls, interviews, tests, and more. It's a common feeling for many that they just took on a second job: looking for work.

Companies, on the other hand, don’t have an easier time with it. They have to sift through loads of resumes, read an insane amount of emails, and answer tons of calls. All in order to decide who they will actually interview in person. There are so many candidates that sound and look the part while many are barely qualified to make coffee in real life.

Common mistakes in software engineering interviews

I’ve been involved in hundreds—if not thousands—of these processes and worn multiple hats while doing so, and I’d like to make a few observations and important notes to candidates and companies alike.

First you have to remember: Great interviews don’t mean great hires! Both sides have to remember this, as it’s a critical point! There are many things that you will not know:

  • How hard-working will that person be?
  • When will they give up when confronted with hard tasks?
  • Are they able to find creative solutions?
  • Are they a good coder or not?

There are many other things you won’t know; you’ll only know if the person has potential and how well they do at interviews!

When we interview, as candidates and as companies we get very excited about certain opportunities (great cultural fit, amazing performance on interview tasks, everyone seems so nice, the unexpected feeling of a strong work connection, etc.) No matter how logical, measured, or obscure your personal reasoning is about that candidate or company, you won't really know what it means to work together until you actually work together.

None of the big guys do it

Google, Facebook, Twitter, and many of the big guys could easily send homework tasks to all candidates, but they don't! They spend a day or more with a candidate, they run through code together, and they get a sense for what that person is like. So why are you trying to re-invent the wheel? You're not going to write your own front-end framework, you'll use React or Angular, so why not also recruit as they do? Doesn't that say something about their software engineering interviews?

Why homework tasks are silly and what should we do?

Way too many companies send people take-home tasks or, better yet, some silly HackerRank that tests people on solving a problem in a very time-limited manner.

I'm not sure who in The Valley started this and made everyone follow this detached-from-reality practice.

You're well-funded? That's no reason to assign a technical homework task. Feel free to offer it, as some candidates like it, but your best bet is to spend time with a person: solve a problem, code together, etc.

Since we agreed good interviews != good hires, then why not do your best to simulate the environment of solving a real task at work? Isn’t that what you’d want that person to do anyhow?
Run through some code problem together and get a sense for what it is like to work together.
You’ll see how a person thinks, how he/she tackles hard problems, and gain much more insight than you would from a random test or take-home task.

What is the logic behind sending some obscure test or asking someone to build a software for you for free? Are you trying to miss out on good candidates? Should someone that is busy spend half a day, a day, or even more writing free code to prove that he/she is worthy of employment? Maybe that’s okay for recent graduates, but what about for people with 5–10 years’ experience or more? What profession in the world does that?

I’m a big believer in fairness, and if you ask someone to invest time then be willing to invest the same time yourself as well. While it will be more time-consuming, you will both have the chance to work through a task together and you’ll get a good sense for working with each other.

When homework tasks make sense and how to give them?

Personally, I say only if you're willing to pay that person for their time and show that you value it. Say you're a starving startup: pick a small task, offer it as a stand-alone project, and, assuming that the code is good, the candidate signs over the rights and you might even use it. Then pay them for their time, except of course if the code is bad and they do not pass. In these cases, I think a homework task can be a good replacement for software engineering interviews.

In the next part I’ll talk about more interviewing tips and suggestions. Stay tuned!
D.


Cost Effective Docker Jobs on Google Cloud

Recently I wanted to run some jobs. I'm a huge advocate of using Docker, so naturally I was going to build a Docker image to run my Python scripts, then schedule said job to run once in a while. Doing so on AWS is pretty easy using Lambda and Step Functions; however, since this wasn't a paid gig, I wasn't able to get someone to foot the bill. Enter Google Cloud!

Google Cloud Platform (GCP) is, in a way, the newer kid on the block. AWS has a long history as a cloud platform and excellent customer support, whereas Google customer service is a bit like Bigfoot: you've heard of it, some people say they've seen it, but it doesn't really exist... BUT Google is still an amazing technology company; they release early, then improve things until they rock (i.e. Android). And best of all, they offer $300 in free credits. So I decided to go for Google; how bad can it be?

In this post I'll talk about how I set up Google Cloud to work for me, in a rather cool way. It took lots of blood, sweat, and tears, but I got it working. I schedule a job once in a while, spin up a cluster of instances, run the job, then shut it down! Not only is that cool (yes, I'm a geek), it's also quite cost effective.

I will outline what I did, and even try to share my code with you guys.
Here goes:

Step 1 - Build a Docker image and push it to the Google Cloud private registry

The first step is the easiest and the most trivial. It is pretty much the same as on AWS.

Create a build docker image

Let's start with creating a build image. GitLab CI allows you to use your own image as your build machine, which is cool. If you're using a different CI, I leave it to you to adjust this to your own system.


FROM docker:latest

RUN apk add --no-cache python py2-pip curl bash
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin

RUN pip install docker-compose

This is the Dockerfile for the build machine. It is based on the Docker image, adds Python and pip, and installs gcloud and docker-compose.

Then I push this build image to Docker Hub. If you haven't done this before you need to:
1) Sign up to Docker Hub https://hub.docker.com and remember your username.

2) In the build machine folder, run docker build . -t <your-username>/build-machine
3) Run:


$ docker login
$ docker push <your-username>/build-machine:latest

Create a GCP service account

You have to create a service account, give it access to the registry, then export the key file as JSON. This is a very simple step. If you're unsure how to do it, just click through IAM & Admin: you need to create a user, give it a role, and export the key. Very easy.

Customize CI Script to push to the private registry

Once this is all done and you have your build machine, we can work on your CI script. I will show you how to do this on GitLab CI, but you can adapt it to your own environment. First, create a build environment variable called CLOUDSDK_JSON and paste the contents of the JSON key you created in the previous step as its value. Then add the following .gitlab-ci.yml file to your project.


image: <your-username>/build-machine

services:
  - docker:dind

stages:
  - build
  - test
  - deploy

before_script:
  - apk add --no-cache python py2-pip
  - pip install --no-cache-dir docker-compose
  - docker version
  - docker-compose version
  - gcloud version

build_image:
  stage: build
  except:
    - develop
    - master
  script:
    - docker build -t <job-image-name>:latest .


deploy:
  stage: deploy
  only:
    - develop
    - master
  script:
    - docker build -t <job-image-name>:latest .
    - echo $CLOUDSDK_JSON > key.json
    - gcloud auth activate-service-account <service_account_name> --key-file=key.json
    - docker tag <job-image-name>:latest $PRIVATE_REGISTERY/<job-image-name>:latest
    - gcloud docker -- push $PRIVATE_REGISTERY/<job-image-name>:latest
    - gcloud auth revoke

Please adjust the job image name to your job's Docker image name, the service account name to the one you created, and the build image to the image you pushed to Docker Hub. This YAML file is directed at a Python job, but you can change it for any other language.

I have 3 stages: build, test and deploy.
I build and test on all branches, but only deploy on develop and master. GitLab CI has a quirk: each step can happen on a different machine, so the image from my build step isn't available in the deploy phase, which forced me to re-build in the deploy phase.

Once this is done, your CI system should be pushing your image to your Google private registry. Well done!

Step 2 - Running Jobs in a Temp Cluster

Here comes the tricky part. Since jobs only need to run every so often, and only for a limited period, it would be ideal to run them as a Google Cloud Function. However, those are limited to one hour and can only be written in JavaScript (AWS supports multiple languages with Lambda, plus state machines). And since I didn't want to pay for a cluster running full time, I had to develop my own way to run jobs.

Kubernetes Services

Controlling jobs in a cluster, and the cluster itself, can be achieved using Kubernetes. This is one part of GCP that really shines: it lets you define services, jobs, and pods (a collection of containers), and run them.

To do this, I wrote a KubernetesService class in python that will:

- Spin up / create a cluster.
- Launch docker containers on the cluster.
- Once jobs finish, shutdown the cluster.


import kubernetes.client
from googleapiclient.discovery import build

class KubernetesService():

    def __init__(self, namespace='default'):
        self.api_instance = kubernetes.client.BatchV1Api()
        service = build('container', 'v1')
        self.nodes = service.projects().zones().clusters().nodePools()
        self.namespace = namespace
This is the class and constructor. The full code for this class has more configuration and env variables, as it is part of the App Engine cron project. I will include the repo if you want full details on how to achieve this.


    def setClusterSize(self, newSize):
        logging.info("resizing cluster {} to {}".format(CLUSTER_ID, newSize))
        self.nodes.setSize(projectId=PROJECT_ID, zone=ZONE,
                           clusterId=CLUSTER_ID, nodePoolId=NODE_POOL_ID,
                           body={"nodeCount": newSize}).execute()

This function controls the cluster size. It can spin the cluster up before jobs need to run, then shut it down after:


    def kubernetes_job(self, containers_info,  job_name='default_job', shutdown_on_finish=True):

        # Scale the Kubernetes to 3 nodes
        self.setClusterSize(3)
        timestampped_job_name = "{}-{:%Y-%m-%d-%H-%M-%S}".format(job_name, datetime.datetime.now())
        # Adding the container to a pod definition
        pod = kubernetes.client.V1PodSpec()
        pod.containers = self.create_containers(containers_info)
        pod.name = "p-{}".format(timestampped_job_name)
        pod.restart_policy = 'OnFailure'
        # Adding the pod to a Job template
        template = kubernetes.client.V1PodTemplateSpec()
        template_metadata = kubernetes.client.V1ObjectMeta()
        template_metadata.name = "tpl-{}".format(timestampped_job_name)
        template.metadata = template_metadata
        template.spec = pod
        # Adding the Job Template to the Job spec
        spec = kubernetes.client.V1JobSpec()
        spec.template = template
        # Adding the final job spec to the top level Job object
        body = kubernetes.client.V1Job()
        body.api_version = "batch/v1"
        body.kind = "Job"
        metadata = kubernetes.client.V1ObjectMeta()
        metadata.name = timestampped_job_name
        body.metadata = metadata
        body.spec = spec
        try:
            # Creating the job
            api_response = self.api_instance.create_namespaced_job(self.namespace, body)
            logging.info('job creation result: {}'.format(api_response))
        except ApiException as e:
            print("Exception when calling BatchV1Api->create_namespaced_job: %s\n" % e)

The kubernetes_job function creates containers (via an additional function that creates container objects with env variables). Containers are then part of a pod, that pod is part of a job template, and the template is part of a job spec. You can read more about it in the Kubernetes docs.
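To visualize that nesting without the client classes, here is the same container → pod → template → job structure as a plain manifest dict (the names and image below are hypothetical):

```python
# The batch/v1 Job nesting, written out as a plain dict
job_manifest = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "example-job"},
    "spec": {                       # job spec
        "template": {               # pod template
            "spec": {               # pod spec
                "containers": [
                    {"name": "worker", "image": "example:latest"}
                ],
                "restartPolicy": "OnFailure",
            }
        }
    },
}

# Walk the nesting: job spec → pod template → pod spec → containers
print(job_manifest["spec"]["template"]["spec"]["containers"][0]["name"])  # → worker
```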


    def shutdown_cluster_on_jobs_complete(self):
        api_response = self.api_instance.list_namespaced_job(self.namespace)
        if next((item for item in api_response.items if item.status.succeeded != 1), None) is None:
            logging.info("no running jobs found, shutting down cluster")
            self.setClusterSize(0)
        else:
            logging.info("found running jobs, keeping cluster up")

If you don't want your code to block waiting for the jobs, you can poll for completion, and that is what shutdown_cluster_on_jobs_complete is for. It will shut down the cluster once there are no running jobs.
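The polling pattern itself is generic; a minimal sketch (the interval, timeout, and the simulated job below are placeholders, not part of the actual project):

```python
import time

def wait_until(predicate, interval=0.01, timeout=1.0):
    """Poll `predicate` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns whether the condition was met."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Simulate a job that finishes after a few polls
state = {"polls": 0}

def job_done():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until(job_done))  # → True
```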

This class handles all the job scheduling and execution.
It is part of an App Engine app (however, it can be used independently).
Next, we need this script scheduled or triggered.
That is the job of our cron scheduler task.

Cron scheduler appengine service

Sadly, Google doesn't give you an easy way to run code in the cloud; you actually have to write more code to run code (silly, right?)

The concept is that App Engine provides you with a cron web scheduler that calls your own app's endpoints at given intervals.

First you add cron.yaml to your project and you configure which endpoint and the time interval to hit that endpoint:


cron:
- description: task to kick off all updates
  url: /events/run-jobs
  schedule: every 2 hours
- description: task to shutdown jobs when finished
  url: /events/shutdown-jobs
  schedule: every 5 min

Then we can add handlers to kick off the jobs and to shut them down when finished.


class RunJobsHandler(webapp2.RequestHandler):
    def get(self):
        try:
            logging.info("running jobs")
            jobs_list = Settings.get("JOBS_LIST").split()
            for job_name in jobs_list:
                job_name = job_name.replace("_", "-")  # names cannot have underscores
                logging.info('about to publish job {}'.format(job_name))

                containers_info = [
                    {
                        "image": Settings.get("IMAGE_NAME"),
                        "name": job_name,
                        "env_vars": [
                            { "name": "SOME_ENV_BAR", "value": some_value}
                        ]
                    }
                ]

                job_env_vars = Settings.get('JOB_ENV_VARS').split()
                for env_var in job_env_vars:
                    logging.info('adding container var {}'.format(env_var))
                    containers_info[0]['env_vars'].append({
                        "name": env_var,
                        "value": Settings.get(env_var)
                    })
                kuberService.kubernetes_job(containers_info, job_name, False)
            self.response.status = 204
        except Exception, e:
            logging.exception(e)
            self.response.status = 500
            self.response.write("error running jobs, check logs for more details.")
        else:
            self.response.write("jobs published successfully")

Last, we want to add a Settings class to load env-like variables from the datastore:


import os
from google.appengine.ext import ndb

if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/'):
    PROD = True
else:
    PROD = False
class Settings(ndb.Model):
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        NOT_SET_VALUE = "NOT SET"
        retval = Settings.query(Settings.name == name).get()
        if not retval:
            retval = Settings()
            retval.name = name
            retval.value = NOT_SET_VALUE
            retval.put()
        if retval.value == NOT_SET_VALUE:
            raise Exception(('Setting %s not found in the database. A placeholder ' +
                             'record has been created. Go to the Developers Console for your app ' +
                             'in App Engine, look up the Settings record with name=%s and enter ' +
                             'its value in that record\'s value field.') % (name, name))
        return retval.value

Note that most of the app depends on the datastore. Sadly, Google App Engine doesn't let you set environment variables easily, but you can store env-like variables in the datastore.
For this I added a class called Settings.

Then we just bind the route handler:


import webapp2


app = webapp2.WSGIApplication([('/events/run-jobs', RunJobsHandler)],
                              debug=True)

This should allow our app to spin up a cluster, launch containers, and then shut down the cluster. In my code I also added a handler for the shutdown.

Then make sure you have gcloud installed, deploy the App Engine app using the gcloud deploy command, and you should be good to go.

While my example runs the same docker image, and just has different operation with different env variables, you can easily adjust this code to suit whatever need you might have.

Here is the full git repo:

Hope you find it useful!


mocha-chai-sinon-testing

JS Testing Survival (Mocha, Chai, Sinon)

This post is a simple guide to JS testing with Mocha, Chai, and Sinon on CircleCI. It will show you how to set up for testing, along with some tips for good coverage and more.
I'll cover some practices I use for testing JS code. These aren't official best practices, but I use them because I've found they make it easier to write easy-to-read tests with full coverage and a very flexible setup.

This post will dissect a unit test file to illustrate the different points I found helpful when composing unit test files:

Setup

Mocha is a testing framework for JS that allows you to use any assertion library you'd like; it is very commonly paired with Chai. Chai is an assertion library that works with mocha. The chai docs explain how mocha and chai work together, how to use them, and more.
One of chai's strong points is that you can easily extend it using support libraries and plugins. We will use a few of them, so let's first setup our dependencies in our project:

npm install mocha chai chai-http chai-as-promised co-mocha sinon --save-dev

We are installing a few libraries:

  • mocha - JS testing framework.
  • chai - the Chai library; it has a good reference for how to use chai to assert or expect values, and a plugin directory - a valuable resource!
  • chai-http - a chai extension that allows us to hit HTTP endpoints during a test.
  • chai-as-promised - lets mocha tests / setup return a promise, so we can assert / expect what the result of the promise will be. We will see this in action shortly.
  • co-mocha - a mocha extension that allows us to use generator functions inside mocha setup / tests. If you skip this step and try to use a generator function, the test will finish without running the yields in the test code correctly. This means you will get twilight-zone-like results: tests passing when they should fail!
  • sinon - test mocks, spies and stubs for any JS framework. Works really well, and is very extensive.

After we install all the packages, let's create a new file, and add all the required libraries to it as follows:


//demo test file
const chai = require('chai');
const chaiHttp = require('chai-http');
const chaiAsPromised = require('chai-as-promised');
require('co-mocha');
const sinon = require('sinon');


const TestUtils = require('./utils/TestUtils'); //explained later on
const server = require('../server'); //explained later on

In this example I'm testing an express server, but you can use any type of node http server (assuming you are testing a server). Just make sure you export the server from your main or server file, and then you can require it from your test files.



We will see how we use the server later on in the test.

//server.js
const express = require('express');
const server = express();
//all server route and setup code.
module.exports = server;

Grouping tests using 'describe'

Mocha does a great job at grouping tests. To group tests together, under a subject use the following statement:

describe('Test Group Description', () => {
  // test cases.
});

'describes' are also easily nest-able, which is great. So the following will also work:

describe('Test Endpoint', () => {
  describe('GET tests', () => {
    // GET test cases.
  });
  describe('POST tests', () => {
    // POST test cases.
  });
  describe('PUT tests', () => {
    // PUT test cases.
  });
  describe('DELETE tests', () => {
    // DELETE test cases.
  });
});

This groups them together, and if you're using something like IntelliJ or WebStorm, the output is displayed very nicely in a collapsible window:
(screenshot: unit-test-run-example.PNG)

Test hooks

When running tests, we often need to do setup before each test or before each test suite. The way to do that is to use the testing hooks before, after, beforeEach and afterEach:


describe('hooks', function() {

  before(function() {
    // runs before all tests in this block
  });

  after(function() {
    // runs after all tests in this block
  });

  beforeEach(function() {
    // runs before each test in this block
  });

  afterEach(function() {
    // runs after each test in this block
  });

  // test cases
});

Also, these hooks can return a promise; the test framework will not continue until the promise is resolved, and will fail the hook if it is rejected:


before(() => {
  // do some work, return a promise / promise chain
  return new Promise((resolve) => resolve(true));
});

after(() => functionThatReturnsPromise());

Also, since we have required co-mocha, our hooks can also be generator functions:


let stuffINeedInTests = null;
before(function* () {
  const result = yield functionThatReturnsPromise();
  const resultFromGen = yield* generatorFunction();
  stuffINeedInTests = { promiseResult: result, genResult: resultFromGen };
});

I can then use stuffINeedInTests in my test cases. You can also do this setup using promises as shown above.
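For the curious, co-mocha drives such generator hooks much like the co library does: each yielded promise is awaited, and its resolved value is sent back into the generator. Here is a minimal sketch of that idea (not the actual co-mocha implementation, just an illustration):

```javascript
// minimal generator runner, similar in spirit to what co / co-mocha do
function run(genFn) {
  const gen = genFn();
  function step(value) {
    const next = gen.next(value); // resume the generator, feeding in the last result
    if (next.done) return Promise.resolve(next.value);
    // assume the yielded value is a promise; await it, then resume
    return Promise.resolve(next.value).then(step);
  }
  return step();
}

// usage: the "hook" yields promises just like the before() above
run(function* () {
  const result = yield Promise.resolve(42);
  return result + 1;
}).then((v) => console.log(v)); // prints 43
```

This is why, without co-mocha registered, a generator hook silently does nothing: mocha calls the function, gets back an iterator instead of a promise, and moves on.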

Hook on root level

Test hooks are awesome, but sometimes we might want some hooks to run not once per test file, but once for all our tests. Mocha does expose root-level hooks, so to achieve that we will create a new hooks file, root-level-hooks.js,
and put our hooks in there with no describe block around them:


//root-level-hooks.js

require('co-mocha'); //enable use of generators

before(() => {
  //global hook to run once before all tests
});

after(function* () {
  // global after hook that can
  // call generators / promises 
  // using yield / yield*
});

Then at the top of each test file we will require this file in:


//demo test file
require('./root-level-hooks');

//demo test file 2
require('./root-level-hooks');

This way our hooks will run once for the whole test run. This is the perfect place to load up a test db, run some root-level setup, authenticate to the system, etc.

External System Mocking

Some systems / modules call other systems internally. For example, think of a function that processes a payment for an order. That function might need to call a payment gateway, or, after the order is processed, send the shipping information to another system (for example a logistics system, or upload a file to S3). Unit tests are intended to be standalone and not depend on external systems. Therefore we need a way to mock those external systems, so that when the tested code reaches out to them, the test case can respond on their behalf.

In our tests we will use sinon.
Basically, we will mock the calls using a test object whose mocked calls read a response file and send it back.
This makes the mock straightforward:


const requestMock = {
    get: sinon.spy((input) => {
      switch (input.url) {
        case 'http://externalSystemUrl': {
          const campaignsResponse = fs.readFileSync(path.join(__dirname, '../files/testData.json'), 'utf8');
          return Promise.resolve(campaignsResponse.trim());
        }
        case 'http://anotherExternalSystemUrl':
          return Promise.resolve(JSON.stringify('http://s3.amazon.com/your-generated-file'));
        default:
          throw new Error(`unmocked ${input.url} url request, error in test setup`);
      }
    }),
    post: sinon.spy((input) => {
      switch (input.url) {
        case 'http://someServer/check-if-items-invalid': {
          return Promise.resolve(input.body.map(entry => false));
        }
        default:
          throw new Error(`unmocked ${input.url} url request, error in test setup`);
      }
    })
  };

What we are doing here is creating a mock object; in this case we are mocking axios, since my server code uses it, but we can use the same construct to mock any external system.
Our request mock provides get and post methods, just like the axios library does. I'm using sinon.spy to check what URL is requested by the module code, and a switch statement to handle the different URLs requested by the module. Our mock can return URLs, JSON, promises, file contents, or whatever is needed to successfully mock the external system.
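If you've never used sinon.spy, conceptually a spy is just a wrapper that records every call before delegating to the wrapped function. A stripped-down sketch of the idea (sinon's real spies record far more, such as return values and this bindings):

```javascript
// a minimal spy: records each call's arguments, then delegates to the wrapped function
function spy(fn) {
  function wrapped(...args) {
    wrapped.calls.push(args);
    return fn(...args);
  }
  wrapped.calls = [];
  return wrapped;
}

// hypothetical stand-in for one of the mocked axios methods above
const fakeGet = spy((input) => Promise.resolve(`response for ${input.url}`));
fakeGet({ url: 'http://externalSystemUrl' });

console.log(fakeGet.calls.length);    // prints: 1
console.log(fakeGet.calls[0][0].url); // prints: http://externalSystemUrl
```

This recorded call history is what lets a test later assert not just what a function returned, but how it was called.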

const axios = require('axios');
  before(() => {
    sinon.stub(axios, 'get').callsFake(requestMock.get);
    sinon.stub(axios, 'post').callsFake(requestMock.post);
  });

  after(() => {
    axios.get.restore();
    axios.post.restore();
  });

I'm then using the before hook to stub axios's get and post methods with the mock implementations, so when the tested module calls axios it hits my mock rather than performing a real HTTP request.

Then I'm using the after hook to restore the original methods and return to normal.

Test Cases

Mocha lets us create tests very easily. You use the 'it' keyword to create a test, either:


it('Unit test description and expected output', () => {
  // return a value or return a promise.
});

Or using generators


it('Unit test description and expected output', function* () {
  //yield generator or promise.
});

You can also use the done callback, but I prefer not to.
I like to keep code as small as possible, and without any distractions.
However, it's here if you need it:


it('Unit test description and expected output', (done) => {
  // call done when finished some async operation
});


Each test case is composed out of two parts:
1) The test itself
2) Expected result

The tests themselves

Since we have added the mock for the external systems, we can safely use our test code to hit a function, or, if we are testing a REST endpoint, call that endpoint:


chai.request(server)
  .get('/serverPath')
  .then(function (response) { 
   // process response  
  });

//or 

const response = yield chai.request(server)
  .post('/serverPath')
  .send({testObject : { name: 'test'}});

In this example we are testing an endpoint, but calling a function would have been even easier.

Expected Result

The second part involves looking at the results of our test runs, and we will be using chai to examine the responses. Chai provides a long list of ways to inspect responses using expect, should, or assert, whichever you prefer.
I try to use expect, as it doesn't change Object.prototype. Here is a discussion of the differences: expect vs should vs assert.


expect(res).to.have.property('statusCode', 200);
expect(res).to.have.property('body');
assert.isOk(res.statusCode === 201, 'Bad status code');
TestUtils.testForSuccessAndBody(res, expect, 201);

Failing any of these will cause the test to fail.
I normally use a test helper class with a few standard ways to test for a correct response and to compare the returned object to the expected object; we'll look at it shortly.

Test for failures

Using promises, I can also quickly test for failures, to ensure our code doesn't only work properly for valid input, but also handles invalid input correctly.

For example, I can test that code will fail with bad input:


it('GET /endpoint/BADID should return 400 bad request', () =>
      expect(
        chai.request(server).get('/endpoint/BADID')
      ).to.eventually.be.rejectedWith('Bad Request')
    );
//or missing field

it('PUT /endpoint/:id with missing property name should return 400', () =>
      TestUtils.testMissingField(server, 'put', chai, expect,
        `/endpoint/${inputObjectWithoutNameProperty.id}`, inputObjectWithoutNameProperty, 'name')
    );

TestUtils class

TestUtils is a utility class that I created with some expected-result helpers: it makes it easy to test for missing fields, to iterate the body for all the fields I expect, or to check for a simple 200 and a body.


const moment = require('moment'); // TestUtils uses moment for date comparisons

class TestUtils {
  static testMissingField(server, command, chai, expect, url,
    baseObject, fieldToCheck, sendAsArray) {
    const missingNameObj = JSON.parse(JSON.stringify(baseObject));
    delete missingNameObj[fieldToCheck];
    return expect(
      chai.request(server)[command](url)
        .send((sendAsArray) ? [missingNameObj] : missingNameObj)
    ).to.eventually.be.rejectedWith('Bad Request');

  }

  static testAllPropertiesInSrcExistInTarget(expect, assert, srcObj, targetObj) {
    Object.getOwnPropertyNames(srcObj).forEach((propName) => {
      expect(targetObj).to.have.property(propName);
      if (Array.isArray(srcObj[propName])) {
        expect(targetObj[propName].length).to.equal(srcObj[propName].length);
        return;
      }
      if (moment(srcObj[propName], ['YYYY-MM-DD', moment.ISO_8601], true).isValid()) {
        assert.isOk(
          moment.utc(targetObj[propName])
            .isSame(moment.utc(srcObj[propName])),
          `expected ${srcObj[propName]}, got ${targetObj[propName]} when comparing ${propName}`
        );
        return;
      }
      assert.isOk(targetObj[propName] == srcObj[propName], // eslint-disable-line eqeqeq
        `expected ${srcObj[propName]}, got ${targetObj[propName]} when comparing ${propName}`);
    });
  }

  static testForSuccessAndBody(res, expect, code = 200) {
    expect(res).to.have.property('statusCode', code);
    expect(res).to.have.property('body');
  }
}

I then require the TestUtils class in my test file, and can use it to quickly expect or assert different conditions.
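One small detail worth calling out from testMissingField above is the JSON clone-and-delete trick: it removes a field from a deep copy of the fixture without mutating the original object, so the same base object can be reused across many missing-field tests. In isolation (sample object made up):

```javascript
// clone-and-delete: derive an "object missing one field" from a shared fixture
const baseObject = { id: 7, name: 'widget', price: 10 };

const withoutName = JSON.parse(JSON.stringify(baseObject)); // deep copy via JSON round-trip
delete withoutName.name;

console.log(withoutName); // { id: 7, price: 10 }
console.log(baseObject);  // unchanged: { id: 7, name: 'widget', price: 10 }
```

Note the JSON round-trip only works for plain data (it drops functions, undefined, and Dates become strings), which is fine for request-body fixtures like these.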

Mocha tests on circle

When using CircleCI, it's great to get the output of the tests into the $CIRCLE_TEST_REPORTS folder: Circle will then read the output and present you with the results of the tests, rather than you digging through the logs each time to figure out what went right and what went wrong. The CircleCI team has written a whole document about this; see CircleCI Test Artifacts.

In our discussion we will focus on using mocha and getting the reports parsed. To do so, we need mocha to output the results in JUnit XML format. This can be achieved easily using mocha-junit-reporter. This lib allows mocha to run our tests and output the results in the correct format.

So the first step is to run

npm install mocha-junit-reporter --save-dev

And add scripts to package.json to output in JUnit format:


  "scripts": {
    "lint": "node_modules/.bin/eslint .",
    "test": "NODE_ENV=test npm run lint && npm run migrate && npm run test:mocha",
    "test:mocha": "NODE_ENV=test ./node_modules/.bin/mocha --timeout=5000 tests/*.test.js",
    "test:circle-ci-junit-output": "npm run lint -- --format=junit --output-file=junit/eslint.xml && MOCHA_FILE=junit/mocha.xml npm run test:mocha -- --reporter mocha-junit-reporter",
   //other npm commands 
 },

This outputs the information into the junit folder for both eslint (if you are using it) and mocha.
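For reference, the JUnit XML that ends up in the junit folder looks roughly like this (suite names, test names, and timings here are illustrative):

```xml
<testsuites>
  <testsuite name="Test Endpoint GET tests" tests="2" failures="1" time="0.42">
    <testcase name="GET /endpoint returns 200" time="0.12"/>
    <testcase name="GET /endpoint/BADID should return 400 bad request" time="0.08">
      <failure message="expected 400 but got 500"/>
    </testcase>
  </testsuite>
</testsuites>
```

This is the format Circle's test-report parser understands, which is why the reporter swap is all that's needed.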

Now all that is needed is to create a link between your junit folder and $CIRCLE_TEST_REPORTS, which can be done by editing the circle.yml file and adding the following line in the pre step for test:


test:
  pre:
    - mkdir -p $CIRCLE_TEST_REPORTS/junit

If you aren't using docker, you can also add a symbolic link after the creation of the folder - ln -s $CIRCLE_TEST_REPORTS/junit ~/yourProjectRoot/junit

However, if you are using docker-compose or docker run to execute your tests inside a container, you will also need to add a volume that maps your test output to $CIRCLE_TEST_REPORTS.
For docker compose:


volumes:
    - $CIRCLE_TEST_REPORTS/junit:/junit

For docker run you can do the same using the -v flag.
Once that is done, you'll get the report output in Circle after the build finishes.

Good luck!