In 2014, I designed, developed, and deployed two Node.js API servers. One, my NC Traffic Cams API server, is still in use and has seen some pretty great uptime. I’ve learned an incredible amount going through the effort of bringing up an instance (Amazon EC2 or Google Compute Engine), connecting to it, installing all of the necessary software, and then deploying code to it. Each time I do it, I get a little better at it, and the server configuration and code get a little more refined.
Since I’ve never taken a class on Node.js or worked full-time at a company that has deployed a Node.js server, all my learning has come from the internet and hands-on experience. Here are a few of the articles and guides I’ve used (or will be using soon) along the way.
Let’s talk about a technology that has been getting a lot of well-deserved hype lately: Node.js. Node.js is one of the hottest new technologies in Silicon Valley. Currently in use at Microsoft, VMware, eBay, Yahoo, and many other top tech companies, Node.js is a great skill for opening up career opportunities for any software developer.
When I started out using Node.js and async, I didn’t find any good, thorough resources on how to really use the async module. That’s why I decided to write a little cookbook about it.
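To give a flavor of what the async module does, here is a minimal sketch of the control flow behind `async.series`: run callback-style tasks one after another, collect their results in order, and stop at the first error. The `series` function below is a hypothetical re-implementation written here so the example runs without installing the async package; the real module provides this (and much more) as `async.series`.

```javascript
// Minimal sketch of async.series-style control flow (not the real library).
// Each task is a function taking a Node-style callback(err, result).
function series(tasks, done) {
  const results = [];
  (function next(i) {
    if (i === tasks.length) return done(null, results);
    tasks[i]((err, result) => {
      if (err) return done(err); // stop at the first error, like async.series
      results.push(result);
      next(i + 1);
    });
  })(0);
}

series([
  (cb) => setTimeout(() => cb(null, 'one'), 20),
  (cb) => setTimeout(() => cb(null, 'two'), 10),
], (err, results) => {
  console.log(results); // ['one', 'two'] — order follows the task list, not the timers
});
```

Note that the results come back in the order the tasks were listed, even though the second timer fires sooner; that ordering guarantee is a big part of why the module is useful.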
When running a Node.js application in production, you need to keep stability, performance, security, and maintainability in mind. Outlined here are what I think are the best practices for putting Node.js into production.
By the end of this guide, the setup will include three servers: a load balancer (lb) and two app servers (app1 and app2). The load balancer will health-check the app servers and balance traffic between them. The app servers will use a combination of systemd and Node cluster to load balance and route traffic across multiple Node processes on each machine. Deploys will be a one-line command from the developer’s laptop, with zero downtime and no failed requests.