Do you dream of a local development environment that's easy to configure and works independently from the software layers that you are currently not working on? I do!
As a software engineer, I have suffered the pain of starting projects that were not easy to configure. Reading the technical documentation does not help when much of it is outdated, or even worse, missing many steps. I have lost hours of my life trying to understand why my local development environment was not working.
An ideal scenario
As a developer, you have to meet a few prerequisites before contributing to a project. For instance, you must agree to the version-control requirements, and you need to know how to use the project IDE, how to use a package manager, and so on.
But nothing more. You don't need to learn a poorly documented, made-in-house framework just to satisfy the ego of an architect who wanted to reinvent the wheel. You don't need to run an external virtual machine to emulate the production environment. As a developer, you are free to invest your time in improving the code and adding value to the product.
A developer-centered approach to application development
My goal with this article is to describe strategies for building an Angular 8 application in a way that centers the developer experience.
The type of application is incidental. I describe a client application, but we could apply similar techniques to back-end modules. The framework, in this case, is Angular, but we could use similar techniques for practically any framework that you prefer.
Note: As a brief introduction, Angular is an application design framework and development platform for creating efficient and sophisticated single-page apps. You can learn more on the Angular website.
The example application is a simple web app, with authentication, that performs several calls to REST endpoints. I won't offer many details about the domain and the business logic, because those factors don't matter for my discussion.
The primary requirement for this use case is to enhance the developer experience; the strategies follow from that.
Note: In cases where my strategies for resolving use-case requirements directly involve Angular and other software libraries, I will share details about those technologies. However, I am confident that similar options exist for other technologies and frameworks.
Requirement 1: No back-end information in the client application
Imagine the following scenario: A client-side application must perform a couple of `GET` operations, which fetch data for display on a web page. How do you know the host address, the protocol, and the port to call for each REST endpoint?
Typically, I have seen three approaches to resolving this issue:
- Add the back-end information to the application at build time.
- Pass the back-end information to the web application as parameters, or retrieve it from the environment variables.
- Locate the web application and REST service on the same machine. This approach lets the web app call `localhost` at a specific port and path. In that case, we "only" need to hard-code the port and protocol.
Unfortunately, each of these strategies leads to a black hole when developing your web application:
- You need to modify the runtime status while debugging.
- You need to hack the application to simulate the expected startup.
- Worst of all, you need to point to a real shared dev or testing environment.
Strategy: Reverse proxy
The concept of a reverse proxy is quite simple. First, let's consider it as a black-box feature. Suppose that someone configures the machine that is hosting your web app so that when you call yourself (via `localhost`) on a specific path (for instance, `/api`), every call is automatically forwarded to the API server. With this configuration, it does not matter which address, protocol, or port the API server uses.
Note: If you want to look inside the black box, you can learn more about configuring a reverse proxy on Apache HTTPD or NGINX.
Reverse proxy in Angular
Now let's consider a reverse proxy in Angular, using a slightly different scenario. Suppose that your static files are served by the Webpack dev server on port 4200, while a Node.js app serves the APIs on port 3000. Figure 1 shows the flow of this architecture (Credit to https://juristr.com/blog/2016/11/configure-proxy-api-angular-cli/.)
You can easily configure the global variable `PROXY_CONFIG` as part of the Webpack dev-server lifecycle. You can choose to use `proxy.conf.json` or `proxy.conf.js`, depending on your `angular.json` configuration file. Here's an example of a `PROXY_CONFIG` file:
```js
const PROXY_CONFIG = {
  "/api": {
    "target": "http://localhost:3000/",
    "secure": false,
    "logLevel": "debug",
    "changeOrigin": true
  }
};

module.exports = PROXY_CONFIG;
```
Note that every HTTP call must point to `/api`. There is no need to specify any other information. The reverse proxy does the rest for us, like so:
```ts
getPosts(): Observable<any> {
  return this.http.get<any>('/api/posts/');
}
```
As soon as you subscribe to `getPosts()`, it calls the target address (in this case, http://localhost:3000/posts).
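For context, here is a minimal sketch of the kind of Angular service such a `getPosts()` method could live in; the `PostService` name and the `any` payload type are illustrative assumptions rather than code from the original example:

```ts
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class PostService {
  // HttpClient is provided by HttpClientModule, imported in the app module.
  constructor(private http: HttpClient) {}

  // Only the relative /api path is needed; the reverse proxy resolves
  // the host, protocol, and port for us.
  getPosts(): Observable<any> {
    return this.http.get<any>('/api/posts/');
  }
}
```

A component can then simply call `this.postService.getPosts().subscribe(...)` without knowing anything about the back end's location.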
Note: Learn more about setting up an Angular CLI reverse proxy or a Webpack dev server reverse proxy.
Requirement 2: Offline coding (coding without an Internet connection)
When coding, you want to keep your dependencies on the outside world to a minimum. There are many reasons to avoid connecting to a shared remote development machine. The remote machine might be:
- Not recently updated.
- Slow, because of its load.
- Delayed, because you connect to it through a VPN.
- Unavailable, because someone is updating it.
- Unreachable, because your Internet connection is not working.
You also don't want to launch a real instance of the development machine locally, however. Such an instance might:
- Have third-party dependencies that are difficult to mock.
- Be heavy to run, for instance, with a minimum requirement of 32GB of RAM.
- Be connected to a database, in which case you have to either install the database or connect to a real remote instance.
- Be difficult to update because your data are in a historical series, so what is valid today might not be valid tomorrow.
Strategy: Mocking data
There are several solutions to make development fast and agile. For example, you could use containers to provide isolated and reproducible computing environments.
When working on a web app, I believe it makes sense to use mocked APIs. If you are working with REST endpoints, I recommend the `json-server` package, which you can install either globally or locally. If you install `json-server` globally, you can launch it anywhere you like. If you install it locally, you can add it as a dependency of your dev environment and then create a Node Package Manager (npm) script to launch a customized mock server.
The setup is quite intuitive. Suppose that you have a JSON file, `db.json`, as a data source:
```json
{
  "posts": [
    { "id": 1, "title": "json-server", "author": "typicode" }
  ],
  "comments": [
    { "id": 1, "body": "some comment", "postId": 1 }
  ],
  "profile": { "name": "typicode" }
}
```
You can then launch `json-server` from the command line, watching that file:
$ json-server --watch db.json
By default, it starts on `localhost`, port 3000, so if you `GET` http://localhost:3000/posts/1, you will receive the following response:
{ "id": 1, "title": "json-server", "author": "typicode" }
The `GET` is just an example; you can use other HTTP verbs as well. You can also choose whether to save edits to the original file or leave it as it is. The exposed APIs follow the REST standard, and you can sort, filter, paginate, and load remote schemas.
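For example, json-server supports query parameters such as `_page`, `_limit`, and `_sort`, so you can exercise pagination and sorting through the same reverse proxy. The following method is only an illustrative sketch, not part of the original example:

```ts
import { HttpClient, HttpParams } from '@angular/common/http';
import { Observable } from 'rxjs';

// Fetch the first page of posts, sorted by title, through the /api path
// that the reverse proxy forwards to the json-server instance.
export function getFirstPageOfPosts(http: HttpClient): Observable<any> {
  const params = new HttpParams()
    .set('_page', '1')
    .set('_limit', '10')
    .set('_sort', 'title');

  return http.get<any>('/api/posts', { params });
}
```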
As I mentioned earlier, you can create your own script and run a `json-server` instance programmatically:
```js
const jsonServer = require('json-server')
const server = jsonServer.create()
const router = jsonServer.router('db.json')
const middlewares = jsonServer.defaults()

server.use(middlewares)
server.use(router)
server.listen(3000, () => {
  console.log('JSON Server is running')
})
```
Mocked data in Angular
I can suggest a couple of strategies for making your Angular app work with mocked data. Both are based on the proxy.
Strategy 1: Configure the reverse proxy, pointing the target to http://localhost:3000/, so that every call goes to the `json-server` instance.
Strategy 2: Add a custom mocking rule to the proxy, so that it uses the `bypass` parameter to return data for a specific path:
```js
const PROXY_CONFIG = {
  '/api': {
    'target': 'http://localhost:5000',
    'bypass': function (req, res, proxyOptions) {
      switch (req.url) {
        case '/api/json1':
          const objectToReturn1 = {
            value1: 1,
            value2: 'value2',
            value3: 'value3'
          };
          res.end(JSON.stringify(objectToReturn1));
          return true;
        case '/api/json2':
          const objectToReturn2 = {
            value1: 2,
            value2: 'value3',
            value3: 'value4'
          };
          res.end(JSON.stringify(objectToReturn2));
          return true;
      }
    }
  }
};

module.exports = PROXY_CONFIG;
```
Requirement 3: Dev code should not affect production code, and vice versa
How many times have you seen something like this:
if (devMode) {...} else {...}
This code is an example of what we call a code smell: it mixes code meant for development with code intended for production only. A build targeted for production should not contain code related to development, and vice versa. The solution is to use different builds for different targets.

This kind of code smell shows up in many different use cases. For instance, your application could be hosted behind a single sign-on (SSO) authentication system. The first time that a user requests the application in a browser, the request is redirected to an external page, which asks for credentials.
When you are in dev mode, you don't want to deal with the redirect. A less complicated authentication service is welcome.
Strategy: Use a file-replacement policy
In Angular, you can specify a file-replacement policy for each build configuration. You can easily use this feature to replace a simple authentication service used for development purposes with the more robust and complex one required for production:
"configurations": { "production": { "fileReplacements": [ { "replace": "src/app/core/services/authenticator.ts", "with": "src/app/core/services/authenticator.prod.ts" } ], ... ... }
The codebase now has two separate authentication services, which are configured for use in two different environments. Most importantly, only one service will be included in the final artifact, based on the specific build parameter:
$ npm run ng -- build -c production
Requirement 4: Know what version of the application is currently running in production
Do you know at all times which version of your application is running on a given host? You can use build metadata, such as the build time or the last-commit identifier, to determine whether the current environment already includes a recent change or bug fix.
Strategy: Use angular-build-info
The `angular-build-info` command-line tool produces a `build.ts` file inside of your Angular project's `src/` folder. Using this tool, you can import the `build.ts` file inside of your Angular application and use the exported `buildInfo` variable:
```ts
import { Component } from '@angular/core';
import { environment } from '../environments/environment';
import { buildInfo } from '../build';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  constructor() {
    console.log(
      `\nBuild Info:\n` +
      ` ❯ Environment: ${environment.production ? 'production' : 'development'}\n` +
      ` ❯ Build Version: ${buildInfo.version}\n` +
      ` ❯ Build Timestamp: ${buildInfo.timestamp}\n`
    );
  }
}
```
Note that the `build.ts` content must be versioned, so you need to execute the following script at build time:
$ angular-build-info --no-message --no-user --no-hash
The parameters are optional and let you customize the produced `buildInfo`.
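The exact content of the generated file depends on the tool's version and on the flags above, but `build.ts` essentially exports the `buildInfo` object consumed in the component shown earlier. The sketch below is an assumption about its shape, and the values are invented placeholders:

```ts
// build.ts (generated) -- hypothetical example of the exported build metadata.
export const buildInfo = {
  version: '1.0.0',
  timestamp: 'Fri Jan 01 2021 10:00:00 GMT+0000'
};
```

Because it is plain TypeScript, importing it adds no runtime dependency to the application.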
Requirement 5: A fast and effective quality check in the pipeline
Whether you are launching a build pipeline locally or have just opened a pull request, it is great to have an overview of the overall project quality.
Strategy: Static code analysis with a quality gate
When you need to measure the quality of software, static code analysis can help. It provides several metrics about readability, maintainability, security, and so on, without actually executing the software itself.

If you can measure quality metrics, then you can set up formal checks that help you evaluate the process used to develop and release new parts of the software. Such formal checks are called quality gates.
Static code analysis must be fast, with clean results. You don't want to scroll through pages of redundant logged output. The phase and the order in which you place the quality gate in the pipeline matter.
For this requirement, I would place the quality gate before test execution and immediately after compilation or transpiling (assuming that is happening). I recommend this placement for two reasons:
- It avoids wasting time checking the static code if it does not compile or transpile.
- It avoids wasting time executing a whole suite of tests for code that does not meet the minimum requirements that the team has defined.
It is important to keep in mind that a pipeline execution requires resources. A good developer should never push a commit without executing a local quality check first. You can also reduce the number of files to be checked by caching previous results, or by performing static code analysis only on the files involved in the change list.
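As a rough illustration of that last idea, the following Node script (a sketch of my own, assuming Git is available and ESLint is the analyzer in use) runs static analysis only on the files touched in the current change list:

```ts
// lint-changed.ts: run static analysis only on changed source files.
import { execSync } from 'child_process';

// Collect added, copied, and modified files from the current change list.
const changedFiles = execSync('git diff --name-only --diff-filter=ACM HEAD', {
  encoding: 'utf8'
})
  .split('\n')
  .filter((file) => /\.(ts|js)$/.test(file));

if (changedFiles.length === 0) {
  console.log('No source files changed; skipping static analysis.');
} else {
  // Delegate to the locally installed ESLint binary.
  execSync(`npx eslint ${changedFiles.join(' ')}`, { stdio: 'inherit' });
}
```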
Conclusion
When you start working on a new project, non-technical requirements should not slow down your productivity curve.
As a developer, you should not have to waste time on configuration issues, or a development machine that sometimes works and sometimes doesn't. Take care of these issues up-front. Happy developers spend more time coding than resolving technical impediments.
Improving your developer experience is not a one-time process, but an incremental one. There is always room for automation. There is always room for improvement.