The new test framework built-in to Node.js 18.8.0

; Date: Mon Sep 05 2022

Tags: Node.JS

New in Node.js v18.8.0 is a test framework highly reminiscent of Mocha. It is marked as experimental, but it points to a future where our unit tests will not require an external dependency like Mocha.

I have been using the Mocha unit test framework for years, along with the Assert flavor of Chai for assertions. This is a very useful combination, but when setting up a test suite you have to remember to install these packages as dependencies, and then you have to remember to keep those dependencies up to date.
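Concretely, every new test suite starts with something like this (the exact package list varies by project):

$ npm install --save-dev mocha chai
$ npm outdated     # later, check for stale test dependencies
$ npm update       # and refresh them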

What if there were a simpler way, such as a testing framework built-in to the programming platform? It might simplify your work if it was not necessary to install anything extra to write unit tests.

Recently I came across such a package in Node.js. While creating tests for a new package I'm developing, I typed in describe('test description') and immediately Visual Studio Code auto-imported the following:

import { describe } from 'node:test';

This may happen to many other developers, who will wonder what node:test is. Since Visual Studio Code autocompleted this into existence, we might end up using it when we expected to use describe from Mocha. With Mocha there is nothing to import to get the describe and it functions. If we don't notice this new import, our test output will suddenly look weird, causing confusion.

One choice is to simply delete the spurious import statement and go on with Mocha. But the same thing happens with the it function, and it keeps happening every time I type a describe or it call. For this article, let's take a look at what's going on and see whether the node:test framework is useful.

This may be your first time seeing a module name with the node: prefix. This is new in Node.js, and appears to have come into use around Node.js 14. It is a prefix for the module names that are baked into Node.js. Because ES module specifiers can be URLs, the node: prefix is treated like a URL scheme. The result is that you can import node:path instead of path.
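For example, both of these statements resolve to the same built-in module; the prefixed form simply makes it explicit that a Node.js core module is being loaded:

// Both forms load the same built-in path module.
import { basename } from 'node:path';
import { dirname } from 'path';

console.log(basename('/tmp/example.txt'));  // example.txt
console.log(dirname('/tmp/example.txt'));   // /tmp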

What this means is that a new baked-in module, node:test, exists. The relevant documentation is in the Node.js API reference for node:test and node:assert.

The node:assert module has been in Node.js all along. It was always possible to write unit tests with that package without relying on any test framework. The value of test frameworks is in reporting. Mocha supports several test results reporting formats, for example. The new node:test framework supports TAP, a venerable format for reporting test results.
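For instance, a framework-less test has always been possible with nothing more than node:assert. A minimal sketch:

import { strictEqual } from 'node:assert';

// A failed assertion throws AssertionError and the process exits non-zero;
// otherwise we fall through to the success message.
strictEqual(1 + 1, 2, '1 + 1 should equal 2');
console.log('all assertions passed');

That works, but the only "report" you get is an uncaught exception, which is where test frameworks earn their keep.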

Compared to Mocha and Chai, or other 3rd party test frameworks, the features of node:test and node:assert are minimal. The advantage is that these are built-in.

Setting up tests with node:test and node:assert

To learn about this let's create a few dummy tests.

The code shown below is available on GitHub: (github.com) https://github.com/robogeek/node-test-examples

Setup (after making sure Node.js 18.8 or later is installed):

$ mkdir node-test
$ cd node-test
$ npm init -y

The first advantage is that we're not required to install anything else.

Create a file named test1.mjs and type the following:

import test from 'node:test';
import * as assert from 'node:assert';

You are using the ES6 module format nowadays, aren't you? The Node.js documentation also shows the CJS module format if you prefer it.
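If you do prefer CJS, the equivalent requires look like this:

const test = require('node:test');
const assert = require('node:assert');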

Note that we must use node:test rather than test, because the latter will not work.

test('simple-test', function() {
    assert.strictEqual(1, 1);
});

test('fail-test', function() {
    assert.strictEqual(1, 2);
});

test('async test', async () => {
    let result = await new Promise((resolve, reject) => {
        setImmediate(() => {
          resolve(1);
        });
    });
    assert.strictEqual(1, result);
});

I said earlier the API is similar to Mocha, which describes tests using describe and it functions. Here we are declaring tests with a function named test; the structure is like Mocha's it function, just under a different name. We'll see describe and it used later.

Instead of using test as shown here, you can import the packages like so:

import * as test from 'node:test';
import * as assert from 'node:assert';

If you do this, executing the functions from the node:test package requires prefixing the functions with test., so in this example you would use test.test(...) instead of test(...).
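In other words, continuing with the namespace import shown above:

// With a namespace import, the test function is reached as test.test
test.test('simple-test', function() {
    assert.strictEqual(1, 1);
});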

One glaring difference is that this framework happily accepts lightweight arrow functions, whereas Mocha discourages them because an arrow function cannot reach the Mocha context through this.
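For comparison, here is a Mocha-style sketch of why that context matters, for example when adjusting a per-test timeout:

// Mocha exposes its context as `this` only inside regular functions
it('waits for a slow operation', function(done) {
    this.timeout(5000);       // not possible from an arrow function
    setTimeout(done, 100);    // stand-in for slow asynchronous work
});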

To execute the tests in test1.mjs:

$ node --test test1.mjs
TAP version 13
# Subtest: /home/david/Projects/nodejs/node-test/test1.mjs
not ok 1 - /home/david/Projects/nodejs/node-test/test1.mjs
  ---
  duration_ms: 0.071377364
  failureType: 'subtestsFailed'
  exitCode: 1
  stdout: |-
    TAP version 13
    # Subtest: simple-test
    ok 1 - simple-test
      ---
      duration_ms: 0.001492959
      ...
    # Subtest: fail-test
    not ok 2 - fail-test
      ---
      duration_ms: 0.001347439
      failureType: 'testCodeFailure'
      error: |-
        Expected values to be strictly equal:
        
        1 !== 2
        
      code: 'ERR_ASSERTION'
      stack: |-
        TestContext.<anonymous> (file:///home/david/Projects/nodejs/node-test/test1.mjs:10:12)
        Test.runInAsyncScope (node:async_hooks:203:9)
        Test.run (node:internal/test_runner/test:483:25)
        Test.processPendingSubtests (node:internal/test_runner/test:249:27)
        Test.postRun (node:internal/test_runner/test:553:19)
        Test.run (node:internal/test_runner/test:511:10)
      ...
    # Subtest: async test
    ok 3 - async test
      ---
      duration_ms: 0.002986099
      ...
    1..3
    # tests 3
    # pass 2
    # fail 1
    # cancelled 0
    # skipped 0
    # todo 0
    # duration_ms 0.057180525
    
  stderr: |-
    (node:1638685) ExperimentalWarning: The test runner is an experimental feature. This feature could change at any time
    (Use `node --trace-warnings ...` to show where the warning was created)
    
  error: 'test failed'
  code: 'ERR_TEST_FAILURE'
  ...
1..1
# tests 1
# pass 0
# fail 1
# cancelled 0
# skipped 0
# todo 0
# duration_ms 0.134320729

Running tests requires the --test flag. With this flag alone, Node.js searches for test modules on its own. Because I've explicitly named a test module on the command line, Node.js executes only the named module.
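Put another way, these two invocations behave differently:

$ node --test              # search for test modules and run whatever is found
$ node --test test1.mjs    # run only the named test module

For convenience I also put these commands into package.json scripts, which is why later output in this article shows npm run test:describe:

"scripts": {
    "test:describe": "node --test test-describe.mjs"
}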

The test results are printed in the TAP format. TAP stands for Test Anything Protocol, and there is a good writeup on (en.wikipedia.org) Wikipedia.

The TAP output format is not human friendly; it was designed to feed test results data to reporting software. I find it difficult to thread my way through these results, especially the failures.

So far these tests are all at the top level. We often want to add hierarchy to help with comprehension:

test('test container', () => {
    test('test 1', () => { assert.equal(1, 1); });
    test('test 2', () => { assert.equal(2, 2); });
});
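An alternative way to express the same hierarchy, which we will look at more closely later in this article, uses the context parameter passed to the test callback:

test('test container', async (t) => {
    // t.test creates a subtest of the enclosing test; awaiting each call
    // keeps the parent from finishing before its subtests do
    await t.test('test 1', () => { assert.equal(1, 1); });
    await t.test('test 2', () => { assert.equal(2, 2); });
});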

Another thing I didn't know I wanted, but which the node:test framework supplies, is the ability to skip tests.

test('SKIP', { skip: true }, () => { assert.equal(1, 1); });
test.skip('SKIP2', () => { assert.equal(1, 1); });

In the first form, the object passed between the test name and test function is used to pass options. One of those options, skip, says to ignore (skip) this test. Using .skip achieves the same purpose.
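The skip option also accepts a string explaining why the test is skipped, and the TestContext has a skip method for deciding at runtime. Continuing in test1.mjs (the conditions here are made up for illustration):

// Skip with an explanatory message via the options object.
test('SKIP3', { skip: 'not implemented yet' }, () => {});

// Or decide inside the test, using the context parameter.
test('SKIP4', (t) => {
    if (process.platform === 'win32') {
        t.skip('not supported on Windows');
        return;
    }
    assert.equal(1, 1);
});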

Using describe and it just like with Mocha

As mentioned earlier, the node:test module has describe and it functions similar to what Mocha provides.

Create a new file named test-describe.mjs containing:

import { describe, it } from 'node:test';
import * as assert from 'node:assert';

describe('Describe Container', function() {

    it('should equal 1 and 1', () => { assert.equal(1, 1); });
    it('should equal 2 and 2', () => { assert.equal(2, 2); });
    it('should FAIL equal 1 and 3', () => { assert.equal(1, 3); });
});

The import statement is changed to import these functions rather than to import the whole test package.

One interesting experiment is to comment out the import from node:test and then use Mocha to run the test suite. Apart from the arrow functions, which Mocha tolerates but discourages, the test suite works under Mocha unchanged.

The options object is available for describe and it, for example to support skipping tests. You can also use describe.skip and it.skip as desired.
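Continuing the test-describe.mjs example, that looks roughly like this:

describe('partly skipped container', function() {
    it('runs normally', () => { assert.equal(1, 1); });
    it.skip('skipped via it.skip', () => { assert.equal(1, 2); });
    it('skipped via options', { skip: 'waiting on a fix' }, () => { assert.equal(1, 2); });
});

describe.skip('an entirely skipped suite', function() {
    it('never runs', () => { assert.equal(1, 1); });
});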

Attempting to use before, beforeEach, after, and afterEach

Another useful set of test functions in Mocha is before, beforeEach, after, and afterEach. These execute before and after test functions and are useful for setting up and tearing down test conditions.

Going by the Node.js documentation one can create a file named test-before.mjs like this:

import {
    describe, it, before, beforeEach, after, afterEach
} from 'node:test';
import * as assert from 'node:assert';

let did_before = false;
let did_before_each = 0;
let did_after = false;
let did_after_each = 0;

function print_status(stage) {
    console.log(`${stage}: before ${did_before} before_each ${did_before_each} after ${did_after} after_each ${did_after_each}`);
}

describe('Describe Container', function() {
    before(() => {
        did_before = true;
        print_status('BEFORE');
        // console.log('IN BEFORE');
    });
    beforeEach(() => {
        did_before_each++;
        print_status('BEFORE EACH');
        // console.log('IN BEFORE EACH');
    });

    after(() => {
        did_after = true;
        print_status('AFTER');
        // console.log('IN AFTER');
    });
    afterEach(() => {
        did_after_each++;
        print_status('AFTER EACH');
        // console.log('IN AFTER EACH');
    });

    it('should equal 1 and 1', () => {
        console.log('1 and 1');
        assert.equal(1, 1);
        // assert.equal(did_before, true);
        // assert.equal(did_before_each > 1, true);
    });
    it('should equal 2 and 2', () => {
        console.log('2 and 2');
        assert.equal(2, 2);
        // assert.equal(did_after, false);
        // assert.equal(did_after_each > 1, true);
    });

});

print_status('FINISH');

According to the documentation this should work. But confusion reigns, especially when all tests succeed. Hence this test carries extra code that demonstrates several things at once.

First, the output is very terse and does not show any information from the sub-tests:

$ node --test test-before.mjs 
TAP version 13
# Subtest: /home/david/Projects/nodejs/node-test/test-before.mjs
ok 1 - /home/david/Projects/nodejs/node-test/test-before.mjs
  ---
  duration_ms: 0.069937654
  ...
1..1
# tests 1
# pass 1
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 0.130741092

This may have to do with the TAP results format. It appears to show results solely for the top-level test. But I believe we want to know the status of every test case on every test suite run.

Secondly, what happened to the console.log output? In Mocha console.log is simply printed to the terminal, but none of that shows here.

How do we know that the before/after functions executed? In this example I've added extra status variables, and we can insert assert calls to check their values. Uncommenting those assert calls shows that the status variables did change, indicating that the before/after functions did execute.

Wouldn't it be more convenient if console.log output just printed? See this issue in the issue queue: (github.com) https://github.com/nodejs/node/issues/44372

One of the assertions you can uncomment does fail: assert.equal(did_after_each > 1, true). Uncommenting it gives us the opportunity to see that subtest data becomes visible only when a test fails: in that case node:test prints results for all subtests, and that includes the console.log output.

TAP version 13
# Subtest: /home/david/Projects/nodejs/node-test/test-before.mjs
not ok 1 - /home/david/Projects/nodejs/node-test/test-before.mjs
  ---
  duration_ms: 75.884131
  failureType: 'subtestsFailed'
  exitCode: 1
  stdout: |-
    FINISH: before false before_each 0 after false after_each 0
    BEFORE: before true before_each 0 after false after_each 0
    BEFORE EACH: before true before_each 1 after false after_each 0
    1 and 1
    AFTER EACH: before true before_each 1 after false after_each 1
    BEFORE EACH: before true before_each 2 after false after_each 1
    2 and 2
    AFTER: before true before_each 2 after true after_each 1
    TAP version 13
    # Subtest: Describe Container
        # Subtest: should equal 1 and 1
        ok 1 - should equal 1 and 1
          ---
          duration_ms: 0.729738
          ...
        # Subtest: should equal 2 and 2
        not ok 2 - should equal 2 and 2
          ---
          duration_ms: 1.389875
          failureType: 'testCodeFailure'
          error: 'false == true'
          code: 'ERR_ASSERTION'
          stack: |-
            Object.<anonymous> (file:///home/david/Projects/nodejs/node-test/test-before.mjs:49:16)
            ItTest.runInAsyncScope (node:async_hooks:203:9)
            ItTest.run (node:internal/test_runner/test:483:25)
            async Suite.processPendingSubtests (node:internal/test_runner/test:249:7)
          ...
        1..2
    not ok 1 - Describe Container
      ---
      duration_ms: 6.009156
      failureType: 'subtestsFailed'
      error: '1 subtest failed'
      code: 'ERR_TEST_FAILURE'
      ...
    1..1
    # tests 1
    # pass 0
    # fail 1
    # cancelled 0
    # skipped 0
    # todo 0
    # duration_ms 13.041273
    
  stderr: |-
    (node:2457472) ExperimentalWarning: The test runner is an experimental feature. This feature could change at any time
    (Use `node --trace-warnings ...` to show where the warning was created)
    
  error: 'test failed'
  code: 'ERR_TEST_FAILURE'
  ...
1..1
# tests 1
# pass 0
# fail 1
# cancelled 0
# skipped 0
# todo 0
# duration_ms 79.742712

Notice that following the line reading stdout: |- is the output we expect from the console.log statements.

For the successful test, we only see the line reading duration_ms. For the failing test, more information is printed including the stack trace.

The console.log output is available, but only if there is a failing test. When there is a failing test, the test results include more data.

What I want is a simple method for inserting tracing into test cases. When debugging tests in Mocha, I insert console.log statements to see what is available in the test. It's not that I'm a clueless newbie programmer; I've been doing this for decades, and the print statement remains a very useful debugging tool. You just have to remember to remove the debugging printouts before shipping to customers, etc.

It turns out that console.log may not be the right tool here. Later we'll investigate the TestContext.diagnostic function, which lets us print messages that are captured in the TAP results as diagnostic information, and is therefore better integrated with the TAP results format.

Why aren't results printed for it tests when all tests succeed?

There's another issue in the previous section. The TAP output only shows data for the top-level test; it does not print results for the subtests. To see what I mean, let's return to a modified version of the test shown earlier.

import { describe, it } from 'node:test';
import * as assert from 'node:assert';

describe('Describe Container', function() {

    it('should equal 1 and 1', () => { assert.equal(1, 1); });
    it('should equal 2 and 2', () => { assert.equal(2, 2); });
    /* it('should FAIL equal 1 and 3', () => {
        assert.equal(1, 3); 
    }); */
});

Every it test here succeeds; the failing one is commented out. Run this test and you'll see this output:

$ npm run test:describe

> node-test-examples@1.0.0 test:describe
> node --test test-describe.mjs

TAP version 13
# Subtest: /home/david/Projects/nodejs/node-test/test-describe.mjs
ok 1 - /home/david/Projects/nodejs/node-test/test-describe.mjs
  ---
  duration_ms: 68.194493
  ...
1..1
# tests 1
# pass 1
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 71.352425

We're told there is one test, and that the one test succeeded, but that's not accurate. There is a describe containing two it tests. Isn't that a single test container holding two tests? Shouldn't that be what is printed in the output?

Uncomment the failing test and you see this output:

TAP version 13
# Subtest: /home/david/Projects/nodejs/node-test/test-describe.mjs
not ok 1 - /home/david/Projects/nodejs/node-test/test-describe.mjs
  ---
  duration_ms: 72.890198
  failureType: 'subtestsFailed'
  exitCode: 1
  stdout: |-
    TAP version 13
    # Subtest: Describe Container
        # Subtest: should equal 1 and 1
        ok 1 - should equal 1 and 1
          ---
          duration_ms: 0.530474
          ...
        # Subtest: should equal 2 and 2
        ok 2 - should equal 2 and 2
          ---
          duration_ms: 0.213345
          ...
        # Subtest: should FAIL equal 1 and 3
        not ok 3 - should FAIL equal 1 and 3
          ---
          duration_ms: 1.318011
          failureType: 'testCodeFailure'
          error: '1 == 3'
          code: 'ERR_ASSERTION'
          stack: |-
            Object.<anonymous> (file:///home/david/Projects/nodejs/node-test/test-describe.mjs:10:16)
            ItTest.runInAsyncScope (node:async_hooks:203:9)
            ItTest.run (node:internal/test_runner/test:483:25)
            Suite.processPendingSubtests (node:internal/test_runner/test:249:27)
            ItTest.postRun (node:internal/test_runner/test:567:19)
            ItTest.run (node:internal/test_runner/test:511:10)
            async Suite.processPendingSubtests (node:internal/test_runner/test:249:7)
          ...
        1..3
    not ok 1 - Describe Container
      ---
      duration_ms: 5.370754
      failureType: 'subtestsFailed'
      error: '1 subtest failed'
      code: 'ERR_TEST_FAILURE'
      ...
    1..1
    # tests 1
    # pass 0
    # fail 1
    # cancelled 0
    # skipped 0
    # todo 0
    # duration_ms 11.093222
    
  stderr: |-
    (node:2435157) ExperimentalWarning: The test runner is an experimental feature. This feature could change at any time
    (Use `node --trace-warnings ...` to show where the warning was created)
    
  error: 'test failed'
  code: 'ERR_TEST_FAILURE'
  ...
1..1
# tests 1
# pass 0
# fail 1
# cancelled 0
# skipped 0
# todo 0
# duration_ms 77.252073

This time the subtest information is printed, including the failing test. Shouldn't subtest information always be printed?

I'm not deeply familiar with TAP, and it may be that this is how TAP is designed to be used. But looking at this user experience, I want to always know the status of all test cases. This is especially true if test results are being fed into a reporting system.

Test diagnostics allow for tracing execution and data values

Earlier we saw that console.log output is not treated as I expected. It does not simply appear on the console, and you can view it only if there are failing tests.

The node:test framework contains a feature that is the rough equivalent to console.log but integrates better with the TAP results format.

Consider:

import { test } from 'node:test';
import * as assert from 'node:assert';

test('Describe Container', async function(t) {

    t.beforeEach((t) => {
        t.diagnostic(`DIAG before ${t.name}`);
    });

    t.afterEach((t) => {
        t.diagnostic(`DIAG after ${t.name}`);
    });

    await t.test('should equal 1 and 1', () => { assert.equal(1, 1); });
    await t.test('should equal 2 and 2', () => { assert.equal(2, 2); });
    /* await t.test('should FAIL equal 1 and 3', (t) => {
        t.diagnostic(`In FAIL test ${t.name}`);
        assert.equal(1, 3); 
    }); */

});

The parameter t to the callback for the test function is an instance of the TestContext class. The documentation says this class is not available for instantiation by test authors. Instead the test function callback receives it as a parameter. The t parameter is not available with the it function, only with the test function.

The t.beforeEach and t.afterEach functions are equivalent to beforeEach and afterEach, meaning they are executed before and after each test invocation. And, instead of nesting it or test invocations, you use t.test. Also notice that with an async outer function we must await the t.test calls.

The t.diagnostic function allows us to write TAP diagnostics to the output. In other words, it's similar to what I normally do with console.log but is integrated with the TAP output.

As we said earlier, the full test results only appear when a test fails, and the same rule controls whether diagnostic output is printed: to see it, there must be a failing test somewhere in the run. Uncomment the failing test shown here and you will see this output:

TAP version 13
# Subtest: /home/david/Projects/nodejs/node-test/test-diagnostic.mjs
not ok 1 - /home/david/Projects/nodejs/node-test/test-diagnostic.mjs
  ---
  duration_ms: 75.932386
  failureType: 'subtestsFailed'
  exitCode: 1
  stdout: |-
    TAP version 13
    # Subtest: Describe Container
        # Subtest: should equal 1 and 1
        ok 1 - should equal 1 and 1
          ---
          duration_ms: 1.290981
          ...
        # DIAG before should equal 1 and 1
        # DIAG after should equal 1 and 1
        # Subtest: should equal 2 and 2
        ok 2 - should equal 2 and 2
          ---
          duration_ms: 0.365355
          ...
        # DIAG before should equal 2 and 2
        # DIAG after should equal 2 and 2
        # Subtest: should FAIL equal 1 and 3
        not ok 3 - should FAIL equal 1 and 3
          ---
          duration_ms: 1.349598
          failureType: 'testCodeFailure'
          error: '1 == 3'
          code: 'ERR_ASSERTION'
          stack: |-
            TestContext.<anonymous> (file:///home/david/Projects/nodejs/node-test/test-diagnostic.mjs:20:16)
            Test.runInAsyncScope (node:async_hooks:203:9)
            Test.run (node:internal/test_runner/test:483:25)
            async TestContext.<anonymous> (file:///home/david/Projects/nodejs/node-test/test-diagnostic.mjs:18:5)
            async Test.run (node:internal/test_runner/test:484:9)
          ...
        # DIAG before should FAIL equal 1 and 3
        # In FAIL test should FAIL equal 1 and 3
        1..3
    not ok 1 - Describe Container
      ---
      duration_ms: 6.449412
      failureType: 'subtestsFailed'
      error: '1 subtest failed'
      code: 'ERR_TEST_FAILURE'
      ...
    1..1
    # tests 1
    # pass 0
    # fail 1
    # cancelled 0
    # skipped 0
    # todo 0
    # duration_ms 12.896941
    
  stderr: |-
    (node:2436416) ExperimentalWarning: The test runner is an experimental feature. This feature could change at any time
    (Use `node --trace-warnings ...` to show where the warning was created)
    
  error: 'test failed'
  code: 'ERR_TEST_FAILURE'
  ...
1..1
# tests 1
# pass 0
# fail 1
# cancelled 0
# skipped 0
# todo 0
# duration_ms 81.035372

The diagnostic output is printed with the # prefix. As before, we only get to see it because the run contains a failing test; when every test passes, this subtest detail stays hidden. Previously the console.log output was printed in sections starting with stdout: |-.

The t.diagnostic calls therefore act like console.log but appear to be more correctly integrated with TAP.

Summary

Adding node:test to Node.js gives us a minimal test framework. While it is not as feature-filled as Mocha and Chai, it can be used right away.

Why bring this feature into the core of Node.js? I see a similar issue in the Bun project, where someone proposed adding a YAML parser/serializer to Bun. Should a platform like Node.js, or Bun, or PHP, or Python, add every possible feature? Each added feature makes the platform bigger, and creates more administrative overhead for the project team, because they must maintain/test/update/upgrade that feature.

Why should a test framework be built-in to the core platform? I don't quite understand the reasoning. The rationale for including YAML support in the core platform is stronger, since many applications need to read/write YAML files. But, I still don't grok why YAML must be added when there are plenty of YAML implementations. The same holds true for testing frameworks, since it is easy to install a feature-filled framework.

A quick check of the PHP and Java documentation does not show a built-in test framework; there are assert statements, but no framework for organizing tests. Python is an exception, since it ships the unittest module in its standard library.

As a user of Node.js or Bun, I want the platform to be as robust as possible. That means the project teams should focus on the most important features. Is adding a test framework of high enough importance? Or is adding a test framework a distraction?

I trust that the Node.js team already debated that issue. Their choice to incorporate a simple test framework into the platform says there is a rationale for doing so.

Having used this, what do I see as lacking:

  • Having an option for console.log output or other debugging output to simply show up on the terminal.
  • A system for 3rd party plugins supporting other test results formats. I especially want a human-readable results format like the default provided by Mocha. This should be provided by 3rd parties to minimize the bloat on the Node.js platform.

Should one abandon Mocha or other existing test frameworks? That's unlikely, since the existing frameworks offer more capabilities. Because the node:test framework should be kept minimal to limit platform bloat, it's unlikely to gain enough features to compete with the 3rd party frameworks. That means the existing 3rd party frameworks will remain relevant, because they have more freedom to add any desired feature.

About the Author(s)

(davidherron.com) David Herron : David Herron is a writer and software engineer focusing on the wise use of technology. He is especially interested in clean energy technologies like solar power, wind power, and electric cars. David worked for nearly 30 years in Silicon Valley on software ranging from electronic mail systems, to video streaming, to the Java programming language, and has published several books on Node.js programming and electric vehicles.
