Using promises
A Promise is an object representing the eventual completion or failure of an asynchronous operation. Since most people are consumers of already-created promises, this guide will explain consumption of returned promises before explaining how to create them.
Essentially, a promise is a returned object to which you attach callbacks, instead of passing callbacks into a function. Imagine a function, createAudioFileAsync(), which asynchronously generates a sound file given a configuration record and two callback functions: one called if the audio file is successfully created, and the other called if an error occurs.
Here's some code that uses createAudioFileAsync():
function successCallback(result) {
  console.log(`Audio file ready at URL: ${result}`);
}

function failureCallback(error) {
  console.error(`Error generating audio file: ${error}`);
}
createAudioFileAsync(audioSettings, successCallback, failureCallback);
If createAudioFileAsync() were rewritten to return a promise, you would attach your callbacks to it instead:
createAudioFileAsync(audioSettings).then(successCallback, failureCallback);
This convention has several advantages. We will explore each one.
Chaining
A common need is to execute two or more asynchronous operations back to back, where each subsequent operation starts when the previous operation succeeds, with the result from the previous step. In the old days, doing several asynchronous operations in a row would lead to the classic callback hell:
doSomething(function (result) {
  doSomethingElse(result, function (newResult) {
    doThirdThing(newResult, function (finalResult) {
      console.log(`Got the final result: ${finalResult}`);
    }, failureCallback);
  }, failureCallback);
}, failureCallback);
With promises, we accomplish this by creating a promise chain. The API design of promises makes this great, because callbacks are attached to the returned promise object, instead of being passed into a function.
Here's the magic: the then() function returns a new promise, different from the original:
const promise = doSomething();
const promise2 = promise.then(successCallback, failureCallback);
This second promise (promise2) represents the completion not just of doSomething(), but also of the successCallback or failureCallback you passed in, which can be other asynchronous functions returning a promise. When that's the case, any callbacks added to promise2 get queued behind the promise returned by either successCallback or failureCallback.
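As a brief, hypothetical sketch of this queueing behavior (fetchData here stands in for a successCallback that itself returns a promise):

const promise2 = doSomething().then(fetchData, failureCallback);

// Callbacks attached to promise2 don't run as soon as doSomething() settles;
// they wait until the promise returned by fetchData has settled as well.
promise2.then((finalResult) => {
  console.log(`Got the final result: ${finalResult}`);
});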
Note: If you want a working example to play with, you can use the following template to create any function returning a promise:
function doSomething() {
  return new Promise((resolve) => {
    setTimeout(() => {
      // Other things to do before completion of the promise
      console.log("Did something");
      // The fulfillment value of the promise
      resolve("https://example.com/");
    }, 200);
  });
}
The implementation is discussed in the Creating a Promise around an old callback API section below.
With this pattern, you can create longer chains of processing, where each promise represents the completion of one asynchronous step in the chain. In addition, the arguments to then are optional, and catch(failureCallback) is short for then(null, failureCallback), so if your error handling code is the same for all steps, you can attach it to the end of the chain:
doSomething()
  .then(function (result) {
    return doSomethingElse(result);
  })
  .then(function (newResult) {
    return doThirdThing(newResult);
  })
  .then(function (finalResult) {
    console.log(`Got the final result: ${finalResult}`);
  })
  .catch(failureCallback);
You might see this expressed with arrow functions instead:
doSomething()
  .then((result) => doSomethingElse(result))
  .then((newResult) => doThirdThing(newResult))
  .then((finalResult) => {
    console.log(`Got the final result: ${finalResult}`);
  })
  .catch(failureCallback);
Note: Arrow function expressions can have an implicit return; so, () => x is short for () => { return x; }.
doSomethingElse and doThirdThing can return any value. If they return promises, the chain first waits for that promise to settle, and the next callback receives its fulfillment value rather than the promise itself. It is important to always return promises from then callbacks, even if the promise always resolves to undefined. If the previous handler started a promise but did not return it, there's no way to track its settlement anymore, and the promise is said to be "floating".
doSomething()
  .then((url) => {
    // Missing `return` keyword in front of fetch(url).
    fetch(url);
  })
  .then((result) => {
    // result is undefined, because nothing is returned from the previous
    // handler. There's no way to know the return value of the fetch()
    // call anymore, or whether it succeeded at all.
  });
By returning the result of the fetch call (which is a promise), we can both track its completion and receive its value when it completes.
doSomething()
  .then((url) => {
    // `return` keyword added
    return fetch(url);
  })
  .then((result) => {
    // result is a Response object
  });
Floating promises could be worse if you have race conditions: if the promise from the last handler is not returned, the next then handler will be called early, and any value it reads may be incomplete.
const listOfIngredients = [];
doSomething()
  .then((url) => {
    // Missing `return` keyword in front of fetch(url).
    fetch(url)
      .then((res) => res.json())
      .then((data) => {
        listOfIngredients.push(data);
      });
  })
  .then(() => {
    console.log(listOfIngredients);
    // listOfIngredients will always be [], because the fetch request hasn't completed yet.
  });
Therefore, as a rule of thumb, whenever your operation encounters a promise, return it and defer its handling to the next then handler.
const listOfIngredients = [];
doSomething()
  .then((url) => {
    // `return` keyword now included in front of fetch call.
    return fetch(url)
      .then((res) => res.json())
      .then((data) => {
        listOfIngredients.push(data);
      });
  })
  .then(() => {
    console.log(listOfIngredients);
    // listOfIngredients will now contain data from fetch call.
  });
Even better, you can flatten the nested chain into a single chain, which is simpler and makes error handling easier. The details are discussed in the Nesting section below.
doSomething()
  .then((url) => fetch(url))
  .then((res) => res.json())
  .then((data) => {
    listOfIngredients.push(data);
  })
  .then(() => {
    console.log(listOfIngredients);
  });
Using async/await can help you write code that's more intuitive and resembles synchronous code. Below is the same example using async/await:
async function logIngredients() {
  const url = await doSomething();
  const res = await fetch(url);
  const data = await res.json();
  listOfIngredients.push(data);
  console.log(listOfIngredients);
}
Note how the code looks exactly like synchronous code, except for the await keywords in front of promises. One of the only tradeoffs is that it may be easy to forget the await keyword, a mistake that typically only surfaces when there's a type mismatch (e.g. trying to use a promise as a value).
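A minimal sketch of that failure mode, reusing the doSomething() helper from earlier:

async function getJson() {
  const url = await doSomething();
  // Forgot `await` here: `res` is a pending Promise rather than a Response,
  // so `res.json` is undefined and the next line throws a TypeError.
  const res = fetch(url);
  const data = await res.json();
  return data;
}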
async/await builds on promises. For example, doSomething() is the same function as before, so there's minimal refactoring needed to change from promises to async/await. You can read more about the async/await syntax in the async functions and await references.
Note: async/await has the same concurrency semantics as normal promise chains. await within one async function does not stop the entire program, only the parts that depend on its value, so other async jobs can still run while the await is pending.
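As a small sketch of that point, using a wait() helper like the one defined later in this guide:

const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function slowJob() {
  await wait(100); // Only slowJob() is suspended here.
  console.log("slow job done");
}

async function fastJob() {
  console.log("fast job done");
}

slowJob();
fastJob();
// Logs "fast job done" first, then "slow job done" about 100 ms later.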
Error handling
You might recall seeing failureCallback three times in the pyramid of doom earlier, compared to only once at the end of the promise chain:
doSomething()
  .then((result) => doSomethingElse(result))
  .then((newResult) => doThirdThing(newResult))
  .then((finalResult) => console.log(`Got the final result: ${finalResult}`))
  .catch(failureCallback);
If there's an exception, the browser will look down the chain for .catch() handlers or onRejected. This is very much modeled after how synchronous code works:
try {
  const result = syncDoSomething();
  const newResult = syncDoSomethingElse(result);
  const finalResult = syncDoThirdThing(newResult);
  console.log(`Got the final result: ${finalResult}`);
} catch (error) {
  failureCallback(error);
}
This symmetry with asynchronous code culminates in the async/await syntax:
async function foo() {
  try {
    const result = await doSomething();
    const newResult = await doSomethingElse(result);
    const finalResult = await doThirdThing(newResult);
    console.log(`Got the final result: ${finalResult}`);
  } catch (error) {
    failureCallback(error);
  }
}
Promises solve a fundamental flaw with the callback pyramid of doom by catching all errors, even thrown exceptions and programming errors. This is essential for functional composition of asynchronous operations. All errors are now handled by the catch() method at the end of the chain, and you should almost never need to use try/catch without using async/await.
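For instance, a minimal sketch of a programming error being converted into a rejection and handled by the chain's catch() (unknownMethod is deliberately nonexistent):

doSomething()
  .then((url) => {
    // `url` is a string, so this call throws a TypeError.
    return url.unknownMethod();
  })
  .catch((error) => {
    // The thrown TypeError becomes a rejection and is handled here.
    console.error(`Caught: ${error}`);
  });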
Nesting
In the examples above involving listOfIngredients, the first one has one promise chain nested in the return value of another then() handler, while the second one uses an entirely flat chain. Simple promise chains are best kept flat without nesting, as nesting can be a result of careless composition.
Nesting is a control structure to limit the scope of catch statements. Specifically, a nested catch only catches failures in its scope and below, not errors higher up in the chain outside the nested scope. When used correctly, this gives greater precision in error recovery:
doSomethingCritical()
  .then((result) =>
    doSomethingOptional(result)
      .then((optionalResult) => doSomethingExtraNice(optionalResult))
      .catch((e) => {}),
  ) // Ignore if optional stuff fails; proceed.
  .then(() => moreCriticalStuff())
  .catch((e) => console.error(`Critical failure: ${e.message}`));
Note that the optional steps here are nested, with the nesting caused not by the indentation, but by the placement of the outer ( and ) parentheses around the steps.
The inner error-silencing catch handler only catches failures from doSomethingOptional() and doSomethingExtraNice(), after which the code resumes with moreCriticalStuff(). Importantly, if doSomethingCritical() fails, its error is caught by the final (outer) catch only, and does not get swallowed by the inner catch handler.
In async/await, this code looks like:
async function main() {
  try {
    const result = await doSomethingCritical();
    try {
      const optionalResult = await doSomethingOptional(result);
      await doSomethingExtraNice(optionalResult);
    } catch (e) {
      // Ignore failures in optional steps and proceed.
    }
    await moreCriticalStuff();
  } catch (e) {
    console.error(`Critical failure: ${e.message}`);
  }
}
Note: If you don't have sophisticated error handling, you very likely don't need nested then handlers. Instead, use a flat chain and put the error handling logic at the end.
Chaining after a catch
It's possible to chain after a failure, i.e. a catch, which is useful to accomplish new actions even after an action failed in the chain. Read the following example:
doSomething()
  .then(() => {
    throw new Error("Something failed");

    console.log("Do this");
  })
  .catch(() => {
    console.error("Do that");
  })
  .then(() => {
    console.log("Do this, no matter what happened before");
  });
This will output the following text:
Did something
Do that
Do this, no matter what happened before
Note: The text "Do this" is not displayed because the "Something failed" error caused a rejection.
In async/await, this code looks like:
async function main() {
  try {
    await doSomething();
    throw new Error("Something failed");

    console.log("Do this");
  } catch (e) {
    console.error("Do that");
  }

  console.log("Do this, no matter what happened before");
}
Promise rejection events
If a promise rejection event is not handled by any handler, it bubbles to the top of the call stack, and the host needs to surface it. On the web, whenever a promise is rejected, one of two events is sent to the global scope (generally, this is either the window or, if being used in a web worker, it's the Worker or other worker-based interface). The two events are:
- unhandledrejection: Sent when a promise is rejected but there is no rejection handler available.
- rejectionhandled: Sent when a handler is attached to a rejected promise that has already caused an unhandledrejection event.
In both cases, the event (of type PromiseRejectionEvent) has as members a promise property indicating the promise that was rejected, and a reason property that provides the reason given for the promise to be rejected.
These make it possible to offer fallback error handling for promises, as well as to help debug issues with your promise management. These handlers are global per context, so all errors will go to the same event handlers, regardless of source.
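For example, a minimal sketch of a browser-side fallback handler using the unhandledrejection event:

window.addEventListener("unhandledrejection", (event) => {
  // event.promise is the rejected promise; event.reason is its rejection reason.
  console.warn(`Unhandled rejection: ${event.reason}`);
  // Prevent the default handling, such as logging the error to the console.
  event.preventDefault();
});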
In Node.js, handling promise rejection is slightly different. You capture unhandled rejections by adding a handler for the Node.js unhandledRejection event (notice the difference in capitalization of the name), like this:
process.on("unhandledRejection", (reason, promise) => {
  // Add code here to examine the "promise" and "reason" values
});
For Node.js, to prevent the error from being logged to the console (the default action that would otherwise occur), adding that process.on() listener is all that's necessary; there's no need for an equivalent of the browser runtime's preventDefault() method.
However, if you add that process.on listener but don't also have code within it to handle rejected promises, they will just be dropped on the floor and silently ignored. So ideally, you should add code within that listener to examine each rejected promise and make sure it was not caused by an actual code bug.
Composition
There are four composition tools for running asynchronous operations concurrently: Promise.all(), Promise.allSettled(), Promise.any(), and Promise.race().
We can start operations at the same time and wait for them all to finish like this:
Promise.all([func1(), func2(), func3()]).then(([result1, result2, result3]) => {
  // use result1, result2 and result3
});
If one of the promises in the array rejects, Promise.all() immediately rejects the returned promise and aborts the other operations. This may cause unexpected state or behavior. Promise.allSettled() is another composition tool that ensures all operations are complete before resolving.
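A minimal sketch of Promise.allSettled(), which reports each outcome instead of rejecting early; every result object has a status of "fulfilled" or "rejected":

Promise.allSettled([func1(), func2(), func3()]).then((results) => {
  for (const result of results) {
    if (result.status === "fulfilled") {
      console.log("Succeeded with:", result.value);
    } else {
      console.log("Failed with:", result.reason);
    }
  }
});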
These methods all run promises concurrently: a sequence of promises is started simultaneously, and they do not wait for each other. Sequential composition is possible using some clever JavaScript:
[func1, func2, func3]
  .reduce((p, f) => p.then(f), Promise.resolve())
  .then((result3) => {
    /* use result3 */
  });
In this example, we reduce an array of asynchronous functions down to a promise chain. The code above is equivalent to:
Promise.resolve()
  .then(func1)
  .then(func2)
  .then(func3)
  .then((result3) => {
    /* use result3 */
  });
This can be made into a reusable compose function, which is common in functional programming:
const applyAsync = (acc, val) => acc.then(val);
const composeAsync =
  (...funcs) =>
  (x) =>
    funcs.reduce(applyAsync, Promise.resolve(x));
The composeAsync() function accepts any number of functions as arguments and returns a new function that accepts an initial value to be passed through the composition pipeline:
const transformData = composeAsync(func1, func2, func3);
const result3 = transformData(data);
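Note that because the composed pipeline is asynchronous, transformData() returns a promise, so the result3 above is itself a promise; consuming the final value might look like this (a minimal sketch):

transformData(data).then((result3) => {
  /* use result3 */
});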
Sequential composition can also be done more succinctly with async/await:
let result;
for (const f of [func1, func2, func3]) {
  result = await f(result);
}
/* use last result (i.e. result3) */
However, before you compose promises sequentially, consider if it's really necessary: unless one promise's execution depends on another's result, it's better to run promises concurrently so that they don't unnecessarily block each other.
Cancellation
Promise itself has no first-class protocol for cancellation, but you may be able to directly cancel the underlying asynchronous operation, typically using AbortController.
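For example, a minimal sketch of cancelling an underlying fetch() call with an AbortController (the signal option and the AbortError rejection are part of the standard Fetch API):

const controller = new AbortController();

fetch("https://example.com/", { signal: controller.signal })
  .then((response) => response.text())
  .then((text) => console.log(text))
  .catch((err) => {
    // When abort() is called, the fetch promise rejects with an AbortError.
    if (err.name === "AbortError") {
      console.log("Request was cancelled");
    } else {
      throw err;
    }
  });

// Cancel the request after 2 seconds if it hasn't completed yet.
setTimeout(() => controller.abort(), 2000);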
Creating a Promise around an old callback API
A Promise can be created from scratch using its constructor. This should be needed only to wrap old APIs.
In an ideal world, all asynchronous functions would already return promises. Unfortunately, some APIs still expect success and/or failure callbacks to be passed in the old way. The most obvious example is the setTimeout() function:
setTimeout(() => saySomething("10 seconds passed"), 10 * 1000);
Mixing old-style callbacks and promises is problematic. If saySomething() fails or contains a programming error, nothing catches it. This is intrinsic to the design of setTimeout().
Luckily, we can wrap setTimeout() in a promise. The best practice is to wrap the callback-accepting functions at the lowest possible level, and then never call them directly again:
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

wait(10 * 1000)
  .then(() => saySomething("10 seconds"))
  .catch(failureCallback);
The promise constructor takes an executor function that lets us resolve or reject a promise manually. Since setTimeout() doesn't really fail, we left out reject in this case. For more information on how the executor function works, see the Promise() reference.
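As a sketch of the same pattern for an API that can fail, the callback-based createAudioFileAsync() described at the start of this guide could be wrapped by passing resolve and reject as its success and failure callbacks:

function createAudioFile(audioSettings) {
  return new Promise((resolve, reject) => {
    // The old-style API calls the first callback on success
    // and the second one on failure.
    createAudioFileAsync(audioSettings, resolve, reject);
  });
}

createAudioFile(audioSettings)
  .then((result) => console.log(`Audio file ready at URL: ${result}`))
  .catch((error) => console.error(`Error generating audio file: ${error}`));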
Timing
Lastly, we will look into the more technical details about when the registered callbacks get called.
Guarantees
In the callback-based API, when and how the callback gets called depends on the API implementor. For example, the callback may be called synchronously or asynchronously:
function doSomething(callback) {
  if (Math.random() > 0.5) {
    callback();
  } else {
    setTimeout(() => callback(), 1000);
  }
}
The above design is strongly discouraged because it leads to the so-called "state of Zalgo". In the context of designing asynchronous APIs, this means a callback is called synchronously in some cases but asynchronously in other cases, creating ambiguity for the caller. For further background, see the article Designing APIs for Asynchrony, where the term was first formally presented. This API design makes side effects hard to analyze:
let value = 1;
doSomething(() => {
  value = 2;
});
console.log(value); // 1 or 2?
On the other hand, promises are a form of inversion of control: the API implementor does not control when the callback gets called. Instead, the job of maintaining the callback queue and deciding when to call the callbacks is delegated to the promise implementation, and both the API user and the API developer automatically get strong semantic guarantees, including:
- Callbacks added with then() will never be invoked before the completion of the current run of the JavaScript event loop.
- These callbacks will be invoked even if they were added after the success or failure of the asynchronous operation that the promise represents (see the sketch after this list).
- Multiple callbacks may be added by calling then() several times. They will be invoked one after another, in the order in which they were inserted.
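For example, a minimal sketch of the second guarantee, attaching a callback long after the promise has already settled:

const promise = Promise.resolve("already settled");

setTimeout(() => {
  // Even though the promise settled well before this handler is attached,
  // the handler is still invoked with the fulfillment value.
  promise.then((value) => console.log(value)); // "already settled"
}, 1000);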
To avoid surprises, functions passed to then() will never be called synchronously, even with an already-resolved promise:
Promise.resolve().then(() => console.log(2));
console.log(1);
// Logs: 1, 2
Instead of running immediately, the passed-in function is put on a microtask queue, which means it runs later (only after the function which created it exits, and when the JavaScript execution stack is empty), just before control is returned to the event loop; i.e. pretty soon:
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

wait(0).then(() => console.log(4));
Promise.resolve()
  .then(() => console.log(2))
  .then(() => console.log(3));
console.log(1); // 1, 2, 3, 4
Task queues vs. microtasks
Promise callbacks are handled as microtasks, whereas setTimeout() callbacks are handled as tasks on the task queue.
const promise = new Promise((resolve, reject) => {
  console.log("Promise callback");
  resolve();
}).then((result) => {
  console.log("Promise callback (.then)");
});

setTimeout(() => {
  console.log("event-loop cycle: Promise (fulfilled)", promise);
}, 0);

console.log("Promise (pending)", promise);
The code above will output:
Promise callback
Promise (pending) Promise {<pending>}
Promise callback (.then)
event-loop cycle: Promise (fulfilled) Promise {<fulfilled>}
For more details, refer to Tasks vs. microtasks.
When promises and tasks collide
If you run into situations in which you have promises and tasks (such as events or callbacks) which are firing in unpredictable orders, it's possible you may benefit from using a microtask to check status or balance out your promises when promises are created conditionally.
If you think microtasks may help solve this problem, see the microtask guide to learn more about how to use queueMicrotask() to enqueue a function as a microtask.
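As a brief, hypothetical sketch of that idea, a function that sometimes answers from a local cache (synchronously) and sometimes from a fetch (asynchronously) can use queueMicrotask() so its callback always runs asynchronously and in a consistent order; the cache, loadData(), and URL below are made up for illustration:

const cache = new Map();

function loadData(key, callback) {
  if (cache.has(key)) {
    // Defer the callback to a microtask so the caller always observes
    // asynchronous behavior, whether or not the value was cached.
    queueMicrotask(() => callback(cache.get(key)));
    return;
  }
  fetch(`https://example.com/data/${key}`)
    .then((response) => response.json())
    .then((data) => {
      cache.set(key, data);
      callback(data);
    });
}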
See also
- Promise
- async function
- await
- Promises/A+ specification
- We have a problem with promises on pouchdb.com (2015)