How To Avoid Hitting API Rate Limits Using TypeScript | by Bret Cameron | Sep, 2022


Creating a TypeScript class to send requests in batches

Rate limits are an important part of API security, helping to prevent malicious activity and reduce strain on server resources. However, for API users, they can also create headaches — and it’s not only less-than-perfect code that creates problems. Without a strategy in place, spikes in legitimate traffic can also lead to the dreaded 429 "Too Many Requests" error status.

So, how can we make sure we’re not sending too many requests? In this article, we’ll look into a simple pattern that will help us batch up our asynchronous requests into intervals, which can be configured to the specific rate limits of the APIs we’re working with. The examples below are written using TypeScript.

When working with a rate limit, we will typically be limited to a certain number of requests within a certain period: say, twenty requests every five seconds. Class syntax provides a useful way of managing our requests and allows us to create a separate instance for each API we may be working with.

At a minimum, our class should allow us to set the number of requests allowed in a given interval and the time (in milliseconds) each interval lasts.
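A minimal sketch of that starting point might look like this (the field names are my own assumptions, chosen to match how the class is used later in the article):

```typescript
// Options for configuring a scheduler to a specific API's rate limit.
interface RequestSchedulerOptions {
  requestsPerInterval: number;
  intervalTime: number; // in milliseconds
}

// A sketch of the class skeleton: it stores the rate-limit settings
// and tracks how many requests have been queued in the current interval.
class RequestScheduler {
  private requestsPerInterval: number;
  private intervalTime: number;
  private queuedRequests = 0;

  constructor(options: RequestSchedulerOptions) {
    this.requestsPerInterval = options.requestsPerInterval;
    this.intervalTime = options.intervalTime;
  }
}
```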

Notice that we have also created a queuedRequests property, which we can use to keep track of how many requests have been queued in the current interval.

Building the RequestScheduler class in this way allows us to create multiple instances, each configured to a different API’s requirements. For example:
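The services and limits below are purely illustrative values of my own, with a minimal stub of the class so the snippet runs standalone:

```typescript
// Minimal stub of RequestScheduler so this example is self-contained;
// the real class holds more logic than this.
class RequestScheduler {
  constructor(
    public readonly options: { requestsPerInterval: number; intervalTime: number }
  ) {}
}

// Say one API allows 20 requests every 5 seconds...
const commentsApiScheduler = new RequestScheduler({
  requestsPerInterval: 20,
  intervalTime: 5_000,
});

// ...while another allows 100 requests per minute.
const searchApiScheduler = new RequestScheduler({
  requestsPerInterval: 100,
  intervalTime: 60_000,
});
```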

Next, let’s create a schedule method that takes a function and, if the current interval’s quota is already full, delays its execution via setTimeout until a later interval starts.
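A sketch of that method, wrapped in enough of the class to run on its own (the fixed-window bookkeeping here is my own reconstruction):

```typescript
class RequestScheduler {
  private requestsPerInterval: number;
  private intervalTime: number;
  private intervalStart: number; // start time of the interval being filled
  private queuedRequests = 0;    // requests assigned to that interval

  constructor(options: { requestsPerInterval: number; intervalTime: number }) {
    this.requestsPerInterval = options.requestsPerInterval;
    this.intervalTime = options.intervalTime;
    this.intervalStart = Date.now();
  }

  // Runs fn immediately if the current interval still has capacity;
  // otherwise assigns it to a later interval and waits via setTimeout.
  async schedule<T>(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    // If the current interval has fully elapsed, start a fresh one.
    if (now >= this.intervalStart + this.intervalTime) {
      this.intervalStart = now;
      this.queuedRequests = 0;
    }
    // If the interval being filled is at capacity, spill into the next one.
    if (this.queuedRequests >= this.requestsPerInterval) {
      this.intervalStart += this.intervalTime;
      this.queuedRequests = 0;
    }
    this.queuedRequests += 1;

    const delay = this.intervalStart - now;
    if (delay > 0) {
      await new Promise<void>((resolve) => setTimeout(resolve, delay));
    }
    return fn();
  }
}
```

Note that this enforces fixed windows measured from the first request of each batch; a strictly rolling-window limit would need a queue of request timestamps instead.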

Our full class might look something like this. In the code snippet below, I have also added a debugMode option to the constructor, so we can log some useful information about when our functions are triggered and confirm that the method is working.
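Here is one possible version of the complete class. The debug logging is implemented with console.time and console.timeLog (an assumption on my part, chosen because it produces output in the format of the logs shown later):

```typescript
interface RequestSchedulerOptions {
  requestsPerInterval: number;
  intervalTime: number; // in milliseconds
  debugMode?: boolean;
}

class RequestScheduler {
  private requestsPerInterval: number;
  private intervalTime: number;
  private debugMode: boolean;
  private intervalStart: number; // start time of the interval being filled
  private queuedRequests = 0;    // requests assigned to that interval
  private totalRequests = 0;     // running count, used only for debug logs

  constructor(options: RequestSchedulerOptions) {
    this.requestsPerInterval = options.requestsPerInterval;
    this.intervalTime = options.intervalTime;
    this.debugMode = options.debugMode ?? false;
    this.intervalStart = Date.now();
    // Start a timer so debug logs show elapsed time since construction.
    if (this.debugMode) console.time("RequestScheduler");
  }

  async schedule<T>(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    // If the current interval has fully elapsed, start a fresh one.
    if (now >= this.intervalStart + this.intervalTime) {
      this.intervalStart = now;
      this.queuedRequests = 0;
    }
    // If the interval being filled is at capacity, spill into the next one.
    if (this.queuedRequests >= this.requestsPerInterval) {
      this.intervalStart += this.intervalTime;
      this.queuedRequests = 0;
      if (this.debugMode) {
        console.log(`--- RequestScheduler: Wait ${this.intervalTime}ms ---`);
      }
    }
    this.queuedRequests += 1;
    this.totalRequests += 1;
    const requestNumber = this.totalRequests;

    const delay = this.intervalStart - now;
    if (delay > 0) {
      await new Promise<void>((resolve) => setTimeout(resolve, delay));
    }
    if (this.debugMode) {
      console.timeLog("RequestScheduler", `#${requestNumber}`, fn.name);
    }
    return fn();
  }
}
```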

Let’s put our code to the test! First, we’ll create a new instance of our class in debug mode with relatively few requests, so it’s easier to see what’s happening.

const requestScheduler = new RequestScheduler({
  requestsPerInterval: 3,
  intervalTime: 5000,
  debugMode: true,
});

Next, I’ll create an asynchronous function to mock making an HTTP request.

async function mockHttpRequest() {
  return new Promise((resolve) => {
    resolve("Hello World");
  });
}

Finally, I’ll try to execute this function 11 times, wrapping each execution inside the schedule method.

(async () => {
  for (let i = 0; i < 11; i++) {
    await requestScheduler.schedule(mockHttpRequest);
  }
})();

Running this code, we’ll see logs similar to this:

RequestScheduler: 0.818ms #1 mockHttpRequest
RequestScheduler: 3.459ms #2 mockHttpRequest
RequestScheduler: 4.751ms #3 mockHttpRequest
--- RequestScheduler: Wait 5000ms ---
RequestScheduler: 5.007s #4 mockHttpRequest
RequestScheduler: 5.010s #5 mockHttpRequest
RequestScheduler: 5.012s #6 mockHttpRequest
--- RequestScheduler: Wait 5000ms ---
RequestScheduler: 10.014s #7 mockHttpRequest
RequestScheduler: 10.017s #8 mockHttpRequest
RequestScheduler: 10.019s #9 mockHttpRequest
--- RequestScheduler: Wait 5000ms ---
RequestScheduler: 15.021s #10 mockHttpRequest
RequestScheduler: 15.024s #11 mockHttpRequest

Success! To play around with this code yourself, check out this CodePen. (Make sure to open the console to see the logs!)

The power of this pattern lies partly in the fact that we can use it to coordinate requests across our application. One real-world example of where I’ve found this approach useful is during the build step of a Next.js app when I am statically generating HTML at build time, partly dependent on third-party APIs.

My website has a third-party Content Management System (for hosting blog posts) and a third-party Applicant Tracking System (for hosting job listings). Each has its own rate limit, and the queries that make requests to each API can be found in many different files. Plus, a lot about when and how the pages are built is controlled under the hood by Next.js.

To handle this, I export the following RequestScheduler instances:

export const cmsScheduler = new RequestScheduler({
  requestsPerInterval: 10,
  intervalTime: 1_000,
});

export const atsScheduler = new RequestScheduler({
  requestsPerInterval: 50,
  intervalTime: 10_000,
});

Then inside the build step for a page, I can call, for example:

const blogPosts = await cmsScheduler.schedule(getBlogPosts);
const jobListings = await atsScheduler.schedule(getJobListings);

Now, as the website scales, I no longer need to worry about hitting the rate limits of these third-party APIs!
