
Rust as Part of a Microservice Architecture
by applied.math.coding, June 2022


Rust + TypeScript


Some time ago, I wrote a small account of how to use Rust within a full-stack web application (see here). This post follows up on that by offering an alternative approach to including Rust in the application.

In general, we might say:

The most apparent strengths of Rust are its speed at solving CPU-intensive tasks and its very efficient handling of memory. The latter comes without the need for any garbage collector.

As nice as these features are, they come with a small downside: they require us to stick to a very strict ownership model. Then again, not everyone considers this a downside, since sticking to the ownership model produces very stable and maintainable code.

However, there are times when speed of development is a big factor in a project. In such cases it is desirable to split the application into several parts and to use the most suitable language for each of them. One way to realize this concept is a microservice architecture, and that is exactly what we are going to look at in this story.

Prerequisites for understanding the contents of this article are basic knowledge of Rust and TypeScript. Both can be obtained from here and here, respectively.

In order to have a simple example, we will implement three microservices altogether:

  1. main-server: This provides a public API and hosts a small Vue-based client. The language is TypeScript, and we will make use of the very popular framework NestJS.
  2. calc-engine: This is a Rust server that provides methods for doing some CPU-intensive calculation.
  3. rabbitmq: This serves as the message broker between the above-mentioned services and is backed by RabbitMQ.

Finally, all of this will be deployed with Docker Compose to make it easily shareable.

In what follows, I will restrict the descriptions to the most essential parts of the implementation in order to keep the reading experience as pleasant as possible. The entire code can be viewed in this repository, which is intended for educational purposes only.

Calc-engine:

This part is a typical cargo-based binary application. The dependencies are as follows:

amiquip = { version = "0.4.2", default-features = false }
serde_json = { version = "1.0.81" }
rayon = { version = "1.5.3" }
num_cpus = { version = "1.13.1" }
dotenv = { version = "0.15.0" }

The important one is amiquip, which enables communication with RabbitMQ. The crate rayon is used to run as many computations in parallel as possible. In particular, we set up a pool of threads like so:

fn setup_pool() -> rayon::ThreadPool {
    rayon::ThreadPoolBuilder::new()
        .num_threads(num_cpus::get())
        .build()
        .unwrap()
}

For CPU-heavy tasks, it usually makes sense not to run more threads in parallel than there are available CPUs.

The connection to RabbitMQ is obtained like so:

fn setup_connection() -> Connection {
    if let Ok(c) = Connection::insecure_open(&format!(
        "amqp://{}:{}@{}:{}",
        env::var("RABBITMQ_USER").unwrap(),
        env::var("RABBITMQ_PWD").unwrap(),
        env::var("RABBITMQ_HOST").unwrap(),
        env::var("RABBITMQ_PORT").unwrap()
    )) {
        println!("Connected to rabbitmq!");
        c
    } else {
        println!("Failed to connect to rabbitmq. Will retry in 2s.");
        std::thread::sleep(std::time::Duration::from_secs(2));
        setup_connection()
    }
}

Besides following the conventional approach of taking certain values from an .env file, we cope with connection failures by retrying every 2s. This approach suits microservice setups especially well, since every involved service should be able to recover from the temporary failure of other services.

As the communication style between the services, we are going to use RPC (remote procedure call). This is not always the best fit, but it serves the purpose here. Our 'heavy' CPU task will be to compute the solution steps of the Tower of Hanoi (sketched below), which explains the naming in what follows.
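The solver itself is not the point of this story, but to fix ideas, a recursive implementation could look like this (the name solve_hanoi is my own choice; the repository may structure it differently):

// Recursively collects the moves that transfer `n` disks from peg
// `from` to peg `to`, using `via` as the auxiliary peg.
fn solve_hanoi(n: u64, from: char, to: char, via: char, steps: &mut Vec<String>) {
    if n == 0 {
        return;
    }
    solve_hanoi(n - 1, from, via, to, steps);
    steps.push(format!("move disk {} from {} to {}", n, from, to));
    solve_hanoi(n - 1, via, to, from, steps);
}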

The first thing we do here is ensure that a queue named hanoi exists on RabbitMQ. This is done by opening a channel over the connection and then simply declaring the queue.
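With amiquip, this amounts to just two calls (a minimal sketch; the ? error handling is only for brevity):

use amiquip::{Connection, QueueDeclareOptions};

fn run() -> amiquip::Result<()> {
    let mut connection = setup_connection();
    // A channel multiplexes the underlying TCP connection to RabbitMQ.
    let channel = connection.open_channel(None)?;
    // Declaring is idempotent: the queue is created only if it does not exist yet.
    let queue = channel.queue_declare("hanoi", QueueDeclareOptions::default())?;
    // ... consume messages from `queue` here (see below) ...
    Ok(())
}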

This queue is used for message production and consumption across the services. We can listen for messages being put onto the queue by creating a consumer via queue.consume(...) and iterating over consumer.receiver(). This works similarly to listening on one of Rust's internal channels and ensures that the code inside the loop runs message by message.

Each new message must be matched against its type, which in principle is either ConsumerMessage::Delivery or some connection-related variant that we treat collectively as other. Besides the body, the message carries two other fields, namely reply_to and correlation_id. The body represents the actual content (here, the number of disks in the Hanoi game), reply_to names an 'exclusive' queue that is used to send back the result, and correlation_id is used to match the request with the response.

So a client sending a message to the hanoi queue attaches a unique correlation_id that enables it to later pick up the result written onto the reply_to queue by the server. In order to process as many CPU tasks in parallel as possible, we simply put the handling of the current request into a closure and hand it to the thread pool. Each such closure contains the logic to send the result back over the reply_to queue.

Sending is done by using an instance of Exchange::direct for the channel_for_msg channel. The other case is handled by simply re-establishing this whole listener construct every 2s. Again, this makes our service immune to connection failures or restarts of RabbitMQ.
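Putting these pieces together, the listener might look roughly like the following sketch. It reuses solve_hanoi from above and, to stay short, runs the computation inline, whereas the repository hands each request to the rayon pool:

use amiquip::{
    AmqpProperties, ConsumerMessage, ConsumerOptions, Exchange, Publish, QueueDeclareOptions,
};

fn listen() -> amiquip::Result<()> {
    let mut connection = setup_connection();
    let channel = connection.open_channel(None)?;
    let queue = channel.queue_declare("hanoi", QueueDeclareOptions::default())?;
    // The default direct exchange routes by queue name, so publishing with
    // reply_to as the routing key delivers straight to the reply queue.
    let exchange = Exchange::direct(&channel);
    let consumer = queue.consume(ConsumerOptions::default())?;
    for message in consumer.receiver().iter() {
        match message {
            ConsumerMessage::Delivery(delivery) => {
                let n: u64 = String::from_utf8_lossy(&delivery.body)
                    .parse()
                    .unwrap_or(0);
                let reply_to = delivery.properties.reply_to().cloned().unwrap_or_default();
                let corr_id = delivery.properties.correlation_id().cloned().unwrap_or_default();
                // The repository hands this part to the rayon pool in a closure;
                // it runs inline here to keep the sketch short.
                let mut steps = Vec::new();
                solve_hanoi(n, 'A', 'C', 'B', &mut steps);
                exchange.publish(Publish::with_properties(
                    steps.join("\n").as_bytes(),
                    reply_to,
                    AmqpProperties::default().with_correlation_id(corr_id),
                ))?;
                consumer.ack(delivery)?;
            }
            other => {
                println!("Consumer ended: {:?}", other);
                break; // the caller re-establishes the listener after 2s
            }
        }
    }
    Ok(())
}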

Main-server:

First of all, the main server offers a public API that lets clients request the solution steps for the Hanoi game:

@Get('/hanoi')
getHello(@Query('n', ParseIntPipe) n: number): Promise<string> {
  ...
  return this.appService.makeHanoi(n);
}

If you have never been introduced to NestJS, no problem: it is the most straightforward web framework to learn, and it introduces its principal concepts in this easy-to-read overview.

As you can see, the endpoint delegates to a method makeHanoi, which finally ends up at the method MessageService.sendMessage. This service contains the logic of connecting to RabbitMQ, but now from the main-server side:

async sendMessage(n: number): Promise<string> {
  const channel = await this.ensureChannel();
  const replyTo = await this.ensureResponseQueue(channel);
  const correlationId = this.generateUuid();
  channel.sendToQueue(
    this.HANOI_QUEUE,
    Buffer.from(`${n}`),
    {
      correlationId,
      replyTo
    });
  return lastValueFrom(
    this.queueResponse.pipe(
      filter(
        m => m?.properties.correlationId === correlationId
      ),
      first(),
      map(m => m.content.toString())
    )
  );
}

Here you can recognize the previously discussed message parameters, namely replyTo and correlationId. Since we are on the producer side, these values are produced here and attached to the message that is sent onto the hanoi queue.

After sending onto the hanoi queue, we register a one-time listener on the queue that is used as replyTo. More precisely, this latter queue is consumed internally to emit values to the following BehaviorSubject:

private queueResponse = new BehaviorSubject<Message>(null);

As you can see, the code above subscribes to the first message with a matching correlationId and translates this into a Promise using the operator lastValueFrom.

If you have never seen constructs like these, they are based on the popular reactivity library rxjs.
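For completeness, the internal consumption of the reply queue might look roughly like this (a sketch assuming amqplib; the field name replyQueue is my own, not necessarily the one used in the repository):

import { Channel, Message } from 'amqplib';

private replyQueue: string | null = null;

// Creates an exclusive, server-named reply queue once and forwards
// every message it receives into the queueResponse subject.
private async ensureResponseQueue(channel: Channel): Promise<string> {
  if (this.replyQueue) {
    return this.replyQueue;
  }
  const { queue } = await channel.assertQueue('', { exclusive: true });
  await channel.consume(
    queue,
    (msg: Message | null) => this.queueResponse.next(msg),
    { noAck: true }
  );
  this.replyQueue = queue;
  return queue;
}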

Without going into all details, let us have a short look at the implementation of ensureChannel.

Actually, the core part of this code is simple: connect(...) establishes a connection to RabbitMQ, and connection.createChannel uses that connection to create a channel, which then asserts the queue hanoi to exist.

All the clutter around this method is necessary to ensure it works correctly in the asynchronous context in which it runs. The established channel is stored in a local field so it does not have to be recreated every time a message is sent.
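A condensed sketch of that pattern (assuming amqplib; the fields channel and channelPromise are illustrative names, not necessarily those used in the repository):

import { connect, Channel } from 'amqplib';

private channel: Channel | null = null;
private channelPromise: Promise<Channel> | null = null;

private async ensureChannel(): Promise<Channel> {
  if (this.channel) {
    return this.channel;
  }
  // Share a single in-flight promise so concurrent callers do not
  // each open their own connection.
  if (!this.channelPromise) {
    this.channelPromise = (async () => {
      const { RABBITMQ_USER, RABBITMQ_PWD, RABBITMQ_HOST, RABBITMQ_PORT } = process.env;
      const connection = await connect(
        `amqp://${RABBITMQ_USER}:${RABBITMQ_PWD}@${RABBITMQ_HOST}:${RABBITMQ_PORT}`
      );
      const channel = await connection.createChannel();
      await channel.assertQueue(this.HANOI_QUEUE);
      this.channel = channel;
      return channel;
    })();
  }
  return this.channelPromise;
}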

In addition, the main server hosts a small client application backed by Vue.js. In case you are interested in its implementation, you can find all the code in the aforementioned repository. Its aim is to provide the following simple UI:

[Screenshot: the simple UI of the client application]

Docker-compose:

All the above-described components are built and wired together with Docker Compose. The following is the content of the corresponding docker-compose.yml:

version: "3"
services:
main-server:
build: ./main-server
env_file: .env
environment:
- RABBITMQ_HOST=rabbitmq
ports:
- "3000:3000"
calc-engine:
build: ./calc-engine
env_file: .env
environment:
- RABBITMQ_HOST=rabbitmq
rabbitmq:
image: rabbitmq:3-management

If you clone the repository and have Docker installed on your system, you can run the code by issuing the following command from a terminal at the repository's root folder:

> docker-compose up

After this, you can point your browser to http://localhost:3000 to ‘enjoy’ the above UI.

This is not the only approach to splitting an application into several separately compiled parts. A very promising one in the context of Node.js is the use of NAPI-RS.

This avoids the overhead of message passing, and you can read more about it here. On the other hand, by putting the different parts into standalone servers (micro-servers?) you gain the benefit of differentiated scalability: each part can be scaled up or down depending on its needs when running in a cluster.

Thanks for reading!




