
Building the Rust Web App — How to Use an Object-Relational Mapper

By Garrett Udstrand, July 2022


To maximize your benefits with containers


This is the third part of a multi-part series about writing web apps. For this series, we will be writing the web app in Rust, and I explain to you how to write it yourself.

If you’d rather not write out the code yourself, however, I have made a repository with all the code written throughout this series here. I made a commit to the repository at the end of each part of the series. In the previous part, we covered using databases to make doing CRUD operations even easier. In this part, we discuss using an Object-Relational Mapper, or an ORM for short, to make it even easier to work with our database.

Object-Relational Mapping is a technique for converting data between incompatible type systems, such as the classes of an object-oriented programming language and the tables of a relational database. To simplify, it is a technique for converting classes and their data into data that other programs can use. Generally, though, when someone mentions “an ORM,” they mean a library that uses this technique.

ORMs are almost always used with databases. An ORM will take an object of a class, or an instance of a struct, and create an entry in a database for it. So, we could just feed an ORM an instance of our Task struct, and it would create an entry in the tasks table for us, rather than requiring us to write the SQL ourselves.

So, an ORM is basically a way to reduce boilerplate code. Rather than writing a bunch of repetitive SQL, we leave it to the ORM to generate the SQL based on our structs and the code we attach to them. Beyond that, ORMs often provide ways to manage and create the tables in your database. These are called “migrations.”

Basically, our database driver gives us the ability to run SQL commands on our database; it simply lets us interact with it. An ORM lets us interact with the database in the language of our choice, rather than by running SQL commands.

With the way we’ve configured Rocket, it expects to use an asynchronous ORM. The most popular async ORM in Rust at the moment is SeaQL/sea-orm, so this is what we’ll be adding to our code base.

First, modify your Cargo.toml to look like the following:
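Here is a rough sketch of what that could look like. The crate versions, plus the sea-orm-rocket and async-trait dependencies, are assumptions on my part (they are what the boilerplate later in this part relies on), so check the repository for the exact manifest. The entity and migration path dependencies point at crates we create later in this part.

[package]
name = "todo-app"
version = "0.1.0"
edition = "2021"

[dependencies]
# Rocket with JSON support, as in the previous parts
rocket = { version = "0.5.0-rc.2", features = ["json"] }
# SeaORM itself, with the Postgres driver and the tokio runtime
sea-orm = { version = "0.9", features = ["runtime-tokio-native-tls", "sqlx-postgres", "macros"] }
# Glue between Rocket and SeaORM, used by the pool boilerplate below
sea-orm-rocket = "0.5"
async-trait = "0.1"
# Local crates we create later in this part
entity = { path = "entity" }
migration = { path = "migration" }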

With that, we have installed sea-orm and some other libraries we’ll need and have removed the libraries that are now redundant. Now, the first thing we are going to do is create those migrations I mentioned before. We’ll use sea-orm to create the tables in our database, so we don’t have to do that by hand.

In your terminal, run the following command:

cargo install sea-orm-cli

This will install a command-line interface (CLI), so we can run certain commands at our terminal. As you can imagine, these commands make it easier to do certain things with sea-orm. Using our newly installed CLI, we are going to create the directory to hold our migrations by running the following command:

sea-orm-cli migrate init

In my current version of sea-orm, this generates a slightly wrong Cargo.toml. Ensure that the Cargo.toml in your migration directory has the sqlx-postgres and runtime-tokio-native-tls features enabled for sea-orm-migration, and has async-std and rocket imported like the following (the import for async-std is the chunk of code that starts with [dependencies.async-std]).
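As a sketch, the migration Cargo.toml could end up looking something like this (the versions here are illustrative; the important parts are the feature flags and the two dependency chunks mentioned above):

[package]
name = "migration"
version = "0.1.0"
edition = "2021"
publish = false

[lib]
name = "migration"
path = "src/lib.rs"

[dependencies]
rocket = "0.5.0-rc.2"

[dependencies.async-std]
version = "1"
features = ["attributes", "tokio1"]

[dependencies.sea-orm-migration]
version = "0.9"
features = ["sqlx-postgres", "runtime-tokio-native-tls"]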

Now, we’ll create a new migration by running the following command:

sea-orm-cli migrate generate create_tasks_table

Now, if you go to migration/src, you’ll see a file that starts with m, has some numbers and has create_tasks_table at the end. For me, it ended up being m20220623_084419_create_tasks_table, but the number is time-dependent so it will be different for you. Regardless, open that file. Now, put the following code in that file.
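What follows is a sketch of that migration, modeled on the standard sea-orm-migration template. The tasks table gets an auto-incrementing id column and an item column, matching the Task struct from the earlier parts of this series.

use sea_orm_migration::prelude::*;

pub struct Migration;

impl MigrationName for Migration {
    fn name(&self) -> &str {
        // Must match the name of this file (yours will have a different timestamp)
        "m20220623_084419_create_tasks_table"
    }
}

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    // Creates the tasks table with an auto-incrementing id and an item column
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .create_table(
                Table::create()
                    .table(Tasks::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(Tasks::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(ColumnDef::new(Tasks::Item).string().not_null())
                    .to_owned(),
            )
            .await
    }

    // Drops the tasks table if the migration is rolled back
    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .drop_table(Table::drop().table(Tasks::Table).to_owned())
            .await
    }
}

// Identifiers used to refer to the table and its columns above
#[derive(Iden)]
enum Tasks {
    Table,
    Id,
    Item,
}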

Note: under impl MigrationName..., keep the string the same as the name of your own file. It should not be the name shown in my code.

Delete the other file with the numbers at the front and modify lib.rs so it only uses your migration like this:
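Here is a sketch of that lib.rs, assuming the file name from my run; substitute the module name your own migration file was given:

pub use sea_orm_migration::prelude::*;

mod m20220623_084419_create_tasks_table;

pub struct Migrator;

#[async_trait::async_trait]
impl MigratorTrait for Migrator {
    // Lists every migration the Migrator should run, in order
    fn migrations() -> Vec<Box<dyn MigrationTrait>> {
        vec![Box::new(m20220623_084419_create_tasks_table::Migration)]
    }
}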

Next, make sure the main.rs in migration/src looks like the following:
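This is the stock main.rs that sea-orm-cli generates; it simply hands our Migrator to the migration command-line interface:

use sea_orm_migration::prelude::*;

#[async_std::main]
async fn main() {
    cli::run_cli(migration::Migrator).await;
}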

Now, at the terminal run

cargo new entity

Change entity‘s Cargo.toml to look like the following:
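As a sketch, assuming the same versions as the root Cargo.toml, entity’s Cargo.toml could look like this:

[package]
name = "entity"
version = "0.1.0"
edition = "2021"

[lib]
# These are the default values; only the [lib] section itself is strictly needed
name = "entity"
path = "src/lib.rs"

[dependencies]
# Needed for the FromForm derive and rocket's serde re-exports on our Model
rocket = { version = "0.5.0-rc.2", features = ["json"] }
sea-orm = { version = "0.9" }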

Note that the name and path values underneath [lib] are the default values; they are just put there in case the defaults ever change. All we really need is the [lib] section to tell Cargo that this crate has a library target.

Rename entity/src/main.rs to lib.rs. We’ll replace its contents with code that can be imported in a bit.

Over at our original src, sea-orm is going to replace the database driver we have been using up until now. Under the hood it is largely the same driver, but it comes with added features and a few other adjustments that make it work with sea-orm.

For now, we are just going to treat the code that allows us to connect to our database as a boilerplate that doesn’t need to be understood. We’ll be putting this boilerplate in its own file, and then importing the needed parts into main.rs. So, in src, create a new file called pool.rs with the following code:
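Here is a sketch of that boilerplate, modeled on the official SeaORM Rocket example. The "todo" string is an assumption on my part: it has to match the database name under databases in your Rocket configuration, which should point at the same connection URL we used in the previous part.

use async_trait::async_trait;
use sea_orm_rocket::{rocket::figment::Figment, Config, Database};

// The handle our routes will request via Connection<'_, Db>
#[derive(Database, Debug)]
#[database("todo")]
pub struct Db(SeaOrmPool);

#[derive(Debug, Clone)]
pub struct SeaOrmPool {
    pub conn: sea_orm::DatabaseConnection,
}

#[async_trait]
impl sea_orm_rocket::Pool for SeaOrmPool {
    type Error = sea_orm::DbErr;
    type Connection = sea_orm::DatabaseConnection;

    // Reads the connection URL out of Rocket's configuration and opens
    // a SeaORM connection with it
    async fn init(figment: &Figment) -> Result<Self, Self::Error> {
        let config = figment.extract::<Config>().unwrap();
        let options = sea_orm::ConnectOptions::new(config.url.clone());
        let conn = sea_orm::Database::connect(options).await?;
        Ok(SeaOrmPool { conn })
    }

    fn borrow(&self) -> &Self::Connection {
        &self.conn
    }
}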

Now, we are going to change our main function to connect to our newer, fancier database and run the migrations that we are creating. Thus, go into main.rs and change the rocket function to look like the following:
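Here is a sketch of what that can look like, again modeled on the SeaORM Rocket example: we attach the database pool, then run our migrations in an ignite fairing. The routes get mounted again as we reimplement them below, and this assumes the #[macro_use] extern crate rocket; line from the earlier parts is still present.

mod pool;

use migration::MigratorTrait;
use pool::Db;
use rocket::fairing::{self, AdHoc};
use rocket::{Build, Rocket};
use sea_orm_rocket::Database;

// Runs our SeaORM migrations once Rocket has a database connection
async fn run_migrations(rocket: Rocket<Build>) -> fairing::Result {
    let conn = &Db::fetch(&rocket).unwrap().conn;
    let _ = migration::Migrator::up(conn, None).await;
    Ok(rocket)
}

#[launch]
fn rocket() -> _ {
    rocket::build()
        .attach(Db::init())
        .attach(AdHoc::try_on_ignite("Run migrations", run_migrations))
    // .mount("/", routes![...]) comes back once the CRUD routes are reimplemented
}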

Now, it hurts to do this, but none of the old code in main.rs serves a purpose anymore, so we are going to delete it. Delete the Task struct, TaskItem struct, TaskId struct, TodoDatabase struct, the DatabaseError struct, the two implementations for DatabaseError, and add_task, read_tasks, edit_task and delete_task. When all is said and done, our main.rs is back to almost being new.

Now, over in Postgres, log into the todo database and delete our pre-existing tasks table with the following command:

DROP TABLE tasks;

With all of that out of the way, you can once again run cargo run in the terminal, which will use our migration to create a tasks table. It will also create a table called seaql_migrations, which will keep track of information on our migrations.

Entities are the fancy structs that we’ll use to pull data from and put data into the database. With SeaORM, there is the option of generating entities from our database after running our migrations, but it’s usually better to run that only once and write most of your entities by hand.

That’s mainly because we’ll be editing these entities to work better in our project, and running the entity generation command would overwrite the changes we’ve made. As such, I’m just going to give you the code for the entities we make, but keep in mind that you can generate entities from an existing database if you ever find a good use for it.

So, in entity/src create a file called tasks.rs and enter the following code:
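Here is a sketch of that entity. It follows the layout sea-orm-cli generates, with two additions: the FromForm derive (plus the #[field(default = 0)] attribute discussed below) so Rocket can parse a Model from form data, and rocket's Serialize so we can return models as JSON.

use rocket::serde::{Deserialize, Serialize};
use rocket::FromForm;
use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Serialize, Deserialize, FromForm)]
#[sea_orm(table_name = "tasks")]
#[serde(crate = "rocket::serde")]
pub struct Model {
    #[sea_orm(primary_key)]
    #[field(default = 0)]
    pub id: i32,
    pub item: String,
}

// Our tasks table has no relations to any other table (yet)
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}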

By the way, you may notice that our id field has the attribute #[field(default = 0)] applied to it. This makes it so that when our data is processed, the id will default to 0 if no value is given, so Tasks can be entered with no id.

With that out of the way, create a file called lib.rs in entity/src and enter the following code into it:

pub mod tasks;

This will allow the Model struct in tasks.rs to be used in any package that imports the entity library crate (creating a folder called entity with this code has allowed us to locally make a library called entity. If you’d like to read more about how Rust deals with sharing code between files and projects, check out chapter 7 of the Rust book, which goes over it in some good detail).

So, in terms of our code, we can now use our exported tasks model as the struct for getting entered data. Note that the model implements FromForm, so we will be entering our data as x-www-form-urlencoded rather than JSON now. Considering our small number of basic parameters, this switch makes a lot of sense.

Further, in Rocket, form parsing is lenient by default, so if there are missing, duplicate, or extra fields, it will still parse. Considering we have places where we may get an id, but no item, an item, but no id, or both, this will reduce the extra code we had to write before.

In any case, let’s reimplement all our CRUD operations via the ORM. You’ll notice that I mainly use the model to find items in the database and to insert items built with the model’s struct. That’s the power of an ORM: I don’t have to write SQL anymore; I can just write Rust functions.

So, let’s get started! The first is rewriting the creation operation. Here’s the code for that:
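Here is a rough sketch of it. The DatabaseError wrapper below just stores the error message and responds with a 500 status (your version from before may carry the actual error value instead), and the route path is illustrative, so adapt it to whatever you used in the earlier parts.

use entity::tasks;
use rocket::form::Form;
use sea_orm::{ActiveModelTrait, Set};
use sea_orm_rocket::Connection;

use crate::pool::Db;

// Wrapper around SeaORM errors so our handlers can bubble them up with `?`
#[derive(Responder)]
#[response(status = 500)]
struct DatabaseError(String);

impl From<sea_orm::DbErr> for DatabaseError {
    fn from(error: sea_orm::DbErr) -> Self {
        DatabaseError(error.to_string())
    }
}

#[post("/addtask", data = "<task_form>")]
async fn add_task(
    conn: Connection<'_, Db>,
    task_form: Form<tasks::Model>,
) -> Result<String, DatabaseError> {
    // Pull the database connection and the submitted task out of their wrappers
    let db = conn.into_inner();
    let task = task_form.into_inner();

    // Build an ActiveModel from the form data and insert it; the id is left
    // unset so Postgres assigns it for us
    tasks::ActiveModel {
        item: Set(task.item),
        ..Default::default()
    }
    .insert(db)
    .await?;

    Ok("Task added successfully.".to_string())
}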

As you can see, we’ve once again used DatabaseError as a wrapper around the actual error from our database library, so that we can return those errors from our handlers. In add_task, the into_inner calls pull the data we want out of the wrapper types we are given.

In SeaORM, a model used to update or create an item needs to be an ActiveModel, so we use the data we already have to build an ActiveModel version. The last line, of course, inserts the item into the database.

Next is the read operation, which is even easier. I’ve included the import for one of the items used in this function, because it is a named import and you wouldn’t be able to bring it in properly otherwise.
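A sketch of the read operation follows. I'm assuming the aliased Tasks entity import and SeaORM's QueryOrder trait (which is what makes order_by_asc available) are the imports in question; Db and DatabaseError are the ones defined above.

use entity::tasks;
use entity::tasks::Entity as Tasks;
use rocket::serde::json::Json;
use sea_orm::{EntityTrait, QueryOrder};
use sea_orm_rocket::Connection;

#[get("/readtasks")]
async fn read_tasks(conn: Connection<'_, Db>) -> Result<Json<Vec<tasks::Model>>, DatabaseError> {
    let db = conn.into_inner();

    // Fetch every task, ordered by ascending id, and return the list as JSON
    let all_tasks = Tasks::find()
        .order_by_asc(tasks::Column::Id)
        .all(db)
        .await?;

    Ok(Json(all_tasks))
}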

This one is pretty simple as well. Use the database to find all tasks and order them by ascending ids.

Next, edit_task is a bit more complicated, but it’s still pretty simple. We find the task we are attempting to update, task_to_update, modify the fields we want to change on it, then call update to update the database.
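Here is a sketch of edit_task following that description (the unwrap will panic if the id does not exist; proper error handling is one of the things we clean up in the next part):

use entity::tasks;
use entity::tasks::Entity as Tasks;
use rocket::form::Form;
use sea_orm::{ActiveModelTrait, EntityTrait, Set};
use sea_orm_rocket::Connection;

#[put("/edittask", data = "<task_form>")]
async fn edit_task(
    conn: Connection<'_, Db>,
    task_form: Form<tasks::Model>,
) -> Result<String, DatabaseError> {
    let db = conn.into_inner();
    let task = task_form.into_inner();

    // Find the task we want to change and turn it into an ActiveModel
    let task_to_update = Tasks::find_by_id(task.id).one(db).await?;
    let mut task_to_update: tasks::ActiveModel = task_to_update.unwrap().into();

    // Overwrite the fields we want to change, then push the update to the database
    task_to_update.item = Set(task.item);
    task_to_update.update(db).await?;

    Ok("Task edited successfully.".to_string())
}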

Finally, delete_task is the only one whose API has changed a little. Because SeaORM only gives me a DeleteResult, I now just return how many tasks were deleted rather than the deleted task itself.

Also, because I can’t ensure that an id is given with a task form, and the only value I’m taking in is an id, I make id a parameter of the URL, which works nicely in this case. So, it looks different but basically does the same thing.
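A sketch of delete_task, with the id taken from the URL:

use entity::tasks::Entity as Tasks;
use sea_orm::EntityTrait;
use sea_orm_rocket::Connection;

#[delete("/deletetask/<id>")]
async fn delete_task(conn: Connection<'_, Db>, id: i32) -> Result<String, DatabaseError> {
    let db = conn.into_inner();

    // delete_by_id hands back a DeleteResult, which only tells us how many rows went away
    let result = Tasks::delete_by_id(id).exec(db).await?;

    Ok(format!("{} task(s) deleted", result.rows_affected))
}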

Putting those pieces together, along with the rocket function from earlier, gives us our new main.rs (the complete file is in the repository linked above).

And, with that, we have once again implemented our CRUD operations, this time using an ORM. However, we have added a lot of code to the code base and have only saved approximately three lines in main.rs. Let’s talk about that.

With any software problem, we can add complexity to increase flexibility. In other words, we can add more code and make our solution more complicated, so that it’s easier to change certain aspects of our solution or to add new features to it.

Generally, we want our code to be somewhat flexible. Since programming is iterative, it is almost assured that someone will look at something you wrote and decide to change pieces of it or add new features to it. If your code is flexible, whichever developer has to do that will have a much easier time.

However, increased complexity makes it harder to understand your code. Further, the more complex a solution is, the more prone it is to bugs: little problems or errors that are hard to anticipate or notice.

On top of that, more complex code is harder to maintain. There’s more code to maintain, the code is doing fancier things that have to be more carefully maintained, and, assuming this more complex code is using more functions or features from libraries, it is more liable to break when a certain library updates.

Thus, with any solution, you have to be careful. A very flexible solution will also be very complex and will be a nightmare to work with, even though the original intention was to make it easier to work with. On the other hand, a very simple solution will have no flexibility, so it will probably have to be rewritten from the ground up whenever a new feature needs to be added. Neither extreme is ideal.

When you “over-engineer” a problem, it means that you have made your code far too flexible and, in the process, much more complex than it should be. Of course, whether a solution is actually over-engineered is a matter of subjective opinion. It depends on the tastes and sensibilities of the developer.

But, just like any subjective thing, there are certain cases where most people agree. I believe most people would agree that implementing an ORM for the project we are doing is over-engineering.

An ORM makes it super easy to create large numbers of tables with a lot of relationships and to work with the data inside that large database. It’s a tool for when you have tens or hundreds of tables. In our case, we are only going to have two tables by the time we’re done with this app. While maintaining them directly via SQL is more difficult, it also requires a lot less code complexity and is manageable for a small number of tables.

So, why did I implement an ORM at all? As I stated at the start of this piece, my goal is to show off all the sorts of libraries and concepts that modern web apps and frameworks use, so you can get a better understanding of them. While an ORM is not a great idea for this app, it is a concept I wanted to demonstrate, because ORMs are pretty much ubiquitous in backend frameworks.

There are going to be many other things where the solution ends up adding a lot of complexity for very little benefit to the app, and that’s because I’m trying to demonstrate the concepts used in modern web dev, even if not all of them are perfect for the particular app we are making.

You may have noticed an interesting issue with our current database. That’s the fact that we have to install and set up an application on our machine to make our code work. When only a single developer is working on a project, this is fine. However, this can pose many challenges in a setting where multiple developers may be working on this project.

The first issue is that installing software is tedious, never fun, and often prone to error. The second is that not all applications are available on all operating systems. While postgres may be available on Windows, Mac, and Linux, having that sort of cross-compatibility is not a given with a lot of software.

Further, let’s say a developer working on this project wanted to use postgres at home. Using the same installation of postgres for multiple projects is possible but messy. Beyond that, figuring out where to distribute the configurations that are required to make the software work is difficult.

Is that done in a README? Sent as a message on Slack? Put in a text file? And what happens when those configurations update or change? How do you notify people? Finally, local installations are prone to being corrupted or broken by other things running on the developer’s machine.

It would be nice if there was a way to install postgres in such a way as to avoid all these problems. And, since I’m bringing it up, you know there is. The answer to these problems is containerization. How does containerization work? Instead of running the app directly on our OS, we run another OS and run the app in that.

By making the OS as small as possible, plus doing some other optimizations, this can be adequately fast and work well in a development environment. Even better, most containers (the OS and the app inside of them) can be configured via certain files. Thus, we can version control how an app like postgres is configured.

Since the container we wish to install and the settings for the app are controlled by configuration files, a developer doesn’t have to worry about installing the software by hand. Most containerization apps are cross-platform, too, meaning that you can run apps that usually aren’t cross-platform inside of a container.

Beyond that, you can have as many containers as you want in most containerization apps, so you can have one installation of postgres for work and another for home if you so desire. What’s more, developers no longer need to be notified when configurations change; the containerization software will just pick up the changed configurations and apply them.

Finally, since our app is running in a completely separate OS, we don’t have to worry about it being messed up or corrupted by other apps on our machine.

With all that in mind, let’s use containers to make our postgres installation easier to work with. There are a variety of apps that provide containerization, but we are going to use Docker. A lot of people have started to dislike Docker in recent years because of its pricing model, so if you intend to use containerization in the future, you may want to consider other options. But for our toy example purposes, Docker will work just fine. So, go to the site, create an account, and install Docker as a desktop app.

Now, from here, we’ll be using the Postgres — Official Image to make our container. What’s an image? It’s just the OS with the app installed in it. A container is a running instance of an image. So, we’ll use an image to run a container that has postgres in it.

How are we going to run this image? We’ll be using the command-line tool docker-compose, which is installed with the desktop installation of Docker. docker-compose takes a file called docker-compose.yml and uses it to determine which images to run and with what configurations.

By the way, as a note, Dockerfiles can be used to modify existing images, allowing you to build your own images that better fit your purposes. Luckily, we can get away with just using the base postgres image, but Dockerfiles are very often used in development environments, so I thought it should be mentioned.

In any case, we’ll create a file called docker-compose.yml in the same directory as Cargo.toml. Inside of it will go the following code:
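Here is a sketch of that file. The todo database name matches what we’ve been using; the credentials and the named volume are placeholders, so make sure the user and password line up with the connection URL in your Rocket configuration.

version: "3.8"

services:
  db:
    # The official postgres image mentioned above
    image: postgres
    restart: always
    ports:
      # Expose the container's postgres on the host's port 5432
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: todo
    volumes:
      # Keep the database's data around between container restarts
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data: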

Now, if you go to the todo-app directory, and run docker-compose up, your database will start. With the database running, you can then open up another terminal and run your app. However, there is still one problem.

Since postgres is installed directly on your machine, it is constantly running a service that uses port 5432. Thus, when we access port 5432, we will be accessing our local installation rather than the one in the container.

Turn off or uninstall your postgres installation to allow you to connect to the containerized database. Once you do that, you’ll notice nothing has changed, and the database works the same. It’s just easier to configure and work with now.

Do we actually need docker-compose for this? No, we don’t. We could run the image with the proper settings via the command line or the desktop app. But using docker-compose lets us leave behind a useful configuration file that details all the settings used to make our app work. And, in the case where we want multiple containers running for our app, we can list the configurations for all of them in docker-compose.yml.

I’ve never done this myself, but you can also use a Dockerfile’s functionality to configure your image. So, if you want to avoid docker-compose for some reason, that may be an option. In any case, there are lots of ways to work with Docker, but I believe this one really maximizes the benefits you get from containerizing the applications your software depends on, so that’s what we use in this series.

In any case, that’s all I have for today. In the next part, we’ll take our current app and address many of the problems it currently has, like its poor error handling and non-existent frontend.

Thank you for reading this article. I hope this series will continue to help you improve your web development skills.


