Full-stack application in Rust: Quick start

Rust project quick start

In the article introducing this series, I shared that I wanted to explore full-stack app development. In particular, I wanted to develop a full-stack application that helps people manage their checklists. Users will use a browser to connect to the front-end server, which, in turn, will use the API provided by the back-end server to access and modify the checklists.

This article is the first one describing what I am doing in Rust. I will cover project creation and structure, as well as build automation. We won't have much of the application implemented by the end of this article, but it should be a good and helpful start. So buckle up and let's get cracking!

Project structure and setup

First things first. Let's initialize the project with the following commands in the shell.

  mkdir rexlists
  cd rexlists

Next, let's create the seed for the cargo configuration of this workspace1.

  [workspace]
  resolver = "3"

As I mentioned earlier, I plan to produce two binaries. The first will handle all front-end tasks as an HTTP server, serving HTML and related content while communicating with the back end. The second binary will manage the back-end tasks and responsibilities. The front-end binary will play the role of a Backend For Frontend (BFF), which will be especially useful as the application grows to support clients like web, mobile, and desktop, and connects to various back-end services such as checklist management, payments, and calendars.
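
In other words, the request flow will look roughly like this (the ports are the ones we configure for the servers below):

  Browser  --HTTP-->  frontend (BFF, :3000)  --HTTP API-->  backend (:3100)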

Now, let's make a package in the workspace for both the front and the back end.

  cargo new frontend
  cargo new backend
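
With a recent Cargo, cargo new should also register each package in the workspace members list; if yours does not, add them by hand. Either way, the root Cargo.toml should end up looking roughly like this:

  [workspace]
  resolver = "3"
  members = ["backend", "frontend"]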

The workspace takes care of producing both binaries in a shared target directory, which we add to our .gitignore.
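
A minimal .gitignore at the workspace root only needs that directory:

  /target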

With this structure in place, we can build and run both binaries.

  cargo run -p frontend
  cargo run -p backend

Initial HTTP servers

Both binaries will act as HTTP servers, and we will use Axum, one of the most popular web application frameworks for Rust, to implement that functionality.

Let's start by adding axum to the front end, which pulls in Tokio as a transitive dependency. But since we need to enable a couple of Tokio's features, we also add it explicitly.

  cargo add -p frontend axum
  cargo add -p frontend -F macros,rt-multi-thread tokio
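
After those two commands, the [dependencies] section of frontend/Cargo.toml should look something like this (the exact versions depend on when you run cargo add):

  [dependencies]
  axum = "0.8.6"
  tokio = { version = "1.48.0", features = ["macros", "rt-multi-thread"] }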

Now we modify the code in frontend/src/main.rs as shown below.

  use axum::Router;

  #[tokio::main]
  async fn main() {
      let server_addr = "localhost:3000";
      let router = Router::new();

      println!("Launching front end: http://{server_addr}");
      let listener = tokio::net::TcpListener::bind(&server_addr).await.unwrap();
      axum::serve(listener, router).await.unwrap();
  }

We can run the front-end server with cargo run -p frontend. You can connect to it from the browser using the default URL http://localhost:3000/. However, your welcome message is going to be more disappointing than warm: A 404 error. What?!? No worries, that is expected because no content is being served yet.
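
If you want to see something friendlier than a 404 while experimenting, a throwaway route like the one below does the trick. This is just an illustrative sketch, not part of the project yet; the hello handler and its message are made up for the example.

  use axum::{routing::get, Router};

  // Hypothetical handler, only here to verify that the server responds.
  async fn hello() -> &'static str {
      "Hello from the front end!"
  }

  #[tokio::main]
  async fn main() {
      let server_addr = "localhost:3000";
      // Replace the empty router with one that serves a single route.
      let router = Router::new().route("/", get(hello));

      println!("Launching front end: http://{server_addr}");
      let listener = tokio::net::TcpListener::bind(&server_addr).await.unwrap();
      axum::serve(listener, router).await.unwrap();
  }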

We can repeat, more or less, the same process for the back-end server. First, we add the dependencies.

  cargo add -p backend axum
  cargo add -p backend -F macros,rt-multi-thread tokio

And the code in backend/src/main.rs should also be almost identical.

  use axum::Router;

  #[tokio::main]
  async fn main() {
      let server_addr = "localhost:3100";
      let router = Router::new();

      println!("Launching back end: http://{server_addr}");
      let listener = tokio::net::TcpListener::bind(&server_addr).await.unwrap();
      axum::serve(listener, router).await.unwrap();
  }

You can run the back end with cargo run -p backend and try it in the browser too.

Simplify dependencies

If you take a look at the Cargo.toml files of each package in the workspace, you will realize that the dependencies are identical, which is fair because we added exactly the same ones to both packages. But we would like to keep them in sync instead of having to update each one independently.

I wish cargo add let me add dependencies to the whole workspace, but that's not possible yet. So we have two options:

  • We could use cargo-autoinherit, which finds dependencies used in more than one package and merges them into the workspace manifest (see the commands sketched after this list).
  • Alternatively, we can do it manually by editing the three Cargo.toml files.
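
For reference, the first option would look something like this; I have not run it against this project, so treat it as a sketch.

  cargo install cargo-autoinherit
  cargo autoinherit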

In this case, I will go with the latter. In the workspace manifest, the one in the root of the project, we add a new section and copy into it the two dependencies that both packages currently declare in their own manifests.

  [workspace.dependencies]
  axum = "0.8.6"
  tokio = { version = "1.48.0", features = ["macros", "rt-multi-thread"] }

Then, in the manifests of the packages, we express dependencies using inheritance.

  [dependencies]
  axum = { workspace = true }
  tokio = { workspace = true }

We could also add the features and optional keys, should we need them. The former adds package-specific features on top of those declared in the workspace manifest. The latter makes the dependency optional, so it is only compiled in when some feature of the package enables it.
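
For example, a package could layer an extra feature on top of the inherited declaration, or pull a shared dependency in only behind a feature flag. The "signal" feature and the serde dependency below are hypothetical illustrations, not something this project needs yet.

  [dependencies]
  # Adds the "signal" feature on top of the features declared in the workspace.
  tokio = { workspace = true, features = ["signal"] }
  # Only compiled in when a feature of this package enables it; serde would
  # also have to be listed under [workspace.dependencies].
  serde = { workspace = true, optional = true }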

Build automation

Cargo automates most build tasks out of the box. But let's add a couple of extras: stricter linting, and an automatic rebuild every time we save a change.

Enable clippy pedantic lints group

Cargo has a built-in subcommand to run the clippy linter, and it is very useful to improve the quality of our code2.

By default, only some of the lints are enabled. However, we can easily change the configuration of our project to make clippy a little more picky, using the pedantic lints group.

The configuration also uses inheritance, just as we did with dependencies. In the workspace manifest, we add the following two sections. The first one forbids unsafe code, and the second one makes the pedantic lints group produce warnings.

  [workspace.lints.rust]
  unsafe_code = "forbid"

  [workspace.lints.clippy]
  pedantic = "warn"

And we add the following section to each package that wants to use the shared configuration.

  [lints]
  workspace = true
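
With the lints inherited by both packages, a single clippy run from the workspace root covers everything, and the pedantic lints show up as regular warnings.

  cargo clippy --workspace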

Use Bacon to watch

Since cargo watch is now on life support, we will use bacon to watch the source for changes and act accordingly. Let's start by installing it.

  cargo install --locked bacon

Bacon runs a job when the code changes. It knows the files it has to watch and, by default, ignores the same files that Git does. We can specify the job we want to run in the command line or use the one that is configured as the default. We can also use Control-J to select a different job from the ones that are configured.

I have produced a bacon.toml file with the default configuration using bacon --init, and I have modified that configuration to include the jobs that make the most sense to me. Each job is defined in its own TOML section.

  • jobs.clippy: runs clippy with its default arguments (using the configuration we set up above).
  • jobs.test and jobs.nextest: run the tests with the default test runner or with Nextest, respectively.
  • jobs.doc and jobs.doc-open: produce the documentation with rustdoc; the latter also opens it in the browser.
  • jobs.run-frontend and jobs.run-backend: run the front-end or the back-end package, respectively.

Here is the resulting bacon.toml.

  default_job = "clippy"
  env.CARGO_TERM_COLOR = "always"

  [jobs.clippy]
  command = ["cargo", "clippy"]
  need_stdout = false

  # This job lets you run
  # - all tests: bacon test
  # - a specific test: bacon test -- config::test_default_files
  # - the tests of a package: bacon test -- -- -p config
  [jobs.test]
  command = ["cargo", "test"]
  need_stdout = true

  [jobs.nextest]
  command = [
      "cargo", "nextest", "run",
      "--hide-progress-bar", "--failure-output", "final"
  ]
  need_stdout = true
  analyzer = "nextest"

  [jobs.doc]
  command = ["cargo", "doc", "--no-deps"]
  need_stdout = false

  # If the doc compiles, then it opens in your browser and bacon switches
  # to the previous job
  [jobs.doc-open]
  command = ["cargo", "doc", "--no-deps", "--open"]
  need_stdout = false
  on_success = "back" # so that we don't open the browser at each change

  [jobs.run-frontend]
  command = [
      "cargo", "run", "-p", "frontend",
  ]
  need_stdout = true
  allow_warnings = true
  background = false
  on_change_strategy = "kill_then_restart"

  [jobs.run-backend]
  command = [
      "cargo", "run", "-p", "backend",
  ]
  need_stdout = true
  allow_warnings = true
  background = false
  on_change_strategy = "kill_then_restart"

  [keybindings]
  c = "job:clippy" # comment this to have 'c' run clippy on only the default target
  shift-f = "job:run-frontend"
  shift-b = "job:run-backend"

We can run bacon in the terminal. If we don't provide any argument, it will run the default job, i.e., clippy for my configuration. But we can also use the name of any of the defined jobs as an argument and provide additional arguments, if needed. We can use this later on with bacon run-frontend to run the front end when its code changes.
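
For instance, these are the invocations I expect to use the most; the package filter is just an example.

  bacon                          # run the default job (clippy)
  bacon run-frontend             # rebuild and restart the front end on change
  bacon test -- -- -p backend    # run only the tests of the backend package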

Summary

This has been the first article of a series on writing a full-stack application in Rust. We have created a workspace for our project and a couple of packages for both the front and back end. We may add more in the future.

I have explained how to use dependency inheritance in a workspace, allowing dependencies to be shared among its packages.

I have also added some extra build automation tasks using bacon on top of the capabilities offered by cargo. We will have the clippy linter running across the entire project for us every time a change is made to the source code.

The full code repository for this project, with individual commits for each part of the explanation, is available so you can check what I did and where, and code along with me.

We don't have a full-stack application yet, but this is a pretty solid foundation. In the next article, we will write some real code.

Stay curious. Hack your code. See you next time!

Footnotes


1. The resolver is the algorithm used to resolve the dependencies.
2. Say what you will about linters, but they are one of the cornerstones of maintaining mental health in development teams.