Coding a simple microservice with Rust

Gene Kuo
6 min read · Dec 6, 2020

Introduction

We will create a simple REST service providing CRUD (create, read, update, and delete) operations on in-memory user data. To keep the focus on coding an HTTP service with Rust and the relevant crates, this example handles only the CRUD of user IDs; serialization and deserialization between JSON and Rust structs will be added in later articles.

The sample code can be cloned from the following GitHub repository: https://github.com/genekuo/rust-microservices.git

After implementing the initial version of the microservice, we will test it with curl commands to see how it works.

Initial setup with the hyper crate

We have decided to use Rust’s hyper crate to build a server that handles HTTP requests and responses. The hyper crate is built on top of the futures crate and uses the tokio crate for the server runtime.

First, we will use the following command to make a new project:

cargo new rust-microservices

Within the created folder, we then add the following dependencies to the Cargo.toml file.

[dependencies]
hyper = "0.12"
futures = "0.1"

In the main function, we will do the following setup to function as a simple HTTP server:

We will create a SocketAddr from a tuple of type ([u8; 4], u16) and bind to it. The address is hardcoded for now; normally, a SocketAddr would be parsed from external strings such as command-line arguments or environment variables, as sketched below.

let addr = ([0, 0, 0, 0], 8080).into();
let builder = Server::bind(&addr);
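
For reference, here is a minimal sketch of reading the address from an environment variable instead; the variable name ADDRESS is only an example, not something the project defines.

use std::env;
use std::net::SocketAddr;

// Fall back to the hardcoded default when the (hypothetical) ADDRESS variable is not set.
let addr: SocketAddr = env::var("ADDRESS")
    .unwrap_or_else(|_| "0.0.0.0:8080".to_string())
    .parse()
    .expect("can't parse ADDRESS");
let builder = Server::bind(&addr);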

Before adding a NewService to the builder to handle requests, we will define the data for the microservice. Then, we can pass it to the handler function when we create a NewService.

Guarding the data across multiple threads

The data may be used by many handlers, even though in this article only one handler will use it. Therefore, we wrap the shared data in an atomic reference counter, Arc, and a mutual exclusion primitive, Mutex, to protect it across multiple threads.

We also use Slab to provide pre-allocated storage for the data; it does not shrink when items are removed from the underlying storage. A short illustration follows the type definitions below.

type UserId = u64;

struct UserData;

impl fmt::Display for UserData {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str("{}")
    }
}
type UserDb = Arc<Mutex<Slab<UserData>>>;
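
As a rough illustration (not part of the service code), Slab hands out integer keys starting from 0, which is why the handler later converts user IDs to usize:

// Illustration only: keys are usize values assigned by Slab.
let mut slab = Slab::new();
let a = slab.insert(UserData); // key 0
let b = slab.insert(UserData); // key 1
slab.remove(a);                // key 0 is freed, but the allocated capacity is kept
assert!(slab.contains(b));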

The dependencies then become:

[dependencies]
hyper = "0.12"
futures = "0.1"
slab = "0.4.2"

The request handler for CRUD functions on the shared data

Next, we will define a request handler called user_handler, which will parse the path of each request, perform the corresponding CRUD operation on UserDb, and return the appropriate response.

The signature of user_handler is the following. It returns an implementation of the Future trait by value, which resolves to either a hyper::Response<Body> or a hyper::Error.

fn user_handler(req: Request<Body>, user_db: &UserDb)
    -> impl Future<Item = Response<Body>, Error = Error>

In order to provide RESTful APIs corresponding to CRUD functions on UserDb, we will use regular expressions to match the paths extracted from requests and handle them properly.

The path patterns we will handle in this example are:

  • A path for managing a single user (creating, reading, updating, and deleting).
  • A path for listing users.

We will also use the lazy_static! macro to compile the regular expressions once and assign the results to static variables.

So we add both dependencies to the Cargo.toml file, which results in:

[dependencies]
hyper = "0.12"
futures = "0.1"
slab = "0.4.2"
lazy_static = "1.4.0"
regex = "1.4.2"

Now we can define the regular expressions and static variables.

lazy_static! {
    static ref USER_PATH: Regex =
        Regex::new("^/user/((?P<user_id>\\d+?)/?)?$").unwrap();
    static ref USERS_PATH: Regex =
        Regex::new("^/users/?$").unwrap();
}
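
For a quick sanity check (illustration only, not part of the project code), these are the kinds of paths the two patterns accept:

// USERS_PATH matches the listing path, with or without a trailing slash.
assert!(USERS_PATH.is_match("/users"));
assert!(USERS_PATH.is_match("/users/"));
// USER_PATH matches with or without an id; the id, if present, must be numeric.
assert!(USER_PATH.is_match("/user/"));   // no id, used for POST
assert!(USER_PATH.is_match("/user/3"));  // id captured as "3"
assert!(!USER_PATH.is_match("/user/abc"));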

Inside the body of the user_handler function, we will construct a response based on the matched path and the operation performed on UserDb, and return that response using the future::ok function.

fn user_handler(req: Request<Body>, user_db: &UserDb)
    -> impl Future<Item = Response<Body>, Error = Error> {
    let response = ...;
    ...
    future::ok(response)
}

We will extract the method and path from the request and acquire a lock on UserDb:

let response = {
    let method = req.method();
    let path = req.uri().path();
    let mut users = user_db.lock().unwrap();
    ...

Rust’s pattern matching for different request paths and methods

We can proceed to match USERS_PATH and USER_PATH against the extracted path.

If a request matches USERS_PATH with the HTTP GET method, we iterate through users, map each UserId to a String, and collect the results into a Vec<String>. We finally join the Vec<String> into a single String for the response.

...
let mut users = user_db.lock().unwrap();
if USERS_PATH.is_match(path) {
    if method == &Method::GET {
        let list = users.iter()
            .map(|(id, _)| id.to_string())
            .collect::<Vec<String>>()
            .join(",");
        Response::new(list.into())
    } else {
        response_status(StatusCode::METHOD_NOT_ALLOWED)
    }
} else if let Some(cap) = USER_PATH.captures(path) {
...

If a request matches USER_PATH, the Regex::captures method returns a Captures value holding the captured groups. Using the Captures::name method, we can then get the captured value by its group name, user_id.

Once we have user_id from a request, we will use Rust’s pattern matching to handle different HTTP methods (GET, POST, PUT and DELETE).

...
} else if let Some(cap) = USER_PATH.captures(path) {
    let user_id = cap.name("user_id").and_then(|m| {
        m.as_str()
            .parse::<UserId>()
            .ok()
            .map(|x| x as usize)
    });
    match (method, user_id) {
        (&Method::GET, Some(id)) => {
            if let Some(data) = users.get(id) {
                Response::new(data.to_string().into())
            } else {
                response_status(StatusCode::NOT_FOUND)
            }
        },
        (&Method::POST, None) => {
            let id = users.insert(UserData);
            Response::new(id.to_string().into())
        },
        (&Method::POST, Some(_)) => {
            response_status(StatusCode::BAD_REQUEST)
        },
        (&Method::PUT, Some(id)) => {
            if let Some(user) = users.get_mut(id) {
                *user = UserData;
                response_status(StatusCode::OK)
            } else {
                response_status(StatusCode::NOT_FOUND)
            }
        },
        (&Method::DELETE, Some(id)) => {
            if users.contains(id) {
                users.remove(id);
                response_status(StatusCode::OK)
            } else {
                response_status(StatusCode::NOT_FOUND)
            }
        },
        _ => {
            response_status(StatusCode::METHOD_NOT_ALLOWED)
        },
    }
} else {
    response_status(StatusCode::NOT_FOUND)
}
...
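
The snippets above call a small response_status helper that the article does not show. A minimal sketch, assuming it simply builds an empty response with the given status code, could look like this:

fn response_status(status: StatusCode) -> Response<Body> {
    Response::builder()
        .status(status)
        .body(Body::empty())
        .unwrap()
}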

Server configuration and runtime

After defining user_handler and its implementation, we can now complete the main function.

We will first create user_db using the UserDb type defined earlier, and then use the builder to create the server with the handler function.

The serve method of the Builder struct attaches a function that generates a Service instance for handling HTTP requests; the builder also lets us configure the server, such as enabling support for HTTP/1 and HTTP/2.

We pass a closure to the serve method. In that closure, we clone a reference to the shared state and call service_fn with another closure. The inner closure moves the cloned reference into the user_handler function, and service_fn generates a Service from it. In short, the outer closure produces a new Service for each connection, and service_fn lets us create that Service from a plain function.

let user_db = Arc::new(Mutex::new(Slab::new()));
let server = builder.serve(move || {
    let user_db = user_db.clone();
    service_fn(move |req| user_handler(req, &user_db))
});

The Server returned from the builder’s serve method implements the Future trait with hyper::Error as its error type. We map the error away with map_err(drop) to obtain a new Future with the error type the hyper runtime expects, and finally start the server with the runtime.

let server = server.map_err(drop);
hyper::rt::run(server);
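
For completeness, the snippets above roughly assume imports along the following lines (a sketch, not copied from the repository; exact paths may differ slightly between crate versions):

use std::fmt;
use std::sync::{Arc, Mutex};

use futures::{future, Future};
use hyper::service::service_fn;
use hyper::{Body, Error, Method, Request, Response, Server, StatusCode};
use lazy_static::lazy_static;
use regex::Regex;
use slab::Slab;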

Containerizing the microservice

Having finished coding the simple Rust microservice, we will build a Docker image with the nightly version of the Rust compiler, use it to compile the code above, and package the service into an image of its own. The Dockerfile in the repository is borrowed from https://github.com/rust-lang/docker-rust-nightly/blob/master/stretch/Dockerfile.

First, we build the image that contains the Rust compiler.

cd nightly
docker build -t rust:nightly .

Furthermore, the Dockerfile in the root directory of the project specifies how to build the microservice and package it into an image. It first builds a dummy project with only Cargo.toml copied in, so that the dependencies are compiled and cached in their own layer, and then copies the real sources and builds again.

FROM rust:nightly
RUN USER=root cargo new --bin rust-microservices
WORKDIR /rust-microservices
COPY ./Cargo.toml ./Cargo.toml
RUN cargo build
RUN rm src/*.rs
COPY ./src ./src
RUN rm ./target/debug/deps/rust_microservice*
RUN cargo build
CMD ["./target/debug/rust-microservice"]
EXPOSE 8080

Now we can build the microservice image and run a container from it with docker-compose.

cd ..
docker-compose build
docker-compose up -d

Testing

We will use curl and run the following commands to verify that the service behaves as expected.

curl -X POST http://localhost:8080/user/
curl -X POST http://localhost:8080/user/
curl -X POST http://localhost:8080/user/
curl -X POST http://localhost:8080/user/
curl -X DELETE http://localhost:8080/user/2
curl http://localhost:8080/users/

Since Slab assigns keys starting from 0, the four POST requests should return the IDs 0 through 3, and after deleting user 2 the final listing should return 0,1,3. This concludes our simple implementation of a Rust microservice.

Summary

This simple Rust microservice uses only a few popular crates: hyper, futures, slab, regex, and lazy_static, and implements a simple request-response model over simple in-memory data.

The Rust language, its standard library, and further third-party crates can help us build a more full-fledged microservice with logging, configuration, serialization and deserialization of data, data persistence, asynchronous operations, and so on.

Thanks for reading.

