
What is the idiomatic way to write Rust microservice with shared db connections and caches?


I'm writing my first Rust microservice with hyper. After years of development in C++ and Go I tend to use a controller for processing requests (like here - https://github.com/raycad/go-microservices/blob/master/src/user-microservice/controllers/user.go), where the controller stores shared data such as a db connection pool and different kinds of cache. I know that with hyper I can write it this way:

use hyper::{Body, Request, Response};

pub struct Controller {
//    pub cache: Cache,
//    pub db: DbConnectionPool
}

impl Controller {
    pub fn echo(&mut self, req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
        // extensively using db and cache here...
        let mut response = Response::new(Body::empty());
        *response.body_mut() = req.into_body();
        Ok(response)
    }
}

and then use it:

use hyper::{Server, Request, Response, Body, Error};
use hyper::service::{make_service_fn, service_fn};

use std::{convert::Infallible, net::SocketAddr, sync::Arc, sync::Mutex};

async fn route(controller: Arc<Mutex<Controller>>, req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    let mut c = controller.lock().unwrap();
    c.echo(req)
}

#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));

    let controller = Arc::new(Mutex::new(Controller{}));

    let make_svc = make_service_fn(move |_conn| {
        let controller = Arc::clone(&controller);
        async move {
            Ok::<_, Infallible>(service_fn(move |req| {
                let c = Arc::clone(&controller);
                route(c, req)
            }))
        }
    });

    let server = Server::bind(&addr).serve(make_svc);

    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}

Since the compiler doesn't let me share a mutable structure between threads, I had to use the Arc<Mutex<T>> idiom. But I'm afraid the let mut c = controller.lock().unwrap(); part blocks the entire controller while processing a single request, i.e. there's no concurrency here. What is the idiomatic way to address this problem?


Solution

  • &mut always acquires a (compile-time or runtime) exclusive lock on the value. Only acquire a &mut in the exact scope where you need the value locked. If a value owned by the locked value needs separate locking management, wrap it in its own Mutex.
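    For instance (a minimal stand-alone sketch; the counter is just a stand-in for your shared state), keeping the guard in its own narrow block releases the exclusive lock before any slow work runs, so other threads are only blocked for the increment itself:

    ```rust
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0u32));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    // Lock only for the increment; the guard is dropped at
                    // the end of this inner block.
                    {
                        let mut n = counter.lock().unwrap();
                        *n += 1;
                    } // exclusive lock released here
                    // ... long-running work proceeds without holding the lock
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        assert_eq!(*counter.lock().unwrap(), 4);
    }
    ```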

    Assuming your DbConnectionPool is structured like this:

    struct DbConnectionPool {
        conns: HashMap<ConnId, Conn>,
    }
    

    We need &mut access to the HashMap when we add or remove entries, but we don't need &mut access through it to each Conn. Arc lets us detach a connection's mutability boundary from its parent map, and Mutex gives each connection its own interior mutability.

    Moreover, our echo method should not need to be &mut, so another layer of interior mutability has to be added around the HashMap itself.

    So we change this to

    struct DbConnectionPool {
        conns: Mutex<HashMap<ConnId, Arc<Mutex<Conn>>>>,
    }
    

    Then when you want to get a connection,

    fn get(&self, id: ConnId) -> Arc<Mutex<Conn>> {
        // unwrap: panics only if another thread panicked while holding the lock
        let mut pool = self.db.conns.lock().unwrap();
        if let Some(conn) = pool.get(&id) {
            Arc::clone(conn)
        } else {
            // here we utilize the interior mutability of `pool`
            let arc = Arc::new(Mutex::new(new_conn()));
            pool.insert(id, Arc::clone(&arc));
            arc
        }
    }
    

    (the ConnId param and the if-exists-else logic are only there to simplify the example; you can change the logic)

    On the returned value you can do

    self.get(id).lock().unwrap().query(...)
    

    For ease of illustration I changed the logic so that the user supplies the ID. In reality, you would find a Conn that has not yet been acquired and return it. You can then return an RAII guard for the Conn, similar to how MutexGuard works, to automatically free the connection when the user is done with it.
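    A stand-alone sketch of such a guard (the Pool, PooledConn, and Vec-based free list here are hypothetical simplifications, not hyper or database APIs): the connection is returned to the pool in Drop, just as MutexGuard releases its lock in Drop.

    ```rust
    use std::sync::{Arc, Mutex};

    // Hypothetical connection type; stands in for a real DB connection.
    struct Conn {
        id: u32,
    }

    struct Pool {
        free: Mutex<Vec<Conn>>,
    }

    // RAII guard: owns a connection checked out of the pool and puts it
    // back when dropped.
    struct PooledConn {
        pool: Arc<Pool>,
        conn: Option<Conn>,
    }

    impl PooledConn {
        fn get(&mut self) -> &mut Conn {
            self.conn.as_mut().expect("connection already returned")
        }
    }

    impl Drop for PooledConn {
        fn drop(&mut self) {
            if let Some(conn) = self.conn.take() {
                self.pool.free.lock().unwrap().push(conn);
            }
        }
    }

    impl Pool {
        fn acquire(pool: &Arc<Pool>) -> Option<PooledConn> {
            let conn = pool.free.lock().unwrap().pop()?;
            Some(PooledConn { pool: Arc::clone(pool), conn: Some(conn) })
        }
    }

    fn main() {
        let pool = Arc::new(Pool { free: Mutex::new(vec![Conn { id: 1 }]) });

        {
            let mut checked_out = Pool::acquire(&pool).expect("pool has one connection");
            assert_eq!(checked_out.get().id, 1);
            // while the guard is alive, the pool is empty
            assert!(Pool::acquire(&pool).is_none());
        } // guard dropped here: Drop returns the connection to the pool

        assert!(Pool::acquire(&pool).is_some());
    }
    ```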

    Also consider using RwLock instead of Mutex where reads dominate writes; that may give a performance boost.
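    For example (a stand-alone sketch using std::sync::RwLock; the cache map is just illustrative), many reader threads can hold the lock simultaneously, while a writer still gets exclusive access:

    ```rust
    use std::collections::HashMap;
    use std::sync::{Arc, RwLock};
    use std::thread;

    fn main() {
        let cache = Arc::new(RwLock::new(HashMap::new()));

        // A write lock is exclusive, like a Mutex.
        cache.write().unwrap().insert("key", "value");

        // Read locks can be held by many threads at once, so read-heavy
        // workloads (e.g. cache lookups) contend far less than behind a Mutex.
        let readers: Vec<_> = (0..4)
            .map(|_| {
                let cache = Arc::clone(&cache);
                thread::spawn(move || {
                    let guard = cache.read().unwrap();
                    assert_eq!(guard.get("key"), Some(&"value"));
                })
            })
            .collect();

        for r in readers {
            r.join().unwrap();
        }
    }
    ```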