
Why do my Futures not max out the CPU?


I am creating a few hundred requests to download the same file (this is a toy example). When I run the equivalent logic with Go, I get 200% CPU usage and return in ~5 seconds w/ 800 reqs. In Rust with only 100 reqs, it takes nearly 5 seconds and spawns 16 OS threads with 37% CPU utilization.

Why is there such a difference?

From what I understand, if I have a CpuPool managing Futures across N cores, this is functionally what the Go runtime/goroutine combo is doing, just via fibers instead of futures.

From the perf data, it seems like I am only using one core despite the 16-thread CpuPool.

extern crate curl;
extern crate fibers;
extern crate futures;
extern crate futures_cpupool;

use std::io::{Write, BufWriter};
use curl::easy::Easy;
use futures::future::*;
use std::fs::File;
use futures_cpupool::CpuPool;


fn make_file(x: i32, data: &mut Vec<u8>) {
    // Write the downloaded bytes to ./data/<x>.txt
    let f = File::create(format!("./data/{}.txt", x)).expect("Unable to open file");
    let mut writer = BufWriter::new(&f);
    writer.write_all(data.as_slice()).unwrap();
}

fn collect_request(x: i32, url: &str) -> Result<i32, ()> {
    let mut data = Vec::new();
    let mut easy = Easy::new();
    easy.url(url).unwrap();
    {
        let mut transfer = easy.transfer();
        transfer
            .write_function(|d| {
                data.extend_from_slice(d);
                Ok(d.len())
            })
            .unwrap();
        transfer.perform().unwrap();

    }
    make_file(x, &mut data);
    Ok(x)
}

fn main() {
    let url = "https://en.wikipedia.org/wiki/Immanuel_Kant";
    let pool = CpuPool::new(16);
    let output_futures: Vec<_> = (0..100)
        .map(|ind| pool.spawn_fn(move || collect_request(ind, url)))
        .collect();

    for i in output_futures {
        i.wait().unwrap();
    }
}

My equivalent Go code


Solution

  • From what I understand, if I have a CpuPool managing Futures across N cores, this is functionally what the Go runtime/goroutine combo is doing, just via fibers instead of futures.

    This is not correct. The documentation for CpuPool states, emphasis mine:

    A thread pool intended to run CPU intensive work.

    Downloading a file is not CPU-bound, it's IO-bound. All you have done is spin up many threads and then tell each thread to block while waiting for IO to complete.
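    You can see this effect with nothing but the standard library. The sketch below (my own illustration, using `thread::sleep` as a stand-in for a blocking download) spawns 16 threads that all block at once: total wall time stays close to the per-"download" latency, while CPU usage during that window is near zero.

    ```rust
    use std::thread;
    use std::time::{Duration, Instant};

    // Spawn `n` threads that each block (sleep) for `dur` and return the total
    // wall-clock time. Blocked threads consume almost no CPU, so the wall time
    // stays close to `dur` regardless of how many threads there are.
    fn run_blocking_threads(n: usize, dur: Duration) -> Duration {
        let start = Instant::now();
        let handles: Vec<_> = (0..n)
            .map(|_| thread::spawn(move || thread::sleep(dur)))
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        start.elapsed()
    }

    fn main() {
        let elapsed = run_blocking_threads(16, Duration::from_millis(200));
        println!("16 blocked threads finished in {:?}", elapsed);
    }
    ```

    The threads overlap their waiting, which is why the original program finishes at all, but none of that waiting makes productive use of a core, which is why perf shows the CPU mostly idle.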

    Instead, use tokio-curl, which adapts the curl library to the Future abstraction. You can then remove the threadpool completely. This should drastically improve your throughput.
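    As a rough sketch of that direction (untested; it assumes the futures 0.1-era API of `tokio-core` and `tokio-curl` 0.1, i.e. `Session::new` and `Session::perform`, and requires network access), all 100 transfers can be driven concurrently on a single event loop with no blocking threads at all:

    ```rust
    extern crate curl;
    extern crate futures;
    extern crate tokio_core;
    extern crate tokio_curl;

    use std::fs::File;
    use std::io::Write;
    use std::sync::{Arc, Mutex};

    use curl::easy::Easy;
    use futures::Future;
    use tokio_core::reactor::Core;
    use tokio_curl::Session;

    fn make_request(session: &Session, x: i32, url: &str) -> Box<Future<Item = i32, Error = ()>> {
        // write_function requires a 'static closure, so share the buffer
        // between the callback and the completion handler.
        let data = Arc::new(Mutex::new(Vec::new()));
        let write_data = data.clone();

        let mut easy = Easy::new();
        easy.url(url).unwrap();
        easy.write_function(move |d| {
            write_data.lock().unwrap().extend_from_slice(d);
            Ok(d.len())
        }).unwrap();

        // perform() returns a future; nothing blocks while the transfer runs.
        Box::new(session.perform(easy).map_err(|_| ()).map(move |_| {
            let mut f = File::create(format!("./data/{}.txt", x)).expect("Unable to open file");
            f.write_all(&data.lock().unwrap()).unwrap();
            x
        }))
    }

    fn main() {
        let url = "https://en.wikipedia.org/wiki/Immanuel_Kant";
        let mut core = Core::new().unwrap();
        let session = Session::new(core.handle());

        // All requests are in flight concurrently on one event loop.
        let requests: Vec<_> = (0..100).map(|i| make_request(&session, i, url)).collect();
        core.run(futures::future::join_all(requests)).unwrap();
    }
    ```

    The key difference from the original is that the event loop multiplexes every pending transfer on a single thread, so no thread ever sits blocked on the network, which is the situation the 16-thread CpuPool was creating.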