Tokio's io::split with Cursor<Vec<u8>> won't get the full written data

While doing some tests with Tokio, I noticed that when using tokio::io::split to send the reader and writer halves into different tasks, the reader never waits for the writer to be closed, nor for any write performed in another task. I tried searching the documentation but found nothing that would explain this behaviour. Did I miss something?

Here is minimal code to reproduce the issue.

use std::io::Cursor;

use tokio::io::{self, AsyncReadExt, AsyncWriteExt};
use tokio::spawn;

#[tokio::main]
async fn main() {
    let a = Cursor::new(vec![]);
    let (mut read, mut write) = io::split(a);

    spawn(async move {
        for i in 0..10u8 {
            write.write_u8(i).await.unwrap();
        }
    });

    let mut output = vec![];

    let written = read.read_buf(&mut output).await.unwrap();
    assert_eq!(written, 10);
}

It can also be executed on the Rust Playground.

The output on playground is the following:

Standard Error
   Compiling playground v0.0.1 (/playground)
    Finished dev [unoptimized + debuginfo] target(s) in 1.60s
     Running `target/debug/playground`
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `0`,
 right: `10`', src/
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Standard Output

It just returns 0 instead of 10. I'd imagine that's because the waker for Cursor<Vec<_>> is always ready, since there is no actual I/O (assumption here) being done, unless you are piping data from somewhere else into the cursor. I also tried yield_now and sleep, but the reader never sees any data written to it. I also made sure that the writer gets moved, so that on drop it could flush any remaining data and close any handle it might have.

With a normal cursor I'd have to seek it back to position 0, but wouldn't that defeat the purpose of split?


Here is the result, and the place where it is being used, so the functions can take AsyncRead and AsyncWrite.


  • This is not really an intended use-case. Tokio's io::split is primarily a convenience for things like network streams (TCP, Unix sockets, etc.) that have fully independent read and write streams. It is not intended for things like files and Cursors, whose read and write streams are not independent.

    The problem you observe with Cursor is because it has only a single "position" that reads come from and writes write to. You could seek to zero for every read, but since that seek step is distinct from the read step, a write could happen in the middle and cause you to miss and/or misinterpret data.

    If you want to share data between threads, why not use a simple channel or a duplex stream?
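
    As a sketch of the duplex suggestion: tokio::io::duplex returns two connected in-memory endpoints, so bytes written to one side become readable on the other, and the reader sees EOF once the writer side is dropped. The buffer size of 64 here is an arbitrary choice for the example.

    ```rust
    use tokio::io::{AsyncReadExt, AsyncWriteExt};

    #[tokio::main]
    async fn main() {
        // duplex returns two connected endpoints: bytes written to one
        // become readable on the other (with backpressure once the
        // internal buffer, here 64 bytes, is full).
        let (mut tx, mut rx) = tokio::io::duplex(64);

        tokio::spawn(async move {
            for i in 0..10u8 {
                tx.write_u8(i).await.unwrap();
            }
            // Dropping tx closes the write side, so the reader sees EOF.
        });

        let mut output = vec![];
        // Unlike the Cursor version, this genuinely waits for the writer.
        rx.read_to_end(&mut output).await.unwrap();
        assert_eq!(output, (0..10u8).collect::<Vec<u8>>());
    }
    ```

    Because the two halves are truly independent here, splitting or moving them into separate tasks behaves the way the question expected the Cursor to behave.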