
Intermittent Failure in Reading #4

Closed
jolhoeft opened this issue Nov 16, 2017 · 12 comments

Comments

@jolhoeft

I'm afraid I don't have a lot of details on this yet. As part of migrating nickel.rs to hyper 0.11.x, I am using this crate to support future-based file reading. It seems to work fine whenever I connect to an example with a browser, but occasionally cargo test will fail. The failure is that no data is returned, but no errors are returned either. Do you have any suggestions?

The diff where I added this: jolhoeft/nickel.rs@1d71d20

The pull request to nickel.rs: nickel-org/nickel.rs#410

@jolhoeft
Author

This Travis build has the failures: https://travis-ci.org/nickel-org/nickel.rs/jobs/302778599

I don't know that there is much useful information there, though.

@jolhoeft
Author

jolhoeft commented Nov 16, 2017

Further data: loading the file assets/thoughtram_logo_brain.png (14109 bytes) always succeeds, but assets/nested/foo.js (39 bytes) intermittently fails.

@jolhoeft jolhoeft reopened this Nov 16, 2017
@jolhoeft
Author

An update: I reimplemented the file load with futures-cpupool and normal file I/O, just reading everything into a buffer. I got the same behavior as with futures-fs: an occasional failure to return any data. I was able to insert some eprintln! statements to determine that the file is getting read into the buffer, but the data is not getting beyond that. I suspect this points to an issue in futures-cpupool, since that seems to be the common element.

@jolhoeft
Author

My version using CpuPool, changing

let stream = self.fspool.read(path_ref.to_owned())
    .map(|b| Chunk::from(b))
    .map_err(|e| HyperError::from(e));

to

let stream = self.cpupool.spawn_fn(|| {
    // Open the file synchronously on a pool thread.
    let mut file = match File::open(path_buf) {
        Ok(f) => f,
        Err(e) => return future::err(e),
    };
    // Read the entire file into an in-memory buffer.
    let mut buf = Vec::new();
    match copy(&mut file, &mut buf) {
        Ok(_) => {
            eprintln!("Got buf: {:?}", &buf[0..16]);
            // Resolve the future with the whole file contents.
            future::ok(buf)
        },
        Err(e) => future::err(e),
    }
})
.into_stream()
.map(|b| Chunk::from(b))
.map_err(|e| HyperError::from(e));

Obviously, loading the whole file into a buffer before sending it will be suboptimal in many cases.
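
Not from the thread, but a minimal sketch of one way to keep the pool-based read while avoiding a single large Chunk: split the buffer into fixed-size pieces and flatten them into the body stream. The helper name chunked_body, the free-function shape, and the 8 KiB piece size are assumptions; the APIs used (futures 0.1, futures-cpupool, hyper 0.11's Chunk and Error) are the same ones as in the snippet above.

use std::fs::File;
use std::io::{self, copy};
use std::path::PathBuf;

use futures::{stream, Future, Stream};
use futures_cpupool::CpuPool;
use hyper::{Chunk, Error as HyperError};

// Hypothetical helper (not the thread's code): read the file on the pool,
// then emit it as a stream of 8 KiB Chunks instead of one large buffer.
fn chunked_body(pool: &CpuPool, path_buf: PathBuf)
    -> Box<Stream<Item = Chunk, Error = HyperError>>
{
    let body = pool.spawn_fn(move || -> Result<Vec<Vec<u8>>, io::Error> {
        let mut file = File::open(path_buf)?;
        let mut buf = Vec::new();
        copy(&mut file, &mut buf)?;
        // Split the buffer so hyper can start writing before the whole
        // body has been queued.
        Ok(buf.chunks(8 * 1024).map(|c| c.to_vec()).collect())
    })
    .map(stream::iter_ok::<_, io::Error>)
    .flatten_stream()
    .map(Chunk::from)
    .map_err(HyperError::from);
    Box::new(body)
}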

@jolhoeft
Author

A note about the CpuPool version: the eprintln! output indicates I am always getting the file; no errors are being thrown. But the data from the file is not always propagating through.
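
A minimal standalone check (not from the thread) would be to drive the same CpuPool read directly with wait(), taking hyper and tokio out of the loop entirely; the file path and byte count below are taken from the failing foo.js case above.

use std::fs::File;
use std::io::{self, Read};

use futures::Future;
use futures_cpupool::CpuPool;

fn main() {
    let pool = CpuPool::new(1);
    // Same shape as the server code: read the file on a pool thread and
    // hand the buffer back through the CpuFuture.
    let read = pool.spawn_fn(|| -> Result<Vec<u8>, io::Error> {
        let mut buf = Vec::new();
        File::open("assets/nested/foo.js")?.read_to_end(&mut buf)?;
        Ok(buf)
    });
    // wait() blocks this thread until the pool thread resolves the future,
    // so any data loss here would implicate futures-cpupool alone.
    let buf = read.wait().expect("read failed");
    assert_eq!(buf.len(), 39, "expected the full 39-byte file");
}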

@arthurprs

arthurprs commented Nov 22, 2017

I got the same behaviors as with futures-fs,

It could be an error in another part of the stack, which is even worse.

Are you still able to reproduce this?

@jolhoeft
Author

I am still seeing the problem. I suspect it is something in futures-cpupool, but I've been trying to tease apart the stack to see if I could pinpoint it better before opening more issues. The stack currently looks like futures-cpupool -> hyper -> tokio -> futures, and I'm sure there is more I'm not aware of. The other side of the test harness has a similar stack, but since that is working for all but two tests (out of 56), I think the problem is on the server side.

As a side note, I see this under both Linux (Ubuntu 16.04) and Windows 10. I have the impression that it is more common on my slower systems, but I have not measured that.
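
Not from the thread, but one cheap way to separate the client half of the harness from the server half is to issue the request over a bare std TcpStream instead of the hyper-based test client. The address and request path below are placeholders for whatever the test server actually binds and serves.

use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Placeholder address and path: substitute the test server's real ones.
    let mut stream = TcpStream::connect("127.0.0.1:6767")?;
    stream.write_all(
        b"GET /nested/foo.js HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n",
    )?;
    // With Connection: close the server ends the response by closing the
    // socket, so reading to EOF captures the complete body (if any).
    let mut response = Vec::new();
    stream.read_to_end(&mut response)?;
    println!("got {} bytes", response.len());
    println!("{}", String::from_utf8_lossy(&response));
    Ok(())
}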

@seanmonstar
Owner

seanmonstar commented Nov 24, 2017 via email

@jolhoeft
Author

I'm just pulling hyper 0.11.7 from crates.io. The only configuration option I'm currently setting is enabling keep-alives.

I can test with the tip of master. I'll also try some different file sizes to see if that turns up anything.

Is this the best place to discuss this, or would the hyper issue I created (hyperium/hyper#1377) be better?

@jolhoeft
Author

I've experimented with file sizes, and don't see a difference up to 30k or so. I didn't test larger than that. In the process I realized the tests were not quite as robust as I had thought, and improved them. All files are seeing this problem, not just the subset I mentioned earlier.

@jolhoeft
Author

Testing with the master branch of hyper shows the same behavior.

@jolhoeft
Author

jolhoeft commented Jan 8, 2018

Turns out this was a bug in my test harness. Details at hyperium/hyper#1377

@jolhoeft jolhoeft closed this as completed Jan 8, 2018