Intermittent Failure in Reading #4
This Travis build has the failures: https://travis-ci.org/nickel-org/nickel.rs/jobs/302778599. I don't know that there is much useful information there, though.
Further data, loading the file
An update: I reimplemented the file load with futures-cpupool and normal file I/O, just reading everything into a buffer. I got the same behavior as with futures-fs: an occasional failure to return any data. I was able to insert some eprintln! statements to determine that the file is getting read into the buffer, but the data is not making it beyond that. I suspect this points to an issue in futures-cpupool, since that seems to be the common element.
My version using CpuPool changes

```rust
let stream = self.fspool.read(path_ref.to_owned())
    .map(|b| Chunk::from(b))
    .map_err(|e| HyperError::from(e));
```

to

```rust
let stream = self.cpupool.spawn_fn(move || {
    let mut file = match File::open(path_buf) {
        Ok(f) => f,
        Err(e) => return future::err(e),
    };
    let mut buf = Vec::new();
    match copy(&mut file, &mut buf) {
        Ok(_) => {
            // Debug probe: confirm the file contents actually reached the buffer.
            eprintln!("Got buf: {:?}", &buf[0..16]);
            future::ok(buf)
        },
        Err(e) => future::err(e),
    }
}).into_stream()
    .map(|b| Chunk::from(b))
    .map_err(|e| HyperError::from(e));
```

Obviously, loading the whole file into a buffer before sending it will be suboptimal in many cases.
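If whole-file buffering needs to be avoided, one option is to keep futures-cpupool but read in fixed-size chunks. The sketch below is only an illustration under my own assumptions: the `chunked_read` name, the 16 KiB chunk size, and returning plain `Vec<u8>` chunks are not code from this crate or from nickel.rs.

```rust
// A sketch only: stream a file in fixed-size chunks via a CpuPool instead of
// buffering it whole. Names and chunk size are assumptions, not project code.
extern crate futures;
extern crate futures_cpupool;

use std::fs::File;
use std::io::{self, Read};
use std::path::PathBuf;

use futures::{stream, Future, Stream};
use futures_cpupool::CpuPool;

fn chunked_read(pool: &CpuPool, path: PathBuf)
    -> Box<dyn Stream<Item = Vec<u8>, Error = io::Error> + Send>
{
    let read_pool = pool.clone();
    let stream = pool
        // Open the file on the pool so the event loop never blocks.
        .spawn_fn(move || File::open(path))
        .map(move |file| {
            // Each unfold step reads one chunk on the pool; a `None` state ends the stream.
            stream::unfold(Some(file), move |state| {
                let file = match state {
                    Some(f) => f,
                    None => return None, // previous read hit EOF
                };
                Some(read_pool.spawn_fn(move || {
                    let mut file = file;
                    let mut buf = vec![0u8; 16 * 1024];
                    let n = match file.read(&mut buf) {
                        Ok(n) => n,
                        Err(e) => return Err(e),
                    };
                    buf.truncate(n);
                    // An empty read means EOF; drop the file so the next step stops.
                    let next = if n == 0 { None } else { Some(file) };
                    Ok((buf, next))
                }))
            })
        })
        .flatten_stream()
        // Skip the final empty chunk emitted at EOF.
        .filter(|buf| !buf.is_empty());
    Box::new(stream)
}
```

Each chunk could then be mapped to `Chunk`/`HyperError` the same way as in the snippet above.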
A note about the CpuPool version, the
It could be an error in another part of the stack, which is even worse. Are you still able to reproduce this?
I am still seeing the problem. I suspect it is something in futures-cpupool, but I've been trying to tease apart the stack to see if I could pinpoint it better before opening more issues. The stack currently looks like futures-cpupool -> hyper -> tokio -> futures, and I'm sure there is more I'm not aware of. The other side of the test harness has a similar stack, but since that is working for all but two tests (out of 56), I think the problem is on the server side.

As a side note, I see this both under Linux (Ubuntu 16.04) and Windows 10. I have an impression that it is more common on my slower systems, but I have not measured that.
How are you configuring hyper? If you're using the new "no proto" stuff, there was a bug found that didn't properly handle back pressure on the socket, so the file was read as fast as possible, and if the socket wasn't ready, the bytes were dropped. This is fixed on hyper master, and will be in a new version once I get back to reliable internet.

If you're not using the new stuff, then you may have found a different bug!
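To illustrate what back pressure means here: the producer has to wait until the consumer is ready before pushing more bytes. The sketch below is not hyper's internal code; it only shows the general futures 0.1 pattern, with a small bounded channel standing in for the socket. `Stream::forward` waits for the sink to accept each item, so nothing is lost even when the "socket" pushes back, whereas a writer that ignored `AsyncSink::NotReady` would drop data, which is the failure mode described above.

```rust
// A conceptual sketch of back pressure in futures 0.1 terms; the bounded
// channel stands in for a socket that can stop accepting bytes. This is not
// hyper's internal code.
extern crate futures;

use futures::sync::mpsc;
use futures::{stream, Future, Sink, Stream};

fn main() {
    // Tiny buffer: the "socket" pushes back after a couple of items.
    let (tx, rx) = mpsc::channel::<u8>(2);

    // `forward` only sends the next item once the sink has accepted the
    // previous one, so nothing is dropped even though the buffer is tiny.
    let producer = stream::iter_ok::<_, ()>(0u8..10)
        .forward(tx.sink_map_err(|_| ()))
        .map(|_| ());

    let consumer = rx.collect().map(|received| {
        assert_eq!(received, (0..10).collect::<Vec<u8>>());
    });

    // Drive both halves to completion; in a real server the event loop does this.
    producer.join(consumer).wait().unwrap();
}
```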
I'm just pulling hyper 0.11.7 from crates.io. The only configuration I'm currently setting is enabling keep-alives. I can test with the tip of master. I'll also try some different file sizes to see if that turns up anything. Is this the best place to discuss this, or would the hyper issue I created (hyperium/hyper#1377) be better?
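For reference, a minimal hyper 0.11-style setup matching that description (defaults apart from keep-alive) might look like the sketch below. `HelloService` and the address are placeholders rather than nickel's actual service, and the `Http::keep_alive` toggle is my assumption about how the keep-alive setting is applied.

```rust
// A minimal hyper 0.11-style server sketch: default settings apart from
// keep-alive. The service and address are placeholders.
extern crate futures;
extern crate hyper;

use futures::future::{self, FutureResult};
use hyper::header::ContentLength;
use hyper::server::{Http, Request, Response, Service};

struct HelloService;

impl Service for HelloService {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = FutureResult<Response, hyper::Error>;

    fn call(&self, _req: Request) -> Self::Future {
        let body = "hello";
        future::ok(
            Response::new()
                .with_header(ContentLength(body.len() as u64))
                .with_body(body),
        )
    }
}

fn main() {
    let addr = "127.0.0.1:3000".parse().unwrap();
    let server = Http::new()
        .keep_alive(true) // the only non-default setting mentioned above (assumed API)
        .bind(&addr, || Ok(HelloService))
        .unwrap();
    server.run().unwrap();
}
```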
I've experimented with file sizes and don't see a difference up to 30k or so; I didn't test larger than that. In the process I realized the tests were not as robust as I thought, and improved them. All files are seeing this problem, not just a subset as I mentioned earlier.
Testing with the master branch of hyper shows the same behavior.
It turns out this was a bug in my test harness. Details are at hyperium/hyper#1377.
I'm afraid I don't have a lot of details on this yet. As part of migrating nickel.rs to hyper 0.11.x, I am using this crate to support future-based file reading. It seems to work fine whenever I connect to an example with a browser, but occasionally cargo test will fail. The failure is that no data is returned, but no errors are returned either. Do you have any suggestions?
The diff where I added this: jolhoeft/nickel.rs@1d71d20
The pull request to nickel.rs: nickel-org/nickel.rs#410