Flaky TimeSeriesGroupingIteratorTest #242
@Agorguy Thanks for the report, but if the tests are flaking out solely because they are run in a severely resource-constrained environment, that's probably not a problem for this project. This project is designed to work with Apache Accumulo, an inherently "big data" application, and it is not expected to be run in such a constrained environment. So, if the tests are failing with timeouts, out-of-memory errors, or the like, that's probably not a problem for this project. However, if the tests are failing with failed assertions, then voluntary contributions in PRs to make the test assertions more robust might be appreciated (for example, a test that checks for a watched condition that usually happens immediately could be made to wait a bit longer for the condition to occur before timing out and failing).
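The kind of assertion hardening suggested above could be sketched roughly like this. This is a minimal, hypothetical helper, not code from this repository; the `waitFor` method and its parameters are invented for illustration:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;

public class WaitFor {

  // Poll a condition until it holds or a timeout elapses, instead of
  // asserting it immediately (which tends to flake on slow machines).
  static boolean waitFor(BooleanSupplier condition, long timeoutMillis, long pollMillis)
      throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
    while (System.nanoTime() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(pollMillis);
    }
    return condition.getAsBoolean(); // one last check at the deadline
  }

  public static void main(String[] args) throws InterruptedException {
    // Simulate a condition that only becomes true after ~200 ms of
    // background work, as might happen under heavy load.
    AtomicBoolean done = new AtomicBoolean(false);
    new Thread(() -> {
      try {
        Thread.sleep(200);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      done.set(true);
    }).start();

    // An immediate assertion here would fail; polling with a generous
    // timeout makes the check robust to scheduling delays.
    System.out.println(waitFor(done::get, 5_000, 50)); // prints "true"
  }
}
```

Libraries such as Awaitility provide a more polished version of this pattern, but even a small helper like this can remove timing sensitivity from an assertion.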
@ctubbsii I appreciate your perspective on resource constraints and robust test assertions. However, it is important to specify minimum system requirements to prevent flaky test failures for users of this project. While the project is designed for resource-rich environments, it is worth considering scenarios where users unintentionally run it elsewhere; flaky test failures unrelated to their changes can lead to unnecessary debugging effort. Our aim is to educate developers and users about the importance of specifying minimum requirements. Doing so makes the project more resilient against flaky tests, informs users about the expected environment, and avoids confusion and false bug reports caused by insufficient resources. Including this information is a low-effort task that saves time otherwise spent debugging test failures on machines below the minimum specification. I understand that the project is complex, and fixing the flaky tests proved challenging for me; despite my efforts, I couldn't resolve the issue due to the project's intricacies. To provide context, I'll share the stack traces of the flaky tests I encountered. Let me know if you have any ideas for solving the problem based on these stack traces, and I can give it one more try.
Stacktraces:
@Agorguy I ran the build on my laptop with 32GB RAM, and TimeSeriesGroupingIteratorTest failed there as well. So, I think this test is flaky regardless of resources, but I don't know how to fix it, and will leave that to its primary maintainers to fix when they have time. Thanks for bringing this test flakiness to our attention.
Hello,
We tried running your project and discovered that it contains some flaky tests (i.e., tests that nondeterministically pass and fail). We found these tests to fail more frequently when running them on certain of our machines.
To prevent others from running this project and its tests on machines where the tests may be flaky, we suggest adding information to the README.md indicating the minimum resource configuration for running this project's tests, so that test flakiness is not observed.
If we run this project on a machine with 1 CPU and 500 MB RAM, we observe flaky tests. We found that the tests in this project had no flaky failures when we ran them on machines with 2 CPUs and 4 GB RAM.
Here is a list of the tests we have identified and their likelihood of failure on a system with less than the recommended 2 CPUs and 2 GB RAM.
Please let me know if you would like us to create a pull request on this matter (possibly to the readme of this project).
Thank you for your attention to this matter. We hope that our recommendations will be helpful in improving the quality and performance of your project, especially for others to use.
Reproducing
Build the image:
Running:
This configuration likely prevents flakiness (no flakiness observed in 10 runs)
Checking results
This other configuration (similar to the previous one) does not prevent flaky tests (flakiness observed in 10 runs)
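The exact build and run commands were not captured above. As a generic sketch only (the image name, Dockerfile, and `mvn test` entry point are assumptions, not taken from this repository), Docker's resource flags can impose the two configurations described:

```shell
# Hypothetical reproduction sketch; names are placeholders.
docker build -t flaky-test-repro .

# Constrained environment where flaky failures were observed (1 CPU, 500 MB):
docker run --rm --cpus=1 --memory=500m flaky-test-repro mvn test

# Roomier configuration where no flakiness was observed (2 CPUs, 4 GB):
docker run --rm --cpus=2 --memory=4g flaky-test-repro mvn test
```

The `--cpus` and `--memory` flags are standard `docker run` options for limiting CPU and RAM available to a container.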