Fix minification issue / make implementation more solid #824
Conversation
A few things:
- Thank you for taking the time to contribute. It is appreciated.
- This is unlikely to be an issue in the `next` branch. If it is, the `SearchEntry` object will come from https://github.com/ldapjs/messages and the check should be `Object.prototype.toString.call(msg) === '[object LdapMessage]' && msg.type === 'SearchEntry'`.
- If we are going to publish this against the v2 release line (and I am really hesitant to make any further releases on that line), then this PR should target the v2 branch. I am not willing to make any further changes to the `master` branch until I am able to merge the `next` branch into it.
- This PR would need a corresponding unit test to cover it before it can be accepted.
Thank you for the quick response. This issue has been causing headaches for 6 years now. Unfortunately it is also a really sneaky issue, which can take hours to understand and trace to its root cause. Worst of all, it only shows up very late in the development process (only in the integration tests, where the code is minified). The fix is simple and clean IMO. It makes the code easier to read and understand.
I understand that you'd like to have a regression test for this. I thought for quite some time about how to write one, but I didn't come up with a solution to emulate a different class name. As already shown, the current tests already cover the implementation.
I'm not sure, but I think … which could be replaced with … What I found on the …
OK, so I'll change the target to the v2 branch 🤞
The current tests pass. Therefore, the "issue" being fixed is not covered by the tests. A test should be added that fails without the change.
As I said, it will be replaced by the currently in-development package. I do not have …
a9b79f5 to 17cfc60
I changed the target branch to `v2`.
Agree. At least the tests ensure that what was working before still works. And do we know that it actually fixes a bug? Even if it didn't, the code would still be easier to read and understand, wouldn't it? Anyway: do you have a suggestion on how to add such a test? I don't know how to do that without big changes 🤔
Ok, got it.
Surprising! 🤔 Hmm. I see that you already handled the issue by overwriting … Sorry for the false positive.
This seems equivalent, but I cannot say that for sure.
This is not accurate. The current package provides the message "classes" for anyone to use.
Correct.
Does it? I have no idea without a test showing what is being fixed.
Possibly? This seems like a weak argument when the stated intent is to solve some failure of the code.
Let's get one thing straight: code which relies on a specific class name is fragile.
In general, relying on a specific class name (which is an implementation detail) violates the LSP. Someone probably already noticed that depending on the class name is not a good idea, and added a workaround for it.
Some (most?) popular minifiers mangle function and class names by default.
30bde8a emulates that the classes/functions are renamed, i.e. it tests that the code breaks in that case. There's no other test which checks that the message classes are named the way they are. But that doesn't mean nobody outside of this package will use these classes and rely on their names. Then, if we apply one of the solutions (the one in this PR, or e1146da), it will fix the test.
I understand that you are hesitant to merge this, and I understand that you want to have this bug first confirmed, then resolved, and then kept that way. And the best way to guarantee that is to have a test. I know, 100% agree, I'm with you. But people are having real-life issues here, otherwise there wouldn't have been all these issues / PRs. So we know there's a bug. We just don't know how to test it in an automated way, except by writing a program, minifying its dependencies, and then observing that it stops working. And that "test" would be out of proportion, wouldn't it? I mean technically we could write such a test, but is it necessary?

I've tried to write a unit test, and I couldn't manage it, because everything is so encapsulated that the only way I found to test it is to manually rename the class, which I did in 30bde8a. I'm not proud of that, and I'd prefer a better way. The good thing: the new code is tested. And it is simpler. Clearer. More direct. Less magical. So in the very worst case, it does nothing. Right?
Hmm. It's a completely different argument, agreed. And yes, a unit test which shows the system failing would be a very strong argument. But by no means is this argument weak: code which is easier to understand and reason about is less likely to contain a bug, especially when we know that there is a bug in the current implementation :)
No. If your tools are changing the guarantees of the language then your tools are to blame, not the code.
Uh, sure. As the code base is updated to use the new independent modules this will be resolved. Probably.
I have no idea why the …
And here's the statement you will not like: I do not care about minifiers, or transpilation of any sort, in the slightest.
Good.
This is what a seemingly small subset of users claim. If it were a widespread problem there would certainly be much more discussion and interest in fixing it.
No one said writing such a test would be easy. The "proportion" of the test is related to the problem.
Write an integration test. But renaming the objects as you have linked to multiple times is not the right way to do it. As stated before: they are public objects.
No, it isn't. It is not tested according to the stated purpose of the change.
Again, we do not know there is a bug when there is not a test case to prove it.

Here's the short of it: this project was never designed to be used in the manner in which you are using it. It is a Node.js module designed to be run on top of the Node.js runtime. Such modules have zero need to be transpiled. If you intend to transpile the code, then it is up to you to configure your tooling to not break your dependencies.

That being said, we want contributors to this project, and for this project to be a product of the contributors' work based upon their needs. But we need those contributors to support their work, and the minimum support they can provide is a guarantee that their contributions do what they claim to do. This is done by including at least one test that proves the contribution.

This PR could be the only thing you ever add to this project. If it is merged without a covering test, then in 6 months or 2 years from now, when the code is changed again, there will be no way to know whether the problem has been avoided or reintroduced. And without you around to review every change, it is likely that the issue will be reintroduced.
I give up
Please include a minimal reproducible example
… the `master` branch. With the refactored implementation, all tests are green:
Changing `return sendResult('searchEntry', msg)` to `return sendResult('somethingSilly', msg)` results in 14 assertion errors. Changing `return sendResult('searchReference', msg)` to `return sendResult('somethingSilly', msg)` results in 2 assertion errors.