I have been ranting about bad error messages, so in my own work, error messages had better be helpful. At least I try.
In the recent milestone 6 of the JDT, a significant novelty was actually mostly about the wording of a few error/warning messages issued by the null analysis of the JDT compiler. We had quite a discussion about it (no, I’m not expecting you to read all the comments in the bug :) ).
Why did we go through all this instead of using the time to fix more bugs? Because the entire business of implementing more compiler warnings and in particular introducing annotation-based null analysis is to help you to better understand possible weaknesses of your code. This means our job isn’t done when the error message is printed on your screen, but only when you recognize why (and possibly how) your code could be improved.
So the game is:
when you see one of the new compiler messages
“what does it tell you?”
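The snippet under discussion is not shown here; a plausible reconstruction of the two methods (not the original code; line numbers in the comments are approximate and assume @NonNull from org.eclipse.jdt.annotation) might look like this:

```java
void flowBased(boolean b) {
    String val1 = "OK";
    if (val1 != null)             // line 10: redundant null check
        System.out.println(val1);
    if (val1 == null)             // line 12: comparison always yields false
        return;
    if (b)                        // line 14
        val1 = null;              // line 15: null is potentially assigned
    if (val1 == null)             // line 16: this check is useful, no warning
        return;                   // line 17
}

void declarationBased(boolean b) {
    @NonNull String val2 = "OK";
    if (val2 != null)             // line 20: redundant null check
        System.out.println(val2);
    if (val2 == null)             // line 22: comparison always yields false
        return;
    if (b)                        // line 24
        val2 = null;              // line 25: error, null type mismatch
    if (val2 == null)             // line 26: still flagged as redundant
        return;                   // line 27
}
```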
Both methods basically have the same code, yet lines 14-17 are free of any warnings, whereas the corresponding lines 24-27 have one warning and even an error. What does that tell us?
Here are some of the messages:
|line 10||Redundant null check: The variable val1 cannot be null at this location|
|line 12||Null comparison always yields false: The variable val1 cannot be null at this location|
Before bug 365859 the second method would show the same messages, giving no clue why the very same code yields different results in the two methods. The initial improvement in that bug was to update the messages like this:
|line 20||Redundant null check: The variable val2 is specified as @NonNull|
|line 22||Null comparison always yields false: The variable val2 is specified as @NonNull|
Alright! Here lies the difference: in the first method, the compiler warnings are based on the fact that we see an assignment with the non-null value "OK", and the analysis carefully follows each data flow from there on. Non-null definitely holds until line 15, where potentially (depending on where b takes the control flow) null is assigned. Now the check in line 16 appears useful.
By contrast, the warnings in the second method tell us that they are not based on flow analysis, but on the mere fact that val2 is declared as of type @NonNull String. This specification is effective independent of location and flow, which has two consequences: the assignment in line 25 is now illegal; and since we cannot accept this assignment, line 26 still judges by the declaration of val2, which says:
|line 25||Null type mismatch: required ‘@NonNull String’ but the provided value is null|
|line 26||Redundant null check: The variable val2 is specified as @NonNull|
Communicate the reasoning
Three levels to a good error message:
- “You did wrong.”
- “Your mistake was …”
- “This is wrong because …”
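The snippet these messages refer to is missing; a plausible sketch (again not the original code, assuming @NonNull/@Nullable from org.eclipse.jdt.annotation; line numbers in the comments are approximate) could be:

```java
void assignments(@Nullable String in) {
    @NonNull String tmp1 = null;      // line 31: provided value is null
    @NonNull String tmp2 = in;        // line 32: provided value is @Nullable
    String unknown = in;              // line 33: 'unknown' has a legacy type
    @NonNull String tmp3 = unknown;   // line 34: inferred as @Nullable
}
```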
|line 31||Null type mismatch: required ‘@NonNull String’ but the provided value is null|
|line 32||Null type mismatch: required ‘@NonNull String’ but the provided value is specified as @Nullable|
|line 34||Null type mismatch: required ‘@NonNull String’ but the provided value is inferred as @Nullable|
Line 31 is obvious. Line 32 is wrong because in is declared as @Nullable String: null is a legal value for in, but since it is not legal for tmp2, the assignment is wrong.
In line 34 we are assigning a value that has no nullness specification; we say unknown has a “legacy” type. From that alone the compiler cannot decide whether the assignment in line 34 is good. However, using also the information from line 33, we can infer that unknown (probably) has type @Nullable String. In this particular case the inference is obvious, but the steps leading to such a conclusion can be arbitrarily complex.
What does this distinction tell you?
The error in line 31 is a plain violation of the specification: tmp1 is required to be non-null, but the assignment attempts to definitely break that rule.
The error in line 32 denotes the conflict between two contradictory declarations. We know nothing about actual runtime values, but we can tell without any doubt that the code violates a rule.
Errors of the type in line 34 are reported as a courtesy to the user: you didn’t say what kind of variable unknown is, so normally the compiler would be reluctant to report problems regarding its use, but looking a bit deeper the compiler can infer some of the missing information. Only in this category does it make sense to discuss whether the conclusion is correct; the inference inside the compiler might be wrong (which would be a compiler bug).
Sources of uncertainty
Of the previous messages, only the one in line 31 mentions a runtime fact; the remaining errors only refer to possibilities of null values where no null value is allowed. In these cases the program might actually work – by accident. Just like this program might work:
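The program referenced here is missing; it may have been something of this kind (a hypothetical example, not the author’s original): a method that uses its Object parameter as if it were a String, without a cast.

```java
// not legal Java: Object declares no length() method, so javac rejects this,
// even though it would work whenever the argument happens to be a String
int stringLength(Object o) {
    return o.length();
}
```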
While this is not a legal Java program, a hypothetical compiler could produce runnable byte code, and if the method is invoked with an argument that happens to be a String, all is well – by accident.
While we have no guarantee that things would break at runtime, we know for sure that some rule has been broken and thus the program is rejected.
What can we tell about this assignment? Well … we don’t know: it’s not definitely bad, but it’s not good either. What’s the problem? We need a @NonNull value, but we simply have no information on whether unspecified can possibly be null or not. One of those legacy types again. After much back and forth we finally found that we have a precedent for this kind of problem: what does the compiler say to this snippet:
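Presumably a raw-type assignment along these lines (a sketch, not the original snippet):

```java
void take(List unspecified) {              // raw type: no type argument
    List<String> strings = unspecified;    // unchecked conversion warning
}
```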
Right, it says:
Type safety: The expression of type List needs unchecked conversion to conform to List&lt;String&gt;
meaning: we receive an argument with a type that lacks detailed specification, but we require such details on the left-hand side of the assignment. Whether or not the RHS value matches the LHS requirement cannot be checked by the compiler. The argument unspecified has another kind of legacy type: a raw type. To gracefully handle the transition from legacy code to new code with more complete type specifications, we only give a warning.
The same for null specifications:
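Analogously, the null-annotation snippet may have looked like this (a sketch; the parameter deliberately carries no null annotation, and the line number in the comment is taken from the message below):

```java
void take(String unspecified) {            // legacy type: no null annotation
    @NonNull String target = unspecified;  // line 41: unchecked conversion
}
```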
|line 41||Null type safety: The expression of type String needs unchecked conversion to conform to ‘@NonNull String’|
In both cases, raw types and types lacking null specification, there are situations where ignoring this warning is actually OK: the legacy part of the code may be written in the full intention to conform to the rule (of only putting strings into the list / using only nonnull values), but was not able to express this in the expected style (with type parameters / null annotations). Maybe the information is still documented, e.g., in the javadoc. If you can convince yourself that the code plays by the rules although not declaring to do so: fine. But the compiler cannot check this, so it passes the responsibility to you, along with this warning.
Tuning compiler messages
If you buy into null annotations, the distinction of what is reported as an error vs warning should hopefully be helpful out of the box. Should you wish to change this, please do so with care. Ignoring some errors can render the whole business of null annotations futile. Still we hope that the correspondence between compiler messages and configuration options is clear:
These options directly correspond to the messages shown above:
|problems||controlled by this option|
|lines 31,32||Violation of null specification|
|line 34||Conflict between null annotations and null inference|
|line 39||Unchecked conversion from non-annotated type to @NonNull type|
The compiler does an awful lot of work trying to figure out whether your code makes sense: definitely, or maybe, or maybe not, or definitely not. We just decided it should try a little harder to also explain its findings. Still, these messages are constrained to be short statements, so another part of the explanation is of course our job: to educate people about the background and the rationale for why the compiler gives the answers it gives.
I do hope you find the messages helpful, maybe even more so with a little background knowledge.
The next steps will be: what’s a good method for gradually applying null annotations to existing code? And during that process, what’s a good method for reacting to the compiler messages so that from throwing code at the compiler and throwing error messages back we move towards a fruitful dialog, with you as the brilliant Holmes and the compiler your loyal assistant Watson, just a bit quicker than the original, but that’s elementary.
Being a part-time release engineer for the Object Teams project I can only agree with every word Kim writes about the job, I wish I could hire her for our project 🙂
“Nobody needs to understand how the build works, they just need to push a button. That’s great. Until the day before a release when your build fails with a cryptic message about unresolved dependencies. And you have no idea how to fix it. And neither does anyone else on the team.”
That puts a sad smile on my face, and I’d like to add a little quality metric that may seem cruel for today’s build systems but might actually be useful for any software:
One extreme I experienced was a PDE/Build ant build, which I had to set to verbose to get any useful answer – but then I had to find the relevant error message buried deep in literally tens of megabytes of log output. It takes ages to browse such a log file. Other tools rank towards the other end of the spectrum, saying basically “it didn’t work”.
Why is the worst error message relevant? When you hit that worst message it’s close to saying “game over”. Especially when working on a build I’ve come to the point time and again where all my creativity and productivity came to a grinding halt and for days or weeks I simply made zero progress because I had no idea why that system didn’t work and what it expected me to do to fix the thing. Knock-out.
Obviously I hate that state when I make no progress towards my goal. And typically that state is reached by poor communication from some framework back to me.
I know people usually don’t like to work on improving error messages, but please, don’t think good error messages are one bit less cool than running your software on Mars. On the one hand we try to build tools that improve developers’ productivity by a few percent, and then the very same tool gives answers that bring that productivity down to zero. That’s – inconsistent.
I’m tempted to repeat the p2 story here. Many will remember the merciless dump of data from the SAT solver that p2 gave in its early days. Some will consider the problem solved by now. Judge for yourself: what’s the worst-case time a regular Eclipse user needs to understand what p2 is telling him/her with one of its error messages?
The intention of this post is not to blame any particular technology. The list would be quite long anyway. It’s about general awareness (big words, sorry 🙂 ).
Consider the worst case
Again, why worst case? Because the worst case will happen. And hitting it once is enough to easily cancel out all the time savings the tool otherwise brought you.
Framework developers, tool smiths: let your software communicate with the user and let it be especially helpful when the user is in dire need of help.
One small contribution in this field I’d like to share with you: in the OTDT, every error/warning raised by the compiler not only tries to precisely describe what’s wrong, but is also directly linked to the corresponding paragraph of the language definition that the current code violates. At least this should completely explain why the current code is wrong. It’s a small step, but I feel a strong need for linking specific help to every error message.
But first, the software has to anticipate every single error that will occur in order to produce useful messages. That’s the real reason why creating complex software is so challenging. Be it a build system or the “real” software.
Be cool, give superb error messages!