Undefined behavior (UB) in the C programming language is a regular source of heated discussions among programmers. On the one hand, UB can be important for compiler optimizations. On the other hand, it makes it easy to introduce bugs that lead to security issues.
The good news is that N3322 has been accepted for C2y, which will remove undefined behavior from one particular corner of the C language, making all of the following well-defined:
memcpy(NULL, NULL, 0);
memcmp(NULL, NULL, 0);
(int *)NULL + 0;
(int *)NULL - 0;
(int *)NULL - (int *)NULL;
This only applies when a null pointer is combined with a "zero-length" operation. The following are still undefined:
memcpy(NULL, NULL, 4);
(int *)NULL + 4;
The removal of this undefined behavior is not expected to have any negative impact on performance. If anything, the opposite is true: it removes the need for redundant checks in calling code, as the motivation below shows.
Motivation
The examples above are somewhat silly because they hard-code a NULL/nullptr constant. However, it is easy to run into this situation with a pointer that is only sometimes null. For example, consider a typical representation for a string with a known length:
struct str {
char *data;
size_t len;
};
An empty string would usually be represented as (struct str) { .data = NULL, .len = 0 }, with the data pointer being NULL. Now, consider a function that checks if two strings are equal:
bool str_eq(const struct str *str1, const struct str *str2) {
return str1->len == str2->len &&
memcmp(str1->data, str2->data, str1->len) == 0;
}
This implementation looks very reasonable at first glance. However, it exhibits undefined behavior if both of the inputs are empty strings. In that case, we will call memcmp(NULL, NULL, 0), which is undefined behavior according to the C standard.
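To make this concrete, here is a minimal sketch of how the UB is reached, building on the struct str and str_eq definitions above (empty_strings_are_equal is just an illustrative helper name):
bool empty_strings_are_equal(void) {
    struct str a = { .data = NULL, .len = 0 };
    struct str b = { .data = NULL, .len = 0 };
    // str_eq compares the equal lengths (0 == 0) and then calls
    // memcmp(NULL, NULL, 0): UB before C2y, well-defined afterwards.
    return str_eq(&a, &b);
}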
This kind of UB introduces the risk that the compiler will optimize away subsequent null pointer checks. For example, GCC will happily remove the dest == NULL branch in the following code, while Clang deliberately does not perform this optimization:
int test(char *dest, const char *src, size_t len) {
    memcpy(dest, src, len);
    if (dest == NULL) {
        // This branch will be removed by GCC due to undefined behavior.
        return 1;
    }
    return 0;
}
The correct way to write the str_eq function is as follows:
bool str_eq(const struct str *str1, const struct str *str2) {
return str1->len == str2->len &&
(str1->len == 0 ||
memcmp(str1->data, str2->data, str1->len) == 0);
}
The new code is correct, but worse in every other way:
- It increases code size, by requiring an extra check at each inlined call-site.
- It decreases performance, by redundantly checking something memcmp has to handle anyway.
- It increases code complexity.
At the same time, there is no useful way in which the C library can make use of this undefined behavior to provide a more efficient implementation. This is the kind of UB that benefits nobody, and should be removed from the language.
Null pointer arithmetic
The original proposal was focused on removing UB for memory library calls, but an early reviewer pointed out that this is not sufficient. After all, we also need to take into account how these library functions are implemented.
For example, let's consider a typical implementation for a memcpy-like function:
void copy(char *dst, const char *src, size_t n) {
for (const char *end = src + n; src < end; src++) {
*dst++ = *src;
}
}
This function exhibits undefined behavior when called as copy(NULL, NULL, 0), because NULL + 0 is undefined behavior in C.
To avoid this, and make the overall language self-consistent, we need to define NULL + 0 as returning NULL and NULL - NULL as returning 0. This also aligns C with C++ semantics, where this was already well-defined.
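The same pattern shows up in ordinary user code, not just inside the C library. The following sketch (str_count_spaces is a hypothetical helper, reusing the struct str type from above) iterates over a string with the usual begin/end pointer idiom; for an empty string, computing s.data + s.len is exactly NULL + 0:
size_t str_count_spaces(struct str s) {
    size_t count = 0;
    // For an empty string, s.data is NULL and s.len is 0, so the end
    // pointer is computed as NULL + 0. With N3322 this yields NULL,
    // the loop condition is immediately false, and the body never runs.
    for (const char *p = s.data, *end = s.data + s.len; p < end; p++) {
        if (*p == ' ') {
            count++;
        }
    }
    return count;
}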
Opposition
When this proposal was discussed at two WG14 meetings, the opposition didn't come from the direction I expected.
The most broadly controversial part of the proposal was to define NULL - NULL as returning 0. The reason for this is that when address spaces get involved (which are not part of standard C, but may be implemented as an extension), there may be multiple representations of a null pointer. Making sure that subtracting two "different" nulls still results in zero might require the generation of additional code, breaking the premise that this change is entirely free.
However, the most vocal opposition came from a static analysis perspective: making null pointers well-defined for zero-length operations means that static analyzers can no longer unconditionally report NULL being passed to functions like memcpy; they also need to take the length into account now. If an _Optional qualifier is introduced in the future, memcpy arguments would have to be qualified with it. GCC is considering the introduction of a nonnull_if_nonzero attribute to represent the new precondition.
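As a rough sketch of what that could look like, the declaration below marks the pointer arguments of a memcpy-like function as non-null only when the length argument is nonzero. The attribute name comes from the GCC discussion; the argument-index syntax shown here is an assumption and may not match the final design:
// Illustrative syntax: arguments 1 and 2 (the pointers) must be
// non-null whenever argument 3 (the length) is nonzero.
void *my_memcpy(void *dest, const void *src, size_t n)
    __attribute__((nonnull_if_nonzero(1, 3), nonnull_if_nonzero(2, 3)));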
After the seemingly negative discussion, I was somewhat surprised that the vote not only went strongly in favor of the change, but also came with a recommendation to implementers to apply the change retroactively to old standard versions. This means that, once compilers and C libraries have implemented the change, it should apply even without specifying the -std=c2y flag.
Compiler builtins
I work on the middle-end of the LLVM compiler toolchain. Being far removed from any "user-facing" parts of the compiler, I am generally not involved with standardization efforts.
The reason I got involved here at all is the specification for LLVM's internal memcpy intrinsic:
The llvm.memcpy.* intrinsics copy a block of memory from the source location to the destination location, which must either be equal or non-overlapping. [...] If <len> is 0, it is no-op modulo the behavior of attributes attached to the arguments. [...]
The llvm.memcpy intrinsic may lower to a call to the memcpy function, which is treated as a "compiler runtime builtin" here, even though it is ultimately also provided by the C library.
When used as a builtin, LLVM requires that both memcpy(x, x, s) and memcpy(NULL, NULL, 0) are well-defined, even though the C standard says they are UB. GCC and MSVC have similar assumptions.
Making memcpy(NULL, NULL, 0) officially well-defined removes one of the assumptions, while the memcpy(x, x, s) case remains for now. Allowing this was originally also part of the proposal, but was later dropped because it didn't fit well with the other changes.
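To illustrate where the memcpy(x, x, s) assumption comes from, consider how compilers typically lower aggregate copies. A sketch, using a hypothetical struct big: the assignment below is commonly compiled down to a memcpy-style intrinsic, and nothing prevents both arguments from pointing at the same object at run time:
struct big { char bytes[256]; };

void assign(struct big *a, const struct big *b) {
    // Compilers commonly lower this aggregate copy to a memcpy of
    // sizeof(struct big) bytes. If a caller passes a == b, the copy
    // sees identical source and destination pointers.
    *a = *b;
}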
In a weird turn of events, this change to the C standard came about because Rust developers kept nagging me about the mismatch between LLVM and C semantics.
Acknowledgements
This paper was a collaboration with Aaron Ballman, who also drove the discussion during the actual WG14 meetings. Special thanks go to David Stone, whose early feedback radically changed the direction of the proposal from memory library calls in particular to "zero-length" operations in general.