Within a company (at least in my experience) compilers/toolchains are changed infrequently, usually less than once a year. In this case, keeping the software building with -Werror is a heck of a lot easier than allowing warnings to creep in, then trying to fix it later. It ends up that “later” never happens. Using -Werror from the very beginning forces you to address warnings: either by silencing obviously false positives, or fixing legitimate problems.
A previous company I worked at had large C++ projects that built with 5000+ warnings. Nobody looked at them, obviously. Buried within them I recall one about a printf-style format string mismatching its arguments, which said something like “mismatching arguments will cause a crash if this code is executed.” I only noticed it buried in the list of warnings after debugging a segfault caused by that very line of code.
I feel like you're replying to the title and not the article. The article starts with a huge disclaimer saying that the author has no tolerance for warnings getting merged into their projects, but they have the warnings check in the CI pipeline. Doing that would totally prevent the kind of lax never-fixed errors you're talking about, while still allowing flexibility when the PR is in progress.
That’s fair, I’m responding more to the general aversion to using -Werror at all than the specific case of the OP. If I were writing open source software expected to be used by a wide variety of compilers and platforms I’d probably also not want to make -Werror the default.
As an anecdote, there may be one or two packages in openwrt that won't build by default because of a strange interaction between certain glibc headers and the behavior of _FORTIFY_SOURCE on some embedded platforms, but only on some versions of gcc.
-Werror, once placed in the build flags, eagerly passes the buck along to anyone building the software, not just CI or developers. I used to feel that -Werror was a good thing, but nowadays anything with CI on merge requests will tell me about any warnings, and someone who just wants to build the software shouldn't get penalized for having a different build environment: there's a reason these are warnings and not errors to begin with. Does this make sense?
I feel like every person who advocates for -Werror shows up to a codebase with a billion warnings once and then assumes that nobody can ever voluntarily keep their code compiling without warnings if they don’t have the build stop immediately so they can fix it.
I’ve worked on dozens of non-legacy codebases and in every single one we don’t have warnings last for long, because going from zero to a few warnings is annoying. I don’t want that spam in my logs and I definitely don’t want suspicious constructs in my codebase. For the ones that enable -Werror, by the way, I always disable it so I can work and clean things up before committing. It’s just basic hygiene and discipline, the same as formatting your code before sending a PR. If your team doesn’t have that then it has bigger problems.
It sounds like you don't want your compiler to stop on errors, but rather, you want your CI to recognize errors, and to prevent the merging of feature branches that have any warnings.
Maybe this is what you're saying, but to have CI prevent the merging of feature branches that introduce new warnings? That's my favorite setup for introducing lint or lint rules to a large codebase.
Yes, but no need for the added complexity of a delta of warnings against the base branch. The presumption here (given the GP comment) is that main has no warnings, since warnings are only allowed "during development", and should be cleaned up before production. So the CI only has to be intelligent enough to prevent merging (to main) of a branch with any warnings.
Professionally, the ones that did this had between 2 and 200 engineers working on them at any given time. Most code was a decade or so old, with some legacy components that were older or vendored (but in many cases were not touched much and may have had certain warnings disabled). iOS-focused, so Swift, Objective-C, C++ in Xcode.
In open source projects most of the ones I work on have a handful of regular contributors and a couple dozen one-off or other irregular contributions. In many cases we don't get all the warnings on every system that anyone uses but if someone goes "fix warnings for GCC 13 on Alpha" or whatever environment they're using we are more than happy to take their changes, and if it warns in CI or on any of our machines we'll either bring it up during PR review or submit a change soon afterwards to fix it. I'm too young to have projects more than a few years old but there's a decent number of other open source projects with a long history and many contributors that compile with no warnings on say their Linux CI and maybe a handful if you pass it through Apple Clang (since they don't test that much).
(Off-topic) Oh how I wish this were possible in the language I use at $dayjob. Swift does not let you silence warnings on a line by line basis. It’s easily one of the top 5 worst things about the language.
This would allow you to make a dialect where you call a function that Apple in their infinite wisdom has deprecated. Consider the tax this would impose on the ecosystem!
Are there companies which would allow or suggest employees or contractors just use whatever random toolchain they have installed?
I can understand the argument for open-source projects that aren't attached to a company, but otherwise? Maybe this is my bias from being in a subindustry centered on distributed binaries, which therefore has set standard toolchains (i.e. what the build machines use), but I'm still not seeing it. This seems like the more harmful practice than using -Werror.
Most companies I have worked at have a specific compile script that everyone uses and they don't mind if you use something else as long as you aren't causing problems for everyone else. I typically run pre-release toolchains to spot errors early for example, or sometimes I'll swap out the compiler to test things.
Obviously, what gets built and deployed to production is not built off my computer; it goes through the standard CI they have.
That's exactly why the author says to check for warnings in CI. Most warnings will be noticed and fixed by the dev before pushing, and the rare ones that get missed will be caught in CI before merging.
Any time I have to compile ffmpeg, the quantity of warnings flying up my screen fills me with a desire to "fix" a bunch of them and send a pull request. Never enough to actually get started on it though.
Most of the c++ I've written used the arduino build system, so an absurd amount of things that'd normally be errors or warnings are just permitted and expected.
This is better understood when writing websites. A website shouldn’t be tied to a particular browser vendor or a particular browser version. Browsers don’t have any mechanism similar to -Werror because it would be a “break if the user upgrades their browser” flag. (There is “use strict” but it doesn’t become stricter with browser upgrades.)
Similarly, let’s say we’re talking about an open source project that users may download and compile from source. A portable open source project shouldn’t needlessly break if you use a newer or different compiler.
So:
- Do make the source code as clean as you can, using all tools available, including -Werror. Putting -Werror in your continuous build should be fine too.
- When a user downloads your library and compiles it in order to use it, the make target (or equivalent) that they use shouldn’t compile with -Werror, because we don’t know which compiler they have or which warnings that enables.
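One common way to get both behaviors from the same build is to make -Werror an opt-in switch in the build system. A hedged sketch in CMake (the option name `ENABLE_WERROR` and target names are made up for illustration):

```cmake
# Developers and CI pass -DENABLE_WERROR=ON; users building from a
# source tarball get the default (OFF) and are never blocked by
# warnings their newer/older/different compiler happens to emit.
option(ENABLE_WERROR "Treat compiler warnings as errors" OFF)

add_library(mylib src/mylib.c)
target_compile_options(mylib PRIVATE -Wall -Wextra)
if(ENABLE_WERROR)
  target_compile_options(mylib PRIVATE -Werror)
endif()
```

The CI job then configures with `cmake -DENABLE_WERROR=ON ..` while a user's plain `cmake ..` stays warning-tolerant.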
What you really want for working on a large scale codebase is "don't allow changes that introduce new warnings". And with current tooling that's something you can only really enforce at CI level if at all, but it shouldn't be that way. I'd like to have the same thing early in the test-edit cycle - compiling with a newer compiler that adds more warnings shouldn't be an error, but a code change that triggers a new warning should be.
We use -Wall -Werror -Wextra -Weverything, plus we use clang-tidy with -Werror as well, plus we use ASan, TSan, MSan, UBSan, plus fuzzing (5+ different types of it), plus we update to the latest compiler on every release, so today it is clang-17, and we build every dependency from source, and always use hermetic builds (cross-compiling).
I don’t do -Weverything but generally most of the rest. I am in the maximum build hygiene camp, which means builds have to work across GCC and Clang, and x86-64 and ARM64, or they fail in CI. It finds quite a few compiler bugs as well as subtle code bugs.
werror is absolutely your friend. The examples given in the article are borderline strawmen.
Changing toolchains is a very rare occurrence because the subtle differences will almost always cause problems. When someone suggests changing toolchains, they owe everyone a VERY CONVINCING argument. After which the changed warnings will be the LEAST of your problems as you migrate.
Toolchain upgrades will cause new warnings to surface as the detection gets better. But you DO want to fix them, don't you? You ARE a professional, aren't you? You DO care about your product's quality, don't you?
After over a decade without werror, followed by a decade with werror, I'd take werror every time. Warnings exist for a reason, and in the case of C compilers, VERY GOOD reasons that you really should address.
At my workplace, we use werror, static analysis, various lints, detekt... everything that can issue warnings about your code quality. If any of them complain, you can't merge your PR, because we're professionals.
You can do that if you have full control over the toolchain.
For example I ran into this issue with the AWS C++ SDK ( https://github.com/aws/aws-sdk-cpp/ ). And then I had to go into several CMake files and remove Werror. What do I as a user care about warnings? Can I only use that SDK once I have fixed (probably more like silenced) all the warnings?
I’m guessing you haven’t been in a situation where the problems are apparent? Your advice probably works fine for small projects. You can upgrade the toolchain and fix all the new warnings in the same commit. Often there’s nothing to do.
It doesn’t work for a monorepo where the toolchain needs to be upgraded for many projects at once. Enabling each new warning (turning it into a build-breaking error) may require touching hundreds of files. Getting a toolchain upgrade done in one monster commit doesn’t work; you will inevitably break something and have to roll back, or be unable to commit it all due to merge conflicts with other commits.
I’ve been on a team that needed to be able to upgrade the toolchain in one commit and then turn on each new warning in separate commits. Having an explicit list of all the warnings that are turned on and then gradually updating it to be the equivalent of -Wall was the way to go.
In the 1990s I think it was recommended for at least C++ to use at least two different compilers to catch as many errors as possible or to get a more understandable error message.
Linters and static analysis did not really exist then.
Doesn't, uh, your build chain introduce a build-chain dependency anyway? Each compiler compiles differently, so if you care enough about warnings to have a zero-warning policy, don't you also care about it on every platform/build chain? I don't get this.
As someone in DevOps with very similar frustrations to those expressed in TFA, I think my main issue with it comes when the people maintaining the software (who added -Werror) aren't the ones spearheading the migration to a new Ubuntu LTS and therefore a new toolchain. That centralized strike-force migration team now has to either fix everyone else's piddly deprecations for them, or else have a way of temporarily patching out -Werror until that work can be handed back to the original maintainers.
Exactly. -Werror for the normal make target is problematic.
But -Werror for a dev target that e.g. CI uses to compile-test pull requests on known toolchains and distros that you explicitly intend to support is useful. The alternative would be tons of effort to skip -Werror but still have CI flag the pull request that introduces new warnings. Just having the build fail because of -Werror is far simpler and less fragile.
Yeah, use it with your CI or test suite. Also `-W error` for Python (or `-Wd` to at least surface deprecation warnings), and I'm sure there are equivalents for other toolchains/languages.
IMHO it's similar to the robustness principle/Postel's law: "be conservative in what you send, be liberal in what you accept". Your build should pass without warnings on your system, but your build system should allow warnings on your customer's system.
So, how have linux distros worked for the past 25 years? Many independent projects write commonly used libraries and/or tools in C, and then each distro takes those and builds them with a slightly (or significantly!) different toolchain ... obviously it has worked pretty well. The pressure to work for all reasonable toolchains is good for software quality, instead of depending on quirks of a very particular toolchain to mask some bugs or nonstandard behavior dependency.
You bump the version in gcc.spec in, say, Fedora rawhide. What happens when random package foo doesn't compile with it? Foo is fixed, or you skip that gcc version. Unless foo is very important and very hard to fix, you fix foo.
Distros already enforce a consistent toolchain. Arch, Debian, Fedora, Gentoo, etc.
A distro uses a consistent toolchain, but doesn't write most of the code. The projects writing the vast bulk of the code, like linux, git, firefox, gtk, qt, openssl, curl, etc, don't strictly control the toolchain, and it does vary between distros and distro releases.
Distros like Arch and Alpine keep the patching to a minimum, most packages need no patches. RedHat/Fedora and Ubuntu do patch significantly more. Debian is in the middle ... there was an infamous incident where they patched openssl to fix an uninitialized-memory warning, and caused a huge security issue for many people: https://www.schneier.com/blog/archives/2008/05/random_number... (yes, openssl sucks, but still.)
So this is why distros take their cleanup patches upstream. Distro patches are for real observed bugs that aren't fixed in an upstream release (or not in the desired major release branch), or to just disable -Werror in projects that don't know better.
Enable -Werror for the reproducible builds, leave it off for the rest. That should solve most of it.
The biggest and rather frustrating issue left is that warnings are heavily tool-dependent, not just in that new tools will produce new warnings, but in that different tools will produce incompatible warnings. The workaround that makes the warning go away in one tool will create a warning in the other. You can end up with a lengthy back and forth before you find something that makes all the tools happy, and you frequently run into situations where you are really just pleasing the tools, not fixing any actual issues.
In open source you don't get that option. Where you have a known set of build tools it is easy to use -Werror, and if a new toolchain is added someone gets assigned to clean it up.
This only works if the maintainer cares enough. I’ve encountered maintainers who, bizarrely, are strict with “no warnings” but only when it comes to their preferred toolchain. Me: “Here’s a patch to get your project to compile cleanly using the Green Hills compiler targeting MIPS.” Maintainer: IDGAF. Rejected.
I've not been in the situation yet, but it makes sense to me.
Accepting a patch like that can be a significant commitment of time and brain power. "What exactly does this fix? Does it actually fix it?". Such patches can be a lot more complicated than a straightforward new feature.
And then there's that oddball compilers may not support everything gcc and clang do, so making them happy may actually mean sacrificing something.
That is a different situation. In commercial software you know your compiler, and (if you care) you assign someone to fix the warnings on every upgrade.
In open source you never know what compiler someone will use. You can probably say gcc 3.0 isn't supported, but someone might have just built the latest master clang with a new warning added.
Warnings spilling over from library dependencies can be mitigated easily by including their headers via -isystem instead of -I.
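For reference, the difference is just which include flag the dependency's headers are passed with. In CMake that's the `SYSTEM` keyword on `target_include_directories` (the target and paths here are illustrative):

```cmake
# Headers pulled in via SYSTEM are passed to GCC/Clang with -isystem
# instead of -I, so warnings that originate inside those headers are
# suppressed while your own code still warns normally.
target_include_directories(app SYSTEM PRIVATE third_party/include)
target_include_directories(app PRIVATE src/include)
```

That way -Werror only bites on diagnostics you can actually fix.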
For code that is not libraries, there is nothing strange about having dependency on a specific toolchain, preferably two supported options.
Besides, warnings or not, devs using different toolchains than CI is just inviting the works-on-my-machine situations the rest of us left behind in the last century.
It's good clickbait. I think I was tired and wanted to get outraged (well, what counts as outraged to me, meaning shaking my head disapprovingly). I clicked on it, then managed to agree with every point, and managed to learn something new from the "Refined Warning-As-Error Control" paragraph. Thank you, author!
> This flag adds a toolchain dependency to your project. Newer compiler versions (or alternative compiler vendors) are likely to generate new warnings, making the build fail.
IMO most large projects will anyway have multiple toolchain dependencies. Ever tried to compile Chrome using a different compiler/build system? And why would different contributors use different toolchains anyway? If you want to contribute, just use the same tools everyone else on the project uses.
When upgrading compilers, incompatible warnings are going to be the least of your worries. So what is the problem that the author wants to solve by not using -Werror? He agrees that warnings should not be merged. So why not stop them right when they're written?
Google used to compile development builds with clang by default (better tooling and error messages) but production builds with GCC (better optimization). I don't know if they still do that.
Blacklisting specific warnings is a bad idea. Instead you should `-Werror` all warnings by default and then whitelist specific ones with `-Wno-foo` or `-Wno-error=foo` if you don't care about them.
To handle differences in GCC/Clang flags you just have to detect the compiler (trivial with CMake), or dictate the compiler.
But only do that for your developers and CI. Don't use `-Werror` for downstream users of your project.
Say I've decided in my project that shadowing is evil. Why wouldn't I turn on -Werror=shadow? Where is the bad idea?
Including downstream. I don't want people writing me patches that trigger my pet warnings.
This is entirely compatible with the idea of locally only turning on -Werror, and then disabling certain warnings that you don't intend to fix, without foisting -Werror onto downstream.
You can still have non-negotiable warnings that you turn on rather than off, and those do go downstream too.
Sure, other than that the patch compiles fine on their end, and the diagnostic is not something you are willing to disable, or address with an additional code change on top of their patch.
It's better if the deal-breaker diagnostics are on for everyone, so that everyone is working in the same programming language dialect.
It'd be cool if I could have a code review view that's instrumented with output from static analyzers. I know some places have this (FB/Meta?) but I don't think it's a very "out of the box" kinda setup.
For my own code I use "-Wall -Wextra -Werror" + a handful of warnings that are not in this set (like -Wsign-conversion) - and a similar draconian warning set on MSVC, both when building locally during development and on CI.
But since my libraries don't ship with build system files those build process decisions are my own and don't "leak" to the users of my libraries (e.g. it wouldn't even enter my mind to dictate a specific build system or build settings on my library users, I know how bad it is to deal with overengineered cmake files of other people).
When integrating other libraries I usually also just throw away the 1000+ lines of cmake files they come with and replace them with a few dozen lines of cmake script of my own (I wish that would be an exaggeration or a joke, but it's not - especially when it comes to "professional" C++ libraries coming out of Khronos or Google).
Only in your controlled environment. When you publish something and it gets picked up by a distribution, it will fail. Sometimes even on a bug in a specific compiler version. Ubuntu had a GCC which sometimes flagged a `write()` result as unused and you couldn't ignore/dismiss it in any way, so I had to carry patches for some projects.
If the project is an “executable” then the toolchain dependency should be explicit. If it’s a “library” then it should be implicit. But it’s really not hard to support Clang + GCC + MSVC. And if new toolchains introduce new warnings they should be fixed.
That said, my preference is for “dev” builds to not have warnings as errors, only “retail” which gets built by CI.
Third party libraries almost always have to be compiled without warnings as errors because almost all open source code is rife with warnings.
> If the project is an “executable” then the toolchain dependency should be explicit.
Cross-platform makes this fun. I work on a project that, in production, runs on an embedded Linux box. With a couple of features disabled, I can do development work just fine on my MacBook. Unfortunately, the embedded target uses a relatively old and crusty version of GCC, and OS X has a newer version of Clang. We generally shoot for "zero warnings on production builds", but warnings in development on OS X are... okay. We pay attention to them and have occasionally benefitted from warnings that Clang provides and the older GCC does not, but some of the warnings come from some (admittedly questionable, but correct at runtime) casts that a 3rd party library's header makes. Definitely not worth the effort to try to get upstream to fix those headers just to make OS X Clang happy.
I do a lot of Linux + macOS + Windows support. Yeah third-party libraries are effectively a lost cause. It's ok if it's limited to cpp files. If it's in headers it's annoying.
We vendor all third-party dependencies but would probably bias towards fixing the warning. It just spirals out of control too fast if you don't. Ancient toolchains for embedded or similar is definitely tough.
If you can use modern toolchains then the combination of warnings across GCC + Clang + MSVC is probably a win.
The issue I mentioned happened in a range of GCC versions. Both lower and higher fixed the issue. It's not a new warning either, it was an existing one raised by mistake. You can support clang+gcc+msvc and still run into that when someone compiled with gcc one point release lower (not higher) than yours. So yeah, it's pretty hard in real life.
If distros are compiling with an unsupported toolchain then its on them if it fails. This is similar to how some distros will build your application with unsupported versions of libraries resulting in crashes.
Every project is being compiled with an unsupported toolchain somewhere. The creators don't have every possible hardware and software combination, or the time to test them, and they're not time travellers, so future distributions shipping the same version of the project are by definition officially unsupported. That's just the reality we have to deal with.
In edge cases, we get maybe 2 versions of GCC to choose from in a distribution, but nobody's special enough to get their own just because it's officially supported. If your project compiles and tests cleanly, it gets built with v1.2.3 like everything else.
>but nobody's special enough to get their own just because it's officially supported
Why? Distros should run the actual build without trying to swap out the toolchain for something else. If a new compiler version causes a crash or performance regression, that will look bad on the original developer.
It works the same with open source. If your open source library is always built with clang 16 then it will never fail to compile. If people want to use an unsupported toolchain they can figure out how to get it working themselves.
As an author of open source software, you must expect people to use their system's toolchain. You can define minimum versions, sure, but if you use Clang 16, you really ought not to break when a user tries to compile with Clang 17, unless your build system actually downloads and uses the right Clang version automatically. Most of us aren't using such a build system.
>you must expect people to use their system's toolchain
Why? I don't expect people to build against their system libraries.
>unless your build system actually automatically downloads and uses the right clang version automatically
Nix, Bazel, Buck, etc. all support defining the toolchain for your build. Reproducible builds are useful. You are supporting a known configuration. Crashes can all be collected and easily analyzed. You can be sure you have a properly compiled version by verifying the hash.
>Most of us aren't using such a build system.
That's okay. You can just ingest a prebuilt artifact if you aren't capable of tying it into your build system.
Because that's what people do. One of the big reasons people build open source software is to have fun and work together with other hobbyists, some of whom don't even write software professionally at all. If you want open source to be fun, you really want to keep things simple and keep the entry barrier low. Otherwise it'll just feel like an unpaid job.
Some people want to develop open source software in a professional manner. Having to support the output of a single toolchain is the definition of keeping things simple.
To be honest I don't understand what your goal is. Use one toolchain to produce the binaries you distribute, sure. Only provide support for those official binaries or binaries compiled with the officially supported toolchain, why not. And use -Werror with that standardized toolchain in CI to ensure builds with warnings don't get committed, I think that's great. But why are you hell bent on making it harder for your users to build from source?
>But why are you hell bent on making it harder for your users to build from source?
Building from source would use the toolchain you designed the source code for meaning that it will always succeed. If someone wants to fork the project and add support for a different toolchain they are free to do so. If they want to get rid of the requirement to have no compiler errors they are free to remove -Werror or disable the new warnings.
Honestly the only code base I've seen which does this is Google's Chromium stuff; by default, it will download a pre-built Clang as well as a Debian sysroot. IMO, it's completely insane and has always been a giant pain in the ass. Maybe it's warranted for something as complex as Chromium but I really don't think it's the right choice for some 10kLOC FOSS project.
The two options of reproducible builds and prebuilt artifacts is still a significantly limiting situation. There are plenty of situations where the desired prebuilt artifacts aren't available, and you'd prefer to not have 5 clang-s and 7 gcc-s installed. There can be problems with updating toolchains, sure, and it's reasonable to write software assuming a specific one, but -Werror adding a very unnecessary roadblock is just not nice.
>There are plenty of situations where the desired prebuilt artifacts aren't available, and you'd prefer to not have 5 clang-s and 7 gcc-s installed
You don't need to install a toolchain to use it. The build tool should handle everything for you and should handle clearing caches of unused toolchains that are no longer needed.
>but -Werror adding a very unnecessary roadblock is just not nice.
Warnings should be looked at. If the errors are not important you can add flags to silence them.
> You don't need to install a toolchain to use it. The build tool should handle everything for you and should handle clearing caches of unused toolchains that are no longer needed.
I don't want a dozen clangs and gccs wasting disk space (& increasing initial build times) regardless of who manages them! And I don't mean one project having a bunch of toolchains (I pray that never happens), I mean having dozens of random projects on my computer, most having been built just once and which won't be touched for an indeterminate amount of time.
> Warnings should be looked at. If the errors are not important you can add flags to silence them.
Yes, developers should look at them (and developers can look at them even without -Werror! or, as said elsewhere, it could be enabled conditionally). Users won't get much out of them, other than being annoyed at the project if they prevent the project from being built.
>I mean having dozens of random projects on my computer, most having been built just once and which won't be touched for an indeterminate amount of time.
You don't have to persist all of the build time dependencies forever. It can redownload them if you need to rebuild one of them.
But at which point would they stop being persisted? Surely not right after the build completes, so... requiring manually running some "ok now clean redownloadable dependencies" command (no one's gonna bother, and at that point a regular clean makes more sense)? Adding a cron job to my system (please don't)? On some other build command invocation after some time (will likely never happen)?
Most of your suggestions here make sense under the assumption that the project in question is one whose development I participate in/with some regularity engage with, but that simply isn't the case for most projects a given person may have some desire/need to build from source.
>But at which point would they stop being persisted?
You can either manually run the cleanup after the build or when you need free space. Alternatively a systemd timer that deletes old stuff that hasn't been accessed in weeks would work too.
The same issue exists even if only the system toolchain is used. There will be gigabytes of dependencies or object files that aren't needed taking up space.
This is a problem of all dependencies. Toolchain dependencies are not special.
> There will be gigabytes of dependencies or object files that aren't needed taking up space.
Among the couple dozen things I've got built from source, most have their object files sum up to less than the 117MB of a libLLVM-15.so.1, and if not, are only slightly larger (200-300MB). (and, as for the topic in question, maybe only one downloaded its own toolchain, and everything else freely allowed me to give a local gcc or clang in 'CC='/'CXX=').
I think a build of all targets of QEMU is the only significantly larger thing I have (1.9GB of binaries, 2.1GB of object files).
> Alternatively a systemd timer that deletes old stuff that hasn't been accessed in weeks would work too.
Besides being awful (why does a build system even need to know about systemd, much less make a semi-permanent addition to it?!), that also would fail if I moved the build directory somewhere else (which, granted, would probably mean a clean would be needed to rebuild, but I will not care about that when I'm moving all of my git clones from one drive to another).
I pretty frequently argue that memory safety (etc) outcomes in modern C++ are pretty comparable to Rust, but that’s only true if you’re throwing clang-tidy, -Werror, -Wall, -Wpedantic, and CI under ASAN, MSAN, UBSAN, TSAN.
That’s pretty close to rustc.
And I get that you can't throw that at a giant codebase in one day, and I get that dev builds probably drop -Werror. There are realities.
But new stuff not under clang-tidy, cppcheck, and PVS if you're feeling spendy? People do that?
That's simply not true. All of those tools are regularly used by large projects like chrome and have been for years. Memory safety continues to be the largest source of issues for them in spite of that. Sanitizers and other modern tools are a fantastic improvement over the old way of doing things, but they don't catch everything and we shouldn't pretend they do.
-Werror is the least of your problems if you are compiling older C code in a newer GCC-based toolchain. Seemingly as a matter of policy, GCC has started playing very fast and loose with undefined behavior, treating it as a license to emit any code it wants or none at all, regardless of what most people would say the programmer "obviously" intended.
There's a good chance your old code will compile if you don't use -Werror, or even if you do, but whether it runs the way it used to is a different question entirely.
> Seemingly as a matter of policy, GCC has started playing very fast and loose with undefined behavior, treating it as a license to emit any code it wants or none at all, regardless of what most people would say the programmer "obviously" intended.
But isn't that exactly what the C standard has said Undefined Behavior is, for as long as it has existed?
Sigh. Yes, but here on Planet Earth, a lot of legacy code was (irresponsibly) written under the assumption that the contemporary compilers wouldn't change their behavior in the future.
Also here on Planet Earth, performance optimization has become much less important than not breaking older code that previously worked, but nobody told the GCC devs.
Never change, HN. Keep working toward a Utopia that never was and never will be. And keep downvoting helpful hints indicating additional things to watch out for in the context of the article.
> Sigh. Yes, but here on Planet Earth, a lot of legacy code was (irresponsibly) written under the assumption that the contemporary compilers wouldn't change their behavior in the future.
Why aren't they using the old version of the compiler? Keep your unoptimized, ill-formed code if you want, while everyone else can benefit from the newest features. You can't have your cake and eat it too. Either you want to use the new stuff and you'll have to adapt your code, or you don't want the new stuff and you can keep using your code as is.
> Keep your unoptimized, ill-formed code if you want, while everyone else can benefit from the newest features.
Sadly, that's the dilemma many embedded developers eventually have to face. I'd personally like to use modern C++ features when working on a legacy codebase that I inherited. Even if I don't want or need that sort of thing, newer compilers are also better at flagging potential errors and UB (hence the article). Overall, CPUs are much faster now, and memory is much cheaper, so I can afford to trade off a bit of overhead for better code quality and engineering discipline.
And yes, newer compilers are also better at optimization, but that's literally the last thing I care about in the embedded business. I'm not targeting 8051s and 8-bit AVRs anymore.
Unfortunately, what I can't afford is the time needed to review 200K+ LOCs to find all the hidden gotchas that will be exposed by newer, more aggressive compilers. There may be bugs that should have been fixed years ago, but that never surfaced. Some will trigger new warnings (again, hence the article) but will all of them be flagged?
My whole point is that dealing with a broken build due to -Werror is so far down in the noise it's not even worth mentioning. If I fix the issues responsible for the new error messages and walk away thinking my job is done, well... maybe this is my lucky day, and maybe it isn't.
No need to be this passive-aggressive. There is no need to break old-code compatibility while adding features to the compiler; this is how Visual Studio C++ (and most certainly Intel C++) is and always has been: no surprises and excellent code quality.
The previous poster just asked a question; not everyone deals with C code, or with old C code.
I agree there's sometimes a certain lack of pragmatism, and that UB is used as the ultimate argument that trumps all arguments. But on the other hand, it really is UB, so that seems more of an argument for "we want a migration path" rather than "never change it until the end of time".