“Clang will now more aggressively use undefined behavior on pointer addition overflow for optimization purposes.” https://github.com/llvm/llvm-project/commit/c2979c58d49b
A whole lot of non-exploitable bugs may become exploitable pretty soon.
=> More informations about this toot | More toots from comex@mas.to
@comex Why do compiler devs (and standards writers) keep doing this kind of thing? Does it really make non-buggy code significantly faster?
=> More informations about this toot | More toots from azonenberg@ioc.exchange
@azonenberg @comex yes, no branch is faster than a branch.
But also, the unexploitable bug had manifested as a warning (comparing signed to unsigned) pretty much forever, and "a + unsigned b < a" is a really strange way to ask whether "a < a - unsigned b", which, if you assume you never want to overflow a pointer, is a really remarkably buggy thing to ask.
=> More informations about this toot | More toots from funkylab@mastodon.social
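To make the pattern funkylab is describing concrete, here is a sketch (my illustration of the common idiom, not code from the commit or the thread): a bounds check written in terms of wrapped pointer arithmetic relies on behavior that is undefined, so the optimizer may now delete it, whereas a check written on lengths cannot overflow at all.

```
#include <stddef.h>

/* Buggy idiom: "did p + len wrap past p?" Pointer-addition overflow is UB,
   so the compiler may assume p + len >= p and fold the first clause to true.
   Forming p + len more than one past the end of the object is UB anyway. */
int has_room_buggy(const char *p, const char *end, size_t len) {
    return p + len >= p && p + len <= end;
}

/* Safer: ask how much room is left and compare lengths; nothing can wrap.
   Assumes p and end point into the same object with p <= end. */
int has_room_ok(const char *p, const char *end, size_t len) {
    return len <= (size_t)(end - p);
}
```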
@azonenberg @comex I don't know about this specific case, but commonly this kind of optimization allows the compiler to, for example, automatically vectorize a loop that's written in an idiomatic source style. So you can write the code so it looks readable, and the compiler turns it into AVX-512 that runs 20x faster than the baseline version.
=> More informations about this toot | More toots from eqe@aleph.land
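eqe's point is usually illustrated with the signed-integer case rather than the pointer case; here is a sketch (my example, with made-up function names): because signed overflow is undefined, the compiler may assume the induction variable never wraps, which gives it a known trip count it can use to unroll and vectorize.

```
/* Stride-2 loop with '<=': if signed overflow wrapped, 'i' could wrap for n
   near INT_MAX and the loop might never exit. Because overflow is UB instead,
   the compiler may assume the loop terminates after n/2 + 1 iterations, a
   trip count it can use to unroll and vectorize. */
void add_one(int *dst, const int *src, int n) {
    for (int i = 0; i <= n; i += 2)
        dst[i] = src[i] + 1;
}
```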
@comex This needs to be a warning at least. Also shout-out to clang optimizing crypto libraries so they're no longer constant-time.
=> More informations about this toot | More toots from gudenau@fosstodon.org
@gudenau @comex it has been a warning forever.
=> More informations about this toot | More toots from funkylab@mastodon.social
@gudenau @comex and I'm not sure, but "I use a high level language whose model of the world explicitly says that execution duration is not an observable. My constant-time implementation isn't constant time" is the most "I'm infosec, I don't need to understand how things actually work" thing I keep reading when it comes to complaining about compilers.
=> More informations about this toot | More toots from funkylab@mastodon.social
@funkylab @comex It was taking math written to avoid branches and optimizing it to use branches. That's kind of bad.
=> More informations about this toot | More toots from gudenau@fosstodon.org
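The kind of code gudenau means is typically a masked, branch-free select; a sketch (my illustration, not the library code in question): C only promises the returned value, so an optimizer that recognizes the idiom is allowed to compile it back into a compare-and-branch or a conditional move.

```
#include <stdint.h>

/* Intended constant-time select: returns x when flag is 1, y when flag is 0,
   using a mask instead of a branch. Nothing in the C standard forbids the
   compiler from recognizing this and emitting a branch or cmov instead. */
uint32_t ct_select(uint32_t flag, uint32_t x, uint32_t y) {
    uint32_t mask = (uint32_t)0 - flag;   /* 1 -> 0xFFFFFFFF, 0 -> 0 */
    return (x & mask) | (y & ~mask);
}
```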
@gudenau @comex no, that's the thing - if you need constant computation time, and are using C or C++, you're really simply using the wrong language. There's red tape all over the language specs that says "don't do that; the language only guarantees the result, not how you get there", explaining the execution & memory model, and people trying to be clever by finding some tricks the compiler doesn't yet know how to rewrite are simply gambling. Also, constant time is an extremely fragile claim
=> More informations about this toot | More toots from funkylab@mastodon.social
@gudenau @comex … on computers built in the last 30 years. The same instructions take different amounts of time on the same hardware depending on the data, and what takes how long can change with the next microcode revision for the same CPU, or at the latest when you switch to the next processor generation.
If your constant-time implementation doesn't even deal with the C model, in which only results matter, I'll be hard to convince that the conditional memory access instructions that CPUs have had for
=> More informations about this toot | More toots from funkylab@mastodon.social
@gudenau @comex … decades now will not instantly leak information through timing.
Leaving aside the question of which real-world scenario actually calls for a constant-time algorithm rather than one run in a secure enclave with a hardware-timer-enforced constant (or randomized) total run time; I'm sure such a scenario exists in practice somewhere beyond the software I'm aware of.
=> More informations about this toot | More toots from funkylab@mastodon.social
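A sketch of the timer-enforced total run time funkylab mentions (my illustration, using a POSIX OS timer as a stand-in for a hardware timer; the function name and 2 ms budget are made up): do the work, then sleep until a fixed absolute deadline, so the externally observed latency no longer depends on the data.

```
#define _POSIX_C_SOURCE 200809L
#include <time.h>

#define BUDGET_NS 2000000L   /* placeholder budget; must exceed worst-case runtime */

/* Run 'work', then block until an absolute deadline, so the caller-visible
   latency is (roughly) the same regardless of how long 'work' actually took. */
void run_with_fixed_latency(void (*work)(void *), void *arg) {
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_nsec += BUDGET_NS;
    if (deadline.tv_nsec >= 1000000000L) {   /* normalize into tv_sec */
        deadline.tv_sec += 1;
        deadline.tv_nsec -= 1000000000L;
    }
    work(arg);
    /* TIMER_ABSTIME: sleep until the deadline, not for a relative duration. */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
}
```

As the following posts note, this only hides timing from observers; it does nothing about power or cache side channels.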
@funkylab it is funny you use the sentence about not needing to understand how things actually work because in how things actually work, the high-level language that preserves “constant-time”, as the property is called, doesn't exist, and while OpenSSL developers appear to have the energy to produce assembly versions for all the targets on which one wants to execute cryptographic primitives, many other developers do not. If you factor in the fact that the protocol that uses the crypto primitives must be implemented in “constant time” too, the whole is always at least partly written in a high-level language, and the language in question never understands constant time or secrets that mustn't be copied willy-nilly. This is how things actually work.
=> More informations about this toot | More toots from void_friend@tech.lgbt
@void_friend I appreciate you pointing that out! Yes, you're right. Any cryptographic protocol that's only secure if you can make a machine that gives no timing guarantees exhibit a specific timing behavior is of course fundamentally broken for real-world use.
Luckily, you can often bound the amount of information leaked on real-world machines. I already mentioned hardware timers that delay completion to remove any mutual information between runtime and data. That doesn't make the...
=> More informations about this toot | More toots from funkylab@mastodon.social
@void_friend system safe against e.g. power usage side channels. But in practice, against attackers without physical access, doing the cryptography and then waiting until a defined time has passed is better than trying to shoehorn C on a superscalar, speculatively executing CPU with conditional moves into constant-time computation.
These kinds of computers are not built for constant-time computation, but the opposite. They are, however, built with reliable timers that allow constant latency.
I'm hence
=> More informations about this toot | More toots from funkylab@mastodon.social
@void_friend … often confused why people mention constant-time computation on such machines, where it might be both hard to achieve reliably and no better than using a timer to erase the externally observable computation time altogether.
That being said, same-machine users do have cache-usage, frequency-scaling, and scheduling-freedom (and of course Spectre-style) side channels. But in those cases C code won't save you. Use the secure platform elements, as mentioned. They can make guarantees!
=> More informations about this toot | More toots from funkylab@mastodon.social
@comex
Is this optimizing the case where the pointers are the same? Why would you write code where the pointers are the same? If it's the same pointer, it reduces to (unsigned < 0), which is, in fact, always false.
If the pointers are different this doesn't make any sense.
=> More informations about this toot | More toots from resuna@ohai.social
@comex Make it stop! (I know, it won't ever stop.)
=> More informations about this toot | More toots from phf@mastodon.acm.org
@comex but for real, how many applications would suffer an unacceptable performance degradation if the pointer overflow sanitizer was left turned on?
=> More informations about this toot | More toots from regehr@mastodon.social
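For context on the sanitizer regehr is asking about: Clang's UBSan includes a pointer-overflow check (my understanding; verify against the Clang documentation), so the UB form of the check can at least be flagged at runtime. A minimal reproducer sketch:

```
/* Build roughly like:
       clang -fsanitize=pointer-overflow ptr_overflow.c && ./a.out
   UBSan should report the overflowing pointer addition below. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    char buf[16];
    char *p = buf;
    size_t huge = SIZE_MAX;        /* offset large enough to wrap the address */
    if (p + huge < p)              /* UB: the check the optimizer may now delete */
        puts("wrapped");
    return 0;
}
```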