periodic reminder for infosec folks: stop deciding things are done badly or "insecure" outside of the context of a threat model
it's disingenuous and irresponsibly ignores that security and cryptography are fundamentally about balancing risk tolerance and risk abatement
=> More information about this toot | More toots from ktemkin@provably.online
@ktemkin ssh is insecure because it allows for remote code execution
=> More information about this toot | More toots from tay@tech.lgbt
@tay @ktemkin in a way that's what multiple compliance checklists say 😂
=> More information about this toot | More toots from viraptor@cyberplace.social
@viraptor @tay @ktemkin
Sad but true.
I struggled with countless checklists asking for "no remote access" for devices without a keyboard or a screen
=> More information about this toot | More toots from realn2s@infosec.exchange
HTTP is insecure because it allows for remote code execution.
@tay @ktemkin
=> More information about this toot | More toots from mkj@social.mkj.earth
@ktemkin Marginalized people: Signal's requirement for a phone number is a deal breaker for me, as it exposes me to tons of risks
Infosec people: But have you considered $Alternative_Service has AES implemented slightly wrong?
=> More information about this toot | More toots from antonia@estrogen.network
@antonia @ktemkin a phone number is required only for registration; you aren't required to share it with people anymore, since you can share usernames instead.
I agree it typically isn't the AES being done wrong that matters.
It's the terrible code smell from that being a problem :p
Or the marketing claims of the app
That or the business entity that owns the chat app 😱
=> More information about this toot | More toots from risottobias@tech.lgbt
@ktemkin oh totally. The worst is when they get press for like “THEY CAN MITM THIS BANKING APP!!!1!1!!!”
and the journalists just kinda don’t get the fact that you’d need to take over the fucking CA and swap their certificate for yours first
=> More information about this toot | More toots from re@fuzzies.wtf
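For context on why that headline needs an asterisk: a TLS client verifies the server's certificate chain, and hardened apps often additionally pin the expected certificate, so a would-be man-in-the-middle needs the real certificate (or the CA's signing key), not just a privileged network position. A minimal sketch of pinning in Python; the hostname and fingerprint below are placeholders, not real values:

```python
# Sketch of certificate pinning: refuse any TLS peer whose leaf
# certificate doesn't match a known SHA-256 fingerprint, on top of
# the normal chain and hostname checks.
import hashlib
import socket
import ssl

PINNED_HOST = "bank.example"      # placeholder hostname
PINNED_SHA256 = "00" * 32         # placeholder fingerprint (hex)

def connect_pinned(host: str = PINNED_HOST, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()   # normal chain + hostname validation
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)   # leaf cert, DER-encoded
    if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint mismatch")
    return sock
```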
@ktemkin
The word “security” independent of a threat model is like the word “make” independent of what you’re supposed to be making
A table? A cake? Peace?
=> More information about this toot | More toots from re@fuzzies.wtf
@ktemkin
I think there are two different categories here. System design needs to be evaluated in the context of a threat model, yes (and a lot of what gets called a threat model is at best a colloquial approximation of actual thinking), but basic vulnerabilities, whether that means parser and state machine issues, memory issues, or incorrect implementation of a chosen set of cryptographic primitives, all qualify as "done badly" in most cases and as insecure in the majority of foreseeable threat models if they're in reachable code.
"Has an open port connected to the internet" implies a minimum set of things that must be accounted for in a threat model, as is "supports messaging between users".
=> More information about this toot | More toots from dymaxion@infosec.exchange
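To make the "parser and state machine issues" category concrete, here's a minimal sketch in Python using a made-up record format (a 2-byte big-endian length prefix, then a payload). Trusting the attacker-controlled length field is the classic bug; in a memory-unsafe language the same pattern becomes an out-of-bounds read, and even in Python it silently desynchronizes whatever protocol state machine sits on top:

```python
# Hypothetical wire format: 2-byte big-endian length, then payload.
import struct

def parse_record_buggy(buf: bytes) -> bytes:
    (length,) = struct.unpack_from(">H", buf, 0)
    # Trusts the declared length: if it exceeds what was received,
    # Python just returns a short slice, and a parser looping over
    # records now reads from the wrong offsets (in C, this same
    # pattern is an out-of-bounds read).
    return buf[2:2 + length]

def parse_record_checked(buf: bytes) -> bytes:
    if len(buf) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">H", buf, 0)
    if len(buf) - 2 < length:
        raise ValueError("declared length exceeds received data")
    return buf[2:2 + length]
```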
@dymaxion threat models still apply to all of these things; they're literally what you use to determine where to spend the limited time you have for security (whether that's training coders, auditing code, choosing what to harden, etc etc)
folks can lecture for days about how parser bugs creating weird machines is a serious threat, but if the parser is parsing some saved configuration flash on my digital stylus, chances are the worst that could come of any attack is a bricked stylus.
if you've spent your limited time and energy making sure that's ultra-audited because it's a parser, you're taking time away from things like "realizing that the auth token included in the GET requests for the firmware updater actually also has permissions to fetch and overwrite other products' files"
=> More information about this toot | More toots from ktemkin@provably.online
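For the failure described at the end of that toot, the missing piece is a scope check on the token itself, so a leaked updater token can't touch other products' files. A sketch of that check follows; every name here is hypothetical rather than taken from any real product:

```python
# Hypothetical authorization check for a firmware-update service:
# a token grants access only to the product line it was minted for,
# and only the verbs it was minted with.
from dataclasses import dataclass

@dataclass(frozen=True)
class UpdateToken:
    product_id: str    # product line this token was issued for
    can_write: bool    # updater tokens should usually be read-only

def authorize(token: UpdateToken, product_id: str, write: bool) -> None:
    if token.product_id != product_id:
        raise PermissionError("token not scoped to this product")
    if write and not token.can_write:
        raise PermissionError("token is read-only")

# The bug described above is the absence of this check: one ambient
# token that would pass authorize() for every product and every verb.
```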
@ktemkin
We talk about these things because we have spent literally the last twenty years looking at threat models and at the failure of overworked dev teams to build good code with bad tools. It will be an amazing victory for the community when developers have to actually design the bugs that fuck them over. And no, the correct way to fix these issues has never been to write bad code and then try to audit it, obviously.
Yes, in the context of each individual program, the threat model wins. In the context of the entire industry, this is not how progress is made.
=> More information about this toot | More toots from dymaxion@infosec.exchange
@dymaxion @ktemkin "Ambient attacks against open ports exposed to network" needs to be considered an implicit part of any threat model for software/devices designed to be connected to a network. It's not optional.
=> More information about this toot | More toots from dalias@hachyderm.io
@dalias @dymaxion I don't generally subscribe to statements that ignore nuance.
While I accept that there's generally some responsibility to prevent machines from themselves turning into threat vectors (e.g. it's not great to have your iot device be easily made part of a botnet), I am also not going to suggest there's an oversized onus on the authors of e.g. a Thread flashlight to make sure their little uC (one that's too slow to render its own configuration UI, and that could last maybe five minutes as a slow-as-hell active threat before its battery ran out) is protected from the other devices on the theoretically-bridgeable Thread network
especially when the time is better spent e.g. understanding that the best solution to most problems that could include many devices is to improve the UI so people stop using insecure defaults
this is why threat modeling is important and nothing is just an implicit "responsibility" with its priority fixed to '1'
=> More information about this toot | More toots from ktemkin@provably.online
@ktemkin
One of the things I hope we can strongly agree on is that the place where we should be asking a lot more is at the library and language level. I agree it's implausible that small teams will fix annoying and subtle bugs and also do the basic security design work they're already not doing. However, it seems equally unlikely that people are going to stop doing dumb shit like connecting things to the internet that really shouldn't be. Teaching the entire world how systems work, to a level that allows them to have good intuition about what's a safe action, is as hard as getting all the small dev teams to do the work. And harassing either users or devs about things outside of their scope of effective control is dumb and mean.
So that means we need language, framework, and library issues fixed at those levels, and then we need shaping incentives like liability to force migrations and rewrites once we have meaningful solutions. When we get to that point, yes, a lot of small teams will need to end-of-life products or accept that they're going to need to write a lot less code, but at least they won't be playing whack-a-mole with problems further up the stack and above their pay grade.
@dalias
=> More information about this toot | More toots from dymaxion@infosec.exchange
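One way to read that argument in code: push the fiddly invariant down into a shared helper so application-level parsers can't skip it. A sketch in Python, reusing the made-up length-prefixed format from earlier; read_exact is a hypothetical library function, not a standard one:

```python
# Fix a bug class once, below the application: every parser built on
# read_exact() gets the bounds check whether its authors thought of
# it or not.
import struct

def read_exact(buf: memoryview, offset: int, n: int) -> memoryview:
    """Return exactly n bytes at offset, or fail loudly."""
    if offset < 0 or n < 0 or len(buf) - offset < n:
        raise ValueError("read past end of buffer")
    return buf[offset:offset + n]

def parse_record(buf: bytes) -> bytes:
    view = memoryview(buf)
    (length,) = struct.unpack(">H", read_exact(view, 0, 2))
    return bytes(read_exact(view, 2, length))
```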