=> gemini://gemini.conman.org/boston/2025/01/04.1
My first question to you, as someone who is, shall we say, “sensitive” to security issues, why are you exposing a network based program to the Internet without an update in the past 14 years?
There are trade-offs. The least bad option is to use w3m for most web browsing, given that the alternatives are worse: too bloated, too annoying, the cost too high to implement something less bad, etc. Security is not the only axis or parameter under consideration, and the context also matters. Same story for those better-than-C platforms (hypothetical, failed in the market, or otherwise) with formal verification and granular security baked in from day negative something: might be nice, but those are not viable for me at the moment.
I did not assert the code was free of error. I was asking for examples of actual attacks.
Only after the next zero-day drops do you find out what the previous whoopsie was. Again, these are somewhat difficult to provide in advance.
With additional security measures, such as pledge, the outcome of such attacks can change from "my, what a splendid view we have from this Dunkirk beach!" to "they got into the lowlands, but got bogged down there", to stretch the old Maginot line metaphor.
If you don't want that defense in depth, eh, you do you.
The constant banging on the pledge() drum does nothing to show how such an attack works so as to educate programmers on what to look for and how to think about mitigations.
Someone who knows that exploit and refactoring stuff well could step up and write some docs. I mostly don't, so mostly won't. And constantly? Please.
Also, Linux is getting a Landlock thing, which sounds maybe a bit like unveil. Are they likewise deluded, or maybe there's something useful about this class of security thingymabobber, especially with "defense in depth" in mind?
This is the crux of my frustration here—all I see is “programs bad, mmmmmmkay?” and magic pixie dust to solve the issues.
A different take is that pledge and unveil, along with the various other security mitigations, hackathons, and so forth, are a good part of a healthy diet. Sure, you can still catch a cold, but it may be less bad, or have fewer complications.
That pledge and unveil are reeeeeeeeeeeee bad bad bad, well, that might be a harder sell, and I sure ain't buying what you've said so far.
And as someone who doesn't trust programmers (as you stated), this isn't a problem for you?
Nope.
OpenBSD (or possibly macOS, once you cull as much of the notification spam as you can) is, at present, the least bad option for me, everything else being worse. Trade-offs! (Alpine Linux isn't too terrible, but the moving costs are not low enough, and I supported Linux in production for too many years, so that old river-bed ain't much attractive.)
As you said youself[sic]: “I do not trust programmers (nor myself) to not write errors, so look to pledge and unveil by default, especially for ‘runs anything, accesses remote content’ browser code.” What am I to make of this, except for “Oh, all I have to do is add pledge() and unveil() to my program, and then it'll be safe to execute!”
Safe? Naw, less bad. A dam ain't ever safe. Various tradeoffs on time, cost, resources, outcomes, etc. have been made.
What, exactly, is your threat model? Because that's … I don't know what to say. You remove features just because they might be insecure. I guess that's one way to approach security. Another approach might be to cut the network cable.
Attacks against previously unknown errors in the w3m code. And, again, a new zero-day is not something I can provide in advance, so please stop asking. Zero days do exist, and the odds of there being at least one in w3m (or many, in Firefox or Chrome) are not zero. Firefox and Chrome got the pledge and unveil treatment, so they cannot so easily run off with ~/.ssh keys or overwrite random dot files. Likewise for my fork of w3m (and amfora, and irssi, and vi). If you can't or won't follow the logic of this paragraph, then we really don't have much of anything to talk about.
Put another way, I've got an old dam (w3m), and the alternatives are too costly or unsuitable. The least bad option was to add a spillway (pledge and unveil) to mitigate various problems, such as large amounts of rain (an unexpected exploit) that causes downstream problems (additional access to the system) should the dam fail. Some time later, I thought up a better way to support a fish ladder (handling textarea) without a higher risk of the dam being overtopped (letting w3m fork and exec programs, which I don't really need), and at almost zero cost to implement.
Others might use a different fish ladder design, where w3m is allowed to fork/exec, but is limited in what programs it can exec. I've certainly used that pattern in other programs, and may do so with w3m in the future. I hardly ever use textarea, so manually editing the rare temporary file is no big deal.
Then someone who it turns out doesn't use the dam and lives in a totally different watershed comes along and starts going on about how fish ladders and spillways are a total sham, asks for the water amount of the next 1,000 year flood so the dam can be designed all proper like, and then you won't need no spillway nor fish thing, those being so much as pixie toots. A top of the morning to you, too, sir!
I only ask as I was hacked once. Bad. Lost two servers (file system wiped clean), almost lost a third. And you know what? Not only did it not change my stance around computer security, there wasn't a XXXXXXXXXX thing I could do about it either! It was an inside job [6]. Is that part of your threat model?
Yes.
By the way, /usr/bin/vi -S is used to edit the temporary file. This does a pledge so that vi cannot run random programs.
But what's stopping an attacker from adding commands to your ~/.bashrc file to do all the nasty things it wants to do the next time you start a shell? That's the thing: pledge() by itself won't stop all attacks, and dismissing the question of “what attack surfaces” can lead one to believe that all that's needed is pledge(). It leads (in my opinion) to a false sense of security.
The not serious answer: bash is not installed. Why this melts the brain of some folks, I do not know, but they insist that I must have a PATH problem, or something, and some even give commands to try to debug why bash cannot be found. Perhaps they are overtrained on helping Linux users with pear-shaped systems? Hmm. That there might say something.
And I've dismissed the question of the attack surface? Pardon my French, but what the actual fuck? What do we do, toss out pledge and unveil, go back to OpenBSD's 1990s model of "review the code for errors and fix them" (wait, that does not plug all the gaps, therefore …), and thus return to the ever popular model of "oh, shit! patch now!" security incidents that pledge and unveil can make less bad, or prevent? Seriously, what is your alternative, and how is it less bad?
Serious answer: I took a few minutes yesterday and added unveil support to the so-called secure mode of my fork of vi, and now an attacker cannot write to any such files, in the (very unlikely) event they do find an exploit. Or, much more likely, me fat-fingering something. Low risk, but again, low cost to implement, and likely handy in some other, more useful context, probably one near doas(1) or similar restricted-access contexts. An attacker could still fill up a partition where writes are allowed, or exhaust inodes where they can create files, but that is not a big deal on my current system, though it could be on others; on a different system some other design may be less bad, or you may need to add filesystem quotas (which broke randomly on Linux, so there can be maintenance costs), and so on and on such concerns, costs, and trade-offs go.
The very fine and low-level permissions you seem to want sound a bit like what systrace provided, or worse, though systrace was tossed for, well, um, hedge? misallege? Something like that. The OpenBSD developers would be the ones to ask about why they moved away from systrace, as well as the reasons for the design of the current security features, the trade-offs, context, resources they have, etc. I somehow get the feeling that such a conversation may not be very productive. Sure, some features may turn out to make no sense, just like the fossil record is littered with failure, and science often advances one grave at a time, and these here computer things are terribly new for us, but in OpenBSD someone would need to make a convincing argument, or, better yet, provide patches that make the system less bad.