Ancestors

Toot

Written by machinin@lemmy.world on 2024-12-22 at 10:21

Docker on VM vs bare install on VM

https://lemmy.world/post/23413638

=> More information about this toot | More toots from machinin@lemmy.world

Descendants

Written by Scott on 2024-12-22 at 10:30

Containers are just processes with flags. Those flags isolate the process’s filesystem, memory, etc.

The advantage of containers is that software dependencies can be unique per container and not conflict with others. There are no significant disadvantages.

Without containers, if software A has the same dependency as software B but needs a different version of that dependency, you’ll have issues.
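As a sketch of how that plays out in practice (service names and image tags here are made up), each container simply pins its own runtime, so the conflict never arises:

```yaml
services:
  app-a:
    image: python:3.9-slim    # A needs the older runtime
  app-b:
    image: python:3.12-slim   # B needs the newer one - no conflict
```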

=> More information about this toot | More toots from scott@lem.free.as

Written by machinin@lemmy.world on 2024-12-22 at 11:47

Thanks for this - the one disadvantage I’m noticing is that to update the services I’m running, I have to rebuild the container. I can’t really just update from the UI if an update is available. I can do it; it’s just somewhat of a nuisance.

How often are there issues with dependencies? Is that a problem with a lot of software these days?

=> More information about this toot | More toots from machinin@lemmy.world

Written by Passerby6497@lemmy.world on 2024-12-22 at 15:39

But rebuilding your container is pretty trivial from the command line, all said and done. I have something like this aliased in my .bashrc to smooth it along:

docker compose pull; docker compose down; docker compose up -d
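One way to tidy that one-liner up is a small shell function instead of an alias - a sketch, assuming Docker Compose v2 and a made-up function name:

```shell
# dcu: pull new images, then recreate the stack in the current directory
dcu() {
  docker compose pull && docker compose down && docker compose up -d
}
```

Using && instead of ; stops the chain if the pull fails, so a network hiccup doesn’t take a working stack down.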

I regularly check on my systems, go through my docker dirs and run my alias to update everything fairly simply. Add in periodically scheduled image cleanups and it has been humming along for a couple of years for the most part (aside from the odd software issue and hardware failure).

=> More information about this toot | More toots from Passerby6497@lemmy.world

Written by tofubl@discuss.tchncs.de on 2024-12-22 at 20:54

Is there a specific reason you’re taking the services down before bringing them back up? Just docker compose up -d recreates all services that had a new image pulled, but leaves the others running.

=> More information about this toot | More toots from tofubl@discuss.tchncs.de

Written by Passerby6497@lemmy.world on 2024-12-22 at 21:09

Probably just a holdover from when I was first learning. I had issues with a couple of services not actually updating without it, so I just do it to be absolutely sure. Also, I only ever run one app per compose file, so that forces a “reboot” of the whole stack when I update.

=> More information about this toot | More toots from Passerby6497@lemmy.world

Written by machinin@lemmy.world on 2024-12-23 at 01:37

I know rebuilding containers is trivial, but updating a service in the UI is more trivial than that. I’m just trying to make my life as trivial as possible 😁. It seems like containers may be worth the little bit of extra effort.

=> More information about this toot | More toots from machinin@lemmy.world

Written by Voroxpete@sh.itjust.works on 2024-12-23 at 03:37

I mean, for anything where you’re willing to trust the container provider not to push breaking changes, you can just run Watchtower and have it automatically update. That’s how most of my stuff runs.
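For reference, Watchtower itself runs as just another container; a typical deployment is sketched below (verify the image tag and options against the official docs):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # access to the Docker socket lets it pull and recreate sibling containers
      - /var/run/docker.sock:/var/run/docker.sock
```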

=> More information about this toot | More toots from Voroxpete@sh.itjust.works

Written by Avid Amoeba on 2024-12-23 at 06:56

If you’re not using some sort of automatic updates, you’re not seriously trying to make your life as trivial as possible. 😂 Just use fixed major version tags where possible in order to avoid surprise breakage.
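Concretely, pinning looks like this in a compose file (postgres is just an example image):

```yaml
services:
  db:
    # pinned major version: minor/patch updates flow in, but ":latest"
    # could silently jump to the next major release and break the schema
    image: postgres:16
```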

=> More information about this toot | More toots from avidamoeba@lemmy.ca

Written by callcc@lemmy.world on 2024-12-22 at 13:56

I beg to disagree about the disadvantages. An important one is that you cannot easily update shared libraries globally - a problem with things like libssl or similar. Another disadvantage is the added complexity, both with respect to operation and in the sheer amount of code running. It can also be problematic that many people just run containers without doing any auditing. In general, containers are pretty opaque compared to OS-packaged software, which is usually compiled individually for the OS.

This being said, systemd offers a lot of isolation features that allows similar isolation to containers but without having to deal with docker.
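For example, a unit file can opt into much of that sandboxing with a few directives (the service name and path here are hypothetical; all directives are standard systemd.exec options):

```ini
[Service]
ExecStart=/usr/local/bin/myservice
DynamicUser=yes        # run as a transient, unprivileged user
ProtectSystem=strict   # mount /usr, /boot and /etc read-only for the service
ProtectHome=yes        # hide /home from the service
PrivateTmp=yes         # give the service its own /tmp
NoNewPrivileges=yes    # block privilege escalation via setuid binaries
```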

=> More information about this toot | More toots from callcc@lemmy.world

Written by Voroxpete@sh.itjust.works on 2024-12-22 at 10:37

Personally, I always like to use containers when possible. Keep in mind that unlike virts, containers have very minimal overhead. So there really is no practical cost to using them, and they provide better (though not perfect) security and some amount of sandboxing for every application.

Containers mean that you never have to worry about whether your VM is running the right versions of certain libraries. You never have to be afraid of breaking your setup by running a software update. They’re simpler, more robust and more reliable. There are almost no practical arguments against using them.

And if you’re running multiple services the advantages only multiply because now you no longer have to worry about running a bespoke environment for each service just to avoid conflicts.

=> More information about this toot | More toots from Voroxpete@sh.itjust.works

Written by machinin@lemmy.world on 2024-12-22 at 11:48

Copying a response I wrote on another comment -

Thanks for this - the one disadvantage I’m noticing is that to update the services I’m running, I have to rebuild the container. I can’t really just update from the UI if an update is available. I can do it; it’s just somewhat of a nuisance.

How often are there issues with dependencies? Is that a problem with a lot of software these days?

=> More information about this toot | More toots from machinin@lemmy.world

Written by Voroxpete@sh.itjust.works on 2024-12-22 at 15:32

There’s no good answer to that because it depends entirely on what you’re running. In a magical world where every open source project always uses the latest versions of everything while also maintaining extensive backwards compatibility, it would never be a problem. And I would finally get my unicorn and rainbows would cure cancer.

In practice, containers provide a layer of insurance that it just makes no sense to go without.

=> More information about this toot | More toots from Voroxpete@sh.itjust.works

Written by killabeezio@lemm.ee on 2024-12-23 at 04:32

OK, but containers generally have far fewer dependencies. If you are making your own images, you know exactly how to rebuild them. In the event something happens, that makes it much easier to get up and running again, and to remember what you did to get the service running. The only thing that would be better is Nix.

If you use an image that someone is maintaining, this makes it even easier and there are services out there that will keep your containers up to date when a new image is available. You can also just automate your image builds to run nightly and keep it up to date.
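A nightly rebuild can be as simple as a cron entry - a sketch only; the path and schedule are made up:

```
0 3 * * * cd /opt/myservice && docker compose build --pull && docker compose up -d
```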

=> More information about this toot | More toots from killabeezio@lemm.ee

Written by trilobite@lemmy.ml on 2024-12-22 at 10:42

I’ve been asking myself the same question for a while. The container inside a VM is my setup too. The container-in-the-VM-in-the-OS is a bit of an onion approach, which has pros and cons. If you are on low-powered hardware, I suspect having too many onion layers just eats up the little resources you have. On the other hand, as Scott@lem.free.as suggests, it is easier to run a system, update it and generally maintain it.

It would be good to have other opinions on this. Note that not everyone with a home lab has powerful hardware. I’m still using two T110s (32GB ECC RAM) that are now quite dated but are sufficient for my uses. They have TrueNAS Scale installed and one VM running 6 containers. It’s not fast, but it’s reliable.

=> More information about this toot | More toots from trilobite@lemmy.ml

Written by CameronDev@programming.dev on 2024-12-22 at 13:47

Container overhead is near zero. They are not virtualized or anything like that; they are just processes on your host system that are isolated. It’s functionally not much different from chroot.
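You can see this for yourself without Docker at all - a sketch, assuming a Linux host with util-linux’s unshare and unprivileged user namespaces enabled:

```shell
# Start a shell in fresh user, PID and mount namespaces.
# Inside, the process sees itself as PID 1, just like a container's entrypoint.
unshare --user --map-root-user --pid --fork --mount-proc sh -c 'echo "my pid: $$"'
```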

=> More information about this toot | More toots from CameronDev@programming.dev

Written by traches@sh.itjust.works on 2024-12-22 at 11:55

Cons of containers are slightly worse disk and memory consumption.

Pros:

Stick with the containers

=> More information about this toot | More toots from traches@sh.itjust.works

Written by gazter@aussie.zone on 2024-12-23 at 07:52

Wait, ease of installation? As someone who had to walk away from a semi-homebrew, mildly complicated cloud storage setup recently, that’s not the experience I had. Networks within networks, networks next to networks not talking to each other, mapped volumes, even checking logs is made more complicated by containerising. Sure, I’m a noob, but that only reinforces my point.

=> More information about this toot | More toots from gazter@aussie.zone

Written by traches@sh.itjust.works on 2024-12-23 at 09:36

I definitely see your point, but the difference is that it’s one thing to learn. Once you know docker, you can deploy and manage anything.

=> More information about this toot | More toots from traches@sh.itjust.works

Written by vegetaaaaaaa@lemmy.world on 2024-12-25 at 15:45

security

with containers, maintainers also need to keep their image up to date with the latest security fixes (most of them don’t), whereas in a VM these are usually handled by unattended-upgrades or similar. The maintainer then has to put out a new release and expect users to upgrade ASAP - or encourage redeploying the latest image every day or so, which is bad for other reasons (no warning for breaking changes, and the software must be tested thoroughly after every commit to master).

In short this adds the burden of proper OS/image maintenance for developers, something usually handled by distro maintainers.

trivy is helpful in assessing the maintenance/vulnerability level of OCI images.

=> More information about this toot | More toots from vegetaaaaaaa@lemmy.world

Written by sylver_dragon on 2024-12-22 at 13:44

I see containers as having a couple of advantages:

That all said, if an application does not have an official container image, the added complexity of creating and maintaining your own image can be a significant downside. One of my use cases for containers is running game servers (e.g. Valheim). There isn’t an official image; so, I had to roll my own. The effort to set this up isn’t zero and, when trying to sort out an image for a new game, it does take me a while before I can start playing. And those images need to be updated when a new version of the game releases. Technically, you can update a running container in a lot of cases; but, I usually end up rebuilding it at some point anyway.

I’d also note that careful use of VMs and snapshots can replicate or mitigate most of the advantages I listed. I’ve done both (a decade and a half as a sysadmin). But part of that “careful use” usually meant spinning up a new VM for each application. Putting multiple applications on the same OS install was usually asking for trouble. Eventually, one of the applications would get borked, and having the flexibility to just nuke the whole install saved a lot of time and effort. Going with containers removed the need to nuke the OS along with the application to get a similar effect.

At the end of the day, though, it’s your box; you do what you are most comfortable with and want to support. If that’s a monolithic install, then go for it. While I, or others, might find containers a better answer, maybe they aren’t for you.

=> More information about this toot | More toots from sylver_dragon@lemmy.world

Written by Mac on 2024-12-22 at 19:09

Man, back when I played there was a community image, at least.

=> More information about this toot | More toots from macgyver@federation.red

Written by sylver_dragon on 2024-12-22 at 20:04

I’m sure there are several out there. But when I was starting out, I didn’t see one and just rolled my own. The process was general enough that I’ve mostly been able to just replace the game’s Steam app ID in the Dockerfile and have it work well for other games. It doesn’t do anything fancy like automatic updating, but it works and doesn’t need anything special.
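The generic shape of such a Dockerfile might look like this - a sketch only, assuming the community cm2network/steamcmd base image (whose layout puts steamcmd.sh in the default working directory), with the app ID as a build argument; 896660 is commonly cited as the Valheim dedicated server:

```dockerfile
FROM cm2network/steamcmd:latest
ARG APP_ID=896660
# Download/update the server files for the given Steam app ID
RUN ./steamcmd.sh +force_install_dir /home/steam/server \
      +login anonymous +app_update ${APP_ID} validate +quit
WORKDIR /home/steam/server
```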

=> More information about this toot | More toots from sylver_dragon@lemmy.world

Written by Nephalis@discuss.tchncs.de on 2024-12-22 at 14:02

Just to throw another option in:

LXC instances are containers too, and they are the other major option Proxmox comes with.

They feel more like bare-metal installations, but are more lightweight and share the resources they do not use.

I never got why people run Proxmox with one VM holding several Docker containers, unless they absolutely don’t want to deal with installations at all.

On the other hand, I wanted to learn about Linux and the basics of handling Proxmox.

=> More information about this toot | More toots from Nephalis@discuss.tchncs.de

Written by atzanteol@sh.itjust.works on 2024-12-22 at 15:00

I wish the phrase “bare metal” would die…

=> More information about this toot | More toots from atzanteol@sh.itjust.works

Written by Possibly linux on 2024-12-23 at 05:09

I wouldn’t recommend it

=> More information about this toot | More toots from possiblylinux127@lemmy.zip

Written by corsicanguppy@lemmy.ca on 2024-12-23 at 05:18

Pros:

Cons: (vs docker-on-vm)

=> More information about this toot | More toots from corsicanguppy@lemmy.ca

Written by ikidd@lemmy.world on 2024-12-23 at 05:29

I love it like this. Snapshot before updating the VM. Back up hourly with PBS and you can restore individual stacks from backup. Replicate your docker VM for HA.

Wouldn’t do it any other way.

=> More information about this toot | More toots from ikidd@lemmy.world

Written by Avid Amoeba on 2024-12-23 at 06:57

Always use containers.

=> More information about this toot | More toots from avidamoeba@lemmy.ca

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113695974814229325