Hmmm... podman manifest is confusing me.
podman manifest create $MANIFEST_NAME
podman build --platform linux/amd64,linux/arm64 --manifest $MANIFEST_NAME .
podman manifest push --all localhost/$MANIFEST_NAME $REPO/$MANIFEST_NAME
...works as I expect. But if I then want to build a new version and I re-run the build command, I end up with 4 images in my manifest list: the new pair plus the old pair. Do I need to run "podman manifest remove" for the old images before rerunning the build?
=> More information about this toot | More toots from dneary@mastodon.ie
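One way to avoid accumulating stale entries, sketched below under the same `$MANIFEST_NAME` and `$REPO` variables as above, is to delete and recreate the local manifest list before every rebuild:

```shell
# Drop the old local manifest list (ignore the error if it doesn't exist yet).
podman manifest rm localhost/$MANIFEST_NAME 2>/dev/null || true
podman manifest create $MANIFEST_NAME

# Rebuild for both platforms; each run now adds exactly one image per arch.
podman build --platform linux/amd64,linux/arm64 --manifest $MANIFEST_NAME .

# Push the manifest list and all the images it references.
podman manifest push --all localhost/$MANIFEST_NAME $REPO/$MANIFEST_NAME
```

This trades a little rebuild time for never having to reason about which entries in the list are current.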
I'm not an expert at tag manipulation, as you can see. Maybe best practice is to use a new tag and create a new manifest every time I build?
=> More information about this toot | More toots from dneary@mastodon.ie
@dneary This is the multi-arch stuff, right? If so, I would expect all containers in a single manifest to be related to the same build.
=> More information about this toot | More toots from mattb@hachyderm.io
@mattb Right - the issue is when I change something and build new containers, the manifest now has 4 entries: two for arm64, two for amd64.
=> More information about this toot | More toots from dneary@mastodon.ie
@dneary FWIW my knowledge of these is hazy because I create them with buildx, which just does it for you. I think podman can also do this... i.e. 1 command, and it does all your multi-platform builds, creates the manifest and pushes the result.
Clients point at the manifest, which has the build tag. If you're doing it manually, I'd assume each build needs a new manifest.
=> More information about this toot | More toots from mattb@hachyderm.io
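For reference, the buildx one-command flow mentioned here looks roughly like this (image and tag names are placeholders):

```shell
# Build for both platforms, create the manifest list, and push it,
# all in one step. Requires a buildx builder with multi-platform support.
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag $REPO/$IMAGE:$TAG \
    --push .
```

Because the manifest is created fresh as part of each build, the stale-entry problem never comes up.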
@dneary e.g. CPO: https://github.com/kubernetes/cloud-provider-openstack/blob/bbb82f45aec62b78d2bfaaee771b635e698849dd/Makefile#L172
=> More information about this toot | More toots from mattb@hachyderm.io
@dneary I wrote about this some time ago, but not yet about the rebuild scenario specifically(1). So I am curious, as I will need to do that for my images at some point as well.
In my case I need to build the images on different machines (no cross-compile), which makes things more interesting.
Let me know how it goes for you.
=> More information about this toot | More toots from pilhuhn@mastodon.social
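For the separate-machines case, one sketch (image names and tags assumed): each builder pushes a single-arch tag, and a final step stitches them into one manifest list with `podman manifest add`:

```shell
# On the amd64 builder:
podman build -t $REPO/$IMAGE:amd64 . && podman push $REPO/$IMAGE:amd64

# On the arm64 builder:
podman build -t $REPO/$IMAGE:arm64 . && podman push $REPO/$IMAGE:arm64

# On any machine, assemble the multi-arch list from the pushed images:
podman manifest create $REPO/$IMAGE:latest
podman manifest add $REPO/$IMAGE:latest docker://$REPO/$IMAGE:amd64
podman manifest add $REPO/$IMAGE:latest docker://$REPO/$IMAGE:arm64
podman manifest push --all $REPO/$IMAGE:latest
```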
@pilhuhn I moved past this, and then hit a nasty hosted K8s gotcha. Using a NodePort service, I could not reach my deployment. Checked, double checked, guessed at a solution, then second guessed, triple checked... Tried type: LoadBalancer, and it worked. Back to NodePort, no joy.
Turns out you need to open the node port on the instances themselves before traffic can reach the service. And how to do that is different for each CSP.
D'oh.
=> More information about this toot | More toots from dneary@mastodon.ie
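On AWS, for example, opening the default Kubernetes NodePort range in the worker nodes' security group might look like this (the security-group ID is a placeholder; other clouds have their own equivalents):

```shell
# Allow the Kubernetes default NodePort range (30000-32767) into the nodes.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 30000-32767 \
    --cidr 0.0.0.0/0   # tighten this CIDR for anything beyond testing
```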