Multiarch container builds, version pinning, and package retention policies are fundamentally incompatible
~alpine/users mailing list, lists.alpinelinux.org
From: Nadia Santalla <nadia@santalla.io>
Date: 2024-11-18 17:21:44 UTC
Message-ID: <5d8a64d114f24495584cc4b646aa80d090631ffa.camel@santalla.io>

Hello!

I'm Nadia and I'm a long-time, happy user of Alpine for containerized
workloads.

I have recently stumbled across a problem related to multiarch container
images, version pinning, and Alpine's package retention policy (where only
the latest version of a package is available for a given Alpine release).

I have a Dockerfile that looks like this:

```Dockerfile
FROM --platform=$TARGETPLATFORM alpine:3.20.3

RUN apk --no-cache add --repository community chromium-swiftshader=130.0.6723.116-r0
# ...
```

This Dockerfile is currently (2024-11-18) unbuildable in a multiarch context,
because the chromium-swiftshader package has version drift across
architectures:

For x86_64, the latest (and only available) version is 131.0.6778.69-r0:
https://pkgs.alpinelinux.org/packages?name=chromium-swiftshader&branch=v3.20&repo=&arch=x86_64&origin=&flagged=&maintainer=

For aarch64, it is 130.0.6723.116-r0:
https://pkgs.alpinelinux.org/packages?name=chromium-swiftshader&branch=v3.20&repo=&arch=aarch64&origin=&flagged=&maintainer=

I'm sure the issue with chromium-swiftshader is being worked on, and that is
not what I want to surface here.

I would like to bring attention to the problem that when this happens for a
given package, container build pipelines that build for multiple
architectures break with no way out: updating to the latest version is not
possible, because it is not yet available for the lagging architecture, and
keeping the old version is not possible either, because it is no longer
installable for the architecture that has already moved on.

I believe version pinning is an established good practice in the industry, so
I think this use case deserves some consideration from the Alpine side. Even
if it is not very common, a package lagging behind on one architecture can
definitely happen, and it would be great for Alpine to handle it more
gracefully.

I'd love to hear whether this has happened to anyone else and how they handle
it, and/or what the Alpine developers' point of view is on this issue.
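For reference, the build that breaks is a plain multi-platform invocation of
the Dockerfile above; the image tag below is just a placeholder. With the
version drift described above, the `apk add` step succeeds for one platform
and fails for the other, so the build as a whole fails:

```sh
# Build the Dockerfile above for both architectures in one go (tag illustrative):
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/my-image:latest \
  --push .
```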
BR,
- N

From: fossdd <fossdd@pwned.life>
Date: 2024-11-18 23:20:03 UTC
Message-ID: <D5POTZXFUK1H.2LUELCKMXTWER@pwned.life>

On Mon Nov 18, 2024 at 6:21 PM CET, Nadia Santalla wrote:
> Hello!

Hi!

> I'm Nadia and I'm a long-time, happy user of Alpine for containerized
> workloads.

That's nice!

> This Dockerfile is currently (2024-11-18) unbuildable in a multiarch
> context, because the chromium-swiftshader package has version drift
> across architectures:
> For x86_64, the latest (and only available) version is 131.0.6778.69-r0
> For aarch64, it is 130.0.6723.116-r0

Usually all architectures have the same version. Of course some
inconsistencies may happen because some builders are faster than others.

Particularly in the last few days the arm builders were failing; see this
incident: https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10832/
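https://pkgs.alpinelinux.org">
If you want to see the current skew for yourself, something like this asks
each architecture's index what it would install today (illustrative; running
the foreign architecture this way needs qemu/binfmt emulation set up on the
host):

```sh
# Ask each architecture's repository index which chromium-swiftshader version
# `apk add` would currently pick in alpine:3.20.3:
for arch in amd64 arm64; do
  echo "== $arch =="
  docker run --rm --platform "linux/$arch" alpine:3.20.3 \
    apk add --no-cache --simulate chromium-swiftshader | grep chromium-swiftshader
done
```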
> I would like to bring attention to the problem that when this happens for a
> given package, container build pipelines that build for multiple
> architectures break with no way out [...]
>
> I believe version pinning is an established good practice in the industry,
> so I think this use case deserves some consideration from the Alpine side.

If a version breaks, forcing a specific version isn't a fix, it's more like a
temporary workaround. I wouldn't use this in production; most importantly,
you have to remember to remove the constraint again, otherwise you end up
with way more problems than you probably had before.

Partial upgrades are always tricky and only lead to problems. It's a lot
easier if we can safely assume for new commits that all systems are on that
tree.

For example, if we update two packages in a single MR and the former package
depends on a new version of the latter, we need to add the version constraint
to the package dependency. This effort, plus reviewing bad constraints and
removing old constraints, is A LOT and is not worth it.
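Concretely, with made-up package names just to illustrate: if libfoo and
barapp are bumped together and the new barapp needs the new libfoo, barapp's
APKBUILD has to carry the constraint explicitly:

```sh
# sketch of barapp/APKBUILD: the ">=1.5.0" is only needed because a user might
# upgrade barapp without also upgrading libfoo (i.e. a partial upgrade):
pkgname=barapp
pkgver=2.0.0
pkgrel=0
depends="libfoo>=1.5.0"
```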
Additionally, adding support for old versions in our repositories doubles
(or more?) the disk size of the mirrors, which is not doable for most
mirrors (we already disable most -dbg packages due to size issues).

Arch Linux has an archive for old packages
(https://wiki.archlinux.org/title/Arch_Linux_Archive). Someone could do the
same, but honestly this is just promoting bad behaviour and nobody should
rely on it.

From: Jakub Jirutka <jakub@jirutka.cz>
Date: 2024-11-19 01:09:23 UTC
Message-ID: <E83940A1-CC1C-456C-9F38-539B051B260C@jirutka.cz>

Hi

> Arch Linux has an archive for old packages
> (https://wiki.archlinux.org/title/Arch_Linux_Archive). Someone could do
> the same,

I'm doing weekly snapshots on https://mirror.fel.cvut.cz/alpine/snapshots
(for future scientific use), but only for x86_64.

> but honestly this is just promoting bad behaviour and nobody should rely
> on it.

Exactly. Your approach of pinning package versions in Dockerfiles is just
not compatible with the release model of Linux distros. We have stable
branches where you can rely on backward compatibility of package upgrades
and get security fixes.

Jakub

Sent from mobile phone
From: Nadia Santalla <nadia@santalla.io>
Date: 2024-11-19 13:04:21 UTC
Message-ID: <c0ff727d04790e33f05f3904ab1c2f60381c84bb.camel@santalla.io>

Hi Jakub, Fossdd,

Thanks a lot for sharing your insights!

I think it might be useful for me to elaborate on why the container and
non-container (so-called baremetal, but I'd rather not call it that) use
cases differ.

In the container world, we care *a lot* about images being reproducible:
this is one of the reasons why containers were born, after all. Admittedly,
we still have a lot of strides to make regarding byte-for-byte
reproducibility, but I believe we'll eventually get there. Regardless of
that, what is well established and even assumed in the container world is
the following: if someone builds a container image from a Dockerfile (plus a
given set of source files) today, and someone else builds an image from the
same Dockerfile (and the same source files) next week, the resulting images
should _behave the same_.

This assumption does not hold if your Dockerfile installs (or downloads, for
that matter) binaries or other components without a specified version: `apk`
(or `apt`) `install` is very likely to install one thing today and a
different one next week.

This assumption is heavily ingrained in the container world.
To give an example, the Dockerfile linters that ship by default with the most
popular editors have rules that explicitly tell you to pin versions in
`apk add`:
https://github.com/hadolint/hadolint/blob/4ab5d7809325c8de23956a44ca5a1f3c25907faf/src/Hadolint/Rule/DL3018.hs#L20
This warning pops up by default in editors all over the world.

That being said, I understand this is definitely container-specific. I run
Arch Linux (btw) on my workstations and servers, which has a similar policy
to Alpine's, and I have very rarely had to pin a package version there. I
think "partial upgrades are bad and problematic" is very true for full system
installations, but it is definitely at odds with container best practices.

I think it is good if Alpine keeps its current practices and assumes no
partial upgrades, to keep the required maintainer effort at reasonable
levels. But I also think the container community would appreciate a
best-effort approach to helping with reproducible container builds.

> Additionally, adding support for old versions in our repositories doubles
> (or more?) the disk size of the mirrors, which is not doable for most
> mirrors (we already disable most -dbg packages due to size issues).

I definitely see this cost problem and, while I have no idea how to magically
solve it, maybe I can offer a suggestion to contain it: if old versions are
provided for specific, non-general use cases (containers), then perhaps not
all mirrors need to carry them; perhaps a single one (like Arch Linux's
archive) could suffice. People who are willing to accept potential dependency
breakages in exchange for reproducibility in their containers would be able
to use it.
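To make that concrete, a build opting in to such an archive might look
roughly like this; the archive URL is purely made up for illustration:

```sh
# -X adds an extra repository for this single apk invocation; pointing it at a
# hypothetical "old versions" archive would keep an exact pin installable even
# after the regular mirrors have moved on:
apk --no-cache add -X https://archive.example.org/alpine/v3.20/community \
    chromium-swiftshader=130.0.6723.116-r0
```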
This email came out way longer than I was planning, but I hope I did a decent
job of explaining why we pin version numbers in the container world and how I
hope Alpine could support that use case better.

- Nad

From: fossdd <fossdd@pwned.life>
Date: 2024-11-19 15:13:19 UTC
Message-ID: <D5Q93UHIXUAE.2TSDK67WSPG7T@pwned.life>

On Tue Nov 19, 2024 at 2:04 PM CET, Nadia Santalla wrote:
> Regardless of that, what is well established and even assumed in the
> container world is the following: if someone builds a container image from
> a Dockerfile (plus a given set of source files) today, and someone else
> builds an image from the same Dockerfile (and the same source files) next
> week, the resulting images should _behave the same_.

That's why we have stable releases. Stable releases only contain security
(and bug) fixes.

> To give an example, the Dockerfile linters that ship by default with the
> most popular editors have rules that explicitly tell you to pin versions
> in `apk add`: [...]
> This warning pops up by default in editors all over the world.

I have never experienced such behaviour, and I do not agree with it. For
example, pinning versions does not fix security issues.

You could use the tilde operator instead of the equal sign (see
apk-world(5)).
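For instance, something along these lines would follow the 130.x series
instead of one exact build (sketch only; check apk-world(5) for the exact
matching semantics):

```sh
# Fuzzy version constraint: accept any 130.* build of the package rather than
# pinning a single -r revision:
apk --no-cache add chromium-swiftshader~130
```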
But often we fast-forward to another stable version, which includes bug and
security fixes and still behaves the same. So again: use stable releases and
everything should behave the same. That's our policy and it works pretty well
imo!

> That being said, I understand this is definitely container-specific. I run
> Arch Linux (btw) on my workstations and servers, which has a similar policy
> to Alpine's, and I have very rarely had to pin a package version there. I
> think "partial upgrades are bad and problematic" is very true for full
> system installations, but it is definitely at odds with container best
> practices.

This also applies to every other Alpine installation, even containers. You
still have linked libraries in Alpine containers, which WILL break. For
example, I receive a lot of bug reports because people use containers from
docker.io/library/docker, which uses Alpine as a base system, and install
system Python packages in them. However, because the container ends up with
its own python executable at a different version, most (all?) system Python
packages will fail. That's also why we don't allow system pip package
installations.

> I think it is good if Alpine keeps its current practices and assumes no
> partial upgrades, to keep the required maintainer effort at reasonable
> levels. But I also think the container community would appreciate a
> best-effort approach to helping with reproducible container builds.

We have a (currently mostly inactive) group which targets bit-for-bit
reproducibility for Alpine, at #alpine-reproducible on OFTC.

From: Nadia Santalla <nadia@santalla.io>
Date: 2024-11-19 15:32:51 UTC
Message-ID: <248ffd64e14d87a42d686c73b3293f607aa22571.camel@santalla.io>

Hi, thanks again for your insights! Answering inline below:

On Tue, 2024-11-19 at 16:13 +0100, fossdd wrote:
> That's why we have stable releases. Stable releases only contain security
> (and bug) fixes.
>
> I have never experienced such behaviour, and I do not agree with it. For
> example, pinning versions does not fix security issues.

I think there might be a misunderstanding here. We don't keep pinned versions
forever, or even for a long time. The container ecosystem has tooling that
helps with that. An example of this can be:
https://github.com/roobre/renovate-alpine/pull/13

I think getting updates like this is highly beneficial for containers, for a
number of reasons:

- Security. Maintainers *notice* when packages they depend on release updates
  and security fixes, and can react by updating and releasing a new version
  of their image containing the fix.
- Testing. Some versions break stuff. Receiving updates as PRs allows a test
  suite to run and ensure the updated dependency still works. This also
  applies to new Alpine releases, which may contain breaking changes.
- Reproducibility. As I said earlier, the same container image will contain
  the same versions. If a maintainer needs to reproduce a problem that
  happened one version ago, they can.
> This also applies to every other Alpine installation, even containers. You
> still have linked libraries in Alpine containers, which WILL break.

I think this is reasonably well understood in the container world, while
definitely not on the desktop. It is possible that I let versions drift, or
that two packages depend on one another. If that happens, I will adapt my
rules so the bot bumps those dependencies together. If something breaks
badly, the test suite (or even the build) will fail, and I'll know that I
need to update the base image first.

That's why I think that retaining old versions for a while (potentially in a
dedicated repo, to save costs and to keep unaware users away from it) can
likely work. There are definitely tradeoffs, but nothing that people who work
with containers daily cannot anticipate or work around, and I think the
benefits outweigh that effort.

- N