

      Andrea Veri: 2024 GNOME Infrastructure Annual Review

      news.movim.eu / PlanetGnome · Friday, 13 December - 21:29 · 6 minutes

    1. Introduction

    Time is passing by very quickly and another year is coming to an end as we approach the close of 2024. This year has been fundamental in shaping the present and the future of GNOME’s Infrastructure, with its major highlight being a completely revamped platform and a migration of all GNOME services over to AWS. In this post I’ll try to highlight what the major achievements have been throughout the past 12 months.

    2. Achievements

    Below is a list of the individual tasks and projects we were able to complete in 2024. This section will be particularly long, but I want to stress the importance of each of these items and the effort we put in to make sure they were delivered in a timely manner.

    2.1. Major achievements

    1. All the applications (except for ego, which we expect to handle as soon as next week or in January) were migrated to our new AWS platform (see GNOME Infrastructure migration to AWS )
    2. During each of the app migrations we made sure to:
       2a. Migrate to sso.gnome.org and make 2FA mandatory
       2b. Make sure database connections are handled via connection poolers
       2c. Double-check that the container images in use were up-to-date and that GitLab CI/CD pipeline schedules were turned on for weekly rebuilds (security updates)
       2d. For GitLab, make sure repositories were migrated to an EBS volume to increase IO throughput and bandwidth
    3. Migrated our backup mechanism away from rdiff-backup to the AWS Backup service (which handles both our AWS EFS and EBS snapshots)
    4. Retired our NSD install and migrated our authoritative name servers to CloudNS (it comes with multiple redundant authoritative servers, DDoS protection and automated DNSSEC key rotation and management)
    5. We moved away from Ceph and the need to maintain our own storage solution and started leveraging AWS EFS and EBS
    6. We deprecated Splunk and built a solution around promtail and Loki in order to handle our logging requirements
    7. We deprecated Prometheus blackbox and started leveraging CloudNS monitoring service which we interact with using an API and a set of CI/CD jobs we host in GitHub
    8. We archived GNOME’s wiki and turned it into a static HTML copy
    9. We replaced ftpadmin with the GNOME Release Service, thanks speknik! More information about the steps GNOME maintainers should now follow when doing a module release is available here . The service uses JWT tokens to verify and authorize specific CI/CD jobs and only allows new releases when the process is initiated from a protected tag by a project CI living within the GNOME GitLab namespace. With master.gnome.org and ftpadmin having been in production for literally ages, we wanted a better mechanism for releasing GNOME software, so that a single leaked maintainer SSH key cannot allow an attacker to tamper with tarballs and potentially compromise millions of computers running GNOME around the globe. With this change we don’t leverage SSH anymore and, most importantly, we don’t allow maintainers to generate GNOME module tarballs on their personal computers; instead we require them to use CI/CD to achieve the same result. We’ll be coming up shortly with a dedicated and isolated runner that will only run jobs tagged as releasing GNOME software.
    10. We retired our mirroring infrastructure based on Mirrorbits and replaced it with our CDN partner, CDN77
    11. We decoupled the GIMP mirroring service from GNOME’s; GIMP now hosts its tarballs (and the associated rsync daemon) on a separate master node. Thanks OSUOSL for sponsoring the VM that makes this possible!

    2.2. Minor achievements

    1. Retired multiple VMs: splunk, nsd0{1,2}, master, ceph-metrics, gitaly
    2. We started managing our DNS using an API and CI/CD jobs hosted in GitHub (this avoids relying on GNOME’s GitLab, which, in case of unavailability, would prevent us from updating DNS entries)
    3. We migrated smtp.gnome.org to OSCI in order not to lose the IP reputation and the various whitelistings we have received from multiple organizations throughout the years
    4. We deprecated our former internal authoritative DNS servers based on FreeIPA. We are now leveraging internal VPC resolvers and Route53 Private zones
    5. We deprecated all our OSUOSL GitLab runners due to particularly slow IO and high steal time and replaced them with a new Hetzner EX44 instance, kindly sponsored by GIMP. OSUOSL is working on bringing local storage to their OpenStack platform; we look forward to testing that and introducing new runners as soon as the solution is made available
    6. Retired idm0{1,2} and redirected them to a new FreeIPA load balanced service at https://idm.gnome.org
    7. We retired services which weren’t relevant for the community anymore: surveys.gnome.org, roundcube (aka webmail.gnome.org)
    8. We migrated nmcheck.gnome.org to Fastly and are using Synthetic responses to handle HTTP responses to clients
    9. We upgraded to Ansible Automation Platform (AAP) 2.5
    10. As part of the migration to our new AWS-based platform, we upgraded OpenShift to release 4.17
    11. We received a 2k grant from Microsoft which we are using for an Azure ARM64 GitLab runner
    12. Our entire GitLab runners fleet is now kept in sync hourly using AAP (Ansible roles were built to make this happen)
    13. We upgraded Cachet to 3.x series and fixed dynamic status.gnome.org updates (via a customized version of cachet-monitor)
    14. OS Currency: we upgraded all our systems to RHEL 9
    15. We converted all our OpenShift images that were using a web server to Nginx for consistency/simplicity
    16. Replaced NRPE with Prometheus metrics-based monitoring; checks such as IDM replication and status are now handled via the Node Exporter textfile plugin
    17. Migrated download.qemu.org (yes, we also host some components of QEMU’s Infrastructure) to use nginx-s3-gateway, downloads are then served via CDN77

    2.3. Minor annoyances/bugs that were also fixed in 2024

    1. Invalid OCSP responses from CDN77, https://gitlab.gnome.org/Infrastructure/Infrastructure/-/issues/1511
    2. With the migration to USE_TINI for GitLab, no gpg zombie processes are being generated anymore

    2.4. Our brand new and renewed partnerships

    1. From November 2024 and ongoing, AWS will provide sponsorship and funding to the GNOME Project to sustain the majority of its infrastructure needs
    2. Red Hat kindly sponsored subscriptions for RHEL, OpenShift and AAP, as well as hosting and bandwidth for the GNOME Infrastructure throughout 2024
    3. CDN77 provided unlimited bandwidth / traffic on their CDN offering
    4. Fastly renewed their unlimited bandwidth / traffic plan on their Delivery/Compute offerings
    5. And thanks to OSUOSL, Packet, DigitalOcean and Microsoft for the continued hosting and sponsorship of a set of GitLab runners, virtual machines and ARM builders!

    Expressing my gratitude

    As I usually do at the end of each calendar year, I want to express my gratitude to Bartłomiej Piotrowski for our continued cooperation and also to Stefan Peknik for his continued efforts in developing the GNOME Release Service. We started this journey together many months ago when Stefan was trying to find a topic to base his CS bachelor thesis on. With this in mind I proposed the idea of replacing ftpadmin with a better technology, also in light of what happened with the xz case. Stefan put all his enthusiasm and professionalism into making this happen, and with the service going into production on the 11th of December 2024, history was made.

    That being said, we’re closing this year extremely close to retiring our presence from RAL3, which we expect to happen in January 2025. The GNOME Infrastructure will also send in a proposal to talk at GUADEC 2025, in Italy, to present and discuss all these changes with the community.


      www.dragonsreach.it /2024/12/14/gnome-infrastructure-annual-review/


      Christian Hergert: Layered Settings

      news.movim.eu / PlanetGnome · Friday, 13 December - 17:22 · 1 minute

    Early on Builder had the concept of layered settings. You had an application default layer the user could control. You also had a project layer which allowed the user to change settings just for that project. But that was about the extent of it. Additionally, these settings were just stored in your normal GSettings data repository, so there was no sharing of settings with other project collaborators. Boo!

    With Foundry, I’d like to have a bit more flexibility and control. Specifically, I want three layers. One layer for the user’s preferences at the application level. Then project settings which can be bundled with the project by the maintainer for needs specific to the project. Lastly, a layer of user overrides which takes maximum preference.

    Of course, it should still continue to use GSettings under the hood because that makes writing application UI rather easy. As mentioned previously , we’ll have a .foundry directory we place within the project with storage for both user and project data. That means we can use a GKeyFile back-end to GSettings and place the data there.
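    To make this concrete, here is a minimal sketch, in plain GIO rather than Foundry’s actual code, of how a keyfile-backed layer could be created and how a three-layer lookup might resolve a key. The schema id comes from the example below; the helper names, file paths and the simple first-match resolution are illustrative assumptions only, relying on g_settings_get_user_value() returning NULL when a layer has not explicitly set the key.

    /* Illustrative sketch: three GSettings layers, with the project and user
     * layers backed by key files under a .foundry directory. Helper names and
     * paths are hypothetical, not Foundry's. */
    #define G_SETTINGS_ENABLE_BACKEND
    #include <gio/gio.h>
    #include <gio/gsettingsbackend.h>

    static GSettings *
    load_keyfile_layer (const char *keyfile_path)
    {
      GSettingsBackend *backend;
      GSettings *settings;

      backend = g_keyfile_settings_backend_new (keyfile_path, "/", NULL);
      settings = g_settings_new_with_backend ("app.devsuite.foundry.project", backend);
      g_object_unref (backend);

      return settings;
    }

    /* The first layer with an explicitly written value wins; otherwise fall
     * back to the application defaults in the user's regular backend. */
    static GVariant *
    get_layered (GSettings *user_overrides,
                 GSettings *project_defaults,
                 GSettings *app_defaults,
                 const char *key)
    {
      GVariant *value;

      if ((value = g_settings_get_user_value (user_overrides, key)))
        return value;
      if ((value = g_settings_get_user_value (project_defaults, key)))
        return value;

      return g_settings_get_value (app_defaults, key);
    }

    Writes would then simply pick the layer explicitly, which lines up with the --project and --global switches in the CLI shown below.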

    You can git commit your project settings if you’re the maintainer and ensure that your project’s conventions are shared with your collaborators.

    Of course, since this is all command-line based right now, there are tab-completable commands for this, which, again, makes unit testing this stuff easier.

    # Reads the app.devsuite.foundry.project config-id gsetting
    # taking into account all layers
    $ foundry settings get project config-id

    # Sets the config-id setting for just this user
    $ foundry settings set project config-id "'org.example.app.json'"

    # Sets the config-id for the project default which might
    # be useful if you ship multiple flatpak manifests like GTK does
    $ foundry settings set --project project config-id "'org.example.app.json'"

    # Or maybe set a default for the app
    $ foundry settings set --global project stop-signal SIGKILL

    That code is now wired up to the FoundryContext via foundry_context_load_settings() .

    Next time I hope to cover the various sub-systems you might need in an IDE and how those services are broken down in Foundry.


      blogs.gnome.org /chergert/2024/12/13/layered-settings/


      Jussi Pakkanen: CMYK me baby one more time!

      news.movim.eu / PlanetGnome · Friday, 13 December - 17:22 · 1 minute

    Did you know that Jpeg supports images in the CMYK colorspace? And that people are actually using them in the wild? This being the case I needed to add support for them to CapyPDF. The development steps are quite simple: first you create a CMYK Jpeg file, then you create a test document that embeds it and finally you look at the result in a PDF renderer.

    Off to a painter application then. This is what the test image looks like.

    Then we update the Jpeg parsing code to detect CMYK images and write the corresponding metadata to the output PDF. What does the end result look like then?

    Aaaaand now we have a problem. Specifically one of an arbitrary color remapping. It might seem this is just a case of inverted colors. It's not (I checked); something weirder is going on. For reference, Acrobat Reader's output looks identical.

    At this point, rather than poke things at random and hope for the best, a good strategy is to get more test data. Since Scribus is pretty much the gold standard of print-quality PDF production, I went about recreating the test document in it.

    Which failed immediately on loading the image.

    Here we have Gwenview and Scribus presenting their interpretations of the exact same image. If you use Scribus to generate a PDF, it will convert the Jpeg into some three channel (i.e. RGB) ICC profile.

    Take-home exercise

    Where is the bug (or a hole in the spec) in this case:

    • The original CMYK jpeg is correct, but Scribus and PDF renderers read it in incorrectly?
    • The original image is incorrect and Gwenview has a separate inverse bug, so the two cancel each other out?
    • The image is correct but the metadata written in the file by CapyPDF is incorrect?
    • The PDF spec has a big chunk of UB here and the final result can be anything?
    • Aliens?
    I don't know the correct answer. If someone out there does, do let me know.


      Sam Thursfield: Status update, 13/12/24

      news.movim.eu / PlanetGnome · Friday, 13 December - 11:39 · 3 minutes

    It’s been an interesting and cold month so far. I made a successful trip to the UK, one of the first times I’ve been back in winter and avoided being exposed to COVID19 since the pandemic, so that’s a step forwards.

    I’ve been thinking a lot about documentation recently in a few different places where I work or contribute as a volunteer. One such place is within openQA and the GNOME QA initiative, so here’s what’s been happening there recently.

    The monthly Linux QA call is one of my 2024 success stories. The goal of the call is to foster collaboration between distros and upstreams, so that we share testing effort rather than duplicating it, and we get issue reports upstream as soon as things break. Through this call I’ve met many of the key people who do automated testing of GNOME downstream, and we are starting to share ideas for the future.

    What I want for GNOME is to be able to run QA tests for any open merge request, so we can spot regressions before they even land. As part of the STF+GNOME+Codethink collaboration we got a working prototype of upstream QA for GNOME Shell, but to move beyond a prototype, we need to build a more solid foundation. The current GNOME Shell prototype has about 100 lines of copy-pasted openQA code to set up the VM, and this would need to be copied into every other GNOME module where we might run QA tests. I very much do not want so many copies of one piece of code.

    Screenshot of openQA web UI showing GNOME Tour

    I mentioned this in the QA call and Oli Kurz, who is the openQA product owner at openSUSE, proposed that we put the setup logic directly into os-autoinst, which is openQA’s test runner. The os-autoinst code has a bare ‘basetest’ module which must be customised for the OS under test. Each distro maintains their own infrastructure on top of that to wait for the desktop to start, log in as a user, and so on.

    Since most of us test Linux, we can reasonably add a base class specific to Linux , and some further helpers for systemd-based OSes. I love this idea as we could now share improvements between all the different QA teams.

    So the base test class can be extended, but how do we document its capabilities? I find openQA’s existing documentation pretty overwhelming as a single 50,000 word document . It’s not feasible for me to totally rework the documentation, but if we’re going to collaborate upstream then we need to have some way to document the new base classes.

    Of course I wrote some GNOME-specific documentation for QA; but hidden docs like this are doomed to become obsolete. I began adding a section on testing to the GNOME developer guide, but I’ve had no feedback at all on it, so this effort seems like a dead end.

    Ideas welcome for where we go from here.

    Swans on a canal with the sun setting

    Looking at the problem from another angle, we still lack a collective understanding of what openQA is and why you might use it. As a small step towards making this clearer, I wrote a comparison of four testing tools which you can read here . And at Oli’s suggestion I proposed a new Wikipedia page for openQA .

    Screenshot of Draft:OpenQA page from Wikipedia

    Please suggest changes here or in the openQA matrix channel . If you’re reading this and are a Wikipedia reviewer, then I would greatly appreciate a review so we can publish the new page. We could then also add openQA to the Wikipedia “Comparison of GUI testing tools” . Through small efforts like this we can hopefully reduce how much documentation is needed on the GNOME side, as we won’t need to start at “what even is openQA”.

    I have a lot more to say about documentation but that will have to wait for next month. Enjoy the festive season and I hope your 2025 gets off to a good start!


      Matthew Garrett: When should we require that firmware be free?

      news.movim.eu / PlanetGnome · Thursday, 12 December - 15:57 · 7 minutes

    The distinction between hardware and software has historically been relatively easy to understand - hardware is the physical object that software runs on. This is made more complicated by the existence of programmable logic like FPGAs, but by and large things tend to fall into fairly neat categories if we're drawing that distinction.

    Conversations usually become more complicated when we introduce firmware, but should they? According to Wikipedia, Firmware is software that provides low-level control of computing device hardware , and basically anything that's generally described as firmware certainly fits into the "software" side of the above hardware/software binary. From a software freedom perspective, this seems like something where the obvious answer to "Should this be free" is "yes", but it's worth thinking about why the answer is yes - the goal of free software isn't freedom for freedom's sake, but because the freedoms embodied in the Free Software Definition (and by proxy the DFSG ) are grounded in real world practicalities.

    How do these line up for firmware? Firmware can fit into two main classes - it can be something that's responsible for initialisation of the hardware (such as, historically, BIOS, which is involved in initialisation and boot and then largely irrelevant for runtime[1]) or it can be something that makes the hardware work at runtime (wifi card firmware being an obvious example). The role of free software in the latter case feels fairly intuitive, since the interface and functionality the hardware offers to the operating system is frequently largely defined by the firmware running on it. Your wifi chipset is, these days, largely a software defined radio, and what you can do with it is determined by what the firmware it's running allows you to do. Sometimes those restrictions may be required by law, but other times they're simply because the people writing the firmware aren't interested in supporting a feature - they may see no reason to allow raw radio packets to be provided to the OS, for instance. We also shouldn't ignore the fact that sufficiently complicated firmware exposed to untrusted input (as is the case in most wifi scenarios) may contain exploitable vulnerabilities allowing attackers to gain arbitrary code execution on the wifi chipset - and potentially use that as a way to gain control of the host OS (see this writeup for an example). Vendors being in a unique position to update that firmware means users may never receive security updates, leaving them with a choice between discarding hardware that otherwise works perfectly or leaving themselves vulnerable to known security issues.

    But even the cases where firmware does nothing other than initialise the hardware cause problems. A lot of hardware has functionality controlled by registers that can be locked during the boot process. Vendor firmware may choose to disable (or, rather, never to enable) functionality that may be beneficial to a user, and then lock out the ability to reconfigure the hardware later. Without any ability to modify that firmware, the user lacks the freedom to choose what functionality their hardware makes available to them. Again, the ability to inspect this firmware and modify it has a distinct benefit to the user.

    So, from a practical perspective, I think there's a strong argument that users would benefit from most (if not all) firmware being free software, and I don't think that's an especially controversial argument. So I think this is less of a philosophical discussion, and more of a strategic one - is spending time focused on ensuring firmware is free worthwhile, and if so what's an appropriate way of achieving this?

    I think there's two consistent ways to view this. One is to view free firmware as desirable but not necessary . This approach basically argues that code that's running on hardware that isn't the main CPU would benefit from being free, in the same way that code running on a remote network service would benefit from being free, but that this is much less important than ensuring that all the code running in the context of the OS on the primary CPU is free. The maximalist position is not to compromise at all - all software on a system, whether it's running at boot or during runtime, and whether it's running on the primary CPU or any other component on the board, should be free.

    Personally, I lean towards the former and think there's a reasonably coherent argument here. I think users would benefit from the ability to modify the code running on hardware that their OS talks to, in the same way that I think users would benefit from the ability to modify the code running on hardware the other side of a network link that their browser talks to. I also think that there's enough that remains to be done in terms of what's running on the host CPU that it's not worth having that fight yet. But I think the latter is absolutely intellectually consistent, and while I don't agree with it from a pragmatic perspective I think things would undeniably be better if we lived in that world.

    This feels like a thing you'd expect the Free Software Foundation to have opinions on, and it does! There are two primarily relevant things - the Respects your Freedoms campaign focused on ensuring that certified hardware meets certain requirements (including around firmware), and the Free System Distribution Guidelines , which define a baseline for an OS to be considered free by the FSF (including requirements around firmware).

    RYF requires that all software on a piece of hardware be free other than under one specific set of circumstances. If software runs on (a) a secondary processor and (b) within which software installation is not intended after the user obtains the product , then the software does not need to be free. (b) effectively means that the firmware has to be in ROM, since any runtime interface that allows the firmware to be loaded or updated is intended to allow software installation after the user obtains the product.

    The Free System Distribution Guidelines require that all non-free firmware be removed from the OS before it can be considered free. The recommended mechanism to achieve this is via linux-libre , a project that produces tooling to remove anything that looks plausibly like a non-free firmware blob from the Linux source code, along with any incitement to the user to load firmware - including even removing suggestions to update CPU microcode in order to mitigate CPU vulnerabilities.

    For hardware that requires non-free firmware to be loaded at runtime in order to work, linux-libre doesn't do anything to work around this - the hardware will simply not work. In this respect, linux-libre reduces the amount of non-free firmware running on a system in the same way that removing the hardware would. This presumably encourages users to purchase RYF compliant hardware.

    But does that actually improve things? RYF doesn't require that a piece of hardware have no non-free firmware, it simply requires that any non-free firmware be hidden from the user. CPU microcode is an instructive example here. At the time of writing, every laptop listed here has an Intel CPU. Every Intel CPU has microcode in ROM, typically an early revision that is known to have many bugs. The expectation is that this microcode is updated in the field by either the firmware or the OS at boot time - the updated version is loaded into RAM on the CPU, and vanishes if power is cut. The combination of RYF and linux-libre doesn't reduce the amount of non-free code running inside the CPU, it just means that the user (a) is more likely to hit since-fixed bugs (including security ones!), and (b) has less guidance on how to avoid them.

    As long as RYF permits hardware that makes use of non-free firmware I think it hurts more than it helps. In many cases users aren't guided away from non-free firmware - instead it's hidden away from them, leaving them less aware that their freedom is constrained. Linux-libre goes further, refusing to even inform the user that the non-free firmware that their hardware depends on can be upgraded to improve their security.

    Out of sight shouldn't mean out of mind. If non-free firmware is a threat to user freedom then allowing it to exist in ROM doesn't do anything to solve that problem. And if it isn't a threat to user freedom, then what's the point of requiring linux-libre for a Linux distribution to be considered free by the FSF? We seem to have ended up in the worst case scenario, where nothing is being done to actually replace any of the non-free firmware running on people's systems and where users may even end up with a reduced awareness that the non-free firmware even exists.

    [1] Yes yes SMM


      Hans de Goede: IPU6 camera support is broken in kernel 6.11.11 / 6.12.2-6.12.4

      news.movim.eu / PlanetGnome · Thursday, 12 December - 13:52

    Unfortunately an incomplete backport of IPU6 DMA handling changes has landed in kernel 6.11.11.

    This not only causes IPU6 cameras to stop working, it also causes the kernel to (often?) crash on boot on systems where the IPU6 is in use and thus enabled by the BIOS.

    Kernels 6.12.2 - 6.12.4 are also affected by this. A fix for this is pending for the upcoming 6.12.5 release.

    6.11.11 is the last stable release in the 6.11.y series, so there will be no new stable 6.11.y release with a fix.

    As a workaround, users affected by this can stay with 6.11.10 or 6.12.1 until 6.12.5 is available in their distribution's updates(-testing) repository.




      Matthew Garrett: Android privacy improvements break key attestation

      news.movim.eu / PlanetGnome · Thursday, 12 December - 12:16 · 5 minutes

    Sometimes you want to restrict access to something to a specific set of devices - for instance, you might want your corporate VPN to only be reachable from devices owned by your company. You can't really trust a device that self attests to its identity, for instance by reporting its MAC address or serial number, for a couple of reasons:
    • These aren't fixed - MAC addresses are trivially reprogrammable, and serial numbers are typically stored in reprogrammable flash at their most protected
    • A malicious device could simply lie about them
    If we want a high degree of confidence that the device we're talking to really is the device it claims to be, we need something that's much harder to spoof. For devices with a TPM this is the TPM itself. Every TPM has an Endorsement Key (EK) that's associated with a certificate that chains back to the TPM manufacturer. By verifying that certificate path and having the TPM prove that it's in possession of the private half of the EK, we know that we're communicating with a genuine TPM[1].

    Android has a broadly equivalent thing called ID Attestation. Android devices can generate a signed attestation that they have certain characteristics and identifiers, and this can be chained back to the manufacturer. Obviously providing signed proof of the device identifier is kind of problematic from a privacy perspective, so the short version[2] is that only apps installed using a corporate account rather than a normal user account are able to do this.

    But that's still not ideal - the device identifiers involved included the IMEI and serial number of the device, and those could potentially be used to correlate devices across privacy boundaries since they're static[3] identifiers that are the same both inside a corporate work profile and in the normal user profile, and also remain static if you move between different employers and use the same phone[4]. So, since Android 12, ID Attestation includes an "Enterprise Specific ID" or ESID. The ESID is based on a hash of device-specific data plus the enterprise that the corporate work profile is associated with. If a device is enrolled with the same enterprise then this ID will remain static, if it's enrolled with a different enterprise it'll change, and it just doesn't exist outside the work profile at all. The other device identifiers are no longer exposed.

    But device ID verification isn't enough to solve the underlying problem here. When we receive a device ID attestation we know that someone at the far end has possession of a device with that ID, but we don't know that that device is where the packets are originating. If our VPN simply has an API that asks for an attestation from a trusted device before routing packets, we could pass that on to said trusted device and then simply forward the attestation to the VPN server[5]. We need some way to prove that the device trying to authenticate is actually that device.

    The answer to this is key provenance attestation. If we can prove that an encryption key was generated on a trusted device, and that the private half of that key is stored in hardware and can't be exported, then using that key to establish a connection proves that we're actually communicating with a trusted device. TPMs are able to do this using the attestation keys generated in the Credential Activation process, giving us proof that a specific keypair was generated on a TPM that we've previously established is trusted.

    Android again has an equivalent called Key Attestation. This doesn't quite work the same way as the TPM process - rather than being tied back to the same unique cryptographic identity, Android key attestation chains back through a separate cryptographic certificate chain but contains a statement about the device identity - including the IMEI and serial number. By comparing those to the values in the device ID attestation we know that the key is associated with a trusted device and we can now establish trust in that key.

    "But Matthew", those of you who've been paying close attention may be saying, "Didn't Android 12 remove the IMEI and serial number from the device ID attestation?" And, well, congratulations, you were apparently paying more attention than Google. The key attestation no longer contains enough information to tie back to the device ID attestation, making it impossible to prove that a hardware-backed key is associated with a specific device ID attestation and its enterprise enrollment.

    I don't think this was any sort of deliberate breakage, and it's probably more an example of shipping the org chart - my understanding is that device ID attestation and key attestation are implemented by different parts of the Android organisation and the impact of the ESID change (something that appears to be a legitimate improvement in privacy!) on key attestation was probably just not realised. But it's still a pain.

    [1] Those of you paying attention may realise that what we're doing here is proving the identity of the TPM, not the identity of device it's associated with. Typically the TPM identity won't vary over the lifetime of the device, so having a one-time binding of those two identities (such as when a device is initially being provisioned) is sufficient. There's actually a spec for distributing Platform Certificates that allows device manufacturers to bind these together during manufacturing, but I last worked on those a few years back and don't know what the current state of the art there is

    [2] Android has a bewildering array of different profile mechanisms, some of which are apparently deprecated, and I can never remember how any of this works, so you're not getting the long version

    [3] Nominally, anyway. Cough.

    [4] I wholeheartedly encourage people not to put work accounts on their personal phones, but I am a filthy hypocrite here

    [5] Obviously if we have the ability to ask for attestation from a trusted device, we have access to a trusted device. Why not simply use the trusted device? The answer there may be that we've compromised one and want to do as little as possible on it in order to reduce the probability of triggering any sort of endpoint detection agent, or it may be because we want to run on a device with different security properties than those enforced on the trusted device.


      Aryan Kaushik: GNOME Asia India 2024

      news.movim.eu / PlanetGnome · Wednesday, 11 December - 20:32 · 5 minutes

    Namaste Everyone!

    Hi everyone, it was that time of the year again when we had our beloved GNOME Asia happening.

    Last year GNOME Asia happened in Kathmandu, Nepal from December 1 - 3, and this time it happened in my country, in Bengaluru, from the 6th to the 8th.

    Btw, a disclaimer - I was there on behalf of Ubuntu but the opinions over here are my own :)

    Also, this one might not be that interesting due to well... reasons.

    Day 0 (Because indexing starts with 0 ;))

    Before departing from India... oh, I forgot this one was in India only haha.

    This GNOME Asia had a lot of drama, with the local team requiring an NDA to sign which we got to know only hours before the event, and we also got to know we couldn't host an Ubuntu release party there even when it had been agreed to months in advance, again a few weeks before, and even on the same day... So yeah... it was no less than an Indian daily soap episode, which is quite ironic lol.

    But, in the end, I believe the GNOME team would not have known about it either; it felt like a local team problem.

    Enough with the rant, it was not all bad, I got to meet some of my GNOMEies and Ubunties (is that even a word?) friends upon arriving, and man did we have a blast.

    We hijacked a cafe and sat there till around 1 A.M. and laughed so hard we might have been termed Psychopaths by the watchers.

    But what do we care, we were there for the sole purpose of having as much fun as we could.

    After returning, I let my inner urge win and dived into the swimming pool on the hotel rooftop, at 2 A.M. in winter. Talk about the will to do anything ;)

    Day 1

    Upon proceeding to the venue we were asked for corporate ID cards as the event was in the Red Hat office inside a corporate park. We didn't know this and thus had to travel 2 more K.M. to the main entrance and get a visitor pass. Had to give an extra tip to the cab so that he wouldn't give me the look haha.

    Upon entering the tech park, I got to witness why Bengaluru is often termed India's Silicon Valley. It was just filled with companies of every type and size so that was a sight to behold.

    The talk I loved that day was "Build A GNOME Community? Yes You Can." by Aaditya Singh, full of insights and fun, we term each other as Bhai (Hindi for Brother) so it was fun to attend his talk.

    This time I wasn't able to attend many of the talks as I now had the responsibility to explore a new venue for our release party.

    Later my friends and I took a detour to find the new venue, and we found one quite quickly, about 400 metres away from the office.

    This venue had everything we needed, a great environment, the right "vibe", and tons of freedom, which we FOSS lovers of course love and cherish.

    It also gave us the freedom to no longer be restricted to the end of the event, but to shift the party up to the lunch break.

    At night me, Fenris and Syazwan went to "The Rameshwaram Cafe" which is very famous in Bengaluru, and rightly so, the taste was really good and for the fame not that expensive either.

    Fenris didn't eat much as he still has to sober up to Indian dishes xD.

    Day 2

    The first talk was by Syazwan and boy did I have to rush to the venue to attend it.

    Waking up early is not easy for me hehe but his talks are always so funny, engaging and insightful that you just can't miss attending them live.

    After a few talks came my time to present on the topic “Linux in India: A perspective of how it is and what we can do to improve it.”

    Where we discussed all the challenges faced by us in boosting the market share of Linux and open source in India and what measures we could take to improve the situation.

    We also glimpsed over the state of Ubuntu India LoCo and the actions we are taking to reboot it, with multiple events like the one we just conducted.

    My talk can be viewed at - YouTube - Linux in India: A perspective of how it is and what we can do to improve it.

    And that was quite fun, I loved the awesome feedback I got and it is just amazing to see people loving your content. We then quickly rushed to the venue of the party, track 1 was already there and with us, we took track 2 peeps as well.

    To celebrate we cut a cake and gave out some Ubuntu flavour stickers, Ubuntu 24.10 Oracular Oriole stickers and UbuCon Asia 2024 stickers, followed by a delicious mix of vegetarian and non-vegetarian pizzas.

    Despite the short duration of just one hour during lunch, the event created a warm and welcoming space for attendees, encapsulating Ubuntu’s philosophy: “Making technology human” and “Linux for human beings.”

    The event was then again followed by GNOME Asia proceedings.

    At night all of us Ubunties, GNOMEies and Debian folks grouped for a Biryani dinner. We first hijacked the Biryani place and then moved on to hijacking another cafe. The best thing was that none of them kicked us out; I seriously believed they would, considering our activities lol. I played Jenga for the first time and we had a lot of jokes which I can't say in public for good reasons.

    At that place, the GNOME CoC wasn't considered haha.

    Day 3

    Day 3 was a social visit; the UbuCon Asia 2025 organising team members conducted our own day trip, exploring the Technology Museum, the beautiful Cubbon Park, and the magnificent Vidhana Soudha of Karnataka.

    I met my friend Aman for the first time since GNOME Asia Malaysia which was Awesome! And I also met my Outreachy mentee in person, which was just beautiful.

    The 3-day event was made extremely joyful due to meeting old friends and colleagues. It reminded me of why we have such events so that we can group the community more than ever and celebrate the very ethos of FOSS.

    As many of us got tired and some had flights, the day trip didn't last long, but it was nice.

    At night I had one of my best coffees ever and tried "Plain Dosa with Mushroom curry" a weird but incredibly tasty combo.

    End

    Special thanks to Canonical for their CDA funding, which made it possible for me to attend in person and handle all arrangements on very short notice. :smiley:

    Looking forward to meeting many of them again at GUADEC or GNOME Asia 2025 :D


      www.aryank.in /posts/2024-12-11-gnome-asia-india-2024/


      Christian Hergert: CLI Command Tree

      news.movim.eu / PlanetGnome · Wednesday, 11 December - 16:58 · 1 minute

    A core tenet of Foundry is a pleasurable command-line experience. And one of the most important creature comforts there is tab-completion.

    But how you go about doing that is pretty different across every shell. In Flatpak , they use a hidden internal command called “complete” which takes a few arguments and then does magic to figure out what you wanted.

    Implementing that when you have one layer of commands is not too difficult even to brute force. But imagine for a second that every command may have sub-commands and it can get much more difficult. Especially if each of those sub-commands have options that must be applied before diving into the next sub-command.

    Such is the case with foundry, because I much prefer foundry config switch over foundry config-switch . Particularly because you may have other commands like foundry config list . It feels much more spatially aware to me.

    There will be a large number of commands implemented over time, so keeping the code at the call-site rather small is necessary. Even more so when the commands could be getting proxied from another process or awaiting futures to complete.

    With all those requirements in mind, I came up with FoundryCliCommandTree . The tree is built as an n-ary tree using GNode where you register a command vtable with the command parts like ["foundry", "config", "switch"] .

    At each layer you can have GOptionEntry like you normally use with GLib-based projects but in this case they will end up in a FoundryCliOptions very similar to what GApplicationClass.local_command_line() does.
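    To illustrate the general shape, here is a sketch under assumed names, not the real FoundryCliCommandTree API: each node’s data is a small vtable, and a command is resolved by walking GNode children that match each argv part. The CliCommand structure and all the helper names below are hypothetical.

    #include <glib.h>

    /* Hypothetical per-node vtable; Foundry's real structure differs. */
    typedef struct
    {
      const char   *name;     /* e.g. "config" or "switch" */
      GOptionEntry *options;  /* options valid at this layer */
      int         (*run) (int argc, char **argv);
    } CliCommand;

    /* Find the child of @parent named @name, creating a stub if missing. */
    static GNode *
    get_child (GNode *parent, const char *name)
    {
      for (GNode *iter = g_node_first_child (parent); iter; iter = g_node_next_sibling (iter))
        if (g_strcmp0 (((CliCommand *)iter->data)->name, name) == 0)
          return iter;

      CliCommand *stub = g_new0 (CliCommand, 1);
      stub->name = name;
      return g_node_append_data (parent, stub);
    }

    /* Register a command vtable under its parts, creating intermediate nodes. */
    static void
    register_command (GNode *root, const char * const *parts, const CliCommand *command)
    {
      GNode *node = root;

      for (guint i = 0; parts[i] != NULL; i++)
        node = get_child (node, parts[i]);

      ((CliCommand *)node->data)->options = command->options;
      ((CliCommand *)node->data)->run = command->run;
    }

    /* Walk as far down the tree as argv allows; the remaining arguments are
     * either options for the resolved command or candidates for completion. */
    static GNode *
    resolve_command (GNode *root, int argc, char **argv, int *consumed)
    {
      GNode *node = root;

      for (*consumed = 0; *consumed < argc; (*consumed)++)
        {
          GNode *next = NULL;

          for (GNode *iter = g_node_first_child (node); iter; iter = g_node_next_sibling (iter))
            if (g_strcmp0 (((CliCommand *)iter->data)->name, argv[*consumed]) == 0)
              {
                next = iter;
                break;
              }

          if (next == NULL)
            break;

          node = next;
        }

      return node;
    }

    /* Usage sketch:
     *   GNode *root = g_node_new (g_new0 (CliCommand, 1));
     *   register_command (root,
     *                     (const char * const[]) { "foundry", "config", "switch", NULL },
     *                     &config_switch_command);
     */

    With something like this, tab completion amounts to listing the names of the children of whatever node resolve_command() stops at.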

    So now foundry has a builtin “complete” command like Flatpak and works fairly similarly, though with the added complexity to support my ideal ergonomics.


      blogs.gnome.org /chergert/2024/12/11/cli-command-tree/