

      Philip Withnall: Tip for debugging refcounting issues: change ref calls to copy

      news.movim.eu / PlanetGnome · Thursday, 23 February, 2023 - 13:21 · 1 minute

    Over the last couple of days I’ve been looking at a refcounting issue in GLib’s D-Bus implementation.

    As with many things in GLib, the allocations in this code use refcounting, rather than new/free, to manage object lifecycles. This makes it quite hard to debug lifecycle issues, because the location of a bug (a ref or unref call which isn’t correct) can be quite far removed (in time and code) from where the effects of that bug become visible. This is because the effects of refcounting problems only become visible when an object’s refcount reaches zero, or when the program ends and its refcount still hasn’t reached zero.

    While debugging this code, I tried an approach I haven’t used before: changing some of the ref calls on the buggy object to be copy calls instead (specifically, changing g_object_ref() to g_dbus_message_copy()). That split up the lifecycle of the object into smaller pieces, narrowing down the sets of ref/unref calls which could be buggy. Ultimately, this allowed me to find some bugs in the code, and hopefully those are the bugs causing the refcounting issue. Since the issue is intermittent, it’s a bit hard to be sure.

    This debugging approach was possible in this case because the object I was debugging is immutable, so passing around copies of it doesn’t affect the behaviour of other bits of code vs passing around the original. Hence this approach is only applicable in some situations. But it’s another good reason why using immutable objects is quite helpful when writing code, and it’s certainly an approach I’m going to be using again when I can.
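    The trick can be illustrated with a self-contained sketch. Here Message is a hypothetical immutable, refcounted type standing in for GDBusMessage, and message_copy() plays the role of g_dbus_message_copy(); none of these names are real GLib API.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical immutable, refcounted message type, standing in for
 * GDBusMessage. */
typedef struct {
  int ref_count;
  char *payload;
} Message;

Message *
message_new (const char *payload)
{
  Message *msg = malloc (sizeof (Message));
  msg->ref_count = 1;
  msg->payload = malloc (strlen (payload) + 1);
  strcpy (msg->payload, payload);
  return msg;
}

Message *
message_ref (Message *msg)
{
  msg->ref_count++;
  return msg;
}

void
message_unref (Message *msg)
{
  if (--msg->ref_count == 0)
    {
      free (msg->payload);
      free (msg);
    }
}

/* The debugging trick: a drop-in replacement for message_ref() which
 * returns an independent copy. Each copy starts with its own refcount
 * of 1, so a missing or extra unref is localized to one short-lived
 * object instead of being smeared across the whole lifecycle. */
Message *
message_copy (Message *msg)
{
  return message_new (msg->payload);
}
```

    Swapping message_ref() call sites for message_copy() is only behaviour-preserving because the object is immutable, which is exactly the condition noted above.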


      This post is public

      tecnocode.co.uk/2023/02/23/tip-for-debugging-refcounting-issues-change-ref-calls-to-copy/


      Emmanuele Bassi: Writing Bindable API, 2023 Edition

      news.movim.eu / PlanetGnome · Monday, 20 February, 2023 - 00:35 · 12 minutes

    First of all, you should go to the gobject-introspection website and read the page on how to write bindable API. What I’m going to write here will build upon what’s already documented, or will update the best practices, so if you maintain a GObject/C library, or you’re writing one, you must be familiar with the basics of gobject-introspection. It’s 2023: it’s already bad enough that we’re still writing C libraries; we should at the very least be responsible about it.

    A specific note for people maintaining an existing GObject/C library with an API designed before the mainstream establishment of gobject-introspection (basically, anything written prior to 2011): you should really consider writing all new types and entry points with gobject-introspection in mind, and you should also consider phasing out older API and replacing it piecemeal with a bindable one. You should have done this 10 years ago, and I can already hear the objections, but: too bad. Just because you made an effort 10 years ago doesn’t mean things are frozen in time, and it doesn’t mean you don’t get to fix things. Maintenance means constantly tending to your code, and that doubly applies if you’re exposing an API to other people.


    Let’s take the “how to write bindable API” recommendations, and elaborate on them a bit.

    Structures with custom memory management

    The recommendation is to use GBoxed as a way to specify a copy and a free function, in order to clearly define the memory management semantics of a type.

    The important caveat is that boxed types are necessary for:

    • opaque types that can only be heap allocated
    • using a type as a GObject property
    • using a type as an argument or return value for a GObject signal

    You don’t need a boxed type for the following cases:

    • your type is an argument or return value for a method, function, or virtual function
    • your type can be placed on the stack, or can be allocated with malloc() / free()

    Additionally, starting with gobject-introspection 1.76, you can specify the copy and free functions of a type without necessarily registering a boxed type, which leaves boxed types for the things they were created for: signals and properties.
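    For instance (MyMatrix and its functions are hypothetical; the annotation names are the ones added in gobject-introspection 1.76):

```c
/**
 * MyMatrix: (copy-func my_matrix_copy) (free-func my_matrix_free)
 *
 * A plain structure with explicit memory management semantics,
 * introspectable without registering a boxed type.
 */
typedef struct _MyMatrix MyMatrix;
```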

    Addendum: object types

    Boxed types should only ever be used for plain old data types; if you need inheritance, then the strong recommendation is to use GObject. You can use GTypeInstance, but only if you know what you’re doing; for more information on that, see my old blog post about typed instances.

    Functionality only accessible through a C macro

    This ought to be fairly uncontroversial. C pre-processor symbols don’t exist at the ABI level, and gobject-introspection is a mechanism to describe a C ABI. Never, ever expose API only through C macros; those are for C developers. C macros can be used to create convenience wrappers, but remember that anything they call must be public API, and that other people will need to re-implement the convenience wrappers themselves, so don’t overdo it. C developers deserve some convenience, but not at the expense of everyone else.

    Addendum: inline functions

    Static inline functions are also not part of the ABI of a library, because they cannot be used with dlsym(); you can provide inline functions for performance reasons, but remember to always provide their non-inlined equivalents.
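    A minimal sketch of that pattern (some_clamp is a hypothetical example, not a real library symbol):

```c
#include <assert.h>

/* Exported, non-inlined symbol: part of the library ABI, resolvable
 * with dlsym() and usable by language bindings. */
int
some_clamp (int value, int lo, int hi)
{
  if (value < lo)
    return lo;
  if (value > hi)
    return hi;
  return value;
}

/* Inline convenience for C callers: not part of the ABI, so it can
 * only ever be an optimisation over the exported entry point. */
static inline int
some_clamp_inline (int value, int lo, int hi)
{
  return some_clamp (value, lo, hi);
}
```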

    Direct C structure access for objects

    Again, another fairly uncontroversial rule. You shouldn’t be putting anything into an instance structure, as doing so makes your API harder to future-proof, and direct access cannot do things like change notification or memoization.

    Always provide accessor functions.
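    As a sketch (Person and its functions are hypothetical, plain C rather than GObject, just to show the shape of the API): the struct layout stays private, and the setter is a single place where notification or memoization can later be added.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* In a real library the struct definition would live in the .c file,
 * so consumers could only go through the accessors. */
typedef struct _Person Person;

struct _Person {
  char *name;
};

static char *
copy_string (const char *str)
{
  char *res = malloc (strlen (str) + 1);
  strcpy (res, str);
  return res;
}

Person *
person_new (const char *name)
{
  Person *p = malloc (sizeof (Person));
  p->name = copy_string (name);
  return p;
}

const char *
person_get_name (Person *p)
{
  return p->name;
}

void
person_set_name (Person *p, const char *name)
{
  /* A setter can short-circuit, emit change notification, or
   * memoize; direct field access can do none of these. */
  if (strcmp (p->name, name) == 0)
    return;
  free (p->name);
  p->name = copy_string (name);
}

void
person_free (Person *p)
{
  free (p->name);
  free (p);
}
```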

    va_list

    Variadic argument functions are mainly a C convenience. Yes, some languages can support them, but it’s a bad idea to have this kind of API exposed as the only way to do things.

    Any variadic argument function should have two additional variants:

    • a vector based version, using C arrays (zero terminated, or with an explicit length)
    • a va_list version, to be used when creating wrappers with variadic arguments themselves

    The va_list variant is somewhat optional, since not many people write variadic argument C wrappers these days; but at the end of the day you might end up writing an internal function that takes a va_list anyway, so it’s not particularly strange to expose it as part of your public API.

    The vector-based variant, on the other hand, is fundamental.

    Incidentally, if you’re using variadic arguments as a way to collect similarly typed values, e.g.:

    // void
    // some_object_method (SomeObject *self,
    //                     ...) G_GNUC_NULL_TERMINATED
    
    some_object_method (obj, "foo", "bar", "baz", NULL);
    

    there’s very little difference compared to using a vector and C99’s compound literals:

    // void
    // some_object_method (SomeObject *self,
    //                     const char *args[])
    
    some_object_method (obj, (const char *[]) {
                          "foo",
                          "bar",
                          "baz",
                          NULL,
                        });
    

    Except that now the compiler will be able to do some basic type checking and scream at you if you’re doing something egregiously bad.

    Compound literals and designated initialisers also help when dealing with key/value pairs:

    typedef struct {
      int column;
      union {
        const char *v_str;
        int v_int;
      } value;
    } ColumnValue;
    
    enum {
      COLUMN_NAME,
      COLUMN_AGE,
      N_COLUMNS
    };
    
    // void
    // some_object_method (SomeObject *self,
    //                     size_t n_columns,
    //                     const ColumnValue values[])
    
    some_object_method (obj, 2,
      (ColumnValue []) {
        { .column = COLUMN_NAME, .value = { .v_str = "Emmanuele" } },
        { .column = COLUMN_AGE, .value = { .v_int = 42 } },
      });
    

    So you should seriously reconsider the number of variadic argument convenience functions you expose.

    Multiple out parameters

    Using a structured type with an out direction is a good recommendation as a way to both limit the number of out arguments and provide some future-proofing for your API. It’s easy to expand an opaque pointer type with accessors, whereas adding more out arguments requires an ABI break.
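    A sketch of the idea, using hypothetical names: instead of several out arguments, return one result type with accessors, so new fields can be added later without an ABI break (in a real library the struct would be opaque, defined only in the .c file).

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
  int width;
  int height;
} MeasureResult;

/* Instead of:
 *   void widget_measure (int for_size, int *width, int *height);
 * return a single structured result. Adding a "baseline" field later
 * only requires a new accessor, not a signature change. */
MeasureResult *
widget_measure (int for_size)
{
  MeasureResult *res = malloc (sizeof (MeasureResult));
  res->width = for_size;
  res->height = for_size / 2;
  return res;
}

int
measure_result_get_width (const MeasureResult *res)
{
  return res->width;
}

int
measure_result_get_height (const MeasureResult *res)
{
  return res->height;
}

void
measure_result_free (MeasureResult *res)
{
  free (res);
}
```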

    Addendum: inout arguments

    Don’t use in-out arguments. Just don’t.

    Pass an in argument to the callable for its input, and take an out argument or a return value for the output.

    Memory management and ownership of inout arguments is incredibly hard to capture with static annotations; it mainly works for scalar values, so:

    void
    some_object_update_matrix (SomeObject *self,
                               double *xx,
                               double *yy,
                               double *xy,
                               double *yx)
    

    can work with xx, yy, xy, yx as inout arguments, because there’s no ownership transfer; but as soon as you start throwing in things like pointers to structures, or vectors of strings, you open yourself up to questions like:

    • who allocates the argument when it goes in?
    • who is responsible for freeing the argument when it comes out?
    • what happens if the function frees the argument in the in direction and then re-allocates the out?
    • what happens if the function uses a different allocator than the one used by the caller?
    • what happens if the function has to allocate more memory?
    • what happens if the function modifies the argument and frees memory?

    Even if gobject-introspection nailed down the rules, they could not be enforced, or validated, and could lead to leaks or, worse, crashes.

    So, once again: don’t use inout arguments. If your API already exposes inout arguments, especially for non-scalar types, consider deprecating them and adding new entry points.

    Addendum: GValue

    Sadly, GValue is one of the most notable cases of inout abuse. The oldest parts of the GNOME stack use GValue in a way that requires inout annotations because they expect the caller to:

    • initialise a GValue with the desired type
    • pass the address of the value
    • let the function fill the value

    The caller is then left with calling g_value_unset() in order to free the resources associated with a GValue. This means that you’re passing an initialised value to a callable; the callable will do something to it (which may or may not even entail re-allocating the value), and then you’re going to get it back at the same address.

    It would be a lot easier if the API left the job of initialising the GValue to the callee; then functions could annotate the GValue argument with out and caller-allocates=1. This would leave the ownership to the caller, and remove a whole lot of uncertainty.

    Various newer (comparatively speaking) APIs allow the caller to pass an uninitialised GValue, and will leave initialisation to the callee, which is how it should be, but this kind of change isn’t always possible in a backward compatible way.

    Arrays

    You can use three types of C arrays in your API :

    • zero-terminated arrays, which are the easiest to use, especially for pointers and strings
    • fixed-size arrays
    • arrays with length arguments
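    The three cases map to the following introspection annotations (the function names here are hypothetical, shown only to illustrate the annotation syntax):

```c
/**
 * some_object_get_names:
 * Returns: (array zero-terminated=1) (transfer full): a NULL-terminated array
 */
char **some_object_get_names (SomeObject *self);

/**
 * some_object_get_color:
 * @rgba: (array fixed-size=4): the color components
 */
void some_object_get_color (SomeObject *self, double rgba[4]);

/**
 * some_object_set_values:
 * @values: (array length=n_values): the values to set
 * @n_values: the number of values
 */
void some_object_set_values (SomeObject *self, const int *values, size_t n_values);
```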

    Addendum: strings and byte arrays

    A const char* argument for C strings with a length argument is not an array:

    /**
     * some_object_load_data:
     * @self: ...
     * @str: the data to load
     * @len: length of @str in bytes, or -1
     *
     * ...
     */
    void
    some_object_load_data (SomeObject *self,
                           const char *str,
                           ssize_t len)
    

    Never annotate the str argument with array length=len. Ideally, this kind of function should not exist in the first place. You should always use const char* for NUL-terminated strings, possibly UTF-8 encoded; if you allow embedded NUL characters then use a byte array:

    /**
     * some_object_load_data:
     * @self: ...
     * @data: (array length=len) (element-type uint8): the data to load
     * @len: the length of the data in bytes
     *
     * ...
     */
    void
    some_object_load_data (SomeObject *self,
                           const unsigned char *data,
                           size_t len)
    

    Instead of unsigned char you can also use uint8_t , just to drive the point home.

    Yes, it’s slightly nicer to have a single entry point for strings and byte arrays, but that’s just a C convenience: decent languages will have a proper string type, which always comes with a length; and string types are not binary data.

    Addendum: GArray , GPtrArray , GByteArray

    Whatever you do, however low you feel on the day, whatever particular tragedy befell your family at some point, please: never use GLib array types in your API . Nothing good will ever come of it, and you’ll just spend your days regretting this choice.

    Yes: gobject-introspection transparently converts between GLib array types and C types, to the point of allowing you to annotate the contents of the array. The problem is that that information is static, and only exists at the introspection level. There’s nothing that prevents you from putting other random data into a GPtrArray, as long as it’s pointer-sized. There’s nothing that prevents one version of a library from saying that you own the data inside a GArray, and the next version from assigning a clear function to the array to avoid leaking it all over the place on error conditions, or when using g_autoptr.

    Adding support for GLib array types in the introspection was a well-intentioned mistake that worked in very specific cases—for instance, in a library that is private to an application. Any well-behaved, well-designed general purpose library should not expose this kind of API to its consumers.

    You should use GArray , GPtrArray , and GByteArray internally; they are good types, and remove a lot of the pain of dealing with C arrays. Those types should never be exposed at the API boundary: always convert them to C arrays, or wrap them into your own data types, with proper argument validation and ownership rules.
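    A sketch of the pattern, assuming GLib (SomeObject and some_object_list_names are hypothetical): build the result in a GPtrArray internally, but hand out a plain NULL-terminated C array at the boundary.

```c
char **
some_object_list_names (SomeObject *self)
{
  GPtrArray *names = g_ptr_array_new ();

  /* ... add g_strdup()ed strings for each name ... */

  g_ptr_array_add (names, NULL);

  /* Passing FALSE keeps the data segment alive and returns it; the
   * GPtrArray wrapper itself never crosses the API boundary. */
  return (char **) g_ptr_array_free (names, FALSE);
}
```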

    Addendum: GHashTable

    What’s worse than a type that contains data with unclear ownership rules decided at run time? A type that contains twice the amount of data with unclear ownership rules decided at run time.

    Just like the GLib array types, hash tables should be used but never directly exposed to consumers of an API .

    Addendum: GList , GSList , GQueue

    See above, re: pain and misery. On top of that, linked lists are a terrible data type that people should rarely, if ever, use in the first place.

    Callbacks

    Your callbacks should always be in the form of a simple callable with a data argument:

    typedef void (* SomeCallback) (SomeObject *obj,
                                   gpointer data);
    

    Any function that takes a callback should also take a “user data” argument that will be passed as is to the callback:

    // scope: call; the callback data is valid until the
    // function returns
    void
    some_object_do_stuff_immediately (SomeObject *self,
                                      SomeCallback callback,
                                      gpointer data);
    
    // scope: notify; the callback data is valid until the
    // notify function gets called
    void
    some_object_do_stuff_with_a_delay (SomeObject *self,
                                       SomeCallback callback,
                                       gpointer data,
                                       GDestroyNotify notify);
    
    // scope: async; the callback data is valid until the async
    // callback is called
    void
    some_object_do_stuff_but_async (SomeObject *self,
                                    GCancellable *cancellable,
                                    GAsyncReadyCallback callback,
                                    gpointer data);
    
    // not pictured here: scope forever; the data is valid for
    // the entirety of the process lifetime
    

    If your function takes more than one callback argument, you should make sure that it also takes different user data for each callback, and that the lifetimes of the callbacks are well defined. The alternative is to use GClosure instead of a simple C function pointer—but that comes at the cost of GValue marshalling, so the recommendation is to stick with one callback per function.

    Addendum: the closure annotation

    It seems that many people are unclear about the closure annotation.

    Whenever you’re describing a function that takes a callback, you should always annotate the callback argument with the argument that contains the user data using the (closure argument) annotation, e.g.

    /**
     * some_object_do_stuff_immediately:
     * @self: ...
     * @callback: (scope call) (closure data): the callback
     * @data: the data to be passed to the @callback
     *
     * ...
     */
    

    You should not annotate the data argument with a unary (closure) .

    The unary (closure) is meant to be used when annotating the callback type :

    /**
     * SomeCallback:
     * @self: ...
     * @data: (closure): ...
     *
     * ...
     */
    typedef void (* SomeCallback) (SomeObject *self,
                                   gpointer data);
    

    Yes, it’s confusing, I know.

    Sadly, the introspection parser isn’t very clear about this, but in the future it will emit a warning if it finds a unary closure on anything that isn’t a callback type.

    Ideally, you don’t really need to annotate anything when you call your argument user_data , but it does not hurt to be explicit.


    A cleaned up version of this blog post will go up on the gobject-introspection website, and we should really have a proper set of best API design practices on the Developer Documentation website by now; nevertheless, I do hope people will actually follow these recommendations at some point, and that they will be prepared for new recommendations in the future. Only dead and unmaintained projects don’t change, after all, and I expect the GNOME stack to last a bit longer than the 25 years it already spans today.


      This post is public

      www.bassi.io/articles/2023/02/20/bindable-api-2023/


      Alberto Ruiz: Dilemma’s in Rust Land: porting a GNOME library to Rust

      news.movim.eu / PlanetGnome · Sunday, 19 February, 2023 - 17:31 · 2 minutes

    It has been a while since my last post, so I figured I’d just pick up a topic that has been on my mind lately.

    After I ported the RSVG Pixbuf Loader to Rust (although I gave up the meson-fu to Federico and Bilal) I decided that maybe I should give a try at porting the WebP Pixbuf Loader .

    webp-pixbuf-loader is probably the only FOSS project I have started on my own that I have managed to maintain in the long run without orphaning or giving it up to someone else. I wrote it out of curiosity and it turns out plenty of people find it rather useful as webp is pretty popular for storing comic book backups and other media.

    The WebP Pixbuf Loader is relatively small, although ever since animation support was contributed it has grown quite a bit. I’ve been handling a couple of issues ranging from endianness to memory leaks, so I thought it was probably worthwhile to give it some Rusty love.

    Porting the static image support was relatively quick, but it took me a while to understand how animation works in GDK-Pixbuf, as the original animation support in C was contributed by alanhaw.

    I suspect I am probably the first person to use the GdkPixbufLoader APIs to implement a new pixbuf loader; I had to request a few fixes upstream, kudos to Sebastian Dröge and Bilal for handling those swiftly and shipping them in the last release.

    Anyhow, last month I finally made it all work:

    Hesitations

    Now comes the hesitation part. Leaving aside the work of integrating the existing tests into the build system (Cargo is great at embedded unit testing, but meson is better at running an integration test plan), my main gripe is that it turns out quite a few people are packaging this, not just for Linux distros but also BSDs, Illumos, Windows and brew/macOS…

    I really don’t know what the impact would be for anyone packaging outside of the Linux world, I have a few CI pipelines for Windows but obviously I am about to break them if I switch.

    I am pondering the idea of releasing a bunch of beta releases and hoping package maintainers will start taking notice that I’m moving on, but I am trying to be mindful of how much time they need to sink into the Rust move and weigh that against the net benefit.

    The other part that makes me hesitate over flipping the switch is measuring the overall benefit. Sure, Rust is nicer to maintain, but it is still a small codebase, Rust adds a bunch of runtime dependencies (bindings), and it is not clear to me how healthy the webp bindings are going to be long term: there are two similarly named bindings; one has more frequent releases and the other is more complete, which is annoying. These bindings also bring an issue of size: the Rust port is 4MB in size vs. 76KB for the C implementation.

    Not sure what to do, feel free to add your thoughts in the comment section.


      Jussi Pakkanen: PDF output in images

      news.movim.eu / PlanetGnome · Friday, 17 February, 2023 - 17:58 · 2 minutes

    Generating PDF files is mostly (but not entirely) a serialization problem where you keep repeating the following loop:

    • Find out what functionality PDF has
    • Read the specification to find out how it is expressed using PDF document syntax
    • Come up with some sort of an API to express same
    • Serialize the latter into the former
    • Debug

    This means that you have to spend a fair bit of time without much to show for it apart from documents with various black boxes in them. However once you have enough foundational code, then suddenly you can generate all sorts of fun images. Let's look at some now.

    Paths are easy to define with lines, beziers and the like, as are path paint styles like line caps and joints. Choosing between nonzero and even-odd winding rules is just a question of choosing a different paint operator.
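    As a sketch, in raw PDF content-stream syntax the choice really is a single operator: f fills using the nonzero winding rule, f* using even-odd (W vs. W* make the same distinction for clipping paths).

```
% same triangular path, two different fill operators
0 0 m  100 0 l  50 80 l  h  f    % nonzero winding rule
0 0 m  100 0 l  50 80 l  h  f*   % even-odd rule
```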

    PDF allows you to set any draw object as a "clipping path" which behaves like a stencil. Subsequent drawing operations are only applied to those pixels that are inside the specified clipping area. The painting model is uniform, text and paths are mostly interchangeable so text can be used as a clipping path. The gradient is a PNG image, not a vector object.

    This color wheel looks fairly average, but it is defined in L*a*b* color space . Did you know that PDF has native support for L*a*b* colors without needing any ICC profiles? I sure didn't until I read the spec.

    And finally here are some shadings and patterns. The first two are your standard linear and spherical gradients, but the latter two are more interesting. In PDF you can specify a pattern, which is basically just a rectangular area. You can draw on it with almost all the same operators as on a page (you can't use patterns within patterns, though). You can then use said pattern to paint other objects and the PDF renderer will fill the space by tiling the pattern (yes, of course there is a transformation matrix you can specify). As text is not special you can draw a single character and fill it with a repeated instance of a different character.

    Using it in Python

    The code needed to generate an empty PDF document looks approximately like this:

    import a4pdf
    o = a4pdf.Options()
    g = a4pdf.Generator('out.pdf', o)
    with g.page_draw_context() as ctx:
        # Drawing commands would go here.
        pass

    This snippet utilizes almost 100% of the API available thus far, so there's not much you can do with it yet.


      Jiri Eischmann: How is Linux used by FIT BUT students

      news.movim.eu / PlanetGnome · Friday, 17 February, 2023 - 13:54 · 2 minutes

    The Faculty of Information Technology of Brno University of Technology is one of the two top computer science schools in Brno, Czech Republic. Our development office of Red Hat has intensive cooperation with them including educating students about Linux and open source software. To find out more about how they use Linux, we ran a survey that collected answers from 176 students which is a pretty good sample. I promised to share results publicly, so here they are:

    The following chart shows the distribution of responders by year of school. The survey was primarily targeting students in the first year which is why they make up over 50% of the responses.

    The following chart shows how many students had experience with a Linux distribution prior to their studies at the university. 46% did, which shows pretty good exposure to Linux at high schools.

    And now, what desktop OS do students use primarily: Windows is dominating, but Linux is used as a primary OS by roughly one third of students. macOS is only at 10%. Although we gave responders an option to specify other OSes, no one did; there was, for example, no mention of BSD.

    The following chart shows in what form students use Linux primarily (as either a primary or secondary OS). 44% of students have it installed on their desktop/laptop. 31% use Windows Subsystem for Linux. School programming assignments have to run on Linux, so if they want to stick with Windows, WSL is the easiest way for them. Virtualization is at 9% and remote server at 13% (I suspect it’s mostly uni servers where students can test their assignments before submission).

    And here come shares of Linux distributions. Responders could pick multiple options, so the total is over 100%. Basically the only relevant distributions among FIT BUT students are Ubuntu, Fedora, Arch Linux and Debian.

    Ubuntu has a clear lead. It’s the default option for WSL, where it is on the vast majority of installations, so I wondered what the share would be without WSL.

    Without WSL the gap between Ubuntu and the rest of the pack is smaller. And since I’m from the Red Hat desktop team, I also wondered what the shares are among students who indicated they use Linux primarily on a desktop/laptop.

    When it comes to desktop computers and laptops, the shares of Fedora and Ubuntu are almost the same. That shows two things: 1. Fedora is strong on the desktop among local students, 2. being the default option in WSL gives Ubuntu an advantage in mindshare. Fedora is not even officially available for WSL, but even if it were, it probably wouldn’t change much, because other distros are available in the Microsoft Store and only one student out of the 50+ who primarily use WSL responded that they use something other than Ubuntu. WSL is probably used by users who want some Linux in their Windows and don’t care much which one it is, so they stay with the default.

    We also asked students what prevents them from using Linux primarily. By far the most frequent answer (80%) was “Software I use is not available for Linux”, followed by “I don’t like the UX and logic of the OS” (28%) and “Compatibility with my hardware” (11%). Some students also responded that they simply hadn’t had enough time to get familiar with Linux and are staying with what they know. Other reasons were marginal.


      This post is public

      eischmann.wordpress.com/2023/02/17/how-is-linux-used-by-fit-but-students/


      Sam Thursfield: Status update, 17/02/2023

      news.movim.eu / PlanetGnome · Friday, 17 February, 2023 - 10:09 · 2 minutes

    This month I attended FOSDEM for the first time since 2017. In addition to eating 4 delicious waffles, I had the honour of presenting two talks, the first in the Testing & Automation devroom on Setting up OpenQA testing for GNOME .

    GNOME’s initial OpenQA testing is mostly implemented now and it’s already found its first real bug . The next step is getting more folk interested within GNOME, so we can ensure ongoing maintenance of the tests and infra, and ensure a bus factor of > 1. If you see me at GUADEC then I will probably talk to you about OpenQA, be prepared!! 🙂

    My second talk was in the Python devroom, on DIY music recommendations . I intermittently develop a set of playlist generation tools named Calliope , and this talk was mostly aiming to inspire people to start similar fun & small projects, using simple AI techniques that you can learn in a weekend, and taking advantage of the amazing resource that is Musicbrainz. It seemed to indeed inspire some of the audience and led to an interesting chat with Rob Kaye of the Metabrainz Foundation – there is more cool stuff on the way from them.

    Here’s a fantastic sketch of the talk by Jeroen Heijmans :

    Talk summary sketch, CC BY-SA 4.0

    I didn’t link to this in the talk, but apropos of nothing here’s an interesting video entitled Why Spotify Will Eventually Fail .

    On the Saturday I met up with Carlos Garnacho and gatecrashed the GNOME docs hackfest , discussing various improvements around search in GNOME. Most of these are now waiting for developer time as they are too large to be done in occasional moments of evening and weekend downtime, get in touch if you want to find out more!

    I must also shout out Marco Trevisan for showing me where to get a decent meal near Madrid Chamartín station on the way home.

    Meanwhile at Codethink I have been getting more involved in marketing. It’s a company that exists in two worlds, commercial software services on one side and community-driven open source software on the other, often trying our best to build bridges between the two. There aren’t many marketing graduates who are experts in open source, nor many experienced software developers who want to work full-time on managing social media, so we are still figuring out the details…

    Anyway, the initial outcome is that Codethink is now on the Fediverse – follow us here! @codethink@social.codethink.co.uk


      Philippe Normand: WebRTC in WebKitGTK and WPE, status updates, part I

      news.movim.eu / PlanetGnome · Thursday, 16 February, 2023 - 20:30 · 5 minutes

    Some time ago, we at Igalia embarked on the journey to ship a GStreamer-powered WebRTC backend. This is a long journey and it is not over, but we have made some progress. This post is the first of a series providing some insights into the challenges we are facing and the plans for the next release cycle(s).

    Most web engines nowadays bundle a version of LibWebRTC; it is indeed a pragmatic approach. WebRTC is a huge spec, spanning many protocols, RFCs and codecs. LibWebRTC is in fact a multimedia framework of its own, and it’s a very big code-base. I still remember the surprised face of Emilio at the 2022 WebEngines conference when I told him we had unusual plans regarding WebRTC support in the GStreamer WebKit ports. There are several reasons for this plan, explained in the WPE FAQ. We worked on a LibWebRTC backend for the WebKit GStreamer ports, and my colleague Thibault Saunier blogged about it, but unfortunately this backend has remained disabled by default and is not shipped in the tarballs, for the reasons explained in the WPE FAQ.

    The GStreamer project nowadays provides a library and a plugin allowing applications to interact with third-party WebRTC actors. This is in my opinion a paradigm shift, because it enables new ways of interoperability between the so-called Web and traditional native applications. Since the GstWebRTC announcement back in late 2017 I’ve been experimenting with the idea of shipping an alternative to LibWebRTC in WebKitGTK and WPE. The initial GstWebRTC WebKit backend was merged upstream on March 18, 2022.

    As you might already know, before any audio/video call your browser may ask for permission to access your webcam and microphone, and during the call you can even share your screen. From the WebKitGTK/ WPE perspective the procedure is of course the same. Let's dive in.

    WebCam/Microphone capture

    Back in 2018, for the LibWebRTC backend, Thibault added support for GStreamer-powered media capture to WebKit, meaning that capture devices such as microphones and webcams are accessible from WebKit applications through the getUserMedia spec. Under the hood, a GStreamer source element is created using the GstDevice API . This implementation is now re-used for the GstWebRTC backend; it works fine and still has room for improvement, but that's a topic for a follow-up post.

    A MediaStream can be rendered in an <audio> or <video> element through a custom GStreamer source element that we also provide in WebKit. This is all wired up internally, so that the following JS code makes the WebView natively capture and render a webcam device using GStreamer:

    <html>
      <head>
        <script>
        navigator.mediaDevices.getUserMedia({video: true, audio: false }).then((mediaStream) => {
            const video = document.querySelector('video');
            video.srcObject = mediaStream;
            video.onloadedmetadata = () => {
                video.play();
            };
        });
        </script>
      </head>
      <body>
        <video></video>
      </body>
    </html>
    

    When this web page is rendered, and after the user has granted access to the capture devices, the GStreamer backends will create not one but two pipelines.

    flowchart LR
        pipewiresrc-->videoscale-->videoconvert-->videorate-->valve-->appsink
    
    flowchart LR
        subgraph mediastreamsrc
        appsrc-->srcghost[src]
        end
        subgraph playbin3
           subgraph decodebin3
           end
           subgraph webkitglsink
           end
           decodebin3-->webkitglsink
        end
        srcghost-->decodebin3
    

    The first pipeline routes video frames from the capture device, using pipewiresrc , to an appsink . From the appsink , our capturer, which leverages the Observer design pattern, notifies its observers. In this case there is only one observer: a GStreamer source element internal to WebKit called mediastreamsrc . The playback pipeline shown above is heavily simplified; in reality more elements are involved. What matters most is that, thanks to the flexibility of GStreamer, we can leverage the existing MediaPlayer backend that we at Igalia have been maintaining for more than 10 years to render MediaStreams. All we needed was a custom source element; the rest of our MediaPlayer didn't need many changes to support this use-case.

    One notable change since the initial implementation, though, is that for us a MediaStream can be raw, encoded, or even encapsulated in an RTP payload. So, depending on which component is going to render the MediaStream, we have enough flexibility to allow zero-copy in most scenarios. In the example above the stream will typically be raw from source to renderer. However, some webcams can provide encoded streams. WPE and WebKitGTK will be able to leverage these internally and, in some cases, allow direct streaming from the hardware device to the outgoing PeerConnection without third-party encoding.

    Desktop capture

    There is another JS API for capturing your screen or a window, called getDisplayMedia , and yes, we support it too! Thanks to the ground-breaking progress of the Linux desktop in recent years, such as PipeWire and xdg-desktop-portal , we can now stream your favorite desktop environment over WebRTC. Under the hood, when the WebView is granted access to desktop capture through the portal, our backend creates a pipewiresrc GStreamer element configured to source from the file descriptor provided by the portal, and we have a healthy raw video stream.
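    From the page's point of view, getDisplayMedia is used much like getUserMedia. A minimal sketch (the function name and the constraints shown are illustrative, not from the post):

```javascript
// Minimal getDisplayMedia sketch. The portal/permission dialog is
// handled by the browser; the page only receives the resulting
// MediaStream once the user has picked a screen or window.
async function shareScreen(videoElement) {
    const stream = await navigator.mediaDevices.getDisplayMedia({
        video: true,
        audio: false,
    });
    videoElement.srcObject = stream;
    await videoElement.play();
    return stream;
}
```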

    Here’s a demo :

    WebAudio capture

    What's more, yes, you can also create a MediaStream from a WebAudio node . On the backend side, the GStreamerMediaStreamAudioSource fills GstBuffers from the audio bus channels and notifies the parties internally observing the MediaStream, such as outgoing media sources, or simply an <audio> element configured to source from the given MediaStream. I have no demo for this; you'll have to take my word for it.
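    For illustration, on the JS side a MediaStream can be obtained from a WebAudio graph through the standard MediaStreamAudioDestinationNode. A minimal sketch (the function name is hypothetical):

```javascript
// Illustrative sketch: route a WebAudio oscillator into a MediaStream
// via a MediaStreamAudioDestinationNode.
function oscillatorStream(audioContext) {
    const osc = audioContext.createOscillator();
    const dest = audioContext.createMediaStreamDestination();
    osc.connect(dest);
    osc.start();
    // dest.stream is a MediaStream, usable e.g. as audio.srcObject.
    return dest.stream;
}
```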

    Canvas capture

    But wait, there is more. Did I hear canvas? Yes, we can feed your favorite <canvas> into a MediaStream. The JS API is called captureStream ; its code is actually cross-platform but defers to the HTMLCanvasElement::toVideoFrame() method, which has a GStreamer implementation. The code is not the most optimal yet, though, due to shortcomings in our current graphics pipeline implementation. Here is a demo of canvas to WebRTC running in the WebKitGTK MiniBrowser:
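    The page-side part of such a demo is small. A minimal sketch using the standard captureStream API (the function name and frame rate here are arbitrary):

```javascript
// Illustrative sketch: draw into a 2D canvas and expose it as a
// MediaStream with the standard captureStream() API.
function captureCanvas(canvas) {
    const ctx = canvas.getContext('2d');
    ctx.fillStyle = 'rebeccapurple';
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    return canvas.captureStream(30); // 30 fps MediaStream of the canvas
}
```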

    Wrap-up

    So we've got MediaStream support covered. This is only one part of the puzzle, though. We are now facing challenges in the PeerConnection implementation. MediaStreams are cool, but it's even better when you can share them with your friends on fancy A/V conferencing websites, and we're not entirely ready for that yet in WebKitGTK and WPE . For this reason, WebRTC is not yet enabled by default in the upcoming WebKitGTK and WPE 2.40 releases. We're just not there yet. In the next part of this series I'll tackle the PeerConnection backend, which we're working hard on these days, both in WebKit and in GStreamer.

    Happy hacking, and as always all my gratitude goes to my fellow Igalia comrades for allowing me to keep working in these domains, and to Metrological for funding some of this work. Is your organization or company interested in leveraging modern WebRTC APIs from WebKitGTK and/or WPE ? If so, please get in touch with us to help us speed up the implementation work.

    • wifi_tethering open_in_new

      This post is public

      base-art.net /Articles/webrtc-in-webkitgtk-and-wpe-status-updates-part-i/

    • chevron_right

      Jussi Pakkanen: Plain C API design, the real world Kobayashi Maru test

      news.movim.eu / PlanetGnome · Monday, 13 February, 2023 - 19:31 · 4 minutes

    Designing APIs is hard. Designing good APIs that future people will not instantly classify as "total crap" is even harder. There are typically many competing requirements such as:

    • API stability
    • ABI stability (if you are into that sort of thing, some are not)
    • Maximize the amount of functionality supported
    • Minimize the number of functions exposed
    • Make the API as easy as possible to use
    • Make the API as difficult as possible to use incorrectly (preferably it should be impossible)
    • Make the API as easy as possible to use from scripting languages

    Recently I have been trying to create a proper API for PDF generation so let's use that as an example.

    Cairo, simple but limited

    The API that Cairo exposes is on the whole pretty good. It has a fair number of functions, but only one main "painter", the Cairo context . Cairo is a general drawing library with many backends, but its drawing commands map very closely to the ones in PDF. This is probably because Cairo's drawing model is patterned after PostScript, which is almost the same as PDF. Having only one context type means that users do not have to manually keep track of lifetimes between different object types, which is a source of many C bugs.

    This approach works nicely with Cairo but not so well if you want to expose the full functionality of PDF directly, specifically patterns . In PDF you can specify a "pattern object". The basic use case is drawing a repeating shape, like a brick wall, by specifying how to draw a single tile and then telling the PDF interpreter to "fill in" the area you specify with this pattern. (Cairo also has pattern support, which behaves mostly the same but is ideologically slightly different. We'll ignore it for the rest of this text.)

    When defining a pattern you can use almost, but not exactly, the same drawing commands as when doing regular painting on page surfaces. There are also at least two different pattern types with slightly varying semantics. Since we want to expose the PDF functionality directly, we need one function for each command, like pdf_draw_cmd_l(ctx, x, y) to draw a line. The question then becomes how to expose all this as types and functions.

    Keep everything in a single object

    The simplest thing, object-wise, would be to keep everything in a single god object and have functions like pdf_draw_page_cmd_l , pdf_draw_pattern1_cmd_l and pdf_draw_pattern2_cmd_l . This is a terrible API, because everything is smooshed together and you need to remember to finish patterns before using them. Don't do this.

    Fully separate object types

    Another approach is to make each concept its own separate type. Then you can have functions like pdf_page_cmd_l(page, x, y) , pdf_pattern_cmd_l(pattern, x, y) and so on. This also makes it easy to prevent using commands that are not supported: if, say, a command called bob is not supported on patterns, then all you have to do is not implement the corresponding function pdf_pattern_cmd_bob .

    The big downside is that PDF has a lot of drawing commands, and in this approach almost all of them need to be defined three times, once for each context type. Their implementations are identical, so they all need to call a fourth function, or the code needs to be triplicated.
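    A sketch of that layout (all names here are hypothetical, not a real PDF library): each thin per-type wrapper forwards to one shared static helper, so the actual implementation exists only once.

```c
#include <assert.h>

/* Sketch of the "fully separate object types" layout.
 * All names are hypothetical. */
typedef struct { double x, y; } draw_state;
typedef struct { draw_state s; } pdf_page;
typedef struct { draw_state s; } pdf_pattern;

/* The shared "fourth function": the single real implementation. */
static void draw_cmd_l(draw_state *s, double x, double y)
{
    s->x = x;
    s->y = y;
}

/* Thin wrappers, duplicated once per context type. */
void pdf_page_cmd_l(pdf_page *p, double x, double y)
{
    draw_cmd_l(&p->s, x, y);
}

void pdf_pattern_cmd_l(pdf_pattern *p, double x, double y)
{
    draw_cmd_l(&p->s, x, y);
}

/* 'bob' exists only for pages; pdf_pattern_cmd_bob is simply never
 * declared, so misuse fails at compile time rather than at run time. */
void pdf_page_cmd_bob(pdf_page *p)
{
    (void)p; /* placeholder body */
}
```

The wrappers cost a few lines each, but every command really does appear once per context type in the public API, which is exactly the duplication the text complains about.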

    A common context class

    One approach is to abstract this into a PaintContext class that internally knows whether it is used for page or pattern painting. This reduces the number of functions back to one: pdf_ctx_cmd_l(ctx, x, y) . The main downside is that it is now possible to accidentally call a function that requires a page drawing context with a pattern drawing context, and the type system will not stop you.

    A second problem is that you can call the aforementioned bob command with a pattern context. The library needs to detect that and return an error code when it happens. This means that a bunch of functions that previously could not fail can now return error codes. For consistency you might want to change all paint commands to return error codes instead, but then >90% of them never return anything except success.
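    A minimal sketch of this trade-off, again with hypothetical names: the single context carries a kind tag, page-only commands check it at run time, and even commands that can never fail now return an error code for consistency.

```c
#include <assert.h>

/* Sketch of the single-context approach with a run-time kind tag.
 * All names are hypothetical. */
typedef enum { PDF_CTX_PAGE, PDF_CTX_PATTERN } pdf_ctx_kind;
typedef enum { PDF_OK, PDF_ERR_UNSUPPORTED } pdf_error;

typedef struct {
    pdf_ctx_kind kind; /* what this context is actually painting on */
    double x, y;
} pdf_ctx;

/* 'l' is valid on every context, yet for consistency it now returns
 * an error code it can never actually produce. */
pdf_error pdf_ctx_cmd_l(pdf_ctx *c, double x, double y)
{
    c->x = x;
    c->y = y;
    return PDF_OK;
}

/* 'bob' is page-only; the type system cannot stop a caller from
 * passing a pattern context, so the check happens at run time. */
pdf_error pdf_ctx_cmd_bob(pdf_ctx *c)
{
    if (c->kind != PDF_CTX_PAGE)
        return PDF_ERR_UNSUPPORTED;
    return PDF_OK;
}
```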

    A common base class

    The "object oriented" way of doing this would be to have a common base class for the painting functionality and then inherit from it. In this approach, functions that can take any context would have names like pdf_ctx_cmd_l(ctx, x, y) , whereas functions that can't would get specializations like pdf_page_cmd_bob . Since C does not have any OO functionality, this would need to be reimplemented from scratch, probably using some GObject-style preprocessor macro hackery like pdf_ctx_cmd_l(PDF_CTX(page), x, y) or, alternatively, pdf_ctx_cmd_l(pdf_page_get_ctx(page), x, y) . This works, but it means a lot of typing for end users, and macros are type-unsafe even by C standards. If you use the wrong type, woe is you. Macros also make providing wrappers harder, because they require you to always compile some glue code rather than using something simple like Python's ctypes .
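    A hedged sketch of the embedded-base-struct idiom with a cast macro (names are hypothetical; note that real GObject cast macros also do run-time type checking, which this toy PDF_CTX() omits):

```c
#include <assert.h>

/* Sketch of C "inheritance": the base context is embedded as the
 * first member of each derived type. Names are hypothetical. */
typedef struct { double x, y; } pdf_ctx;
typedef struct { pdf_ctx base; } pdf_page;    /* base must come first */
typedef struct { pdf_ctx base; } pdf_pattern;

/* Unsafe upcast: valid C only because the base is the first member,
 * and with no run-time check whatsoever. */
#define PDF_CTX(obj) ((pdf_ctx *)(obj))

/* Works on any context... */
void pdf_ctx_cmd_l(pdf_ctx *c, double x, double y)
{
    c->x = x;
    c->y = y;
}

/* ...while page-only commands take the derived type directly. */
void pdf_page_cmd_bob(pdf_page *p)
{
    (void)p; /* placeholder body */
}
```

Calling pdf_ctx_cmd_l(PDF_CTX(&some_page), x, y) works, but so does casting any random pointer, which is exactly the type-unsafety the text warns about.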

    Is there a way to cheat?

    I have not managed to come up with a way. Do let me know if you do.

    • wifi_tethering open_in_new

      This post is public

      nibblestew.blogspot.com /2023/02/plain-c-api-design-real-world-kobayashi.html

    • chevron_right

      Federico Mena-Quintero: The drawers - Two nightstands, part 3

      news.movim.eu / PlanetGnome · Monday, 13 February, 2023 - 01:24 · 4 minutes

    Apologies for the long pause in writing this blog. I started posting my woodworking threads on Mastodon and left this place forgotten. Anyway, let's continue.

    Two nightstands:

    In part 2 we made the legs for one of the nightstands. Now comes the drawer.

    I'm finding that I don't have as many photos of the process as I would like, but hopefully the following will be useful.

    The four boards of the drawer's body have to end up very well squared. Here I went a bit crooked with the handsaw, judging by the knife mark:

    End cut of the board, slightly crooked

    I fixed it with the "chancla", my tiny plane. First it gets lubricated (I use clarified bacon fat), and then you plane carefully down to the line:

    Lubricating the chancla with clarified bacon fat Planing the end with the chancla down to the line

    Until it ends up well squared.

    End now well squared

    With the marking gauge, mark the length of the dovetails on one end of the board that will be the drawer front, and lay out the tails with a sliding bevel. Then mark the length of the tails with the gauge on the side boards.

    Marking the length of the tails with the gauge Marking the same distance on the side boards

    Now the other way around. The thickness of the side boards is transferred to the drawer's front board.

    Transferring the thickness of the side boards to the front board

    Now, with the square, mark the length of the tails starting from the diagonals marked on the end.

    Marking with the square

    We mark the parts that will be waste, and we can start cutting. As always, the saw cut is started by following the lines in two planes: the two lines intersect and determine a plane in space, which is the plane of the saw blade.

    The waste parts marked, to avoid mix-ups Starting the cut with the handsaw The cuts already made, and intermediate cuts for chiseling

    In the last photo, the diagonal cuts go past the line. It doesn't matter; they won't show, since they sit behind the front piece, and they make the later chisel work easier.

    Now we chop out the dovetails with a chisel. It is done little by little, from the end inward, taking care not to break the narrowest part of the board.

    Starting the vertical chisel cut Taking out a little piece... Little by little the whole tail is chopped out Finished recess for one tail The three tails chopped out

    The shape of the tails has to be transferred to the side boards. To make the job easier, it helps to have a stop on the side boards on which to rest the front board. I make that stop by cutting a tiny step into the end of the side boards. You place a strip of wood, mark with a knife, remove the corners, and pare the step down with a rabbet plane.

    Strip placed at the step's position Removing the corners with a chisel so it won't splinter Step pared down with a rabbet plane

    This is where I'm missing photos. I don't have images of how the front board rests on that step in the side boards, but that is where the shape of the dovetails gets carefully marked. As before, they are sawed out and then the waste is removed with a chisel.

    All the boards need a groove for the thin board of the drawer's bottom. That groove can be fitted inside the bottommost dovetail.

    Grooving the front board:

    Grooving the front board, with the groove inside the dovetail

    Grooving a side board:

    Grooving a side board

    How to hold the board for grooving, with a stop and a little nail:

    A stop placed lengthwise, and a little nail to hold the end of the board

    The drawer's bottom board, almost ready to go into the groove:

    The drawer's bottom board goes into that groove

    Now the drawer can be assembled, but afterwards it has to be squared. First we assemble and glue it:

    One of the joints already assembled

    And with winding sticks, you can see whether the drawer is twisted. It is, a little; we plane it carefully so as not to splinter the cross-grain boards.

    One of the joints already assembled

    Now we can put the drawer on its runners and test the height. Wherever it is too tall, it gets taken down with the plane.

    Testing the height of the drawer in its place

    Little wooden blocks are glued at the back of the runners to serve as stops. Each stop is adjusted independently, paring it down little by little with a chisel, until the drawer front sits flush with the front of the table.

    A little block at the back as a stop

    Notice the runners; they span the internal width between the legs and leave the drawer no play at all. The trick is for it to go in as straight as possible, with no room to enter at an angle: if it can only go in straight, it won't jam.

    About the stops... here is the drawer pushed all the way in, and it protrudes past the front of the nightstand:

    A little block at the back as a stop

    We pare down the little block on each side...

    A little block at the back as a stop A little block at the back as a stop

    ... until the whole drawer front sits aligned and flush.

    A little block at the back as a stop

    Now the drawer's bottom has to be prepared so it fits into the grooves. We want it to end up like this:

    The bottom, prepared and fitted into its grooves

    Those angled rebates are made by tilting the plane: Tilting the plane while cutting

    With a bit of care it comes out straight. Or you can first mark a pencil line parallel to the edge. It doesn't matter much, because this side won't be seen.

    Planing the edge at an angle

    The back part protrudes. We mark it and plane it flush. The back part protrudes past the rear

    Marking with pencil

    • wifi_tethering open_in_new

      This post is public

      viruta.org /dos-mesitas-de-noche-3.html