
    • Isode: M-Guard 1.4 New Capabilities

      news.movim.eu / PlanetJabber · Tuesday, 7 March, 2023 - 12:15

    M-Guard 1.4 is a platform support update release for M-Guard Console and M-Guard Appliance. M-Guard Appliance has been updated to use UEFI instead of BIOS for key system services.

    Platform Support

    The M-Guard Appliance now supports running on Netgate 6100 and 6100 MAX appliance systems.

    M-Guard Appliance on Hyper-V now uses Generation 2 virtual machines.

    M-Guard Appliance on VirtualBox now uses EFI.

    Use of BIOS for booting is deprecated in favor of UEFI.

    Base Operating System Upgraded

    The M-Guard Appliance operating system is now powered by FreeBSD 13.1.

    Notice

    Upgrading earlier installations requires special steps.  Contact Isode support for assistance.

    • This post is public: www.isode.com/company/wordpress/m-guard-1-4-new-capabilities/

    • Erlang Solutions: RabbitMQ Quorum Queues explained: what you need to know

      news.movim.eu / PlanetJabber · Tuesday, 7 March, 2023 - 11:18 · 9 minutes

    This queue type is important when RabbitMQ is used in a clustered installation. Find out more in this blog.


    Introduction to Quorum Queues

    In RabbitMQ 3.8.0, one of the most significant new features was the introduction of Quorum Queues. The Quorum Queue is a new queue type that is expected to replace the default queue type (now called classic) for some use cases in the future. This queue type is important when RabbitMQ is used in a clustered installation, as it provides less network-intensive message replication using the Raft protocol.

    Using Quorum Queues

    A classic queue has a master running somewhere on a node in the cluster, while its mirrors run on other nodes. Quorum Queues work in much the same way: the leader, by default, runs on the node the declaring client application was connected to, and followers are created on the remaining nodes of the cluster.

    In the past, queue replication was specified using policies in conjunction with Classic Queues. Quorum Queues are created differently, but they should be compatible with any client application that allows passing arguments when declaring a queue. The x-queue-type argument must be provided with the value quorum when creating the queue.

    For example, using the Elixir AMQP client [1], declaring a Quorum Queue looks like this:

    Queue.declare(publisher_chan, "my-quorum-queue", durable: true, arguments: [ "x-queue-type": "quorum" ])
    

    An important difference between Classic and Quorum Queues is that Quorum Queues can only be declared durable; otherwise, the following error message is raised:

    :server_initiated_close, 406, "PRECONDITION_FAILED - invalid property 'non-durable' for queue 'my-quorum-queue'"

    After declaring the queue, we can see that it is of type quorum in the Management UI:

    We can see that a Quorum Queue has a leader, which serves roughly the same purpose as the Classic Queue's master. All communication is routed to the queue leader, which means that the leader's locality affects message latency and bandwidth; however, the effect should be smaller than with Classic Queues.

    Consuming from a Quorum Queue is done the same way as with other queue types.

    New features of Quorum Queues

    Quorum Queues come with some special features and restrictions. They cannot be non-durable, because the Raft log is always written to disk, so they can never be declared transient. As of version 3.8.2, they also do not support message TTL or message priorities [2].

    Since the use case for Quorum Queues is data safety, they also cannot be declared exclusive, which would mean they are deleted as soon as the consumer disconnects.

    As all messages in Quorum Queues are persistent, the AMQP 'delivery-mode' option has no effect on their operation.

    Single Active Consumer

    This is not exclusive to Quorum Queues, but it is worth mentioning: although the exclusive queue feature was lost, we gained a new, frequently requested feature that is even better in many respects.

    Single Active Consumer lets you attach multiple consumers to a queue while only one of them is active. This lets you build highly available consumers while ensuring that only one of them receives messages at any given time, something that previously could not be achieved with RabbitMQ.

    An example of declaring a queue with Single Active Consumer in Elixir:

    Queue.declare(publisher_chan, "single-active-queue", durable: true, arguments: [ "x-queue-type": "quorum", "x-single-active-consumer": true ])
    
    
    

    A queue with Single Active Consumer enabled is marked as SAC. In the image above, we can see two consumers attached to it (two channels executed Basic.consume on the queue). When publishing to the queue, only one of the consumers will receive the message. When that consumer disconnects, the other should take over exclusive ownership of the message stream.

    Basic.get, as well as inspecting messages in the Management UI, cannot be used with Single Active Consumer queues.

    Keeping track of retries and poison messages

    Keeping a count of how many times a message was rejected has been one of the most requested features for RabbitMQ, and it has finally arrived with Quorum Queues. This lets you handle so-called poison messages more effectively than before, as earlier implementations often suffered from the inability to give up on retries when a message got stuck, or had to track how many times a message was delivered in an external database.

    NOTE: For Quorum Queues, it is best practice to always set some limit on the number of times a message can be rejected. Letting this rejection count grow forever can lead to erroneous queue behaviour due to the Raft implementation.

    When using Classic Queues, a message that is requeued for any reason is redelivered with the 'redelivered' flag set; essentially, this flag means "the message may have already been processed". It helps you check whether the message is a duplicate. The same flag exists for Quorum Queues, but it is extended with the 'x-delivery-count' header, which tracks how many times the message has been requeued.

    We can observe this header in the Management UI:

    As we can see, the 'redelivered' flag is set and the 'x-delivery-count' header is 2.

    Your application is now better equipped to decide when to give up on retries.

    If that is not enough, you can now define rules based on the delivery count that route the message to a different exchange instead of requeueing it. This can be done directly from RabbitMQ; your application does not have to know about retrying at all. Let me illustrate with an example!

    Example: re-routing rejected messages! Our use case: we receive messages that we need to process from an application that may, however, send us messages we cannot process. The reason could be that the messages are malformed, or that our application cannot process them for one reason or another, but we have no way of notifying the sending application of these errors. Such errors are common when RabbitMQ serves as the message bus in a system and the sending application is not under the control of the receiving application's team.

    We declare a queue for the messages we could not process:

    We also declare a fanout exchange, which we will use as the dead-letter exchange:

    And we bind the unprocessable-messages queue to it.

    We create the application queue named my-app-queue and the corresponding policy:
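
    The original post showed these declarations as Management UI screenshots, which are not reproduced here. Below is a rough equivalent sketched with the Elixir AMQP client and rabbitmqctl; the channel variable `chan` and the policy name are assumptions, and the delivery limit of 3 matches the behaviour shown later in this example:

```elixir
# Assumes a running broker and an open AMQP channel `chan`.

# Queue that collects messages we give up on:
{:ok, _} = AMQP.Queue.declare(chan, "unprocessable-messages", durable: true)

# Fanout exchange used as the dead-letter exchange, bound to that queue:
:ok = AMQP.Exchange.declare(chan, "dead-letter-exchange", :fanout, durable: true)
:ok = AMQP.Queue.bind(chan, "unprocessable-messages", "dead-letter-exchange")

# The application queue itself, declared as a quorum queue:
{:ok, _} =
  AMQP.Queue.declare(chan, "my-app-queue",
    durable: true,
    arguments: ["x-queue-type": "quorum"]
  )

# The policy (set from a shell) that re-routes a message after 3 rejections:
#   rabbitmqctl set_policy my-app-queue-policy "^my-app-queue$" \
#     '{"delivery-limit": 3, "dead-letter-exchange": "dead-letter-exchange"}' \
#     --apply-to queues
```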

    We can use Basic.reject or Basic.nack to reject the message; we must use the requeue property set to true.

    Here is a simplified example in Elixir:

    def get_delivery_count(headers) do
      case headers do
        :undefined ->
          0

        headers ->
          {_, _, delivery_cnt} = List.keyfind(headers, "x-delivery-count", 0, {:_, :_, 0})
          delivery_cnt
      end
    end

    receive do
      {:basic_deliver, msg, %{delivery_tag: tag, headers: headers} = _meta} ->
        delivery_count = get_delivery_count(headers)
        Logger.info("Received message: '#{msg}' delivered: #{delivery_count} times")

        case msg do
          "reject me" ->
            Logger.info("Rejected message")
            :ok = Basic.reject(consumer_chan, tag)

          _ ->
            Logger.info("Acked message")
            :ok = Basic.ack(consumer_chan, tag)
        end
    end

    First we publish the message "this is a good message":

    13:10:15.717 [info] Received message: 'this is a good message' delivered: 0 times
    13:10:15.717 [info] Acked message
    
    

    Then we publish a message that we reject:

    13:10:20.423 [info] Received message: 'reject me' delivered: 0 times
    13:10:20.423 [info] Rejected message
    13:10:20.447 [info] Received message: 'reject me' delivered: 1 times
    13:10:20.447 [info] Rejected message
    13:10:20.470 [info] Received message: 'reject me' delivered: 2 times
    13:10:20.470 [info] Rejected message
    

    And after being delivered three times, it is routed to the unprocessable-messages queue.

    We can see in the Management UI that the message is routed to the queue:

    Controlling the quorum members

    Quorum queues do not change their group of followers/leaders automatically. This means that adding a new node to the cluster will not automatically ensure the new node is used to host quorum queues. In earlier versions, Classic Queues handled placing queues on new cluster nodes through the policy interface; however, this could cause problems as cluster sizes were scaled up or down. An important new feature in the 3.8.x series, for both quorum and classic queues, is the built-in queue master rebalancing operation. Previously, this was only possible through external scripts and plugins.

    Adding a new member to the quorum can be achieved using the grow command:

    rabbitmq-queues grow rabbit@$NEW_HOST all

    Removing an obsolete (for example, decommissioned) host from the members can be done via the shrink command:

    rabbitmq-queues shrink rabbit@$OLD_HOST
    
    

    We can also rebalance the queue masters so the load is even across the nodes:

    rabbitmq-queues rebalance all

    This will (in bash) display a nice table with statistics on the number of masters per node. On Windows, use the --formatter json flag to get readable output.

    Summary

    RabbitMQ 3.8.x ships with many new features, and Quorum Queues are just one of them. They provide a new, more understandable, and in some cases less resource-intensive implementation for achieving replicated queues and high availability. They are built on Raft and support a different feature set than Classic Queues, which are fundamentally based on a custom guaranteed multicast protocol [3] (a variant of Paxos). As this queue type is still quite new, only time will tell whether it becomes the preferred, most-used queue type for most distributed RabbitMQ installations over its counterpart, Classic Mirrored Queues. Until then, use whichever best suits your Rabbit needs. 🙂

    Need help with your RabbitMQ?

    Our world-leading RabbitMQ team offers a variety of options to meet your needs. We have everything from health checks to support and monitoring, to help you ensure an efficient and reliable RabbitMQ system.

    Or, if you would like full visibility into your RabbitMQ system from an easy-to-read dashboard, why not take advantage of our free WombatOAM trial?

    The post RabbitMQ Quorum Queues explained: what you need to know appeared first on Erlang Solutions.

    • This post is public: www.erlang-solutions.com/blog/se-explican-las-colas-de-quorum-de-rabbitmq-lo-que-necesita-saber/

    • Ignite Realtime Blog: HTTP File Upload v1.2.2 released!

      news.movim.eu / PlanetJabber · Sunday, 5 March, 2023 - 19:04

    We’ve just released version 1.2.2 of the HTTP File Upload plugin for Openfire. This release includes Ukrainian language support, thanks to Yurii Savchuk (svais) and his son Vladislav Savchuk (Bruhmozavr), as well as a few updated translations for Portuguese, Russian and English.

    Grab it from the plugins page in your Openfire Admin Console, or download it manually from the HTTP File Upload archive page.


    • Ignite Realtime Blog: Translations everywhere!

      news.movim.eu / PlanetJabber · Thursday, 2 March, 2023 - 13:46

    Two months ago, we started using Transifex as a platform that anyone can easily use to provide translations for our projects, like Openfire and Spark.

    It is great to see that new translations are pouring in! In the last few months, more than 20,000 translated words have been provided by our community!

    We’ve enabled the Transifex platform for most of the Openfire plugins (that require translations) today. If you are proficient in a non-English language, please join the translation effort!


    • Erlang Solutions: Getting started with RabbitMQ: A beginner’s guide for your business

      news.movim.eu / PlanetJabber · Thursday, 2 March, 2023 - 10:28 · 4 minutes

    RabbitMQ is one of the world’s most popular open-source message brokers. With tens of thousands of users (and growing), its lightweight and easy-to-deploy nature has made it a success at small startups and large enterprises across the globe.

    But how do you know if it’s best for your business?

    Read on and get the rundown on the reliable messaging software that delivers every time.

    So, what exactly is RabbitMQ?

    RabbitMQ is an open-source message broker software that implements the Advanced Message Queuing Protocol (AMQP). It is used to facilitate communication between applications or microservices, by allowing them to send and receive messages in a reliable and scalable way.

    Simply put, RabbitMQ acts as a mediator between applications that need to exchange messages. It acts as a message queue, where producers can send messages, and then consumers can receive and process them. It ensures that messages are delivered in order, without loss, and provides features such as routing, failover, and message persistence.

    RabbitMQ is a highly powerful tool for building complex, scalable, and reliable communication systems between applications.
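
    The producer/consumer flow described above can be illustrated without a broker at all. The sketch below is not RabbitMQ code; it fakes a "broker" with an in-process FIFO queue (all names here are invented) purely to show the decoupling idea:

```elixir
# A toy in-memory "broker": producers push messages into a queue process,
# and a consumer pops them in FIFO order — the same decoupling RabbitMQ
# provides across machines, sketched in-process.
{:ok, broker} = Agent.start_link(fn -> :queue.new() end)

publish = fn msg -> Agent.update(broker, &:queue.in(msg, &1)) end

consume = fn ->
  Agent.get_and_update(broker, fn q ->
    case :queue.out(q) do
      {{:value, msg}, rest} -> {msg, rest}
      {:empty, q} -> {nil, q}
    end
  end)
end

# The producer does not know who (if anyone) will consume its messages.
publish.("order created")
publish.("order paid")

first = consume.()  # "order created" — ordering is preserved
```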

    What is a Message Broker?

    A message broker is an intermediary component that sits between applications and helps them communicate with each other.

    Basic set-up of a message queue (source: CloudAMQP)

    In short, applications send messages to the broker. The broker then sends the message to the intended receiver. This separates sending and receiving applications, allowing them to scale independently.

    The message broker also acts as a buffer between sending and receiving applications. It ensures that messages are delivered in the most timely and efficient manner possible.

    In RabbitMQ, messages are stored in queues, and applications can publish messages to those queues and consume messages from them. It supports multiple messaging models including point-to-point, publish/subscribe, and request/reply, making it a flexible solution for many use cases.

    By using RabbitMQ as a message broker, developers can decouple the components of their system, allowing them to build more resilient, scalable, and flexible applications.

    So why should I choose RabbitMQ?

    We’ve already touched on this slightly but, there are several reasons why RabbitMQ is a popular choice for implementing message-based systems for your business:

    It’s scalable: RabbitMQ can handle large amounts of messages and can be easily scaled up.

    It’s flexible: RabbitMQ supports multiple messaging models, including point-to-point, publish/subscribe and request/reply.

    It’s reliable: RabbitMQ provides many features to ensure reliable message delivery, including message confirmation, message persistence, and auto-recovery.

    It’s interoperable: RabbitMQ implements the AMQP standard, making it interoperable with multiple platforms and languages.

    To learn more about RabbitMQ’s impressive problem-solving capabilities, you can delve into our technical deep dive detailing its delivery.

    What are the benefits of using RabbitMQ for my business?

    RabbitMQ’s popularity is due to its range of benefits, including:

    Decoupled architecture: RabbitMQ allows applications to communicate with each other through a centralised message queue, decoupling sending and receiving applications. This allows for a flexible and extensible architecture, in which components can scale independently.

    Performance improvement: RabbitMQ can handle large volumes of messages. It also has low latency, which improves overall system performance.

    Reliable messaging: RabbitMQ provides many features to ensure reliable messaging, including message confirmation, message retention, and auto-recovery.

    Flexible Messaging Model: RabbitMQ supports a variety of messaging models, including point-to-point, publish/subscribe, and request/reply, enabling a flexible and adaptable messaging system response.

    Interoperability: RabbitMQ implements the AMQP standard, making it interoperable with multiple platforms and languages.

    But don’t just take our word for it.

    Erlang Solutions’ world-leading RabbitMQ experts have been trusted with implementing RabbitMQ for some of the world’s biggest brands.

    You can read more about their experience and the success of RabbitMQ in their businesses.

    When should I start to consider using RabbitMQ?

    Wondering when the right time is to start implementing RabbitMQ as your messaging system? If you’re ready for reliable, scalable, and flexible communication between your applications, it might be time to consider.

    Here are some common use cases for RabbitMQ:

    Decoupled Architecture: RabbitMQ allows you to build a decoupled architecture, in which different components of your system can communicate without tight coupling. This makes your system more flexible, extensible and resilient.

    Asynchronous communication: When you need to implement asynchronous communication between applications, RabbitMQ can help. For example, do you have a system that needs to process large amounts of data? RabbitMQ can be used to offload that processing to a separate component, allowing the parent component to continue serving requests while the data is processed in the background.

    Microservices: RabbitMQ is well-suited to a microservices architecture, where different components of your system are implemented as separate services. It provides a communication infrastructure, allowing these services to communicate with each other.

    Integrating with legacy systems: Do you have legacy systems that need to communicate with each other? RabbitMQ can provide a common messaging infrastructure that allows those systems to exchange messages.

    High Availability and Reliability: RabbitMQ provides features such as message persistence, automatic failover, and replication, making it a reliable solution for mission-critical applications.

    Multi-Protocol Support: RabbitMQ supports multiple messaging protocols, including AMQP, MQTT, and STOMP, making it a flexible solution for different types of applications.

    Ultimately, the choice is yours to use RabbitMQ or any other messaging system, as it all comes down to your specific business needs.

    I would like to get started with RabbitMQ!

    Whether you are building a small application or a large-scale system, RabbitMQ is a great solution to enable inter-component communication.

    We appreciate that you might have further questions, and our team of expert consultants is on hand and ready to talk you through the process. Just head to our contact page.

    The post Getting started with RabbitMQ: A beginner’s guide for your business appeared first on Erlang Solutions .

    • This post is public: www.erlang-solutions.com/blog/getting-started-with-rabbitmq-a-beginners-guide-for-your-business/

    • JMP: Cheogram Android: Stickers

      news.movim.eu / PlanetJabber · Wednesday, 1 March, 2023 - 17:55 · 3 minutes

    One feature people ask about from time to time is stickers.  Now, “stickers” isn’t really a feature, nor is it even universally agreed what it means, but we’ve been working on some improvements to Cheogram Android (and the Cheogram service) to make some sticker workflows better, released today in 2.12.1-3 .  This post will mostly talk about those changes and the technical implications; if you just want to see a demo of some UI you may want to skip to the video demo .

    Many Android users already have pretty good support for inserting stickers (or GIFs) into Cheogram Android via their keyboard.  However, as the app existed at the time, this would result in the sender re-uploading and the recipient re-downloading the sticker image every time, and fill up the sending server and receiving device with many copies of the same image.  The first step to mitigating this was to switch local media storage in the app to content-addressed, which in this case means that the file is named after the hash of its contents .  This prevents filling up the device when receiving the same image many times.
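
    Content-addressing can be sketched in a few lines. The snippet below is purely illustrative (it is not Cheogram’s actual code, and the function name is invented); it shows why two receipts of the same image map to the same stored file:

```elixir
# Name a file after the SHA-256 hash of its contents, so the same image
# received many times always maps to one on-disk name.
content_addressed_name = fn bytes ->
  :crypto.hash(:sha256, bytes) |> Base.encode16(case: :lower)
end

name1 = content_addressed_name.("same sticker bytes")
name2 = content_addressed_name.("same sticker bytes")
# name1 == name2, so a second receipt can reuse the already-stored file
# instead of writing a duplicate copy.
```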

    Now that we know the hashes of our stored media, we can use SIMS to transmit this hash when sending.  If the app sees an image that it already has, it can display it without downloading at all, saving not only space but bandwidth and time as well.  The Cheogram service also uses SIMS to transmit hashes of incoming MMS images for this purpose as well.

    An existing Jabber client which uses the word “stickers” is Movim .  It wouldn’t make sense to add the word to our UI without supporting what they already have.  So we added support for XHTML-IM including Bits of Binary images.  This also relies on hash-based storage or caching, which by now we had.  This tech will also be useful in the future to extend beyond stickers into custom emoji.

    Some stickers are animated, and users want to be able to send GIFs as well, so the app was updated to support inline playback of animated images (both GIF and WebP format).

    Some users don’t have any sticker support in their keyboard or OS, so we want to provide some tools for these users as well.  We have added the option to download some default sticker packs (mostly curated from the default set from Movim for now) so that users start with some options.  We also built a small proxy to allow easily importing stickers intended for Signal by clicking the regular “add to Signal” links on e.g. signalstickers.com.  Any sticker selected from these will be sent without even uploading, saving time and space on the server, and will then be received by any user of the app who has the default packs installed with no need for downloading, with fallbacks for other clients and situations of course.

    If a user receives a sticker that they’d like to save for easily sending out again later, they can long-press any image they receive and choose “Save as sticker” which will prompt them to choose or create a sticker pack to keep it in, then save it there.  Pointing a sticker sheet app or keyboard at this directory also allows re-using other sticker selection UIs with custom stickers saved in this way.

    Taken together we hope these features produce real benefits for users of stickers, both with and without existing keyboard support, and also provide foundational work that we can build upon to provide custom emoji, thumbnails before downloading, URL previews, and other rich media features in the future.  If you’d like to see some of these features in action, check out this short video.

    • This post is public: blog.jmp.chat/b/cheogram-android-stickers-2023

    • Ignite Realtime Blog: inVerse Openfire plugin 10.1.2-1 released!

      news.movim.eu / PlanetJabber · Friday, 24 February, 2023 - 21:11

    Earlier today, version 10.1.2 release 1 of the Openfire inVerse plugin was released. This plugin allows you to easily deploy the third-party Converse client in Openfire. In this release, the version of the client that is bundled in the plugin is updated to 10.1.2!

    The updated plugin should become available for download in your Openfire admin console in the course of the next few hours. Alternatively, you can download the plugin directly from the plugin’s archive page.

    For other release announcements and news, follow us on Twitter.


    • Ignite Realtime Blog: New: Openfire MUC Real-Time Block List plugin!

      news.movim.eu / PlanetJabber · Thursday, 23 February, 2023 - 20:30 · 1 minute

    A new plugin has been made available for Openfire, our cross-platform real-time collaboration server based on the XMPP protocol. We have named this new plugin the MUC Real-Time Block List plugin.

    This plugin can help you moderate your chat rooms, especially when your service is part of a larger network of federated XMPP domains. From experience, the XMPP community has learned that bad actors tend to spam a wide range of public chat rooms on an equally wide range of different domains. Prior to the functionality provided by this plugin, the administrator of each MUC service had to manually adjust permissions, to keep unwanted entities out. With this new plugin, that process is automated.

    This plugin can be used to subscribe to a Publish/Subscribe node (as defined in XEP-0060) that can live on a remote XMPP domain but is curated by a trusted (group of) administrators. It is expected that this node contains a list of banned entities. When Openfire, through the plugin, is notified that the list has received a new banned entity, it will prevent that entity from joining a chat room in Openfire (if they’re already in, they will be kicked out automatically). Using this mechanism, moderation efforts centralized in one federated Pub/Sub service can be used by any server that uses this plugin.

    This plugin is heavily inspired by, and aspires to be compatible with, Prosody’s mod_muc_rtbl and the pub/sub services that it uses.

    The first version of this plugin is now available on our website and should become available in the list of installable plugins in your instance of Openfire in the next few hours. Please give it a test! We are interested in hearing back from you!

    For other release announcements and news, follow us on Twitter.


    • Erlang Solutions: Can’t Live `with` It, Can’t Live `with`out It

      news.movim.eu / PlanetJabber · Thursday, 23 February, 2023 - 12:29 · 8 minutes

    I’d like to share some thoughts about Elixir’s with keyword. with is a wonderful tool, but in my experience it is a bit overused.  To use it best, we must understand how it behaves in all cases.  So, let’s briefly cover the basics, starting with pipes in Elixir.

    Pipes are a wonderful abstraction

    But like all tools, you should think about when it is best used…

    Pipes are at their best when you expect your functions to accept and return basic values. But often we don’t have only simple values because we need to deal with error cases . For example:

    region
    |> Module.fetch_companies()
    |> Module.fetch_departments()
    |> Enum.map(& &1.employee_count)
    |> calculate_average()

    If our fetch_* functions return plain list values, there isn’t a problem. But often we fetch data from an external source, which means we introduce the possibility of an error. Generally in Elixir this means {:ok, _} tuples for success and {:error, _} tuples for failure. Using pipes, that might become:

    region
    |> Module.fetch_companies()
    |> case do
      {:ok, companies} -> Module.fetch_departments(companies)
      {:error, _} = error -> error
    end
    |> case do
      {:ok, departments} ->
        departments
        |> Enum.map(& &1.employee_count)
        |> calculate_average()

      {:error, _} = error -> error
    end

    Not horrible, but certainly not beautiful. Fortunately, Elixir has with !

    `with` is a wonderful abstraction

    But like all tools, you should think about when it’s best used…

    with is at its best when dealing with the happy paths of a set of calls which all return similar things. What do I mean by that? Let’s look at what this code might look like using with:

    with {:ok, companies} <- Module.fetch_companies(region),
         {:ok, departments} <- Module.fetch_departments(companies) do
      departments
      |> Enum.map(& &1.employee_count)
      |> calculate_average()
    end

    That’s definitely better!

    • We separated out the parts of our code which might fail (remember that failure is a sign of a side-effect and in functional programming we want to isolate side-effects).
    • The body is only the things that we don’t expect to fail.
    • We don’t need to explicitly deal with the {:error, _} cases (in this case with will return any clause values which don’t match the pattern before <-).

    But this is a great example of a happy path where the set of calls all return similar things . But where are some examples of where we might go wrong with with ?

    Non-standard failure

    What if Module.fetch_companies returns {:error, _} but Module.fetch_departments returns just :error? That means your with is going to return two different error results. If your with is the end of your function call, then that complexity is now the caller’s responsibility. You might not think that’s a big deal, because we can do this:

    else
    
      :error -> {:error, "Error fetching departments"}

    But this breaks down, to a more or less important degree, because:

    • … once you add an else clause, you need to take care of every non-happy path case (e.g. above we should match the {:error, _} returned by Module.fetch_companies which we didn’t need to explicitly match before) 😤
    • … if either function is later refactored to return another pattern (e.g. {:error, _, _} ) – there will be a WithClauseError exception (again, because once you add an else the fallback behavior of non-matching <- patterns doesn’t work) 🤷‍♂️
    • … if Module.fetch_departments is later refactored to return {:error, _} – we’ll then have an unused handler 🤷‍♂️
    • … if another clause is added which also returns :error the message Error fetching departments probably won’t be the right error 🙈
    • … if you want to refactor this code later, you need to understand *everything* that the called functions might potentially return, leading to code which is hard to refactor.  If there are just two clauses and we’re just calling simple functions, that’s not as big of a deal.  But with many with clauses which call complex functions, it can become a nightmare 🙀

    So the first major thing to know when using with is what happens when a clause doesn’t match its pattern :

    • If else is not specified then the non-matching clause is returned.
    • If else is specified then the code for the first matching else pattern is evaluated. If no else pattern matches, a WithClauseError is raised.
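Both rules can be seen in a small sketch (the stub functions ok_result/0 and bad_result/0 are hypothetical, there only to force a non-matching clause):

```elixir
defmodule WithFallback do
  # Hypothetical stubs: one clause succeeds, one returns a bare :error.
  def ok_result, do: {:ok, 1}
  def bad_result, do: :error

  # No else: the non-matching value (:error) falls through unchanged.
  def no_else do
    with {:ok, a} <- ok_result(),
         {:ok, b} <- bad_result() do
      a + b
    end
  end

  # With an else: only the listed patterns are handled; the bare
  # :error matches none of them, so a WithClauseError is raised.
  def with_else do
    with {:ok, a} <- ok_result(),
         {:ok, b} <- bad_result() do
      a + b
    else
      {:error, reason} -> {:error, reason}
    end
  end
end
```

Here WithFallback.no_else() returns :error, while WithFallback.with_else() raises a WithClauseError.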

    As Stratus3D excellently put it: “ with blocks are the only Elixir construct that implicitly uses the same else clauses to handle return values from different expressions. The lack of a one-to-one correspondence between an expression in the head of the with block and the clauses that handle its return values makes it impossible to know when each else clause will be used”. There are a couple of well-known solutions to address this. One is using “tagged tuples”:

    with {:fetch_companies, {:ok, companies}} <- {:fetch_companies, Module.fetch_companies(region)},
         {:fetch_departments, {:ok, departments}} <- {:fetch_departments, Module.fetch_departments(companies)} do
      departments
      |> Enum.map(& &1.employee_count)
      |> calculate_average()
    else
      {:fetch_companies, {:error, reason}} -> ...
      {:fetch_departments, :error} -> ...
    end

    Though tagged tuples should be avoided for various reasons:

    • They make the code a lot more verbose
    • else is now being used, so we need to match all patterns that might occur
    • We need to keep the clauses and else in sync when adding/removing/modifying clauses, leaving room for bugs.
    • Most importantly: the value in an abstraction like {:ok, _} / {:error, _} tuples is that you can handle things generically without needing to worry about the source

    A generally better solution is to create functions which normalize the values matched in the patterns.  This is covered well in a note in the docs for with and I recommend checking it out.  One addition I would make: in the above case you could leave the Module.fetch_companies alone and just surround the Module.fetch_departments with a local fetch_departments to turn the :error into an {:error, reason} .
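As a sketch of that suggestion (remote_fetch_departments/1 here is a stand-in I’m inventing for the imagined Module.fetch_departments/1 that returns a bare :error on failure):

```elixir
defmodule Departments do
  # Stand-in for a function that returns a bare :error on failure.
  def remote_fetch_departments([]), do: :error
  def remote_fetch_departments(_companies),
    do: {:ok, [%{employee_count: 10}, %{employee_count: 20}]}

  # Local wrapper that normalizes the failure shape to {:error, reason},
  # so the `with` below can fall through cleanly without an `else`.
  defp fetch_departments(companies) do
    case remote_fetch_departments(companies) do
      {:ok, departments} -> {:ok, departments}
      :error -> {:error, :departments_unavailable}
    end
  end

  def average_employee_count(companies) do
    with {:ok, departments} <- fetch_departments(companies) do
      total = departments |> Enum.map(& &1.employee_count) |> Enum.sum()
      {:ok, total / length(departments)}
    end
  end
end
```

Callers now only ever see {:ok, _} or {:error, _}, whichever branch was taken.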

    Non-standard success

    We can even get unexpected results when with succeeds! To start, let’s look at the parse/1 function from the excellent decimal library. Its typespec tells us that it can return {Decimal.t(), binary()} or :error . If we want to match a decimal value without extra characters, we could have a with clause like this:

    with {:ok, value} <- fetch_value(),
         {decimal, ""} <- Decimal.parse(value) do
      {:ok, decimal}
    end

    But if value is given as "1.23 " (with a space at the end), then Decimal.parse/1 will return {#Decimal<1.23>, " "} . Since that doesn’t match our pattern (a string containing a space vs. an empty string), the body of the with will be skipped. If we don’t have an else then instead of returning an {:ok, _} value, we return {#Decimal<1.23>, " "} .

    The solution may seem simple: match on {decimal, _} ! But then we match strings like “1.23a” which is what we were trying to avoid. Again, we’re likely better off defining a local parse_decimal function which returns {:ok, _} or {:error, _} .
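Float.parse/1 from the standard library has the same {value, rest} | :error shape as Decimal.parse/1, so a wrapper along these lines (parse_number/1 is a name invented for illustration) shows the idea without the extra dependency:

```elixir
defmodule Parsing do
  # Normalize the {value, rest} | :error shape into {:ok, _} | {:error, _}
  # so a `with` clause can fall through cleanly.
  def parse_number(string) do
    case Float.parse(string) do
      {value, ""} -> {:ok, value}
      {_value, _rest} -> {:error, :trailing_characters}
      :error -> {:error, :not_a_number}
    end
  end
end
```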

    There are other, similar, situations:

    • {:ok, %{"key" => value}} <- fetch_data(...) – the value inside of the {:ok, _} tuple may not have a "key" key.
    • [%{id: value}] <- fetch_data(...) – the list returned may have more or fewer than one item, or its single map may not have an :id key
    • value when length(value) > 2 <- fetch_data(...) – the when might not match. There are two cases where this might surprise you:
      • If value is a list with a length of 2 or less, the guard fails and the list itself is returned.
      • If value is a string, length isn’t a valid function (you’d probably want byte_size ). Instead of an exception, the guard simply fails and the pattern doesn’t match.

    The problem in all of these cases is that the intermediate value from fetch_data will be returned, not what the body of the with would return. This means that our with returns “uneven” results. We can handle these cases in the else , but again, once we introduce else we need to take care of all potential cases.
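The guard case is easy to verify in a few lines (fetch_data/0 is a hypothetical stub returning a two-element list):

```elixir
defmodule Uneven do
  def fetch_data, do: [1, 2]

  def result do
    # length([1, 2]) > 2 is false, so the clause fails and the
    # intermediate value [1, 2] is returned from the `with`.
    with value when length(value) > 2 <- fetch_data() do
      {:ok, value}
    end
  end
end
```

Uneven.result() evaluates to [1, 2] itself, not an {:ok, _} or {:error, _} tuple: exactly the “uneven” result described above.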

    I might even go to the extent of recommending that you don’t define with clause patterns which are at all deep in their pattern matching unless you are very sure the success case will be able to match the whole pattern .  One example where you might take a risk is when matching %MyStruct{key: value} <- … where you know that a MyStruct value is going to be returned and you know that key is one of the keys defined for the struct. No matter the case, dialyzer is one tool to gain confidence that you will be able to match on the pattern (at least for your own code or libraries which also use dialyzer).

    One of the simplest and most standard ways to avoid these issues is to make sure the functions that you are calling return {:ok, variable} or {:error, reason} tuples. Then with can fall through cleanly ( definitely check out Chris Keathley’s discussion of “Avoid else in with blocks” in his post “Good and Bad Elixir” ).

    With all that said, I recommend using with statements whenever you can! Just make sure that you think about fallback cases that might happen. Even better: write tests to cover all of your potential cases! If you can strike a balance and use with carefully, your code can be both cleaner and more reliable.

    Need help with Elixir?

    We’ve helped hundreds of the world’s biggest companies achieve success with Elixir. From digital transformation and developing fit-for-purpose software for your business logic, to proof-of-concepts, right through to staff augmentation, development and support. We’re here to make sure your system makes the most of Elixir to be scalable, reliable and easy to maintain. Talk to us to learn more.

    Training

    Want to improve your Elixir skills? Our world-leading experts are here to help. Learn from the same team who architect, manage and develop some of the biggest in-production systems available. Head to our training page to learn more about our courses and tutorials.

    The post Can’t Live `with` It, Can’t Live `with`out It appeared first on Erlang Solutions .