      Erlang Solutions: Meet the team: Erik Schön

      news.movim.eu / PlanetJabber · Tuesday, 10 December, 2024 - 13:37 · 2 minutes

    In our final “Meet the Team” of 2024, we’d like to introduce you to Erik Schön, Managing Director at Erlang Solutions.

    Erik shares his journey with Erlang, Elixir, and the BEAM ecosystem, from his work at Ericsson to joining Erlang Solutions in 2019. He also reflects on a key professional highlight in 2024 and looks ahead to his goals for 2025. Erik also reveals his festive traditions, including a Swedish-Japanese twist.

    About Erik

    So tell us about yourself and your role at Erlang Solutions.

    Hello, I’m Erik! I’ve been a big fan of all things Erlang/Elixir/BEAM since the 90s, having seen many successful applications of it when working at Ericsson as an R&D manager for many years.

    Since 2019, I’ve been part of the Erlang Solutions Nordic Fjällrävens (“Arctic Foxes”) team based in Stockholm, Sweden. I love helping our customers succeed by delivering faster, safer, and more efficient solutions.

    What has been a professional highlight of yours in 2024?

    The highlight of 2024 for me was our successful collaboration with BoardClic, a startup that helps its customers with digital board and C-suite level performance evaluations.

    We started our collaboration with a comprehensive code and architecture review of their Elixir codebase, using our 25 years of experience in delivering software for societal infrastructure, including all the do’s and don’ts for future-proof, secure, resilient, and scalable solutions.

    Based on this, we boosted their development of new functionality for a strategically important customer—from idea to live, commercial operation. Two of our curious, competent collaborators, with 10+ years of practical, hands-on Elixir/Erlang/BEAM expertise, worked closely with BoardClic on-site to deliver on time and with quality.

    What professional and personal achievements are you looking forward to achieving in 2025?

    Professionally, I look forward to continued success with our customers. This includes strengthening our long-standing partnerships with TV4, Telia, Ericsson, and Cisco. I’m also excited about the start of new partnerships, both inside and outside the BEAM community, where we will continue to deliver more team-based, full-stack, end-to-end solutions.

    Personally, I look forward to continuing to talk about my trilogy of books – The Art of Change, The Art of Leadership and The Art of Strategy – in podcasts, meetups and conferences.

    Do you have any festive traditions that you’re looking forward to this holiday season?

    In Sweden, julbord (a buffet-style table of small dishes including different kinds of marinated fish like herring and salmon, meatballs, ham, porridge, etc.) is a very important tradition to look forward to. Since my wife is from Japan, we always try to spice things up a bit by including suitable dishes from Japanese cuisine, like different kinds of sushi.

    Final thoughts

    As we wrap up our 2024 meet-the-team series, a big thank you to Erik and all the incredible team members we’ve highlighted this year. Their passion, expertise, and dedication continue to drive our success.

    Stay tuned for more insights and profiles in the new year as we introduce even more of the talented people who make Erlang Solutions what it is! If you’d like to speak more with our team, please get in touch.

    The post Meet the team: Erik Schön appeared first on Erlang Solutions.

      Erlang Solutions: Advent of Code 2024

      news.movim.eu / PlanetJabber · Wednesday, 4 December, 2024 - 08:12 · 3 minutes

    Welcome to Advent of Code 2024!

    Like every year, I start the challenge with the best attitude and the joy of being an Elixir programmer. I know that at some point I will reach the “what is this? I hate it” phase, but unlike other years, this time I am committed to finishing Advent of Code and, more importantly, sharing it with you.

    I hope you enjoy this series of December posts, where we will discuss the approach for each exercise. Remember that it is not the only one; the idea of this initiative is to have a great time and share knowledge, so don’t forget to post your solutions and comments and tag us to continue the conversation.

    Let’s go for it!

    Day 1: Historian Hysteria

    Before starting any exercise, I suggest spending some time defining the structure that best fits the problem’s needs. If the structure is adequate, it will be easy to reuse it for the second part without further complications.

    In this case, the exercise itself describes lists as the input, so we can skip that step and instead consider which functions of the Enum or List modules can be helpful.

    We have this example input:

    3   4
    4   3
    2   5
    1   3
    3   9
    3   3

    The goal is to transform it into two separate lists and apply sorting, comparison, etc.

    List 1: [3, 4, 2, 1, 3, 3]

    List 2: [4, 3, 5, 3, 9, 3]

    Let’s define a function that reads a file with the input. Each line will initially be represented by a string, so we use String.split to separate it at each line break.

     def get_input(path) do
       path
       |> File.read!()
       |> String.split("\n", trim: true)
     end
    
    
    ["3   4", "4   3", "2   5", "1   3", "3   9", "3   3"]
    

    We will still have each row represented by a string, but we can now modify this using the functions in the Enum module. Notice that the whitespace between the numbers is constant, and the pattern is that the first element should go into list one and the second element into list two. Use Enum.reduce to map the elements to the corresponding list and get the following output:


    %{
     first_list: [3, 3, 1, 2, 4, 3],
     second_list: [3, 9, 3, 5, 3, 4]
    }
    
    

    I’m using a map so that we can identify the lists and everything is clear. The function that creates them is as follows:

     @doc """
     This function takes a list where the elements are strings with two
     components separated by whitespace.
    
    
     Example: "3   4"
    
    
     It assigns the first element to list one and the second to list two,
     assuming both are numbers.
     """
     def define_separated_lists(input) do
       Enum.reduce(input, %{first_list: [], second_list: []}, fn row, map_with_lists ->
         [elem_first_list, elem_second_list] = String.split(row, "   ")
    
    
         %{
           first_list: [String.to_integer(elem_first_list) | map_with_lists.first_list],
           second_list: [String.to_integer(elem_second_list) | map_with_lists.second_list]
         }
       end)
     end
    

    Once we have this format, we can move on to the first part of the exercise.

    Part 1

    Use Enum.sort to sort the lists in ascending order and pass them to the Enum.zip_with function, which will calculate the distance between the elements of both. Note that we are using abs to avoid negative values and, finally, Enum.reduce to sum all the distances.

     first_sorted_list = Enum.sort(first_list)
     second_sorted_list = Enum.sort(second_list)


     first_sorted_list
     |> Enum.zip_with(second_sorted_list, fn x, y -> abs(x - y) end)
     |> Enum.reduce(0, fn distance, acc -> distance + acc end)
    

    Part 2

    For the second part, you don’t need to sort the lists; use Enum.frequencies on the second list and Enum.reduce to multiply each element of the first list by the number of times it appears in the second one.

     frequencies_second_list = Enum.frequencies(second_list)
    
    
       Enum.reduce(first_list, 0, fn elem, acc ->
         elem * Map.get(frequencies_second_list, elem, 0) + acc
       end)
    

    That’s it. As you can see, once we have a good structure, the corresponding module, in this case, Enum, makes the operations more straightforward, so it’s worth spending some time defining which input structure will make our lives easier.
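
    Putting it all together, here is a minimal sketch of the whole day as a single module (the module name and input path are mine, everything else follows the snippets above):

     defmodule Day01 do
       # Read the file and split it into one string per line.
       def get_input(path) do
         path
         |> File.read!()
         |> String.split("\n", trim: true)
       end

       # Split each row on the triple space and build the two lists.
       def define_separated_lists(input) do
         Enum.reduce(input, %{first_list: [], second_list: []}, fn row, acc ->
           [first, second] = String.split(row, "   ")

           %{
             first_list: [String.to_integer(first) | acc.first_list],
             second_list: [String.to_integer(second) | acc.second_list]
           }
         end)
       end

       # Part 1: total distance between the sorted lists.
       def part1(%{first_list: first_list, second_list: second_list}) do
         first_list
         |> Enum.sort()
         |> Enum.zip_with(Enum.sort(second_list), fn x, y -> abs(x - y) end)
         |> Enum.reduce(0, fn distance, acc -> distance + acc end)
       end

       # Part 2: similarity score using the frequencies of the second list.
       def part2(%{first_list: first_list, second_list: second_list}) do
         frequencies = Enum.frequencies(second_list)

         Enum.reduce(first_list, 0, fn elem, acc ->
           elem * Map.get(frequencies, elem, 0) + acc
         end)
       end
     end

     lists = "day01.txt" |> Day01.get_input() |> Day01.define_separated_lists()
     Day01.part1(lists)
     Day01.part2(lists)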

    You can see the full version of the exercise here.

    The post Advent of Code 2024 appeared first on Erlang Solutions.

      Erlang Solutions: MongooseIM 6.3: Prometheus, CockroachDB and more

      news.movim.eu / PlanetJabber · Thursday, 14 November, 2024 - 10:16 · 9 minutes

    MongooseIM is a scalable, efficient, high-performance instant messaging server using the proven, open, and extensible XMPP protocol. With each new version, we introduce new features and improvements. For example, version 6.2.0 introduced our new CETS in-memory storage, making setup and autoscaling in cloud environments easier than before (see the blog post for details). The latest release 6.3.0 is no exception. The main highlight is the complete instrumentation rework, allowing seamless integration with modern monitoring solutions like Prometheus.

    Additionally, we have added CockroachDB to the list of supported databases, so you can now let this highly scalable database grow with your applications while avoiding being locked into your cloud provider.

    Observability and instrumentation

    In software engineering, observability is the ability to gather data from a running system to figure out what is going on inside: is it working as expected? Does it have any issues? How much load is it handling, and could it do more? There are many ways to improve the observability of a system, and one of the most important is instrumentation. Just like adding extra measuring equipment to a physical system, this means adding additional code to the software. It allows the system administrator to observe the internal state of the system. This comes at a price: there is more work for the developers, increased complexity, and potential performance degradation caused by the collection and processing of additional data.

    However, the benefits usually outweigh the costs, and the ability to inspect the system is often a critical requirement. It is also worth noting that the metrics and events gathered by instrumentation can be used for further automation, e.g. for autoscaling or sending alarms to the administrator.

    Instrumentation in MongooseIM

    Even before our latest release of MongooseIM, there have been multiple means to observe its behaviour:

    Metrics provide numerical values of measured system properties. The values change over time, and a metric can present the current value, a sum over a sliding window, or a statistic (histogram) of values from a given time period. Prior to version 6.3, MongooseIM stored such metrics with the help of the exometer library. To view the metrics, one had to configure an Exometer exporter, which would periodically send the metrics to an external service using the Graphite protocol. Because of the protocol, the metrics would be exported to Graphite or InfluxDB version 1. One could also query a limited subset of metrics using our GraphQL API (or the legacy REST API) or with the command line interface. Alternatively, metrics could be retrieved from the Erlang shell of a running MongooseIM node.

    Logs are another type of instrumentation present in the code. They inform about events occurring in the system and, since version 4, they are events with an extensible map-like structure that can be formatted e.g. as plain text or JSON. Subsequently, they can be shown in the console or stored in files. You can also set up a log management system like the Elastic (ELK) Stack or Splunk – see the documentation for more details.

    The diagram below shows how these two types of instrumentation can work together:

    The first observation is that the instrumented code needs to separately call the log and metric API. Updating a metric and logging an event requires two distinct function calls. Moreover, if there are multiple metrics (e.g. execution time and total number of calls), multiple function calls are required. There is potential for inconsistency between metrics, or between metrics and logs, because an error could happen between the function calls. The main issue with this solution, however, is the hardcoding of Exometer as the metric library and the limitation of the Graphite protocol used to push the metrics to external services.

    Instrumentation rework in MongooseIM 6.3

    The lack of support for the modern and widespread Prometheus protocol was one of the main reasons for the complete rework of instrumentation in version 6.3. Let’s see the updated diagram of MongooseIM instrumentation:

    The most noticeable difference is that in the instrumented code, there is just one event emitted. Such an event is identified by its name and a key-value map of labels and contains measurements (with optional metadata) organised in a key-value map. Each event has to be registered before its instances are emitted with particular measurements. The point of this preliminary step is not only to ensure that all events are handled but also to provide additional information about the event, e.g. the measurement keys that will be used to update metrics. Emitted events are then handled by configurable handlers. Currently, there are three such handlers. Exometer and Logger work similarly as before, but there is a new Prometheus handler as well, which stores the metrics internally in a format compatible with Prometheus and exposes them over an HTTP API. This means that any external service can now scrape the metrics using the Prometheus protocol. The primary use case would be Prometheus for metrics collection and a graphical tool like Grafana for display. If you prefer InfluxDB version 2, however, you can easily configure a scraper, which would periodically put new data into InfluxDB.
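
    To make this concrete, here is a rough sketch of what registering and emitting such an event could look like. The event name, labels, and measurement keys below are invented for illustration, and the exact mongoose_instrument API may differ; check the MongooseIM documentation for the real signatures:

     %% Register the event once, declaring which measurement keys the
     %% configured handlers may turn into metrics (illustrative only):
     mongoose_instrument:set_up(xmpp_element_in, #{host_type => HostType},
                                #{metrics => #{count => spiral, time => histogram}}),

     %% In the instrumented code, a single call replaces what used to be
     %% separate metric updates and log statements:
     mongoose_instrument:execute(xmpp_element_in, #{host_type => HostType},
                                 #{count => 1, time => Duration}).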

    As you can see in the diagram, logs can also be emitted directly, bypassing the instrumentation API. This is the case for multiple logs in the system, because often there is no need for any metrics, and a log message is enough. In the future though, we might decide to fully replace logs with instrumentation events, because they are more extensible.

    Apart from supporting the Prometheus protocol, additional benefits of the new solution include easier configuration, extensibility, and the ability to add more handlers in the future. You can also have multiple handlers enabled simultaneously, allowing you to gradually change your metric backend from Exometer to Prometheus. Conversely, you can also disable all instrumentation, which was not possible prior to version 6.3. Although it might make little sense at first glance, because it can render the system a black box, it can be useful to gain extra performance in some cases, e.g. if the external metrics like CPU usage are enough, in case of an isolated embedded system, or if resources are very limited.

    The table below compares the legacy metrics solution with the new instrumentation framework:

    Solution | Legacy: mongoose_metrics | New: mongoose_instrument
    Intended use | Metrics | Metrics, logs, distributed tracing, alarms, …
    Coupling with handlers | Tight: hardcoded Exometer logic, one metric update per function call | Loose: events separated from configurable handlers
    Supported handlers | Exometer is hardcoded | Exometer, Prometheus, Log
    Events identified by | Exometer metric name (a list) | Event name, labels (key-value map)
    Event value | Single-dimensional numerical value | Multi-dimensional measurements with metadata
    Consistency checks | None – it is up to the implementer to verify that the correct metric is created and updated | Each event is registered up front, together with the measurement keys used to update metrics
    API | GraphQL / CLI and REST | Prometheus HTTP endpoint; legacy GraphQL / CLI / REST for Exometer

    There are about 140 events in total, and some of them have multiple dimensions. You can find an overview in the documentation. In terms of dashboards for tools like Grafana, we believe that each use case of MongooseIM deserves its own. If you are interested in getting one tailored to your needs, don’t hesitate to contact us.

    Using the instrumentation

    Let’s see the new instrumentation in action now. Starting with configuration, let’s examine the new additions to the default configuration file:

    [[listen.http]]
      port = 9091
      transport.num_acceptors = 10
    
      [[listen.http.handlers.mongoose_prometheus_handler]]
        host = "_"
        path = "/metrics"
    
    (...)
    
    [instrumentation.prometheus]
    
    [instrumentation.log]
    
    

    The first section, [[listen.http]] , specifies the Prometheus HTTP endpoint. The following [instrumentation.*] sections enable the Prometheus and Log handlers with the default settings – in general, instrumentation events are logged on the DEBUG level, but you can change it. This configuration is all you need to see the metrics at http://localhost:9091/metrics when you start MongooseIM.
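
    On the Prometheus side, a minimal scrape configuration for this endpoint could look like this (the job name and interval are arbitrary examples):

    scrape_configs:
      - job_name: "mongooseim"
        scrape_interval: 15s
        static_configs:
          - targets: ["localhost:9091"]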

    As a second example, let’s say that you want only the Graphite protocol integration. In this case, you might configure MongooseIM to use only the Exometer handler, which would push the metrics prefixed with mim to the influxdb1 host every 60 seconds:

    [[instrumentation.exometer.report.graphite]]
      interval = 60_000
      prefix = "mim"
      host = "influxdb1"
    


    There are more options possible, and you can find them in the documentation.

    Tracing – ad-hoc instrumentation

    There is one more type of observability available in Erlang systems, which is tracing. It enables a user to have a more in-depth look into the Erlang processes, including the functions being called and the internal messages being exchanged. It is meant to be used by Erlang developers, and should not be used in production environments because of the impact it can have on a running system. It is good to know, however, because it could be helpful to diagnose unusual issues. To make tracing more user-friendly, MongooseIM now includes erlang_doctor with some MongooseIM-specific utilities (see the tr_util module). This tool provides low-level ad-hoc instrumentation, allowing you to instrument functions in a running system, and gather the resulting data in an in-memory table, which can then be queried, processed, and – if needed – exported to a file. Think of it as a backup solution, which could help you diagnose hidden issues, should you ever experience one.
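
    As an illustration, a quick ad-hoc session could look roughly like this (a sketch based on the erlang_doctor README; see tr_util and the erlang_doctor documentation for the exact API):

     %% Start tracing calls to a module; traces go to an in-memory ETS table.
     tr:trace([mod_muc_room]).
     %% ... reproduce the behaviour you want to inspect ...
     tr:stop_tracing().
     %% Query the collected call/return traces, then process or export them.
     tr:select().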

    CockroachDB – a database that scales with MongooseIM

    MongooseIM works best when paired with a relational database like PostgreSQL or MySQL, enabling easy cluster node discovery with CETS and persistent storage for users’ accounts, archived messages and other kinds of data. Although such databases are not horizontally scalable out of the box, you can use managed solutions like Amazon Aurora, AlloyDB or Azure Cosmos DB for PostgreSQL. The downsides are the possible vendor lock-in and the fact that you cannot host and manage the DB yourself. With version 6.3 however, the possibilities are extended to CockroachDB. This PostgreSQL-compatible distributed database can be used either as a provider-independent cloud-based solution or as an internally hosted cluster. You can instantly set it up in your local environment and take advantage of the horizontal scalability of both MongooseIM and CockroachDB. If you want to learn how to deploy both MongooseIM and CockroachDB in Kubernetes, see the documentation for CockroachDB and the Helm chart for MongooseIM, together with our recent blog post about setting up an auto-scalable cluster. If you are interested in having an auto-scalable solution deployed for you, please consider our MongooseIM Autoscaler.
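
    If you want to try this locally first, a single-node CockroachDB instance can be started with Docker (an illustrative, insecure setup, not for production). MongooseIM can then be pointed at port 26257, which speaks the PostgreSQL wire protocol:

    docker run -d --name roach \
      -p 26257:26257 -p 8080:8080 \
      cockroachdb/cockroach:latest start-single-node --insecure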

    Summary

    MongooseIM 6.3.0 opens new possibilities for observability – the Prometheus protocol is now supported out of the box, with a reworked instrumentation layer underneath that guarantees ease of future extension. Regarding database integration, you can now use CockroachDB to store all your persistent data. Apart from these changes, the latest version introduces a multitude of improvements and updates – see the release notes for more information. As the next step, we recommend visiting our product page to see the possible options of support and the services we offer. You can also try the server out at trymongoose.im. In any case, should you have any further questions, feel free to contact us.

    The post MongooseIM 6.3: Prometheus, CockroachDB and more appeared first on Erlang Solutions.

      ProcessOne: Docker: set up ejabberd and keep it updated automagically with Watchtower

      news.movim.eu / PlanetJabber · Tuesday, 12 November, 2024 - 14:15 · 5 minutes

    This blog post will guide you through the process of setting up an ejabberd Community Server using Docker and Docker Compose, and will also introduce Watchtower for automatic updates. This approach ensures that your configuration remains secure and up to date.

    Furthermore, we will examine the potential risks associated with automatic updates and suggest Diun as an alternative tool for notification-based updates.

    1. Prerequisites

    Please ensure that Docker and Docker Compose are installed on your system.
    It would be beneficial to have a basic understanding of Docker concepts, including containers, volumes, and bind-mounts.

    2. Set up ejabberd in a docker container

    Let’s first create a minimal Docker Compose configuration to start an ejabberd instance.

    2.1: Prepare the directories

    For this setup, we will create a directory structure to store the configuration, database, and logs. This will assist in maintaining an organised setup, facilitating data management and backup.

    mkdir ejabberd-setup && cd ejabberd-setup
    touch docker-compose.yml
    mkdir conf
    touch conf/ejabberd.yml
    mkdir database
    mkdir logs
    

    This should give you the following structure:

    ejabberd-setup/
    ├── conf
    │   └── ejabberd.yml
    ├── database
    ├── docker-compose.yml
    └── logs
    

    To verify the structure, use the tree command. It is a very useful tool which we use on a daily basis.

    Set permissions

    Since we'll be using bind mounts in this example, it's important to ensure that specific directories (like database and logs) have the correct permissions for the ejabberd user inside the container (UID 9000, GID 9000).

    Customize or skip depending on your needs:

    sudo chown -R 9000:9000 database
    sudo chown -R 9000:9000 logs
    

    Based on this Issue.

    2.2: The docker-compose.yml file

    Now, create a docker-compose.yml file inside, containing:

    services:
      ejabberd:
        image: ejabberd/ecs:latest
        container_name: ejabberd
        ports:
          - "5222:5222"  # XMPP Client
          - "5280:5280"  # Web Admin Interface, optional
        volumes:
          - ./database:/home/ejabberd/database
      - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
          - ./logs:/home/ejabberd/logs
        restart: unless-stopped
    

    2.3: The ejabberd.yml file

    A basic configuration file for ejabberd will be required. We will name it conf/ejabberd.yml.

    loglevel: 4
    hosts:
    - "localhost"
    
    acl:
      admin:
        user:
          - "admin@localhost"
    
    access_rules:
      local:
        allow: all
    
    listen:
      -
        port: 5222
        module: ejabberd_c2s
    
      -
        port: 5280                       # optional
        module: ejabberd_http            # optional
        request_handlers:                # optional
          "/admin": ejabberd_web_admin   # optional
    

    Did you know? Since 23.10, ejabberd now offers users the option to create or update the relevant MySQL, PostgreSQL or SQLite tables automatically with each update. You can read more about it here.

    3. Starting ejabberd

    Finally, we're set: you can run the following command to start your stack: docker-compose up -d

    Your ejabberd instance should now be running in a Docker container! Good job! 🎉

    From there, customize ejabberd to your liking! Naturally, in this example we're going to keep ejabberd in its barebones configuration, but we recommend that you configure it as you wish at this stage, to suit your needs (Domains, SSL, favorite modules, chosen database, admin accounts, etc.)

    Example: You could register your admin account at this stage

    To use the admin interface, you need to create an admin account. You can do so by running the following command:

    $ docker exec -it ejabberd bin/ejabberdctl register admin localhost very_secret_password
    > User admin@localhost successfully registered
    

    Once this step is complete, you will then be able to access the web admin interface at http://localhost:5280/admin.

    4. Set up automatic updates

    Finally, we come to the most interesting part: how do I keep my containers up to date?

    To keep your ejabberd instance up-to-date, you can use Watchtower, a Docker container that automatically updates other containers when new versions are available.

    Warning: Auto-updates are undoubtedly convenient, but they can occasionally cause issues if an update includes breaking changes. Always test updates in a staging environment and back up your data before enabling auto-updates. Further information can be found at the end of this post.

    If greater control over updates is required (for example, for mission-critical production servers or clusters), we recommend using Diun, which can notify you of available updates and allow you to decide when to apply them.

    4.1: Add Watchtower to your docker-compose.yml

    To include Watchtower, add it as a service in docker-compose.yml:

    services:
      ejabberd:
        image: ejabberd/ecs:latest
        container_name: ejabberd
        ports:
          - "5222:5222"  # XMPP Client
          - "5280:5280"  # Web Admin Interface, optional
        volumes:
          - ./database:/home/ejabberd/database
      - ./conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml
          - ./logs:/home/ejabberd/logs
        restart: unless-stopped
    
      watchtower:
        image: containrrr/watchtower
        container_name: watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_POLL_INTERVAL=3600 # Sets how often Watchtower checks for updates (in seconds).
          - WATCHTOWER_CLEANUP=true # Ensures old images are cleaned up after updating.
        restart: unless-stopped
    

    Watchtower offers a wide range of additional features, including the ability to set up notifications, exclude specific containers, and more. For further information, please refer to the Watchtower Docs.

    Once the docker-compose.yml has been updated, please bring it up using the following command: docker-compose up -d

    And... here you go, you're all set!

    5. Best Practices & closing words

    Watchtower will now perform periodic checks for updates to your ejabberd container and apply them automatically.

    To be fair, by default Watchtower will also update any other containers running on the same server. This behaviour can be controlled with environment variables (see Container Selection), which help exclude containers from updates.


    One important thing to understand is that Watchtower will only update containers tagged with the :latest tag.

    In an environment with numerous Docker containers, using the latest tag streamlines the process of automatic updates. However, it may introduce unanticipated changes with each new, disruptive update. Ideally, we recommend always setting a specific version like ejabberd/ecs:24.10 and deciding how/when to update it manually (especially if you're into infra-as-code).
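
    For instance, a pinned setup that also tells Watchtower to leave the container alone could look like this (the label is Watchtower's standard container-selection switch):

    services:
      ejabberd:
        image: ejabberd/ecs:24.10   # pinned version, updated manually
        labels:
          - "com.centurylinklabs.watchtower.enable=false"   # Watchtower skips this container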

    However, we recognise that some users may prefer the convenience of automatic updates; personally, that's what I do in my homelab, but I'm not scared to dig in if stuff breaks.


    tl;dr: For a small community server/homelab/personal instance, Watchtower will help keep things up to date with minimal effort. However, for bigger production environments, it is advisable to pin specific versions to ensure greater control and resilience, and to update them manually.

    With this setup, you now have a fully functioning XMPP server using ejabberd, with automatic updates. You can now start building your chat applications or integrate it with your existing services! 🚀

      ProcessOne: Thoughts on Improving Messaging Protocols — Part 2, Matrix

      news.movim.eu / PlanetJabber · Tuesday, 5 November, 2024 - 13:53 · 2 minutes

    Thoughts on Improving Messaging Protocols — Part 2, Matrix

    In the first part of this blog post, I explained how the Matrix protocol works, contrasted its design philosophy with XMPP, and discussed why these differences lead to performance costs in Matrix. Matrix processes each conversation as a graph of events, merged in real time [1].

    Merge operations can be costly in Matrix for large rooms, affecting database storage and load, as well as disk usage when memory is exhausted and the system starts swapping.

    That said, there is still room for improvement in the protocol. We have designed and tested slight changes that could make Matrix much more efficient for large rooms.

    A Proposal to Simplify and Speed Up Merge Operations

    Here is the rationale behind a proposal we have made to simplify and speed up merge operations:

    State resolution v2 uses certain graph algorithms, which can result in processing time at least linear in the number of state events in a room’s DAG, creating a significant load on servers.

    The goal of this issue is to discuss and develop changes to state resolution to achieve O(n log n) total processing time when handling a room with n state events (i.e., O(log n) on average) in realistic scenarios, while maintaining a good user experience.

    The approach described below is closer to state resolution v1 but seeks to address state resets in a different way.

    For more detail, you can read our proposal on the Matrix spec tracker: Make state resolution faster.

    In simpler terms, we propose adding a version associated with each event_id to simplify conflict management and introduce a heuristic that skips traversal of large parts of the graph.

    Impact of the Proposal

    From our initial assessment, in a very large room — such as one with 100,000 members — our approach could improve processing performance by 100x to 1000x, as the current processing cost scales with the number of users in the room. This improvement would enable smoother conversations, reduced lag, and more responsive interactions for end-users, while also reducing server infrastructure load and resource usage.

    While our primary goal is to improve performance in very large rooms, these changes benefit all users by reducing overall server load and improving processing times across various room sizes.

    We plan to implement this improvement in our own code to evaluate its real-world effectiveness while the Matrix team considers its potential value for the reference protocol.


    1. For those who remember, a conversation in Matrix is similar to the collaborative editing protocol built on top of XMPP for the Google Wave platform.
      Ignite Realtime Blog: Openfire 4.9.1 release

      news.movim.eu / PlanetJabber · Friday, 1 November, 2024 - 19:54 · 1 minute

    The Ignite Realtime community is happy to be able to announce the immediate availability of version 4.9.1 of Openfire, its cross-platform real-time collaboration server based on the XMPP protocol!

    4.9.1 is a bugfix and maintenance release. Among its most important fixes is one for a memory leak that affected all recent versions of Openfire (but was likely noticeable only on servers that see a high volume of users logging in and out). The complete list of changes that have gone into this release can be seen in the change log.

    Please give this version a try! You can download installers of Openfire here. Our documentation contains an upgrade guide that helps you update from an older version.

    The integrity of these artifacts can be checked with the following sha256sum values:

    8c489503f24e35003e2930873037950a4a08bc276be1338b6a0928db0f0eb37d  openfire-4.9.1-1.noarch.rpm
    1e80a119c4e1d0b57d79aa83cbdbccf138a1dc8a4086ac10ae851dec4f78742d  openfire_4.9.1_all.deb
    69a946dacd5e4f515aa4d935c05978b5a60279119379bcfe0df477023e7a6f05  openfire_4_9_1.dmg
    c4d7b15ab6814086ce5e8a1d6b243a442b8743a21282a1a4c5b7d615f9e52638  openfire_4_9_1.exe
    d9f0dd50600ee726802bba8bc8415bf9f0f427be54933e6c987cef7cca012bb4  openfire_4_9_1.tar.gz
    de45aaf1ad01235f2b812db5127af7d3dc4bc63984a9e4852f1f3d5332df7659  openfire_4_9_1_x64.exe
    89b61cbdab265981fad4ab4562066222a2c3a9a68f83b6597ab2cb5609b2b1d7  openfire_4_9_1.zip
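
    For example, on Linux you can verify a downloaded artifact like this (using the .deb as an illustration):

    $ echo "1e80a119c4e1d0b57d79aa83cbdbccf138a1dc8a4086ac10ae851dec4f78742d  openfire_4.9.1_all.deb" | sha256sum --check
    openfire_4.9.1_all.deb: OK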
    

    We would love to hear from you! If you have any questions, please stop by our community forum or our live groupchat. We are always looking for volunteers interested in helping out with Openfire development!

    For other release announcements and news, follow us on Mastodon or X.

      Erlang Solutions: Why you should consider machine learning for business

      news.movim.eu / PlanetJabber · Thursday, 31 October, 2024 - 10:30 · 10 minutes

    Adopting machine learning for business is necessary for companies that want to sharpen their competitive edge. With the global market for machine learning projected to reach an impressive $210 billion by 2030, businesses are actively seeking solutions that streamline processes and improve customer interactions.

    While organisations may already employ some form of data analysis, traditional methods often lack the sophistication needed to address the complexities of today’s market. Businesses that adopt machine learning unlock valuable data insights, make accurate predictions, and deliver personalised experiences that truly resonate with customers, ultimately driving growth and efficiency.

    What is Machine Learning?

    Machine learning (ML) is a subset of artificial intelligence (AI). It uses machine learning algorithms designed to learn from data, identify patterns, and make predictions or decisions without explicit programming. By analysing patterns in the data, a machine learning algorithm identifies key features that define a particular data point, allowing it to apply this knowledge to new, unseen information.

    Fundamentally data-driven, machine learning relies on vast information to learn, adapt, and improve over time. Its predictive capabilities allow models to forecast future outcomes based on the patterns they uncover. These models are generalisable, so they can apply insights from existing data to make decisions or predictions in unfamiliar situations.

    You can read more about machine learning and AI in our previous post.

    Approaches to Machine Learning

    Machine learning for business typically involves two key approaches: supervised and unsupervised learning, each suited to different types of problems. Below, we explain each approach and provide examples of machine learning use cases where these techniques are applied effectively.

    • Supervised Machine Learning: This approach demands labelled data, where the input is matched with the correct output. The algorithms learn to map inputs to outputs based on this training set, honing their accuracy over time.
    • Unsupervised Machine Learning: In contrast, unsupervised learning tackles unlabelled data, compelling the algorithm to uncover patterns and structures independently. This method can involve tasks like clustering and dimensionality reduction. While unsupervised techniques are powerful, interpreting their results can be tricky, leading to challenges in assessing whether the model is truly on the right track.
    Example of Supervised vs unsupervised learning

    Supervised learning uses historical data to make predictions, helping businesses optimise performance based on past outcomes. For example, a retailer might use supervised learning to predict customer churn. By feeding the algorithm data such as customer purchase history and engagement metrics, it learns to identify patterns that indicate a high risk of churn, allowing the business to implement proactive retention strategies.

    Unsupervised learning , on the other hand, uncovers hidden patterns within data. It is particularly useful for discovering new customer segments without prior labels. For instance, an e-commerce platform might use unsupervised learning to group customers by their browsing habits, discovering niche audiences that were previously overlooked.

    The Impact of Machine Learning on Business

    A recent McKinsey survey revealed that 56% of organisations use machine learning in at least one business function to optimise their operations. This growing trend shows how machine learning for business is becoming integral to staying competitive.

    The AI market as a whole is also on an impressive growth trajectory, projected to reach USD 407.0 billion by 2027.

    AI Global Market Forecast to 2030

    The market is expected to see an astounding compound annual growth rate (CAGR) of 35.7% through 2030, proving that machine learning is no longer just a trend; it’s becoming a core component of modern enterprises.

    Machine Learning for Business Use Cases

    Machine learning can be used in numerous ways across industries to enhance workflows. From image recognition to fraud detection, businesses are actively using AI to streamline operations.

    Image Recognition

    Image recognition, or image classification, is a powerful machine learning technique used to identify and classify objects or features in digital images.

    Artificial intelligence (AI) and machine learning (ML) are revolutionising image recognition systems by uncovering hidden patterns in images that may not be visible to the human eye. This technology allows these systems to make independent and informed decisions, significantly reducing the reliance on human input and feedback.

    As a result, visual data streams can be processed automatically at an ever-increasing scale, streamlining operations and enhancing efficiency. By harnessing the power of AI, businesses can leverage these insights to improve their decision-making processes and gain a competitive edge in their respective markets.

    It plays a crucial role in tasks like pattern recognition, face detection, and facial recognition, making it indispensable in the security and social media sectors.

    Fraud Detection

    With financial institutions handling millions of transactions daily, distinguishing between legitimate and fraudulent activity can be a challenge. As online banking and cashless payments grow, so too has the volume of fraud. A 2023 report from TransUnion revealed a 122% increase in digital fraud attempts in the US between 2019 and 2022.

    Machine learning helps businesses by flagging suspicious transactions in real-time, with companies like Mastercard using AI to predict and prevent fraud before it occurs, protecting consumers from potential theft.

    Speech Recognition

    Voice commands have become a common feature in smart devices, from setting timers to searching for shows.

    Thanks to machine learning, devices like Google Nest speakers and Amazon Blink security systems can recognise and act on voice inputs, making hands-free operation more convenient for users in everyday situations.

    Improved Healthcare

    Machine learning in healthcare has led to major improvements in patient care and medical discoveries. By analysing vast amounts of healthcare data, machine learning enhances the accuracy of diagnoses, optimises treatments, and accelerates research outcomes.

    For instance, AI systems are already employed in radiology to detect diseases in medical images, such as identifying cancerous growths. Additionally, machine learning is playing a crucial role in genomic research by uncovering patterns linked to genetic disorders and potential therapies. These advancements are paving the way for improved diagnostics and faster medical research, offering tremendous potential for the future of healthcare.

    Key applications of machine learning in healthcare include:

    • Developing predictive modelling
    • Improving diagnostic accuracy
    • Personalising patient care
    • Automating clinical workflows
    • Enhancing patient interaction

    Machine learning in healthcare utilises algorithms and statistical models to analyse large medical datasets, facilitating better decision-making and personalised care. As a subset of AI, machine learning identifies patterns, makes predictions, and continuously improves by learning from data. Different types of learning, including supervised and unsupervised learning, find applications in disease classification and personalised treatment recommendations.

    Chatbots

    Many businesses rely on customer support to maintain satisfaction. However, staffing trained specialists can be expensive and inefficient. AI-powered chatbots, equipped with natural language processing (NLP), assist by handling basic customer queries. This frees up human agents to focus on more complicated issues. Companies can provide more efficient and effective support without overburdening their teams.

    Each of these applications offers businesses the chance to streamline operations and improve customer experiences.

    Machine Learning Case Studies

    Machine learning for business is transforming industries by enabling companies to enhance their operations, improve customer experiences, and drive innovation.

    Here are a few machine learning case studies showing how leading organisations have integrated machine learning into their business strategies.

    PayPal

    PayPal, a worldwide payment platform, faced huge challenges in identifying and preventing fraudulent transactions.

    Machine learning for business PayPal case study


    To tackle this issue, the company implemented machine learning algorithms designed for fraud detection. These algorithms analyse various aspects of each transaction, including the transaction location, the device used, and the user’s historical behaviour. This approach has significantly enhanced PayPal’s ability to protect users and maintain the integrity of its payment platform.

    YouTube

    YouTube has long employed machine learning to optimise its operations, particularly through its recommendation algorithms. By analysing vast amounts of historical data, YouTube suggests videos to its viewers based on their preferences. Currently, the platform processes over 80 billion data points for each user, requiring large-scale neural networks, in use since 2008, to effectively manage this immense dataset.

    Machine learning for business YouTube case study

    Dell

    Recognising the importance of data in marketing, Dell’s marketing team sought a data-driven solution to enhance response rates and understand the effectiveness of various words and phrases. Dell partnered with Persado, a firm that leverages AI to create compelling marketing content. This collaboration led to an overhaul of Dell’s email marketing strategy, resulting in a 22% average increase in page visits and a 50% boost in click-through rates (CTR). Dell now utilises machine learning methods to refine its marketing strategies across emails, banners, direct mail, Facebook ads, and radio content.

    Machine learning for business case study Dell

    Tesla

    Tesla employs machine learning to enhance the performance and features of its electric vehicles. A key application is its Autopilot system, which combines cameras, sensors, and machine learning algorithms to provide advanced driver assistance features such as lane centring, adaptive cruise control, and automatic emergency braking.

    case study Tesla

    The Autopilot system uses deep neural networks to process vast amounts of real-world driving data, enabling it to predict driving behaviour and identify potential hazards. Additionally, Tesla leverages machine learning in its battery management systems to optimise battery performance and longevity by predicting behaviour under various conditions.

    Netflix

    Netflix is a leader in personalised content recommendations. It uses machine learning to analyse user viewing habits and suggest shows and movies tailored to individual preferences. This feature has proven essential for improving customer satisfaction and increasing subscription renewals. To develop this system, Netflix utilises viewing data, including viewing durations, metadata, release dates, and timestamps. It then employs collaborative filtering, matrix factorisation, and deep learning techniques to accurately predict user preferences.

    case study Netflix

    Benefits of Machine Learning in Business

    If you’re still contemplating the value of machine learning for your business, consider the following key benefits:

    • Automation across business processes: Machine learning automates key business functions, from marketing to manufacturing, boosting yield by up to 30%, reducing scrap, and cutting testing costs. This frees employees for more creative, strategic tasks.
    • Efficient predictive maintenance: ML helps manufacturers predict equipment failures, reducing downtime and extending machinery lifespan, ensuring operational continuity.
    • Enhanced customer experience and accurate sales forecasts: Retailers use machine learning to analyse consumer behaviour, accurately forecast demand, and personalise offers, greatly improving customer experience.
    • Data-driven decision-making: ML algorithms quickly extract insights from data, enabling faster, more informed decision-making and helping businesses develop effective strategies.
    • Error reduction: By automating tasks, machine learning reduces human error, letting employees focus on complex tasks and significantly minimising mistakes.
    • Increased operational efficiency: Automation and error reduction from ML lead to efficiency gains. AI systems like chatbots boost productivity by up to 54%, operating 24/7 without fatigue.
    • Enhanced decision-making: ML processes large data sets swiftly, turning information into objective, data-driven decisions, removing human bias and improving trend analysis.
    • Addressing complex business issues: Machine learning tackles complex challenges by streamlining operations and boosting performance, enhancing productivity and scalability.


    As organisations increasingly adopt machine learning, they position themselves not only to meet current demands but also to drive future innovation.

    Elixir and Erlang in Machine Learning

    As organisations explore machine learning tools, many are turning to the Erlang and Elixir programming languages to develop customised solutions that cater to their needs. Erlang’s fault tolerance and scalability make it ideal for AI applications, as described in our blog on adopting AI and machine learning for business. Additionally, Elixir’s concurrency features and simplicity enable businesses to build high-performance AI applications.

    Learn more about how to build a machine-learning project in Elixir here.

    Elixir, built on the Erlang virtual machine (BEAM), delivers excellent concurrency and low latency. Designed for real-time, distributed systems, Erlang prioritises fault tolerance and scalability, and Elixir builds on this foundation with a high-level, functional programming approach. By using pure functions and immutable data, Elixir reduces complexity and minimises unexpected behaviours in code. It excels at handling multiple tasks simultaneously, making it ideal for AI applications that need to process large amounts of data without compromising performance.

    Elixir’s simplicity in problem-solving also aligns perfectly with AI development, where reliable and straightforward algorithms are essential for machine learning. Furthermore, its distribution features make deploying AI applications across multiple machines easier, meeting the high computational demands of AI systems.

    With a rich ecosystem of libraries and tools, Elixir streamlines development, ensuring AI applications are scalable, efficient, and reliable. As AI and machine learning become increasingly vital to business success, creating high-performing solutions will become a key competitive advantage.

    Final Thoughts

    Embracing machine learning for business is no longer optional for companies that want to remain competitive. Machine learning tools empower businesses to make faster, data-driven decisions, streamline operations, and offer personalised customer experiences. Contact the Erlang Solutions team today if you’d like to discuss building AI systems using Elixir and Erlang, or for more insights into implementing machine learning solutions.

    The post Why you should consider machine learning for business appeared first on Erlang Solutions.

      ProcessOne: ejabberd 24.10

      news.movim.eu / PlanetJabber · Tuesday, 29 October, 2024 - 14:26 · 9 minutes

    ejabberd 24.10

    We’re excited to announce ejabberd 24.10, a major release packed with substantial improvements and support for important extensions specified by the XMPP Standard Foundation (XSF). This release represents three months of focused development, bringing around 100 commits to the core repository alongside key updates in dependencies. The improvements span enhanced security, streamlined connectivity, and new administrative tools—all designed to make ejabberd more powerful and easier to use than ever.

    Release Highlights:

    If you are upgrading from a previous version, please note minor changes in commands and two changes in hooks. There are no configuration or SQL schema changes in this release.

    Below is a detailed breakdown of the new features, fixes, and enhancements:

    Support for XEP-0288: Bidirectional Server-to-Server Connections

    The new mod_s2s_bidi module introduces support for XEP-0288: Bidirectional Server-to-Server Connections. This update removes the requirement for two connections per server pair in XMPP federations, allowing for more streamlined inter-server communications. However, for full compatibility, ejabberd can still connect to servers that do not support bidirectional connections, using two connections when necessary. The module is enabled by default in the sample configuration.

    Support for XEP-0480: SASL Upgrade Tasks

    The new mod_scram_upgrade module implements XEP-0480: SASL Upgrade Tasks. Compatible clients can now automatically upgrade encrypted passwords to more secure formats, enhancing security with minimal user intervention.
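
    Enabling the two new modules follows the usual ejabberd pattern; a minimal sketch for ejabberd.yml (default options assumed):

    modules:
      mod_s2s_bidi: {}
      mod_scram_upgrade: {}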

    PubSub Service Improvements

    We’ve implemented six noteworthy fixes to improve PubSub functionality:

    • PEP notifications are sent only to owners when +notify (3469a51)
    • Non-delivery errors for locally generated notifications are now skipped (d4b3095)
    • Fix default node config parsing (b439929)
    • Fix merging of default node options (ca54f81)
    • Fix choice of node config defaults (a9583b4)
    • Fall back to default plugin options (36187e0)

    IQ permission for privileged entities

    The mod_privilege module now supports IQ permission based on version 0.4 of XEP-0356: Privileged Entity. See #3889 for details. This feature is especially useful for XMPP gateways using the Slidge library.

    WebAdmin improvements

    The ejabberd 24.06 release laid the foundation for a more streamlined WebAdmin interface, reusing existing commands instead of page-specific code with possibly different logic. This major change allows developers to add new pages very quickly, just by calling existing commands. It also allows administrators to use the same commands as in ejabberdctl or any other command frontend.

    As a result, many new pages and content were added. Building on that, the 24.10 update introduces MAM (Message Archive Management) support, allowing administrators to view message counts, remove all MAM messages or only those for a specific contact, and view the MAM archive directly from WebAdmin.

    Additionally, WebAdmin now hides pages related to modules that are disabled, preventing unnecessary options from displaying. This affects mod_last, mod_mam, mod_offline, mod_privacy, mod_private, mod_roster, mod_vcard.

    Fixes in commands

    • set_presence : Now returns an error when the session is not found.

    • send_direct_invitation : Improved handling of malformed JIDs.

    • update : Fix command output. So far, ejabberd_update:update/0 returned the return value of release_handler_1:eval_script/1 . That function returns the list of updated but unpurged modules, i.e., modules where one or more processes are still running an old version of the code. Since commit 5a34020d23f455f80a144bcb0d8ee94770c0dbb1 , the ejabberd update command assumes that value to be the list of updated modules instead. As that seems more useful, modify ejabberd_update:update/0 accordingly. This fixes the update command output.

    • get_mam_count : New command to retrieve the number of archived messages for a specific account (see the example below).
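
    The new get_mam_count command can presumably be invoked like other per-account commands (the user/host argument order is an assumption; check ejabberdctl help get_mam_count):

    ejabberdctl get_mam_count admin localhost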

    Changes in hooks

    Two key changes in hooks:

    • New check_register_user hook in ejabberd_auth.erl to allow blocking account registration when a tombstone exists.

    • Modified room_destroyed hook in mod_muc_room.erl . Until now, the hook passed as arguments: LServer, Room, Host. Now it passes: LServer, Room, Host, Persistent. The new Persistent argument passes the room’s persistent option, required by mod_tombstones, because only persistent rooms should generate a tombstone; temporary ones should not. The persistent option should not be completely overwritten, as we must still know its real value even when the room is being destroyed.

    Log Erlang/OTP and Elixir versions

    During server start, ejabberd now shows in the log not only its version number, but also the Erlang/OTP and Elixir versions being used. This helps the administrator determine what software versions are in use, which is especially useful when investigating a problem and explaining it to others when asking for help.

    The ejabberd.log file now looks like this:

    ...
    2024-10-22 13:47:05.424 [info] Creating Mnesia disc_only table 'oauth_token'
    2024-10-22 13:47:05.427 [info] Creating Mnesia disc table 'oauth_client'
    2024-10-22 13:47:05.455 [info] Waiting for Mnesia synchronization to complete
    2024-10-22 13:47:05.591 [info] ejabberd 24.10 is started in the node :ejabberd@localhost in 1.93s
    2024-10-22 13:47:05.606 [info] Elixir 1.16.3 (compiled with Erlang/OTP 26)
    2024-10-22 13:47:05.606 [info] Erlang/OTP 26 [erts-14.2.5.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit:ns]
    
    2024-10-22 13:47:05.608 [info] Start accepting TCP connections at 127.0.0.1:7777 for :mod_proxy65_stream
    2024-10-22 13:47:05.608 [info] Start accepting UDP connections at [::]:3478 for :ejabberd_stun
    2024-10-22 13:47:05.608 [info] Start accepting TCP connections at [::]:1883 for :mod_mqtt
    2024-10-22 13:47:05.608 [info] Start accepting TCP connections at [::]:5280 for :ejabberd_http
    ...
    

    Brand new ProcessOne and ejabberd web sites

    We’re excited to unveil the redesigned ProcessOne website, crafted to better showcase our expertise in large-scale messaging across XMPP, MQTT, Matrix, and more. This update highlights our core mission of delivering scalable, reliable messaging solutions, with a fresh layout and streamlined structure that reflect our cutting-edge work in the field.

    You now get a cleaner ejabberd page , offering quick access to important URLs for downloads, blog posts, and documentation.

    Behind the scenes, we’ve transitioned from WordPress to Ghost, a move inspired by its efficient, user-friendly authoring tools and long-term maintainability. All previous blog content has been preserved, and with this new setup, we’re poised to deliver more frequent updates on messaging, XMPP, ejabberd, and related topics.

    We welcome your feedback—join us on our new site to share your thoughts, or let us know about any issue or broken link!

    Acknowledgments

    We would like to thank those who contributed source code, documentation, and translations to this release:

    And also to all the people contributing in the ejabberd chatroom, issue tracker...

    Improvements in ejabberd Business Edition

    Customers of the ejabberd Business Edition , in addition to all those improvements and bugfixes, also get MUC support in mod_unread .

    ejabberd keeps a counter of unread messages per conversation using the mod_unread module. This now also works in MUC rooms: each user can retrieve the number of unread messages in each of their rooms.

    ChangeLog

    This is a more detailed list of changes in this ejabberd release:

    Miscellanea

    • ejabberd_c2s : Optionally allow unencrypted SASL2
    • ejabberd_system_monitor : Handle call by gen_event:swap_handler ( #4233 )
    • ejabberd_http_ws : Remove support for old websocket connection protocol
    • ejabberd_stun : Omit auth_realm log message
    • ext_mod : Handle info message when contrib module transfers table ownership
    • mod_block_strangers : Add feature announcement to disco-info ( #4039 )
    • mod_mam : Advertise XEP-0424 feature in server disco-info ( #3340 )
    • mod_muc_admin : Better handling of malformed jids in send_direct_invitation command
    • mod_muc_rtbl : Fix call to gen_server:stop ( #4260 )
    • mod_privilege : Support "IQ permission" from XEP-0356 0.4.1 ( #3889 )
    • mod_pubsub : Don't blindly echo PEP notification
    • mod_pubsub : Skip non-delivery errors for local pubsub generated notifications
    • mod_pubsub : Fall back to default plugin options
    • mod_pubsub : Fix choice of node config defaults
    • mod_pubsub : Fix merging of default node options
    • mod_pubsub : Fix default node config parsing
    • mod_register : Support to block IPs in a vhost using append_host_config ( #4038 )
    • mod_s2s_bidi : Add support for S2S Bidirectional
    • mod_scram_upgrade : Add support for SCRAM upgrade tasks
    • mod_vcard : Return error stanza when storage doesn't support vcard update ( #4266 )
    • mod_vcard : Return explicit error stanza when user attempts to modify other's vcard
    • Minor improvements to support mod_tombstones (#2456)
    • Update fast_xml to use use_maps and remove obsolete elixir files
    • Update fast_tls and xmpp to improve s2s fallback for invalid direct tls connections
    • make-binaries : Bump dependency versions: Elixir 1.17.2, OpenSSL 3.3.2, ...

    Administration

    • ejabberdctl : If ERLANG_NODE lacks host, add hostname ( #4288 )
    • ejabberd_app : At server start, log Erlang and Elixir versions
    • MySQL: Fix column type of the archive table in the schema update

    Commands API

    • get_mam_count : New command to get number of archived messages for an account
    • set_presence : Return error when session not found
    • update : Fix command output
    • Add mam and offline tags to the related purge commands

    Code Quality

    • Fix warnings about unused macro definitions reported by Erlang LS
    • Fix Elvis report: Fix dollar space syntax
    • Fix Elvis report: Remove spaces in weird places
    • Fix Elvis report: Don't use ignored variables
    • Fix Elvis report: Remove trailing whitespace characters
    • Define the types of options that opt_type.sh cannot derive automatically
    • ejabberd_http_ws : Fix dialyzer warnings
    • mod_matrix_gw : Remove useless option persist
    • mod_privilege : Replace try...catch with a clean alternative

    Development Help

    • elvis.config : Fix file syntax, set vim mode, disable many tests
    • erlang_ls.config : Let it find paths, update to Erlang 26, enable crossref
    • hooks_deps : Hide false-positive warnings about gen_mod
    • Makefile : Add support for make elvis when using rebar3
    • .vscode/launch.json : Experimental support for debugging with Neovim
    • CI: Add Elvis tests
    • CI: Add XMPP Interop tests
    • Runtime: Cache hex.pm archive from rebar3 and mix

    Documentation

    • Add links in top-level options documentation to their Docs website sections
    • Document which SQL servers can really use update_sql_schema
    • Improve documentation of ldap_servers and ldap_backups options ( #3977 )
    • mod_register : Document behavior when access is set to none ( #4078 )

    Elixir

    • Handle case when elixir support is enabled but not available
    • Start ExSync manually to ensure it's started if (and only if) Relive is used
    • mix.exs : Fix mix release error: logger being regular and included application ( #4265 )
    • mix.exs : Remove from extra_applications the apps already defined in deps ( #4265 )

    WebAdmin

    • Add links in user page to offline and roster pages
    • Add new "MAM Archive" page to webadmin
    • Improve many pages to handle when modules are disabled
    • mod_admin_extra : Move some webadmin pages to their modules

    Full Changelog

    https://github.com/processone/ejabberd/compare/24.07...24.10

    ejabberd 24.10 download & feedback

    As usual, the release is tagged in the Git source code repository on GitHub .

    The source package and installers are available on the ejabberd Downloads page. To check the *.asc signature files, see How to verify ProcessOne downloads integrity .
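
    As a quick illustration (the file names here are hypothetical), verifying a download against its detached signature with GnuPG, after importing the ProcessOne signing key, looks like this:

    gpg --verify ejabberd-24.10-linux-x64.run.asc ejabberd-24.10-linux-x64.run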

    For convenience, there are alternative download locations like the ejabberd DEB/RPM Packages Repository and the GitHub Release / Tags .

    The ecs container image is available in docker.io/ejabberd/ecs and ghcr.io/processone/ecs . The alternative ejabberd container image is available in ghcr.io/processone/ejabberd .

    If you believe you've found a bug, please search for it or file a bug report on GitHub Issues .


      Erlang Solutions: Implementing Phoenix LiveView: From Concept to Production

      news.movim.eu / PlanetJabber · Thursday, 24 October, 2024 - 09:22 · 6 minutes

    When I began working with Phoenix LiveView , the project evolved from a simple backend service into a powerful, UI-driven customer service tool. A basic Phoenix app for storing user data quickly became a core part of our client’s workflow.

    In this post, I'll take you through a project that grew beyond its original purpose: from a service for storing and serving user data to a LiveView-powered application that is now a key customer service tool in the client's organisation.

    Why We Chose Phoenix LiveView

    Our initial goal was to migrate user data from an external, paid service to a new in-house solution, developed collaboratively by Erlang Solutions (ESL) and the client’s teams.

    With millions of users, we needed a simple way to verify migrated data without manually connecting to the container and querying the database every time.

    Since the in-house service was a Phoenix application that uses Ecto and Postgres, adding LiveView was the most natural fit.

    Implementing Phoenix LiveView: Data Migration and UI Development

    After we had established the goal, the next step was to create a database service to store and serve user information to other services, as well as to migrate all existing user data from an external service to the new one.

    We chose Phoenix with Ecto and Postgres, as the old database was already connected to a Phoenix application , and the client’s team was well-versed in Elixir and BEAM .

    Data Migration Strategy

    The ESL and client teams' strategy began by slowly copying user data from the old service to the new database whenever users logged in. For certain users (e.g., developers), we logged them in and pulled their user information only from the new system. We defined a new login session struct (an Elixir struct), which we used for pattern matching to determine whether to use the old or the new system. The old system was treated as the fallback and the source of truth for user data.
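
    As an illustration, here is a minimal sketch of that struct-based routing. All names (LoginSession, the :source field, NewDB, LegacyAPI) are hypothetical stand-ins, not the client's actual code:

    defmodule Accounts do
      defmodule LoginSession do
        # Hypothetical session struct; :source records which backend owns this user.
        defstruct [:user_id, :email, source: :legacy]
      end

      # Route reads to the new or the old system by matching on the struct.
      def fetch_user(%LoginSession{source: :new, user_id: id}), do: NewDB.get_user(id)

      def fetch_user(%LoginSession{source: :legacy, user_id: id}) do
        # The old service remains the fallback and source of truth during migration.
        LegacyAPI.get_user(id)
      end
    end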

    Phoenix LiveView Migration to in-house database

    With this strategy, we could develop and test the new database system in parallel with the old one in production, without affecting regular users, and ensure that everything worked as expected.

    In the end, we performed a data dump for all users and configured the service to use the new system as the main source of truth. Since we had tested with a small number of users beforehand, the transition was smooth, and users had no idea anything had changed from their end. Response times were cut in half compared to the previous solution!

    The Evolution of LiveView Application

    The idea of adding LiveView to the application first came up when the ESL and client teams wanted to check the test migration data. The team wanted to be able to cross-reference immediately whether user data had been inserted or updated as intended in our new service. At first this was complicated and cumbersome, as we had to connect to the application remotely and run a manual query or call an internal function from a remote Elixir shell.

    Phoenix LiveView: Evolution of the LiveView Application

    Initially, LiveView was developed solely for the team. We started with a simple table listing users, then added search functionality for IDs or emails, followed by pagination as the test data grew. With this simple LiveView UI in place, we started the data migration process, and the UI helped tremendously when verifying whether the data had been migrated correctly and how many users we had successfully migrated.
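
    Conceptually, that first iteration looked something like the condensed sketch below. The module and context function names (AdminWeb.UsersLive, Accounts.search_users) are hypothetical, not the client's code:

    defmodule AdminWeb.UsersLive do
      use Phoenix.LiveView

      @per_page 50

      def mount(_params, _session, socket) do
        {:ok, assign(socket, query: "", page: 1, users: list_users("", 1))}
      end

      # Triggered by the phx-change binding on the search form below.
      def handle_event("search", %{"q" => q}, socket) do
        {:noreply, assign(socket, query: q, page: 1, users: list_users(q, 1))}
      end

      def handle_event("next-page", _params, socket) do
        page = socket.assigns.page + 1
        {:noreply, assign(socket, page: page, users: list_users(socket.assigns.query, page))}
      end

      def render(assigns) do
        ~H"""
        <form phx-change="search"><input type="text" name="q" value={@query} /></form>
        <table>
          <tr :for={user <- @users}><td><%= user.id %></td><td><%= user.email %></td></tr>
        </table>
        <button phx-click="next-page">Next page</button>
        """
      end

      defp list_users(q, page) do
        # Hypothetical Ecto-backed context function filtering by ID or email.
        Accounts.search_users(q, limit: @per_page, offset: (page - 1) * @per_page)
      end
    end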

    Adoption and Expansion of the LiveView Tool

    As we demonstrated the UI to stakeholders, it quickly became the go-to tool for customer service, with new features continuously added based on feedback. The development team received many requests from customer service and other managers in the client’s organisation. We fulfilled these requests with features such as searching users by a combination of fields, helping change users’ email addresses, and checking user activity (e.g., when a user’s email was changed or if users suspected they had been hacked).

    Later, we connected the LiveView application to sync and display data from another internal service, which contained information about users’ access to the client’s product. The customer service team was able to get a more complete view of the user and could use the same tool to grant or sync user access without switching to other systems.

    The best aspect of using Phoenix LiveView is that the development team also owned the UI. We determined the data structure, knew what needed to be there, and designed the LiveView page ourselves. This removed the need to rely on another team, and we could reflect changes swiftly in the web views without having to coordinate with external teams.

    Challenges and Feedback from Implementing Phoenix LiveView

    There were some glitches along the way, and when we asked for feedback from the customer service team, we found several UX aspects that could be improved. For example, data didn’t always update immediately, or buttons occasionally failed to work properly. However, these issues also indicated that the Phoenix LiveView application was used heavily by the team, emphasising the need for improvements to support better workflows.

    While our LiveView implementation worked well, it wasn't without imperfections. Most of our development team lacked extensive web development experience, so there were several aspects we either overlooked or didn't fully consider. Some team members had a basic knowledge of web technologies like Tailwind and CSS/HTML, which helped guide us, but we realised that basic HTML/CSS skills alone wouldn't be sufficient to create an optimal LiveView application with a polished user experience (UX) and a smooth interface.

    Another challenge was infrastructure. Since our service was read-heavy, we used AWS RDS reader instances to maximise performance, but this led to occasional replication delays. These delays could cause mismatches when customer service updated data and LiveView reloaded the page before the updates had replicated to the reader instances. We had to carefully consider when it was appropriate to use the reader instances and adjust our approach accordingly.
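
    For context, the read/write split we used follows the replica pattern from Ecto's documentation: a primary repo plus a read-only repo pointing at the reader endpoint. The sketch below is illustrative, with hypothetical application names; the key point is reading back from the primary right after a write:

    defmodule MyApp.Repo do
      use Ecto.Repo, otp_app: :my_app, adapter: Ecto.Adapters.Postgres
    end

    defmodule MyApp.Repo.Replica do
      # Configured in runtime config to point at the AWS RDS reader endpoint.
      use Ecto.Repo, otp_app: :my_app, adapter: Ecto.Adapters.Postgres, read_only: true
    end

    defmodule MyApp.Accounts do
      # Plain reads can go to the reader instance.
      def get_user(id), do: MyApp.Repo.Replica.get(MyApp.User, id)

      # After a customer-service write, read back from the primary, so a
      # LiveView reload never shows pre-replication data.
      def update_user!(changeset) do
        user = MyApp.Repo.update!(changeset)
        MyApp.Repo.get!(MyApp.User, user.id)
      end
    end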

    Team Dynamics and Collaboration

    Mob programming was also one of the factors that led to the success of this project. Our team consisted of members with different areas of expertise. By working together, we could discuss and share our experience while programming, instead of having to explain later, in code reviews or knowledge-sharing sessions, what each of us had implemented and why. For example, we guided a member with more Erlang/OTP experience through creating a form with LiveView, which required more experience with Ecto and Phoenix. That member could then explain and guide others through OTP-related implementations in our services.

    Mob programming helped our team focus on one large task at a time. This collaborative approach ensured a consistent codebase with unified conventions, leading to efficient feature implementation.

    Conclusion

    What began as a simple backend project with Phoenix and Ecto evolved into a key tool for customer service, driven by the power of Phoenix LiveView. The Admin page, initially unplanned, became an integral part of the client’s workflow, proving the vast potential of LiveView and Elixir.

    Though we encountered challenges, LiveView’s real-time interactivity, seamless integration, and developer control over both the backend and UI were invaluable. We believe we’ve only scratched the surface of what developers can achieve with LiveView.

    Want to learn more about LiveView? Check out this article . If you’re exploring Phoenix LiveView for your project, feel free to reach out —we’d love to share our experience and help you unlock its full potential.

    The post Implementing Phoenix LiveView: From Concept to Production appeared first on Erlang Solutions .

    This post is public: www.erlang-solutions.com/blog/implementing-phoenix-liveview-from-concept-to-production/