Webinar: Getting started with UMH and platform updates

In this session, Mateusz Warda (Head of Value Engineering) and Alexander Krüger (Co-Founder & CEO) walk through how to deploy UMH in a production environment using Docker Compose and how to configure the redesigned Management Console, including the new data flows UI, CSV-based tag configuration, and bridge templating for multi-machine rollouts.

If you are deploying your first UMH instance or looking to replicate data integrations across production lines, this session covers each step: architecture, deployment, bridge configuration, data modeling, and enterprise-wide rollout.

"A product that only works for power users cannot scale inside an organization. You need people with very different technical backgrounds to work in the same tool." - Alexander Krüger

The problem

Manufacturers scaling beyond a first pilot hit the same barrier. The integration that works for one machine on one production line does not replicate. Each new use case requires re-wiring data sources, re-mapping tags, and re-building dashboards. Technical debt compounds faster than the business value it was meant to deliver.

What changed

This webinar addresses two updates UMH introduced to break that cycle.

First, a deployment model built for existing IT infrastructure, whether that is Docker Compose on a Linux VM or a managed Kubernetes cluster.

Second, a Management Console redesign that enables both IT and OT teams to configure, model, and scale data integrations without writing YAML or switching between disconnected tools. OT teams work in the table UI and export to CSV. IT teams work in the code view. Both edit the same configuration.


💡 Stay updated with the latest features and updates here: https://changelog.umh.app/

Webinar Agenda

Introductions

  • Speakers: Mateusz Warda (Head of Value Engineering) and Alexander Krüger (Co-Founder & CEO).
  • Audience poll on UMH familiarity and available IT infrastructure (Docker, Kubernetes, Windows/Linux VMs).

Product Vision Recap

  • The Unified Namespace as a single integration point: integrate each data source once, standardize it, and build use cases on top.
  • Why data collection without contextualization and business value delivery is insufficient.

Architecture Overview

  • On-premise UMH stack: TimescaleDB for historical storage, Grafana for visualization, optional MQTT broker.
  • UMH Core instances deployed in segregated machine networks for secure, isolated data collection.
  • Kafka-based data synchronization between instances with store-and-forward guarantees.
  • Enterprise cloud integration via outbound, end-to-end encrypted connections to SAP S/4HANA, hyperscalers, or internal data warehouses.

Deploying UMH in Production

  • Docker Compose as the standard deployment path for site-level installations.
  • Minimal setup vs. advanced configurations (Nginx reverse proxy, TLS certificates).
  • Guidance for air-gapped and highly regulated environments.
  • IEC 62443 security considerations and network segmentation best practices.

The New Data Flows UI

  • Three data flow types: Bridges (machine connectivity), Stream Processors (on-site KPI calculations), and Standalone integrations (ERP/MES interfaces).
  • Bridge Configuration: ISA-95 location hierarchy, OPC UA connection settings, and the read flow table for tag-level mapping with metadata, units, and data contracts.
  • CSV Import/Export: Export tag lists to CSV for enrichment by OT teams in Excel or via scripting, then re-import with metadata and data contracts applied.
  • Conditional Formatting: Rule-based auto-mapping using node ID patterns, tag names, or virtual paths with AND/OR logic.
  • Code View: Full YAML/JSON access for IT teams alongside the graphical UI, so both audiences can work in their preferred environment.

Templating and Scaling

  • Copy an existing bridge configuration as a template, modify the connection parameters (IP address, location), and deploy to additional machines.
  • Current approach uses copy-based templating; reference-based templating with centralized governance is on the near-term roadmap.

Live Dashboard Demo

  • Dynamic Grafana dashboards built on SQL queries against TimescaleDB.
  • ISA-95 hierarchy enables dynamic asset selection and automatic dashboard population as new machines are connected.

Roadmap Discussion

  • CLI-first approach over MCP for enterprise-grade security, authorization, and auditability.
  • Deterministic, auditable AI pipelines vs. non-deterministic LLM-based queries for production KPI calculations.
  • Three-layer AI strategy: CLI for configuration, structured APIs for data access, and deterministic pipelines for reporting.

Q&A Highlights

  • Enterprise UNS aggregation across multiple sites.
  • Performance and scalability guidance (sizing guide, bridge sub-process architecture).
  • Relational data contextualization using stream processors with Kafka caches.
  • Node-RED status and its role alongside the Management Console.
  • Security model for air-gapped deployments, including outbound-only encrypted connections and payload-level encryption.

Chapters

00:00 – Welcome & Introductions
01:41 – Audience Poll: UMH Familiarity & IT Environment
03:18 – Bridging IT and OT in a Single Platform
05:35 – Product Vision & UNS
09:22 – Architecture Overview
15:17 – Deploying UMH
22:28 – Data Flows UI
27:32 – CSV Import/Export
32:33 – Roadmap & AI Strategy
36:15 – Bridge Templating
40:46 – Q&A
55:41 – Closing


Transcript

[00:00] Welcome & Introductions

Mateusz Warda: I'm Mateusz Warda, Head of Value Engineering, and I'm supported today by my co-host Alex. Would you like to introduce yourself?

Alexander Krüger: Great that everybody is able to join today. I'm Alex, one of the co-founders, mechanical engineer by background. I transitioned into system integration and then to UMH. I can share some context on the high-level vision: what we wanted to achieve, what we adjusted in the product, and why. Looking forward to the updates today.

Mateusz: Two sentences about myself as well. I'm also a mechanical engineer by background, then spent five years at McKinsey doing digital transformation with manufacturers. Now I lead value engineering, focusing on defining projects, building the business case, and implementing with our value engineers on site.

Today's agenda: a quick intro to UMH, a poll to understand who is already familiar with the product and who is not, then how to deploy UMH in a production environment, and finally the latest features within the Management Console. We also have one or two examples of what you can do once you build up your UNS and standardize data.

Alex: Then 10 to 15 minutes of Q&A. Feel free to put questions into the chat. Our colleague will capture them and we will respond at the end of the session.

This should be more of a conversation than a presentation. We are explaining what we changed and want your feedback. Nothing is set in stone, so we welcome critical questions.

[01:41] Audience Poll: UMH Familiarity & IT Environment

Mateusz: Quick poll. Are you already familiar with the product? Are you using it in a community or enterprise version? Or is this your first touch point? Second question: what IT services are provided to you? Docker environment, managed Kubernetes, or a Linux/Windows VM?

Alex: The question should be visible shortly.

Mateusz: Let's give everyone a minute or two to respond.

Alex: While everybody fills in the poll: the journey over the last years has moved us from a code-centric product, built on Kubernetes and YAML files, toward making the product accessible to a wider range of people. Not only IT, but also OT. This is one of the most significant usability updates we are presenting today, and we can measure that. Configuration steps that required YAML editing now happen in the table UI with no code at all.

Mateusz: Our value engineers see this directly. The feature simplifies their workflow. It is one thing to have a product work for power users, but as you scale inside an organization, you need people with different capability sets and different technical backgrounds working in the same tool. It has to work for both.

[04:22] Poll Results & Discussion

Mateusz: Most of you are currently evaluating UMH, so we'll spend a bit more time on the product. Docker as a service: 10 out of 17. Managed Kubernetes: roughly half of the participants. Good to see companies moving forward. A year ago, this was under 20%.

Alex: Perhaps our question was imprecise. Managed Kubernetes may be available in the organization, but is it actually available at the site level?

Mateusz: Good to see that we can now run containerized applications, but Windows is still in the lead.

[05:35] Product Vision: From Spaghetti Diagrams to Unified Namespace

Mateusz: Most of you are familiar with the industrial standard when it comes to integrating your current tech stack: the spaghetti diagram. Different network segments. Difficult to understand what is coming from which data source. Just integrating for a single use case can require five, six, seven data sources. And that covers one use case. Replicating for a second or third is difficult.

Scaling is difficult because every single integration has to be repeated. You run into performance issues, network issues, and it collapses while you're still looking for the business value.

Where we're heading: build up the single integration point, single source of truth, the Unified Namespace. You integrate every data source once, standardize it, model the data correctly, and have it all in a human-readable ISA-95 hierarchy. Then whenever you want to build a use case, new integration, or automation workflow, you connect to the UNS. Single source, all standardized, easy to templatize integrations and scale.

Within the UNS, we see connectivity and modeling as a core feature, but also building the first use cases. Monitoring, automation workflows, operator input on machine states, CMMS ticket creation, reducing mean time to repair.

All of this fits into the product design. It is not about collecting data. It is about the next step: where is the actual business value? Data without action is inventory, not an asset.

[07:45] Data Modeling and Contextualization

Alex: When we started the company, data availability was an issue. But even if you solve connectivity, context was the missing piece. Not the data itself, but the standardization across assets, across business processes, across data types and formats. That standardization enables standardized applications. Think about one dashboard for CNC and milling that automatically populates as you connect mills and CNC machines.

This is what the modeling does: capturing context for the data and making integration to the application landscape repeatable. If you are building a dashboard, straightforward. If you are connecting to an MES system that expects a specific transaction type, like "work order finished," it should look the same regardless of where it comes from. You can model that in our system and then do one integration to the target system.

The platform includes the capabilities to build this end to end. Not only a single piece of the integration stack, but everything from data source to use case.

[09:22] Open Source Philosophy & Architecture

Mateusz: Our philosophy is that the infrastructure, integrations, standardizations, and initial use cases should run on premise. And they should be open source. There is nothing worse than installing something, investing thousands of hours in integrations, and then not being able to reuse it or fighting over pricing.

Alex: The fundamental design decision: it's yours. Your data, your platform. There are vendors who believe they will own the platform, whether in the cloud or on an ERP platform building a data sphere. We do not believe in that. The infrastructure and the data belong to the business. You decide where to push it and where it provides value. That is an integral part of your autonomy and governance. We open source so that you do not need us. We hope we can provide value on top.

Mateusz: In the middle you see the UMH stack with TimescaleDB, Grafana for historical data and initial use cases, and optionally an MQTT broker. Below, in the individual segregated machine networks: this could be a production hall or a specific machine OEM network where you provide remote access only to this network. You segregate this network, but how do you access it properly? How do you extract the data, standardize it, and push it upwards?

For this, you can deploy a small UMH Core within these instances. Important: we do not penalize you by instance count; it still counts as one site. You can have as many instances as you need to build up a secure network architecture.

[11:19] Network Segmentation with UMH Core Instances

Alex: The blue arrows between those two instances represent something we are proud of. We synchronize via our own API based on Kafka. Data is buffered locally and forwarded only while the connection is intact. Even in a highly separated, highly distributed setup, data stays consistent and becomes available as soon as the network allows. If the connection drops, we ensure data is delivered in the right order, at the right frequency, and we create back pressure if something goes wrong.
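To make the store-and-forward guarantee concrete, here is a minimal Python sketch of the buffering and back-pressure behavior described above. It is a conceptual illustration only; the actual synchronization between instances is Kafka-based and persists to disk rather than to memory.

```python
from collections import deque

class StoreAndForward:
    """Conceptual sketch of the buffering behavior described above.
    The real implementation is Kafka-based and persists to disk."""

    def __init__(self, send, max_buffer=10_000):
        self.send = send          # callable that delivers one message upstream
        self.buffer = deque()
        self.max_buffer = max_buffer
        self.connected = False

    def publish(self, message):
        if len(self.buffer) >= self.max_buffer:
            # Back pressure: refuse new data instead of silently dropping it.
            raise BufferError("upstream unreachable and buffer is full")
        self.buffer.append(message)
        if self.connected:
            self.flush()

    def on_reconnect(self):
        self.connected = True
        self.flush()

    def on_disconnect(self):
        self.connected = False

    def flush(self):
        # Deliver buffered messages in their original order.
        while self.buffer and self.connected:
            self.send(self.buffer.popleft())
```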

Mateusz: The last layer on top: the enterprise cloud. Could be SAP S/4HANA, any cloud provider where you collect data, aggregate it, create monthly sales reports based on order information, or build additional applications and workflows. This is also an outbound connection, end-to-end encrypted, simple to set up.

Alex: I talked a few weeks ago with Microsoft, and they are reducing the fees on the cloud side. Integration into cloud landscapes gets cheaper. What they want you to do, and I think it fundamentally makes sense, is combine shop-floor data with business information. A potential customer was thinking about automating shift reports. He can now combine data from SharePoint with screenshots from an Andon board for daily reports. That is what we enable: pushing real-time information into an environment where you can work with an agent. Still early, but there is clear demand.

[15:04] Q&A: Enterprise UNS Across Sites

Alex: From the chat: "Do you have something like a UMH Enterprise to build enterprise UNS use cases on top of data from all sites?" Yes. The API that moves data from production network segments to the site level also enables aggregation across site barriers. You can harmonize on one site, or you can take a cloud-hosted environment and aggregate there.

If you are mostly comparing sites and production processes, you can have an enterprise level. A number of our customers are doing that. When you want to combine it with other business information and processes, we are not a generalist IT data warehouse. There is Snowflake, Databricks, or the hyperscaler offerings for that. But for aggregated reporting, specifically when someone at C-level wants a cross-site view, it is usable.

[15:17] Deploying UMH: Docker Compose & Documentation

Mateusz: On docs.umh.app, our documentation page, you'll find a simple installation guide to try out UMH Core on your laptop. Install Docker, run through the setup, and you're ready to connect CSVs, machines on your local network, and see what the product does.

Our single Docker container can also be deployed within any managed Kubernetes environment. We provide a standard deployment focused on Docker Compose because many customers still do not have managed Kubernetes at the site level. They may have it on the enterprise level, somewhere in a hyperscaler, but on site, where you deploy applications and give responsibility to local IT and OT, it is rare.

We provide Docker Compose. You'll find the minimal setup as well as builds with TimescaleDB and Grafana, depending on your requirements. This allows you to pre-merge compose files, include your environment variables, and deploy productively.

[19:04] Differentiation: IT Best Practices with OT Usability

Alex: From the chat: "From an architecture point of view, what differentiates UMH from competitors?" We focus on making two groups of people productive. First, the IT side. We integrate into Kubernetes for high availability. Everything is versionable and defined via code and YAML, so it is deterministic. You can work with AI agents on top of that.

Second, we realized there is a large number of people on the automation side who are not familiar with YAML and code. So we focus on the OT side with ease of use and templatability. Making both sides productive, because you have both in your company and you need both to succeed in a rollout.

Mateusz: We also ship specific use cases. It is not only about integration. We bring the infrastructure and the business value on top. TimescaleDB, Grafana, all templates for OEE monitoring, energy monitoring, resource monitoring, alerting. Off the shelf, included with the instance.

[22:28] The New Data Flows UI

Mateusz: Let's go into what happens once you deploy. In the Management Console, the latest features are in data flows. Data flows include three elements: bridges for connecting to data sources like OPC UA servers and databases, stream processors for processing loops within the UNS (running aggregates, creating KPIs like energy consumption deltas), and standalone integrations for more complex IT-focused ERP and MES integrations where every transaction interface looks different.

[25:37] Bridge Configuration: Read Flow, Data Contracts, Metadata

Mateusz: Let me show you the injection molding bridge. In the connection section, you define where the machine is physically located. You see our enterprise environment, this time in Spain, shop floor area, line one, injection molding 101. The connection setup is the IP address.

The substance is in the read flow: advanced settings like username, password, security mode, available both in the UI and in code for power users. And below, features to simplify the workflow for OT personnel who are not as technical.

You see a table. On the left, the node IDs you want to read out. You can select data contracts to match the data to a data model, rename data points, add units, and add metadata like serial number, manufacturer, or anything else you need at each tag. Later, a data scientist comparing a machine from Singapore to one in Berlin can do that because the metadata travels with the data.

Alex: If you have matching against SAP DM or SAP EAM, you can ingest an asset ID here. If you have a leading asset management system, you can map that, and the data is already traveling with context. In later processing steps, this information is available.

[27:32] CSV Import/Export for OT Workflows

Mateusz: Behind that is code, and the useful feature is that you can export it as a CSV. When I open it, I have all of the data here. I can change temperature to Celsius, pressure to bar, and so on.

Alex: Why would someone use Excel here? We are trying to move away from these tools. But in reality, getting this information requires an email-based workflow. You ask Matthias, your OT specialist. He opens the TIA Portal project, exports to CSV, formats it in Excel, and sends it back by email. Then it is tedious to go line by line and match it into a UI. So we built tooling around your existing business processes.

We do not want to force a choice between "this is for OT" and "this is for IT." You can also run Python or a Jupyter notebook to convert whatever you received into this CSV structure. We create consistency between these views and actual scalability.

Or if you have AI workflows and an existing Excel with many sheets, and you know your data model: download the CSV, let AI match it, verify, import the file, done.

Alex: Context on why this matters: when we started a few years ago, we worked with an energy company running PCS7 as their DCS system. They had contractors paid for weeks to transfer Excel information to a configuration UI. That experience stuck with us. We are not going to waste three to four weeks of anyone's time on copy-paste, especially when you are working in a jump host where you cannot even paste.
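The scripted variant of this CSV workflow can be as small as the Python sketch below, which enriches an exported tag list with units and machine-level metadata before re-import. The file names and column names (tag_name, unit, meta.*) are assumptions for illustration; keep the header of your actual export.

```python
import csv

# Hypothetical keyword-to-unit mapping; extend it to match your tag naming.
UNIT_BY_KEYWORD = {"temperature": "C", "pressure": "bar"}

# File and column names are illustrative -- keep the header of your export.
with open("injection_molding_101_tags.csv", newline="") as src:
    rows = list(csv.DictReader(src))

for row in rows:
    name = row.get("tag_name", "").lower()
    # Assign a unit based on a keyword in the tag name (keep any existing value otherwise).
    row["unit"] = next(
        (unit for keyword, unit in UNIT_BY_KEYWORD.items() if keyword in name),
        row.get("unit", ""),
    )
    # Metadata that should travel with every tag of this machine.
    row["meta.serial_number"] = "IM-101-2024-0042"
    row["meta.manufacturer"] = "ExampleMoldCo"

with open("injection_molding_101_tags_enriched.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```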

[30:58] Conditional Formatting & Code View

Mateusz: If you are working with a large OPC UA server that is already well structured with specific archetypes, you also have conditional formatting. You reference conditions based on the node ID, the name, metadata, or the virtual path, and then create your own logic with AND/OR clauses.
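The rule format below is not the console's actual schema; it is a small Python sketch of the AND/OR auto-mapping logic just described, using hypothetical tag fields (node_id, tag_name, virtual_path).

```python
import re

# Hypothetical rules: each rule combines conditions with AND or OR and,
# when it matches, applies a data contract and a unit to the tag.
RULES = [
    {
        "logic": "AND",
        "conditions": [
            lambda t: re.search(r"ns=2;s=.*Temperature", t["node_id"]),
            lambda t: "oven" not in t["virtual_path"],
        ],
        "apply": {"data_contract": "_historian", "unit": "C"},
    },
    {
        "logic": "OR",
        "conditions": [
            lambda t: t["tag_name"].endswith("_pressure"),
            lambda t: "hydraulics" in t["virtual_path"],
        ],
        "apply": {"data_contract": "_historian", "unit": "bar"},
    },
]

def auto_map(tag):
    """Apply the first matching rule to a tag read from the OPC UA browse."""
    for rule in RULES:
        results = [bool(cond(tag)) for cond in rule["conditions"]]
        matched = all(results) if rule["logic"] == "AND" else any(results)
        if matched:
            return {**tag, **rule["apply"]}
    return tag

print(auto_map({"node_id": "ns=2;s=IM101.Temperature",
                "tag_name": "barrel_temperature",
                "virtual_path": "injection_unit/barrel"}))
```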

Alex: Can you switch to the code view? This was always available, but it is difficult for people who are not familiar with code. A misplaced comma can break the configuration. We need to provide more structure. With this approach, we support both sides: table UI for OT, code for IT.

One more question from the chat: "MCP included in your roadmap?" We are strong proponents of AI and agentic coding. We use it daily. But we do not believe MCP is where it needs to be on an enterprise level for security, compliance, and usability. We looked at tools we use ourselves, like GitHub. GitHub does not have a native MCP server. It works with CLI: gh login, push, pull. The CLI ensures you can have authorization in the middle, all the permissions at the same level. We will have a CLI on our roadmap, which will make everything you would want to do with AI possible, with enterprise-level security. First half of this year.

[32:33] Product Roadmap: CLI-First Approach and Deterministic AI Pipelines

Mateusz: Why this route: we want to make interactions with the UNS replicable. If you use only MCP and AI, you might ask "what's the OEE of this machine this week" and get 90% one day, 88% the next, with no way to understand what changed in the calculation.

Enabling workflows where you generate with AI exactly what you want and then run it automatically through all instances or your enterprise: that is where the value is. There is nothing worse than KPIs that no one trusts because they are not replicable, not well defined, not transparent. A KPI can determine whether someone gets promoted. It has to be replicable.

Alex: There are three layers to think about. First: AI-assisted configuration. How do you work with code to configure and map? The answer is CLI. Second: serving the information inside your UNS to an agent. The AI should know the context of how data is structured and via which API it is available. Real-time data via GraphQL API, historical context in tables via SQL or REST API. Third: it should dynamically build auditable, deterministic pipelines for the answer you're seeking, rather than ingesting all manufacturing information as context and building a non-deterministic answer. Configuration via CLI, interfacing via context and structured tables and APIs.

[36:15] Bridge Templating for Multi-Machine Rollouts

Mateusz: Once you have a standard template for injection molding, how do you roll it out? When you add a bridge, there are two options. Start from scratch with the UI: select a vendor or protocol and continue with configuration. Or reference an existing bridge: copy the injection molding one as a template, rename the machine to line two, change the IP address, save and deploy.

Alex: One customer a few weeks back was connecting all their electric emulators. They defined a format by factory, and it was entirely copy-based. We do not want to make you click through a thousand fields. The templating covers that.

Important note: this is copying, not referencing. Referencing is risky because if you have a hundred instances and someone changes the upstream template, it can break downstream configurations. Reference-based templating with advanced variables is coming in a few weeks.
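Copy-based templating can also be scripted outside the console, which is handy when rolling out dozens of near-identical machines at once. The sketch below assumes PyYAML and a hypothetical bridge layout (name, connection.endpoint, location.line); take the real structure from the code view of a bridge that already works.

```python
import copy
import yaml  # pip install pyyaml

# Load the bridge that already works for injection molding 101.
# The YAML layout below is hypothetical -- use your instance's actual
# bridge configuration from the code view as the template.
with open("bridge_injection_molding_101.yaml") as f:
    template = yaml.safe_load(f)

# Per-machine parameters that change between otherwise identical bridges.
machines = [
    {"name": "injection_molding_102", "ip": "10.20.1.12", "line": "line-2"},
    {"name": "injection_molding_103", "ip": "10.20.1.13", "line": "line-2"},
]

for m in machines:
    bridge = copy.deepcopy(template)
    bridge["name"] = m["name"]
    bridge["connection"]["endpoint"] = f"opc.tcp://{m['ip']}:4840"
    bridge["location"]["line"] = m["line"]
    with open(f"bridge_{m['name']}.yaml", "w") as out:
        yaml.safe_dump(bridge, out, sort_keys=False)
```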

Mateusz: That will be a significant update around governance of data models and integration templates. The idea: centralized, specialized teams build UNS best practices, share with local teams who make their own adjustments, then deploy and connect machines.

[38:41] Topic Browser & Standardized Data

Mateusz: In the topic browser, you see all the data captured from the MES and ERP. Here, the injection molding machine with all the renamed tags including metadata. Depending on the interface you connect, the number of OPC UA attributes varies. Modbus and S7 provide fewer. But the metadata you included is visible for each tag.

Once all of this is standardized with ISA-95, and machine state is consistently named regardless of which machine you look at, building use cases becomes straightforward. All historical data sits in TimescaleDB, so you use SQL. Replicable queries. And the ability to scroll through machines via a dynamic dropdown, build one dashboard, integrate more machines, add a line, and it populates automatically. Build a use case and bring it to the next site with less setup time.
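As an illustration of what such a replicable query can look like, here is a Python sketch against TimescaleDB. The table and column names, the asset path, and the connection string are assumptions; in Grafana the asset would come from a dashboard variable rather than a Python parameter.

```python
import psycopg2  # pip install psycopg2-binary

# Table, column, and asset names are hypothetical -- check the schema that
# ships with your TimescaleDB instance before reusing this.
QUERY = """
SELECT time_bucket('1 minute', timestamp) AS minute,
       avg(value) AS avg_temperature
FROM tag
WHERE asset = %(asset)s
  AND name = 'barrel_temperature'
  AND timestamp > now() - interval '1 hour'
GROUP BY minute
ORDER BY minute;
"""

with psycopg2.connect("postgresql://umh:password@localhost:5432/umh") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY, {"asset": "spain.shop-floor.line-1.injection_molding_101"})
        for minute, avg_temperature in cur.fetchall():
            print(minute, avg_temperature)
```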

[40:46] Q&A: Performance, Scalability, and Sizing

Alex: From the chat: "Do you have references where you have tested performance and scalability?" We do not have a published benchmark with tags-per-second numbers for OPC UA interfaces. The underlying architecture is measured in megabytes or gigabytes per second. Internally, we use Red Panda (Kafka) for persistence and streaming, limited by your disk space. The process management can run thousands of configurations and integrations. We have not encountered a scalability bottleneck. Rough guidance is available in the sizing guide.

Mateusz: Every bridge creates a new sub-process, independent of all others. In the sizing guide, you will see expected workload and hardware requirements. Based on that, you define how to split the network, whether you need two instances, and so on.

[42:33] Q&A: Relational Data & Stream Processing

Alex: From the chat: "Can you give an example of how to handle contextualization of data using two data sources of vastly different time scales?" This is a common problem. How we solve it: both business data and tech data in the Unified Namespace are available in Kafka. An action or event defines the trigger for the processing logic. You can use regex with wildcards, which gives flexibility. A tag or business event triggers the stream processor, which then accesses the UNS to get the last five values, for example. We work with caches internally for fast access. From an IT architecture perspective: Kafka with retention, caches for fast access, and stream processors to hydrate inside it.

Mateusz: For a specific use case: assume a system triggers a request for the average temperature over the last five minutes combined with specific order information. You'd have a single tag subscribing to this trigger. Whenever a message comes in, the workflow runs. You add additional information about the temperature topic as a wildcard and another data source from the MES for the current production order. Then you have all the data processing capabilities including caches with a seven-day buffer. You can go back, understand what happened in the last five minutes, create your averages, put it together in a single transaction, and save it in a database, send an SQL insert, or put it back into the UNS to trigger another workflow.
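This is not the stream processor's actual configuration syntax; the Python sketch below only mirrors the logic described here, with hypothetical names throughout: a trigger, a rolling five-minute temperature cache, and the current production order combined into a single result.

```python
import json
import statistics
import time
from collections import deque

# Rolling cache of recent temperature readings: (epoch_seconds, value).
# The real stream processor uses Kafka retention and internal caches;
# this deque only illustrates the five-minute look-back.
temperature_cache = deque()
current_order = {"order_id": None}

def on_temperature(value):
    now = time.time()
    temperature_cache.append((now, value))
    # Keep only the last five minutes in the cache.
    while temperature_cache and temperature_cache[0][0] < now - 300:
        temperature_cache.popleft()

def on_order_update(payload):
    current_order.update(payload)   # e.g. from the MES production-order topic

def on_trigger():
    """Runs whenever a message arrives on the (hypothetical) trigger tag."""
    values = [v for _, v in temperature_cache]
    result = {
        "order_id": current_order["order_id"],
        "avg_temperature_5min": round(statistics.mean(values), 2) if values else None,
    }
    # Write to a database, send an SQL insert, or publish back into the UNS
    # to trigger the next workflow.
    return json.dumps(result)
```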

[45:56] Pre-Configured Demo Environment and Machine Simulators

Mateusz: From the chat: "Is there a pre-configured use case demo with out-of-the-box data, models, UNS, and MQTT?" The demo environment is being finalized and is already part of the GitHub project. We provide it as a simple script where you can make adjustments and deploy the full compose: components, passwords, all bridges created including machine simulators, ERP and MES simulators. A good starting point to understand what is possible with UNS, see dashboards, or start implementing use cases. A stable version may take another week. We will announce it in the Discord community.

[47:34] Q&A: Data Formats, Grafana Dashboards, Air-Gapped Environments

Alex: We support two payload formats. First: a tuple of value, time, key. This integrates directly into a database. Second: relational, where an order is in relation to a product, machine, customer. These are condensed into JSON and can be split into tags or aggregated into a relational format.
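As a rough illustration of the two shapes (the field names are ours, not a fixed schema), the sketch below builds a flat time-series message that maps directly onto a historian row and a relational event that keeps order, product, machine, and customer together in one JSON document.

```python
import json
import time

# 1) Time-series payload: value, time, key -- maps directly onto a historian row.
timeseries_payload = {
    "timestamp_ms": int(time.time() * 1000),
    "barrel_temperature": 212.4,
}

# 2) Relational payload: one event relating an order to a product, machine,
#    and customer, condensed into a single JSON document.
relational_payload = {
    "work_order_finished": {
        "order_id": "WO-48211",
        "product": "housing-a3",
        "machine": "injection_molding_101",
        "customer": "ACME",
        "good_parts": 1180,
        "scrap_parts": 12,
    }
}

print(json.dumps(timeseries_payload))
print(json.dumps(relational_payload, indent=2))
```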

Mateusz: For live dashboards: the data is automatically inserted based on the structure provided into Postgres with the TimescaleDB plugin. The data source can be an existing database or our standard one. Then you query the data following the ISA-95 hierarchy, add dynamic variables, and build the dashboard.

From the chat: "Must there be a live link to the management console? Most environments are air-gapped."

Alex: It depends on the industry. If it is air-gapped, you cannot access the management console. Customers in highly regulated environments set it up once and only monitor internally. But we recommend having a connection from the instance to the cloud, not a direct connection from the shop floor to the internet. Work with proxies to access the management console.

Mateusz: The instance communicates over an outbound connection. It pings the management console asking: "Here is my current status, here is my configuration, do you have any updates for me?" You can do your configuration, deploy, and cancel the connection. You can always configure the locally hosted instance via config file. But once you go that route, you also need to think about Prometheus, Grafana for monitoring, logs, and so on. You end up building a locally hosted CI/CD pipeline with additional complexity.

Alex: We will not bring the full Management Console to an air-gapped scenario. The connection is not only encrypted outbound, but the payload itself is also encrypted. Even if someone compromised the management console, the payloads are still encrypted. End-to-end, patented. Let's have a conversation about reverse proxy and the single connection outside before planning a fully air-gapped setup, because it saves a significant amount of time.

In pharma environments, production lines are encapsulated. They mostly run software installed via USB stick, usually Windows-based. What we have seen work is OPC UA tunnelers that bridge the gap. Then we fetch data from the next level up. But if it is fully air-gapped, the data does not leave.

[53:37] Node-RED Status and Platform Positioning

Alex: From the chat: "Do you still have Node-RED available as low code?" On our positioning: we have always aimed to make it easier for you to get to the tools you need. We previously included Grafana, TimescaleDB, Node-RED, MQTT. We see a pattern of more and more Node-RED functionality moving into the Management Console and UMH Core. But you can still install it on site for HMIs or local dashboards. It is no longer part of the default stack.

Mateusz: Feel free to add it to the Docker Compose. You can access the database, the broker, and so on. Node-RED may still support some older proprietary interfaces, so it could be useful for a pilot. But it is not our recommended path because it limits scalability: single flows running on the edge, difficult to maintain centrally.

[55:41] Hannover Messe 2025 and Community Event

Mateusz: We will be at Hannover Messe in April. We are organizing a community dinner on Tuesday, April 21st. The dinner will bring together professionals from the industry, solution providers, integrators, users, and people evaluating UMH. Seats fill up fast. If you are planning to attend, let us know now so we can reserve your spot.

Alex: Thank you for joining. Thank you for the questions. Looking forward to seeing you at Hannover Messe, the community dinner, and the next webinar. Take care.


To stay informed about the demo environment release, upcoming webinars, and product updates, join the UMH Community on Discord. For questions about deploying UMH in your environment, contact our team or visit the documentation.

