Webinar: Getting started with Industrial Data w/ Brian Pribe
“If there’s anything about UMH, it’s easy to get started with. From connecting PLCs to building CI/CD pipelines, you really get the full developer toolbox to shape your data exactly how you need it.” – Brian Pribe, MACH Controls
Getting started with data integration on the shop floor has long been complex, with deployments often requiring deep expertise in both IT and OT. In this webinar, Brian Pribe from MACH Controls joins the UMH team to showcase how UMH Core brings simplicity and speed to the process.
From spinning up edge deployments in seconds to connecting PLCs and IO blocks, the session walks through practical pipelines that unify data and deliver actionable insights in Grafana. You’ll also see how UMH Core balances intuitive UI-driven workflows with YAML-based CI/CD pipelines—making it equally valuable for automation engineers and IT teams. Whether you’re integrating your first PLC or scaling to enterprise-wide Unified Namespace architectures, UMH Core provides the flexibility to start small and grow with confidence.
Webinar Agenda
The Challenge: Industrial Data Integration Complexity
- Deploying edge data infrastructure often involves multi-step setups and steep learning curves.
- Manufacturers struggle with connecting diverse PLCs, structuring data consistently, and scaling across sites.
The Solution: UMH Core for Fast, Flexible Deployments
- UMH Core makes it possible to spin up instances in seconds with pipelines ready to go.
- Balances usability (UI-driven setup) with scalability (YAML configuration for CI/CD).
- Designed for both first-time users and enterprise rollouts.
Architecture & Deployment Scenarios
- Demo pipelines included: Allen Bradley PLC (Ethernet/IP), Phoenix Contact IO block (Modbus), and Kafka → TimescaleDB → Grafana.
- Supports edge deployments on Docker, Kubernetes, or lightweight hardware.
- Fits into ISA-95 structured architectures for clarity and scalability.
Live Demo: Connecting Devices & Pipelines
- Showed step-by-step deployment in Docker.
- Configured pipelines to capture PLC fault tags, stream data, and trigger alerts in Grafana.
- Demonstrated value of quick fault detection and actionable alerts.
Data Modeling & Governance
- Importance of ISA-95 hierarchy for mapping assets (plants, lines, machines).
- Ensures consistency between physical assets and virtual data structures.
Configuration as Code
- All pipelines and deployments captured in a single YAML file.
- Enables versioning, CI/CD, and replication across sites.
- Provides a bridge between automation engineers (UI workflows) and IT teams (code-driven deployments).
Chapters
00:00 – Introduction
02:20 – How Brian Discovered UMH & Early Impressions
04:24 – Live Demo Setup (Docker & UMH Core)
06:41 – ISA-95 Structure & Device Mapping
09:03 – Connecting Allen Bradley PLC
13:47 – Historian vs Raw Data Setup
14:44 – Topic Browser & Data Flow in UMH
16:48 – Alerts in Grafana (Visual Inspection Example)
19:10 – Pipelines: UNS → Postgres/TimescaleDB
21:32 – Creating and Testing Alerts (Discord Integration)
25:12 – IO-Link Block with Modbus Example
26:55 – Using LLMs for Device Register Mapping
28:41 – Modbus Challenges & Data Handling
30:44 – Structuring Data & YAML Configurations
33:09 – UI vs Code Balance in UMH
35:32 – Q&A: Learning Curve with UMH
37:50 – Q&A: Writing Back to PLCs with UMH Core
40:11 – Q&A: Skills for New Automation Engineers
42:35 – Closing Remarks
Transcript
Alex
Great to get to know everybody. I'm Alex, co-founder and CEO of United Manufacturing Hub, and today I have Brian with me. Brian will introduce himself in a few seconds, but before he does: he's one of the longest-standing members of our open source community. For us it's not only about our product and the commercials, but also about empowering people all around the world who do something really meaningful with UMH.
And Brian wanted to showcase what he did and what he can do with UMH. I think it's important not just for us to present, but also to let the community showcase what they build. And that's what today is for. Brian, over to you for the time being. Yeah. Hi. How are you guys doing? I'm Brian with MACH Controls, a systems integrator based in Toledo, Ohio, and we help get manufacturers up to speed with their data.
So in this example, we've been toying a lot with UMH Core. We've been with UMH, helping them, for almost two years at this point: first developing the UMH Classic edition and now the Core edition. So today what we want to show off is how quickly you can get set up with UMH Core: deploying your instance, connecting to your actual devices, and getting data from them.
I will say, because I've been mucking around with the whole setup, I've been deploying the demo and destroying it the whole time, so it may be a little bit of a rough demo. If you guys can just be patient with me, I'd really appreciate it. No worries, Brian. Can you also elaborate a little on how you got to know UMH? What was your background, and what sparked your interest in the beginning to...
try it out and give it a shot? So I think originally, I've been in the community Discord channel for probably four years at this point. And what started to pique my interest with United Manufacturing Hub was one day I just saw a message in the Discord channel from Jeremy saying that they were going to go all in on unified namespace, the whole technology, the whole idea behind it.
Alex
And that they were going to commit their product to it. That piqued my interest at the time: okay, who are these guys? What's going on? What are they doing? Why are they going all in on unified namespace, and with an open source approach? Because nobody else at the time had really done it. And every time I looked at the source code for United Manufacturing Hub and tried to understand what was going on underneath, every single time they had picked the right tools.
When it came to the database, when it came to the brokers, when it came to how you connect all the various applications together, UMH just had it right every single time. And from there, we've just been developing with them: testing out the product, testing out deployments, various deployment architectures, whether on-premise, on the edge, Docker containers or Kubernetes instances, and really trying to figure out how to best
deploy UMH and how we can help move it forward. And where UMH is at now is we've done so much developing that we can get it down to a single container instance. I think what led us in that direction was when we really wanted to use UMH at the ProveIt! conference, but we wanted an edge deployment
instance to be able to connect to the edge devices and get their data into the unified namespace. That's when we realized that the guys at UMH had to go back to the drawing board, not to correct it, just to improve it, so that edge deployments are a lot easier and a lot more possible. With that, I'm going to do a little more setup on my end. I'm going to tear down a Docker stack.
There we go. Okay, so first I just want to show what we've got in Docker.
Alex
And you can see we only have one container running on here, just a Portainer agent instance. Then we can go to our instances, and you can see it's going to be an edge deployment here where we'll see it start back up. We do a compose up and run it in the background, where you can see we're deploying a couple of different containers. But the most important one here is umh-core, and you can see that in just 0.4 seconds
this instance deployed with all the configurations and pipelines already connected. And you can see it has already come up online. Honestly, the thing taking the longest in the stack to get ready is Grafana. So in the data flows, I can show you some that we already have, which are pretty much the three different pipelines we wanted to show.
The first one is connecting to an Allen Bradley PLC over EtherNet/IP. For the next one, we have an IO block that is talking over Modbus. And then we have another standard pipeline that goes directly from the Kafka instance into our TimescaleDB instance, and that lets us see everything that's going on in Grafana.
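For reference, a minimal sketch of what a Compose stack like this can look like. The image names, tags, ports, and password are illustrative assumptions, not the exact files used in the demo:

```yaml
# Hypothetical docker-compose.yml for a demo stack like this one.
services:
  umh-core:
    image: ghcr.io/united-manufacturing-hub/umh-core:latest  # image name assumed
    volumes:
      - ./umh-core-data:/data        # persists the config.yaml and state
  timescaledb:
    image: timescale/timescaledb:latest-pg15
    environment:
      POSTGRES_PASSWORD: changeme    # placeholder credential
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"                  # Grafana UI
```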
Now, when it comes to the visualization part, I didn't get that done, but we did get an alert set up, so I want to at least show what that looks like getting started. So, okay, there we go. All right. You can see I deleted one of them; we only have one that is still up. But we're going to go ahead and add a bridge. This is pretty much what it's going to look like for you getting started. You go through the UI, which makes it so much easier.
But at the end, you'll be able to see what the configuration file looks like, and then you can see how your CI/CD people can really leverage that. So we're going to start with a name. This one is a program for a tube bender conveyor: it's got some conveyors, there's a robot involved, there's some vision, but everything is just done with IO, and as for all the naming in it, well...
Alex
It's garbage PLC code and logic, but it still produces millions of dollars in value for the customer day in and day out. So having that machine go down, or somebody trying to make changes to it, just really doesn't make sense for the manufacturer. And this is where having UMH at the edge, to collect the data from these disparate devices, unify it, and bring it in, is really going to provide a lot of value for them.
So this one, we're just going to give the name tube-bender-conveyor. The instance we're going to connect it to is the second one, and you can see it immediately pulls in the exact location that the edge deployment is already deployed in. Then we can give it some other names; because this is the conveyor, we're just going to call it the tube bender conveyor. That gets us started there. And Brian, thinking about the levels, when you're
in with customers, how would you advise them to structure their ISA-95 hierarchy? You're on a flat level now; what would you advise, considering all the levels? When you're in the first connectivity phase of things, you definitely want to take an ISA-95 approach based on the physical locations. That gives you the best idea of where assets actually are relative to the virtual space.
Once you have that structured around the physical assets, where they are inside the business, then you can lay out whatever virtual hierarchy you want on top of it. But when you're aggregating data from your business, at the physical level you want that map to be as close to reality as possible. In this case, we're just making the example that this is its own station,
a machine within the Toledo plant that sits in an area; in this case it's a little simplified for the example. But definitely, if you're a manufacturer and you have various areas and various production lines, you really do want to start considering where things would sit inside an ISA-95 structure. And that's just:
Alex
what are the areas, what are the production lines, and what are the control units controlling those aspects of the machine? And then you can go further down from there. But that gets you started to where you're getting physical data points out rather than system data points and stuff like that. Does that make sense? 100%, 100%. All right, now we're gonna put in the IP for that, and also
the port for EtherNet/IP, which is 44818. Now that we have that, we're going to save and deploy. And now it's happy. Cool. So now that it has connected, with an nmap check confirming the device is there, we can pick our protocol. We're gonna go all the way down to EtherNet/IP. You then get various options; you can do custom if you really don't wanna use anything they set up, but if you're doing basic tag
aggregation, I really recommend the time series option. It gives you all of the tag features you could be asking for: if you're trying to historize the tags, organize the tags and get good quality from them, or do conditioning on the tags, there are a lot of features inside the tag processor in benthos-umh that give you a lot of different capabilities.
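For reference, benthos-umh's tag processor takes a defaults block plus optional conditions; a minimal sketch follows, in which the location path, tag name, and condition are made up for illustration:

```yaml
pipeline:
  processors:
    - tag_processor:
        defaults: |
          // Illustrative values; set these to your own ISA-95 location.
          msg.meta.location_path = "enterprise.toledo.line1.tube-bender-conveyor";
          msg.meta.data_contract = "_historian";
          msg.meta.tag_name = "vis_tube_fault_PL";
          return msg;
        conditions:
          - if: msg.meta.tag_name === "vis_tube_fault_PL"
            then: |
              // Hypothetical enrichment: group fault bits under a virtual path.
              msg.meta.virtual_path = "faults.visual_inspection";
              return msg;
```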
We won't walk through all of that here today, but it's definitely something for you to look into. Now that we've selected the data type, the management UI automatically generates some of this for us. You have the three main parts: the input, the processing, and then the output. Now, for what we're doing, we don't really need
a lot of comments here; they just kind of muddle things up. Okay, alrighty. So this is pretty much where you're going to be configuring and connecting to your tags. In this case, I have Studio 5000 that I'm remote-desktopped into. So here is our Studio 5000 for our Allen Bradley PLC. You can see that there are
Alex
quite a few different tags with a bunch of different names, and of course a couple of different sequences. But really, for us, we're having issues with the machine, and in this example we want to talk about getting some actual insights, or at least getting to where you can get some results. In this case, we know that when a fault occurs with a tube, there's a specific bit that comes on.
And anytime that bit comes on, we want an alert. We just want to send a notification so we know that the part has faulted, because anytime that happens we run into stop time. It causes a lot of problems for operators and for the plant managers. So in this one, we know that the VIS tube fault pilot light, the one all the way at the very bottom here, tells us that:
anytime this bit goes on, the visual inspection failed and an operator has to come in and fix it. But even at that point, it stops production. And if there's not an operator ready to fix the problem, well, then it just stays there, and it becomes a 10-minute problem, and then an hour problem, and by then you've already lost a lot of production time. So this is the tag we want to collect. I'm going to stop sharing there and we're going to share. All right.
Cool. Are you guys still able to see my screen with UMH there? Perfectly fine. Awesome. All right. So now we know what that tag is, so we're going to go ahead and put it in here. This is the tube fault pilot light; that's what the PL stands for. We know that it's a bool, and there is no alias for this. So that gets us started on the input side of things. And what's really nice with
the tag browser in the UI is that it automatically generates some of this for you. In this example, two things get generated automatically: you have the always section and then you have the conditions section. In this case, we're just going to get rid of the condition, because it doesn't really do anything for us. We'll keep the always section, and we're going to keep the data contract as historian.
Alex
Now, usually when you're doing a lot of discovery, you don't want to use the historian contract, because that saves a lot of the data into the broker, and if you have UMH Classic, it saves a lot of that data into the database as well. So when you're doing more discovery work, we recommend that you start with raw. That way you can deploy it, it's not going to save any of the data right away, and you can do a lot more discovery and get things done.
And then the output here gets generated automatically and publishes into the UNS inside of Core. But you can also make changes and decide where you want that to go. So we're going to save this instance.
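Put together, the three generated parts can look roughly like the sketch below. The EtherNet/IP input plugin name and its fields are assumptions for illustration; the `uns` output and `tag_processor` follow the benthos-umh conventions, but check the docs for the exact schema:

```yaml
input:
  ethernetip:                          # plugin name assumed
    endpoint: "192.168.1.10:44818"     # PLC IP and EtherNet/IP port
    tags:
      - name: "VIS_Tube_Fault_PL"      # the fault pilot-light bit
        type: "bool"
pipeline:
  processors:
    - tag_processor:
        defaults: |
          msg.meta.location_path = "enterprise.toledo.line1.tube-bender-conveyor";
          msg.meta.data_contract = "_raw";  // raw while still in discovery
          return msg;
output:
  uns: {}                              # publish into the UMH Core Unified Namespace
```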
Alex
Okay, now that is working, and now we're going to go to our topic browser, which is specific to UMH. In here, you can see we did have a historian topic, but we now have our raw topic, and that gives us data coming directly in from that PLC. You can see that it's false right here. Now, for some reason I have my networking all messed up, so that Studio 5000 instance isn't able to connect.
Otherwise, I would turn the bit on and off. But in this case, we're just going to leave it alone for now. So you can see we have data coming in directly from our Allen Bradley PLC. Now that we have that data, let's do something with it. Let's get some insight, some action. I talked about how initially we had just UMH Core, but now we can connect it with other
platforms. So in that Docker Compose setup, we have TimescaleDB and Grafana all plugged in, and we'll go immediately to that. All right, let's start with some alerts. Grafana makes it really, really easy to do alerts because you get so many different options. But we're going to start here, and we're going to
see.
So this is the one that we had for it. All right. Actually, before we delete something that is working, we're just going to create a new one instead. All right. So this is going to be our visual inspection alert. Okay. Now, we already have the connection configured between Grafana and TimescaleDB, so that's all nice and set up. But
Alex
what this allows us to do is: anytime any of the data gets historized, we're able to save it into the TimescaleDB instance and from there visualize it inside of Grafana. Are there any questions on that? None that I see, but I think a good question would be: how does it come from UMH Core? How does it get ingested into TimescaleDB, and from there, how is it available in Grafana? Gotcha, gotcha. That would actually be a pretty good start for
how that works. Right. So we showed the tube bender one; now we can go to this core-postgres flow. This is really how we're going to get that data from UMH Core into Postgres. Not only are you able to do bridges, you can also do streams. Streams are more for relational transactions, batching, or anything where you have
a set of data that you want to get processed; that's where stream processing comes in. But in this case, we're going to do a standalone flow. In this example, we're just going to look at this flow, and that will give us an idea of what's going on. So in here, we have the UNS input inside of UMH Core. So
what this allows us to do is tap right into the Redpanda Kafka broker within it. The UNS input gives us a lot of different capabilities for selecting which topics you want, and it allows regex: within this UMH topic field, you can define a regex pattern for the exact topics that you want to pick up. With the plain Kafka
input, you weren't really able to define the topics very well; with the UNS input, you're able to define that a lot easier. And then in the processing section there's quite a bit going on, where we're processing the tags as they come in so they can be put into the historian. And then on the output side, this is where it gets even cooler, in my opinion.
Alex
This is where it guarantees creating and setting up the SQL tables inside the database, so that anytime you have new data come in, the pipeline guarantees those tables exist before any of that data is inserted. So if you're dealing with a situation where data comes in on one pipeline but it relies on another pipeline
to define what the database has to look like, rather than having two separate sets, you can now define it all in one pipeline. That's a really cool feature of what you're able to do with benthos-umh. And that's pretty much what's going on inside this pipeline.
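A hedged sketch of such a UNS-to-TimescaleDB standalone flow. The `uns` input's regex field, the payload shape, and the metadata key are assumptions based on the UMH docs; the `sql_insert` output with `init_statement` is standard Benthos:

```yaml
input:
  uns:
    umh_topic: "umh\\.v1\\.enterprise\\..*\\._historian\\..*"  # regex topic filter
output:
  sql_insert:
    driver: "postgres"
    dsn: "postgres://user:changeme@timescaledb:5432/umh?sslmode=disable"
    table: "tag_values"
    columns: ["topic", "value", "ts"]
    args_mapping: |
      # Assumes the UMH time-series payload shape {"timestamp_ms": ..., "value": ...}
      root = [ meta("umh_topic"), this.value, this.timestamp_ms ]
    init_statement: |
      CREATE TABLE IF NOT EXISTS tag_values (
        topic TEXT, value DOUBLE PRECISION, ts BIGINT
      );
```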
So we can just go ahead and save and deploy. No changes, continue. All right, so now that one is running, and we'll eventually get to the packaging station. Now that we have those two pipelines, the tube bender conveyor and the core-postgres pipeline, connected, we can build out some alerting in here. So this one is the visual inspection alert; this is where we want to check anytime we get a fault. And we're going to do something really simple.
We're just going to look for the tag, do a count, and then go by the value itself, just because this is the only tag in there: when the value is greater than zero, it'll send an alert. Okay, now that we have that defined, we want to put it into a specific folder. I've already created it; it's going to be in our alerts folder.
Then you have to define an evaluation group, and this one is just going to be the alert group. Then we define how long it's allowed to be pending; we'll do a minute rather than anything else. And then we define what our notification pipeline is. This is where you can really define a lot of different ones. In here, you can either use the original one or create another.
Alex
You have a lot of different options for how you want to get notifications, and you could even have this feed directly back in. But in this example, we're just going to stick with Discord because it's already set up for us. Yeah, cancel that. We're going to select Discord; disregard the name, I didn't have time to change it. All right, anyways. Now that that is working, now that that is up,
we're going to save the rule, and now we have two rules here. We're going to take this one and delete it, and that's fine. Now we have this guy ready to go. All right, now this may cause me an issue, because I still have to force the visual inspection tag. It may be pending. Pending: visual inspections. Have I gotten my alert yet from Discord? Not yet. Nothing yet.
So I have to go here and make a change on the one line.
Alex
And this is where we're gonna start getting that. All right. So now it's not wanting to connect. It was connected; not anymore.
That's when I've got to plug things in or unplug them. Let's do it.
Alex
That's it, and then also, here's the inside of the panel, if anybody wants to see.
Alex
I don't know if we got that. Looks like it hit.
Oh, cool. Okay. So it was able to send a notification. All right, so it was working. Cool. You guys able to see my screen? Yes, we see it. All right. Well, there it is, and we got the alert coming in from it. Okay. Well, that worked.
Okay. Cool. All right. Well, that one connected, that one worked. Now we can also show: maybe you don't have existing equipment like Allen Bradley PLCs and stuff like that. Maybe you just want an IO-Link block to be able to see anytime you have a high or low happen. So
in this case, we have a Phoenix Contact IO-Link block that's going to be talking over Modbus. And then we just have a simple little... Can you stop sharing real quick so that it switches for me to see? Oh, my bad. Perfect. Now you can show what you showed already. Yeah. So for the next one, we have a Phoenix Contact IO-Link block that is going to be talking over Modbus.
And then we have just a simple little prox sensor that is going to be tracking part presence and stuff like that for us. Essentially, it's going to be used for counting. Now let's go ahead and get started showing that.
Alex
Sure.
And you can see I already started this. Right. So in general, this is pretty much what it looks like: you get set up, you get connected to it. Actually, it's not that one. It's this one. Save and deploy. Cool. Now we go in here and we can select Modbus, and then our time series instance for it. So this gets us the
same setup, where it gives us the always section and a bunch of different conditions, but we'll get rid of those. And we're going to keep all of this information there. All right, now that we have this information, because all of this is already set up, all we gotta do is just change some tags. This one we're going to name part_present. Okay. And
we know that the register for this is going to be an input register, and the address is 8001. Now, this one is a unique one: it is actually a UINT16, and we want to keep it native. So what it's going to do is connect and go to the input register at 8001.
Now for the manual on this device.
Alex
Is it this one? No. Not that one. This one. Cool. I pretty much just scroll all the way down to... I think this is also a really powerful case for not doing that manually, but instead having an LLM extract the information from the PDF for you, to then automatically create the mappings inside UMH Core. A text-driven approach, so just one single YAML.
The output is quite specific and defined, and now you can throw in whatever you get from the machine vendor, which is hugely verbose and not very helpful, and then let the LLM iterate and crunch the data for you. So, a classic case. Yeah. Yeah. And it really helps, especially when you have a device like this that has a ton of registers and you have to configure a lot of it. With benthos-umh specifically,
you're able to do quite a bit of data manipulation, to get it exactly the way you want, compared to having to use Node-RED or some other application to do these exact mappings. So in this one, you can see register 8001 pretty much gives us the port status for each of the ports, and you can see which bits are for which port.
That's pretty much how we found this information for this device. We're happy with everything that was in always, and we're going to save and deploy.
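For reference, the input side of such a Modbus bridge can look roughly like the sketch below; field names follow the benthos-umh modbus plugin as best I recall and may differ slightly from the current schema:

```yaml
input:
  modbus:
    controller: "tcp://192.168.1.20:502"  # IO-Link block's Modbus TCP endpoint (IP assumed)
    timeBetweenReads: 1s
    addresses:
      - name: "part_present"
        register: "input"                 # input register, per the device manual
        address: 8001
        type: "UINT16"                    # kept native, as in the demo
```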
Alex
I think it was complaining that something's wrong. Yep, did something wrong. What'd I do wrong?
The packaging station and the timer, which should become what? I think this is in this... Let's see here. Now, I know I have... I'm pretty sure I did it correctly, but we're just going to quickly check. These Modbus things can be tricky; there's so much you can configure: bit flips, anything, whatever. You know, I think I just... It doesn't even need this output section here.
Maybe it's just that; save and deploy. Might be. Perhaps it was requesting a Modbus function that the device was unable to respond to, and therefore crashing. Modbus, tricky. Very tricky. It's not the most fun dealing with Modbus in general, but you do get the data points you're asking for. You have the spreadsheet.
I would say we can go into our historian here, and you can see our part present, and even visualize it; it publishes once every 10 seconds. And you can see it has a value of 2. Now, the reason it's got a value of 2 goes back to this register chart, where it's the second bit that's on. So
that gives us a value of two. And this is where you could really do a lot with benthos-umh on manipulating the registers and getting them mapped to proper data types, so that you can actually explore this a lot easier and a lot better. Hence the if statement inside the tag processor, which is really good. Yeah. So you then map
Alex
based on the value from the Modbus device and then switch-case on whatever you get out there. Perhaps zero means door closed, two means door opened, four means whatever. So you can really go case by case and map it to a logical, actionable data point. And the best part about this is that you're able to get a lot of the raw values that come directly out of the machines. And you can tell:
a lot of the tags that come off the PLC are not the most organized. They're not the most thought out as to what the business is looking for and interpreting. The tags are built out in a way just for that machine to operate, without much else to think about on that end. So when it comes to aggregating this data up and out, it's really difficult to even just start doing basic mappings into UDTs and stuff like that.
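A small decoding step is often all that mapping takes. Here is a hypothetical Bloblang processor that turns the raw port-status word into a logical flag; the bit-to-port assignment mirrors this device's register chart and is purely illustrative:

```yaml
pipeline:
  processors:
    - mapping: |
        # Assumes the UMH time-series payload shape {"timestamp_ms": ..., "value": ...}
        root = this
        root.part_present = match this.value {
          0 => false,      # no port active
          2 => true,       # second bit set: the sensor on that port is high
          _ => this.value  # pass anything unexpected through untouched
        }
```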
UMH Core does a great job at getting you that data foundation: how you should be treating your data, how you should be organizing it, how you should be structuring it, all of that. One more thing I want to show when it comes to UMH Core, and what's really cool with it. So you saw all of the different pipelines; we have three different pipelines that we're showing there. Now, what's really cool with UMH Core is that all of that information
is all put into one configuration YAML file. That part is just crazy cool, because now you can define all of your pipelines, how that data gets moved, and everything about it, all in a single YAML that you could deploy in a CI/CD pipeline. Really mind-boggling for me. You can see all of the information; I won't scroll all the way down because it's got keys and stuff in it.
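To give a feel for the idea, here is an illustrative shape of a single umh-core config.yaml holding every pipeline. The top-level keys are an assumption from memory of the docs, so treat this as a sketch of the concept rather than the verbatim schema; the point is that one versioned file drives every data flow:

```yaml
dataFlow:
  - name: tube-bender-conveyor          # EtherNet/IP bridge from earlier
    desiredState: active
    dataFlowComponentConfig:
      benthos:
        input:
          ethernetip:                   # plugin name assumed, as above
            endpoint: "192.168.1.10:44818"
        output:
          uns: {}
  - name: core-postgres                 # UNS -> TimescaleDB stream
    desiredState: active
    dataFlowComponentConfig:
      benthos:
        input:
          uns: {}
        output:
          stdout: {}                    # stand-in for the sql_insert output
```

Because it is one file, replicating a site is a git commit and a redeploy rather than a click-through.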
But you're able to have all of that information right here at your fingertips, and you can have it offline as well. I think this is really the key to what Brian showed: it's really great for first-time users, automation engineers, maintenance people, those who are close to the data and who should say what the data actually means. I think it's one of the largest and most important jobs to get the context of the people who understand the machine into the data. But then scaling that and replicating it to more sites and more machines, that was mostly a job for IT.
Alex
And they love scalability, and scalability means, in the first sense, Ctrl-C, Ctrl-V; and secondly, just CI/CD-ing the YAML from A to B and versioning it. I think this is a really well-balanced approach between those two extremes, full UI versus full code, which usually doesn't help either way, and we're trying to combine both. Cool. Well, that's all that I've got to present. I had more I wanted to do; I wanted to build
out the whole dashboard in Grafana to show some counting and tracking and stuff. Sadly, I didn't have enough time to develop it all for the webinar. But if there are any questions you guys have, Alex can push those over to me. We do. Yes, here's one: how long did it take for you to get into the system? How long do you think the learning curve is for somebody to get started? The learning curve with it?
I think the learning curve is going to be more about your infrastructure. I've been playing around with UMH for quite a bit. The concepts of what you're supposed to do with the data aren't very difficult. Getting the infrastructure set up where you want UMH is probably going to be more difficult, although you can now deploy UMH Core anywhere,
on any device, on any architecture. So really, the world's your oyster getting started with UMH in that case. And hell, you even saw how quick it was to just connect to the PLC; it can be really quick. Where I can see a lot of the difficulty lying in utilizing UMH is the rest of your infrastructure. How are you going to be connecting? What are your pipelines going to look like? What are the problems you're having to deal with? What are the data needs
that you're going to have to be dealing with? The benefit with UMH is that you get the full developer toolbox: delivery guarantees for your data, being able to manipulate the data exactly how you need it, being able to use CI/CD pipelines with it. So, how long will it take to get started with UMH? I don't think it takes that long to get started and to use it.
Alex
I would say that UMH has a longer road for how much you could learn about it. If there's anything about UMH, it's easy to get started with, but there is so much to learn: what you can do with it, how you're supposed to deploy it, what the various different ways are you could deploy it, what the different systems are you could connect with. That's really, I would say, where it starts to take off. Thank you. There's another one from Bert. So you showed
putting the context in, reading data, and turning it into actionable insights, dashboarding, alerting, et cetera. Can you also write back with UMH Core? Is there also the possibility to put insights or alerts into action in the direction of the PLC? Yes, you can. Yes, you can. So right now you have OPC UA write operations that you can do with it.
I don't think you have it for EtherNet/IP, Modbus, or Profinet yet. So in that regard, use UMH as that data aggregation layer, keep the fieldbus protocols, and maybe have an OPC UA device in the middle to be the control interface. And then, with OPC UA write in UMH Core, you're able to
write directly to those OPC UA servers. All IT connectors support write, plus OPC UA from the OT level, but we are a little bit careful and cautious with, I would say, the more legacy OT connectors, as they're mostly also running on the same CPU as the actual control loop, for example on an S7. With reading you can only do so much wrong, but with writing you can actually mess up a lot of things.
So currently, the only OT protocol we support writing on is OPC UA. But stay tuned: there will be a feature releasing soon which makes the write functionality more intuitive and extends it to more OT protocols. But yeah, as Brian said, putting an OPC UA server in between immediately makes things a little bit easier, so we also recommend that currently. Okay, I'm just gonna read off a question here: any tips on how someone new to
Alex (37:50.1)
the world of automation can start? I was a software dev who joined a systems integrator shop recently, building mobile apps and web dashboards, but I want to expand my skillset to the PLC and IT integration side. What should I start with on the PLC side? I'm interested in Siemens as I have the hardware to test. This is more of an OT question, but still, help me out. I mean, I'll be honest, you can really start with anything nowadays.
We now have the PLCnext container, which is pretty much a PLC runtime that you can deploy as a container on Ubuntu. There's also the CODESYS runtime that you can run as a container or on a Raspberry Pi. So you have both of those options for very cheap PLC examples to connect to if you want a physical hardware example. Now, as far as getting actual PLCs:
get the ones you're gonna be working on. There ain't no point in using any other ones; get the ones you guys are using. So if you're an Allen Bradley house, probably get some Allen Bradley PLCs. If you're a Siemens house, get some Siemens PLCs. If you guys are a hodgepodge of all of them, I mean, Modbus may be the best one to start with, Modbus or OPC UA, for
getting data up and down, especially with UMH. There's another one in the chat, if you could just take it there. All right: as a junior automation engineer looking into these IT/OT applications and projects, could you list some of the most important skills and technologies you should learn and build on? You've got to know containerization. That's probably one of the biggest ones in the IT/OT space that you're starting to see, the reason being that having
VMs on the OT side is really difficult to manage. It's much easier to manage with containers: lower resource usage, way quicker recovery times, and in general it's still the same application, just running on small hardware. So you definitely have to know containerization. Start with Docker, learn about Docker Compose, then get into Kubernetes if you've got the time for it.
Alex (40:11.17)
Kubernetes is more of a specialized one. If your business has a specific stack, they know that stack inside and out, and they want to use it, Kubernetes will really improve it. But if you're just starting out with a brand new stack and don't even know what to use, starting out with Kubernetes is just going to overcomplicate a lot of things for you. Other skills to learn about in the OT/IT space?
Let's see, specifically on the OT side: you definitely gotta know ladder logic, because it's everywhere; you're never going to get away from it. Let's see what other skills. Knowing more of the command line. So those are generally the ones I would look into a lot more: containerization, using the command line a lot more often, and understanding ladder logic. Those are probably the three. All right, now we've got another one.
I am stuck on how to start with the PLC part. On the IT side, I can build a simple app, a random simple counting app, but I am stuck on the PLC side: no clue on what to do, such as how to do the integration. I believe OT should be known first, and I can only use basic ladder logic. I understand. Okay. No clue on what to do and how. So, to understand: are you building out control systems
as an IT guy in the OT space, where you have to be programming the PLCs? Maybe it's on that end. I mean, there are courses. There are some people; one that comes to mind is SolisPLC with Vlad. He has some of the best courses on building out a lot of PLC code and understanding a lot of the underlying system, and where to start with things. That would probably be my number one. Let's see. Yeah, even just seeing other people's code helps for now.
Yeah, definitely go to SolisPLC. He's got a lot of great courses on PLC stuff, especially on the Allen Bradley side. Fine. First and foremost, thank you for taking the time. It's always great to see that people in the wild are also doing great stuff with UMH, also over the pond in the US, and working with the native technologies there. So thank you for taking the time. Hopefully for everybody else it was also a lot of fun. And we will send Brian a new camera so that next time he's even more...
Alex
More sharp on the screen? A camera and a mic. Any other advice or improvements? Send them my way. No worries. I think it's all about content. This was really, really cool and good, and thank you for taking the time. And also, everybody, thank you for joining us today. Stay tuned. And if you would like to engage with Brian and ask him more questions, please drop in via Discord. There are roughly 1,000 people already in there, Brian among them, and they are discussing this stuff back and forth all day.
If you have any questions, you can also post them there, and we can help you out. We're a really supportive community.