Monitoring Legacy Analog Systems with TinyML, Edge Impulse, and Blues Wireless - On Demand Webinar
The concept of "digital transformation" sounds both intimidating and expensive for most organizations. Replacing legacy system hardware and software to achieve a modernization utopia, while a lofty goal, is unrealistic for most companies.
However, today we can measure, monitor, and take action on closed analog systems without having to initiate pricey upgrades and systems changes. Using established machine learning techniques alongside modern IoT technologies, we can effectively modernize by tracking and reporting on existing systems in previously unheard-of ways.
In this webinar, we dive into "Monitoring Legacy Analog Systems with TinyML, Edge Impulse, and Blues Wireless". We spend our time together learning about:
- What TinyML is and how it relates to the IoT.
- The value of combining Edge Computing capabilities with cellular IoT.
- Building an ML solution on constrained devices with Edge Impulse.
- How two working projects use TinyML to monitor analog gauges and perform anomaly detection in thermal imagery.
Webinar Transcript - Monitoring Legacy Analog Systems with TinyML, Edge Impulse, and Blues Wireless
Speaker: Rob Lauer - Director of Developer Relations - Blues Wireless 00:00
Hi, everybody, and welcome. I think we are pretty much ready to get started here today. Again, welcome to our webinar on monitoring legacy analog systems with TinyML, Edge Impulse, and Blues Wireless.
There's a lot going on in that title, so we will slowly but surely unpack it all for you today. As the title says, we're going to look at what I consider one of the most engaging and useful use cases in the IoT. That is regarding machine learning, specifically TinyML, and how hardware and services from Edge Impulse and Blues Wireless work together – and dare I say, accelerate certain digital transformation initiatives. I know that's a very overused and loaded term but hopefully, you'll get a good sense of what we all mean by that today.
First, I want to do some really quick introductions, starting with myself. My name is Rob Lauer, I'm director of Developer Relations at Blues Wireless. With me today is TJ VanToll, who is the Principal Developer Advocate here at Blues. I'm also super happy to have Louis Moreau here today, Louis is a Senior Developer Relations Engineer at Edge Impulse. Together, we're going to provide what I hope is a very informative and pretty engaging ML and IoT journey for you all today.
Now, along with our speakers, I'm pretty excited about our agenda today. I'm going to kick things off with the first section, with this concept of an easy button for digital transformation. Again, remember we're scoping this webinar to effectively monitoring and analyzing legacy analog systems, really without interrupting their service. So, hold on to that thought for a bit.
Next up, Louis is going to provide an intro to Edge Impulse and what they are doing to kind of tackle TinyML on constrained devices. TJ and I are going to briefly dive into wireless IoT, and show off how Blues Wireless is tackling this problem in really new and cost-effective ways. Then TJ and I will be back to show off some pragmatic ways of putting some of the concepts we talked about today into action.
Lastly, I want to cover some brief logistics. We will be doing some live Q&A at the end but you don't have to wait until then to ask your questions. All three of us are here. We can answer any questions that may come up during the webinar. So, enter them in the Q&A panel as they pop into your head. We're also recording this; we will send out a link to the recording in the next day or so if you need to drop early or whatever.
It's always nice for me to start with this quote: ‘Complexity kills. It sucks the life out of developers, it makes products difficult to plan, build and test.’ This quote is from Ray Ozzie, he's the Blues Wireless founder and CEO. My hope is that what you see in the rest of this webinar – between what Edge Impulse and Blues Wireless provide, and the problems TinyML is helping to solve – I hope you really come away with this idea of simplicity, how complexity kills, but countering it with simplicity can lead to really useful and engaging solutions that honestly can delight developers and executives alike.
Now, what could be a better example of simplicity than the easy button? So, when we talk about this concept of digital transformation, which we've been collectively talking about for years, we often think about tearing down and rebuilding or replacing archaic legacy systems. Sometimes it's necessary, but it's always disruptive, whether it's to the people, the systems, or the bottom line. What we hope to show off today are some high-level ideas on how we can unobtrusively start to monitor, gather, and most importantly report on way more data about existing systems than could ever have been done previously.
So, what do I mean by this? Well, connectivity has fundamentally altered legacy embedded systems. It used to be that the types of systems that we built were closed, self-contained. For instance, like the machine on a shop floor may have been a very sophisticated system with complex embedded hardware or software, but if there was a monitoring system in place, you know, it was localized, and when the machine broke, you just replaced it. This system worked in the world of localized on-prem computing, it worked in what we considered or what we would call today like a non-connected world.
Now, these closed systems were the norm previously, but with the opportunities of modern monitoring and connectivity, they're becoming more rare, right? But connectivity, of course, is more than just putting a Wi-Fi or cellular radio on a board. In a monitored system, connectivity is about insight, it's about control, it's about deriving those insights from previously unheard-of sources like machine learning. It's about moving our ability to visualize and monitor anywhere, not just on the shop floor, but thousands of miles away. It's about enabling remote or mobile control so we can take action on our systems from anywhere, anytime. And of course, leveraging the power of the cloud, ::cough, cough:: Notecard and Edge Impulse here, for the kinds of insights we could never replicate on the factory floor.
The kind of insight that helps us spot problems before they happen, right? So we can fix machines before they break, instead of replacing them when it's too late. And while most hardware engineers are perfectly comfortable with embedded systems, microcontrollers and design, they're less comfortable with this side of the equation here. With the web, with mobility, with the cloud and with machine learning. These are the key areas where I would say the IoT is becoming real, solving real problems. Now, there are always exceptions to this rule, but for many, the IoT and ML require new skills, new hardware, and new services.
Speaker: Rob Lauer 05:44
So, at the risk of maybe getting ahead of myself a little bit here, I do want to take one small step back and address the elephant in the room: like what even is this thing called TinyML? And maybe to answer that, I need to take one more step back and talk about what machine learning (ML) is.
At a high level, ML is focused on using mathematical techniques, and large-scale data processing to build programs that can really find relationships between inputs and outputs. To me, a great way of summarizing that is to look at it in terms of a mathematical formula, like this one here:
x * y = z
x = 4, y = 2
In classical computing, an engineer presents a computer with input data; for example, the numbers four and two, as well as an algorithm for converting them into an output. So, multiply x times y to make z. As the program runs, inputs are provided, the algorithm is applied, and it produces some outputs – pretty straightforward.
ML, on the other hand, flips this on its head in a way. It's more the process of presenting a computer with a set of inputs and outputs, and asking the computer to identify the algorithm – or the model, in ML terms – that then translates the inputs into outputs:
x1 = 2, y1 = 2 → z1 = 4
x2 = 4, y2 = 2 → z2 = 8
Often this requires a lot of different inputs to ensure the model will properly identify the correct output every time. So, for this example, if I feed an ML system, the numbers 2 and 2, and an output of 4, it might just decide the algorithm is to always add those numbers together. But then if I provide 2 and 4 with an output of 8, the model should learn from those two examples that the correct approach would be to multiply the two provided numbers.
So, ML is really a paradigm shift for a lot of us. We're moving from writing strict rules in code to get answers, to starting with the answers – a set of data – and letting the computer build the rules for us, leading to a more flexible result. As the data changes, those rules should become easier to change.
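To make that contrast concrete, here's a minimal sketch of the two approaches in Python. To be clear, this is just a toy illustration of the paradigm – it's not from either of today's projects – and it assumes scikit-learn is installed:

from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Classical computing: the engineer writes the algorithm.
def classical(x, y):
    return x * y

# Machine learning: we supply example inputs and outputs and ask for the "algorithm".
inputs = [[2, 2], [4, 2], [3, 3], [5, 4], [1, 5], [6, 2], [7, 3], [2, 6]]
outputs = [4, 8, 9, 20, 5, 12, 21, 12]  # each output happens to be x * y

# The polynomial features give the model an x*y term it can learn to lean on.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
model.fit(inputs, outputs)

print(classical(6, 3))             # 18, because we wrote the rule ourselves
print(model.predict([[6, 3]])[0])  # roughly 18, because the model inferred the rule from examples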
What is TinyML then? In a nutshell, it's machine learning on highly constrained devices, right? It's constrained, or they're constrained in terms of available memory, available processing power. They're great for off-grid, low-bandwidth, low-power, edge computing scenarios. So, there is absolutely a natural correlation here between TinyML and the Internet of Things.
There are virtually endless opportunities for utilizing ML concepts to legitimately improve people's lives, where they work, how they work, while optimizing for safety and efficiency. And these use cases are really key, because the IoT has a bit of a perception problem, right? For some people, the IoT is building internet connected toasters that can print patterns on bread.
We have a reputation issue. In this industry, we've really fallen into a trap of putting Wi-Fi or cellular radios in everything and calling it done. It makes it all too easy to dismiss what we're doing as a fad. It's also why, if you're familiar with the Gartner Hype Cycle, the IoT appears at the bottom of the trough of disillusionment – it's really stuck in this mode of post-inflated expectations and hasn't yet found its way up to that magical plateau of productivity. But I'm here to propose that what we're talking about today, this idea of merging TinyML with the IoT (and yes, to aid in digital transformation efforts), is absolutely key to the future of the IoT.
So, what's good about the IoT today? Well, we're taking these incredibly small microcontrollers that communicate with sensors, or power servos, or whatever they may be doing to control other devices. Maybe they're generating ML inferences, we're building these awesome, really small programs, they're running on highly constrained devices, they are essentially great at sensing, measuring, tracking, controlling something, something physical. But the important part here is that they're also sending data to the cloud, leveraging connectivity to modernize.
For me, IoT is only going to be successful when we double down on these opportunities where connectivity enables us to do something that would be difficult, if not impossible otherwise, which is exactly what we're talking about today: ML and IoT solutions that focus on solving real problems, to move us out of that Gartner trough of disillusionment.
Let's focus a little bit on the overlap of the IoT and TinyML; this is where Edge Impulse and Blues Wireless come into play. As I've already mentioned, when you start talking about ML on edge devices, 9 times out of 10 (I'm just making that statistic up), you're also talking about connectivity in the same solution. Sometimes Wi-Fi can be the answer, other times LoRa, LoRaWAN, or cellular.
And while TinyML shines with Edge Impulse, Cellular is where Blues Wireless shines with the Notecard. With that very, very long-winded intro, I do want to hand the mic over to Louis Moreau, who's going to give us a bit of an intro to what Edge Impulse is all about.
Louis Moreau - Senior Developer Relations Engineer - Edge Impulse
My name is Louis Moreau and I'm a Senior Developer Relations Engineer with Edge Impulse. I actually started my career connecting rhinos in Africa, where I was developing a low-power GPS tracker solution to put into a rhino's horn. So, I was there at the beginning of the IoT, doing some crazy stuff.
After that, an old friend (my manager now, actually) came to me and said: with all the experience you have building IoT solutions and cloud environments, I want you on my team. So, I said OK, let's do it. It's been more than a year since I joined Edge Impulse, and it's an incredible company where everything is transparent and we're building tools for developers. This is really what I like: we want to build a tool for developers with no knowledge of machine learning, and let them create their own machine learning pipelines.
So, what is Edge Impulse? Edge Impulse is an embedded machine learning platform which lets you build custom machine learning pipelines. With it, you can accomplish a wide variety of ML tasks such as classification, object detection, anomaly detection, or just plain neural network classifiers. And we focus a lot on the predictive maintenance industry and related use cases.
But we also work with classic consumer use cases, where you just want to put some intelligence into the device. And so basically, using Edge Impulse, you can collect data from any sensor and deploy that model to almost any device, as long as it supports C or C++. What is really important is that you maintain control over your data and your firmware the whole time.
We have no black box. Every single block that we provide is open source, and you can have a look at the code – and I would strongly encourage you to do that. It's an online platform, and we provide different tools for each step within the studio, so you can perform those actions yourself. Usually, the first thing you want to do when you're trying to create a machine learning model is to collect some data. And we have different tools so that you can collect data directly from the device, or you can import preexisting datasets directly into your studio projects.
Then once all your data is ready, you can design your Impulse. An Impulse is a mix of DSP (Digital Signal Processing) and machine learning blocks. The DSP is really key in TinyML, because it extracts meaningful features from the raw data and passes them to the neural network, so it will learn more easily. Then we have tools to test your model, to make sure it's accurate and will work well in real life.
Then you can deploy your model to, let's say, almost any device, as long as it can support C++. We also provide ready-to-go firmware for the officially supported dev boards, so you can just test directly on the device. Most people use the C++ library or one of the standalone components, so they can run the model on their own hardware or integrate it into a broader system.
So that was it for me, I will be available for the Q&A. So, feel free to ask me any questions. I'm also here in the chat if you have any questions; I would be more than happy to answer. Thank you.
Speaker: Rob Lauer 14:44
Perfect. Thanks Louis. I'm going to do the same intro but for Blues Wireless here really quickly. I'm going to assume you're seeing my screen. So, let's take a really quick look at this company Blues Wireless.
I know many of you are already familiar with Blues, but for those of you who are new, welcome! Blues Wireless is an IoT company that is focused on wireless connectivity. We provide hardware and services that really try to back up this message of making wireless IoT easier for developers, and more affordable for all.
So, if this is our core mission, how do we make that happen? Well, I'd say we have three core focuses:
- Securing data. From the moment it's acquired by a sensor all the way through to landing on your cloud application of choice.
- Low power. All of our hardware solutions and our firmware defaults are low power out of the box, to the tune of eight microamps when idle.
- Developer focused. We're very much a developer-focused company, and this is super important to us. Like I know it is with Edge Impulse, our developer experience is a top priority, and I think you'll see that play out today to a certain extent.
Looking at our hardware really quickly, the Notecard is the core product that we provide. It is a low-power System-on-Module, measures a tiny 30 by 35 millimeters, and has that M.2 edge connector at the bottom for embedding in your project. There are both cellular and Wi-Fi variants of the Notecard. The cellular variant includes GPS as well, and it comes prepaid with 500 MB of data and 10 years of global service.
The API – the way you interact with a Notecard – is all JSON. And we provide SDKs for popular languages here. There are also some community-supported SDKs for Rust and .NET, so pretty good language coverage. And on the cellular side of things, we do support popular cellular protocols available globally, like NB-IoT and LTE-M.
Now, to make it easier to use your Notecard when you're prototyping, or even when you're ready to embed in a permanent solution, we provide these development boards called Notecarriers. They allow you to snap a Notecard in, and connect it to virtually any solution you can dream up. And finally, Notehub is the Blues Wireless cloud service that receives data as a proxy from the Notecard and then in turn can securely route that data to your cloud app of choice.
Less important in the context of today's talk: with Notehub, you can manage fleets of devices and perform over-the-air (OTA) microcontroller and Notecard firmware updates. And again, Notehub is all about security as well. Data is transferred off the public internet via private VPN tunnels when we're talking about the cellular Notecard, and that data can optionally be encrypted as well.
Great thing about using the Notecard with Notehub, there's no certificate management required, there's no key rotation, the Notecard knows exactly where it's supposed to go as soon as it's turned on. Now, again, everything is JSON in and JSON out with a Notecard API. For example, if you want to get your Notecard’s GPS location, you simply call this request card.location, and it's going to return a JSON response with the requested location.
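For example, here's roughly what that looks like from Python with our note-python SDK – the serial port below is a placeholder for wherever your Notecarrier shows up, and the response fields are illustrative:

import serial
import notecard

port = serial.Serial("/dev/ttyUSB0", 9600)        # placeholder serial port
card = notecard.OpenSerial(port)

rsp = card.Transaction({"req": "card.location"})  # JSON request in...
print(rsp)                                        # ...JSON response out (lat, lon, time, etc.)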
And to kind of help visualize where the Notecard and Notehub sit in any given IoT solution, you're going to bring your microcontroller. One thing that makes Blues unique is that you can bring any microcontroller, any single board computer, use any sensors, use virtually any language. You're going to compose packets of JSON data that we call notes. Those notes get saved or queued on the Notecard and then at a cadence you specify, they get securely synced with our cloud service Notehub.
Now, we do not want your data to live on Notehub, we want you to route that data out somewhere – preferably to some cloud app of choice, right? It could mean AWS, Azure, Google Cloud, or some IoT-optimized provider like Ubidots or Losant or Datacake. You can also reverse this entire process and send data back to a Notecard or a fleet of Notecards from the cloud for remote control or fleet variable updating scenarios. With that, I want to pass things over to TJ to do a quick demo of the Notecard in action.
Speaker: TJ VanToll - Principal Developer Advocate - Blues Wireless 14:04
Yeah, I want to show some of this stuff, just what it looks like, how it works, because I know we have some people here that are completely new to Blues. I think to understand some of the sort of fully-fledged projects that we're going to be showing in a minute, it helps us see some of the hardware and the basics of how the Notecard works in action.
So, what you're seeing right now is my desk where I've got two of these Notecards: I have a cellular Notecard and a Wi-Fi Notecard, and they're 30 x 35 millimeters. Hopefully you get a sense of how big that is compared to my hand. These are devices that you can embed into pretty much any IoT solution through this M.2 edge connector on the side there, as Rob showed.
We do make a series of Notecarriers as well, so, you can embed these in just about anything. But we provide a series of Notecarriers to essentially make prototyping, and in some cases even deployment, easy. The first Notecarrier I'll show is our Notecarrier A, it's got this nice black PCB. You'll see it has a slot for the Notecard to slot in, but it also has JST connectors on the side for like LiPo batteries and solar. It has these onboard cellular and GPS antennas. It's got a micro-USB slot that we'll use in a second for connecting up to my laptop to show you a little bit more of how this works.
We have the A, I've got a Notecarrier AF over here. Similar concept for slotting in a Notecard, but with the AF you can also slot in a Feather-compatible microcontroller. If you want to use that alongside the Notecard, the AF is perfect for that sort of thing. And I also have a Notecarrier Pi – I’ve got to grab it from the other side of my desk here, and make sure I don't lose any cords in the process.
But the Notecarrier Pi is a HAT for the [Raspberry] Pi that slots right on top with stackable headers – again, this has a slot for a Notecard to go through. It also has this nice little slot for a Pi camera to slide through as well, and I'm going to be using this here momentarily.
That's what the basic hardware looks like. You'll recall that I mentioned that all of the Notecarriers have this micro-USB connector on the side. So the next thing you'll want to do – I'm going to take the Notecarrier that I was just working with and connect it to my computer via USB, making sure I plug it in the correct way – is head to dev.blues.io. This is our hub for really everything we do at Blues: all of our tutorials, all of our guides, data sheets, if you really want to dive deep into our hardware. But if you're brand new to Blues, the very first thing I'd recommend you do, once you have a Notecard, is head to our Notecard Quickstart.
This is a quick tutorial that's going to walk you through the basics of how the Notecard works: how it communicates, how you can get data off the Notecard and push it up to the cloud, which I'm going to give you the quick highlights of here today. One thing you're seeing, here on the side, is that you can actually connect to the Notecard directly through your web browser. We use what's called the Web Serial API that's built into Chromium-based browsers like Google Chrome, like Edge, like Opera. You can actually make a connection directly to your Notecard, and start issuing commands directly within your browser.
So, if I scroll down in this tutorial, you'll see that once you hook things up, the first thing that the tutorial is going to have you do is run a card.version request, which I'm going to go ahead and do, and then we'll talk about it, and I'll point out a couple quick things. First of all, notice that the entire Notecard API is all JSON. So it's JSON in (in this case, I'm saying the request I want to run is card.version), and it's also JSON out. So, in this case, I'm getting back a bunch of information about the Notecard, like the firmware version.
Now, I'm running everything in the browser, and that's what I'd recommend when you're first getting started and you're doing what we're doing today: just getting familiar with your hardware, seeing how things work. But I will mention that you can also do any of the actions I'm showing through the Notecard CLI, which we have available for Windows, for macOS, and for Linux. So, if you're more of a CLI person, you can issue these commands there.
We also have SDKs available for C, C++, Arduino, Python, CircuitPython, Go – a number of platforms as well. So, long term, when you're starting to deploy these actual solutions, chances are you're going to be running these commands and building them into some sort of script or program that you're deploying on one of those hardware options. When we get into our full projects here in a minute, you'll see some of those things in action.
First, there are two more commands I want to just give you the basics of, because you'll see they're sort of core to how a Notecard works. And the first is for communicating with Notehub. Now Rob mentioned this, but one of the cool things about the Notecard is it knows how to talk to our cloud backend, Notehub, out of the box. There's no certificate management, there's no craziness that you have to do; really all you have to do is go to Notehub, which is available at notehub.io.
You will need to create an account to create a project, so, I'm going to create one real quick – just call it testing because that's sort of what we're doing here today. And when this gets built, I'm just going to grab this identifier, because the next command I need to run if I scroll down in the tutorial a bit is this hub.set request. And with hub.set all you need to do is pass that identifier you just created in Notehub, because the Notecard knows how to talk to Notehub, but it needs to know which project to associate the device with and to push data to. So, I'll run that hub.set to make that association and I also need to run a hub.sync.
The other thing about the Notecard, it is very low-power friendly by default. It tries to avoid or I should say like, minimize the number of times it's doing expensive things like making cellular connections or GPS connections. So, it's not going to take this association to this new project and push it out to the cloud until it reaches whatever interval you configure, or you run a sync to tell it to explicitly sync anything that's happened on this device up to Notehub. When I run the sync, what I should be able to do is go back here, and you'll see I now have a connected device.
And so, I've now made the association between the hardware here, the Notecard, and what I have up here in the cloud. Now, last thing I want to show is, once you've made this association, chances are you want to do something with it. And most of your sort of IoT projects, you're going to want to take some sort of data you're collecting – that data can be sensor data, that data can be location data, that data could be like the machine learning... in the machine learning world it can be like inference or classification data, like what did this model show – and push that data up as well.
But regardless, usually we have some data you want to push up to the cloud, and the easiest way to do that in Notecard language is with the note.add request. So, the note.add request takes a body, which is just an arbitrary JSON body, so you can put whatever in here. So again, this could be your sensor data, your location data, your inference data, whatever you happen to have, and push that up, which I will go ahead and do.
Then I'm going to run this hub.sync to again, take any changes that I've made locally, and push those up to the cloud. When I do, I should see that data come through as an event. And remember, I am using a cellular Notecard for this, so, it sometimes can take a little bit of time/ latency for the data to come up. Looks like it's still seeking a few things, so we can give it a second to see that data pushed up. But overall, that's the workflow, you're collecting data on the Notecard.
And again, the specific data as you're going to see as we move into these projects here in a second, won't be hardcoded. We're going to look at how to capture some actual live information, but the Notecard makes it really trivial to toss this on just about any device, capture the data, bring it up to the cloud, where you can do more interesting things with it. We'll see if it came through – yeah, there it is. So, it took a little second because it was syncing the stuff, but our temperature and humidity came through.
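For reference, here's roughly what that same workflow looks like when you script it with the note-python SDK instead of typing requests into the browser – the serial port and ProductUID below are placeholders for your own values:

import serial
import notecard
from notecard import hub, note

port = serial.Serial("/dev/ttyUSB0", 9600)    # placeholder serial port
card = notecard.OpenSerial(port)

hub.set(card, product="com.your-company.your-name:testing")  # placeholder ProductUID from Notehub
note.add(card, body={"temp": 22.5, "humidity": 41.0})        # queue an arbitrary JSON body
hub.sync(card)                                               # push queued notes up to Notehub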
So, this should give you a basic background of Blues. Obviously there's a lot more we could cover, but I wanted to make sure you had some of the general information for understanding some of these fuller projects. I will say before I toss it back to Rob, that if you are completely new to Blues, and you're trying this stuff out for the first time, this Quickstart is what I'd recommend completing.
It's going to walk you through the steps that I just did. It's also going to show you how to communicate in the opposite direction as well. So, if you want to take data from Notehub and push it down to the card, like if you want to do some sort of remote control scenario where you're pushing commands, a request from some cloud down to your device, that's something you can do. And the Quickstart can sort of show you how to do that.
I'd also recommend heading to the guides and tutorials section of the docs, because you can find our sensor tutorial, which is going to help you instead of using hard coded data, hook up some actual sensors to different boards using different languages and figure out how to push those up to the cloud. I'd also recommend checking out our routing tutorial, because we can help you then take that data from Notehub and push it out either to just some HTTP endpoint, or a number of different platforms, IoT platforms, whatever it is you have in mind, and help walk you through that process as well. Because I find that once you've completed these three tutorials [Quickstart, sensor tutorial, routing tutorial], you're in a pretty good state for completing some of these fuller projects that we're going to be showing.
But Rob, did I get everything? Did I miss anything? Do you think that's enough to go from there?
Rob Lauer: 28:58
Fair enough. Well, I will do my best to share here. Again, we're going to dive into a couple of projects that really try to utilize this concept of machine vision, which is really just allowing a computer or microcontroller to see. So, we're going to go through one project that uses machine vision to interpret analog gauges – this is what TJ will show later on. The other is using machine vision to analyze thermal images from a heating system (my home heating system) to perform a type of anomaly detection. Of course, you can easily extend these into more full-fledged anomaly detection solutions or even predictive maintenance – a lot of opportunities there.
Speaker: Rob Lauer 29:04
So, I have the pleasure of diving into this anomaly detection app I built that uses the Notecard along with Edge Impulse to analyze thermal images. Now I should put air quotes around “anomaly detection” because what I'm really doing in this app is using image classification to ask my ML model, what type of image am I looking at? And if it doesn't know, I'm calling that an anomaly – potato potahto. But let's take a little closer look at the story.
So, this is a very personal story for me, and it starts with my home's hot water boiler system. Any of you who own or manage a system like this, know they are not cheap, especially here in the States. They're one of the least common sources of home heat. Our last one ran for like 15 years before we had to pony up and replace it. So, I'm super paranoid about failures with the system, and it came to me that I could, or I'd like to, somehow actively monitor this closed system. But I certainly wasn't about to open up the case and hack into the wiring and void the warranty. Now, you can easily apply that on a broader scale when you talk about large-scale machinery on the shop floor.
I built out a machine learning solution that could effectively do this for me. Now, I wanted to look for anomalous behavior. For instance, there's a pressure relief valve on the upper right part of the boiler. In reality, this should never get hot, as it would mean there's too much pressure in the boiler, and water would come out, hot water would come out. If it does, I'd really like to know about it.
So, looking again, at my system, alongside one of the first thermal images I took, you should roughly be able to see the mapping here. Now this point is when I also learned about this concept of thermal emissivity, which some of you probably already know about. This is the concept of the effectiveness of materials in emitting energy as thermal radiation, which is what's caught by a thermal camera.
Now copper, I learned has an incredibly low emissivity rating, which is why the pipes don't blossom much at all in the heat. But thankfully, the iron pumps do. (Just an FYI, if you want to avoid the Predator, I think you just need to get inside a copper box, or something.) But I built this project using a Raspberry Pi Zero. Why? Well, it's my favorite tiny Linux single-board computer, otherwise, there's no particular reason. I could have used virtually any microcontroller for this project.
So, truth be told, there's no strict advantage to using the Zero other than it's super easy to work with, including with the Notecard and Edge Impulse. Since the Zero has a 40-pin connector, like the full-sized Raspberry Pi, you just slot on that Notecarrier Pi HAT that TJ showed you – this one also has a cellular Notecard on it. I should note, this also makes the Zero a great option for these lower-power edge computing scenarios. And for the thermal camera, I wired up an MLX90640 thermal camera to the Zero. It produces a really tiny, but quite cool (no pun intended), 32x24 resolution thermal image.
And now let me get into some code and geek out a little bit here. I wrote the app in Python. And for my first step, I knew that developing an ML model meant I needed data – lots of data – to build out an accurate model. So, this meant taking a ton of thermal images of my boiler in action.
I wrote this Python script, which is a little bit abridged here for space. And it took a picture every like 5 or 10 minutes or so, for more than 24 hours. It would snap a picture of the system and save it to the Zero’s file system.
thermal-img.py
import time
import board
import adafruit_mlx90640
from PIL import Image

MINTEMP = 20.0      # low range of the sensor (deg C)
MAXTEMP = 40.0      # high range of the sensor (deg C)
COLORDEPTH = 1000   # how many color values we can have
INTERPOLATE = 20    # scale factor for final image

# (colormap, map_value, and constrain are defined elsewhere in the full script)
mlx = adafruit_mlx90640.MLX90640(board.I2C())

def takePicture():
    # get sensor data
    frame = [0] * 768
    mlx.getFrame(frame)

    # map each temperature reading to a color
    pixels = [0] * 768
    for i, pixel in enumerate(frame):
        coloridx = map_value(pixel, MINTEMP, MAXTEMP, 0, COLORDEPTH - 1)
        coloridx = int(constrain(coloridx, 0, COLORDEPTH - 1))
        pixels[i] = colormap[coloridx]

    # save to file
    img = Image.new("RGB", (32, 24))
    img.putdata(pixels)  # write the 32x24 pixel data into the image
    img = img.resize((32 * INTERPOLATE, 24 * INTERPOLATE), Image.BICUBIC)
    ts = str(int(time.time()))
    filename = "ir_" + ts + ".jpg"
    img.save("images/" + filename)
    return filename
The only thing I'd point out in this code is the set of constants at the top. When using a thermal camera, there's some work you have to do to identify the high and low range of temperatures you predict the sensor is going to see, in order to get the best color range – the color variations – in your images. And these are the values that happened to work best for me.
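For completeness, the map_value and constrain helpers that the abridged script references are just the conventional range-mapping utilities – something along these lines:

def constrain(val, min_val, max_val):
    # clamp a value into the [min_val, max_val] range
    return min(max_val, max(min_val, val))

def map_value(x, in_min, in_max, out_min, out_max):
    # linearly map a temperature reading onto the color palette index range
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min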
So again, I ended up you know, my first step here, taking a ton of images. They're all stored on my Zero in the file system, you can probably imagine. And even when I started to look through them briefly, I was very easily able to start classifying three types of images at a very high level, right?
- Cold, the system is off completely,
- Warm, it's either heating up or cooling down, and
- Hot, meaning it's actively running and producing a lot of heat.
But what about classifying anomalies? You know, any ML model I would program at this point wouldn't be able to tell me about an anomaly, because it only knows what I tell it, right? So, with a little help from Photoshop, (I cheated a bit, kind of) I created a set of ‘anomalous images’ that would simply focus on there being hotspots where they shouldn't be. For instance, on that pressure relief valve. I'll be honest, I don't love this solution. And I'm sure there's a better way to do it. But for my POC, it did work just fine and gave me a good start on my model at least.
And this is where the fun honestly really started for me with Edge Impulse. Now, I don't work for Edge Impulse, I'm not getting paid by Edge Impulse, but if you are looking for the easiest way to really build a variety of ML models, and deploy them to all types of microcontrollers, or single-board computers, I can't recommend Edge Impulse enough. So, I started by creating an image classification project. There are a lot of options here, depending on the specific type of model you want to create. Maybe it's audio-based or you want to use gesture recognition. Whatever it is, there's a path for you.
The next step is the data acquisition phase. What I did was I uploaded all of my collected thermal images. Now there are numerous ways of acquiring data within Edge Impulse studio. I will say that the data acquisition phase, for me at least, is always the most tedious, boring, time-consuming task. However, Edge Impulse really does a great job of simplifying the process by providing some really engaging tooling options to classify groups of images, for example, or even start processing them individually.
And I believe if you're using an object detection project, I think that's what it is, Edge Impulse studio will actually learn from previously classified images, and then start to guess and pre-classify images as you're going through. It's really something else, I highly recommend checking it out.
Anyway, setting up the rest of my model, with learning blocks, and identifying the four features I was looking for, was a matter of click, click, click and done. Yet another nice feature for me about Edge Impulse, is the fact that for idiots like me, and based on the type of model you're creating, and the data you've added, many of these values are pre-populated with defaults, best guesses, which frankly, I often end up using. You can also train your model in the cloud. After training my model, I could see I had a pretty decent POC ready, right? In this feature explorer, you could see three different states really well, pretty well identified here.
But this is where I caution you, in that your ML model is only as good as the data you provide. You can see here I supplied 129 images. In reality, if I could do this again, I'd probably do, I don't know, 5 or 10 times this number. Also, the anomalies identified ended up mixing in too closely with the other images to be super useful. But it did work, enough. Again, it's a POC, but I just want to be completely transparent here. So, at this point, I've got all these images and I've got my ML model created in Edge Impulse Studio. Since I was on Linux, I could use the Edge Impulse Linux runner to download the model file in this EIM format directly to my Zero.
And next up, it was time to write some code, some Python code to actually use the model on the device, which again, what I love about Edge Impulse is that I can spend way more time writing code and less on building and kind of tweaking that ML model.
Again, another Python script here, very much abridged for clarity. I was taking a picture every 10 minutes, and then processing that image with the Edge Impulse Linux runner, which was previously installed on my Zero. There are some great docs provided by Edge Impulse on getting started on the Pi, by the way.
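Before those steps run, the script loads that downloaded .eim file into the Edge Impulse Linux runner – roughly like this, where the model path is a placeholder:

from edge_impulse_linux.image import ImageImpulseRunner

runner = ImageImpulseRunner("/home/pi/model.eim")   # placeholder path to the downloaded .eim file
model_info = runner.init()                          # load the model and read its metadata
labels = model_info['model_parameters']['labels']   # e.g. ['anomaly', 'cold', 'hot', 'warm']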
Step 1: Take a Picture Every 10 Minutes
filename = thermal.takePicture()
img = cv2.imread("images/" + filename)
Step 2: Process Image w/ Edge Impulse Linux Runner
features, cropped = runner.get_features_from_image(img)
res = runner.classify(features)

if "classification" in res["result"].keys():
    print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='')
Step 3: Store Generated Inferences in JSON and Sync with Cellular Notecard
note_body = {}

for label in labels:
    score = res['result']['classification'][label]
    print('%s: %.2f\t' % (label, score), end='')
    note_body[label] = round(score, 4)
print('', flush=True)

note.add(nCard, file="thermal.qo", body=note_body)
So, this runner generates an inference, an interpretation of the image that I'm sending it. Again, this could either be hot, cold, warm, or anomaly. And next, I wanted to save these inferences in JSON format, because I'd be using the Notecard, of course to relay these inferences to the cloud. And this is an example of a note, which is just a JSON object representing a cold state. So, in this example, 99.8% chance that it's representing a cold image.
Example “Note” Representing Cold State
{
  "anomaly": 0.0,
  "cold": 0.9985,
  "hot": 0.0,
  "warm": 0.0015
}
Now, the Notecard would then securely sync this data with our cloud service, Notehub. And we can see an example here if you squint and look really closely of the data stored from my ML model. Now, again, we don't want that data to live on Notehub, we want to make it really easy to route it to your cloud app of choice. And that's where Notehub routes come into play. So, routes allow you to forward your data from Notehub, to a public cloud like AWS, Azure, Google Cloud, or you can use MQTT, or a custom HTTP endpoint.
In my case, I created a simple route to Ubidots. Literally all I had to do was plug in this endpoint URL and an auth token to make it work. And so, I built out a very simple Ubidots dashboard, and it's really, to me, a great example of the opportunity of ML here. That is, instead of uploading all of that accumulated data to the cloud for processing – which of course can have both logistical and privacy concerns – I'm creating these inferences locally, and just sending to the cloud what I need.
And this project got a little bit better, though, because I also added an alerting system onto it with Twilio, to automatically send me SMS alerts when anomalies were detected. Again, this is all just using some simple Notehub routes. As TJ mentioned, we have some nice guides, and tutorials on dev.blues.io, if you're curious to know, to learn a little bit more about routing data.
And if you want to see any more about this project, I do a deeper dive on our Hackster page. You can head to this URL (hackster.io/blues-wireless) and check those out. We have a lot of examples of actually using Edge Impulse and the Notecard in other scenarios as well. And that is my reminder to send it back to TJ – I assume you are ready to show off your little demo?
Speaker: TJ VanToll
I will do it. Yep, I'm going to bring my screen back up. There we go.
Speaker: TJ VanToll 40:00
I think you're going to see a lot of similarities between this project, and the one Rob just showed. So, this is a project that originally came from our coworker, Brandon. But it's something that I've sort of forked and experimented with and made my own through different projects as well. But Brandon's original problem is around this gauge.
Now, for anybody here that's worked with pools before, or hot tubs, or similar systems, you might know how much of a pain that is. Well, in Brandon's case, his system that cleaned out the pool was driven by this pressure gauge. And specifically, if I go back to this version that has these labels on it, when this gauge showed pressure between 15 and 25 psi, that indicated that the pool was cleaning and operating normally as expected, whereas if the values were too high or too low, that meant some sort of manual action was required. So, if it's too high, it means the filter needs to be backwashed. If it's too low, the filter needs to be cleaned out. But some action needs to be taken to get the thing working back as it should.
Now, if you want to add some intelligence around this – so that you don't have to manually go out and check this gauge every day, or every week, or whatever cadence makes sense for your equipment – historically, as Rob mentioned with his boiler system, you'd have to make some sort of physical changes to that system: go in and put a new sensor on the line somehow, or put some sort of smart gauge on there. Which, if you're really good at that sort of thing, in some cases is an option, but lots of times it isn't. I'd be terrified to go in and try to modify this sort of pool system, this expensive system, afraid I'd screw something up.
And that's even more true as you get into more industrial equipment. Lots of times, these aren't controls that you can easily change. Which makes these sorts of machine learning based approaches so intriguing, because it gives you the ability to monitor the systems without having to physically change them, which can be expensive, or in some cases, like I said, not even possible.
So, the way I like to approach these sorts of problems – because as I've learned more about this approach, I've seen these sorts of machine learning cases coming up more and more – is to think of it in terms of these steps.
So, first of all, you need to set up hardware. And again, as you saw from both Edge Impulse and Blues, you have lots of options here; it's really whatever hardware you're most comfortable using. In this case, if you're working with image data, you do need some sort of camera. In Brandon's case, this was a Pi Camera on a Raspberry Pi, but really any sort of camera device to capture image data will do. Then you need to capture data, build out some sort of model in Edge Impulse – which I will show in a second – get that model down to your hardware, and then take the data that you're collecting locally and transfer it up to the cloud in some way, shape, or form.
And you're going to see a lot of parallels between what I'm showing as I go through the steps and what Rob showed as well. So, first of all, for setting up the hardware. Now, when Brandon initially set this up for his workflow, he mounted a Pi with a Pi Camera in front of the gauge, just sort of mount it on a tripod in a weatherproof case. My sort of hacky setup was this thing, this little container that I set outside with some actual duct tape to hold the camera up, which is hacky. But as Rob sort of mentioned too, that it's sort of okay when you're first doing this to not worry too much about your hardware setup.
What's most important is that you get data in so that you can start experimenting, and iterating on these solutions. Because if you come up with something that works, you can always come back and iterate on this sort of process coming up with something a little more production-ready, a little more ready for some sort of a long-term deployment. But feel free to start with something hacky, just so you can get some data in, because the most sort of tedious part of this entire process is getting the data that you need to drive these models.
So, I'm going to just show this in Edge Impulse real quick. This is the project that drives this tank system. And you can see that Brandon's already got a number of different images in here. And there are different ways that you can connect this, so you can import data into here, but you can also sample directly from devices. So, what I'm going to do is start up my watcher. This is my Raspberry Pi that I've got sitting on my desk, and I've SSH'd into it. And if you install and run this Edge Impulse Linux command, you can actually get a feed.
So, watch right here, you see me, a feed from the Pi Camera. Now, I'm not going to walk out to Brandon's pool right now, but I do have the slides. So, give me a second to finagle this and don't laugh at me as I do this. We'll go back to this slide, and I've got to get it up, hold on a second. I can do this, right, I've got to have this on the other monitor so we can do this. So, this is not going to be awkward at all. We're going to put this over here. We're going to take a picture. Look at how good I am at this.
So, we got our picture in, this label here is also kind of important, because this is how you classify the different states of your images. So, tank pressure normal, which it happens to be, probably going to screw up Brandon's model, but that's okay. That is a normal pressure reading on the gauge.
The challenging part again, this is similar to Rob's experiment, as well as you really do need a dataset that encompasses all the different states that you want your model to be able to detect. So that means I might have to get this gauge pointing at all the different values, right? High values, which in this case, I probably could finagle, I maybe could turn off the system and play with the gauge and get a pretty good comprehensive data set with that needle pointing in lots of different directions, so that I have a good representation of high pressure, low pressure, and whatever I need to drive this model.
And you can see that it took quite a few images – Brandon's got like 400 in here to help drive this. Alright, I've got to reassemble my stuff; hopefully that won't be too hard. So, we did this, we captured the data. Our next step is to actually build a model based on the data and deploy that model to our hardware. Rob showed this, and I think you'll find the steps to be kind of similar, which is a good thing, because these are repeatable steps that you can take. This is the Impulse design screen in Edge Impulse where you do this sort of thing.
Now, like Rob, I'm also very much a machine learning novice. So, I also appreciate that the defaults here in Edge Impulse are actually quite good. When I've built things with this before, usually I just take the defaults, and get started with that. And that usually gives me most of what I need. And then I can refer to the docs or ping smart people, like Louis, if I need help to further sort of fine tune these processes and get them like I need.
But overall, this is how you create your model, and you can deploy either through the CLI, which Rob actually showed – I didn't realize you could deploy models directly with the Edge Impulse CLI, so I learned something new – or you can use the UI here in the deployment section to actually deploy this.
Then the next step is to go to our actual source code. This is the code that runs this sort of gauge detector. And I'm not going to walk through every line of this Python file, I'll give you a link to the full source code and the full project right up after this. So, if you do want to replicate this or take this in its entirety, you're welcome to do so. I'm just going to point out a few quick things.
So, first of all, I do have to load up that model file. And if I scroll down a little bit here to the main loop that drives this: I'm going to take the model file and set it up as an Edge Impulse runner, I've got some code to get an image from the Pi Camera, and then I'm going to run that image through the Edge Impulse classifier. So, the classifier is going to take that image and classify it into those different labels that we set up. And actually, if you go into model testing and I pick one of these images at random, it's kind of cool that you can classify it straight through here.
I have an image, the resolution is bad but this is pointing at like 22-23, and you can see the classifications here. In this case, the model was very confident that this was tank pressure ‘normal’. It's actually positive that it's tank pressure, ‘normal’. And if it were other ones, you would see it reflected in these classifications. So those are the values that come back in how you sort of classify to know which mode you think the gauge is in, or what classification this image came back with. Which takes us to the final step in our process. Because you have that data, usually you want to do something with it, which is where the Notecard comes in really, really handy.
And so, the other thing I'll point out in this file is if I scroll back up, you're going to see some of the commands that we saw earlier when I gave the basic intro to the Notecard. So, there's a little code that uses our Python SDK to initialize the Notecard so that we're able to communicate with it. If I go down, you'll see the hub.set command. So that was the next command we ran to associate our actual Notecard physical Notecard with a Notehub back end.
Then if I scroll down, you'll see I am also note.adding the classification results, so I'm pushing those results up to Notehub. And if I go into the Notehub project, and I switch back to the pool tank and go into the events, you'll see that this classification data comes in and actually it's 97% sure that the tank pressure is too high. So, we're going to have to talk to Brandon a little bit after this about his pool situation. So, it's pushing that data up to Notehub.
Finally, the last piece of the puzzle here is there's another note.add request that happens here that's conditional. So, if the state, the classification, that came back from the image is either low or high, there's a separate note that's going to be sent out with this tank alert. And this actually acts as the trigger in Notehub to send out an alert. So, you'll see actually, because the tank pressure is high, an alert went through here, and we have a route set up so that specifically when that type of alert, or that type of note file comes through, it will send that SMS through Twilio.
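Condensed down, that conditional logic looks roughly like this – not Brandon's exact code; the label names and the alert file name are illustrative:

scores = res['result']['classification']      # e.g. {"tank_pressure_normal": 0.97, ...}
state = max(scores, key=scores.get)            # label with the highest confidence

note.add(card, file="tank.qo", body=scores)    # always log the classification results
if "high" in state or "low" in state:          # pressure outside the normal range
    note.add(card, file="alert.qo", body={"state": state})  # separate note that triggers the Twilio route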
So there's an example of the text that came through. And if you found any of this stuff interesting or if you want to try to replicate this yourself, if you head to https://bit.ly/ml-pool-tank you can find the full write-up on this – Brandon's full write-up that has details on the project. It's got the full Python source code.
It also has links to our Twilio docs, so if you do want to set up alerts that are Twilio-based SMS alerts, there are full instructions on how to do that there as well. So, check that out. What's exciting about this is, the more I use this, the more I run across things in my day-to-day life – it's like oh, hey, that would be a really cool thing to set up, spin up a basic model, put it out in the field, get some results, send it up to the cloud. There's lots of interesting things you can do once your brain starts to see problems through this lens. That was it, Rob, did I get it all?
Speaker: Rob Lauer
Yeah, it was perfect. I always try and pride myself on ending webinars before the allotted time, so I'm going to get right to the ending here so we can jump in for a little bit of Q&A. Just some really quick notes before we do: both Blues and Edge Impulse have their own previews you can use. If you head to dev.blues.io for Blues, you can look at all of our technical resources. Edge Impulse has this awesome evaluation, or walkthrough rather, at studio.edgeimpulse.com/evaluate. Also, just for attending: if you're curious about Blues Wireless, use that URL or scan that QR code to take 15% off a Starter Kit. And again, if you want to look at more of these projects in more detail, check out hackster.io and you can find all kinds of Edge Impulse and Notecard projects.
Speaker: Rob Lauer 51:35
So, with that, I know we got a lot of chat questions. If you have any other questions, please put them in the Q&A panel. And we will try and get to those in the next five minutes or so. Let me see if I can try and moderate some kind of Q&A.
I know one interesting point was brought up about how we used the Raspberry Pi for both of these projects. And I totally confess that I tend to default to the Pi platform. I think I mentioned this because it's so easy – I can use Python, it's really comfortable. But it's not super realistic for a lot of edge computing scenarios, because it is a bit of a power hog, even the Zero itself.
So, this is just a reminder for TJ and I going forward, I think we need to work a little bit more on constrained microcontrollers with our projects. So just putting that disclaimer out there. Actually, Louis, do you want to talk about FOMO a little bit? Because that was a pretty interesting new feature you guys released.
Speaker: Louis Moreau
Yeah, sure. With pleasure. So yeah, I've seen some questions on how to run object detection on microcontrollers. And we very recently, like a month and a half ago, released a new model called FOMO, which stands for "Faster Objects, More Objects". And it actually performs really well on microcontrollers; on a Cortex-M7, I think you can achieve something like 30 frames per second if you've got the Arduino Nicla Vision, or even the Arduino Portenta Pro or the OpenMV Cam. And yeah, that's astonishingly fast.
And how does it work exactly? Well, we still use some transfer learning techniques, but instead of extracting bounding boxes using SSD, which stands for "Single Shot Detection", we actually train on centroids. We divide the image into sub-grids, and we classify each of those sub-grids independently.
So you get the location of your objects in the image. What you won't be able to get is the size of the object, because you won't get the bounding box – you will just get a dot where your object is. This has been developed by our ML experts, and yeah, it's a brand-new technique. I strongly encourage you to test it if you're interested in object detection on constrained devices.
Speaker: TJ VanToll
Cool. I'll say too, just as a quick note, that the object detection stuff is a lot of fun to play with. So, Rob and I are mostly showing classification, but object detection is essentially just finding an object in an image. And I played with it to teach it to detect some of my kids' stuffed animals, just as a fun project, which they had a blast doing too. And it's just fun times, is all I'm going to say. If you're looking for a fun weekend project, you can entertain yourself with that pretty well.
Speaker: Rob Lauer
There's a really good question from Pratik, who asks: is it possible to address anomalies in real time? And I think, yeah, that's kind of the whole point of what we wanted to talk about today. We showed you two somewhat silly, but hopefully slightly pragmatic, examples here. But ideally, you can think of a scenario like the gauge monitoring project where, if the gauge was measured to be in the high zone, we're simply talking about application logic at that point – in theory, you engage some kind of filter process from there.
So, it's like, yeah, you can start, you know, we're focused more on just the reporting aspect, but you can take this full circle, and really dive in pretty deep with creating these inferences and actually taking action on that data automatically, instead of just alerting. So, there's a lot in the chat I'm trying to look through. And I only answered some of these initially, Louis or TJ, if anything stands out to you. I know.
Sorry, one more thing: there's a question from Greg about changing Notecard motion settings, like from the Notehub API up to the Notecard. So basically, yeah, I mean, it's a good question. I did talk about this bi-directional nature of the Notecard, how you can send data to and effectively receive data from Notehub as well. I don't have a good answer for Greg's question, because he's talking about changing a specific setting on the Notecard through the API. I'm pretty sure we don't have an API for that yet.
We do have this concept of environment variables, which are kind of like cloud variables that get synced with the device. So, you have the capability today to set any kind of custom variables you want, which get downloaded from Notehub, and then you can use them within your application logic to make any setting changes you want, if that makes sense. We do have some docs on this, but I don't want to dive into it right now. Any other big questions that either of you saw?
Speaker: TJ VanToll
Someone was asking if the Edge Impulse models are run in the cloud, but I believe all the inference and such happens locally on the device, which is kind of the cool part of it. So, it can happen super-fast. And offline and everything.
Speaker: Louis Moreau
Yeah, correct. And that's the value of it, because you can actually treat the problem locally and just send a report or send the anomalies. You could run that in the cloud, but the whole value of TinyML – and it's not just Edge Impulse – is the ability to run the models directly on the device.
Speaker: Rob Lauer
Cool. A question from Malini came in about the Notecard providing a means for the host to offload application processing. Not right now – the STM32 chip on the Notecard is not available for end-user usage. It's certainly something we've heard about before, so stay tuned, maybe we'll have something for you at some point. And then there was the question about what the difference is between the Swan, Feather, and ESP32.
So, the Swan – I mentioned it in the chat – is a microcontroller that we build and provide. We actually have a great Edge Impulse tutorial with the Swan. It is Adafruit Feather compatible, so it has the same pin configuration as Adafruit Feathers. It does use an STM32L4 chip, so it's a pretty low-power, but at the same time pretty powerful, chipset. So that's different from the ESP32, which is a totally different chip from a different company.
Alright, I think we're at the top of the hour, so it's probably a good time to cut it off. Thank you again. Thanks, Louis. Thanks, TJ. Thanks, everyone for attending. And again, you'll get a set of resources and a recording here in about a day or so.
Speaker: TJ VanToll
Thanks a lot. Alright. Thanks, everybody.
Recommended Videos
Rapidly Prototyping Environmental Monitoring IoT Devices
We dive into challenges and solutions for adding wireless connectivity. Between cellular, LoRaWAN, and Wi-Fi, there are myriad options and potential pitfalls. Walk away with real advice for building your next IoT device thanks to the experiences of Clean Earth Rovers and LimnoTech!
How to Create IoT Asset Tracking Applications with Drag-and-Drop Tooling
Datacake and Blues Wireless team members showcase an end-to-end technical demonstration of actively tracking assets with GPS and cellular, securely delivering tracking data to the cloud, and building a robust cloud-based reporting application.
Global Asset Tracking with a Cellular Notecard and Datacake
Asset tracking is one of the most common, useful, and pragmatic IoT use cases. Whether simple journey tracking or complex monitoring of global medical supply shipments, it is key to implement an accurate, low-power asset tracking system.
Getting Started with Blues Wireless and the Notecard
Join us to learn the basics of Blues Wireless and the Notecard!
Lightning Fast – From IoT Concept to Pilot With Additive Manufacturing
In this webinar, we discuss innovative ideas that reduce time and risk in hardware product development. We’ll explain how you can get your IoT product to pilot phase much faster and cheaper than your competitors.
From Device to Cloud Dashboard with Cellular IoT
You can build an IoT device with advanced features and capabilities in less than a day with Blues Wireless and Ubidots.