Instrument Your Virtual Instruments! (Part 1)
Since LabVIEW 1.0, engineers and developers have been able to write Virtual Instruments, or VIs. Through this concept of instrumentation, NI's founders wanted to make it easier to configure and use massive, complex hardware cabinets and systems over modern communication buses like VME and GPIB. And they succeeded!
However, hardware is not the only thing that can (and should) be instrumented: software can very well be instrumented too. Nowadays, code instrumentation is an instrumental part of software, and an application that does not expose at least simple logs and metrics is hardly considered professional.
Sadly, most people dislike code instrumentation:
- A salesman will find it hard to sell
- A project manager or a product owner will claim it's not a product feature
- A developer may want to avoid it, as it may turn a pure, elegant, clean program into an inexplicable mass of code
- A customer may get annoyed when application files randomly grow larger, when undesired (log) files appear on his machine, or when logging slows down his app
But (fortunately!) some other people do like it:
- A platform operator will love being able to see when something goes wrong
- A DevOps or SysAdmin will find it immensely easier to have a centralized setup to search for logs and metrics
- A developer can observe trends about how his software behaves over time, and precious information when it fails
See how lunatic a developer can be...
Let's try to convince the first group and see how crucial code instrumentation is.
Beyond Status and Errors
(Code) Instrumentation may be as basic as a logbook and error handling.
Logs are the consequence of an event that occurred at some point, and we generally cannot control how often they appear. A log is a string, and it is also immutable: it cannot be changed or replaced later in time.
There's a great deal of articles on how a good logging system can be implemented and reused with little effort, in any language, and many LabVIEW libraries and tools exist out there to help you with that. I suggest you browse these yourself if you're unaware of them:
- Structured Error Handler: simple and efficient way to centralize error/warning handling
- Actor Framework & DQMH: these are about so much more than handling/processing errors, as they are used as the core architecture of your application. However, these frameworks include great tools to handle your errors and logs.
- MGI Error Reporter: I have personally only used it once, but it is promising! More advanced than the SEH, but also more complete...
Instrumenting your code can also be achieved using metrics. Metrics are the exact opposite of logs: they're a numeric representation of something objective (and usually language agnostic) that can always be measured, and they are often stored in a time-series database. From there, it becomes pretty easy to query the metrics you need and spot the trends or events that may cause issues.
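To make the log/metric contrast concrete, here is a small sketch (in Python, since LabVIEW code can't be shown as text; all the names and values are made up for illustration) of the same failure captured once as a log line and once as a metric sample:

```python
import time

# A log: an immutable, human-readable record of one event.
log_line = (
    f'{time.strftime("%Y-%m-%dT%H:%M:%S")} '
    "ERROR acquisition failed on channel 3"
)

# A metric: a numeric sample (name, labels, value, timestamp) that fits
# naturally into a time-series database and can be queried and graphed.
metric_sample = {
    "name": "acquisition_failures_total",
    "labels": {"channel": "3"},
    "value": 17,
    "timestamp": time.time(),
}
```

The log tells you *what happened once*; the metric, collected repeatedly, tells you *how the number evolves over time*.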
Collecting Metrics
Let's pick a super typical and simple metric: the RAM your process uses.
What's interesting here is that you want to know this metric for all your apps, on all the machines and setups around your lab or site and on all your servers, so it's best to have a reusable, scalable way to store and visualize this data instead of just showing indicators in the app itself.
This is where tools such as Prometheus, InfluxDB or Graphite come in handy. These solutions are server-based suites built around time-series database models.
All these solutions allow for data mining/collecting/gathering (using either a pull or push method), alerting, graphing and more. There are some comparisons around, like this one.
To offer a better understanding of how collecting your processes' RAM usage can be done, here is how Prometheus basically works:
Prometheus pulls data that are exposed through a small piece of code called an exporter. This exporter can be stand-alone, or be integrated inside an application, depending on the needs and the existence of an off-the-shelf exporter. The exporter is responsible for serving an HTTP endpoint callable by Prometheus, and for gathering all the metrics whenever Prometheus requests them (this is usually done every few seconds).
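As an illustration, an exporter can be sketched in a few lines of Python using only the standard library. The metric name, port, and units here are arbitrary choices of mine, not anything Prometheus mandates; the only real contract is the plain-text exposition format served on an HTTP endpoint:

```python
import resource
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics() -> str:
    """Render the process's peak RSS in the Prometheus text format."""
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
    rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return (
        "# HELP app_resident_memory_kilobytes Peak resident memory of this process.\n"
        "# TYPE app_resident_memory_kilobytes gauge\n"
        f"app_resident_memory_kilobytes {rss_kb}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves /metrics; the value is read fresh on every Prometheus scrape."""
    def do_GET(self):
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        body = render_metrics().encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually expose the endpoint, run:
#   HTTPServer(("", 8000), MetricsHandler).serve_forever()
```

In a real project you would more likely reach for an official Prometheus client library (which handles the format, metric registries and more for you); the point here is only how little an exporter fundamentally is.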
And that's about it. Prometheus then stores the collected data in the TSDB and can interact with plugins such as an alert manager, or a powerful data visualization tool like Grafana where you can finally see the RAM used by all your processes at once!
In this process, little to no programming is necessary, so once you have set up Prometheus and its satellite services, parsing, displaying, and querying new metrics can be done in no time. The only custom section is the way you expose your metrics, and how that is done is up to you.
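Once the data is in the TSDB, retrieval is just an HTTP call against Prometheus's v1 HTTP API. As a hypothetical sketch (the server address and metric name are assumptions for illustration), here is how a client could build an instant-query URL:

```python
from urllib.parse import urlencode

def instant_query_url(base: str, query: str) -> str:
    """Build a Prometheus instant-query URL (HTTP API v1)."""
    return f"{base}/api/v1/query?" + urlencode({"query": query})

# Query the gauge exposed by our hypothetical exporter, on an assumed
# Prometheus server address:
url = instant_query_url("http://localhost:9090", "app_resident_memory_kilobytes")
# -> http://localhost:9090/api/v1/query?query=app_resident_memory_kilobytes
```

In practice you rarely write this by hand; Grafana issues these queries for you behind its dashboards.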
Next time, I will help you with this tedious task and present PromVIEW, a Prometheus client library for LabVIEW applications. Keep an eye on TheLabVIEWLab in the coming days!