In the first part of this blog article, I introduced key concepts surrounding data ingestion for the industrial Internet of Things, as well as the role and importance of metrics and self-service capabilities for shop floor personnel. So let's see how this looks in practice, and how the knowledge of a process or control engineer can be turned into action.
In this second part, I describe how simple and quick it is to onboard data from industrial assets using the MQTT protocol, transforming unstructured raw events into metric events and making them available for self-service analytics. A basic setup looks like this:
Connecting to an MQTT Broker
For this exercise, I am using the free HiveMQ MQTT Browser Client. Two temperature readings are published to a topic in a single message using HiveMQ as an MQTT broker. Note that the structure of the message can vary and that it can contain additional measurements, a timestamp or dimensions.
The following settings have been used for the HiveMQ Browser Client Connection:
· Host: broker.mqttdashboard.com
· Port: 8000
· Topic: testsplunk/readings
· Sample Message: temp1: 62.8, temp2: 71.9
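To make the payload format concrete, here is a small sketch of how a hypothetical publisher might assemble the sample message above from its sensor readings. This only builds the string; the actual MQTT publish (e.g. via a client library such as paho-mqtt) is omitted:

```python
# Hypothetical publisher-side formatting of the sample message.
# The readings dict and its values are illustrative.
readings = {"temp1": 62.8, "temp2": 71.9}

# Produce the comma-separated "name: value" payload shown above.
payload = ", ".join(f"{name}: {value}" for name, value in readings.items())
print(payload)  # temp1: 62.8, temp2: 71.9
```

Any client that publishes a string in this shape to the testsplunk/readings topic will produce events like the ones used in the rest of this walkthrough.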
Using the MQTT Modular Input from BaboonBones Ltd., an Add-On for indexing messages from an MQTT broker, I connect to HiveMQ, subscribe to that topic, and ingest the MQTT messages. To use the MQTT Modular Input you first need to obtain an activation key. The following settings have been specified for the input:
Activation Key = <your key>

Output Settings
Data Output = STDOUT

Connection Settings
Topic Name = testsplunk/readings
Broker Host = broker.mqttdashboard.com
Broker Port = 1883

Source type
Set sourcetype = manual
Source type = <your custom sourcetype>

Index
Index = <your index>
As soon as the sample message above is ingested into an event index it will look as follows:
Converting MQTT Messages to Splunk Metrics
To use our temperature readings for analytics, we first need to extract fields from the raw payload, even though it initially lands in an event index. To store the readings in a metrics index, we apply a log-to-metrics conversion. For that purpose, the following two lines have to be added to the sourcetype's stanza in props.conf:
…
TRANSFORMS-extmsg = extract_message
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_metrics
The TRANSFORMS setting references the field extractions needed to pull out the temperature readings. The METRIC-SCHEMA-TRANSFORMS setting associates the log-to-metrics schema. The required stanzas in transforms.conf are:
[extract_message]
REGEX = temp1: (?<temp1>[^;]+), temp2: (?<temp2>[^;]+)
FORMAT = temperature1::$1 temperature2::$2
WRITE_META = true

[metric-schema:extract_metrics]
METRIC-SCHEMA-MEASURES = _ALLNUMS_
The METRIC-SCHEMA-MEASURES setting identifies which numeric fields in the events become measures in the corresponding metric data points. The _ALLNUMS_ value treats all numeric fields in the event as measures. See Log-to-metrics configuration in the Splunk documentation for details.
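To sanity-check what the REGEX in transforms.conf captures, here is a standalone Python mirror of the extraction (illustrative only; this is not how Splunk applies the transform internally). Note that Python's regex engine spells named groups (?P<name>...), while the Splunk/PCRE form above uses (?<name>...):

```python
import re

# Python mirror of the transforms.conf REGEX; group syntax adapted for re.
PATTERN = re.compile(r"temp1: (?P<temp1>[^;]+), temp2: (?P<temp2>[^;]+)")

raw = "temp1: 62.8, temp2: 71.9"
match = PATTERN.search(raw)

# Rename the captures the way FORMAT does (temperature1::$1 temperature2::$2).
fields = {
    "temperature1": match.group("temp1"),
    "temperature2": match.group("temp2"),
}
print(fields)  # {'temperature1': '62.8', 'temperature2': '71.9'}
```

Running this against the sample message confirms that both readings are captured cleanly, which is exactly what the log-to-metrics schema then turns into measures.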
Currently, the inputs.conf generated by the MQTT Modular Input must be edited manually to point at the target metrics index, as only event indexes appear in the drop-down list on the configuration page.
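In practice, the manual change amounts to overriding the index setting in the generated input stanza. The stanza name and surrounding keys below are illustrative and may differ by Add-On version; only the index line is the point:

```
# inputs.conf -- stanza name is illustrative; keep the settings
# the Add-On generated and change only the index value
[mqtt://readings]
…
index = <your metrics index>
```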
Using Analytics Workspace for Self-Service Analytics
Now, switching to the Analytics tab within the Search & Reporting app brings up the Analytics Workspace directly. It derives a list of all metrics the logged-in user has access to and generates a tree on the left that reflects a hierarchical representation of the metrics, which can be used for navigation. Another useful function: role-based access control (RBAC) can restrict access to metrics by applying permissions on metric indexes.
We are now ready to start our self-service analytics by bringing metrics into the workspace and combining them. By the way, in this example there is a direct relationship between temperature1 and temperature2: a decrease in temp2 leads to an increase in temp1. But this is just an arbitrary example.
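Outside the workspace, the same metrics can also be queried directly with the mstats search command. A sketch (the index name is illustrative, and the exact mstats syntax supported depends on your Splunk version):

```
| mstats avg(temperature1) AS avg_temp1 avg(temperature2) AS avg_temp2
    WHERE index=<your metrics index> span=1m
```

A search like this returns per-minute averages of both readings, which is handy for alerting or dashboards beyond the interactive workspace.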
What’s next? Ready for Predictive Maintenance?
A growing number of customers extend the use of Splunk from IT to IoT by leveraging their existing investment in Splunk hardware, infrastructure, and people. IT can continue to focus on running the environment and onboarding data, so that shop floor personnel stay focused on manufacturing uptime and availability. But it is not just about the platform and analytical capabilities: organizations may also benefit from a reduced total cost of ownership (TCO) if extending Splunk makes other costly IoT and/or analytics products unnecessary.
Turning our engineers’ knowledge into actions is just the first step where Splunk can help. Enrich your metrics, process and sensor data with any other source in manufacturing and start turning all your data into doing. Find out if predictive maintenance is possible by looking at “Can I even do Predictive Maintenance” or by downloading the Splunk Essentials for Predictive Maintenance app.
Thanks and until next time,