<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Hardie Gras]]></title><description><![CDATA[Hardie Gras]]></description><link>https://blog.hardiegras.myds.me/</link><image><url>https://blog.hardiegras.myds.me/favicon.png</url><title>Hardie Gras</title><link>https://blog.hardiegras.myds.me/</link></image><generator>Ghost 3.0</generator><lastBuildDate>Mon, 11 May 2026 10:54:43 GMT</lastBuildDate><atom:link href="https://blog.hardiegras.myds.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Run your Node application on a Raspberry Pi as a service]]></title><description><![CDATA[<p>If you reboot your Raspberry Pi, and you want your Node application to be executed on startup, you can install it as a service and specify that it should always run when restarted.</p><p>Create a <code>.service</code> file under <code>/etc/systemd/system</code>. For my outlet service, I created <code>/etc/systemd/system/</code></p>]]></description><link>https://blog.hardiegras.myds.me/run-your-node-application-on-a-raspberry-pi-as-a-service/</link><guid isPermaLink="false">5f10ae22af20910001ee6ca8</guid><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Thu, 16 Jul 2020 19:59:17 GMT</pubDate><content:encoded><![CDATA[<p>If you reboot your Raspberry Pi, and you want your Node application to be executed on startup, you can install it as a service and specify that it should always run when restarted.</p><p>Create a <code>.service</code> file under <code>/etc/systemd/system</code>. For my outlet service, I created <code>/etc/systemd/system/outlet.service</code> and added the following:</p><pre><code>[Service]
WorkingDirectory=/home/pi/Documents/Repos/outlet-device
ExecStart=/usr/local/bin/npm start
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=outlet
User=root
Group=root
Environment=NODE_ENV=production

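# Optional addition (not in the original post): a [Unit] section that
# describes the service and delays startup until networking is up
[Unit]
Description=Outlet Node.js application
After=network.target
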
[Install]
WantedBy=multi-user.target</code></pre><h3 id="helpful-commands">Helpful commands</h3><p><code>sudo systemctl enable outlet</code> - enables the service so that it starts automatically on boot</p><p><code>sudo systemctl disable outlet</code> - disables the service, preventing it from starting on reboot</p><p><code>sudo systemctl start outlet</code> - starts the service immediately</p><p><code>sudo systemctl stop outlet</code> - stops the service, but the service will restart on reboot if it is still enabled</p>]]></content:encoded></item><item><title><![CDATA[Putting it all together: Building a simple Home Automation IoT platform with IoT Hub and SignalR]]></title><description><![CDATA[<p>In a <a href="https://blog.hardiegras.myds.me/connecting-your-device-to-the-azure-iot-hub/">previous post</a> we discussed how we could connect an IoT device to an IoT Hub in Azure. In <a href="https://blog.hardiegras.myds.me/real-time-communication-with-signalr-service-and-node-js-azure-functions/">another previous post</a>, we walked through setting up a real-time communication solution with SignalR Service.</p><p>This post is going to build on those to demonstrate how we can build a Home</p>]]></description><link>https://blog.hardiegras.myds.me/reading-and-publishing-to-iot-hub-using-azure-functions/</link><guid isPermaLink="false">5e388b1c5108db0001c62fa1</guid><category><![CDATA[raspberrypi]]></category><category><![CDATA[Azure Functions]]></category><category><![CDATA[iot]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[signalr]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Tue, 18 Feb 2020 20:25:02 GMT</pubDate><content:encoded><![CDATA[<p>In a <a href="https://blog.hardiegras.myds.me/connecting-your-device-to-the-azure-iot-hub/">previous post</a> we discussed how we could connect an IoT device to an IoT Hub in Azure. 
In <a href="https://blog.hardiegras.myds.me/real-time-communication-with-signalr-service-and-node-js-azure-functions/">another previous post</a>, we walked through setting up a real-time communication solution with SignalR Service.</p><p>This post is going to build on those to demonstrate how we can build a Home Automation IoT platform that will allow us to view telemetry data from our devices and also manage our devices from a webpage or mobile application.</p><p>At the end of this post, we will have the following solution:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-13.png" class="kg-image"></figure><h2 id="azure-functions-for-message-ingress-and-egress">Azure Functions for message ingress and egress</h2><p>At this point, we should have a pretty good handle on the IoT Hub and SignalR Service. What we need to do is get them talking to each other. The easiest way to do this is with Azure Functions. We can use the same Azure Functions project <a href="https://blog.hardiegras.myds.me/real-time-communication-with-signalr-service-and-node-js-azure-functions/#create-your-first-azure-function-in-vs-code">we set up previously</a>.</p><p>We are going to need functions for two purposes:</p><ul><li>cloud-to-device messaging - these are messages to configure our device or put it into a particular state</li><li>device-to-cloud messaging - these are the telemetry messages we want our mobile and web clients to display</li></ul><h2 id="add-local-settings">Add local settings</h2><p>We'll need to add a connection string to our IoT Hub Event Hub-compatible endpoint. Open your IoT Hub in the Azure portal and open the "Built-in endpoints" menu item. 
This will expose the endpoint we will connect our reader function to.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-14.png" class="kg-image"></figure><p>We will also need a connection string to the IoT Hub device registry so we can direct messages to our device. We need a policy that has both <code>registry write</code> and <code>service connect</code> permissions. I'm going to use the <code>iothubowner</code> policy, but as a best practice a new policy should be created with just those permissions.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-20.png" class="kg-image"></figure><p>Open <code>local.settings.json</code> and add the following to the <code>Values</code> property:</p><!--kg-card-begin: markdown--><pre><code class="language-json">&quot;Values&quot;: {
  ...
  &quot;IoTHubConnectionString&quot;: &quot;Endpoint=sb://...&quot;,
  &quot;IoTHubRegistryConnectionString&quot;: &quot;HostName=...&quot;
}
</code></pre>
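<p>These entries surface as environment variables when the Functions host runs. As a small sketch (my addition - the helper and its name are not part of the original project), we can fail fast when a setting is missing:</p><pre><code class="language-ts">// local.settings.json "Values" are exposed through process.env at runtime
function getRequiredSetting(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not configured`);
  }
  return value;
}

// usage sketch:
// const registryConnectionString = getRequiredSetting("IoTHubRegistryConnectionString");
</code></pre>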
<!--kg-card-end: markdown--><h2 id="device-to-cloud-messaging">Device-to-cloud messaging</h2><p>For this flow, we are going to create an Azure Function that will read messages off our IoT Hub, massage them a bit and then send them on to SignalR Service.</p><p><a href="https://blog.hardiegras.myds.me/real-time-communication-with-signalr-service-and-node-js-azure-functions/#create-your-first-azure-function-in-vs-code">As done previously</a>, we'll create a new Azure Function but instead of using the HTTP Trigger template, we are going to use the Azure Event Hub trigger template. IoT Hub and Event Hub share the same technology for message ingress, so this trigger works with IoT Hub as well. Name the function <code>deviceToCloud</code>.</p><h3 id="edit-devicetocloud-s-function-json">Edit deviceToCloud's function.json</h3><p>We are going to declaratively configure a couple of bindings. The <code>eventHubTrigger</code> binding will already be present; just ensure it contains the <code>"connection": "IoTHubConnectionString"</code> entry so the function can connect to the IoT Hub endpoint we specified in our <code>local.settings.json</code>.</p><p>We will add a second output binding to push messages to SignalR Service, specifying the <code>chat</code> hub.</p><!--kg-card-begin: markdown--><pre><code class="language-json">{
  &quot;bindings&quot;: [
    {
      &quot;type&quot;: &quot;eventHubTrigger&quot;,
      &quot;name&quot;: &quot;eventHubMessages&quot;,
      &quot;direction&quot;: &quot;in&quot;,
      &quot;eventHubName&quot;: &quot;ServerlessIoTHub&quot;,
      &quot;connection&quot;: &quot;IoTHubConnectionString&quot;,
      &quot;cardinality&quot;: &quot;many&quot;,
      &quot;consumerGroup&quot;: &quot;$Default&quot;
    },
    {
      &quot;type&quot;: &quot;signalR&quot;,
      &quot;name&quot;: &quot;signalRMessages&quot;,
      &quot;hubName&quot;: &quot;chat&quot;,
      &quot;direction&quot;: &quot;out&quot;
    }
  ],
  &quot;scriptFile&quot;: &quot;../dist/deviceToCloud/index.js&quot;
}
</code></pre>
<!--kg-card-end: markdown--><h3 id="edit-devicetocloud-s-index-ts">Edit deviceToCloud's index.ts</h3><p>We need to write some code that will take each message and interrogate it for the id of the device that published it. We will then push that message along to SignalR Service using the device id as the group name.</p><p>Any client interested in observing a device's telemetry will simply need to join the SignalR group based on its id. Then SignalR will invoke the client's local handler and pass in the message (in this case, we expect the client to have a handler registered for <code>handleMessage</code>).</p><pre><code class="language-ts">import { AzureFunction, Context } from "@azure/functions";

interface DeviceMessage {
  deviceId: string;
  [key: string]: any;
}

const eventHubTrigger: AzureFunction = async function(
  context: Context,
  eventHubMessages: DeviceMessage[]
): Promise&lt;void&gt; {
  context.log(
    `Eventhub trigger function called with ${eventHubMessages.length} message(s)`
  );

  // Build one SignalR message per event. Assigning to the output binding
  // inside a forEach would overwrite it on every iteration and drop all but
  // the last message, so build the whole array in a single pass.
  context.bindings.signalRMessages = eventHubMessages.map(message =&gt; ({
    // each message is only sent to the group named after its device id
    groupName: message.deviceId,
    target: "handleMessage",
    arguments: [message]
  }));
};

export default eventHubTrigger;
</code></pre><h2 id="cloud-to-device-messaging">Cloud-to-device messaging</h2><p>For this flow, we are going to create an Azure Function that will take a message that includes the id of the device of interest and push it into the IoT Hub, which will route it to that device. We are going to leverage Device Twin Desired Properties <a href="https://blog.hardiegras.myds.me/connecting-your-device-to-the-azure-iot-hub/#device-twin-desired-properties">as we have done previously</a> to manage our devices' state.</p><p>Also <a href="https://blog.hardiegras.myds.me/real-time-communication-with-signalr-service-and-node-js-azure-functions/#create-your-first-azure-function-in-vs-code">as done previously</a>, we'll create a new Azure Function using the HTTP Trigger template. Name it <code>cloudToDevice</code>.</p><h3 id="edit-cloudtodevice-s-function-json">Edit cloudToDevice's function.json</h3><p>Just remove the <code>get</code> from the list of acceptable methods:</p><pre><code class="language-json">"methods": [
  "post"
 ]</code></pre><h3 id="edit-cloudtodevice-s-index-ts">Edit cloudToDevice's index.ts</h3><p>Because there is no Device Twin binding, we are going to have to write a fair bit of code. We need to:</p><ul><li>retrieve the id of the device we want to send a message to from the querystring</li><li>connect to our IoT Hub device registry</li><li>get our device twin from the registry</li><li>patch our device twin's desired property based on the incoming message</li></ul><pre><code class="language-ts">import { AzureFunction, Context, HttpRequest } from "@azure/functions";
import { Registry, Twin } from "azure-iothub";

const httpTrigger: AzureFunction = async function(
  context: Context,
  req: HttpRequest
): Promise&lt;void&gt; {
  const deviceId = req.query.deviceId;

  if (!deviceId) {
    context.res = {
      status: 400,
      body: "No device id in the request"
    };
    return;
  }

  const registry = Registry.fromConnectionString(process.env.IoTHubRegistryConnectionString);

  // Using the callback form of getTwin is not possible due to the need
  // to await so that I can call context.log before the function returns
  const getTwinResponse = await registry.getTwin(deviceId)
    .catch(getTwinErr =&gt; {
      context.res = {
        status: 400,
        body: `Could not retrieve device twin for device with Id ${deviceId}`
      };
      context.log(getTwinErr.message);
      throw getTwinErr;
    });

  // responseBody is presently typed incorrectly as HttpResponse&lt;any&gt;,
  // forcing this cast
  const twin = getTwinResponse.responseBody as Twin;
  const propertyPatch = {
    properties: {
      desired: req.body
    }
  };
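  // e.g. a request body of { "status": "on" } produces the patch
  // { properties: { desired: { status: "on" } } }, which IoT Hub merges
  // into the twin's desired properties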
  
  // Need to await so that I can call context.log before function returns
  await twin.update(propertyPatch)
    .catch(updateTwinErr =&gt; {
      // inside .catch an error is guaranteed, so no need to test for it
      context.res = {
        status: 400,
        body: `Could not update device twin for device with Id ${deviceId}`
      };
      context.log(updateTwinErr);
    });
};

export default httpTrigger;</code></pre><p>At this point we should be able to trigger this Azure Function to update our device's status.</p><blockquote>Note: this obviously has not been secured in any way. Clients generally should not have direct access like this to your IoT Hub. One possible security improvement would be to create a proxy that has access to a secret it can forward to this Azure Function, and users could authenticate against the proxy and use it instead. This will be the subject of a future post.</blockquote><p>After hitting <code>F5</code> to start debugging our functions, we can POST the following:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-18.png" class="kg-image"></figure><p>Execute that, visit the device twin in the Azure portal, and we can see that our device's desired properties - and reported properties - have been updated:</p><pre><code class="language-json">"properties": {
    "desired": {
      "status": "on",
      "$metadata": {
        "$lastUpdated": "2020-02-14T20:18:53.4911471Z",
        "$lastUpdatedVersion": 71,
        "status": {
          "$lastUpdated": "2020-02-14T20:18:53.4911471Z",
          "$lastUpdatedVersion": 71
        }
      },
      "$version": 71
    },
    "reported": {
      "status": "on",
      "$metadata": {
        "$lastUpdated": "2020-02-14T20:18:53.6069387Z",
        "status": {
          "$lastUpdated": "2020-02-14T20:18:53.6069387Z"
        }
      },
      "$version": 72
    }</code></pre><h2 id="coding-our-client-webpage">Coding our client webpage</h2><p>We are going to code a very simple webpage that will print out telemetry messages from a device with the id of <code>test-device</code>, the same one used in a previous post. We will also change its state as we did before with a web request.</p><p>I'll include the source of the webpage in its entirety at the end of this post until I get everything moved over to Github.</p><h3 id="handling-incoming-device-to-cloud-message">Handling incoming device-to-cloud message</h3><p>This hasn't changed since <a href="https://blog.hardiegras.myds.me/real-time-communication-with-signalr-service-and-node-js-azure-functions#building-our-client-web-page">the previous post</a> where we set up our SignalR Service. We receive the message, stringify it and then render it onto the page:</p><pre><code class="language-js">function handleMessage(message) {
  document.querySelector("#log").innerHTML = `&lt;div&gt;${JSON.stringify(
    message
  )}&lt;/div&gt;`;
}</code></pre><h3 id="sending-cloud-to-device-message">Sending cloud-to-device message</h3><p>Instead of broadcasting a message to a SignalR Service group, we are going to instead send a message via our new <code>cloudToDevice</code> Azure Function.</p><pre><code class="language-js">function sendMessage() {
  const newStatus = document.querySelector("#isOn").value;

  const payload = { status: newStatus };

  axios.post(`${apiBaseUrl}/api/cloudToDevice?deviceId=${deviceId}`, payload);
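  // Equivalent request from a terminal (hedged sketch; localhost:7071 matches
  // the local Functions host used throughout this post):
  //   curl -X POST "http://localhost:7071/api/cloudToDevice?deviceId=test-device" \
  //     -H "Content-Type: application/json" -d '{"status":"on"}'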
}</code></pre><h3 id="have-i-mentioned-i-am-not-a-designer">Have I mentioned I am not a designer?</h3><p>Here is the rudimentary UI I have built that allows a user to subscribe to <code>test-device</code>'s telemetry stream and also set its status:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-22.png" class="kg-image"></figure><p>Once I click the checkbox to subscribe, telemetry will start rendering in real-time on the page:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-23.png" class="kg-image"></figure><p>If I select a status in my dropdown and hit the Send button, my Azure Function will forward the message to my IoT Hub.</p><blockquote>To reiterate, this is not secure and you shouldn't provide your client direct access to your IoT Hub like this.</blockquote><p>As you hit the send button, you should see the <code>test-device</code> device twin update its desired and reported properties:</p><pre><code class="language-json">{
  "deviceId": "test-device",
  "tags": {
    "kind": "outlet"
  },
  "properties": {
    "desired": {
      "status": "on",
      ...
    },
    "reported": {
      "status": "on",
      ...
 }</code></pre><h2 id="wrapping-up">Wrapping up</h2><p>We now have a simple but super cool platform that lets us provision devices, publish telemetry, receive updates in real-time, and manage our devices from our computer or phone.</p><p>In a future post I will go through the exercise of deploying this code into Azure so that it will be always on and available for use. I'll also upload all the relevant code to Github. Until then, here is the full source of the client webpage:</p><pre><code class="language-html">&lt;html style="font-size: 20px;padding: 20px;"&gt;
  &lt;body&gt;
    &lt;div style="float: left; margin-right: 80px"&gt;
      &lt;div&gt;
        &lt;input type="checkbox" id="subscribe" name="subscribe" &gt;
        &lt;label for="subscribe"&gt;Subscribe to test-device telemetry&lt;/label&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div style="float: left;margin-bottom: 50px; border-left: 1px solid #ccc; padding-left: 20px;"&gt;
      &lt;div&gt;
        &lt;label for="isOn"&gt;Set device status&lt;/label&gt;&lt;br /&gt;
        &lt;select id="isOn"&gt;
          &lt;option value="on"&gt;On&lt;/option&gt;
          &lt;option value="off"&gt;Off&lt;/option&gt;
        &lt;/select&gt; 
        &lt;button id="send"&gt;Send&lt;/button&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;hr style="clear:both;" /&gt;
    &lt;h3&gt;Telemetry data&lt;/h3&gt;
    &lt;div id="log" style="margin-top: 50px"&gt;No telemetry received&lt;/div&gt;
    &lt;script src="https://cdn.jsdelivr.net/npm/@aspnet/signalr@1.1.2/dist/browser/signalr.js"&gt;&lt;/script&gt;
    &lt;script src="https://cdn.jsdelivr.net/npm/axios@0.18.0/dist/axios.min.js"&gt;&lt;/script&gt;
    &lt;script&gt;
      const username = new URLSearchParams(window.location.search).get("username");
      var apiBaseUrl = "http://localhost:7071";
      const deviceId = "test-device";

      function changeGroup(event) {
        if(event.target.checked) {
          axios.post(
            `${apiBaseUrl}/api/joinGroup?userId=${username}&amp;groupName=${deviceId}`
          );
        }
        else {
          axios.post(
            `${apiBaseUrl}/api/leaveGroup?userId=${username}&amp;groupName=${deviceId}`
          );
        }
      }

      function sendMessage() {
        const newStatus = document.querySelector("#isOn").value;

        const payload = { status: newStatus };

        axios.post(`${apiBaseUrl}/api/cloudToDevice?deviceId=${deviceId}`, payload);
      }

      function handleMessage(message) {
        document.querySelector("#log").innerHTML = `&lt;div&gt;${JSON.stringify(
          message
        )}&lt;/div&gt;`;
      }

      document.querySelector("#subscribe").onclick = changeGroup;
      document.querySelector("#send").onclick = sendMessage;

      var connection = new signalR.HubConnectionBuilder()
        .withUrl(`${apiBaseUrl}/api/${username}`)
        .configureLogging(signalR.LogLevel.Information)
        .build();
      
      connection.on("handleMessage", handleMessage);
      
      connection
        .start()
        .catch(console.error);
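      // Hedged addition (not in the original post): @aspnet/signalr 1.x does
      // not reconnect automatically, so retry a few seconds after a drop
      connection.onclose(() =&gt; {
        setTimeout(() =&gt; connection.start().catch(console.error), 5000);
      });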
    &lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre>]]></content:encoded></item><item><title><![CDATA[Machine Learning on the Raspberry Pi with Tensorflow.js]]></title><description><![CDATA[<p>Tensorflow is Google's open-source machine learning platform. It's pretty impressive, even for a machine learning layman like myself. It can be used for classifying and locating objects in an image, determining how toxic a message is, recognizing sounds, predicting probabilities of future events, and way more.</p><p>Tensorflow has been ported</p>]]></description><link>https://blog.hardiegras.myds.me/machine-learning-with-tensorflow-js/</link><guid isPermaLink="false">5e4b0255005870000178aebc</guid><category><![CDATA[raspberrypi]]></category><category><![CDATA[Tensorflow]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Mon, 17 Feb 2020 22:29:53 GMT</pubDate><content:encoded><![CDATA[<p>Tensorflow is Google's open-source machine learning platform. It's pretty impressive, even for a machine learning layman like myself. It can be used for classifying and locating objects in an image, determining how toxic a message is, recognizing sounds, predicting probabilities of future events, and way more.</p><p>Tensorflow has been ported over as a Javascript library that is compatible with both browser and Node.js runtimes. I got image classification and object detection up and running with a few hurdles I'll call out in this post; hopefully these will get smoothed out over time.</p><h2 id="image-classification">Image classification</h2><p>Image classification is where we provide Tensorflow with an image, and it will indicate what class it believes the object represents.</p><h2 id="object-detection">Object detection</h2><p>Object detection is where we provide Tensorflow with an image with possibly multiple objects, and it will indicate whether an object of a particular class has been detected. 
Some machine learning models will also provide a bounding box to show where each object is in the image.</p><h2 id="wait-what-s-a-model">Wait, what's a model?</h2><p>A machine learning model is a set of assumptions used to make predictions about data. The models I have seen in my limited experience are trained by providing them large amounts of data and then having humans tell them what that data means. This is an example of supervised learning, where the model is trained on data where both the inputs and outputs are supplied.</p><p>Unsupervised learning involves providing inputs but no outputs; the model has to identify patterns on its own. I have no idea how this works :)</p><h3 id="pre-existing-models">Pre-existing models</h3><p>What's really cool about this concept of a model is that once a model has been trained, it can be distributed to run on other Tensorflow instances. For instance, Google has created a <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md">detection model zoo</a> where anyone can grab a copy of a model and run it without going through the laborious effort to train one themselves.</p><p>Those models are compatible with Tensorflow, which is written in Python, but if we want to stick with Javascript, there have been <a href="https://github.com/tensorflow/tfjs-models">some models ported over</a> to Tensorflow.js.</p><h2 id="setting-up">Setting up</h2><ul><li>a Raspberry Pi 3 with at least Raspbian Buster updated and upgraded</li><li>a Raspberry Pi camera</li><li>Node.js v10 - I had issues with Node v12, and didn't attempt a previous version of Node.js</li></ul><h2 id="create-your-project">Create your project</h2><p>On your Raspberry Pi, create a new folder called <code>camera-test</code>. 
<code>cd</code> into it and run <code>npm init</code> and use all the defaults.</p><p>Use the following for your <code>package.json</code> and run <code>npm install</code>:</p><pre><code class="language-json">{
  "name": "camera-test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;&amp; exit 1",
    "start": "ts-node index.ts --skipLibCheck true"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@types/node": "^13.7.1",
    "ts-node": "^8.6.2",
    "typescript": "^3.7.5"
  },
  "dependencies": {
    "@tensorflow-models/coco-ssd": "^2.0.1",
    "@tensorflow-models/mobilenet": "^2.0.4",
    "@tensorflow/tfjs-node": "1.2.11"
  }
}
</code></pre><p>Note the pinned version for <code>@tensorflow/tfjs-node</code> - there is a more recent version, but on the Raspberry Pi I wasn't able to successfully build a binding with it, a task we'll get to shortly.</p><p>After you have installed the above packages, we need to manually build the Tensorflow Node.js binding. This is a Raspberry Pi-only step; I'm unsure why it is necessary.</p><p><code>npm rebuild @tensorflow/tfjs-node --build-from-source</code></p><p>Finally, if you're using Typescript like I am, you'll need to adjust your <code>tsconfig.json</code> to include the <code>skipLibCheck: true</code> compiler option.</p><h2 id="predictive-analysis-script">Predictive analysis script</h2><p>I'm going to post the entire script to capture a still image from my Raspberry Pi camera, and then have it processed by both an image classifier model and an object detection model.</p><pre><code>import * as tf from "@tensorflow/tfjs-node";
import * as fs from "fs";
import * as mobilenet from "@tensorflow-models/mobilenet";
import * as cocoSsd from "@tensorflow-models/coco-ssd";
import { spawn } from "child_process";

var filename = "./capture.jpg";
// Set camera arguments
var args = [ "-hf", "-vf", "-w", "2592", "-h","1944", "-o", filename, "-t", "1" ];
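// (-hf/-vf flip the image horizontally/vertically, -w/-h set the capture
// resolution, -o names the output file, -t 1 waits ~1 ms before capturing)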

var spawned = spawn("raspistill", args);

spawned.on("exit", function() {
  fs.readFile(filename, async (err, data) =&gt; {
    if (err) {
      console.error(err);
    }

    const imgTensor = tf.node.decodeJpeg(data);

    const mobileNetmodel = await mobilenet.load();
    const mobilenetPredictions = await mobileNetmodel.classify(imgTensor);

    console.log("Mobilenet predictions: ");
    console.log(mobilenetPredictions);

    const cocoModel = await cocoSsd.load();
    const cocoPredictions = await cocoModel.detect(imgTensor);

    console.log("Coco predictions: ");
    console.log(cocoPredictions);
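
    // Extra sketch (my addition, not in the original post): keep only
    // higher-confidence detections from the coco-ssd results
    const confident = cocoPredictions.filter(p =&gt; p.score &gt; 0.5);
    console.log(`Confident classes: ${confident.map(p =&gt; p.class).join(", ")}`);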
  });
});
</code></pre><p>I didn't have much success with the <code>mobilenet</code> model - it once accused me of being a sofa. Now, I believe the Javascript port is not the most recent version, and also this mobile-optimized library sacrifices accuracy for the sake of speed. Things may improve with time.</p><p>However, I was pretty pleased with how well the <code>coco-ssd</code> model ran. It detected various members of my family and pets in images and applied a class and bounding box where the object was detected.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-21.png" class="kg-image"></figure><p>For the above photo, <code>coco-ssd</code> produced the following:</p><pre><code class="language-json">Coco predictions:
[ { bbox:
     [ 852.9707651138306,
       449.7543740272522,
       953.0751829147339,
       1483.4483785629272 ],
    class: 'cat',
    score: 0.9699275493621826 },
  { bbox:
     [ 32.730048179626465,
       286.1092700958252,
       1323.1229524612427,
       1671.2676229476929 ],
    class: 'person',
    score: 0.7677765607833862 } ]</code></pre><p>Without a whole lot of code required, my little Raspberry Pi can now scan images and identify detections with a classification, bounding box and confidence scores. When I eventually get around to building a wildlife camera, I'm pretty much there!</p>]]></content:encoded></item><item><title><![CDATA[Real-time communication with SignalR Service and Node.js Azure Functions]]></title><description><![CDATA[<p>In the course of building my IoT Home Automation platform, I needed something to facilitate real-time communication between my IoT devices and my clients (web pages, React Native web apps, etc...). I went with SignalR's PAAS offering in Azure - <a href="https://dotnet.microsoft.com/apps/aspnet/signalr">SignalR Service</a>.</p><p>I've played with SignalR off and on since</p>]]></description><link>https://blog.hardiegras.myds.me/real-time-communication-with-signalr-service-and-node-js-azure-functions/</link><guid isPermaLink="false">5e41aaaea2caa40001e26c95</guid><category><![CDATA[azure]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[Typescript]]></category><category><![CDATA[signalr]]></category><category><![CDATA[Azure Functions]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Tue, 11 Feb 2020 18:08:16 GMT</pubDate><content:encoded><![CDATA[<p>In the course of building my IoT Home Automation platform, I needed something to facilitate real-time communication between my IoT devices and my clients (web pages, React Native web apps, etc...). I went with SignalR's PAAS offering in Azure - <a href="https://dotnet.microsoft.com/apps/aspnet/signalr">SignalR Service</a>.</p><p>I've played with SignalR off and on since it was released, and in the course of building apps with it, it became apparent that I would end up writing near-identical functions over and over. Users need to connect. Users need to enrol in groups. 
Users need to send messages.</p><p>What's cool about SignalR Service is there is a serverless option that means next to no setup necessary, no infrastructure to worry about, and the repetitive back-end code I used to have to write has been replaced with terse <a href="https://azure.microsoft.com/en-ca/blog/introducing-azure-functions/">Azure Functions</a>. Azure Functions are another serverless PAAS offering that lets you run code without worrying about infrastructure. You can write them in .Net, Python, Java, Javascript - or in my case - Typescript!</p><h2 id="pre-requisites">Pre-requisites</h2><ul><li><a href="https://code.visualstudio.com/">VS Code</a> - Microsoft's lightweight IDE</li><li><a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions">Azure Functions for Visual Studio Code Extension</a> - lets you create, manage and deploy Azure Functions from within VS Code</li><li><a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=windows">Azure Functions Core Tools</a> - <code>npm install -g azure-functions-core-tools</code></li><li>An Azure subscription</li></ul><h2 id="create-the-signalrr-service-resource">Create the SignalR Service resource</h2><p>Bring up the Azure Portal and find the SignalR Service. Specify your subscription and resource group, choose the Free pricing tier, and select the <code>Serverless</code> option for <code>ServiceMode</code>.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image.png" class="kg-image"></figure><p>The Free tier is more than enough for my needs. It provides a single unit, which allows for 20 concurrent connections and 20,000 messages per day. Note that I will not be connecting IoT devices to SignalR Service, only my clients, which will only be a handful. 
(I'll go into detail about the end-to-end flow of messages from devices to clients via SignalR in a future post.)</p><p>Once the deployment is complete, take note of the keys and connection strings that have been generated; our Azure Functions will need them.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-1.png" class="kg-image"></figure><h2 id="create-your-first-azure-function-in-vs-code">Create your first Azure Function in VS Code</h2><p>After you have installed the Azure Functions extension mentioned above, a new Azure menu item in the sidenav will appear. Click on it, and then click on the Create New Project button at the top:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-2.png" class="kg-image"></figure><p>Select the folder to create the new project in. For my language choice, I've selected Typescript. Choose to make an <code>HTTP Trigger</code> function.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-3.png" class="kg-image"></figure><p>Name it <code>negotiate</code> - this name is important as it will be used for an endpoint name that SignalR Service will consume based on convention.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-4.png" class="kg-image"></figure><p>For Authorization Level, choose <code>Anonymous</code>.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-5.png" class="kg-image"></figure><p>Once that's done, a new Azure Function will be added to the project with boilerplate code. 
The project view shows a high-level overview of your functions and their bindings, which we'll discuss shortly.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-6.png" class="kg-image"></figure><p>Click on the Explorer button in the sidenav and you can see a number of things have been created on disk.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-7.png" class="kg-image"></figure><h3 id="manage-settings-with-local-settings-json">Manage settings with local.settings.json</h3><p>While developing locally, settings will be read in from <code>local.settings.json</code>. We'll need to include the SignalR Service connection string we captured from the portal, and we'll also need to configure CORS, as our webpage will end up on a different domain than our Azure Functions.</p><!--kg-card-begin: markdown--><pre><code class="language-json">{
  &quot;IsEncrypted&quot;: false,
  &quot;Values&quot;: {
    &quot;AzureSignalRConnectionString&quot;: &quot;&lt;ENTER CONN STRING HERE&gt;&quot;,
    &quot;FUNCTIONS_WORKER_RUNTIME&quot;: &quot;node&quot;
  },
  &quot;Host&quot;: {
    &quot;LocalHttpPort&quot;: 7071,
    &quot;CORS&quot;: &quot;http://localhost:8080&quot;,
    &quot;CORSCredentials&quot;: true
  }
}
</code></pre>
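<p>While running locally, each entry under <code>Values</code> is surfaced to your functions as an environment variable, so the connection string above is reachable as <code>process.env.AzureSignalRConnectionString</code>. As a quick sanity check, here is a sketch (with a hypothetical <code>parseConnectionString</code> helper and a made-up connection string) of what that value looks like when pulled apart:</p>

```javascript
// Values entries in local.settings.json become environment variables
// when the Functions host runs locally, e.g.:
//   const connStr = process.env.AzureSignalRConnectionString;
// A SignalR Service connection string is a set of semicolon-delimited
// key=value pairs. This hypothetical helper splits them apart so you
// can sanity-check the Endpoint and AccessKey before wiring up bindings.
function parseConnectionString(connStr) {
  return Object.fromEntries(
    connStr
      .split(";")
      .filter(Boolean)
      .map((pair) => {
        const idx = pair.indexOf("=");
        return [pair.slice(0, idx), pair.slice(idx + 1)];
      })
  );
}

// Made-up example value for illustration only:
const parts = parseConnectionString(
  "Endpoint=https://demo.service.signalr.net;AccessKey=abc123;Version=1.0;"
);
console.log(parts.Endpoint); // https://demo.service.signalr.net
```

<p>The real value from the portal follows the same shape, with your resource's endpoint and key.</p>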
<!--kg-card-end: markdown--><h2 id="negotiate-azure-function">negotiate Azure Function</h2><p>The purpose of this function is to negotiate a token from the SignalR Service and provide it to the client. The client will then supply the token to SignalR Service for authentication.</p><h3 id="edit-negotiate-s-function-json">Edit negotiate's function.json</h3><p>Open <code>/negotiate/function.json</code> for editing. This file supplies configuration details at the function level. Looking at the boilerplate, you can see that a couple of bindings have been included that are standard for an HTTP trigger.</p><!--kg-card-begin: markdown--><pre><code class="language-json">{
  &quot;bindings&quot;: [
    {
      &quot;authLevel&quot;: &quot;anonymous&quot;,
      &quot;type&quot;: &quot;httpTrigger&quot;,
      &quot;direction&quot;: &quot;in&quot;,
      &quot;name&quot;: &quot;req&quot;,
      &quot;methods&quot;: [
        &quot;get&quot;,
        &quot;post&quot;
      ]
    },
    {
      &quot;type&quot;: &quot;http&quot;,
      &quot;direction&quot;: &quot;out&quot;,
      &quot;name&quot;: &quot;res&quot;
    }
  ],
  &quot;scriptFile&quot;: &quot;../dist/negotiate/index.js&quot;
}
</code></pre>
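<p>The binding names above map directly onto the function's signature: the <code>req</code> input arrives as an argument, and whatever you assign to <code>context.res</code> flows out through the <code>http</code> output binding. A sketch (not the actual generated file) of how the compiled JavaScript handler behaves:</p>

```javascript
// Sketch of how the bindings map onto the handler: the "req" input
// binding becomes the second argument, and assigning to context.res
// feeds the "res" output binding.
const httpTrigger = async function (context, req) {
  const name = (req.query && req.query.name) || "world";
  context.res = { status: 200, body: `Hello ${name}` };
};

// Simulate an invocation with a mock context (for illustration):
(async () => {
  const context = {};
  await httpTrigger(context, { query: { name: "SignalR" } });
  console.log(context.res.body); // Hello SignalR
})();
```
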
<!--kg-card-end: markdown--><p>Bindings are resources that you can declaratively attach to your functions. Input bindings are triggers that invoke the function, while output bindings are where the Azure Functions emit their results.</p><p>Bindings are optional; there's nothing stopping you from writing imperative code inside your function. But if you have a binding available to you, it can be an excellent choice to reduce the amount of code you need to write. We'll see how the repetitive back-end code I used to have to write has been minimized by leveraging existing bindings.</p><p>We are going to make a couple of changes to this file. We are going to alter the input binding's route so we can supply a <code>userId</code> to SignalR Service, which is a mandatory piece of identity we need to supply if we want to enrol our users into groups.</p><!--kg-card-begin: markdown--><pre><code class="language-json">    {
      &quot;authLevel&quot;: &quot;anonymous&quot;,
      &quot;type&quot;: &quot;httpTrigger&quot;,
      &quot;direction&quot;: &quot;in&quot;,
      ...
      &quot;route&quot;: &quot;{userId}/negotiate&quot;
    },
</code></pre>
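<p>With that route in place, the function is served under <code>/api/&lt;userId&gt;/negotiate</code> rather than <code>/api/negotiate</code>. The SignalR client library appends the <code>/negotiate</code> suffix itself, so the URL we later hand to the client stops at the user segment. A small sketch of the URL shapes involved (host and user are examples):</p>

```javascript
// With route "{userId}/negotiate", the endpoint path embeds the user.
// The SignalR client appends "/negotiate" itself, so the hub URL we
// give it omits that suffix. Host and user below are example values.
const apiBaseUrl = "http://localhost:7071";

function hubUrl(userId) {
  return `${apiBaseUrl}/api/${encodeURIComponent(userId)}`;
}

function negotiateUrl(userId) {
  return `${hubUrl(userId)}/negotiate`;
}

console.log(hubUrl("Jack"));       // http://localhost:7071/api/Jack
console.log(negotiateUrl("Jack")); // http://localhost:7071/api/Jack/negotiate
```
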
<!--kg-card-end: markdown--><p>We will now add an additional input binding that supplies connection info for our SignalR Service hub, which we'll call <code>chat</code>:</p><pre><code class="language-json">	{
      "type": "signalRConnectionInfo",
      "name": "connectionInfo",
      "hubName": "chat",
      "direction": "in",
      "userId": "{userId}"
    }</code></pre><p>In the end, your <code>function.json</code> should look like this:</p><pre><code class="language-json">{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "post"
      ],
      "route": "{userId}/negotiate"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "signalRConnectionInfo",
      "name": "connectionInfo",
      "hubName": "chat",
      "direction": "in",
      "userId": "{userId}"
    }
  ],
  "scriptFile": "../dist/negotiate/index.js"
}</code></pre><h3 id="edit-negotiate-s-index-ts">Edit negotiate's index.ts</h3><p>This is where the declarative binding really starts to shine. Delete the contents of <code>/negotiate/index.ts</code> and replace it with the following:</p><pre><code class="language-ts">import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest, connectionInfo: any): Promise&lt;void&gt; {
    context.res.json(connectionInfo);
};

export default httpTrigger;
</code></pre><p>This one-liner negotiates authentication tokens from SignalR Service and provides them to our clients so they can authenticate. Easy!</p><h2 id="joingroup-azure-function">joinGroup Azure Function</h2><p>SignalR has the concept of groups, which allows messaging to be targeted to a subset of connected users. Groups aren't strictly necessary, but for my IoT Home Automation platform they are useful: I can direct data from my devices to only the clients that wish to observe that data by joining a group named after the device's ID.</p><p>Go ahead and create another HTTP Trigger Azure Function like before, but this time call it <code>joinGroup</code>.</p><h3 id="edit-joingroup-s-function-json">Edit joinGroup's function.json</h3><p>Again, we are going to leverage a pre-existing binding, this one exposing SignalR group management actions. We are going to add the <code>signalRGroupActions</code> binding to the bottom of the file, which should now look like this:</p><pre><code class="language-json">{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "signalR",
      "name": "signalRGroupActions",
      "hubName": "chat",
      "direction": "out"
    }
  ],
  "scriptFile": "../dist/joinGroup/index.js"
}</code></pre><h3 id="edit-joingroup-s-index-ts">Edit joinGroup's index.ts</h3><p>Again, we'll leverage our declarative binding to enrol a user into a group. We will also accept the group's name as a querystring parameter. If the group doesn't exist, it will be created.</p><pre><code class="language-ts">import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const httpTrigger: AzureFunction = async function(
  context: Context,
  req: HttpRequest
): Promise&lt;void&gt; {
  context.bindings.signalRGroupActions = [
    {
      userId: req.query.userId,
      groupName: req.query.groupName,
      action: "add"
    }
  ];
};

export default httpTrigger;</code></pre><h2 id="leavegroup-azure-function">leaveGroup Azure Function</h2><p>This function is nearly identical to the <code>joinGroup</code> function, so I won't dive into it. The <code>function.json</code> is the same, but there is a slight change to the <code>index.ts</code> to indicate a different group action:</p><pre><code class="language-ts">import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const httpTrigger: AzureFunction = async function(
  context: Context,
  req: HttpRequest
): Promise&lt;void&gt; {
  context.bindings.signalRGroupActions = [
    {
      userId: req.query.userId,
      groupName: req.query.groupName,
      action: "remove"
    }
  ];
};

export default httpTrigger;</code></pre><h2 id="sendtogroup-azure-function">sendToGroup Azure Function</h2><p>The final function we need will send messages to members of a specific group. Create another HTTP Trigger function like before.</p><h3 id="edit-sendtogroup-s-function-json">Edit sendToGroup's function.json</h3><p>We'll add the <code>signalRMessages</code> binding to the function.</p><pre><code class="language-json">{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "signalR",
      "name": "signalRMessages",
      "hubName": "chat",
      "direction": "out"
    }
  ],
  "scriptFile": "../dist/sendToGroup/index.js"
}
</code></pre><h3 id="edit-sendtogroup-s-index-ts">Edit sendToGroup's index.ts</h3><p>We will supply the binding with three properties:</p><ul><li><code>groupName</code> - the name of the group we want to send a message to</li><li><code>target</code> - the name of the function we want to invoke on our client. We'll come back to this later when we code the <code>handleMessage</code> handler using the SignalR client library.</li><li><code>arguments</code> - the arguments we pass into the target function we specified. We're going to pass our request body in.</li></ul><pre><code class="language-ts">import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const httpTrigger: AzureFunction = async function(
  context: Context,
  req: HttpRequest
): Promise&lt;void&gt; {
  context.bindings.signalRMessages = [
    {
      // message will only be sent to this group
      groupName: req.query.groupName,
      target: "handleMessage",
      arguments: [req.body]
    }
  ];
};

export default httpTrigger;</code></pre><blockquote>If you want to send a message to a single user, remove the <code>groupName</code> property and replace it with <code>"userId": "myUserId"</code></blockquote><h2 id="running-your-azure-functions-locally">Running your Azure Functions locally</h2><p>Now that our functions have been coded, hit F5 to start debugging. This will execute <code>npm install</code>, then <code>npm run build</code>, followed by <code>func host start</code>.</p><p>If everything worked, you should see a message indicating your application has started, and a list of your functions with their URLs.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-8.png" class="kg-image"></figure><h2 id="building-our-client-web-page">Building our client web page</h2><p>To test everything, we'll create a rudimentary webpage (I'll include the full source at the bottom of this page until I get the code copied over to GitHub). Users will click checkboxes to join and leave a couple of groups - the Super Group and the Awesome Group. Users will also be able to send messages to those groups.</p><p>We'll start with this markup:</p><pre><code class="language-html">&lt;div style="float: left; margin-right: 80px"&gt;
    &lt;div&gt;
        &lt;input type="checkbox" id="superGroup" name="superGroup" &gt;
        &lt;label for="superGroup"&gt;Join super group&lt;/label&gt;
    &lt;/div&gt;
    &lt;div&gt;
        &lt;input type="checkbox" id="awesomeGroup" name="awesomeGroup" &gt;
        &lt;label for="awesomeGroup"&gt;Join awesome group&lt;/label&gt;
    &lt;/div&gt;
&lt;/div&gt;
&lt;div style="float: left"&gt;
    &lt;div&gt;
        &lt;label for="superGroupText"&gt;Send message to super group&lt;/label&gt;&lt;br /&gt;
        &lt;input type="text" id="superGroupText" name="superGroupText" /&gt;
        &lt;button id="sendSuperMessage"&gt;Send&lt;/button&gt;
    &lt;/div&gt;
    &lt;br /&gt;
    &lt;div&gt;
        &lt;label for="awesomeGroupText"&gt;Send message to awesome group&lt;/label&gt;&lt;br /&gt;
        &lt;input type="text" id="awesomeGroupText" name="awesomeGroupText" /&gt;
        &lt;button id="sendAwesomeMessage"&gt;Send&lt;/button&gt; 
    &lt;/div&gt;
&lt;/div&gt;
&lt;br style="clear:both" /&gt;
&lt;div id="log" style="margin-top: 50px" /&gt;</code></pre><p>We are now going to create some functions that will consume the endpoints our Azure Functions have exposed. We'll start with the functions to join and leave groups. I'll use Axios to make this is a little easier on myself:</p><pre><code class="language-js">const username = new URLSearchParams(window.location.search).get("username");
var apiBaseUrl = "http://localhost:7071";
      
function joinGroup(groupName) {
    axios.post(
        `${apiBaseUrl}/api/joinGroup?userId=${username}&amp;groupName=${groupName}`
    );
}

function leaveGroup(groupName) {
    axios.post(
        `${apiBaseUrl}/api/leaveGroup?userId=${username}&amp;groupName=${groupName}`
    );
}</code></pre><p>To send a message, we'll capture the text in an input and include it in the payload to our endpoint:</p><pre><code class="language-js">function sendMessageToGroup(groupName, buttonId) {
    const text = document.querySelector(buttonId).value;

    axios
        .post(`${apiBaseUrl}/api/sendToGroup?groupName=${groupName}`, {
        sender: username,
        group: groupName,
        text: text
    })
}</code></pre><p>Remember when we registered the <code>handleMessage</code> target in our <code>sendToGroup</code> Azure Function? We're now going to code its implementation to render the message onto the screen.</p><pre><code class="language-js">function handleMessage(message) {
    document.querySelector("#log").innerHTML = `&lt;div&gt;${JSON.stringify(
        message
    )}&lt;/div&gt;`;
}</code></pre><p>Finally, we need to build our SignalR connection, register the handler for the <code>handleMessage</code> target, and start it.</p><pre><code class="language-js">var connection = new signalR.HubConnectionBuilder()
    .withUrl(`${apiBaseUrl}/api/${username}`)
    .configureLogging(signalR.LogLevel.Information)
    .build();

connection.on("handleMessage", handleMessage);

connection
    .start()
    .catch(console.error);</code></pre><p>The page needs to be served from a webserver; CORS will fail for a page served off disk. <code>http-server</code> can be used for this - just ensure it serves at <code>http://localhost:8080</code>, as that is the URL we specified in our CORS configuration in <code>local.settings.json</code>.</p><p>Once your server is up and running, browse to your page, ensuring you include a username as a querystring parameter:</p><p><a href="http://localhost:8080/index.html?username=Jack">http://localhost:8080/index.html?username=Jack</a></p><p>If you open Devtools, you should see a happy message in your console indicating a successful connection.</p><blockquote> [2020-02-11T17:43:24.224Z] Information: WebSocket connected to wss://serverless-demo.service.signalr.net/client/?hub=chat&amp;id=C_tHxxx </blockquote><p>If you open your network tab and click on <code>WS</code>, you can select your websocket connection to see the frames being passed back and forth. In the capture below you can see <code>{type: 6}</code> being ferried around - it's a keepalive message.
If the browser doesn't send notification of a disconnection - say there is a power failure - the missing keepalive responses tell SignalR that the connection is no longer valid.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-9.png" class="kg-image"></figure><p>Open a second browser window, and open your page but with a different username:</p><p><a href="http://localhost:8080/index.html?username=Jill">http://localhost:8080/index.html?username=Jill</a></p><p>You should now be able to join and leave groups, and send messages between the two users!</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/02/image-10.png" class="kg-image"></figure><p>As promised, here is the full source of the webpage.</p><pre><code class="language-html">&lt;html style="font-size: 20px"&gt;
  &lt;body&gt;
    &lt;div style="float: left; margin-right: 80px"&gt;
      &lt;div&gt;
        &lt;input type="checkbox" id="superGroup" name="superGroup" &gt;
        &lt;label for="superGroup"&gt;Join super group&lt;/label&gt;
      &lt;/div&gt;
      &lt;div&gt;
        &lt;input type="checkbox" id="awesomeGroup" name="awesomeGroup" &gt;
        &lt;label for="awesomeGroup"&gt;Join awesome group&lt;/label&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div style="float: left"&gt;
      &lt;div&gt;
        &lt;label for="superGroupText"&gt;Send message to super group&lt;/label&gt;&lt;br /&gt;
        &lt;input type="text" id="superGroupText" name="superGroupText" /&gt;
        &lt;button id="sendSuperMessage"&gt;Send&lt;/button&gt;
      &lt;/div&gt;
      &lt;br /&gt;
      &lt;div&gt;
        &lt;label for="awesomeGroupText"&gt;Send message to awesome group&lt;/label&gt;&lt;br /&gt;
        &lt;input type="text" id="awesomeGroupText" name="awesomeGroupText" /&gt;
        &lt;button id="sendAwesomeMessage"&gt;Send&lt;/button&gt; 
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;br style="clear:both" /&gt;
    &lt;div id="log" style="margin-top: 50px" /&gt;
    &lt;script src="https://cdn.jsdelivr.net/npm/@aspnet/signalr@1.1.2/dist/browser/signalr.js"&gt;&lt;/script&gt;
    &lt;script src="https://cdn.jsdelivr.net/npm/axios@0.18.0/dist/axios.min.js"&gt;&lt;/script&gt;
    &lt;script&gt;
      const username = new URLSearchParams(window.location.search).get("username");
      var apiBaseUrl = "http://localhost:7071";
      
      function handleGroupChange(event, groupName) {
        if (event.target.checked) {
          joinGroup(groupName);
        }        
        else {
          leaveGroup(groupName);
        }
      }

      function joinGroup(groupName) {
        axios.post(
          `${apiBaseUrl}/api/joinGroup?userId=${username}&amp;groupName=${groupName}`
        );
      }

      function leaveGroup(groupName) {
        axios.post(
          `${apiBaseUrl}/api/leaveGroup?userId=${username}&amp;groupName=${groupName}`
        );
      }

      function sendMessageToGroup(groupName, buttonId) {
        const text = document.querySelector(buttonId).value;

        axios
          .post(`${apiBaseUrl}/api/sendToGroup?groupName=${groupName}`, {
            sender: username,
            group: groupName,
            text: text
          })
      }

      document.querySelector("#superGroup").onclick = (e) =&gt; handleGroupChange(e, "superGroup");
      document.querySelector("#awesomeGroup").onclick = (e) =&gt; handleGroupChange(e, "awesomeGroup");

      document.querySelector("#sendSuperMessage").onclick = () =&gt; sendMessageToGroup("superGroup", "#superGroupText");
      document.querySelector("#sendAwesomeMessage").onclick = () =&gt; sendMessageToGroup("awesomeGroup", "#awesomeGroupText");

      function handleMessage(message) {
        document.querySelector("#log").innerHTML = `&lt;div&gt;${JSON.stringify(
          message
        )}&lt;/div&gt;`;
      }
            
      var connection = new signalR.HubConnectionBuilder()
        .withUrl(`${apiBaseUrl}/api/${username}`)
        .configureLogging(signalR.LogLevel.Information)
        .build();
      
      connection.on("handleMessage", handleMessage);
      
      connection
        .start()
        .catch(console.error);
    &lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre>]]></content:encoded></item><item><title><![CDATA[Control power using a TIP120 transistor and Node.js]]></title><description><![CDATA[<p> Earlier posts have shown how to control power by <a href="https://blog.hardiegras.myds.me/control-power-using-a/control-power-with-an-iot-relay/">using a relay</a> or by <a href="https://blog.hardiegras.myds.me/control-rf-outlets-with-your-rasperry-pi-and-node-js/">controlling an RF outlet</a>. However, there are use cases that can't be satisfied with those options. Some devices don't have a standard plug you can plug into an outlet. You may want to control the amount</p>]]></description><link>https://blog.hardiegras.myds.me/control-power-using-a/</link><guid isPermaLink="false">5e2dabccf6979d0001374491</guid><category><![CDATA[raspberrypi]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Wed, 29 Jan 2020 00:53:46 GMT</pubDate><content:encoded><![CDATA[<p> Earlier posts have shown how to control power by <a href="https://blog.hardiegras.myds.me/control-power-using-a/control-power-with-an-iot-relay/">using a relay</a> or by <a href="https://blog.hardiegras.myds.me/control-rf-outlets-with-your-rasperry-pi-and-node-js/">controlling an RF outlet</a>. However, there are use cases that can't be satisfied with those options. Some devices don't have a standard plug you can plug into an outlet. You may want to control the amount of electricity being supplied, perhaps to control the speed of a fan. For those scenarios, you can use a TIP120 transistor.</p><p>For our purpose, we will use the transistor to control a higher-voltage circuit. The Raspberry Pi can supply 3.3 volts, which is often not enough to drive the device you are attempting to power. 
To make this work, we will need a few things.</p><h2 id="power-supply-unit">Power supply unit</h2><p>To supply a higher voltage, we'll use a common power supply unit like this one, which outputs at 12V:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-16.png" class="kg-image"></figure><p>Note that I cut off the normal plug and attached a terminal I got <a href="https://www.amazon.ca/gp/product/B0188DMF3A/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;psc=1">from this set.</a> I won't write about how you use a crimping tool like <a href="https://www.amazon.ca/gp/product/B00DHCRVSC/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;psc=1">this one</a> because there are plenty of great <a href="https://www.youtube.com/results?search_query=crimping+wires">YouTube videos</a> that would do better justice to the subject matter, and also because I'm not very good at it :)</p><h2 id="tip120-transistor">TIP120 transistor</h2><p>This is a cheap and common component you can get from any electronics store.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-17.png" class="kg-image"></figure><p>The pins connect to the three parts of the component:</p><ul><li>the base, which activates the transistor (left lead in this model)</li><li>the collector, which is the positive external lead (center lead in this model)</li><li>the emitter, which is the negative external lead (right lead in this model)</li></ul><p>To drive a device, we connect the power supply unit and the device into a circuit with this transistor mediating power flow. 
When we supply current to the base lead from our Raspberry Pi, the transistor will activate and close its circuit, which will start supplying the device with power from our power supply unit.</p><p>Using pulse width modulation (PWM) via the Raspberry Pi's GPIO18 pin, you can pulse the base lead intermittently, creating duty cycles that supply and cut power at desired intervals. This is how you would control the intensity of a fan.</p><h2 id="creating-the-circuit">Creating the circuit</h2><p>To create this circuit you will need:</p><ul><li>1K ohm resistor</li><li>1N4001 diode</li><li>TIP120 transistor</li><li>power supply unit</li><li>a fan or some other device to power</li><li>Raspberry Pi</li></ul><p>With those components in hand, the circuit will look like this:</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-18.png" class="kg-image"></figure><p>If you wish to use PWM, connect GPIO18 to the base lead of the transistor, which is the leftmost one.</p><p>Be sure to include a diode oriented in the direction indicated in the diagram, which ensures current will run in one direction. Without it, a higher current than your Raspberry Pi can handle could run through your board and ruin it.</p><p>If you want to turn the current completely on or off, it's just a matter of writing the pin high or low. To be a bit more interesting, I'll show some code that leverages PWM to control the intensity of a fan. I'll be using <a href="https://github.com/rwaldron/johnny-five">Johnny-Five</a>, a great library that lets us create robots and smart devices with JavaScript.</p><!--kg-card-begin: markdown--><pre><code class="language-js">import PiIO from 'pi-io';
import five from 'johnny-five';

const board = new five.Board({
  io: new PiIO()
});

board.on(&quot;ready&quot;, function() {
  board.pinMode('GPIO18', five.Pin.PWM);
  
  // analogWrite accepts an int between 0-255 which controls the 
  // duty cycle supplying power to your device
  board.analogWrite('GPIO18', 1 * 255); // 100% of full power
  
  // after 5 seconds, reduce to 25%
  setTimeout(() =&gt; {
    board.analogWrite('GPIO18', 0.25 * 255); // 25% of full power
  }, 5000);
});</code></pre>
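<p>One thing this enables is simple closed-loop (bang-bang) control: read a temperature, run the fan at full power below a target, and cut it above the target, with a small hysteresis band so it doesn't rapidly toggle near the setpoint. Here's a sketch of the decision logic, separated from the hardware so it can be tested on its own (the target and band values are illustrative, not from my build):</p>

```javascript
// Bang-bang fan control with hysteresis: returns the analogWrite value
// (0-255) for the fan given the current probe temperature.
// TARGET and BAND are illustrative numbers, not from the original build.
const TARGET = 225; // desired grill temperature (°F)
const BAND = 5;     // hysteresis band to avoid rapid on/off toggling

function fanDuty(currentTemp, fanIsOn) {
  if (currentTemp < TARGET - BAND) return 255; // too cold: fan at full power
  if (currentTemp > TARGET + BAND) return 0;   // too hot: fan off
  return fanIsOn ? 255 : 0;                    // inside the band: hold state
}

console.log(fanDuty(200, false)); // 255
console.log(fanDuty(240, true));  // 0
```

<p>On each probe reading, you would feed the result to <code>board.analogWrite('GPIO18', ...)</code> as in the example above.</p>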
<!--kg-card-end: markdown--><p>Now that I can control a fan, I can regulate the temperature of my kamado grill, which is governed by the amount of air flowing through its bottom vent. I read a BBQ probe dangling inside the dome, and based on whether the temperature is above or below a threshold I set, the fan turns on or off to maintain my desired temperature.</p><p>You can see below there is a squirrel cage blower that I've connected to my bottom vent via some tubing that is food safe at high temperatures.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-19.png" class="kg-image"></figure><p>And yes, that pulled pork was totally delicious.</p>]]></content:encoded></item><item><title><![CDATA[Connecting your device to the Azure IoT Hub with Node.js]]></title><description><![CDATA[<p>The first step to creating a home automation IoT platform is to figure out how to manage message ingress from your device into the platform. This post will detail how we can leverage Azure's IoT Hub for this.</p><p>The IoT Hub ingests messages at massive scale, but has a lot</p>]]></description><link>https://blog.hardiegras.myds.me/connecting-your-device-to-the-azure-iot-hub/</link><guid isPermaLink="false">5e133c3d718c990001cbc651</guid><category><![CDATA[iot]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[azure]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Mon, 06 Jan 2020 20:00:00 GMT</pubDate><content:encoded><![CDATA[<p>The first step to creating a home automation IoT platform is to figure out how to manage message ingress from your device into the platform. 
This post will detail how we can leverage Azure's IoT Hub for this.</p><p>The IoT Hub ingests messages at massive scale, but also has a lot of other really cool features that make setting up my platform really easy:</p><ul><li>device registration and management</li><li>remote control</li><li>monitoring</li><li>security</li></ul><h2 id="setting-up-your-iot-hub-in-the-azure-portal">Setting up your IoT Hub in the Azure portal</h2><p>There are a couple of ways to provision an IoT Hub, but we'll use the Azure portal. First, find the IoT Hub resource in the portal and click Create.</p><p>For the basics, you'll need to fill out your subscription, resource group, region and IoT Hub name. Your resource group and IoT Hub name must be unique to your region.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image.png" class="kg-image"></figure><p>Next, select Size and Scale to set your hub's size:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-1.png" class="kg-image"></figure><p>What's really nice about the IoT Hub is that there is a free tier! I already have a hub, so you can see above the error indicating I can't have more than one. This is a fully functional hub, but with the disadvantage of a small daily allowance of 8,000 messages. If you are going to exceed this allowance, you'll need to bump up your tier to Basic or Standard. The big downside to the Basic tier is that you <a href="https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-scaling#basic-and-standard-tiers">cannot send messages to your device</a>, which is a showstopper for me.</p><p>Leave the Device-to-cloud partitions at 2 (you can't change it on the free tier anyway). More partitions give you flexibility to scale out ingestion during times of heavier load. 
For a home automation platform, 2 is more than enough.</p><p>Review your changes, click Create, and wait for your hub to be deployed:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-2.png" class="kg-image"></figure><h2 id="provisioning-a-device-in-the-iot-hub-registry">Provisioning a device in the IoT Hub registry</h2><p>Once your hub has been created, we need to provision a new device in the hub registry in order to generate the keys our physical device will use to send messages to the hub.</p><p>In an enterprise scenario, device provisioning is a complex operation, with certificates burned onto device silicon and managed with something like the <a href="https://docs.microsoft.com/en-us/azure/iot-dps/about-iot-dps">Azure Device Provisioning Service</a>. For our scenario, manually provisioning a few home automation devices won't be particularly cumbersome.</p><p>Click IoT Devices in the navigation menu, and we'll be presented with an empty list of devices. Add one by clicking New:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-3.png" class="kg-image"></figure><p>Assign a Device ID - what you call it is completely arbitrary; I'm going to call mine <code>test-device</code>. Leave the Authentication Type as Symmetric Key, and Auto-generate its keys:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-5.png" class="kg-image"></figure><p>Once created, it will show up in our device list:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-6.png" class="kg-image"></figure><p>Drill into it and we can access its keys and connection strings, manage whether it is allowed to connect to the IoT Hub, and more. 
Note that there are two keys: if we think a key has been compromised, we can rotate the keys to invalidate the compromised one.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-7.png" class="kg-image"></figure><p>We'll come back after we have connected our device to check out some of the other functionality. For the moment, take note of your primary connection string, as you'll need it in the next section.</p><h2 id="connecting-your-device-to-the-iot-hub">Connecting your device to the IoT Hub</h2><p>Your device can be a Raspberry Pi, an Arduino, or your plain ol' laptop - anything that runs Node will work.</p><p>Create a new Node project with <code>npm init</code> and install these dependencies:</p><p><code>npm install azure-iot-device azure-iot-device-mqtt --save</code></p><p>Next, create an <code>index.js</code> file and create your IoT Hub client using the settings from when you created your hub:</p><pre><code class="language-js">import DeviceSDK from "azure-iot-device";
import Transport from "azure-iot-device-mqtt";

// UPDATE CONN_STRING WITH YOUR CONNECTION STRING
const CONN_STRING =
  "HostName=iot-hub-chardie.azure-devices.net;DeviceId=test-device;SharedAccessKey=XXXXXXXXXXXX";
const client = DeviceSDK.Client.fromConnectionString(
  CONN_STRING,
  Transport.Mqtt
);

client.open(err =&gt; {
  if (err) {
    console.error(err.toString());
    process.exit(-1);
  }

  console.log("Connected to IoT Hub!");
  // MORE AWESOME CODE TO GO HERE!
});</code></pre><p>Note we are using <code>Mqtt</code> as our protocol, which is extremely lightweight with low bandwidth requirements, but there are also <code>Https</code> and <code>Amqp</code> protocols available with their own pros and cons.</p><p>Run the script and make sure <code>Connected to IoT Hub!</code> is logged to the console.</p><p>If we go back to the Azure portal and drill into our device, we can select Device Twin - which is a virtual representation of our device - in order to see various properties and metadata. We can see that our device's connection state is <code>connected</code>:</p><pre><code class="language-json">{
  "deviceId": "test-device",
  ...
  "connectionState": "Connected",</code></pre><p>Now that our device is connected, we can start communicating between our device and the IoT Hub.</p><h2 id="device-to-cloud-communication">Device-to-cloud communication</h2><p>There are two ways for your device to communicate with your IoT Hub:</p><ul><li>Telemetry event messages</li><li>Device Twin Reported Properties</li></ul><h3 id="telemetry-event-messages">Telemetry event messages</h3><p>The most common IoT communication scenario is a device capturing some telemetry data and pushing it to an ingestion service. For simplicity's sake, we're going to simulate a temperature being recorded and publish it to our IoT Hub.</p><p>When our connection opens, we will set an interval of 5 seconds where we will generate a random temperature reading and push it to Azure.</p><pre><code class="language-js">client.open(err =&gt; {
  ...
  const sendTelemetry = () =&gt; {
    // Generate a random int between 0 - 4 to represent a temperature
    const temperature = Math.floor(Math.random() * Math.floor(5));

    const message = new DeviceSDK.Message(
      // SDK doesn't take care of serialization, we must stringify ourselves
      JSON.stringify({
        temperature
      })
    );

    client.sendEvent(message, function(err) {
      if (err) {
        console.error(err.toString());
        process.exit(-1);
      }
      console.log(`Message sent: ${message.data.toString()}`);
    });
  };

  setInterval(sendTelemetry, 5000);</code></pre><p>Our console should start logging messages like:</p><pre><code>Message sent: {"temperature":1}
Message sent: {"temperature":0}
Message sent: {"temperature":4}</code></pre><p>At this point, if we flip over to the Azure portal and bring up the IoT Hub Overview window, we can see our Device to cloud messages being captured:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-8.png" class="kg-image"></figure><h3 id="device-twin-reported-properties">Device Twin Reported Properties</h3><p>Note that this method requires the Standard (or Free) tier of IoT Hub.</p><p>This method of communication is typically used to communicate some status of the device so that it can be captured in its device twin, which is its virtual representation in IoT Hub. Device twins can be queried to provide rich detail about the status your physical devices.</p><p>To update a reported property, retrieve the device twin and make an assignment like this:</p><pre><code class="language-js">client.open(err =&gt; {
  ...

  client.getTwin((twinErr, twin) =&gt; {
    if (twinErr) {
      console.error("Could not retrieve twin");
      process.exit(-1);
    }
      
    const propertiesPatch = {
      batteryLife: 50
    };

    twin.properties.reported.update(propertiesPatch, function(err) {
      if (err) {
        throw err;
      }
      console.log("Reported battery life"));
    });
  });
});</code></pre><p>Note <code>batteryLife</code> is completely arbitrary; your property could be named anything.</p><p>Open the Azure portal, drill into your IoT Device and click on Device Twin:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-13.png" class="kg-image"></figure><p>You should see its reported properties now include <code>batteryLife</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-15.png" class="kg-image"></figure><h2 id="cloud-to-device-communication">Cloud-to-device communication</h2><p>There are three ways to control your IoT device:</p><ul><li>Cloud-to-device messages</li><li>Direct methods</li><li>Device Twin Desired Properties</li></ul><p>Note that all three methods require the Standard (or Free) tier of IoT Hub. Each method has different features which are detailed in Microsoft's documentation, but in brief:</p><p><strong>Cloud-to-device message - </strong>These are unidirectional messages sent without the expectation of a return value. For the AMQP and HTTPS protocols, the IoT Hub can be made aware that the message was processed, but there is no expectation that anything will be returned to be acted upon. Messages are stored on the IoT Hub for 48 hours, and will be delivered once the device connects.</p><p><strong>Direct methods</strong> - A direct method is an immediate invocation of a function that exists on the device, where the function can return a value. This makes the communication fully bidirectional. If the device is not connected, the direct method will fail to invoke and will not retry. </p><p><strong>Device Twin Desired Properties -</strong> These are properties on the device's digital twin that will be communicated when the device connects to the IoT Hub; these messages do not expire. 
These are intended to put the device into a desired state.</p><h3 id="cloud-to-device-message">Cloud-to-device message</h3><p>Receiving a message is just a matter of registering a handler for the device client's <code>message</code> event:</p><pre><code class="language-js">client.open(function (err) {
  ...
  client.on("message", msg =&gt; {
    console.log(`Received message: ${msg.data.toString()}`);
});</code></pre><p>While your Node app is running, bring up the Azure portal again, browse to your IoT Device and click the Message to Device link:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-9.png" class="kg-image"></figure><p>We'll add "Hi from Azure!" to our message body and click Send Message:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-10.png" class="kg-image"></figure><p>After you click Send Message, your Node app should log out:</p><p><code>Received message: Hi from Azure!</code></p><p>MQTT has no concept of completing a message, but the AMQP and HTTPS transports do, and completing a message informs IoT Hub that it has been processed, in case you need to drive some retry or compensation workflow:</p><pre><code class="language-js">client.on('message', function (msg) {
  ...
  client.complete(msg, function (err) {
    if (err) {
        console.error(`Could not complete message ${err.toString()}`);
    } else {
        console.log('message has completed');
    }
  });
});</code></pre><h3 id="direct-methods">Direct methods</h3><p>Similar to handling cloud-to-device messages, setting up a direct method involves creating a handler with the client's <code>onDeviceMethod</code> method. The following sample simulates receipt of a direct method message that invokes the device's <code>setFanStatus</code> method: the request payload is interrogated for the <code>status</code> value to apply to the fan, the fan is turned on or off, and a response is sent to the IoT Hub to confirm the fan is now in the desired state.</p><pre><code class="language-js">client.open(function (err) { 
  ...
  const setFanStatus = (request, response) =&gt; {
    console.log(`Received: ${JSON.stringify(request.payload)}`);
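    // Note: `fan` stands in for your own hardware abstraction (e.g. a GPIO
    // or relay wrapper); it is not part of the Azure SDK.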

    if (request.payload.status === "on") {
      fan.turnOn();
    } else if (request.payload.status === "off") {
      fan.turnOff();
    }

    const fanPayload = {
      currentStatus: fan.status
    };

    response.send(200, fanPayload, respErr =&gt; {
      if (respErr) {
        console.error(respErr.toString());
        process.exit(-1);
      }
      console.log("response to setFanStatus sent");
    });
  };

  client.onDeviceMethod("setFanStatus", setFanStatus);
});</code></pre><p>With your Node app running, open the Azure portal and drill into your IoT Device and click Direct Method:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-11.png" class="kg-image"></figure><p>In the Direct Method window, assign "setFanStatus" to Method Name, and add the following to the Payload:</p><pre><code class="language-json">{
  status: "on"
}</code></pre><p>After clicking Invoke Method, you should receive a Result indicating the current status of the fan is <code>on</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-12.png" class="kg-image"></figure><p>Going back to your Node app, the console should have logged confirmation it received a message and that it responded with a confirmation:</p><pre><code>Received: {"status":"on"}
response to setFanStatus sent</code></pre><h3 id="device-twin-desired-properties">Device Twin Desired Properties</h3><p>Desired properties put a device into a desired state. In this example, we'll push down a message to alter the device's telemetry data push interval.</p><p>Using the device client, we request the device twin from IoT Hub, and when it is retrieved, we will set up a handler to handle change events to our custom desired property. Note that the handler will be invoked not only when the property has changed, but also when your application starts up with the current value of the desired property.</p><pre><code class="language-js">client.open(err =&gt; {
  ...

  client.getTwin((twinErr, twin) =&gt; {
    if (twinErr) {
      console.error("Could not retrieve twin");
      process.exit(-1);
    }

    twin.on("properties.desired.pushInterval", pushInteval =&gt; {
      console.log(`Received update to pushInterval: ${pushInteval}`);
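      // A real device would typically act on the new value here - for example,
      // clear the telemetry timer from the earlier sample and call setInterval
      // again with the updated interval (hypothetical wiring, not shown above).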
    });
  });
});</code></pre><p>In the above sample, we begin to observe a <code>pushInterval</code> property. This is just a name I came up with; it could be anything. When the value is changed in the device twin, the change will be raised on the device, where it can be handled however desired.</p><p>With your Node app running, open the Azure portal, drill into your IoT Device and click on Device Twin:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-13.png" class="kg-image"></figure><p>You will be presented with a JSON document, which is the virtual representation of your device. Under <code>properties.desired</code>, add a new property called <code>pushInterval</code> and give it a value of <code>10</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2020/01/image-14.png" class="kg-image"></figure><p>Click Save, and the change will be raised if your device is presently connected, or will be raised as soon as your device connects to the IoT Hub.</p><p>You can observe changes made to <code>pushInterval</code> in the desired properties' <code>$metadata</code> property.</p><p>Flip back to your Node app, and the desired property update for <code>pushInterval</code> should be logged out:</p><p><code>Received update to pushInterval: 10</code></p>]]></content:encoded></item><item><title><![CDATA[Control power with an IoT Power Relay]]></title><description><![CDATA[<p>While controlling power to an appliance with an RF outlet is undeniably cool, there's a much simpler way to supply electricity. 
It involves a fairly inexpensive <a href="https://dlidirect.com/collections/frontpage/products/iot-power-relay">IoT Power Relay by Data Loggers</a>.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-25.png" class="kg-image"></figure><p>This is the fastest and simplest way to control power - no soldering or special tools required. Grab</p>]]></description><link>https://blog.hardiegras.myds.me/control-power-with-an-iot-relay/</link><guid isPermaLink="false">5e0b746565cac10001b1fff3</guid><category><![CDATA[iot]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Tue, 31 Dec 2019 16:34:02 GMT</pubDate><content:encoded><![CDATA[<p>While controlling power to an appliance with an RF outlet is undeniably cool, there's a much simpler way to supply electricity. It involves a fairly inexpensive <a href="https://dlidirect.com/collections/frontpage/products/iot-power-relay">IoT Power Relay by Data Loggers</a>.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-25.png" class="kg-image"></figure><p>This is the fastest and simplest way to control power - no soldering or special tools required. Grab a couple jumper cables and screw them into the pin connector (the green box in the image above) and you're pretty much good to go.</p><p>Connect your board and write high to the data pin to flip the relay and start supplying power to two outlets that are normally off. There is also another outlet that is normally on, but will be unpowered while the data pin is high. Finally there is an outlet that is always on, which is convenient if you need a place to plug your board into.</p><p>This is a pretty rugged device, and it handles a higher load than the RF outlets, which will burn out if you connect an appliance over 1200W. 
</p><p>A good set of tutorials can be found on the <a href="http://www.digital-loggers.com/iot2faqs.html">Digital Loggers site</a>.</p>]]></content:encoded></item><item><title><![CDATA[Backup Ghost CMS instance to Google Drive]]></title><description><![CDATA[<p>My Synology NAS is set up with RAID 1, which writes all data to both my drives, ensuring that if one drive fails, the other will take over and keep serving. However, it's always a good idea to have an off-site backup in case something truly disastrous happens like a fire or theft</p>]]></description><link>https://blog.hardiegras.myds.me/backup-ghost-cms-instance/</link><guid isPermaLink="false">5de2d2c7f3ecb10001d203ee</guid><category><![CDATA[Synology]]></category><category><![CDATA[Ghost CMS]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Mon, 30 Dec 2019 20:17:51 GMT</pubDate><content:encoded><![CDATA[<p>My Synology NAS is set up with RAID 1, which writes all data to both my drives, ensuring that if one drive fails, the other will take over and keep serving. However, it's always a good idea to have an off-site backup in case something truly disastrous happens, like a fire or theft, that would leave you with nothing. 
To that end, I'll be sending daily backups to my Google Drive.</p><h2 id="creating-your-backup">Creating your backup</h2><p>When creating a copy of your files, it's best to stop your docker container during the process so your SQLite database isn't corrupted by a copy operation running while Ghost is writing to the database.</p><blockquote>Note that if you use MySQL or MariaDB as your Ghost database, you should investigate how to create a backup using their CLI or APIs.</blockquote><p>The following bash script does three things:</p><p>1) it stops the docker container named "blog"</p><p>2) it creates a compressed tarball archive of all our Ghost files found at <code>/volume1/docker/blog</code> and deposits it at <code>/volume1/Share/blog-backup</code></p><p>3) it restarts the docker container</p><!--kg-card-begin: markdown--><pre><code>#!/bin/sh

docker stop blog
sudo tar -czvf /volume1/Share/blog-backup/blog-backup.tar.gz /volume1/docker/blog
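# Optional: use a dated archive name instead, e.g. blog-backup-$(date +%F).tar.gz,
# if you want to keep more than one day's backup around.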
docker start blog
</code></pre>
<!--kg-card-end: markdown--><p>Place the above script at <code>/volume1/public/blog-backup.sh</code></p><h2 id="scheduling-the-backup-job">Scheduling the backup job</h2><p>Synology includes a Task Scheduler which, like <code>cron</code>, will run tasks based on a schedule you set up.</p><p>Open up Control Panel -&gt; Task Scheduler and enter "Blog backup" as the task name, and leave <code>root</code> for the owner (I'm sure I'm violating some common-sense security principle here).</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-15.png" class="kg-image"></figure><p>I set up my task to run every morning at 4 a.m.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-16.png" class="kg-image"></figure><p>Finally, I indicate the location of the script file I want to execute: <code>/volume1/public/blog-backup.sh</code></p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-17.png" class="kg-image"></figure><p>Once your script executes, you should see a tarball archive in your backup directory:</p><p><code>ls -la /volume1/Share/blog-backup/</code></p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-18.png" class="kg-image"></figure><h2 id="moving-your-backup-to-google-drive">Moving your backup to Google Drive</h2><p>Syncing files with Google Drive - and a number of other providers - is facilitated by Cloud Sync, an official Synology package. Download it from the Package Center and it will step you through setting up a sync:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-19.png" class="kg-image"></figure><p>You will need to authorize Cloud Sync to access your Google Drive. 
First, sign in with the account linked to your drive:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-20.png" class="kg-image"></figure><p>And then grant permission for Cloud Sync to use it:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-21.png" class="kg-image"></figure><p>Synology then steps in to link everything together:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-22.png" class="kg-image"></figure><p>Specify the appropriate local and remote paths between your NAS and Google Drive:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-23.png" class="kg-image"></figure><p>I haven't bothered with "Schedule settings"; by default it will copy things over in real time, which is fine for me given the small size of the backup.</p><p>Once you finish your sync job, your backup should get pushed up automatically to Google Drive:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-24.png" class="kg-image"></figure>]]></content:encoded></item><item><title><![CDATA[Control RF outlets with your Raspberry Pi and Node.js]]></title><description><![CDATA[<p>This is a neat project that lets you deliver power to devices using Raspberry Pi-controlled RF outlets. 
I've used these outlets to control the Xmas tree, playroom lights, a jury-rigged sous-vide, etc...</p><p>Tim Leland has a <a href="https://timleland.com/wireless-power-outlets/">detailed blog post</a> with instructions to set up a website on the Raspberry Pi</p>]]></description><link>https://blog.hardiegras.myds.me/control-rf-outlets-with-your-rasperry-pi-and-node-js/</link><guid isPermaLink="false">5df694e18d30f600014311e5</guid><category><![CDATA[raspberrypi]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[iot]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Sun, 15 Dec 2019 21:49:06 GMT</pubDate><content:encoded><![CDATA[<p>This is a neat project that lets you deliver power to devices using Raspberry Pi-controlled RF outlets. I've used these outlets to control the Xmas tree, playroom lights, a jury-rigged sous-vide, etc...</p><p>Tim Leland has a <a href="https://timleland.com/wireless-power-outlets/">detailed blog post</a> with instructions to set up a website on the Raspberry Pi to control Etekcity outlets. We'll be controlling the outlets via a Node.js app, so this post will show you the most direct way to get set up with the bare minimum. 
</p><h2 id="hardware">Hardware</h2><ul><li>Raspberry Pi - I have run this on a Raspberry Pi 3 and Zero W</li><li><a href="https://www.amazon.com/gp/product/B00DQELHBS/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=B00DQELHBS&amp;linkCode=as2&amp;tag=timlelcom-20&amp;linkId=QLQ3ESJONX6AGUSB">Etekcity outlets</a></li><li><a href="https://www.amazon.com/gp/product/B00M2CUALS/ref=as_li_tl?ie=UTF8&amp;amp;camp=1789&amp;amp;creative=390957&amp;amp;creativeASIN=B00M2CUALS&amp;amp;linkCode=as2&amp;amp;tag=timlelcom-20&amp;amp;linkId=ZDOQ7BU6VPWMTWN5">RF receiver and transmitter</a></li></ul><p>You will need a breadboard or protoype board and wire up the receiver and transmitter to the appropriate pins:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-9.png" class="kg-image"></figure><h2 id="raspberry-pi-pre-requisites">Raspberry Pi pre-requisites</h2><ul><li>Install Raspbian </li><li>Run <code>sudo apt-get update</code></li><li>Run <code>sudo apt-get upgrade</code></li><li>Install <a href="https://projects.drogon.net/raspberry-pi/wiringpi/download-and-install/">Wiring Pi</a> by running <code>sudo apt-get install wiringpi</code></li></ul><!--kg-card-begin: markdown--><blockquote>
<p>Note about Wiring Pi - the developer behind it is apparently going to cease development on it due to how he has been treated by the community. This is really unfortunate... He has released a final version for Raspberry Pi 4, but there won't be any versions past that. Eventually we'll need to find an alternative...</p>
</blockquote>
<!--kg-card-end: markdown--><h2 id="leland-s-rfoutlet-repo">Leland's rfoutlet repo</h2><p>Tim Leland's repository has two files we are interested in:</p><ul><li>RFSniffer - identifies RF codes </li><li>codesend - transmits RF codes</li></ul><p>We'll pull down his repo: <code>git clone <a href="https://github.com/timleland/rfoutlet.git">https://github.com/timleland/rfoutlet.git</a></code></p><p>I've created an <code>outlet-test</code> folder for my project, and I'm going to copy those two files over:</p><p><code>cp ./rfoutlet/RFSniffer ./outlet-test/RFSniffer</code></p><p><code>cp ./rfoutlet/codesend ./outlet-test/codesend</code></p><h3 id="reading-rf-outlet-codes">Reading RF outlet codes</h3><p>As Leland details in his blog post, getting the RF codes for your outlets starts by executing <code>./outlet-test/RFSniffer</code>, which will make your Pi start listening for codes to be transmitted. Then it is just a matter of hitting the On and Off buttons on the remote for each outlet and recording them. I typically write the On and Off codes directly on the outlets with a Sharpie.</p><h3 id="transmitting-rf-outlet-codes">Transmitting RF outlet codes</h3><p><code>codesend</code> requires elevated privileges:</p><p><code>sudo chown root.root ./outlet-test/codesend</code></p><p><code>sudo chmod 4755 ./outlet-test/codesend</code></p><p>If everything worked, you should be able to transmit the RF codes you recorded earlier and start turning your outlets on and off:</p><p><code>./outlet-test/codesend 5239689</code></p><h2 id="i-thought-there-was-going-to-be-a-node-js-app">I thought there was going to be a Node.js app?</h2><p>In order to transmit the RF codes via Node.js, we're going to spawn a shell and invoke the <code>codesend</code> executable that way:</p><pre><code class="language-js">import { exec } from 'child_process';
import { config } from './config';

exec(`./codesend ${config.onSignal}`);
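// Note: exec runs the command through a shell, and ./codesend is resolved
// against process.cwd() - so start the app from the folder containing
// codesend (or pass a cwd option to exec).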

setTimeout(() =&gt; exec(`./codesend ${config.offSignal}`), 10000);</code></pre><p>The above code turns the outlet on, then turns it off 10 seconds later. It's a little dirty, but I'm now able to integrate with SignalR, react to IoT cloud-to-device messages, control power based on a sensor reading, etc... </p><h2 id="conclusion">Conclusion</h2><p>This is a fun little project, and I have a number of little Pi Zero W's floating around the house controlling various lights and devices. With a little soldering know-how and Adafruit's awesome <a href="https://www.adafruit.com/product/3203">Prototype Board for the Pi Zero</a>, it's pretty straightforward to solder on the RF transmitter and create a permanent solution for power control.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-10.png" class="kg-image"></figure>]]></content:encoded></item><item><title><![CDATA[Writing (and debugging) a Hello World app in Node on a Raspberry Pi]]></title><description><![CDATA[<p>This post follows my post on <a href="https://blog.hardiegras.myds.me/setting-up-a-raspberry-pi-for-development">Setting up a Raspberry Pi for development</a> and will demonstrate how to create a simple Hello World app in Node and debug it using Chrome's DevTools.</p><h2 id="configuring-chrome-devtools">Configuring Chrome DevTools</h2><p>You will need to make Chrome on your laptop aware that it should be monitoring</p>]]></description><link>https://blog.hardiegras.myds.me/writing-and-debugging-a-hello-world-app-in-node-on-a-raspberry-pi/</link><guid isPermaLink="false">5de32153f3ecb10001d203f2</guid><category><![CDATA[nodejs]]></category><category><![CDATA[raspberrypi]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Sun, 01 Dec 2019 02:55:00 GMT</pubDate><content:encoded><![CDATA[<p>This post follows my post on <a href="https://blog.hardiegras.myds.me/setting-up-a-raspberry-pi-for-development">Setting up a Raspberry 
Pi for development</a> and will demonstrate how to create a simple Hello World app in Node and debug it using Chrome's DevTools.</p><h2 id="configuring-chrome-devtools">Configuring Chrome DevTools</h2><p>You will need to make Chrome on your laptop aware that it should be monitoring a particular IP and port on your LAN for a debugging stream. </p><p>Enable discovering network targets by navigating to <code>chrome://inspect</code>. Ensure <code>Discover network targets</code> is enabled and click the <code>Configure</code> button. Enter your Raspberry Pi's IP address and port, and click <code>Done</code>. </p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image.png" class="kg-image"></figure><h2 id="coding-our-hello-world-app">Coding our Hello World app</h2><p>At this point you should have a share mounted on your Pi exposed to your laptop. I will be using VS Code to build the app. We will need to ssh into the Raspberry Pi using Putty or similar. We will create and enter a <code>HelloWorld</code> directory under our mount point:</p><p><code>mkdir /home/pi/Documents/Repos/HelloWorld</code></p><p><code>cd /home/pi/Documents/Repos/HelloWorld</code></p><p>We will create a Node skeleton app with a typical:</p><p><code>npm init</code> (just accept all the defaults)</p><p>In VS Code, click <code>File -&gt; Open Folder</code> and navigate to <code>\\&lt;RPi_IP&gt;\PiShare\HelloWorld</code></p><p>Open <code>package.json</code> for editing and add a <code>start-with-debug</code> script:</p><pre><code>"scripts": {
    "test": "echo \"Error: no test specified\" &amp;&amp; exit 1"
    "start-with-debug": "node --inspect=0.0.0.0:9229 --inspect-brk index.js"
  },</code></pre><p><code>start-with-debug</code> does a couple of things. The <code>--inspect</code> flag makes Node listen for a debugging client. <code>--inspect-brk</code> forces your app to break on the very first line of code, allowing you to attach your debugging client in time.</p><p>Create an <code>index.js</code> file with the following:</p><pre><code class="language-js">console.log("Hello");
debugger;
console.log("World");</code></pre><h2 id="debugging-our-hello-world-app">Debugging our Hello World app</h2><p>From within our Raspberry Pi shell, we execute <code>npm run start-with-debug</code> This will start our app, listen for a debugging client, and also break on the first line until a debugging client attaches.</p><p>If we switch back to our Chrome window on our laptop, we will see it has observed a debugging stream we can inspect:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-2.png" class="kg-image"></figure><p>Click the <code>inspect</code>link and we will be greeted with a familiar debugging window:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/12/image-3.png" class="kg-image"></figure><p>There are three breakpoints set here. The <code>--inspect-brk</code> flag put a breakpoint on the first line. I added a <code>debugger</code> statement on the second line. Finally I added a manual breakpoint on the third line. Each one of them was respected and were broken on as expected.</p>]]></content:encoded></item><item><title><![CDATA[Setting up a Raspberry Pi for development]]></title><description><![CDATA[<h2 id="install-raspbian-os">Install Raspbian OS</h2><p>First thing to do is burn an image of the latest Raspbian - Raspberry Pi's official operating system - onto an micro SD card. Don't skimp on the card, you want something that is reliable and fast. 
I got a pack of SanDisk Ultra 16GB and have</p>]]></description><link>https://blog.hardiegras.myds.me/setting-up-a-raspberry-pi-for-development/</link><guid isPermaLink="false">5de27e05f3ecb10001d201c2</guid><category><![CDATA[raspberrypi]]></category><category><![CDATA[iot]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Sat, 30 Nov 2019 19:59:50 GMT</pubDate><content:encoded><![CDATA[<h2 id="install-raspbian-os">Install Raspbian OS</h2><p>First thing to do is burn an image of the latest Raspbian - Raspberry Pi's official operating system - onto a micro SD card. Don't skimp on the card; you want something that is reliable and fast. I got a pack of SanDisk Ultra 16GB and have had no issues, including when running Windows IoT Core, which requires a lot of speed.</p><p>I won't go over how to install Raspbian, as the official site does a great job and the recommended versions are updated regularly:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.raspberrypi.org"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Installing operating system images - Raspberry Pi Documentation</div><div class="kg-bookmark-description">This section includes some simple guides to setting up the software on your Raspberry Pi. We recommend that beginners start by downloading and installing NOOBS.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.raspberrypi.org/wp-content/themes/mind-control/images/favicon.png"><span class="kg-bookmark-publisher">Raspberry Pi Documentation</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.raspberrypi.org/wp-content/themes/mind-control/images/octocat.jpg"></div></a></figure><p>You can either install NOOBS or download the latest version of Raspbian directly (which is Raspbian Buster at the time of writing). 
The Lite version, which omits things like the desktop that we won't need but would still consume resources, is fine for IoT.</p><h2 id="enable-ssh">Enable SSH</h2><p>You can hook up a keyboard and monitor to your Pi and access your terminal that way, but it is easiest just to <a href="https://www.raspberrypi.org/documentation/configuration/wireless/headless.md">set things up headless</a>.</p><p>After you have burned the Raspbian OS image to your SD card, open the card and simply create an empty file called <code>ssh</code> at the root. Your Pi will read that on boot and will enable SSH. </p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-57.png" class="kg-image"></figure><h2 id="configuring-wifi">Configuring wifi</h2><p>To configure wifi, create a <code>wpa_supplicant.conf</code> file at the root. This file is read by the Pi when it boots and allows you to configure a wifi connection.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-58.png" class="kg-image"></figure><p>The contents of <code>wpa_supplicant.conf</code> will be:</p><pre><code>ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=&lt;Insert country code here&gt;

network={
 ssid="&lt;Name of your WiFi&gt;"
 psk="&lt;Password for your WiFi&gt;"
}</code></pre><p>Note the country code is based on the <a href="https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Officially_assigned_code_elements">ISO 3166-1 alpha-2 code</a> standard.</p><h2 id="accessing-your-raspberry-pi">Accessing your Raspberry Pi</h2><p>If everything worked, you should now see a new <code>raspberrypi</code> device on your LAN.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-62.png" class="kg-image"></figure><p>Using <a href="https://www.putty.org/">Putty</a> (or some other ssh client), you can now shell into your Pi:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-63.png" class="kg-image"></figure><p>You will be asked if you are certain you want to connect to this new device; click "Ok". Once you have connected, you will be asked for these credentials:</p><pre><code>username: pi
password: raspberry</code></pre><p>You should now have shell access to your Pi:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-64.png" class="kg-image"></figure><h2 id="update-the-hostname">Update the hostname</h2><p>If you plan on having multiple Pis on your network, they should each have a unique hostname so you can properly identify them. You need to update the name in two places:</p><p><code>sudo nano /etc/hostname</code><em> </em>It contains a single line reading <code>raspberrypi</code>. Update it to whatever you wish your new hostname to be.</p><p><code>sudo nano /etc/hosts</code><em> </em>Find the line with <code>127.0.1.1 raspberrypi</code> and update <code>raspberrypi</code> to your new hostname.</p><p>Here is what my files look like after renaming the hostname to <code>newpi</code>:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-66.png" class="kg-image"></figure><p>Reboot your Pi and the new hostname should be present in your LAN:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-67.png" class="kg-image"></figure><h2 id="assigning-a-static-ip-optional-">Assigning a static IP (optional)</h2><p>As a convenience, we can ensure that when the Pi connects to the local network, it is assigned the same predictable address.</p><blockquote>Take care to keep track of the IP addresses you assign! If the same IP address is assigned to multiple devices, it will cause no shortage of issues and hair pulling.</blockquote><blockquote>As an alternative to a static IP, you can use something like DNSMasq, which is installed on my router with dd-wrt firmware. 
By setting <code>dhcp-host=C8:37:FB:5E:02:F3,newpi</code> as an option, I can now ssh into it by specifying its address as <code>newpi.hardie.lan</code>. Ensure you don't connect multiple devices on your network with the same hostname you have specified in a DNSMasq option!</blockquote><p>To configure a static IP, we need to edit the <code>dhcpcd.conf</code> config file:</p><p><code>sudo nano /etc/dhcpcd.conf</code></p><p>There is a commented-out template for setting up a static IP under "Example static IP configuration:"</p><p>We will use the template to assign a static IP to our wifi adapter:</p><!--kg-card-begin: markdown--><pre><code>interface wlan0
static ip_address=192.168.1.128/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 8.8.8.8
</code></pre>
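<p>Since this stanza tends to be copied between Pis with only the addresses changing, it can help to generate it from variables. Here is a minimal sketch under assumptions - the addresses are the examples from above, and it writes to a scratch file rather than <code>/etc/dhcpcd.conf</code>:</p>

```shell
# Sketch: build the static-IP stanza from variables so it can be reused
# across Pis. Addresses are examples; on a real Pi you would append this
# to /etc/dhcpcd.conf instead of a scratch file.
CONF=dhcpcd-static.conf
IFACE=wlan0
ADDR=192.168.1.128/24
ROUTER=192.168.1.1

cat > "$CONF" <<EOF
interface $IFACE
static ip_address=$ADDR
static routers=$ROUTER
static domain_name_servers=$ROUTER 8.8.8.8
EOF

grep -c '^static ' "$CONF"   # prints 3
```

<p>After rebooting with the real file in place, <code>ip addr show wlan0</code> should report the address you configured.</p>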
<!--kg-card-end: markdown--><p>Double-check your router address, and then reboot your device. It will now always be assigned the static IP per the new configuration.</p><blockquote>I'm unsure how applicable this is to other router firmware as I have been using dd-wrt exclusively for well over a decade, but if I assign a static IP to my Pi like this, dd-wrt will show a <code>*</code> for the hostname. Under Services, I need to configure a static lease and register the hostname with the MAC address.</blockquote><h2 id="setting-up-a-file-share-for-windows">Setting up a file share for Windows</h2><p>If you are a crackerjack <code>vim</code> user, you can do all your coding inside your shell. But if you're <a href="https://stackoverflow.blog/2017/05/23/stack-overflow-helping-one-million-developers-exit-vim/">like me and a few other people</a>, you probably want a more fully-featured IDE. My setup has me coding using VS Code on my Windows laptop, but writing to my Pi where I can execute my code.</p><p>I need to create the folder I am going to mount:</p><p><code>sudo mkdir -p /home/pi/Documents/Repos</code></p><p>Change the owner to our <code>pi</code> account, which we'll also set up to be our networking account shortly:</p><p><code>sudo chown -R pi /home/pi/Documents/Repos/</code></p><p>In preparation for installing some dependencies, it's a good idea to do a system update:</p><p><code>sudo apt-get update</code></p><p>Once that is done, install a couple of Samba dependencies:</p><p><code>sudo apt-get install samba samba-common-bin</code></p><p>If asked if you want to install a <code>dhcp-client</code>, answer yes. Once installed, open Samba's config file:</p><p><code>sudo nano /etc/samba/smb.conf</code></p><p>Ensure the workgroup property is set to <code>WORKGROUP</code>, which is the Windows default.</p><pre><code># Change this to the workgroup/NT-domain name your Samba server will part of
workgroup = WORKGROUP</code></pre><p>Scroll to the bottom and add the following:</p><pre><code>[PiShare]
 comment=Raspberry Pi Share
 path=/home/pi/Documents/Repos
 browseable=Yes
 writeable=Yes
 only guest=no
 create mask=0777
 directory mask=0777
public=no</code></pre><p>We need a user account for authenticating a request to connect to our share. We will use the existing <code>pi</code> account:</p><p> <code>sudo smbpasswd -a pi</code></p><p>Enter your Samba password twice and then restart Samba:</p><p><code>sudo systemctl restart smbd</code></p><p>Before we open the share, let's put a file in it:</p><p><code>sudo touch /home/pi/Documents/Repos/hi-there.txt</code></p><p>Open Windows Explorer and enter the IP of the Pi (in my case <code>192.168.1.128</code>) followed by the name of the share (<code>PiShare</code>): <code>\\192.168.1.128\PiShare</code></p><p> You will be asked for your network credentials; remember to qualify your username with the hostname you set up earlier. The default would be <code>raspberrypi</code>, but in my case I updated my hostname to <code>newpi</code> so my qualified username is <code>newpi\pi</code>. My password is the Samba password I set a couple of steps ago.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-69.png" class="kg-image"></figure><p>If everything went well, you should now be able to browse the share:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-70.png" class="kg-image"></figure><h2 id="installing-git">Installing Git</h2><p>Source control is mandatory for any serious dev work. I have Gitlab set up on my Synology DiskStation, so I need to install git on my Pi and some ssh keys to connect.</p><p><code>sudo apt-get install git</code></p><p>How you connect to git - https or ssh - will depend on your installation. 
I use ssh keys, so at this point I would copy my keys over to <code>~/.ssh</code> so that I can authenticate with Gitlab and pull down my projects to be run on my Pi.</p><h2 id="installing-node-on-raspberry-pi-optional-">Installing Node on Raspberry Pi (optional)</h2><p>I do a lot of Javascript programming, and many of my IoT projects are written in Node, so Node is a standard part of my setup. Python is already installed on the Pi, so if you are going to use that, you can skip this section entirely.</p><p>Installing Node is a little bit more involved. I don't use <code>apt-get</code> as I've never got it to work properly on a Pi Zero, and there is also a conflicting <code>node</code> package on Debian (the OS Raspbian is based on) so we would need to install <code>nodejs</code> and create a symlink, which seems like a bit of a bother.</p><p>To that end, the following instructions should be universal for setting up Node on any Pi version. But to know what version of Node to download, we need to get the processor architecture of our device:</p><p><code>uname -m</code></p><p>That will print out something like <code>armv7l</code> or <code>armv6l</code>. Next we visit the <a href="https://nodejs.org/dist/">Node download page</a> and find the latest LTS version. 
My Pi 3 is advertising an <code>armv7l</code> processor, so I will download and extract the package like so:</p><p><code>wget https://nodejs.org/dist/v12.13.1/node-v12.13.1-linux-armv7l.tar.gz</code></p><p><code>tar -xzf node-v12.13.1-linux-armv7l.tar.gz</code></p><p>Copy the extracted contents to <code>/usr/local</code>:</p><p><code>sudo cp -r node-v12.13.1-linux-armv7l/* /usr/local</code></p><p>Check that the installation went all right:</p><p><code>node --version</code></p><p>You should get something like <code>v12.13.1</code>.</p>]]></content:encoded></item><item><title><![CDATA[Deriving a union type from a string array in Typescript]]></title><description><![CDATA[<p>I've run into a few occasions where I require the same data at both compile-time and run-time. Usually this occurs when I'm writing some sort of React Higher-Order Component:</p><!--kg-card-begin: markdown--><pre><code class="language-ts">const myData = [&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;]

interface Props {
  myData: &quot;foo&quot; | &quot;bar&quot; | &quot;</code></pre>]]></description><link>https://blog.hardiegras.myds.me/deriving-a-union-type-from-an-array-in-typescript/</link><guid isPermaLink="false">5de141bac7f9bd00017d78f6</guid><category><![CDATA[Typescript]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Fri, 29 Nov 2019 17:08:43 GMT</pubDate><content:encoded><![CDATA[<p>I've run into a few occasions where I require the same data at both compile-time and run-time. Usually this occurs when I'm writing some sort of React Higher-Order Component:</p><!--kg-card-begin: markdown--><pre><code class="language-ts">const myData = [&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;]

interface Props {
  myData: &quot;foo&quot; | &quot;bar&quot; | &quot;baz&quot;
}

class MyClassCore&lt;Props&gt; { ... }

export const MyClass = withMyData(myData, MyClassCore);
</code></pre>
<!--kg-card-end: markdown--><p>In the above contrived example, I need to maintain both the string array and the union type. An improvement would be to derive the union type from the array so that if we need to update the data, we would only need to update the source array and the changes would apply to the union type automatically.</p><p>The easiest way to do this is by first creating an immutable tuple of literals using the <a href="https://github.com/Microsoft/TypeScript/pull/29510"><code>as const</code> assertion</a>. </p><!--kg-card-begin: markdown--><pre><code class="language-ts">const myData = [&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;] as const; // readonly [&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;]
</code></pre>
<!--kg-card-end: markdown--><p>From here you have a couple of ways to derive the union type. The first is to "<a href="https://stackoverflow.com/questions/44480644/typescript-string-union-to-string-array#comment101185261_45486495">use the property type from the numeric index signature</a>" with this syntax:</p><!--kg-card-begin: markdown--><pre><code class="language-ts">type MyDataUnion = typeof myData[number]; // &quot;foo&quot; | &quot;bar&quot; | &quot;baz&quot;
</code></pre>
<!--kg-card-end: markdown--><p>A colleague had another way of doing the above that reads a little more straightforwardly to me:</p><!--kg-card-begin: markdown--><pre><code class="language-ts">type ExtractArrayItemType&lt;T&gt; = T extends ArrayLike&lt;infer U&gt; ? U : never;
type MyDataUnion = ExtractArrayItemType&lt;typeof myData&gt;; // &quot;foo&quot; | &quot;bar&quot; | &quot;baz&quot;
</code></pre>
<!--kg-card-end: markdown--><p>Because the array is immutable, Typescript can safely infer that the type is <code>"foo" | "bar" | "baz"</code> - the narrowest type - instead of <code>string[]</code> or even <code>("foo" | "bar" | "baz")[]</code> as those types are mutable.</p>]]></content:encoded></item><item><title><![CDATA[Create a type-safe function guard in Typescript]]></title><description><![CDATA[<p>It is not uncommon to write guards in Javascript to prevent invocation of a function that could possibly be undefined or null:</p><!--kg-card-begin: markdown--><pre><code class="language-js">if (someFunction) {
  someFunction();
}
</code></pre>
<!--kg-card-end: markdown--><p>As a minor convenience, I usually write a function that will enact that guard for me:</p><!--kg-card-begin: markdown--><pre><code class="language-js">function guardInvoke(func, ...args) {
  if (func) {
      func(args)
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>Then</p>]]></description><link>https://blog.hardiegras.myds.me/create-a-type-safe-function-guard-in-typescript/</link><guid isPermaLink="false">5ddc100ea9496000013720e9</guid><category><![CDATA[Typescript]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Mon, 25 Nov 2019 17:50:46 GMT</pubDate><content:encoded><![CDATA[<p>It is not uncommon to write guards in Javascript to prevent invocation of a function that could possibly be undefined or null:</p><!--kg-card-begin: markdown--><pre><code class="language-js">if (someFunction) {
  someFunction();
}
</code></pre>
<!--kg-card-end: markdown--><p>As a minor convenience, I usually write a function that will enact that guard for me:</p><!--kg-card-begin: markdown--><pre><code class="language-js">function guardInvoke(func, ...args) {
  if (func) {
      func(args)
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>Then I would invoke my functions like this: <code>guardInvoke(myFunc, 1, 2, 3)</code></p><p>Moving to Typescript, we can still use the guard, but we should declare more types to make the function more type-safe. To do this I leverage some black magic types from the core Typescript <code>lib.es5.d.ts</code> library.</p><h2 id="diving-into-parameterst-and-returntypet">Diving into Parameters&lt;T&gt; and ReturnType&lt;T&gt;</h2><p><code>Parameters&lt;T&gt;</code>: Obtain the parameters of a function type in a tuple</p><p><code>ReturnType&lt;T&gt;</code>: Obtain the return type of a function type</p><p>The type definitions are similar and interesting, but let's have a look at <code>Parameters&lt;T&gt;</code>:</p><!--kg-card-begin: markdown--><pre><code class="language-ts">type Parameters&lt;T extends (...args: any) =&gt; any&gt; = T extends (...args: infer P) =&gt; any ? P : never;
</code></pre>
<!--kg-card-end: markdown--><p>This can be pretty intimidating for the uninitiated, so let's break this down:</p><p><code>type Parameters&lt;T extends (...args: any) =&gt; any&gt;</code> </p><p>All this means is <code>T</code> must extend <code>(...args: any) =&gt; any</code> - in other words <code>T</code> must be any function. Why doesn't <code>T</code> simply extend the <code>Function</code> type? We are going to need a reference to <code>...args</code> in the next part.</p><p><code>T extends (...args: infer P) =&gt; any ? P : never;</code> </p><p>This is a <a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#conditional-types">conditional type</a> and will select one of two possible types based on the condition (just like a ternary!)</p><p>What is a bit unusual (at least when I first started looking at these) is that the condition appears to be gratuitous. Why do we need to evaluate the condition that <code>T</code> is a function? We <em>already</em> constrained type <code>T</code> to a function in the <code>Parameters</code> declaration; the compiler would complain as soon as you attempt to type <code>T</code> to anything but a function!</p><p>The answer is that while the evaluation of the condition will always be true, conditional types can provide us with <a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#type-inference-in-conditional-types">type inference</a>. The conditional type can include <code>infer</code> to determine types when the conditional evaluation succeeds. So given that the conditional type declares parameter <code>...args</code> with <code>infer P</code>, we will return type <code>P</code> when the condition evaluates to true (which it always will!) 
This is how <code>Parameters&lt;T&gt;</code> identifies parameter types in functions.</p><p><code>ReturnType&lt;T&gt;</code> is very similar, but worth a look as well.</p><h2 id="where-s-that-guard-you-were-talking-about">Where's that guard you were talking about?</h2><p>Based on those core types, I was able to re-create my guard function with type safety in Typescript:</p><!--kg-card-begin: markdown--><pre><code class="language-ts">/**
 * Invokes a function if not undefined or null
 * @param func - a possibly undefined or null function
 * @param args - the function argument
 * @returns The function's return, or undefined if function is undefined or null
 */
export function guardedInvoke&lt;F extends ((...args: any[]) =&gt; any) | undefined | null&gt;(
    func: F,
    ...args: F extends ((...args: any[]) =&gt; any) ? Parameters&lt;F&gt; : any[]
): (F extends ((...args: any[]) =&gt; any) ? ReturnType&lt;F&gt; : never) | void {
    if (func) {
        return func(...args);
    }
}
</code></pre>
<!--kg-card-end: markdown--><p><code>guardedInvoke</code> will take a function argument and make sure it isn't a bottom value before invoking it. Using the same black magic of conditional types with type inference leveraged by <code>Parameters&lt;T&gt;</code>, we're able to get compiler safety and IDE hinting.</p><!--kg-card-begin: markdown--><pre><code class="language-ts">let addNumbers: ((...args: number[]) =&gt; number) | undefined;
let concat: ((arg1: string, arg2: string, arg3: string) =&gt; string) | undefined;

console.log(guardedInvoke(addNumbers, 1, 2)); // prints undefined
console.log(guardedInvoke(concat, &quot;foo&quot;, &quot;bar&quot;)); //prints undefined

addNumbers = (...args: number[]) =&gt;
    args.reduce((previous, current) =&gt; previous + current, 0);

concat = (a: string, b: string, c: string): string =&gt; `${a}${b}${c}`;

console.log(guardedInvoke(addNumbers, 1, 2, 3)); // prints 6
console.log(guardedInvoke(concat, &quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;)); // prints &quot;foobarbaz&quot;

guardedInvoke(addNumbers, &quot;foo&quot;); // ERROR
guardedInvoke(concat, 1, 2); // ERROR
</code></pre>
<!--kg-card-end: markdown--><p>UPDATE: Now that version 3.7 is available, optional chaining is available to us via the new <code>?.</code> operator.</p><p><code>addNumbers?.(1, 2, 3) // Will only be invoked if addNumbers exists</code>  </p>]]></content:encoded></item><item><title><![CDATA[Secure and serve your internal resources with Synology's DiskStation Reverse Proxy and Let's Encrypt]]></title><description><![CDATA[<p>A reverse proxy is a proxy that provides access to internal resources to external clients transparently. I have a couple resources on my home network that I would like to be made available to the outside world:</p><ul><li><a href="https://blog.hardiegras.myds.me/setting-up-ghost-cms-on-your-synology-nas/">this blog I just setup</a></li><li>the feed from an IP camera I installed</li></ul>]]></description><link>https://blog.hardiegras.myds.me/use-synologys-reverse-proxy-to-expose-internal-services/</link><guid isPermaLink="false">5dd99775289c5200013d4e6b</guid><category><![CDATA[Synology]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Sat, 23 Nov 2019 22:21:04 GMT</pubDate><content:encoded><![CDATA[<p>A reverse proxy is a proxy that transparently provides external clients access to internal resources. I have a couple of resources on my home network that I would like to be made available to the outside world:</p><ul><li><a href="https://blog.hardiegras.myds.me/setting-up-ghost-cms-on-your-synology-nas/">this blog I just set up</a></li><li>the feed from an IP camera I installed in the garage (I'm a bit OCD and there have been occasions where I have forgotten to close the garage door - this lets me quickly check that the garage is indeed secure from my phone)</li></ul><p>Reverse proxies can be installed on routers, but the router I have runs dd-wrt firmware, and it is a bit cumbersome to get it set up. 
Fortunately Synology includes a reverse proxy with its DiskStation operating system, and it's a breeze to set up.</p><h2 id="port-forwarding">Port forwarding</h2><p>The first step is to forward HTTPS requests arriving at the router on to your DiskStation. How to do this varies from router to router, but you want to forward HTTPS requests on port 443 to your DiskStation's IP address (mine is 192.168.1.25) on the same port. My rule looks like:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-44.png" class="kg-image"></figure><h2 id="domain-name">Domain name</h2><p>You'll need to get a domain name for a few reasons:</p><ul><li>you'll need subdomains to support routing to different resources (e.g. camera.my-domain.com and blog.my-domain.com)</li><li>you'll probably want to use a Dynamic DNS service to keep your domain pointed at your DiskStation when your ISP changes your IP address, which is typical for residential accounts</li><li>you'll need a domain to procure an SSL certificate required for secure HTTPS communication</li></ul><p>Getting a domain name is out of scope for this post, but if you're ok with not using a completely custom domain, you can piggyback off one of Synology's domains and leverage their <a href="https://www.synology.com/en-global/knowledgebase/DSM/help/DSM/AdminCenter/connection_ddns">DDNS service</a>. (At the time of writing this, I'm using hardiegras.myds.me for my domain.)</p><h2 id="configuring-the-reverse-proxy">Configuring the Reverse Proxy</h2><p>I need to create a routing table so that the proxy knows where to direct requests. I want to route:</p><ul><li>blog.hardiegras.myds.me -&gt; 192.168.1.25:2368</li><li>camera.hardiegras.myds.me -&gt; 192.168.1.178:30001</li></ul><p>"blog" and "camera" are subdomains I came up with myself; they can be anything. 
</p><p>Open Control Panel -&gt; Application Portal and open the Reverse Proxy tab:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-45.png" class="kg-image"></figure><p>Note that I can target any resource that's on my home network, not just resources on the DiskStation.</p><h2 id="ssl-certificate">SSL Certificate</h2><p>The HTTPS protocol requires an SSL certificate installed on the DiskStation. In addition to that, the certificate must be signed by a trusted authority in order to avoid your sites being served with a "Not Secure" warning. Fortunately, Synology has made this very easy to do.</p><p>Open Control Panel -&gt; Security, and open the Certificates tab:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-46.png" class="kg-image"></figure><p>Click on the Add button, select "Add a new certificate" and click Next:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-47.png" class="kg-image"></figure><p>Select "Get a certificate from Let's Encrypt" and click Next:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-48.png" class="kg-image"></figure><p>Enter your domain qualified with the subdomain from the reverse proxy's routing table:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-55.png" class="kg-image"></figure><p>Once you click Apply, Let's Encrypt will generate your certificate and it will be installed automatically onto your DiskStation:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-50.png" class="kg-image"></figure><p>Note the expiration date - the certificate will need to be renewed every few months.</p><p>The final step is to click Configure and map the certificate to the service you want 
to expose:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-51.png" class="kg-image"></figure><p>Now your DiskStation will serve HTTPS requests with a certificate from Let's Encrypt, a trusted authority. Browsers will now show a padlock when users visit, showing that everything has been secured properly:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-52.png" class="kg-image"></figure>]]></content:encoded></item><item><title><![CDATA[Setting up Ghost CMS on your Synology NAS]]></title><description><![CDATA[<!--kg-card-begin: html--><blockquote>Note that much of this setup is a modified version of what is described in Dmitry Fisenko's <a href="https://blog.fisenko.page/deploying-ghost-in-docker/">blog post</a></blockquote><!--kg-card-end: html--><p>Standing up Ghost CMS on my Synology NAS required a few dependencies</p><!--kg-card-begin: html--><ul>
    <li style="text-decoration: line-through">MariaDB 10</li>
    <li style="text-decoration: line-through">phpMyAdmin</li>
    <li>Docker</li>
</ul><!--kg-card-end: html--><blockquote>UPDATE: Turns out the docker image always uses SQLite for a database, MariaDB is not</blockquote>]]></description><link>https://blog.hardiegras.myds.me/setting-up-ghost-cms-on-your-synology-nas/</link><guid isPermaLink="false">5dd96ab20e9a310001f21200</guid><category><![CDATA[Docker]]></category><category><![CDATA[Ghost CMS]]></category><category><![CDATA[Synology]]></category><dc:creator><![CDATA[Chris Hardie]]></dc:creator><pubDate>Sat, 23 Nov 2019 17:39:21 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: html--><blockquote>Note that much of this setup is a modified version of what is described in Dmitry Fisenko's <a href="https://blog.fisenko.page/deploying-ghost-in-docker/">blog post</a></blockquote><!--kg-card-end: html--><p>Standing up Ghost CMS on my Synology NAS required a few dependencies</p><!--kg-card-begin: html--><ul>
    <li style="text-decoration: line-through">MariaDB 10</li>
    <li style="text-decoration: line-through">phpMyAdmin</li>
    <li>Docker</li>
</ul><!--kg-card-end: html--><blockquote>UPDATE: Turns out the Docker image always uses SQLite for a database; MariaDB is not required at all.</blockquote><h2 id="install-ghost">Install Ghost</h2><p>Once your database and account have been created, open Docker and download the Ghost image from the Docker registry:<br></p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-33.png" class="kg-image"></figure><p><br>Once you launch your image, you will need to specify some settings:<br></p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-34.png" class="kg-image"></figure><p><br>Under Advanced Settings I turned on auto-restart:</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-35.png" class="kg-image"></figure><p><br><br>Under Volume, we are going to specify a mount point our Ghost CMS container will use to durably persist static content. Click "Add Folder" and then create a new directory called "blog" under "/docker", and mount it to "/var/lib/ghost/content".</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-36.png" class="kg-image"></figure><p><br>Under Network, we select "Use the same network as Docker host". This will expose the container directly on the host's network.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-37.png" class="kg-image"></figure><p><br>Click on Environment and set this variable:</p><!--kg-card-begin: markdown--><pre><code>url=https://domain.com
</code></pre>
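<p>For reference, the container settings chosen through the dialogs above can also be expressed as a single Docker CLI command. This is a sketch under assumptions - the container name, content path, and domain are examples (adjust the mount path to where your "docker/blog" folder actually lives on your volume) - not the exact command DSM runs:</p>

```shell
# Sketch: the GUI choices above (auto-restart, host networking, mounted
# content folder, url environment variable) as an equivalent docker CLI
# call. Written to a script and only syntax-checked here, since this
# machine may not have Docker available.
cat > run-ghost.sh <<'EOF'
#!/bin/sh
docker run -d \
  --name ghost-blog \
  --restart always \
  --network host \
  -v /docker/blog:/var/lib/ghost/content \
  -e url=https://domain.com \
  ghost:3
EOF
sh -n run-ghost.sh && echo "syntax ok"
```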
<!--kg-card-end: markdown--><p><br><br></p><p>After you have created the container, you can check its status in the "Container" tab:<br></p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-39.png" class="kg-image"></figure><p>Double-click the container and you'll be able to view logs, which will help troubleshoot any issues. If your container launched successfully, the tail of the log will indicate Ghost's URL:<br></p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-40.png" class="kg-image"></figure><p><br><br>Append the port from the log onto the IP address of the NAS to get your base URL - in my case it's 192.168.1.25:2368</p><p>Navigating to 192.168.1.25:2368/ghost will allow you to configure Ghost.</p><figure class="kg-card kg-image-card"><img src="https://blog.hardiegras.myds.me/content/images/2019/11/image-31.png" class="kg-image"></figure>]]></content:encoded></item></channel></rss>