<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Kartik Nanda, Engineering AI</title>
	<atom:link href="https://www.kartiknanda.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.kartiknanda.com</link>
	<description>AI algorithms, how-to guides, thoughts</description>
	<lastBuildDate>Tue, 18 Aug 2020 00:36:44 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.5</generator>

<image>
	<url>https://www.kartiknanda.com/wp-content/uploads/2020/07/cropped-site_icon-2-32x32.jpg</url>
	<title>Kartik Nanda, Engineering AI</title>
	<link>https://www.kartiknanda.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Nice to meet you!</title>
		<link>https://www.kartiknanda.com/hi-how-are-you/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=hi-how-are-you</link>
					<comments>https://www.kartiknanda.com/hi-how-are-you/#respond</comments>
		
		<dc:creator><![CDATA[Kartik Nanda]]></dc:creator>
		<pubDate>Tue, 28 Jul 2020 19:51:15 +0000</pubDate>
				<category><![CDATA[Meet Me]]></category>
		<guid isPermaLink="false">http://www.kartiknanda.com/?p=1096</guid>

					<description><![CDATA[<p>Hey, I&#8217;m Kartik. Nice to meet you, and thanks for visiting my page. You will find here many of the projects that I have worked on, and, am working on. For any project, I have tried to document it as I have worked on it. I have tried to capture, and link to, what I found to be the most&#8230;</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/hi-how-are-you/">Nice to meet you!</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Hey, I&#8217;m Kartik. Nice to meet you, and thanks for visiting my page. </p>



<p>Here you will find many of the projects I have worked on, and am still working on. I have documented each project as I worked on it, and have tried to capture, and link to, what I found to be the most relevant sources &#8211; for learning and knowledge, as well as code. In most cases, the code is available on my <a href="https://github.com/kartiknan">github </a>page. </p>



<p>If you&#8217;d like to know more about me, click <a href="http://www.kartiknanda.com/what-do-you-do/">here</a>, or just drop me a line. I look forward to hearing from you &#8211; so, how are you?</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/hi-how-are-you/">Nice to meet you!</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kartiknanda.com/hi-how-are-you/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Kartik, what do you do?</title>
		<link>https://www.kartiknanda.com/what-do-you-do/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=what-do-you-do</link>
					<comments>https://www.kartiknanda.com/what-do-you-do/#respond</comments>
		
		<dc:creator><![CDATA[Kartik Nanda]]></dc:creator>
		<pubDate>Tue, 28 Jul 2020 18:02:00 +0000</pubDate>
				<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Meet Me]]></category>
		<guid isPermaLink="false">http://www.kartiknanda.com/?p=1100</guid>

					<description><![CDATA[<p>“Kartik, what do you do?” you ask. Let&#8217;s see &#8211; nowadays I design AI algorithms. I have designed Integrated Circuits and solar powered irrigation pumps in India, have founded a company, have some US patents. But what do I do? The 42nd time someone asked me this, I went deeper. As an&#160;undergraduate student at the Indian Institute of Technology (IIT),&#8230;</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/what-do-you-do/">Kartik, what do you do?</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>“Kartik, what do you do?” you ask. Let&#8217;s see &#8211; nowadays I design AI algorithms. I have designed integrated circuits and solar-powered irrigation pumps in India, founded a company, and hold some <a href="https://patents.justia.com/inventor/kartik-nanda">US patents</a>. But what do I do? The 42nd time someone asked me this, I went deeper.</p>



<p>As an&nbsp;undergraduate student at the <a href="http://www.iitk.ac.in/">Indian Institute of Technology (IIT), Kanpur</a>, I remember this sinking feeling at the end of every semester – I don’t see myself doing that for the rest of my life! Four years and something like forty courses, and at the end of it, I still had no idea what I wanted to do – every course had been a struggle.</p>



<p>So I did the only logical thing – I enrolled in a master’s. It wasn’t until the second semester of <a href="https://cse.nd.edu/">graduate school at Notre Dame</a> that I found a course that I just “got” – Algorithms. Of course, that realization took many more happy years to sink in.</p>



<p>That interest in algorithms took me to signal processing early in my career. I designed delta-sigma ADCs (or sigma-delta, if you must!), digital filters, and DSPs. Since these don’t exist in thin air, what I was actually doing was building mixed-signal ICs. I started to think of myself as an IC guy (really! <a href="https://www.researchgate.net/profile/Samares_Kar">Prof. Kar’s</a> course!!).&nbsp;</p>



<p>But algorithms weren&#8217;t done with me yet – I moved on to solar, or more specifically, generating electricity from the Sun. I went from designing ICs to coding algorithms for irrigation pumps that run directly from a solar PV panel. Solar, though, is more than just a product for me. It has enabled me to return to India (where I was born), to rural India – what a learning experience that has been. I am now also more conscious of the environment, and of how we live our lives.&nbsp;</p>



<p>My focus nowadays is&nbsp;Artificial Intelligence (AI), specifically deep learning using neural nets. Projects include looking for anomalies in audio signals, audio keyword recognition, and identifying anomalies in images taken by a drone. All of these are algorithms &#8211; AI is but another tool. A powerful tool, though, one that makes it possible to do things we couldn&#8217;t do earlier. I believe that AI complements human intelligence rather than replacing it &#8211; it does things that humans cannot do, or find too painful (expensive, tedious) to do. </p>



<p>So, what do I do? – I design algorithms and build products around them. That is me, in a nutshell. I call Austin, Texas home, am married and have two wonderful daughters. I like the outdoors &#8211; in the pre-Corona days you could find me on Lady Bird Lake at least three evenings every week. </p>



<p>OK, what do <em>you </em>do? Can I help you with your next thing? Do write &#8230;</p>



<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/what-do-you-do/">Kartik, what do you do?</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kartiknanda.com/what-do-you-do/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Setting up the Pi is easy</title>
		<link>https://www.kartiknanda.com/raspberry-pi-setup/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=raspberry-pi-setup</link>
					<comments>https://www.kartiknanda.com/raspberry-pi-setup/#respond</comments>
		
		<dc:creator><![CDATA[Kartik Nanda]]></dc:creator>
		<pubDate>Fri, 24 Jul 2020 01:32:30 +0000</pubDate>
				<category><![CDATA[AI on Pi]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[raspberry pi]]></category>
		<guid isPermaLink="false">http://www.kartiknanda.com/?p=1075</guid>

					<description><![CDATA[<p>This is the first in a series of posts on how to run AI on the Raspberry Pi (AI on Pi). The first step &#8211; setting up the Raspberry Pi. The final goal is to use the Pi to run a deep learning application. This could be vision related – example, recognizing an event from images/video feed, using Convolutional Neural&#8230;</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/raspberry-pi-setup/">Setting up the Pi is easy</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This is the first in a series of posts on how to run AI on the Raspberry Pi (AI on Pi). The first step &#8211; setting up the Raspberry Pi. The final goal is to use the Pi to run a deep learning application. This could be vision-related – for example, recognizing an event from an image/video feed using Convolutional Neural Nets (CNNs). It could also be audio-related – say, NLP using RNNs – but that is for a later time.</p>



<h2>The main goals:</h2>



<ul><li>Capture images/video using a camera (sensor) connected to the Pi</li><li>Run the AI algorithm on the Pi itself</li><li>Communicate the result to a “Hub” over Bluetooth, WiFi, etc. The Hub could be another computer, an IoT Hub, or the Cloud.</li></ul>



<p>Since I have never worked with the Raspberry Pi previously, let’s start at the very beginning.</p>



<h3>Hardware:</h3>



<ul><li>Raspberry Pi 4, 4GB version (bought the <a href="https://www.canakit.com/raspberry-pi-4-starter-kit.html">Canakit Raspberry Pi 4 Starter Kit</a>)</li></ul>



<ul><li>OV5647 5.0MP Camera Module (bought at the <a href="https://dlscorp.com/shop/ov5647-5-0-mp-raspberry-pi-compatible-camera-modules/">dlscorp.com</a> site)<br>The OV5647 module is the older version. There is a newer <a href="https://www.raspberrypi.org/products/camera-module-v2/">official camera module</a>, but I decided to go with the older version for two reasons – cost, and 5MP (vs 8MP in V2) is plenty for AI applications. Important details – it needs to be a CSI module, and it comes with a replaceable M12 lens mount.</li></ul>



<ul><li>Camera lens (bought different lenses from dlscorp)</li></ul>



<ul><li>Monitor – used an old monitor I had from years ago. The Pi connects to a monitor using a micro-HDMI port. The kit includes a micro-HDMI to HDMI cable, but I needed to buy a micro-HDMI to VGA adapter for my monitor.</li></ul>



<ul><li>Keyboard and mouse – used an old keyboard and mouse I had lying around</li></ul>



<h3>Software:</h3>



<ul><li>Used the standard NOOBS (preloaded on the micro SD card), and the full Raspbian install. Nothing fancy, yet.</li></ul>



<h2>Step 1: Connect the System</h2>



<div class="wp-block-image"><figure class="alignright size-medium is-resized"><img loading="lazy" src="http://www.kartiknanda.com/wp-content/uploads/2020/07/raspberry_pi_setup-225x300.jpg" alt="Raspberry Pi setup with a camera facing out a window" class="wp-image-1076" width="243" height="331"/></figure></div>



<p>There are many excellent walkthroughs – see <a href="https://projects.raspberrypi.org/en/projects/raspberry-pi-getting-started">here</a> or <a href="https://www.youtube.com/watch?v=BpJCAafw2qE">here</a>. The basics are simple enough – connect the display, keyboard, mouse, and power, and follow the onscreen prompts. There are a couple of things I would like to point out though. One, set up SSH and VNC access to the Pi, so you can connect to it from a remote computer and don’t need the dedicated mouse, keyboard, and monitor. This will be especially useful when deploying the application. Two, <a href="https://www.raspberrypi.org/documentation/raspbian/updating.md">update the software</a> using “sudo apt-get update” and “sudo apt-get upgrade”.</p>
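<p>One way to do both from the terminal &#8211; a sketch, assuming a recent Raspbian whose raspi-config supports non-interactive mode:</p>

```shell
# Enable SSH and VNC without the GUI (0 means "enabled" here)
sudo raspi-config nonint do_ssh 0
sudo raspi-config nonint do_vnc 0

# Update the package lists, then upgrade installed packages
sudo apt-get update
sudo apt-get upgrade -y
```

You should then be able to connect from another machine on the network with something like `ssh pi@raspberrypi.local` (the default hostname).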



<p>Once the system is operational, do take the time to explore, browse the web etc. Marvel at a simple yet very complete computer.</p>



<h2>Step 2: Python Setup</h2>



<p>There are other languages, of course, but Python is the language of choice. Python2 and Python3 come pre-installed, with a bunch of packages. The first thing is setting up a virtual environment – extremely important given the stand-alone nature of the intended application: it should not break because of a system update a couple of years out. I used <a href="https://virtualenv.pypa.io/en/latest/">virtualenv</a> – <a href="https://www.youtube.com/watch?v=N5vscPTWKOk">here</a> is a good intro.</p>
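<p>For reference, the basic flow looks something like this &#8211; a sketch, where the environment path and the example package are my own arbitrary choices:</p>

```shell
# Install virtualenv, then create an isolated environment for the project
python3 -m pip install virtualenv
python3 -m virtualenv ~/envs/ai-on-pi   # the path/name is arbitrary

# Activate it; python and pip now point inside the environment
source ~/envs/ai-on-pi/bin/activate
python -m pip install numpy             # packages now install locally
deactivate                              # leave the environment when done
```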



<p>I spent some time (a little) trying to research the “best” editor, but realized that the Pi is not necessarily where I will develop the code. That can be done on a laptop or in the cloud. So, for now I am using Thonny, which came pre-installed. I have not yet written a lot of Python on the Pi; we’ll see how it goes over the next few days and weeks – I might yet change my mind. </p>



<p>Try out a simple program to ensure that Python is up and running. In the next post, the camera setup.</p>
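<p>Something as small as this is enough &#8211; a minimal sketch, nothing Pi-specific about it:</p>

```python
# A minimal sanity check: confirm the interpreter version and that
# basic computation behaves as expected before moving on.
import sys

def sanity_check():
    """Return the interpreter version and a small computed list."""
    major, minor = sys.version_info[:2]
    squares = [n * n for n in range(5)]
    return major, minor, squares

if __name__ == "__main__":
    major, minor, squares = sanity_check()
    print(f"Python {major}.{minor} is up and running")
    print("squares:", squares)
```

If the version and the list of squares print without errors, Python is good to go.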
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/raspberry-pi-setup/">Setting up the Pi is easy</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kartiknanda.com/raspberry-pi-setup/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Edge or Cloud &#8211; Where should AI live?</title>
		<link>https://www.kartiknanda.com/edge-or-cloud-where-should-ai-live/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=edge-or-cloud-where-should-ai-live</link>
					<comments>https://www.kartiknanda.com/edge-or-cloud-where-should-ai-live/#respond</comments>
		
		<dc:creator><![CDATA[Kartik Nanda]]></dc:creator>
		<pubDate>Sat, 19 Oct 2019 18:36:48 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Highlights]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cloud AI]]></category>
		<category><![CDATA[Edge IoT]]></category>
		<category><![CDATA[Edge or Cloud]]></category>
		<category><![CDATA[IoT]]></category>
		<guid isPermaLink="false">http://www.kartiknanda.com/?p=107</guid>

					<description><![CDATA[<p>Where should the AI live? Does it have to be in the Cloud? Or does it live at the Edge? Is that even an option (sometimes it's not)? Is there a hybrid solution, that is, bits and pieces living at the Edge and in the Cloud? This article examines various criteria that have an impact on this decision</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/edge-or-cloud-where-should-ai-live/">Edge or Cloud &#8211; Where should AI live?</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This is a question that has come up many times during AI product discussions &#8211; where will the AI live? Does it have to be in the Cloud? Or does it live at the Edge? Is that even an option (sometimes it&#8217;s not)? Is there a hybrid solution, that is, bits and pieces living at the Edge <em>and </em>in the Cloud? </p>



<p>There is not much online that compares these today, partly because it&#8217;s still early days. There was a panel discussion on this topic at the Texas Wireless Summit (<a href="https://www.youtube.com/watch?v=yorYWRrl4Jk">link to the YouTube video</a>). The excellent panelists covered some of the aspects I discuss here.</p>



<p>The table below summarizes some key considerations and suggests which is better &#8211; the Cloud or the Edge. Below we examine each and present our rationale. These are not mathematical proofs, but rather intuitive arguments, with some examples. We consider two examples. The first is facial recognition on mobile devices &#8211; we can unlock our phone or laptop just by looking at it. The device uses its camera to &#8220;see&#8221; us, and uses an AI model running on the device itself to recognize the face and unlock. Why does this model sit at the Edge?</p>



<p>The second example is a drone searching for forest fires. Consider two possible implementations. Option 1 &#8211; the images are streamed to the Cloud, where the AI model looks for the fire. Option 2 &#8211; the AI sits on the drone itself, and the drone only communicates the location of the fire. </p>



<figure class="wp-block-image"><img src="http://www.kartiknanda.com/wp-content/uploads/2019/10/AI_Edge_or_Cloud_Blog_Tbl1-1024x265.jpg" alt="Comparing AI deployment at the Edge Vs in the Cloud" class="wp-image-110"/><figcaption>Where should AI live &#8211; at the Edge or in the Cloud? Here we look at various criteria, and suggest if the Edge or the Cloud comes out ahead</figcaption></figure>



<h2>Accuracy</h2>



<p>Accuracy is a simple one. By &#8220;accuracy&#8221; we mean the accuracy of the AI model&#8217;s output. The Cloud has more resources, and in general, that translates into higher accuracy. What are these resources? Compute and memory, so a bigger, more complex model. But it could also be access to other data (sensor fusion, or data from other deployments), historical data, or other information that is not available at the Edge.</p>



<p>A drone with a model in the Cloud is likely better at detecting fires.</p>



<h2>Time</h2>



<p>Time &#8211; how long before we get the result &#8211; is slightly more complicated. For the same computation, the Edge will be faster than the Cloud. Why? The Edge is where the data is generated. If the model sits in the Cloud, the data has to be uploaded, run through the model, and the results downloaded back to the Edge. However, if the results do not need to go back to the Edge and instead stay in the Cloud, the time might be about the same.</p>
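<p>A back-of-envelope sketch of this trade-off &#8211; all of the numbers below are illustrative assumptions, not measurements:</p>

```python
# Compare end-to-end time for Cloud vs Edge inference.
def cloud_round_trip_ms(upload_ms, inference_ms, download_ms):
    """Data goes up, is run through the model, and the result comes back."""
    return upload_ms + inference_ms + download_ms

def edge_latency_ms(inference_ms):
    """At the Edge there is no network leg, only (slower) local inference."""
    return inference_ms

# Suppose the Cloud model infers in 20 ms but the network adds 80 ms each
# way, while the slower Edge device needs 150 ms for the same task.
cloud = cloud_round_trip_ms(upload_ms=80, inference_ms=20, download_ms=80)
edge = edge_latency_ms(150)
print(f"cloud: {cloud} ms, edge: {edge} ms")
```

With these assumed numbers the Edge comes out ahead, even though its raw inference is slower, because the result is needed back at the Edge.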



<p>For the phone unlock example, the Edge is a better place for the AI. The action &#8211; unlock the phone &#8211; is at the Edge. </p>



<h2>Reliability</h2>



<p>Sending data to the Cloud has a &#8220;variable&#8221; time aspect as well. Communication depends on many factors, like network availability, signal strength, data routing, and traffic. Some are controllable; others are harder to control. In the worst case, what if the data does not get through &#8211; is that an issue the system can recover from? </p>



<p>As an example, if the face recognition unlock on the mobile phone ran in the Cloud, it likely wouldn&#8217;t be a feature! </p>



<h2>Power</h2>



<p>There is a simple rule &#8211; the farther the data has to travel, the more power it takes. The most power-hungry part of an IoT (Internet of Things) device is the radio (e.g., see <a href="http://diposit.ub.edu/dspace/bitstream/2445/97601/1/660493.pdf">Modeling Power Consumption for IoT devices</a>). The energy consumed of course depends on the amount of data sent. As an example, consider the drone looking for a fire. One option is to send a video stream and process it in the Cloud. Another is for the AI model to sit on the drone, and for the drone to send only the location of the fire. From the power perspective, the latter will be far better.</p>
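<p>To make the drone example concrete, here is a rough calculation &#8211; the per-byte radio cost and the data sizes are assumed, illustrative numbers only:</p>

```python
# Rough radio-energy comparison for the drone example.
def radio_energy_joules(bytes_sent, joules_per_byte):
    """Radio energy grows with the amount of data transmitted."""
    return bytes_sent * joules_per_byte

JOULES_PER_BYTE = 2e-6    # assumed cost of sending one byte over the radio

video_bytes = 5_000_000   # ~5 MB of streamed video
location_bytes = 32       # a GPS fix plus a little framing

stream = radio_energy_joules(video_bytes, JOULES_PER_BYTE)
report = radio_energy_joules(location_bytes, JOULES_PER_BYTE)
print(f"streaming: {stream:.2f} J, location only: {report:.2e} J")
print(f"{video_bytes // location_bytes}x less data sent")
```

Whatever the exact per-byte cost, the ratio is what matters &#8211; the on-drone model sends orders of magnitude less data.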



<h2>Cost</h2>



<p>Closely tied to power is cost. This is not obvious, because cost can include many things. However, purely from a cost-of-data angle, streaming the video is more costly than processing it at the Edge (on the drone, in the example above). </p>



<p>The broader question about cost is harder. One aspect is one-time costs vs recurring costs. While the Edge device is more of a one-time cost, the Cloud presents a recurring cost. That&#8217;s only one aspect though, and cost has to be evaluated on a product-by-product basis. </p>



<h2>Security</h2>



<p>There are two ways to look at security. One is the security of the data. It is most secure, and most easily secured, if it never leaves the Edge. Once the data is on the internet, or in the Cloud, it is only as secure as the encryption/protocols. </p>



<p>The second is securing the Intellectual Property (IP) &#8211; the AI model, for instance. This is more secure if it&#8217;s in the Cloud. </p>



<h2>Privacy</h2>



<p>While &#8220;security&#8221; looks at the problem from the Provider&#8217;s perspective (the company building the product), &#8220;privacy&#8221; looks at it from the Consumer&#8217;s perspective (the person or entity using the product). It is a major concern, and it is becoming more important every day. Imagine if, every time face recognition AI unlocks my phone, my image were uploaded to the Cloud. Then another AI algorithm uses it to &#8220;read&#8221; my emotional state and sends suggestions (ads). That is easy with Cloud-based AI, but much harder with Edge-based AI. </p>



<p>The easiest way to keep our personal data personal is to keep it on our devices, and not send it to the Cloud. </p>



<p>So there&#8217;s the short list of considerations to keep in mind while planning your AI deployment and figuring out where it belongs &#8211; the Cloud or the Edge. It is not a comprehensive list; there are many other considerations &#8211; time-to-market, managing deployment, the cost of building the solution, technology, etc. &#8211; that are not covered here. They make more sense, though, when examined within the scope of a specific AI project. Feel free to <a href="/contact-us">reach out</a> with thoughts, or if you need help getting started on your project.</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/edge-or-cloud-where-should-ai-live/">Edge or Cloud &#8211; Where should AI live?</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kartiknanda.com/edge-or-cloud-where-should-ai-live/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI and IoT &#8211; Marriage made in the Cloud</title>
		<link>https://www.kartiknanda.com/ai-and-iot/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ai-and-iot</link>
					<comments>https://www.kartiknanda.com/ai-and-iot/#respond</comments>
		
		<dc:creator><![CDATA[Kartik Nanda]]></dc:creator>
		<pubDate>Mon, 22 Oct 2018 18:42:05 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Distributed AI]]></category>
		<category><![CDATA[Edge IoT]]></category>
		<category><![CDATA[Internet of Things]]></category>
		<category><![CDATA[smart home]]></category>
		<category><![CDATA[Solar Monitoring]]></category>
		<category><![CDATA[Solar PhotoVoltaic]]></category>
		<guid isPermaLink="false">http://www.kartiknanda.com/?p=70</guid>

					<description><![CDATA[<p>Whether you realize this or not, we are so connected to the “Cloud”. From my home thermostat, and doorbell, to now the microwave – everything connects to the internet. It&#8217;s not limited to the home either. From as big as the electricity grid to entire factory floors, to as small as your wrist health-monitor – everything is (or will be)&#8230;</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/ai-and-iot/">AI and IoT &#8211; Marriage made in the Cloud</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Whether you realize it or not, we are deeply connected to the “Cloud”. From my home thermostat and doorbell, to now the microwave – everything connects to the internet. It&#8217;s not limited to the home either. From as big as the electricity grid and entire factory floors, to as small as your wrist health-monitor – everything is (or will be) connected.<br></p>



<p>Two things make this all possible – the Internet-of-Things (IoT) and Artificial Intelligence (AI). IoT is the physical interface, the senses – the internet senses the World through IoT devices. In simpler terms, they collect the data – a camera that takes pictures, or a microphone that listens. Or it could be a simpler sensor, like a thermometer (measuring temperature).<br></p>



<p>But merely collecting all this data is not useful. (The quantity of this data is <a href="https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/#43db8a7060ba">huge</a> and increasing.) Something has to interpret this data and convert it into actionable <em>information</em>. That is where Artificial Intelligence (AI) steps in.</p>



<p>Today the AI lives in the Cloud (for the most part). The IoT collects and sends data to the Cloud, where the AI interprets it and decides the outcome. Let&#8217;s see how this works with examples.</p>



<h2>AI and IoT – a marriage made in the Cloud</h2>



<p>In the context of IoT, the goal of AI is:</p>



<ul><li>to learn a model from the data, and</li><li>to use the model to interpret new data and provide an actionable outcome (a decision).</li></ul>



<p>We’ll consider two examples – a voice assistant, like Alexa or Siri, and a smart front door. The Voice Assistant understands human language. For the smart front door, the goal is that the door should recognize me and unlock itself, without me having to use a key.&nbsp;</p>



<h3>Example: The Voice Assistant</h3>



<p>The Voice Assistant (VA) responds to spoken commands. It uses a microphone (the IoT sensor) to listen. The AI makes sense of the sounds &#8211; the words, sentences, and context. It then provides the information or acts on the command (for example, playing specific music or providing driving directions).<br></p>



<div class="wp-block-image"><figure class="alignright size-medium"><img loading="lazy" width="300" height="187" src="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1-300x187.jpg" alt="Key steps in developing and using an AI solution" class="wp-image-1248" srcset="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1-300x187.jpg 300w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1-1024x639.jpg 1024w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1-768x479.jpg 768w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1-1536x958.jpg 1536w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1-370x231.jpg 370w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1-760x474.jpg 760w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig1.jpg 1672w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption>AI and IoT &#8211; the green boxes are IoT sensors</figcaption></figure></div>



<p>The figure shows how this works. The IoT gathers the data (1) – in the case of the Voice Assistant, the data is sound. This data is then sent to the Cloud (2), where it trains the AI (3) to build a model. The model is then deployed (4). <br></p>



<p>The model for the VA is complicated. It needs to understand an entire language, spoken with many different accents. Once trained, though, the same model applies to all the VAs in the network. </p>



<p>For the Voice Assistant, the model (AI) both learns and is deployed in the Cloud itself. <br>The AI receives and interprets all spoken words in the Cloud (Cloud-AI). Why keep it in the Cloud? Mostly because it is a large model. It is also being constantly updated, and keeping it in the Cloud means only one copy needs to be updated. The main issue is the lack of privacy &#8211; anything I say, the VA is listening to and sending to the Cloud, where it may be interpreted.<br></p>



<h3>Example: The Smart Front Door</h3>



<p>The goal with the smart Front Door is that it recognizes me and responds by unlocking the door for me, and only me. It needs to sense my presence &#8211; let’s say it uses a camera (the IoT sensor). AI provides the intelligence. It learns what I look like, and then uses that model to recognize me and open the door.</p>



<p>The Front Door system design may follow the same approach as the VA. The camera (IoT) gathers the data, which is sent to the Cloud. There the AI learns to recognize me. But here things become different. The AI has to recognize my face for my door, and my neighbor&#8217;s face for their door. That means the AI trains differently for different doors.</p>



<p class="has-text-align-center has-very-light-gray-background-color has-background">This is a critical difference &#8211; in the VA case, the same AI (one trained model) could serve all IoT devices in the network. But in the Front Door example, every door has a unique model.</p>



<p>So for the Front Door, it might make sense to instead deploy the AI on the IoT device itself (the IoT Edge) rather than in the Cloud. The Front Door only needs to decide Me-vs-Not-Me. So while the model may train in the Cloud (especially the Not-Me part), it is deployed at the IoT Edge.<br></p>



<p>There are other good reasons for the different approaches (Cloud AI vs Edge AI). Two of the main ones are privacy and the size of the model. Large, typically complicated models cannot fit on resource-limited IoT devices. But I also do not want a digital representation of my face stored in the Cloud.</p>



<h2>Heterogenous IoT Networks</h2>



<p>We looked at two examples above &#8211; the VA and the Front Door &#8211; both using single IoT devices. In general, though, IoT networks consist of many devices. They are also not always homogeneous, that is, not all the attached systems are identical. They may have some common features, but some unique ones as well. Think of a factory floor, or even a car &#8211; both have many different types of sensors (IoT sense devices). <br></p>



<p>This complicates the AI. It is no longer possible to train the AI on one IoT device and map the model to all the IoT devices in the network.<br></p>



<p>To illustrate this, imagine my front door now has a fingerprint sensor in addition to the camera. The AI for the camera cannot map to the fingerprint sensor. Yet the two have to work together on the same problem, namely to recognize me and unlock the door.</p>



<div class="wp-block-image"><figure class="alignleft size-aldo-thumb-med"><img loading="lazy" width="370" height="208" src="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig2-370x208.png" alt="IoT network devices are not always identical. They may have common and unique features" class="wp-image-1252" srcset="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig2-370x208.png 370w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig2-270x152.png 270w" sizes="(max-width: 370px) 100vw, 370px" /></figure></div>



<p>The figure shows a general scenario &#8211; it depicts two different IoT sensors. Both have some common features and some unique ones. One solution is to learn a superset of all possible features in the Cloud. The result is a large, complicated model. This either needs far more resources in the IoT edge devices, or compromises on privacy by keeping the AI in the Cloud (figure below, left).</p>



<h2>A Distributed AI Model (Edge-AI)</h2>



<div class="wp-block-image"><figure class="aligncenter size-large"><img loading="lazy" width="1024" height="410" src="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4-1024x410.png" alt="AI and the Internet of Things: Cloud-AI Vs Distributed AI" class="wp-image-1254" srcset="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4-1024x410.png 1024w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4-300x120.png 300w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4-768x308.png 768w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4-1536x616.png 1536w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4-370x148.png 370w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4-760x305.png 760w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig4.png 1881w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption>AI in the Cloud Vs Distributed AI</figcaption></figure></div>



<p>Another solution is a Distributed AI model. This implements the AI on the network (above figure, right) &#8211; which includes the Cloud and the different IoT devices. Parts of the solution &#8211; the common features &#8211; are mapped to the Cloud. The unique ones are modeled and learnt locally on the IoT sensor devices. <br></p>
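<p>The idea of splitting one model between a shared part and device-specific parts can be sketched in a few lines of Python. The feature names and weights below are purely hypothetical, chosen only to illustrate the split:</p>

```python
# Sketch: one model split across cloud and edge (illustrative only;
# all feature names and weights here are hypothetical).

# Features common to every device are handled by one shared (cloud) model.
CLOUD_WEIGHTS = {"temperature": 0.6, "humidity": -0.2}

# Each device keeps a small local model for its unique features.
DEVICE_A_WEIGHTS = {"vibration": 0.9}
DEVICE_B_WEIGHTS = {"light_level": 0.4}

def predict(reading, local_weights):
    """Combine the shared cloud model with a device's local model."""
    # Contribution from the features every device shares.
    score = sum(CLOUD_WEIGHTS[k] * v for k, v in reading.items()
                if k in CLOUD_WEIGHTS)
    # Contribution from this device's unique features, computed locally.
    score += sum(local_weights[k] * v for k, v in reading.items()
                 if k in local_weights)
    return score

reading_a = {"temperature": 20.0, "humidity": 0.5, "vibration": 1.2}
print(predict(reading_a, DEVICE_A_WEIGHTS))
```

<p>Each device evaluates (and could learn) only its own small local model, while the shared weights live once, in the Cloud.</p>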



<p>This provides both benefits &#8211; privacy and resource management. Since the model is local to the IoT edge device, private data is neither sent to nor stored in the cloud. It also makes better use of the available resources. The IoT devices store only their local models, not the entire system&#8217;s model. The result &#8211; a smaller footprint at each IoT device. Much of the AI now resides on the network devices, which reduces the computation and data requirements in the cloud. <br></p>



<p>This is a Distributed AI solution (or Edge-AI). As IoT networks become more prevalent, we will see more AI move to the Edge. Let&#8217;s illustrate with an example next.</p>



<h2>Example &#8211; Solar Monitoring with Distributed AI</h2>



<p>The past few years have seen tremendous growth in rooftop solar plants. Many residences today generate their own electricity from the Sun. Here&#8217;s a question that comes up all too often from owners:<br></p>



<blockquote class="wp-block-quote"><p>&#8220;I have installed a rooftop solar plant for electricity. How do I know if the energy output from my plant is actually correct, as designed?&#8221;<br></p><cite>Solar Customer</cite></blockquote>



<p>Data there is plenty of &#8211; solar plants report their energy output daily. The problem is that this number changes from day to day, driven by many factors such as the weather. What is lacking (and what the owner is looking for) is information. Is the output <em>correct</em>?</p>



<h3>AI and Solar Monitoring</h3>



<p class="has-text-align-center has-very-light-gray-background-color has-background">The goal with AI is to learn a model of the plant, and to monitor the plant by comparing its actual output against the predicted output.<br></p>



<p>One solution is to model the plant in the Cloud (the Cloud-AI solution). The first step is to bring all the data into the cloud. The second is to learn a model of the plant, including plant-specific details such as the shade profile. The third is to predict the output under the given environmental conditions. The final step is to compare the plant&#8217;s actual output with the predicted output. The result is actionable information &#8211; is the plant&#8217;s output good or not?</p>
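<p>The compare-and-flag step at the end can be sketched as below. The linear plant model, the 0.8 efficiency factor, and the 10% tolerance are assumptions for illustration, not values from a real plant:</p>

```python
# Sketch: predict the plant's output under given conditions, then flag
# days where the actual output falls short. All constants are illustrative.

def predicted_output(irradiance_kwh_m2, plant_capacity_kw, efficiency=0.8):
    """Very simplified plant model: output scales with irradiance."""
    return irradiance_kwh_m2 * plant_capacity_kw * efficiency

def check_output(actual_kwh, irradiance_kwh_m2, plant_capacity_kw,
                 tolerance=0.10):
    """Return True if the actual output is within tolerance of predicted."""
    expected = predicted_output(irradiance_kwh_m2, plant_capacity_kw)
    deviation = (actual_kwh - expected) / expected
    return deviation >= -tolerance

# Expected output is 5.0 * 5.0 * 0.8 = 20 kWh.
print(check_output(actual_kwh=18.0, irradiance_kwh_m2=5.0,
                   plant_capacity_kw=5.0))  # 10% low -> True (healthy)
print(check_output(actual_kwh=14.0, irradiance_kwh_m2=5.0,
                   plant_capacity_kw=5.0))  # 30% low -> False (flag it)
```

<p>A real system would replace <code>predicted_output</code> with a learnt model, but the monitoring logic &#8211; predicted versus actual, with a tolerance &#8211; stays the same.</p>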



<p>This solution works, but has two issues. The first is privacy &#8211; all the data goes to the Cloud. The second is scalability. Imagine learning the models for hundreds of thousands of small residential solar plants in the Cloud. A better solution would be to learn the models locally, in the IoT devices themselves.<br></p>



<div class="wp-block-image"><figure class="aligncenter size-aldo-thumb-masonry-big"><img loading="lazy" width="760" height="334" src="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5-760x334.png" alt="Using Distributed AI (Edge-AI) for Solar Monitoring" class="wp-image-1255" srcset="https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5-760x334.png 760w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5-300x132.png 300w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5-1024x450.png 1024w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5-768x337.png 768w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5-1536x675.png 1536w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5-370x163.png 370w, https://www.kartiknanda.com/wp-content/uploads/2020/08/Edge_AI_Blog_Fig5.png 1987w" sizes="(max-width: 760px) 100vw, 760px" /><figcaption>Solar Monitoring &#8211; The Distributed AI approach</figcaption></figure></div>



<p>Edge-AI solves both of these issues. The AI solution maps onto the entire network (see figure). Elements common to all the plants &#8211; the weather, for example &#8211; are modeled in the Cloud. The plant-specific modeling, however, is local. <br></p>
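<p>The edge-side learning step might look like the sketch below: the device derives a crude shade profile (a per-hour correction factor) from its own history, while the irradiance figures are assumed to arrive from a shared weather model in the Cloud. All names and numbers are illustrative:</p>

```python
# Sketch: a plant-local learning step. The device learns its own shade
# profile from its own history; that data never leaves the device.
# The irradiance values are assumed to come from a shared cloud weather model.

def learn_shade_profile(history):
    """history: list of (hour, irradiance, actual_output) tuples.

    Returns the average actual/ideal output ratio per hour - a crude,
    locally learnt shade profile (1.0 = unshaded, lower = shaded)."""
    sums, counts = {}, {}
    for hour, irradiance, actual in history:
        ideal = irradiance * 1.0  # assume 1 unit of output per unit irradiance
        sums[hour] = sums.get(hour, 0.0) + actual / ideal
        counts[hour] = counts.get(hour, 0) + 1
    return {h: sums[h] / counts[h] for h in sums}

history = [(9, 2.0, 1.0), (9, 2.2, 1.1), (12, 5.0, 5.0)]
profile = learn_shade_profile(history)
print(profile)  # hour 9 is shaded (ratio 0.5), hour 12 is not (ratio 1.0)
```

<p>Only the small learnt profile matters to this plant; the Cloud never needs to see the raw readings behind it.</p>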



<p>The result is better on many fronts. One, the distributed model uses resources more efficiently by processing data locally. Data privacy is much better, since all plant modeling is local. And the network scales easily &#8211; adding a new plant is simple, since every plant learns its own model, and only its own model.&nbsp;</p>



<h2>Closing Thoughts</h2>



<p>Products and solutions based on networks of IoT sensors are set to become prevalent. Applications include the electricity smart grid, the smart home, and individualized health care. AI will be an integral part of any such product. And while Cloud-based solutions dominate the AI landscape today, such AI-IoT network products will need new, distributed topologies. <br></p>



<p>Edge-AI is a distributed AI topology. It maps the AI onto the entire network, not just the Cloud. This provides two distinct advantages over AI in the Cloud. The first is privacy, with models that are learnt locally. The second is a lean implementation that makes optimal use of the available resources. <br></p>



<p>Distributed AI will drive the next wave of Intelligent Products.</p>
<p>The post <a rel="nofollow" href="https://www.kartiknanda.com/ai-and-iot/">AI and IoT &#8211; Marriage made in the Cloud</a> appeared first on <a rel="nofollow" href="https://www.kartiknanda.com">Kartik Nanda, Engineering AI</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.kartiknanda.com/ai-and-iot/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
