TinyML is an interesting and important topic to learn, and it is enabling the rapid adoption of machine learning across IoT devices.
We now live in a world where Machine Learning models are everywhere. You probably use these models more than you realise during the day. Machine learning models power everyday tasks such as browsing social media, snapping a picture, and checking the weather.
We all know how time-consuming it is to train these models. Running inference on them is typically computationally costly as well. We need computing platforms that can keep up with the rate at which machine learning services are being used. As a result, the majority of these models run in massive data centres on clusters of CPUs and GPUs (even TPUs in some cases).
You want the machine learning magic to happen instantaneously when you capture a photo. You don’t want to wait for the image to be transmitted to a data centre, processed, and then returned to you. You want the machine learning model to operate locally in this situation.
You want your devices to answer promptly when you say “Alexa” or “OK, Google.” Waiting for the gadget to send your voice to a server, which processes it and extracts the information, takes time and hurts the user experience. Again, you want the machine learning model to operate locally in this situation.
TinyML is a branch of Machine Learning and Embedded Systems that investigates the sorts of models that can be run on small, low-power devices such as microcontrollers. It enables model inference at the edge to be low-latency, low-power, and low-bandwidth. While a typical consumer CPU uses between 65 and 85 watts, and a typical consumer GPU consumes between 200 and 500 watts, a typical microcontroller consumes milliwatts or microwatts. That is around a thousand times less power. Because of their low power consumption, TinyML devices may operate unplugged on batteries for weeks, months, or even years while executing ML applications on the edge.
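The power gap above can be put in concrete numbers; here is a minimal back-of-the-envelope sketch using the wattage ranges quoted above (the battery capacity and duty-cycled average draw are hypothetical example figures, not measurements):

```python
# Rough power comparison between a consumer CPU and a microcontroller,
# using the ballpark figures quoted above.

CPU_WATTS = 75.0   # middle of the 65-85 W consumer CPU range
MCU_WATTS = 0.075  # 75 mW while actively running inference

print(CPU_WATTS / MCU_WATTS)  # -> 1000.0, the "thousand times" gap

# Hypothetical battery: 2000 mAh at 3.7 V stores 2.0 * 3.7 = 7.4 Wh.
battery_wh = 2.0 * 3.7

# Duty-cycled average draw of 1 mW (mostly asleep, waking to run the model):
avg_watts = 0.001
days = battery_wh / avg_watts / 24
print(round(days))  # about 308 days, i.e. months of unplugged operation
```

This is why duty-cycling matters: the active-mode figure alone would drain the same battery in a few days, but a mostly-sleeping device stretches it to months.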
Advantages of TinyML
- Low Latency: Since the model runs on the edge, the data doesn’t have to be sent to a server to run inference. This reduces the latency of the output.
- Low Power Consumption: As we discussed before, microcontrollers consume very little power. This enables them to run for a very long time without being recharged.
- Low Bandwidth: As the data doesn’t have to be sent to a server constantly, less internet bandwidth is used.
- Privacy: Since the model is running on the edge, your data is not stored on any servers.
How to start with TinyML?
- Hardware: The Arduino Nano 33 BLE Sense is the suggested hardware for deploying Machine Learning models at the edge. It contains a 32-bit ARM Cortex-M4F microcontroller running at 64MHz with 1MB of program memory and 256KB of RAM. This microcontroller provides enough horsepower to run TinyML models. The Arduino Nano 33 BLE Sense also contains colour, brightness, proximity, gesture, motion, vibration, orientation, temperature, humidity, and pressure sensors, along with a digital microphone and a Bluetooth Low Energy (BLE) module. This sensor suite will be more than enough for most applications.
- Machine Learning Framework: There are only a handful of frameworks that cater to TinyML needs. Of those, TensorFlow Lite is the most popular and has the most community support. Using TensorFlow Lite Micro, we can deploy models on microcontrollers.
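A common step when deploying with TensorFlow Lite Micro is embedding the converted `.tflite` flatbuffer in the firmware as a C byte array (the TensorFlow Lite Micro docs do this with `xxd -i model.tflite`). The sketch below reproduces that step in plain Python; the array name `g_model` and the placeholder bytes are hypothetical examples, not a real model:

```python
# Render raw .tflite model bytes as a C source snippet, equivalent to
# `xxd -i model.tflite`, so the model can be compiled into firmware.

def tflite_to_c_array(model_bytes: bytes, name: str = "g_model") -> str:
    """Return a C array definition holding the given model bytes."""
    lines = []
    for i in range(0, len(model_bytes), 12):
        chunk = model_bytes[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (
        f"const unsigned char {name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {name}_len = {len(model_bytes)};\n"
    )

# Example with a few placeholder bytes instead of a real model file:
print(tflite_to_c_array(b"\x1c\x00\x00\x00TFL3"))
```

On the device side, the generated array is passed to the TensorFlow Lite Micro interpreter, which runs the model directly from that read-only buffer.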
- Learning Resources: Since TinyML is an emerging field, there aren’t many learning materials as of today. But there are a few excellent ones, like Pete Warden and Daniel Situnayake’s book, “TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers”.
The Big Future of TinyML
According to one forecast, approximately 2 billion devices built with TinyML techniques will reach the market by 2030, benefiting the economy with cost-effective, intelligent devices. In economic terms, TinyML could generate more than $70 Bn over the next five years. TinyML is here to change the landscape of applications in IoT devices and the future of intelligent devices.
This blog covers various topics including Artificial Intelligence, Machine Learning, and the Internet of Things, and also aims to help scholars pursuing a Ph.D. in various fields. The content is also available as videos on our channel; the details are given below.
The TechDoctorIn channel was created for learning about Artificial Intelligence and Machine Learning, along with innovative project ideas. The channel has the following playlists:
- Futuristic Versions of Artificial Intelligence (AI)
- Lecture Series Artificial Intelligence (AI)
- Tricks for Maths and Science
- Practical Machine Learning
- Useful Facts
- How To Do Ph.D. In Three Years
- Lecture Series : Digital Electronics
- Lecture Series: Tanner Tool
Channel link: https://www.youtube.com/c/TechDoctorIN
Google Scholar: https://scholar.google.com/citations?user=AyrId_EAAAAJ&hl=en
TechDoctorIn is a very useful educational channel run by Dr. Pawan Whig, a Senior IEEE Member, who verifies all of the content himself.