Now that we know what it is and why we want it, we face perhaps the biggest challenge: how do we implement it?
Relying on a constant connection to powerful cloud services is not stable enough. Data is heavy, and more data means more latency and, in the worst case, downtime, which would be devastating for a self-driving car, a medical device, or a clinical decision support system.
The latest development in AI, and the area we work in, is called "edge technology": smaller, smarter systems connected to a larger cloud AI. The small micro AI on the device is very good at deciding which data sources and which bits of data should be sent to the larger cloud AI for evaluation and which should not. With the help of the micro AI, the system can run without the risk of latency or downtime.
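To make the idea concrete, here is a minimal sketch of such edge-side filtering. It is purely illustrative: the threshold, the `anomaly_score` stand-in model, and the function names are all hypothetical, standing in for whatever micro AI model and transmission logic a real device would use.

```python
# Illustrative sketch: an edge device scores each sensor reading
# locally and forwards only the unusual ones to the cloud AI.
# Everything routine is handled (or discarded) on the device itself,
# which keeps bandwidth use and latency low.

ANOMALY_THRESHOLD = 0.8  # hypothetical cutoff, tuned per application


def anomaly_score(reading, baseline=50.0, scale=25.0):
    """Stand-in for an on-device micro AI model: maps how far a
    reading deviates from the expected baseline to a score in [0, 1]."""
    return min(abs(reading - baseline) / scale, 1.0)


def filter_for_cloud(readings):
    """Split readings into those sent to the cloud and those kept local."""
    to_cloud, local = [], []
    for r in readings:
        if anomaly_score(r) >= ANOMALY_THRESHOLD:
            to_cloud.append(r)   # unusual: worth the cloud AI's attention
        else:
            local.append(r)      # routine: processed on the device
    return to_cloud, local


readings = [49.5, 51.2, 98.7, 50.1, 12.3]
to_cloud, local = filter_for_cloud(readings)
print(to_cloud)  # only the two outliers travel to the cloud
print(local)
```

The design point is simply that the expensive, high-latency hop to the cloud happens only for the small fraction of data the local model cannot confidently handle itself.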
Soon, many processes, components, tools and products, and even bodies, will be chipped and equipped with this kind of micro edge AI. Exciting, isn't it? I don't know about you, but I am looking forward to following this development.