When The Cluster Becomes The Minimum Unit Of Computation
The Jetson Mate from Seeed Studio

One trend I have been following at the (device) edge is the apparent desire of developers in the ecosystem to assemble collections of single-board computers and modules into local clusters.

Having noticed this trend and watched it develop over time, I keep asking myself: "Why bother?" For the most part, clusters come with downsides in I/O, software overhead and cost compared with, say, a single very fast server running the same workload. So what is going on here?

For the most part, these projects tend to be built by hobbyists having fun at home, with no particular objective other than learning.

But aside from the entertainment value of building such clusters, a broader trend seems to have emerged: A fundamental shift in the nature of computing where "the node" is being replaced by "the cluster" as the minimum unit of compute at every level from the data center down to the developer.

In conversations with industry professionals and CTOs building and managing deployments of real devices doing real work for commercial use cases, the cluster shows up at every level. On the device, on the gateway, at the co-location site and in the cloud: everything is now clusters, end to end.

One potential reason for this trend may be the specific economics of disruptive innovation. A highly expensive, closed industrial gateway, for example, may be used by only a handful of customers, and its software ecosystem support often dramatically lags what is available for the more agile, less performant, lower-cost hardware preferred by developers and educators.

When the alternative is buying a "hefty industrial gateway" with longer support cycles but a minimal software ecosystem, many developers are choosing instead to cluster their cheap, usable, highly available prototyping boards. With tools like Kubernetes, this approach to deploying computation may evolve to become the norm.
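As a minimal sketch of what that looks like in practice, assuming a stack of boards already running Kubernetes (or a lightweight distribution such as k3s) and a kubeconfig on the developer's machine, the official kubernetes Python client can treat the whole pile of hardware as one pool of nodes. The summarize_cluster helper below is illustrative only, not taken from any specific deployment.

# Sketch: enumerate the boards in a Kubernetes cluster of SBCs.
# Assumes `pip install kubernetes` and a working ~/.kube/config.
from kubernetes import client, config

def summarize_cluster() -> None:
    """Print each node's name, CPU architecture, and allocatable resources."""
    config.load_kube_config()      # read the local kubeconfig for the board cluster
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        info = node.status.node_info        # OS / architecture details per board
        alloc = node.status.allocatable     # resources the scheduler can hand out
        print(
            f"{node.metadata.name}: "
            f"arch={info.architecture}, "   # e.g. arm64 for Jetson or Raspberry Pi boards
            f"cpu={alloc.get('cpu')}, "
            f"memory={alloc.get('memory')}"
        )

if __name__ == "__main__":
    summarize_cluster()

Once the boards are visible as ordinary nodes like this, deploying a workload across them is the same declarative exercise a developer would perform against a cloud cluster, which is a large part of the appeal.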

Another potential reason for this trend is the increasing sophistication of the market. What started out as IoT, where a single gateway might have been expected to control a handful of sensors, has ballooned into a situation where many gateways are controlled by many edge servers.

Put these market forces together, and we might all need to get used to thinking in clusters sooner rather than later.



