I'm an embedded systems dev. Firmware cannot crash, especially when it's running an engine or a pacemaker. Designing firmware as microservices, performing basic functions over whatever network connection you might have, would be insane. Separation of concerns is not the same as distribution of concerns. Whenever you add a communication channel, you add a failure point and a delay, a measurable minimum delay. Maybe that delay is small when you run 1000 microservices on the same machine, but when it's time to "scale" across the network you increase your latency by orders of magnitude, even within the same datacenter. Distributing concerns across the network is a valid design pattern, but it's not like waterfall vs. agile, where you maximize the "best thing" about a process; the best thing about software is not the network. Networking is a tool with utility and tradeoffs, and it always increases complexity.
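To make that trade-off concrete, here is a minimal sketch (not from the comment itself) contrasting an in-process call with the same logic moved behind a network hop. The endpoint URL and the doubling function are hypothetical; the point is only that the networked version forces the caller to own a timeout, a connection-failure path, and a latency floor that the local call does not have.

```python
import time
import urllib.error
import urllib.request

def compute_locally(x: int) -> int:
    # In-process call: no new failure modes, cost is roughly a function call.
    return x * 2

def compute_over_network(x: int, url: str = "http://127.0.0.1:8080/double") -> int:
    # Same logic behind a (hypothetical) network endpoint: the caller now
    # has to pick a timeout and handle the hop failing outright.
    start = time.monotonic()
    try:
        with urllib.request.urlopen(f"{url}?x={x}", timeout=0.5) as resp:
            result = int(resp.read())
    except (urllib.error.URLError, TimeoutError) as exc:
        # This failure path simply does not exist for the local call.
        raise RuntimeError(f"remote compute failed: {exc}") from exc
    print(f"round trip took {(time.monotonic() - start) * 1e3:.2f} ms")
    return result

print(compute_locally(21))  # always works, no I/O involved
```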

Replies (3)

The term 'microservices' implies going ham with little networked backends while pretending that distributing logic this way has no costs. Independently developed and deployed backends have a time and a place, but it is certainly a costly pattern that should be used as a last resort. I have yet to see a case at any scale where backend code should not be developed together in a monolithic codebase, even when it is deployed in a distributed architecture for different needs (job processing vs. request/reply backends).
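A minimal sketch of that "one codebase, multiple deployables" idea, with purely illustrative names (price_order, the port, and the role flag are mine, not the commenter's): the domain logic is written once, and the same code starts up either as a request/reply backend or as a job worker.

```python
"""One codebase, two deployables: run as an HTTP request/reply backend
or as a background job worker, chosen at launch."""
import queue
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

def price_order(unit_price: int, qty: int) -> int:
    # Shared domain logic: written, reviewed, and tested once.
    return unit_price * qty

def run_http(port: int = 8080) -> None:
    # Request/reply deployable.
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = str(price_order(250, 4)).encode()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()

def run_worker(jobs: queue.Queue) -> None:
    # Job-processing deployable, consuming from a stand-in local queue.
    while True:
        unit_price, qty = jobs.get()
        print("priced job:", price_order(unit_price, qty))
        jobs.task_done()

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "http"
    if role == "worker":
        q = queue.Queue()
        q.put((250, 4))
        run_worker(q)
    else:
        run_http(8080)
```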
Microservices as commonly practiced are a lazy event-pipelining architecture. It's better to just use an RTC microkernel framework inside your application and break out scalable pieces with proxy placeholders or a pub-sub rally point, e.g. over zmq or redis.
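A rough sketch of the "pub-sub rally point" idea, here over Redis (one of the two options named above). The channel name, the event shape, and the use of the redis-py client are my assumptions, and it needs a Redis server on localhost to actually run.

```python
import json
import redis  # redis-py client package

r = redis.Redis(host="localhost", port=6379)

def publish_event(kind: str, payload: dict) -> None:
    # Callers keep invoking a plain function; only this proxy placeholder
    # knows the event leaves the process via the rally point.
    r.publish("events", json.dumps({"kind": kind, **payload}))

def consume_events() -> None:
    # A piece that was broken out for scaling subscribes to the same
    # rally point and processes events independently.
    pubsub = r.pubsub()
    pubsub.subscribe("events")
    for msg in pubsub.listen():
        if msg["type"] == "message":
            event = json.loads(msg["data"])
            print("handling", event["kind"], event)

if __name__ == "__main__":
    publish_event("sensor_reading", {"value": 42})
```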
G Force G 4 months ago
Microservices seem to be just another instance of Conway's Law. Some organizations are composed of many, many single-responsibility teams, and so they evangelize that the way to do things is their way.