I have the following architecture:
I'm running a couple of dozen fat NodeJS processes (50+ MB resident RAM each, some over 150 MB), all with different workloads. They are all managed by a supervisord instance.
Each process connects to a RabbitMQ broker and to a PostgreSQL RDBMS, and is responsible for one or two message queues. Each message received is mapped to a function, and after processing, a reply message is sent back to the broker, where it's routed to the originating web browser (mostly) and/or to other NodeJS consumers. The browsers are connected directly to RabbitMQ through a STOMP-over-WebSocket adapter.
All of this is the backend of a B2C web property; it works great: response times are consistent, and I'm sure I can scale horizontally and outsource each important piece when the time comes.
Now, what would be the best practice if I wanted to decommission the fat NodeJS processes in favor of Go?
- Do I need a couple of dozen Go executables or only one?
- The couple of dozen processes are currently there so I can fix/upgrade a small part of the site without disturbing anything else. How can I achieve the same with Go? A rough sketch of what I have in mind is below.
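To make the second question concrete, here is a rough sketch of what I imagine a single Go binary could look like, assuming the rabbitmq/amqp091-go client; the queue names, the `web.replies` exchange and the handlers are placeholders, not my real code:

```go
// Minimal sketch, not production code: one Go binary consuming several
// queues, each mapped to a handler function, with replies published back
// to the broker. Reconnection, config and real error handling are omitted.
package main

import (
	"context"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

// Handler processes one message body and returns the reply body.
type Handler func(body []byte) ([]byte, error)

// handlers maps each queue to its processing function, roughly what each
// NodeJS process does today (placeholder queue names).
var handlers = map[string]Handler{
	"orders.create": func(b []byte) ([]byte, error) { return b, nil },
	"users.update":  func(b []byte) ([]byte, error) { return b, nil },
}

func consume(ch *amqp.Channel, queue string, h Handler) {
	msgs, err := ch.Consume(queue, "", false, false, false, false, nil)
	if err != nil {
		log.Fatalf("consume %s: %v", queue, err)
	}
	for d := range msgs {
		reply, err := h(d.Body)
		if err != nil {
			d.Nack(false, true) // requeue on failure
			continue
		}
		// Route the reply back through the broker; the browser picks it
		// up over STOMP/WS using the reply-to routing key.
		if err := ch.PublishWithContext(context.Background(),
			"web.replies", d.ReplyTo, false, false,
			amqp.Publishing{
				ContentType:   "application/json",
				CorrelationId: d.CorrelationId,
				Body:          reply,
			}); err != nil {
			log.Printf("publish reply for %s: %v", queue, err)
		}
		d.Ack(false)
	}
}

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// One goroutine (and one channel) per queue inside a single process;
	// the same handler code could just as easily be split into one small
	// binary per queue if independent deploys matter more.
	for q, h := range handlers {
		ch, err := conn.Channel()
		if err != nil {
			log.Fatal(err)
		}
		go consume(ch, q, h)
	}
	select {} // block forever
}
```

I could see either building this as one binary with all handlers, or compiling one small binary per queue from the same module; I'm not sure which is more idiomatic.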
Finally, the main reason I'm considering decommissioning the NodeJS processes is that I want to reduce RAM consumption and be able to scale with smaller VMs and/or use simple containers (not multi-layered ones, but simple ones, one image each). Another consideration is _npm install_: every deploy/upgrade cycle it's just a waste of resources, and it increases the build time considerably.
- Would you consider these valid reasons?
Thanks all.
So your question is whether you should have different Go processes on the MQ?
I've heard of RabbitMQ but I'm still not sure I understand its uses or how to implement it