Strategies for Handling CPU-Intensive Computing with Node.js (beta 1, draft)

A note on my research into Node.js multi-threading solutions.

This post was written with reference to here, combined with cluster and child_process.

Node.js and the browser's JavaScript engine share the same problem: when they hit CPU-intensive computation, the UI or the process blocks. There is no need to walk through the Node.js architecture here; in a word, Node.js deliberately chose a single-threaded, event-based architecture. If you have a front-end development background, the blocking issue should not be hard to understand, since it comes from the single-threaded nature of the language in the browser, and the same thing happens in Node.js. It becomes a nightmare when you are building a high-profile server, and the event-based architecture alone will not save you.
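
To make the problem concrete, here is a minimal sketch; the 5-second busy loop is only a stand-in for any heavy computation. While the loop runs, the server below cannot answer any other request.

    var http = require('http');

    http.createServer(function(req, res) {
      // Stand-in for a CPU-intensive computation: a 5-second busy loop
      var end = Date.now() + 5000;
      while (Date.now() < end) {}
      res.end('finally done\n');   // every other request has to wait for this
    }).listen(3000);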

So the solution follows the same logic we already know from front-end JS and from load-balancing: either use setTimeout to let the single-threaded CPU catch its breath periodically, use multi-threaded concurrent computation *sparingly*, or distribute the computation to other Node.js nodes. On top of that, we go one step further and introduce "service grading".
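
As an illustration of the setTimeout approach, here is a rough sketch; the workload, the 50 ms time budget and the Math.sqrt call are made up for the example. The idea is to process the work in slices, measure the elapsed time with new Date().getTime(), and yield back to the event loop between slices.

    // Pretend workload: 100,000 items to process
    var items = [];
    for (var i = 0; i < 100000; i++) { items.push(i); }
    var BUDGET_MS = 50;                          // assumed time slice per turn
    var total = 0;

    function processSome() {
      var start = new Date().getTime();
      // Work until this turn's time budget is used up
      while (items.length && new Date().getTime() - start < BUDGET_MS) {
        total += Math.sqrt(items.shift());       // stand-in for the real computation
      }
      if (items.length) {
        setTimeout(processSome, 0);              // give the CPU a breath
      } else {
        console.log('done:', total);
      }
    }
    processSome();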

Basic mindmap from Paul


[Mindmap image: Node.js handles CPU intensive computing]


Here are the texts extracted from the mindmap:

  • Node.js handles CPU intensive computing
    • setTimeout
      • Give CPU a breath once in a while
      • Use new Date().getTime() to calculate running time
    • Multi-threading
      • Create worker
        var cluster = require('cluster');
        // Run worker.js in every forked worker (cluster.setupMaster options)
        cluster.setupMaster({
          exec: 'worker.js',
          args: ['--use', 'https'],
          silent: true
        });
        var numCPUs = require('os').cpus().length;
        var workers = [];
        if (cluster.isMaster) {
          // Fork one worker per CPU core
          for (var i = 0; i < numCPUs; i++) {
            workers.push(cluster.fork());
          }
        }
      • Pub/Sub (Send/Listen) Message (see the worker.js sketch after this list)
        workers[0].send('hi there');                 // master -> worker
        workers[0].on('message', function(msg) {     // worker -> master
          console.log('master got:', msg);
        });
    • Multi-servers (Hardware or VMs)
      • Create Server Listener
        var net = require('net');
        var server = net.createServer(function(c) {
          // Each connection can carry a computing task from another node
          c.on('data', function(data) { c.write('done: ' + data); });
        });
        server.listen(8124, function() {});
      • Pub/Sub (Send/Listen) Message
        var client = net.connect({port: 8124}, function() {
          client.write('task payload');              // send a task to the listener
        });
        client.on('data', function(data) { console.log(data.toString()); });
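
The mindmap only shows the master side of the cluster branch. Assuming worker.js is the file referenced by exec above, and with heavyCompute standing in for the real computation, the worker side could look roughly like this. With the snippet above, workers[0].send('hi there') triggers the 'message' handler here, and the reply arrives at the master's 'message' listener.

    // worker.js -- sketch of the file referenced by exec above
    process.on('message', function(msg) {
      process.send(heavyCompute(msg));   // reply to the master when done
    });

    // Placeholder for the real CPU-intensive computation
    function heavyCompute(input) {
      var sum = 0;
      for (var i = 0; i < 1e7; i++) { sum += i; }
      return input + ' -> ' + sum;
    }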

About "Service Grade"

"Service grade" borrows the concept from Grade of Service and QoS.

Basic ideas

No matter how well the server resources are partitioned, no solution can handle every request perfectly. So we may need to divide the cluster nodes we have already created into sub-clusters, with different sub-clusters handling tasks of different priorities. At the same time, we add a task-dispatching layer to the master/controller node; in that layer every task is classified by priority and dispatched to the corresponding sub-cluster.
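
A rough sketch of that idea, under a few assumptions of my own (a high/low split of the cores, numeric priorities where 1 and 2 count as high, and a dispatch helper that is not part of any Node.js API):

    var cluster = require('cluster');
    var numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      // Assumed split: most cores go to the high-priority sub-cluster
      var subClusters = { high: [], low: [] };
      for (var i = 0; i < numCPUs; i++) {
        var w = cluster.fork();
        (i < Math.max(1, numCPUs - 1) ? subClusters.high : subClusters.low).push(w);
      }

      // Dispatching layer: pick a sub-cluster by task priority,
      // then round-robin inside it
      var next = { high: 0, low: 0 };
      var dispatch = function(task) {
        var group = task.priority <= 2 ? 'high' : 'low';
        var pool = subClusters[group].length ? subClusters[group] : subClusters.high;
        pool[next[group]++ % pool.length].send(task);
      };

      dispatch({ priority: 1, payload: 'important & urgent job' });
      dispatch({ priority: 4, payload: 'background job' });
    } else {
      // Worker side: receive a task and run the computation (as in worker.js above)
      process.on('message', function(task) {
        console.log('worker', process.pid, 'got:', task.payload);
      });
    }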

Categorization standard

To keep things simple, we borrow ideas from time management and group the computing tasks into four kinds according to their importance and urgency. The top priority is Important & Urgent, the second is Important & Not urgent, Urgent & Not important comes third, and Not urgent & Not important is last. We need to make sure high-priority tasks are always processed first. The policy for that is unconditional task prioritization: every low-priority task must be suspendible as soon as a higher-priority task comes in.
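
For example, the grading itself can be a small lookup, with the task list kept sorted by grade; the grade numbers, the enqueue helper and the suspend-by-message idea in the comment are my own illustration, not an established API.

    // Map importance/urgency to a grade (1 = highest priority)
    function grade(task) {
      if (task.important && task.urgent) return 1;   // Important & Urgent
      if (task.important)                return 2;   // Important & Not urgent
      if (task.urgent)                   return 3;   // Urgent & Not important
      return 4;                                      // Not urgent & Not important
    }

    // Keep the task list ordered so the highest grade is always taken first
    var queue = [];
    function enqueue(task) {
      task.grade = grade(task);
      queue.push(task);
      queue.sort(function(a, b) { return a.grade - b.grade; });
      // Unconditional prioritizing: a running task with a lower grade than the
      // head of the queue should be told to suspend (e.g. via a message to its
      // worker) and put back into the queue.
    }

    enqueue({ important: true, urgent: false, payload: 'report generation' });
    enqueue({ important: true, urgent: true,  payload: 'payment check' });
    console.log(queue[0].payload);   // 'payment check' comes out first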


I don't want to dig further into cluster failure handling and cluster collaboration here.

