Comments on "more indirection: Thoughts on node.js and Haskell"

To Peter and Bill,
Since events and threads are dual, it is trivial to build a cooperative multithreading environment on top of node.js.

To do this, systematically CPS-transform your code. You don't have to CPS-transform every little thing: you just need to pass the continuation as the callback to the asynchronous calls, and any function that does this is itself asynchronous. From the perspective of the cooperative multithreading system, the asynchronous calls are the blocking calls. I recommend using a monad to handle this CPS transforming, but that is not necessary.

Now all you need is forking and perhaps synchronization. The fork operation simply adds its argument to a ready queue. Then you wrap all the (primitive) asynchronous calls with a function that registers the passed-in continuation, dequeues a new continuation from the ready queue, and executes it; or, if the ready queue is empty, it returns to node.js. Since this is a cooperative system, you'll likely need a yield operation, which would clearly be "blocking". There are at least two that you'd want: a "soft" one, which simply enqueues the continuation it's given and dequeues a new one, and a "hard" one, which registers its continuation as an "on idle" or "on timeout" event handler. MVars or channels are easy to implement in this context.

The fact that JavaScript doesn't guarantee tail-call optimization is not a big problem here. As long as you make a "blocking" call in every loop, using yield if necessary, the stack growth will be bounded. Compute-intensive workloads will need yields sprinkled in to guarantee responsiveness.

This is more or less what GHC is doing, where the calls to yield are in the memory allocator.
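The scheme described in this comment can be sketched in a few lines of JavaScript. This is an illustrative sketch, not code from the post: the names (`readyQueue`, `fork`, `softYield`, `blocking`) and the loop-based `dispatch` are assumptions about how one might realize the design.

```javascript
// A minimal sketch of the cooperative-threading scheme described above.
// A "thread" is just a continuation: fork enqueues one, a "soft" yield
// swaps the running continuation for a queued one, and wrapped async
// calls are the "blocking" calls.

const readyQueue = [];

// fork: schedule a new thread (a zero-argument continuation).
function fork(k) {
  readyQueue.push(k);
}

// dispatch: run ready continuations until the queue is empty, then
// return (i.e. hand control back to the node.js event loop).
// Note: nesting grows the stack; a "hard" yield via an on-idle/timeout
// handler (e.g. setImmediate) is what bounds stack growth in practice.
function dispatch() {
  let next;
  while ((next = readyQueue.shift()) !== undefined) next();
}

// "Soft" yield: enqueue our own continuation and let another thread run.
function softYield(k) {
  readyQueue.push(k);
  dispatch();
}

// Wrap a primitive async call: register the continuation as its callback,
// then run another ready thread (or fall through to the event loop).
// To the thread that made it, the call looks blocking.
function blocking(asyncOp, k) {
  asyncOp(result => {
    readyQueue.push(() => k(result));
    dispatch();
  });
  dispatch();
}

// Demo: two threads interleave at their softYield points.
const trace = [];
function worker(name, k) {
  trace.push(name + "1");
  softYield(() => { trace.push(name + "2"); k(); });
}

fork(() => worker("A", () => {}));
fork(() => worker("B", () => {}));
dispatch();
console.log(trace.join(",")); // "A1,B1,A2,B2"
```

In CPS style nothing follows a `softYield` call textually; the rest of the thread lives in the continuation it passes, which is what lets the scheduler interleave the two workers.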
As this and GHC demonstrate, there is nothing inherently faster (or slower) about using events rather than threads. — Derek Elkins, 2011-05-29

I took a hard look at Node.js. While I don't think it will be the next big thing, I think they did break some new ground on a few points, and that is useful.

I also think it is useful, every so often, to say "hey, let's try something different and see if it works better." And of course you don't know if it will until you try it for a while. — Zach Kessin, 2011-03-22

The GHC IO manager was always async; the new one just uses epoll instead of select. — Max Bolingbroke, 2011-01-26

In that case, Warp should be even faster, according to these benchmarks:

http://docs.yesodweb.com/blog/announcing-warp/ — Anonymous, 2011-01-23

You're absolutely right that external libraries which make blocking calls can negate some of the gains. The beauty of the GHC I/O manager design is that we can still use those libraries.
In most async frameworks you need to rewrite every single library that makes blocking system calls to work with the framework; otherwise the event-dispatch thread will get blocked. In GHC you only need to do so when performance demands it. The way GHC manages this feat is by keeping a small (dynamically resized) per-core thread pool that gets used for blocking calls. — Johan Tibell, 2011-01-22

I've worked with asynchronous frameworks for a while. Based on my experience, when multiple asynchronous calls are involved in handling one user request, it's a mess. Callback after callback makes it difficult to see the business process by reading the code. So I agree with gawi that Node.js is not providing us with a sane programming model, even though it brings high scalability. — Xing Shi Cai, 2011-01-22

The same could be said for Erlang. The way I see it, Node.js is not providing us with a sane programming model, but it brings high scalability in the real world. Other languages must follow; Node.js is increasing the pressure. — gawi, 2011-01-21