
Near the top there's a link to a paper which seems to go into more technical detail: https://dl.acm.org/doi/10.1145/3434299

Context around the link:

> In our POPL 2021 paper (which won a Distinguished Paper award), we proposed a specific heap scheduler and showed that it yields provable work and space bounds.

I haven't read it yet, but this is where their claims of being provably space-efficient seem to come from.

EDIT: After a quick read, section 6 is where they prove their claim. It's efficient when compared to equivalent sequential execution.



> It's efficient when compared to equivalent sequential execution.

It looks like they're comparing against MLton, which is interesting since it performs whole-program optimisation. That makes their speedups even more impressive!


MPL is based on MLton, so they also do whole-program optimisation (the same one, in fact).


Yes, but I imagine traditional compiler optimisations like inlining, unrolling, etc. are easier to apply to serial code.


Well, to the compiler, the parallelism probably just looks like an opaque call to a higher-order function. The bulk of the program will still be serial code, which will be subject to all the normal optimisations. I talked a bit with one of the main authors of MPL, and he mentioned that the existing optimisation passes in MLton just kept working without any nontrivial modifications.
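To make this concrete, here's a sketch (my own illustration, not code from the MPL authors) of how parallelism typically appears in MPL source. The fork point is just a call to the higher-order function `ForkJoin.par`, so everything inside each thunk is ordinary serial SML that the existing MLton passes can optimise:

```sml
(* Sketch: parallel Fibonacci in MPL style. Assumes MPL's
 * ForkJoin.par : (unit -> 'a) * (unit -> 'b) -> 'a * 'b,
 * which evaluates both thunks, possibly in parallel.
 * To the compiler this is just a call taking two closures;
 * the bodies of the thunks are plain serial code. *)
fun fib n =
  if n <= 1 then n
  else
    let
      val (a, b) =
        ForkJoin.par (fn () => fib (n - 1),
                      fn () => fib (n - 2))
    in
      a + b
    end
```

Since the parallel construct is opaque to the optimiser, inlining, unrolling, and the rest apply to the thunk bodies exactly as they would in a sequential program.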



