Giuseppe Bilotta

Remember when I mentioned we had ported our #fire propagation #cellularAutomaton from #Python to #Julia, gaining performance and the ability to parallelize more easily and efficiently?

A couple of days ago we had to run another big batch of simulations, and while things progressed well at the beginning, we saw the parallel threads apparently hanging one by one until the whole process sat there doing who knows what.

Our initial suspicion was that we had come across some weird #JuliaLang issue with #multithreading, which seemed to be confirmed by some posts we found on the Julia forums. We tried the workarounds suggested there, to no avail. We tried a different number of threads, and the hang just moved to a different percentage of completion. We tried restarting the simulations, skipping the ones already done: it always got stuck at the same place (for the same number of threads).

So, what was the problem?

1/n
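(For context, here is a minimal sketch of how a batch like this might be spread over Julia threads. The parameter names and `run_fire_simulation` are hypothetical placeholders, not our actual code; it only illustrates the setup where one stalled simulation parks its thread and the batch slowly grinds to a halt.)

```julia
using Base.Threads

# Hypothetical stand-in for one cellular-automaton run;
# the real simulation is not shown here.
function run_fire_simulation(p)
    sleep(0.01)                 # placeholder for the actual work
    return p.wind + p.humidity  # placeholder result
end

# Hypothetical parameter sets for the big batch of simulations.
params = [(wind=w, humidity=h) for w in 0.0:5.0:30.0 for h in 10.0:10.0:90.0]

# One simulation per iteration, split across the threads started with
# `julia -t N`. If a single simulation blocks, its thread stalls, and
# the batch appears to hang one thread at a time.
@threads for p in params
    run_fire_simulation(p)
end
```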