Hi, I was wondering if you could help me with an issue I am having with egs-parallel on my Mac. I run the command:
Usually when I just run my 'cavity -i inputfile' command, I get a dose output for a set of square water voxels that I define by taking a box, splitting it up along the x axis, and using the ausgab object with labels to get dose values. From the parallel run I don't seem to get any output at all, and the run just finishes instantly. Are there any settings I need to change in the egsinp file, or any command options I need to put in my script, to get results from egs-parallel? I also downloaded Fred's parallel.sh script (https://github.com/aguadopd/egsnrc-parallel-bash/blob/master/parallel.sh) and put my input through egs_chamber instead:
The above used 'combine' in my run control settings. Changing this to 'first' in the run control settings, and switching back to cavity, seemed to work, so I ran:
which seems to have combined these together. I am still not sure exactly which settings to use in the run control; presently:
And my Mac has 10 CPUs available (hence the 10 different parallel parts above). I noticed doing:
does this, which leaves the heap of .egslog files I mentioned above. Am I doing this correctly? Any advice on the correct settings would be greatly appreciated :)
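P.S. In case it clarifies what I am doing: my rough understanding is that the parallel.sh approach boils down to something like the sketch below. This is just my reconstruction, assuming the standard egs++ command-line options (-b batch mode, -P number of parallel jobs, -j job index), and the final combine step is my guess at how the per-job results get merged; please correct me if that is wrong.

```
# rough sketch only: run 10 cavity jobs in parallel on one machine
NJOBS=10
for j in $(seq 1 "$NJOBS"); do
    cavity -i inputfile -b -P "$NJOBS" -j "$j" &
done
wait
# then (my assumption) a final run with 'calculation = combine' in the
# run control block merges the per-job .egsdat files into one result:
cavity -i inputfile
```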
Replies: 1 comment 2 replies
I'm glad that worked! I imagine that egs-parallel and egs-parallel-clean have just not been tested on a Mac. We just obtained new Mac systems here, so hopefully we'll be able to work on that in the future.
When you parallelize, each job automatically gets different seeds, and the starting locations in a phase-space source are divided up (if you're using one). The number of histories you provide (ncase) is the total, so as you add more cores, each job gets fewer histories to do.
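For example, a run control block roughly like the one below (the ncase value here is arbitrary) specifies the total number of histories, so with 10 parallel jobs each job runs on the order of ncase/10 histories; the 'calculation' key is where the 'first' / 'combine' values you mentioned come in. Treat this as a sketch and double-check the keys against the egs++ manual for your application.

```
:start run control:
    ncase = 1e8            # total histories across all parallel jobs
    calculation = first    # 'first' for the parallel jobs themselves;
                           # 'combine' merges the results afterwards
:stop run control:
```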
Generally all the defaults should work fine until you get over 1000 cores. If you start having issues related to the lock file at 1000+ cores, I can help you switch to a different method.
One person worked on utilizing MPI, but it's not something we've really played around with, and it's generally not needed:
#511