It seems to me that the `ntp submit X` method is sound. However, there are definitely a few improvements that need to be made.
`ntp run local`

For testing, this is great at the moment.
I have a question, though. It queries DAS to find out which files are available for a particular sample, but then reads files from /hdfs/dpm/... (in the case of SingleElectron). Is this just preferential treatment for datasets already stored there, with secondary access through xrootd, or do all required datasets need to be stored there? I'm assuming the former.
`ntp run grid`
Personally, I would quite like this to work as a backup to `ntp run condor`. Issue #212 needs to be fixed, but I also think there is something to be gained from creating an `ntp run grid --check_nTuples` option, similar to what we had previously.
`ntp run condor`
The main problem is described in #214. We also need to be careful: recently I have been seeing condor complete a job without actually transferring the output back. This is odd, and I don't know whether it affects anyone else. It is not so much of a problem with MC, but more so with data, and it comes on top of the usual problem of jobs hanging.
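One way to catch these silent transfer failures would be to scan the output directory for missing or zero-byte nTuples after the jobs report completion. A rough sketch; the directory layout and file name pattern here are assumptions, not the actual layout `ntp` uses:

```python
# Rough sketch: flag condor jobs whose output never made it back.
# The output directory and file name pattern are assumptions.
import os


def find_failed_transfers(output_dir, n_jobs, pattern='ntuple_{0}.root'):
    """Return job numbers whose output file is missing or empty."""
    bad = []
    for job in range(1, n_jobs + 1):
        path = os.path.join(output_dir, pattern.format(job))
        # a zero-byte file usually means the transfer was cut short,
        # so treat it like a missing file
        if not os.path.isfile(path) or os.path.getsize(path) == 0:
            bad.append(job)
    return bad


if __name__ == '__main__':
    bad = find_failed_transfers('/hdfs/TopQuarkGroup/SingleElectron', 100)
    print('jobs to resubmit: {0}'.format(bad))
```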
> I have a question, though. It queries DAS to find out which files are available for a particular sample, but then reads files from /hdfs/dpm/... (in the case of SingleElectron). Is this just preferential treatment for datasets already stored there, with secondary access through xrootd, or do all required datasets need to be stored there? I'm assuming the former.
That is correct. CMSSW uses a file called `storage.xml` (unique to the site) to identify where a file is located. The steps are:

1. try the local file system (here `/hdfs`)
2. try xrootd
3. retry step 2 X times
4. fail
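For reference, a minimal sketch of that fallback chain in Python, assuming a PyROOT environment; the local prefix, redirector and retry count are illustrative assumptions, since the real resolution is driven by the site's `storage.xml`:

```python
# Minimal sketch of the fallback chain described above. The prefixes,
# redirector and retry count are illustrative assumptions.
import os

import ROOT  # PyROOT, assumed available inside a CMSSW environment

LOCAL_PREFIX = '/hdfs'                            # step 1: local storage
XROOTD_REDIRECTOR = 'root://xrootd-cms.infn.it/'  # illustrative redirector
MAX_XROOTD_RETRIES = 3                            # the 'X times' in step 3


def can_open(url):
    """Check whether ROOT can open the file at the given URL."""
    f = ROOT.TFile.Open(url)
    ok = bool(f) and not f.IsZombie()
    if f:
        f.Close()
    return ok


def resolve_file(lfn):
    """Return a readable path/URL for a logical file name (LFN)."""
    # step 1: try the local file system
    local_path = LOCAL_PREFIX + lfn
    if os.path.exists(local_path):
        return local_path
    # steps 2 and 3: try xrootd, retrying a fixed number of times
    for _ in range(MAX_XROOTD_RETRIES):
        url = XROOTD_REDIRECTOR + lfn
        if can_open(url):
            return url
    # step 4: fail
    raise IOError('could not resolve {0} locally or via xrootd'.format(lfn))
```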
> Personally, I would quite like this to work as a backup to `ntp run condor`. Issue #212 needs to be fixed, but I also think there is something to be gained from creating an `ntp run grid --check_nTuples` option, similar to what we had previously.
Noted. The reason I left it out for the moment is that it would take quite a bit of effort to provide more than is currently available via `crab submit`.
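For what it's worth, a thin wrapper around `crab status` might be enough for a first version of `--check_nTuples`. A hedged sketch; the task directory name is hypothetical and the string matching on the output is a guess, not a tested interface:

```python
# Hypothetical sketch of what 'ntp run grid --check_nTuples' could do:
# call 'crab status' for a task directory and flag tasks that report
# failed jobs. The string matching on the output is a guess.
import subprocess


def task_has_failures(crab_dir):
    """Return True if 'crab status' mentions failed jobs for this task."""
    out = subprocess.check_output(['crab', 'status', '-d', crab_dir])
    return 'failed' in out.decode('utf-8', 'replace')


if __name__ == '__main__':
    task = 'crab_projects/crab_SingleElectron'  # hypothetical task dir
    if task_has_failures(task):
        print('{0}: some jobs failed, nTuples incomplete'.format(task))
    else:
        print('{0}: all jobs finished'.format(task))
```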