The current condor submission is still a bit wonky and needs improvement:
- **balanced jobs**: currently the grouping function splits jobs into groups of N, but the last group ends up smaller than the rest. Instead, the groups should be balanced so that their sizes differ by at most one.
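A minimal sketch of such a balanced grouping (the function name `split_balanced` is hypothetical, not part of the existing code): it first computes how many groups are needed for the given maximum size, then distributes the remainder one item at a time so no group is more than one item larger than another.

```python
import math


def split_balanced(items, max_size):
    """Split items into the fewest groups of at most max_size each,
    keeping the group sizes as equal as possible (difference <= 1)."""
    n_groups = math.ceil(len(items) / max_size)
    base, extra = divmod(len(items), n_groups)
    groups, start = [], 0
    for i in range(n_groups):
        # the first `extra` groups get one additional item
        size = base + (1 if i < extra else 0)
        groups.append(items[start:start + size])
        start += size
    return groups
```

For example, 10 jobs with a maximum group size of 4 become groups of 4, 3, and 3 instead of 4, 4, and 2.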
- **faster AT output**: currently the ntuple layer has to finish to 100% before AT can run. Instead, we should start AT jobs as soon as their input files are ready, i.e. make each AT job depend only on the sub-group of ntuple jobs that produces its inputs.
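One way to express this is through HTCondor DAGMan `PARENT ... CHILD ...` dependencies, emitted per sub-group instead of once for the whole layer. This is a sketch under assumed job names (`ntuple_<j>`, `at_<i>` are hypothetical, not the names the submission code actually uses):

```python
def dag_dependencies(ntuple_groups):
    """Emit DAGMan PARENT/CHILD lines so each AT job waits only for the
    ntuple jobs in its own sub-group, not for the entire ntuple layer."""
    lines = []
    for i, group in enumerate(ntuple_groups):
        parents = " ".join(f"ntuple_{j}" for j in group)
        lines.append(f"PARENT {parents} CHILD at_{i}")
    return lines
```

With this, `at_0` can start as soon as ntuple jobs 0 and 1 finish, even while the rest of the ntuple layer is still running.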
- **only central AT for data**: there is no need to run the other AT modes for data.
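The mode selection could be as simple as the following sketch (the mode names other than `central` are placeholders, since the actual list of AT modes is not spelled out here):

```python
# Hypothetical mode list; only "central" is confirmed by the issue text.
AT_MODES = ["central", "syst_up", "syst_down"]


def at_modes_for(is_data):
    """Data only needs the central AT pass; MC runs all modes."""
    return ["central"] if is_data else AT_MODES
```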
- **add metadata**:
  - each job should store its input files, output files, and the lumi JSON used (data only)
  - run create cutflow, create lumi_json (data only), and a scan of the number of processed events as the last step of an ntuple job
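The per-job bookkeeping could be a small JSON record written alongside the outputs. A sketch, assuming a hypothetical helper name and file layout (none of this is the existing API):

```python
import json


def write_job_metadata(path, input_files, output_files, lumi_json=None):
    """Write a per-job metadata record; lumi_json is only set for data jobs."""
    record = {
        "input_files": input_files,
        "output_files": output_files,
    }
    if lumi_json is not None:
        record["lumi_json"] = lumi_json
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
```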
- **slimmer tarball**:
  - ship the library directories: `directories = ['lib', 'biglib', 'module']`
  - only zip the `data` and `interface` directories from `src`
  - we can adopt this for the creation of the `cmssw.tar.gz`, which in turn should reduce the compilation time on the worker node to zero, since the prebuilt libraries are shipped
  - both `ntp create tarball` and `ntp setup from=tarball` need adjustments
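The steps above can be sketched with the standard `tarfile` module: add the library directories wholesale, then walk `src` and keep only files under a `data` or `interface` directory. The function name and the exact directory layout are assumptions, not the existing `ntp create tarball` implementation:

```python
import os
import tarfile

SHIP_DIRS = ["lib", "biglib", "module"]   # shipped in full
SRC_SUBDIRS = ("data", "interface")       # only these are kept from src/


def create_cmssw_tarball(cmssw_base, out_path):
    """Create a slim cmssw.tar.gz: full library dirs, plus only the
    data/ and interface/ subdirectories under src/."""
    with tarfile.open(out_path, "w:gz") as tar:
        for d in SHIP_DIRS:
            full = os.path.join(cmssw_base, d)
            if os.path.isdir(full):
                tar.add(full, arcname=d)
        for root, _dirs, files in os.walk(os.path.join(cmssw_base, "src")):
            rel = os.path.relpath(root, cmssw_base)
            # keep a file only if some path component is data/ or interface/
            if any(p in SRC_SUBDIRS for p in rel.split(os.sep)):
                for f in files:
                    tar.add(os.path.join(root, f), arcname=os.path.join(rel, f))
```

Since the prebuilt `lib`/`biglib`/`module` directories travel with the job, the worker node can skip `scram b` entirely.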
> run create cutflow, create lumi_json (data only) and scan the number of processed events as the last step of an ntuple job

Please add missing items.