diff --git a/docs/tutorial2023_unfolding/figures/impacts.png b/docs/tutorial2023_unfolding/figures/impacts.png deleted file mode 100644 index ab96a533398..00000000000 Binary files a/docs/tutorial2023_unfolding/figures/impacts.png and /dev/null differ diff --git a/docs/tutorial2023_unfolding/figures/impacts_zh_75_150.png b/docs/tutorial2023_unfolding/figures/impacts_zh_75_150.png new file mode 100644 index 00000000000..566209b6c7a Binary files /dev/null and b/docs/tutorial2023_unfolding/figures/impacts_zh_75_150.png differ diff --git a/docs/tutorial2023_unfolding/figures/migration_matrix_zh.pdf b/docs/tutorial2023_unfolding/figures/migration_matrix_zh.pdf new file mode 100644 index 00000000000..be77e43b78b Binary files /dev/null and b/docs/tutorial2023_unfolding/figures/migration_matrix_zh.pdf differ diff --git a/docs/tutorial2023_unfolding/figures/r_zh_75_150.png b/docs/tutorial2023_unfolding/figures/r_zh_75_150.png deleted file mode 100644 index 0477ceb80e5..00000000000 Binary files a/docs/tutorial2023_unfolding/figures/r_zh_75_150.png and /dev/null differ diff --git a/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150.pdf b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150.pdf new file mode 100644 index 00000000000..ead1ca65e8f Binary files /dev/null and b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150.pdf differ diff --git a/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150.png b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150.png new file mode 100644 index 00000000000..0d7edede94f Binary files /dev/null and b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150.png differ diff --git a/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150_blinded.pdf b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150_blinded.pdf new file mode 100644 index 00000000000..5d9c9df686f Binary files /dev/null and b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150_blinded.pdf differ diff --git a/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150_blinded.png b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150_blinded.png new file mode 100644 index 00000000000..6b55fda9940 Binary files /dev/null and b/docs/tutorial2023_unfolding/figures/scan_plot_r_zh_75_150_blinded.png differ diff --git a/docs/tutorial2023_unfolding/figures/scan_r_zh_250_400.png b/docs/tutorial2023_unfolding/figures/scan_r_zh_250_400.png deleted file mode 100644 index 27347e43b2d..00000000000 Binary files a/docs/tutorial2023_unfolding/figures/scan_r_zh_250_400.png and /dev/null differ diff --git a/docs/tutorial2023_unfolding/figures/scan_r_zh_75_150_blinded.png b/docs/tutorial2023_unfolding/figures/scan_r_zh_75_150_blinded.png deleted file mode 100644 index 80e759f35ac..00000000000 Binary files a/docs/tutorial2023_unfolding/figures/scan_r_zh_75_150_blinded.png and /dev/null differ diff --git a/docs/tutorial2023_unfolding/figures/stxs_zh.pdf b/docs/tutorial2023_unfolding/figures/stxs_zh.pdf new file mode 100644 index 00000000000..f4f184e14e1 Binary files /dev/null and b/docs/tutorial2023_unfolding/figures/stxs_zh.pdf differ diff --git a/docs/tutorial2023_unfolding/figures/stxs_zh.png b/docs/tutorial2023_unfolding/figures/stxs_zh.png index 485b24f192b..8ff6751deea 100644 Binary files a/docs/tutorial2023_unfolding/figures/stxs_zh.png and b/docs/tutorial2023_unfolding/figures/stxs_zh.png differ diff --git a/docs/tutorial2023_unfolding/unfolding_exercise.md b/docs/tutorial2023_unfolding/unfolding_exercise.md index 5e2905932dd..9d393b648ca 100644 --- 
a/docs/tutorial2023_unfolding/unfolding_exercise.md +++ b/docs/tutorial2023_unfolding/unfolding_exercise.md @@ -32,7 +32,7 @@ cd combine-unfolding-tutorial-2023 ## Exercise outline -The hands-on exercise is split into eight parts: +The hands-on exercise is split into seven parts: 1) Counting experiment @@ -40,25 +40,23 @@ The hands-on exercise is split into eight parts: 3) Shape analysis with control regions -4) Extract the expected intervals for POIs measurements +4) Extract the expected intervals -5) Impacts +5) Impacts for multiple POIs -6) Unfold to the gen-level quantities (freeze the TH NPs, scan, multiply predicted cross-sections) +6) Unfold to the gen-level quantities -7) Extract POIs correlations from FitDiagnostics +7) Extract POIs correlations from the FitDiagnostics output > Throughout the tutorial there are a number of questions and exercises for you to complete. These are shown in the boxes like this one. -All the code required to run the different parts is available in python scripts. - -There's a set of combine (.txt) datacards which will help you get through the various parts of the exercise. The exercises should help you become familiar with the structure of fitting datacards. - +Note that the general recommendations on unfolding in `Combine` are available [here](https://cms-analysis.github.io/HiggsAnalysis-CombinedLimit/part3/regularisation/); they also cover regularisation techniques and when to use them, which are not discussed in this tutorial. + ## Analysis overview In this tutorial we will look at the cross section measurements of one of the SM Higgs processes, VH, in the $H\to b\bar{b}$ (VHbb) final state. -The measurement is performed within Simplified Template Cross Section framework, which provides the prediction in the bins of gen-level quantities $p_{T}(V)$ and number of additional jets. The maximum likelihood based unfolding is performed to measure the cross section in the gen-level bins defined by STXS scheme. At the reco level we defined appropriate categories to match the STXS bins as close as possible. +The measurement is performed within the Simplified Template Cross Section (STXS) framework, which provides the predictions in bins of the gen-level quantities $p_{T}(V)$ and the number of additional jets. A maximum likelihood based unfolding is performed to measure the cross section in the gen-level bins defined by the STXS scheme. At the reco level we define appropriate categories to match the STXS bins as closely as possible. ![](figures/simplifiedXS_VH_1_2.png) @@ -66,9 +64,10 @@ In this tutorial we will focus on the ZH production, with Z boson decaying to ch ## Simple datacards, one-bin measurement -When constructing the reco-level for any differential analysis the main goal is to match the gen-level bins as closely as possible. In the simplest case it can be done with the cut-based approach, i.e. applying the selection on the corresponding reco-level variables: $p_{T}(Z)$ and $n_{\text{add. jets}}$. Due to the good lepton $p_{T}$ resolution we can follow the original STXS scheme quite closely with the reco-level selection, with one exception, it is not possible to access the very-low transverse momenta bin $p_{T}(Z)<75$ GeV. +When constructing the reco-level categories for any differential analysis, the main goal is to match the gen-level bins as closely as possible. In the simplest case this can be done with a cut-based approach, i.e. applying a selection on the corresponding reco-level variables: $p_{T}(Z)$ and $n_{\text{add. jets}}$, as illustrated in the sketch below.
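+
+For illustration, a minimal sketch of such a cut-based categorisation is given below; the function and category names are placeholders for this write-up and are not taken from the actual analysis code.
+
+```python
+# Illustrative sketch of a cut-based reco-level categorisation matching the gen-level STXS bins
+# (placeholder names only, not the real analysis categories).
+def reco_category(pt_z, n_add_jets):
+    """Return the reco-level category for an event with reconstructed p_T(Z) in GeV
+    and a given number of additional jets."""
+    if pt_z < 75:
+        return None  # below the lowest reco-level category that can be accessed
+    if pt_z < 150:
+        return "vhbb_Zll_75_150"
+    if pt_z < 250:
+        return "vhbb_Zll_150_250_0J" if n_add_jets == 0 else "vhbb_Zll_150_250_GE1J"
+    if pt_z < 400:
+        return "vhbb_Zll_250_400"
+    return "vhbb_Zll_gt400"
+```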
+Due to the good lepton $p_{T}$ resolution we can follow the original STXS scheme quite closely with the reco-level selection, with one exception: it is not possible to access the very low transverse momentum bin $p_{T}(Z)<75$ GeV. -In `counting/regions` dicrectory you can find the datacards with for five reco-level categories, each targetting a corresponding gen-level bin. Below you can find an example of the datacard for reco-level bin with $p_{T}(Z)$>400 GeV, +In the `counting/regions` directory you can find the datacards for five reco-level categories, each targeting a corresponding gen-level bin. Below you can find an example of the datacard for the reco-level bin with $p_{T}(Z)>400$ GeV, ``` imax 1 number of bins @@ -91,7 +90,7 @@ where you can see the contributions from various background processes, namely Z+ One of the most important stages in the analysis design is to make sure that the reco-level categories are pure in the corresponding gen-level processes. -To explicitly check it, one can plot the contributions of gen-level bins in all of the reco-level bins. We propose to use the script provided in the tutorial git-lab page. +To check this explicitly, one can plot the contributions of the gen-level bins in all of the reco-level bins. We propose to use the script provided on the tutorial GitLab page. This script uses `CombineHarvester` to loop over the detector-level bins and get the rate at which each of the signal processes (generator-level bins) contributes to that detector-level bin, which is then used to plot the migration matrix. ```shell python scripts/get_migration_matrix.py counting/combined_ratesOnly.txt @@ -99,31 +98,33 @@ python scripts/get_migration_matrix.py counting/combined_ratesOnly.txt ``` ![](figures/migration_matrix_zh.png) -Now that we checked the response matrix we can attempt the maximum likelihood unfolding. We can use the `multiSignalModel` physics model available in `Combine`. To prepare the workspace we can run the following command: - +Now that we have checked the response matrix we can attempt the maximum likelihood unfolding. We can use the `multiSignalModel` physics model available in `Combine`, which assigns a parameter of interest `poi` to a process `p` within a bin `b` using the syntax `--PO 'map=b/p:poi[init, min, max]'`, so that the normalisation of this process scales linearly with the POI. To create the workspace we can run the following command: ```shell -text2workspace.py -m 125 counting/* -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/.*ZH_lep_PTV_75_150_hbb:r_zh_75_150[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_0J_hbb:r_zh_150_250noj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_GE1J_hbb:r_zh_150_250wj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_250_400_hbb:r_zh_250_400[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_GT400_hbb:r_zh_gt400[1,-5,5]' -o ws_counting.root +text2workspace.py -m 125 counting/combined_ratesOnly.txt -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/.*ZH_lep_PTV_75_150_hbb:r_zh_75_150[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_0J_hbb:r_zh_150_250noj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_GE1J_hbb:r_zh_150_250wj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_250_400_hbb:r_zh_250_400[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_GT400_hbb:r_zh_gt400[1,-5,5]' -o ws_counting.root ``` -where we use `--PO 'map=bin/process:poi[init, min, max]'`. So in the example above a signal POI is assigned to each gen-level bin independent on reco-level bin.
This allows to take into account the non-trivial acceptance effects. One can also perform bin-by-bin unfolding using the mapping to the bin names rather that processes, e.g. `'map= vhbb_Zmm_gt400_13TeV/.*:r_reco_zh_gt400[1,-5,5]'`, but this method is not recommended and can be used for tests. +In the example given above a signal POI is assigned to each gen-level bin, independently of the reco-level bin. This allows non-trivial acceptance effects to be taken into account. One can also perform bin-by-bin unfolding by mapping to the bin names rather than the processes, e.g. `'map= vhbb_Zmm_gt400_13TeV/.*:r_reco_zh_gt400[1,-5,5]'`, but this method is not recommended and should be used only for tests, e.g. as another way to check that the migration matrix is close to diagonal. -To extract the measurement let's run the initial fit first: +To extract the measurement, let's first run the initial fit using the `MultiDimFit` method implemented in `Combine` to extract the best-fit values and uncertainties of all floating parameters: ```shell -combineTool.py -M MultiDimFit --datacard ws_counting.root --setParameters r_zh_250_400=1,r_zh_150_250noj=1,r_zh_75_150=1,r_zh_150_250wj=1,r_zh_gt400=1 --redefineSignalPOIs r_zh_75_150,r_zh_150_250noj,r_zh_150_250wj,r_zh_250_400,r_zh_gt400 --saveWorkspace -t -1 --X-rtd FAST_VERTICAL_MORPH +combineTool.py -M MultiDimFit --datacard ws_counting.root --setParameters r_zh_250_400=1,r_zh_150_250noj=1,r_zh_75_150=1,r_zh_150_250wj=1,r_zh_gt400=1 --redefineSignalPOIs r_zh_75_150,r_zh_150_250noj,r_zh_150_250wj,r_zh_250_400,r_zh_gt400 -t -1 ``` +> With the option `-t -1` we tell `Combine` to fit the Asimov dataset instead of the actual data. +> The `--setParameters <param>=<value>` option sets the initial value of the parameter named `<param>`. +> `--redefineSignalPOIs r_zh_75_150,r_zh_150_250noj,r_zh_150_250wj,r_zh_250_400,r_zh_gt400` sets the POIs to the given comma-separated list, instead of the default `r`. -While the uncertainties on the POI can be extracted in multiple ways, the most robust way is to run the likelihood scans for POI corresponding to each gen-level bin, it allows to spot discontinuities in the likelihood shape in case of the problems with the model. +While the uncertainties on the POIs can be extracted in multiple ways, the most robust way is to run the likelihood scan for the POI corresponding to each gen-level bin; this allows one to spot discontinuities in the likelihood shape in case of problems with the model. ```shell -combineTool.py -M MultiDimFit --datacard ws_counting.root -t -1 --setParameters r_zh_250_400=1,r_zh_150_250noj=1,r_zh_75_150=1,r_zh_150_250wj=1,r_zh_gt400=1 --redefineSignalPOIs r_zh_75_150,r_zh_150_250noj,r_zh_150_250wj,r_zh_250_400,r_zh_gt400 --saveWorkspace --algo=grid --points=100 -P r_zh_75_150 --floatOtherPOIs=1 -n scan_r_zh_75_150 +combineTool.py -M MultiDimFit --datacard ws_counting.root -t -1 --setParameters r_zh_250_400=1,r_zh_150_250noj=1,r_zh_75_150=1,r_zh_150_250wj=1,r_zh_gt400=1 --redefineSignalPOIs r_zh_75_150,r_zh_150_250noj,r_zh_150_250wj,r_zh_250_400,r_zh_gt400 --algo=grid --points=100 -P r_zh_75_150 --floatOtherPOIs=1 -n scan_r_zh_75_150 ``` Now we can plot the likelihood scan and extract the expected intervals.
```shell -python scripts/plot1DScan.py higgsCombinescan_r_zh_75_150.MultiDimFit.mH120.root -o r_zh_75_150 +python scripts/plot1DScan.py higgsCombinescan_r_zh_75_150.MultiDimFit.mH120.root -o r_zh_75_150 --POI r_zh_75_150 ``` -> Repeat for all POIs +* Repeat for all POIs. ## Shape analysis with control regions @@ -131,7 +132,7 @@ One of the advantages of the maximum likelihood unfolding is the flexibility to The datacards for this part of the exercise are located in `full_model_datacards/`, where you can find a separate datacard for each region within the `full_model_datacards/regions` directory and also a combined datacard `full_model_datacards/comb_full_model.txt`. -As you will find the datacards also contain several background processes. To control them properly we will add the regions enriched in the respective backgrounds. Then we can define a common set rate parameters for signal and control regions to scale the rates or other parameters affecting their shape. +As you will find, the datacards also contain several background processes. To control them properly we will add regions enriched in the respective backgrounds. Then we can define a common set of rate parameters for the signal and control regions to scale the rates, or other parameters affecting their shapes. For the shape datacards one has to specify the mapping of histograms to channels/processes as described below: @@ -142,26 +143,33 @@ Then the `shape` nuisance parameters can be defined in the systematics block in In a realistic CMS analysis there are hundreds of nuisance parameters corresponding to various sources of systematic uncertainty. -When we unfold to the gen-level observable we should remove the nuisances affecting the rate of the gen-level bins, i.e. the `lnN` NPs: `THU_ZH_mig*, THU_ZH_inc` and keep only the acceptance `shape` uncertainties: `THU_ZH_acc` and `THU_ggZH_acc`. This can be achieved by freezing the respective nuisance parameters with the option `--freezeParameters par_name1,par_name2`. Alternatively you can create a group following the syntax given below at the end of the combined datacard, and freeze the parameters with the `--freezeNuisanceGroups group_name` option. +When we unfold to the gen-level observable we should remove the nuisance parameters affecting the rates of the gen-level bins, i.e. the `lnN` NPs `THU_ZH_mig*` and `THU_ZH_inc`, and keep only the acceptance `shape` uncertainties `THU_ZH_acc` and `THU_ggZH_acc`, which do not scale the inclusive cross sections by construction. +This can be achieved by freezing the respective nuisance parameters with the option `--freezeParameters par_name1,par_name2`. Alternatively, you can define a group of parameters at the end of the combined datacard, following the syntax given below, and freeze them with the `--freezeNuisanceGroups group_name` option. + ``` [group_name] group = uncertainty_1 uncertainty_2 ...
uncertainty_N ``` -Now we can create the workspace using the same `multiSignalmodel` +Now we can create the workspace using the same `multiSignalModel`: ```shell -text2workspace.py -m 125 full_model_datacards/comb_full_model.txt -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/.*ZH_lep_PTV_75_150_hbb:r_zh_75_150[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_0J_hbb:r_zh_150_250noj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_GE1J_hbb:r_zh_150_250wj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_250_400_hbb:r_zh_250_400[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_GT400_hbb:r_zh_gt400[1,-5,5]' --for-fits --no-wrappers --X-pack-asympows --optimize-si.png-constraints=cms --use-histsum -o ws_full.root +text2workspace.py -m 125 full_model_datacards/comb_full_model.txt -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/.*ZH_lep_PTV_75_150_hbb:r_zh_75_150[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_0J_hbb:r_zh_150_250noj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_150_250_GE1J_hbb:r_zh_150_250wj[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_250_400_hbb:r_zh_250_400[1,-5,5]' --PO 'map=.*/.*ZH_lep_PTV_GT400_hbb:r_zh_gt400[1,-5,5]' --for-fits --no-wrappers --X-pack-asympows --optimize-simpdf-constraints=cms --use-histsum -o ws_full.root ``` - -> Following the instructions given earlier, create the workspace and run the initial fit with `-t -1` and set the name `-n .BestFit`. +> As you might have noticed, we are using a few extra options, `--for-fits --no-wrappers --X-pack-asympows --optimize-simpdf-constraints=cms --use-histsum`, to create the workspace. They are needed to construct a more optimised pdf using the `CMSHistSum` class implemented in `Combine`, which significantly lowers the memory consumption. + +* Following the instructions given earlier, create the workspace and run the initial fit with `-t -1`, setting the name with `-n .BestFit`. Since this time the datacards include shape uncertainties as well as additional categories to improve the background description, the fit might take much longer, but we can submit condor jobs and have the results ready to look at in a few minutes. + ```shell combineTool.py -M MultiDimFit -d ws_full.root --setParameters r_zh_250_400=1,r_zh_150_250noj=1,r_zh_75_150=1,r_zh_150_250wj=1,r_zh_gt400=1 --redefineSignalPOIs r_zh_75_150,r_zh_150_250noj,r_zh_150_250wj,r_zh_250_400,r_zh_gt400 -t -1 --X-rtd FAST_VERTICAL_MORPH --algo=grid --points=50 --floatOtherPOIs=1 -n .scans_blinded --job-mode condor --task-name scans_zh --split-points 1 --generate P:n::r_zh_gt400,r_zh_gt400:r_zh_250_400,r_zh_250_400:r_zh_150_250wj,r_zh_150_250wj:r_zh_150_250noj,r_zh_150_250noj:r_zh_75_150,r_zh_75_150 ``` -The job submission is handled by the `CombineHarvester`, the combination of options `--job-mode condor --task-name scans_zh --split-points 1 --generate P:n::r_zh_gt400,r_zh_gt400:r_zh_250_400,r_zh_250_400:r_zh_150_250wj,r_zh_150_250wj:r_zh_150_250noj,r_zh_150_250noj:r_zh_75_150,r_zh_75_150` will submit the jobs to HTCondor for POI. -You can add `--dry-run` option to create the submissions files first and check them, and then submit the jobs with `condor_submit condor_scans_zh.sub`. +> The option `--X-rtd FAST_VERTICAL_MORPH` is added here, and to all the `combineTool.py -M MultiDimFit ...` calls below, to speed up the minimisation.
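+
+Before moving on, it can also be useful to check that all five POIs were indeed created in `ws_full.root`. A minimal PyROOT sketch for such a check is given below; it assumes the default workspace name `w` written by `text2workspace.py`.
+
+```python
+import ROOT
+
+# open the workspace produced by text2workspace.py (stored under the default name "w")
+f = ROOT.TFile.Open("ws_full.root")
+w = f.Get("w")
+
+# print the initial value and allowed range of each POI defined with the multiSignalModel
+for poi in ["r_zh_75_150", "r_zh_150_250noj", "r_zh_150_250wj", "r_zh_250_400", "r_zh_gt400"]:
+    var = w.var(poi)
+    print(poi, var.getVal(), var.getMin(), var.getMax())
+```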
+ +> The job submission is handled by `CombineHarvester`; the combination of options `--job-mode condor --task-name scans_zh --split-points 1 --generate P:n::r_zh_gt400,r_zh_gt400:r_zh_250_400,r_zh_250_400:r_zh_150_250wj,r_zh_150_250wj:r_zh_150_250noj,r_zh_150_250noj:r_zh_75_150,r_zh_75_150` will submit the jobs to HTCondor for each POI. You can add the `--dry-run` option to create the submission files first and check them, and then submit the jobs with `condor_submit condor_scans_zh.sub`. + +> If you are running the tutorial on a cluster where HTCondor is not available you can also submit the jobs to the Slurm system; just change `--job-mode condor` to `--job-mode slurm`. After all jobs are completed we can combine the files for each POI: @@ -171,13 +179,12 @@ do hadd -k -f scan_${p}_blinded.root higgsCombine.scans_blinded.${p}.POINTS.*.MultiDimFit.mH120.root done - And finally plot the likelihood scans ```shell python scripts/plot1DScan.py scan_r_zh_75_150_blinded.root -o scan_r_zh_75_150_blinded --POI r_zh_75_150 --json summary_zh_stxs_blinded.json ``` -![](figures/scan_r_zh_75_150_blinded.png) +![](figures/scan_plot_r_zh_75_150_blinded.png) ## Impacts @@ -187,7 +194,7 @@ One of the important tests before we move to the unblinding stage is to check th combineTool.py -M Impacts -d ws_full.root -m 125 --robustFit 1 --doInitialFit --redefineSignalPOIs r_zh_75_150,r_zh_150_250noj,r_zh_150_250wj,r_zh_250_400,r_zh_gt400 ``` -Note that it is important to add the option `--redefineSignalPOIs [list of parameters]`, to produce the impacts for all POIs we defined when the workspace was created with the `multiSignalModel`. +> Note that it is important to add the option `--redefineSignalPOIs [list of parameters]` to produce the impacts for all the POIs we defined when the workspace was created with the `multiSignalModel`. After the initial fit is completed we can perform the likelihood scans for each nuisance parameter as shown below @@ -204,18 +211,17 @@ combineTool.py -M Impacts -d ws_full.root -m 125 --redefineSignalPOIs r_zh_75_15 plotImpacts.py -i impacts.json -o impacts_r_zh_75_150 --POI r_zh_75_150 ``` -![](figures/impacts.png) -> Do you observe differences in impacts plots for different POIs, do these differences make sense to you? - +![](figures/impacts_zh_75_150.png) +* Do you observe differences in the impacts plots for different POIs? Do these differences make sense to you? ## Unfolded measurements Now that we studied the NP impacts for each POI, we can finally extract the measurements. -Note, that in this exercise we are skipping couple of checks that have to be done before the unblinding. Namely the goodness of fit test and the post-fit plots of folded observables. Both of these checks were detailed in the previous exercises, you can find the description under the following links. +Note that, for the purposes of this tutorial, we are skipping further checks and validation that you should do in your analysis, namely the goodness-of-fit test and the post-fit plots of the folded observables. Both of these checks were detailed in the previous exercises; you can find the description under the following links. At this stage we'll run the `MultiDimFit` again scanning each POI to calculate the intervals, but this time we'll remove the `-t -1` option to extract the unblinded results. -Also since we want to unfold the measurements to the gen-level observables, i.e.
extract the cross sections, we have to remove the theoretical uncertainties affecting the rates of signal processes, we can simply do this be freezing them `--freezeNuisanceGroups `. +Also, since we want to unfold the measurements to the gen-level observables, i.e. extract the cross sections, we have to remove the theoretical uncertainties affecting the rates of the signal processes. We can simply do this by freezing them with `--freezeNuisanceGroups group_name`, using the `group_name` you assigned earlier in the tutorial. Plot the scans and collect the measurements in the json file `summary_zh_stxs.json`. @@ -223,7 +229,7 @@ Plot the scans and collect the measurements in the json file `summary_zh_stxs.js python scripts/plot1DScan.py scan_r_zh_75_150.root -o r_zh_75_150 --POI r_zh_75_150 --json summary_zh_stxs.json ``` -![](figures/r_zh_75_150.png) +![](figures/scan_plot_r_zh_75_150.png) Repeat the same command for other POIs to fill the `summary_zh_stxs.json`, which can then be used to create the cross section plot as shown below. @@ -249,3 +255,4 @@ python scripts/plotCorrelations_pois.py -i fitDiagnostics.full_model.root:fit_s ``` ![](figures/correlationMatrix_pois.png) +
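+
+If you want to look up individual correlations numerically rather than reading them off the plot, a minimal PyROOT sketch using the same `fit_s` result from the FitDiagnostics output is given below; the pair of POIs is just an example.
+
+```python
+import ROOT
+
+# open the FitDiagnostics output and retrieve the signal+background fit result
+f = ROOT.TFile.Open("fitDiagnostics.full_model.root")
+fit_s = f.Get("fit_s")  # RooFitResult
+
+# correlation between two of the POIs defined with the multiSignalModel
+print(fit_s.correlation("r_zh_75_150", "r_zh_150_250noj"))
+
+# full correlation matrix (ordering follows fit_s.floatParsFinal())
+fit_s.correlationMatrix().Print()
+```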