
I modified the Intt_HitUnpacking to make it able to handle different coll… #1036

Open
wants to merge 1 commit into base: master
Conversation

ChengWeiShih

…ision type.

The built-in switch TRACKING::pp_mode is used to set the collision type. If pp_mode == false, we currently disable the time_bucket information for the TrkrHitSets and apply the bco_diff cut to reject the background hits.

@sphenix-jenkins-ci

For repository maintainers, please start the CI check manually (feedback)

This is an automatic message to assist in manually starting the CI check for this pull request, commit 3857daa378bf42f0a13bc0cf06b4efacd455dfe2. macros pull requests require a manual start for CI checks, in particular selecting which coresoftware and calibrations versions to check against this macros pull request.

sPHENIX software maintainers: please make your input here and start the Build:

build

Note:

  1. If needed, fill in the pull request ID for the coresoftware pull request, e.g. origin/pr/1697/merge for PR#1697 in sha_coresoftware. The default is to check against the master branch.
  2. Click the Build button at the end of the long web page to start the test.

Automatically generated by sPHENIX Jenkins continuous integration

{
inttunpacker->writeInttEventHeader(true);
inttunpacker->set_bcoFilter(true); // default : false
inttunpacker->runInttStandalone(true); // default: false
Contributor

Why should this be set in AuAu? We will have the GL1 enabled, so how else will we determine the time bucket properly?

Author
@ChengWeiShih ChengWeiShih Jan 14, 2025

In the AuAu data, we currently don't use the timebucket, which means we consider the hits of 3 crossings (trigger_bco - 1, trigger_bco, trigger_bco + 1) as hits from the same crossing. This is actually different from the default setting of InttCombinedRawDataDecoder, so we need to change the flags.


The reason we don't separate the hits bco by bco, as we do for the pp data set, is that 1. the collision rate is low, and 2. the coarse delay of the INTT may not be finely tuned, which means some hits can fall into the next bco timing (trigger_bco + 1).

Author

This is more about the hits within one strobe length (one F4A event). It has nothing to do with the GL1.

Contributor

In the AuAu data, currently we don't use the timebucket, which means we consider the hits of 3 crossings (trigger_bco -1, trigger_bco, trigger_bco + 1) as the hits from the same crossing

I'm not sure I understand this statement; the timebucket in all of the hitsets should then just be -1, 0, or 1. Why would we not fill it at all? That is a good quality check to make sure that the hits look like we expect. I don't understand how it has nothing to do with the GL1, because inherently any triggered event has something to do with the GL1.

Author

No, the switch runInttStandalone sets the time_bucket of all the hits to 1.

Author

For the p+p, what you say is correct. We don't want to mix up hits with different time buckets, as they can come from different collisions, so we should not merge them.

But for the AuAu run 54280, the number of crossings is 56, which means there is no collision one bco before or after the trigger BCO. So even if we merge the hits from trig_bco - 1, trig_bco, and trig_bco + 1, there is still at most one collision in this three-bco time window.

Sure, we can do what you say for the AuAu data, processing the hits the same way as for the pp runs. But the caveat is that if some hits have incorrect hit-timing assignments (hit timing falling into the next bco, trig_bco + 1), we might introduce more clusters than there should be, as in the following example.

Assume there are 310 hits in the trigger bco (trig_bco). Due to the imperfect INTT coarse-delay setting, some hits from trig_bco actually end up with the hit timing of trig_bco + 1 (incorrect hit-timing assignment). What we see in the data would then be:
300 hits in trig_bco (310 - 10, as there are 10 hits with incorrect hit-timing assignment)
12 hits in trig_bco + 1 (the 10 hits from trig_bco, plus, say, 2 noise hits in trig_bco + 1)

If we simply ignore the incorrect hit-timing assignment and do the clustering time_bucket by time_bucket, we end up with more clusters than there should be. Say a cluster is made of two hits; if one of the hits carries the hit timing trig_bco + 1, this cluster will be split into two clusters by the incorrect hit-timing assignment.

Contributor

If we set all the hits' time bucket to 0, doesn't that introduce unnecessary background? You are basically trading the hits that were incorrectly assigned to time bucket GL1 BCO + 1 (which should have been in the GL1 BCO) at the expense of adding in all hits from all BCOs not associated with the GL1. So you will cluster in a single time bucket, but your background will be much higher?

Author

Yes, you are right, this is the trade-off: to have all the signal hits considered in the clustering, the noise hits are also included. I am not sure whether we can assign a systematic uncertainty for this by doing the clustering with and without the time_bucket.

Contributor

Okay, I would suggest preparing a set of slides that describes this issue to discuss in the tracking meeting either tomorrow or Friday. I think we should discuss the issue as a group to decide the best way forward. Has this already been discussed in an INTT meeting?

Author

Sorry for the late reply; I was at the TPS annual meeting. Yes, I think I can give a presentation in the tracking meeting. Say Monday? This has been discussed in the INTT meeting, and we came up with this way to handle the AuAu clustering.

@@ -88,10 +88,16 @@ void Intt_HitUnpacking(const std::string& server="")
inttunpacker->Verbosity(verbosity);
inttunpacker->LoadHotChannelMapRemote("INTT_HotMap");
inttunpacker->set_triggeredMode(!isStreaming);
Contributor

It also occurred to me that I was wrong about using the pp_mode flag; we should really just use the result of the database query, which sets triggered mode here. So what is different about the flags set in your PR that is not already addressed by the set_triggeredMode flag?

Author

The modification here mainly accounts for the incorrect hit-timing assignment, as explained above.

Author
@ChengWeiShih ChengWeiShih Jan 15, 2025

The triggered-mode flag is for the time_bucket peak offset. The way we align the timing peak locations is different for streaming data and triggered data.

Contributor

Right, can we group everything together so that there is only one flag that sets whether or not the hit unpacker is triggered or streaming, and that function then does all of the necessary setup?

Contributor

My point is, the issue being discussed is not a pp vs Au Au issue, it is really a triggered vs streaming mode issue, right?

Author

I think it can only happen in Au+Au. I mean, for the p+p triggered mode, you 100% don't want to mix up hits from different crossings (due to the polarities of the bunches).
