
Add PandA HDF writer #90

Status: Closed. Wants to merge 8 commits from the pandahdf branch.
Conversation

@jsouter (Contributor) commented Dec 5, 2023

Adds an HDF writer for the PandA. Please don't merge yet: it relies on a few parts of PandABlocks-ioc that are still in flux (@evalott100 is working on these, namely a FlushNow record, and I expect a few of the PVs such as FullFileName and FileName to change soon).
It largely duplicates code from the AreaDetector HDF writer; I'm happy to move things around so that it can be made more generic.
I've also written some additional tests against a real detector running off PandABlocks-ioc, which are not included here.
Finally, are we happy for the PandA implementation to live outside the epics subdirectory when it currently uses exclusively EPICS backend signals?

    hdf: PandaHDF,
    directory_provider: DirectoryProvider,
    name_provider: NameProvider,
    **scalar_datasets_paths: str,
Collaborator:

I think we should check the ":CAPTURE" PV for each dataset, and only emit the StreamResource for that dataset if its value isn't "No".
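A minimal sketch of how that check might look inside collect_stream_docs, assuming the writer keeps a capture_signals dict and some _compose_resource helper (both names are illustrative, not the PR's actual attributes):

async def collect_stream_docs(self, indices_written: int):
    # Hypothetical: self.capture_signals maps dataset name -> its :CAPTURE signal
    for ds_name, capture_signal in self.capture_signals.items():
        capture = await capture_signal.get_value()
        if capture == "No":
            continue  # this dataset is not being captured, so emit no StreamResource
        # _compose_resource stands in for however the writer builds the document
        yield "stream_resource", self._compose_resource(ds_name, indices_written)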

Contributor Author (@jsouter):

Right, that makes sense

@jsouter (Contributor Author) commented Dec 6, 2023

I've moved the scalar datasets onto PandaHDF so that the capturing PVs can be connected along with the others. If we don't like the use of setattr/getattr we could instead do something like 7e38e15, but maybe overloading children() is ugly too.
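For illustration, the children() alternative might look roughly like this sketch (the contents of 7e38e15 are not reproduced here; the class and attribute names are assumed):

from ophyd_async.core import Device


class PandaHDFAlternative(Device):
    # Sketch only: __init__ is assumed to store the per-dataset capture signals
    # in a dict rather than setattr-ing them onto the instance.
    def __init__(self, capture_signals: dict, name: str = "") -> None:
        self._capture_signals = capture_signals
        super().__init__(name)

    def children(self):
        # Yield the attribute-based children as usual, then the dict-held signals,
        # so they are still named and connected along with the rest of the device.
        yield from super().children()
        yield from self._capture_signals.items()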

Comment on lines 10 to 29
# Imports assumed from ophyd-async as it stood at the time of this PR
from ophyd_async.core import Device
from ophyd_async.epics.signal import epics_signal_r, epics_signal_rw


class PandaHDF(Device):
    def __init__(self, prefix: str, name: str = "", **scalar_datasets: str) -> None:
        # Define some signals
        self.file_path = epics_signal_rw(str, prefix + ":HDF5:FilePath")
        self.file_name = epics_signal_rw(str, prefix + ":HDF5:FileName")
        self.full_file_name = epics_signal_r(str, prefix + ":HDF5:FullFileName")
        self.num_capture = epics_signal_rw(int, prefix + ":HDF5:NumCapture")
        self.num_written = epics_signal_r(int, prefix + ":HDF5:NumWritten_RBV")
        self.capture = epics_signal_rw(
            bool, prefix + ":HDF5:Capturing", prefix + ":HDF5:Capture"
        )
        self.flush_now = epics_signal_rw(bool, prefix + ":HDF5:FlushNow")
        self.scalar_datasets = scalar_datasets
        # Create a read-only capturing_<name> signal for each scalar dataset
        for ds_name, ds_path in self.scalar_datasets.items():
            setattr(
                self,
                "capturing_" + ds_name,
                epics_signal_r(bool, prefix + ":" + ds_path + ":CAPTURE"),
            )
        super().__init__(name)
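For context, a hypothetical instantiation of the class above; the prefix and field paths are made up for illustration. Each keyword argument maps a dataset name to its PandA field path, and a read-only capturing_<name> signal is created for it:

hdf = PandaHDF(
    "PANDA1",
    name="panda_hdf",
    samples="PCAP:SAMPLES",  # creates hdf.capturing_samples from PANDA1:PCAP:SAMPLES:CAPTURE
    ts_end="PCAP:TS_END",    # creates hdf.capturing_ts_end from PANDA1:PCAP:TS_END:CAPTURE
)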
Collaborator:

Just spotted this: we need to follow this pattern for areaDetector because there is no PVI there. For PandA, however, there should already be something to introspect that contains this. What we need is a stub that declares the types, something like

class PcapBlock(Device):
    active: SignalR[bool]

but for the HDF PVs. Something like:

class DataBlock(Device):
    num_capture: SignalRW[int]
    file_name: SignalRW[str]
    full_file_path: SignalR[str]
    ...
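To illustrate why the typed stub helps (DataBlock here repeats the stub above, and arm_for_capture is an assumed helper, not the PR's API): once the annotated signals are filled in by the PVI-aware PandA device at connect time, the writer can use them as plain attributes instead of setattr/getattr.

from ophyd_async.core import Device, SignalR, SignalRW


class DataBlock(Device):
    # Annotated stubs; a PVI-aware device is expected to fill these in on connect
    num_capture: SignalRW[int]
    file_name: SignalRW[str]
    full_file_path: SignalR[str]


async def arm_for_capture(data: DataBlock, file_name: str) -> None:
    # Hypothetical helper: the signals are ordinary attributes, no setattr/getattr
    await data.file_name.set(file_name)
    await data.num_capture.set(0)  # 0 is commonly used to mean "capture until stopped"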

Contributor Author (@jsouter):

Ah, I see. I've rewritten it to take advantage of this. The tests are broken for now; I'll create some dummy PVI objects so it can be tested with a sim and fix them, but I've tested against the real detector and the functionality seems the same.

Contributor Author (@jsouter) commented Dec 8, 2023:

It may be more Ophyd-like to make the to_capture parameter a dict of capture signals. Currently you pass PandaHDFWriter a dict of block name to a list of signal names, e.g. {'pcap': ['samples', 'ts_end']}, and it finds e.g. ts_end_capture in the pcap block from PVI and creates a signal that can be checked during collect_stream_docs using setattr/getattr.
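A rough sketch of that dict-driven lookup (resolve_capture_signals, the panda argument, and the _capture suffix handling are assumptions for illustration, not the PR's exact code):

def resolve_capture_signals(panda, to_capture: dict) -> dict:
    # Map "block.signal" keys to the corresponding <signal>_capture signals
    signals = {}
    for block_name, signal_names in to_capture.items():
        block = getattr(panda, block_name)  # e.g. panda.pcap, filled in via PVI
        for sig in signal_names:
            # e.g. pcap.ts_end_capture, checked later in collect_stream_docs
            signals[f"{block_name}.{sig}"] = getattr(block, f"{sig}_capture")
    return signals


# Hypothetical usage matching the example above:
# capture_signals = resolve_capture_signals(panda, {"pcap": ["samples", "ts_end"]})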

@jsouter force-pushed the pandahdf branch 2 times, most recently from 6c02324 to f3bd571 on December 8, 2023 16:03
@mrakitin (Member) commented Mar 7, 2024

@jsouter, @coretl, is it OK if we rebase this PR branch on main? It looks like it's out-of-date and not mergeable to main as is. @jwlodek and I are planning to use this branch for our NSLS-II work. Thanks!

@mrakitin (Member) commented Mar 7, 2024

Please disregard my request above - we need to address #140 before rebasing...

@olliesilvester (Contributor) commented Mar 8, 2024

@mrakitin I'm currently working on an updated version of this, see #141.

@olliesilvester (Contributor):
Closing in favour of https://github.com/bluesky/ophyd-async/pull/141/files

@jsouter deleted the pandahdf branch on October 17, 2024 08:51