"GC" - configurable disk usage beyond which old blocks get axed #80
Conversation
myceli/src/handlers.rs (outdated)

```rust
        path: path.to_string(),
        cid: root_cid,
    }))

pub fn import_file(path: &str, storage: &mut Rc<Storage>) -> Result<Message> {
```
`Rc`, not `Arc`?
Yeah, that's something I haven't really looked into changing, but I'm up for it if we want a different design.
There really are only 2 threads: this one (the listener) and the shipper. The shipper gets its own copy of Storage (and therefore StorageProvider), which honestly rubs me the wrong way.
It's been logically equivalent up to this point, because all the methods were `&self`; the `incremental_gc` is the very first `&mut self`. All the mutable state was held on disk, so the 2 StorageProvider instances acted pretty much like they were the same, since they secretly shared the same state. Right now it's still "OK", because nothing Shipper calls into interacts with that state (which is about disk usage).
If we keep that, I'm still not sure we really need `Rc`. I don't see it held anywhere else, so maybe a `&Storage` would suffice.
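To make the "Rc not Arc?" question concrete, here is a minimal sketch (the `Storage` struct and function names are invented for illustration, not myceli's actual types): `Arc` can be handed across the listener/shipper thread boundary, while `Rc` cannot, because `Rc`'s reference count is not atomic and `Rc` is therefore `!Send`.

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for myceli's Storage; the field is illustrative only.
struct Storage {
    blocks: Vec<String>,
}

// Arc<Storage> implements Send (its refcount is atomic), so it can be moved
// into another thread. Swapping Arc for Rc here is a compile error, because
// Rc<Storage> is !Send -- which is the point behind the "Rc not Arc?" question.
fn count_blocks_on_other_thread(storage: Arc<Storage>) -> usize {
    thread::spawn(move || storage.blocks.len()).join().unwrap()
}

fn main() {
    let storage = Arc::new(Storage {
        blocks: vec!["bafyblock1".into(), "bafyblock2".into()],
    });
    println!("{}", count_blocks_on_other_thread(storage));
}
```

Of course, if the handle never crosses a thread at all, a plain `&Storage` borrow sidesteps the question entirely.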
Updated: that Rc is gone now.
We still have 2 different StorageProvider instances happening, FWIW.
Upon further reflection, having 2 separate StorageProvider instances was a problem for this PR. Namely, incoming blocks from the remote side get added to the Shipper's instance, where GC is never run, so if they put you over the disk usage limit that would not trigger removing old blocks. It's not a tragedy, because the next time the process starts up it would work OK, but still... So I did change that.
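A minimal sketch of the bug described above, with all names invented: two StorageProvider handles share the same underlying state (on disk in myceli, an `Rc<RefCell<..>>` here), but only the listener's handle runs GC, so blocks arriving via the shipper can leave disk usage over the limit until the listener next writes.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Shared mutable state; in myceli this lives on disk, not in memory.
#[derive(Default)]
struct DiskState {
    used_bytes: u64,
}

struct StorageProvider {
    state: Rc<RefCell<DiskState>>,
    runs_gc: bool, // only the listener's instance has GC wired up
    limit: u64,
}

impl StorageProvider {
    fn put_block(&mut self, size: u64) {
        self.state.borrow_mut().used_bytes += size;
        if self.runs_gc {
            self.incremental_gc();
        }
    }

    // The first &mut self method mentioned in the thread: evict old blocks
    // until usage is back under the configured limit (simplified here).
    fn incremental_gc(&mut self) {
        let mut state = self.state.borrow_mut();
        if state.used_bytes > self.limit {
            state.used_bytes = self.limit; // pretend old blocks were axed
        }
    }
}

// Returns disk usage (after a shipper write, after a listener write).
fn simulate() -> (u64, u64) {
    let disk = Rc::new(RefCell::new(DiskState::default()));
    let mut listener = StorageProvider { state: Rc::clone(&disk), runs_gc: true, limit: 100 };
    let mut shipper = StorageProvider { state: Rc::clone(&disk), runs_gc: false, limit: 100 };

    shipper.put_block(150); // remote block: no GC, usage stays over the limit
    let after_shipper = disk.borrow().used_bytes;

    listener.put_block(10); // local write: GC finally trims back to the limit
    let after_listener = disk.borrow().used_bytes;

    (after_shipper, after_listener)
}

fn main() {
    let (over, trimmed) = simulate();
    println!("after shipper: {over}, after listener: {trimmed}");
}
```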
The config option that specifies how much disk myceli is allowed to use:
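The actual key name and format are not shown in this thread; purely as a hypothetical sketch, such an option might look like:

```toml
# Hypothetical sketch only -- the real key name is not visible in this thread.
# Cap on myceli's on-disk block storage; GC axes old blocks once it is exceeded.
disk_usage_limit = "1GiB"
```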