"GC" - configurable disk usage beyond which old blocks get axed #80

Merged: 4 commits into main on Aug 21, 2023

Conversation

John-LittleBearLabs (Collaborator)

    INFO - Removed "/home/lbl/.cache/myceli/storage.db/blocks/kmug1k2u6jgsmyb4tyaxxtoeq9xe770f76dwbjhgrxfgwnvsoaw5d" as usage (135085233) > max (127925248)

The config setting that specifies how much disk space myceli is allowed to use:

    //How much storage space should Local Storage use? Measured in kiB. Default is 1 GiB
    pub disk_usage: u64,
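
For orientation, here is a minimal sketch of the behavior the log line and config field above describe: once the blocks directory grows past the configured cap, the oldest blocks get axed first. Only the `disk_usage`-in-kiB unit and the `incremental_gc` name come from this thread; the directory scan, the oldest-first ordering, and the exact signature are assumptions, not the actual myceli implementation.

    use std::fs;
    use std::io::Result;
    use std::path::{Path, PathBuf};
    use std::time::SystemTime;

    /// Remove the oldest blocks until usage drops back under the configured cap.
    /// `disk_usage_kib` is in kiB, matching the config field above.
    fn incremental_gc(blocks_dir: &Path, disk_usage_kib: u64) -> Result<()> {
        let max_bytes = disk_usage_kib * 1024;

        // Collect (mtime, size, path) for every block file and total up usage.
        let mut blocks: Vec<(SystemTime, u64, PathBuf)> = Vec::new();
        let mut usage: u64 = 0;
        for entry in fs::read_dir(blocks_dir)? {
            let entry = entry?;
            let meta = entry.metadata()?;
            if meta.is_file() {
                usage += meta.len();
                blocks.push((meta.modified()?, meta.len(), entry.path()));
            }
        }

        // Oldest blocks sort first, so they are the ones that get axed.
        blocks.sort_by_key(|(mtime, _, _)| *mtime);

        for (_, size, path) in blocks {
            if usage <= max_bytes {
                break;
            }
            fs::remove_file(&path)?;
            println!("INFO - Removed {:?} as usage ({}) > max ({})", path, usage, max_bytes);
            usage -= size;
        }
        Ok(())
    }

    fn main() -> Result<()> {
        // 1 GiB default, expressed in kiB to match the config field above.
        incremental_gc(Path::new("/tmp/blocks"), 1024 * 1024)
    }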

    path: path.to_string(),
    cid: root_cid,
    }))
    pub fn import_file(path: &str, storage: &mut Rc<Storage>) -> Result<Message> {
Collaborator

Rc not Arc ?

Collaborator Author

Yeah, that's something I haven't really looked into changing, but I'm up for it if we want a different design.
There really are only two threads: this one (the listener) and the shipper.

The shipper gets its own copy of Storage (and therefore StorageProvider), which honestly rubs me the wrong way.

It's been logically equivalent up to this point because all the methods were &self; incremental_gc is the very first &mut self. All the mutable state was held on disk, so the two StorageProvider instances acted pretty much as if they were the same, since they secretly shared the same state. Right now it's still "OK" because nothing the shipper calls into interacts with that state (which is about disk usage).

If we keep that, I'm still not sure we really need Rc. I don't see it held anywhere else, so maybe a &Storage would suffice.
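
To make the question concrete, a small stand-alone sketch (with placeholder `Storage` and `Message` types, not the real myceli definitions) of why a plain `&Storage` would carry the same meaning as `&mut Rc<Storage>` when nothing else holds the Rc:

    use std::rc::Rc;

    // Placeholder types standing in for the real myceli definitions.
    struct Storage;
    struct Message;

    // With the wrapper: &mut Rc<Storage> grants no extra capability here,
    // since Rc only ever hands out shared access to its contents anyway.
    fn import_file_rc(_path: &str, _storage: &mut Rc<Storage>) -> Message {
        Message
    }

    // Without it: a plain shared borrow says the same thing more directly.
    fn import_file(_path: &str, _storage: &Storage) -> Message {
        Message
    }

    fn main() {
        let mut storage = Rc::new(Storage);
        let _ = import_file_rc("some.car", &mut storage);
        let _ = import_file("some.car", &storage);
    }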

Collaborator Author

Updated: that Rc is gone now.

We still have 2 different StorageProvider instances happening, FWIW.

@John-LittleBearLabs (Collaborator Author)

Upon further reflection, having two separate StorageProvider instances was a problem with this PR. Namely, incoming blocks from the remote side get added to the shipper's instance, where GC is never run, so if they put you over the disk usage limit, old blocks would not get removed. It's not a tragedy, because the next time the process starts up it would work OK, but still...

So I did change that Box<dyn StorageProvider> to an Arc<Mutex<dyn StorageProvider>> and switched to passing around the Arc rather than the config needed to create a StorageProvider.
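
A minimal sketch of that shape, with stand-in trait methods and types rather than the actual myceli code; the point it illustrates is that the shipper clones the Arc instead of constructing its own provider, so blocks it adds land in the same state the listener's GC sees:

    use std::sync::{Arc, Mutex};
    use std::thread;

    trait StorageProvider: Send {
        fn add_block(&mut self, cid: &str);
        fn incremental_gc(&mut self);
    }

    struct FileProvider {
        blocks: Vec<String>,
    }

    impl StorageProvider for FileProvider {
        fn add_block(&mut self, cid: &str) {
            self.blocks.push(cid.to_string());
        }
        fn incremental_gc(&mut self) {
            // A real implementation would compare disk usage to the configured max.
            self.blocks.truncate(10);
        }
    }

    fn main() {
        let provider: Arc<Mutex<dyn StorageProvider>> =
            Arc::new(Mutex::new(FileProvider { blocks: Vec::new() }));

        // The shipper gets a clone of the Arc, not a second provider instance,
        // so blocks it adds are visible to the listener's GC.
        let shipper = Arc::clone(&provider);
        let handle = thread::spawn(move || {
            shipper.lock().unwrap().add_block("block-from-remote");
        });

        provider.lock().unwrap().incremental_gc();
        handle.join().unwrap();
    }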
@lightsofapollo

@John-LittleBearLabs merged commit 06a1f08 into main on Aug 21, 2023
1 check passed