update-ability-decks: increase structural sharing (?)
The following micro-benchmark shows that repeatedly updating a hash is consistently more memory-efficient than reconstructing it:

#lang racket

(define base (hash 'a 1 'b 2 'c 3))
(define f add1)

(collect-garbage)
(collect-garbage)
(collect-garbage)
(define m (current-memory-use))
;; reconstruction: build a brand-new hash on every iteration
(void
 (for/fold ([x base]) ([_i 1000])
   (for/hash ([(k v) (in-hash x)])
     (values k (f v)))))
(println (- (current-memory-use) m))

(collect-garbage)
(collect-garbage)
(collect-garbage)
(define m2 (current-memory-use))
;; updating: functionally update the existing hash, key by key
(void
 (for/fold ([x base]) ([_i 1000])
   (for/fold ([acc x]) ([k (in-hash-keys x)])
     (hash-update acc k f))))
(println (- (current-memory-use) m2))

On my machine with _i up to 100, I consistently get (no variation at all across runs):

25984 ; reconstruction
21216 ; updating

With _i up to 1000, as in the program above (same consistency):

240624 ; reconstruction
192640 ; updating

Cranking _i up to 100000 and wrapping (time) around the outer (for/fold) shows that the updating method is actually _slower_ (by ~20 ms of real time), but my hashes have fewer than 10 keys on average, and we know long GC pauses are part of the performance problem.
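For illustration, here is a minimal sketch of the pattern the benchmark motivates: update only the keys that change instead of rebuilding the whole hash, so the result shares structure with the original. The deck representation and names below (deck, spend-ability) are made up for this sketch, not identifiers from the actual module:

#lang racket

;; Hypothetical "deck": an immutable hash from ability name to count.
(define deck (hash 'fire 3 'ice 2 'heal 1))

;; Before: rebuild the entire hash even though only one key changes.
(define (spend-ability/rebuild d ability)
  (for/hash ([(k v) (in-hash d)])
    (values k (if (eq? k ability) (sub1 v) v))))

;; After: functionally update the single key, sharing the rest of the hash.
(define (spend-ability d ability)
  (hash-update d ability sub1))

(spend-ability deck 'fire) ; fire is now 2; ice and heal are unchanged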