Add a better lazy binomial heap #1
Sounds interesting, I believe I get the idea. Having to force all suspensions is fine if you need to take out all elements, because then the cost is amortized. But if all you want is the first element after a series of insertions, you obviously don't want to force a suspension for each of those elements first, only as much as is necessary (logarithmic cost).

But wouldn't a fix also impose logarithmic time on insertion? Insertion is currently constant time only, because nothing is forced and only a suspension is created. But I guess having really fast insertion is a bad trade-off. To me, in practice, logarithmic time is basically like constant time with a big constant anyway, but linear time worst-case operations seem worth avoiding.

I unfortunately don't have time right now to make any additions. Please feel free to submit a modified version that eliminates the unnecessarily bad worst-case behavior.
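For illustration, here is a minimal OCaml sketch of the kind of representation being discussed, assuming the current code follows Okasaki's textbook lazy binomial heap. The names and details below are illustrative guesses, not the repository's actual code:

```ocaml
type 'a tree = Node of int * 'a * 'a tree list  (* rank, root, children *)
type 'a heap = 'a tree list Lazy.t              (* the whole spine is one suspension *)

let empty : 'a heap = lazy []

let rank (Node (r, _, _)) = r

(* Link two trees of equal rank; the smaller root becomes the new root
   (polymorphic comparison, just to keep the sketch short). *)
let link (Node (r, x, xs) as t1) (Node (_, y, ys) as t2) =
  if x <= y then Node (r + 1, x, t2 :: xs) else Node (r + 1, y, t1 :: ys)

(* Carry a tree into an already-forced list of trees of increasing rank. *)
let rec ins_tree t = function
  | [] -> [ t ]
  | t' :: rest ->
      if rank t < rank t' then t :: t' :: rest
      else ins_tree (link t t') rest

(* O(1): nothing is forced here, we only wrap one more suspension around
   the previous heap.  After n consecutive inserts the heap is n nested
   suspensions deep, so the first extraction has to unwind all of them. *)
let insert x heap = lazy (ins_tree (Node (0, x, [])) (Lazy.force heap))
```

This is exactly why a long run of insertions is cheap but leaves the first extraction to unwind the whole chain of suspensions.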
The *worst case* insertion time degrades from constant to logarithmic, but
the *amortized* time remains constant. It's really quite fine.
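A quick way to see the amortized claim is the standard binary-counter argument (stated for a single-threaded sequence of operations; persistent use needs Okasaki-style debit accounting): over n consecutive insertions into an empty heap, a carry out of rank i happens at most n/2^(i+1) times, rounding down, so the total number of links across all ranks is at most n. That is constant amortized work per insertion, even though a single insertion can cascade through all of the roughly log2 n ranks.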
Ah, yes, indeed, I was only thinking of the worst case here. It's nice that this worst-case logarithmic cost is amortized over the insertions of all elements, too, and thus vanishes to leave only a constant.
Okasaki's lazy binomial heaps have good amortized bounds, but extraction is linear in the worst case. It's quite easy to make all operations worst-case logarithmic by representing the spine as a stream instead of a suspended list. Insertion can use a technique I think is due to Hinze and Paterson (see *Finger Trees: A Simple General-purpose Data Structure*). On insertion into a `One` node, the existing child of the node is forced, and then a suspension is created for the cascade. The amortized bounds remain the same, but the worst-case bounds improve as I described.

The practical performance can be expected to improve as well, thanks in large part to caching. Specifically, with Okasaki's version, a bunch of insertions in a row will create a huge chain of suspensions. The first extraction must walk this chain (all over memory) to create the actual spine. With the stream version, there's at most one suspension for each vertebra of the spine.
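For concreteness, a minimal OCaml sketch of one way to read this proposal; all type and function names below are my own guesses, not code from this repository or from the finger-tree paper. The spine is a stream of binary digits, and inserting into a `One` digit forces the existing suspended tail before suspending the carry cascade:

```ocaml
type 'a tree = Node of 'a * 'a tree list   (* binomial tree: root and children *)

type 'a digit = Zero | One of 'a tree

(* The spine is a stream: one digit per rank, with a suspended tail. *)
type 'a spine = Nil | Cons of 'a digit * 'a spine Lazy.t
type 'a heap = 'a spine

let empty : 'a heap = Nil

(* Link two binomial trees of equal rank; the smaller root stays on top
   (polymorphic comparison, just to keep the sketch short). *)
let link (Node (x, xs) as t1) (Node (y, ys) as t2) =
  if x <= y then Node (x, t2 :: xs) else Node (y, t1 :: ys)

(* Carry a tree of the current rank into the spine.  In the [One] case the
   existing suspended tail is forced now, and only the cascade itself is
   suspended, so at most one unforced suspension sits behind each vertebra. *)
let rec ins_tree t = function
  | Nil -> Cons (One t, lazy Nil)
  | Cons (Zero, rest) -> Cons (One t, rest)
  | Cons (One t', rest) ->
      let tail = Lazy.force rest in
      Cons (Zero, lazy (ins_tree (link t t') tail))

let insert x heap = ins_tree (Node (x, [])) heap

(* Walking the spine forces at most one suspension per vertebra, so
   finding the minimum is logarithmic in the worst case. *)
let find_min heap =
  let rec go best = function
    | Nil -> best
    | Cons (Zero, rest) -> go best (Lazy.force rest)
    | Cons (One (Node (x, _)), rest) ->
        let best' =
          match best with
          | None -> Some x
          | Some b -> Some (min x b)
        in
        go best' (Lazy.force rest)
  in
  go None heap
```

With this shape, a run of insertions leaves at most one unforced suspension behind each vertebra of the spine, which is the property described above; a single insertion can still cascade through up to log2 n ranks, matching the worst-case logarithmic bound discussed earlier.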