Collection: add Variables field to interact with global variables #1572
Conversation
Force-pushed from 122aa9f to 80e83a4.
This is getting a bit unwieldy with Variable tests being added here in a follow-up commit. Split them off into variables.c.

Signed-off-by: Timo Beckers <timo@isovalent.com>
Force-pushed from abd91bc to d6f5e4b.
Signed-off-by: Timo Beckers <timo@isovalent.com>
Signed-off-by: Timo Beckers <timo@isovalent.com>
…scalls Signed-off-by: Timo Beckers <timo@isovalent.com>
Force-pushed from d6f5e4b to b9faaef.
```go
page := uintptr(offset / 4096)
if offset != int64(page)*4096 {
	return unsafe.Pointer(nil), EINVAL
}
```
Doesn't the kernel implement the same checks?
Not sure, I need to look at the kernel source for this. I borrowed this from the stdlib; it will be removed after the next Go release.
```go
ptr := unsafe.Pointer(unsafe.SliceData(make([]byte, size)))
addr := int(uintptr(ptr))
if internal.Align(addr, os.Getpagesize()) != addr {
	return nil, 0, fmt.Errorf("allocated memory is not page-aligned: %d", addr)
}
```
It is surprising that we reliably get a page-aligned piece of memory here. The runtime manages various size classes differently, and the thresholds are subject to change.
Should we get an unaligned chunk, we could retry with +1 page size and re-slice.
Here's the current table of size classes: https://github.com/golang/go/blob/master/src/runtime/sizeclasses.go. I can't find this mentioned anywhere explicitly, but isn't this just an extension of typical struct alignment rules? If this slice would be a 4k struct instead, it would need to be 4k-aligned to make sure any pointers within the struct would be correctly 8-byte aligned by extension. I don't think an allocator is allowed to return allocations at unaligned offsets, but I could be wrong of course.
We could test this further by allocating some ballast and/or fragmenting the heap before running the tests a large number of times.
```go
// Since we use MAP_FIXED, the starting address of the mapping must be
// page-aligned on most architectures anyway.
```
Pardon my ignorance, how does page-misaligned memory mapping work on those architectures?
I'm paraphrasing man mmap(2) here:

> MAP_FIXED
> Don't interpret addr as a hint: place the mapping at exactly that address. addr must be suitably aligned: for most architectures a multiple of the page size is sufficient; however, some architectures may impose additional restrictions. [...]
I briefly came across one such additional restriction somewhere, but I forgot, sorry.
Here's another snippet from what appears to be the POSIX spec: https://pubs.opengroup.org/onlinepubs/009604499/functions/mmap.html

> [EINVAL]
> The addr argument (if MAP_FIXED was specified) or off is not a multiple of the page size as returned by sysconf(), or is considered invalid by the implementation.

tl;dr: off within the target fd must also be page-aligned. We're not using the offset parameter.
Suggested change:

```diff
-// Since we use MAP_FIXED, the starting address of the mapping must be
-// page-aligned on most architectures anyway.
+// Since we use MAP_FIXED, the starting address of the mapping must be
+// page-aligned anyway.
```
The original wording can be interpreted as some architectures being fine with non-page-aligned mmap starting address.
Sure, the starting address having to be page-aligned for MAP_FIXED does sound universal indeed.
```go
	return 0, fmt.Errorf("read offset out of range")
}

n := copy(p, mm.b[off:])
```
It would be nice if ReadAt copied properly aligned chunks in word-size units. Imagine I copy a struct with counters that is concurrently updated by a bpf program. Should the copy decide to go one byte at a time, values can be seen that never were.

I suppose builtin copy is probably doing exactly that, albeit without any contractual guarantee. (Very minor and can wait until later.)
> Should the copy decide to go one byte at a time, values can be seen that never were.
Without using atomics, the memory models of the architectures we run on don't make any guarantees about the consistency of values under concurrent access. To my limited understanding, on x64 the smallest unit of memory that can be shared between cores/threads is a cache line, typically 64 bytes. If kernel space is writing and user space is reading concurrently, these cache lines can be stale by the time the reader fetches them. Afaik, this is only a real issue if a value spans multiple cache lines, like a complex128 straddling a line boundary or a buffer of arbitrary length. In general, struct alignment rules make sure values stay contained within a single cache line.

I'm not sure what change you're proposing to our use of copy(). I haven't looked at copy's implementation in detail, but ultimately we have very little control over how the compiler chooses to shuffle memory accesses around through several optimization passes, unless copy() contains some kind of memory barrier (unlikely, need to check). If you're suggesting to batch our calls to copy(), that sounds ineffectual to me.

If you're worried about reading a uint64 that has 4 out of its 8 bytes gone stale, afaik that's not possible as long as you stay away from packed structs. Batching copy() won't save your bacon here either. If you need synchronization, pair sync/atomic accesses in Go with __sync_ builtins in C.
> If you're worried about reading a uint64 that has 4 out of its 8 bytes gone stale, afaik that's not possible as long as you stay away from packed structs.

This is exactly what I'd like to prevent (an aligned uint64 appearing as if it was partially updated when fetched via ReadAt).

If copy goes one byte at a time, it could copy the first 4 bytes, get preempted, and get back to copying the rest later. That would be a conformant copy implementation, albeit not a very fast one.
> I'm not sure what change you're proposing to our use of copy().

No change proposed :) We might need to implement our own copy if it turns out that the builtin misbehaves on some platforms. amd64 seems to be ok.
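If the builtin ever did turn out to misbehave, such a custom copy could read in word-size units with atomic loads. A sketch under stated assumptions: `copyWords` is a hypothetical name, it assumes src is 8-byte aligned, and `binary.NativeEndian` plus the `min` builtin need Go 1.21:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"sync/atomic"
	"unsafe"
)

// copyWords copies src into dst eight bytes at a time using atomic
// loads, so an aligned uint64 in src can never be observed half-updated.
// Trailing bytes (len not a multiple of 8) are copied individually and
// get no such guarantee. Assumes src's backing array is 8-byte aligned.
func copyWords(dst, src []byte) int {
	n := min(len(dst), len(src))
	i := 0
	for ; i+8 <= n; i += 8 {
		v := atomic.LoadUint64((*uint64)(unsafe.Pointer(&src[i])))
		binary.NativeEndian.PutUint64(dst[i:], v)
	}
	for ; i < n; i++ { // unsynchronized tail
		dst[i] = src[i]
	}
	return n
}

func main() {
	src := make([]byte, 20)
	for i := range src {
		src[i] = byte(i)
	}
	dst := make([]byte, 20)
	fmt.Println(copyWords(dst, src), dst[7], dst[19])
}
```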
See individual commits for more detailed information.