tlsf_create_with_pool crashes with this memory size #9
Specifically, a size 8 bytes larger fails with a printed error rather than a crash.
Changing > to >= on that line in tlsf_add_pool causes it to print an error instead of crashing. (The check in question is sketched below.)
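This is roughly the check being referred to, paraphrased from tlsf_add_pool in tlsf.c (pool_bytes is the usable size after alignment and the pool overhead are subtracted):

```c
/* Paraphrased from tlsf_add_pool: reject pools whose free block
** would fall outside the representable block-size range. */
if (pool_bytes < block_size_min || pool_bytes > block_size_max)
{
	printf("tlsf_add_pool: Memory size must be between %u and %u bytes.\n",
		(unsigned int)(pool_overhead + block_size_min),
		(unsigned int)(pool_overhead + block_size_max));
	return 0;
}
```

With >= in the second comparison, a pool whose free block would be exactly block_size_max is rejected cleanly instead of being inserted into a first-level bin that doesn't exist.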
The crash is because fl comes out as 25 in insert_free_block, one past the last valid first-level index (see the arithmetic below).
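To see where the 25 comes from, assuming the stock 64-bit build (ALIGN_SIZE_LOG2 = 3, SL_INDEX_COUNT_LOG2 = 5, FL_INDEX_MAX = 32): FL_INDEX_SHIFT = 8 and FL_INDEX_COUNT = 32 - 8 + 1 = 25. The large-block branch of mapping_insert, paraphrased from tlsf.c:

```c
/* Large-block branch of mapping_insert, paraphrased from tlsf.c. */
fl = tlsf_fls_sizet(size);   /* index of the highest set bit */
sl = tlsf_cast(int, size >> (fl - SL_INDEX_COUNT_LOG2)) ^ (1 << SL_INDEX_COUNT_LOG2);
fl -= (FL_INDEX_SHIFT - 1);

/* For a free block of exactly block_size_max = 1 << 32:
**   tlsf_fls_sizet(size) = 32
**   fl = 32 - (8 - 1)    = 25 = FL_INDEX_COUNT
** i.e. one row past the end of control->blocks. */
```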
So another fix, though I'm unsure of its implications, is to simply increase FL_INDEX_COUNT by 1. With that change I am able to allocate a block of up to (block_size_max() - tlsf_alloc_overhead()).
For the record, that's an increase in the control structure from 6536 to 6800 bytes, i.e. 264 bytes (plus some extra looping time in a few places), to support this one case where the pool is exactly the maximum size. A rough accounting of those 264 bytes follows.
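A plausible breakdown, assuming the stock 64-bit configuration (this accounting is mine, not from the thread):

```c
/* One extra first-level row in control_t, assuming SL_INDEX_COUNT = 32
** and 8-byte pointers:
**
**   blocks:    one more row of SL_INDEX_COUNT block pointers
**              32 * 8 bytes                          = 256 bytes
**   sl_bitmap: one more unsigned int entry, padded
**              to pointer alignment                  =   8 bytes
**                                              total = 264 bytes
*/
```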
I'm evaluating this change, which is much less wasteful. But there are two other recent, seemingly similar commits which are really bugging me, and I'm not yet able to actually allocate a block that big. Is it possible those other commits are insidiously related to this problem? I don't understand mapping_search() at all: it adds some junk to the size, and I can't see how that's anything like rounding. Maybe it behaves like rounding when the allocation is actually less than 'round', but in this case (trying to allocate the entire pool as one block) the added amount is enormous. Maybe the "insidious relation" is just that this code hasn't been tested with really large parameters. But check one of those commits, "bug when in range [max size-align, max size], goes off end of array" — it sounds so similar to this problem. (mapping_search() as it stood is copied below for reference.)
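For reference, mapping_search() in tlsf.c at the time looked roughly like this:

```c
static void mapping_search(size_t size, int* fli, int* sli)
{
	if (size >= SMALL_BLOCK_SIZE)
	{
		/* The "junk" being added: one less than the width of this
		** request's size class, so the search lands in a bin whose
		** blocks are guaranteed to be at least `size` bytes. */
		const size_t round = (1 << (tlsf_fls_sizet(size) - SL_INDEX_COUNT_LOG2)) - 1;
		size += round;
	}
	mapping_insert(size, fli, sli);
}
```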
This is the largest I can successfully allocate with that change: tlsf_block_size_max() - tlsf_block_size_max()/64
Yes, that's due to mapping_search() and the "rounding". It would seem the upper limit under any circumstances is max - 64MB, which of course doesn't make sense as a design choice. Just step through that function in a debugger and you'll see; it's weird. The arithmetic below shows where the 64MB comes from.
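Worked out with the stock 64-bit constants (my arithmetic, consistent with the numbers reported above):

```c
/* Assuming SL_INDEX_COUNT_LOG2 = 5 and FL_INDEX_MAX = 32, so
** tlsf_block_size_max() = 1 << 32 (4GB):
**
** For any request size in [2GB, 4GB), tlsf_fls_sizet(size) == 31, so
**   round = (1 << (31 - 5)) - 1 = (1 << 26) - 1   (~64MB)
**
** mapping_search() then searches for size + round. Any request above
**   block_size_max - block_size_max/64            (4GB - 64MB)
** gets bumped into the 4GB size class, which no real block can occupy,
** so the allocation fails (or, without the FL_INDEX_COUNT fix, indexes
** off the end of the table). */
```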
It just seems that block_size_max is actually the block size of the bin which is one past the last bin in the table, and what you can successfully allocate is the bin size of the last bin in the table, which is one quantum lower. |
I don't think I agree. If that were true, the best you could do would be block_size_max/2. But it's actually 64MB less than block_size_max, due to the mapping_search() weirdness. It's true that we can't make blocks equal to the maximum block size, because they just barely get sized out of the bins, but that's why I proposed to fix it by dropping the limit down by 4 bytes. From what I've seen, if it weren't for other bugs (or logic I don't understand at all), allocating the theoretical maximum minus 4 bytes should work fine.
I don't think the last bin is the issue. I think the "weirdness" is as explained in the TLSF paper: http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf
Mmm, mapping_search() is a bit misleading. It's not actually rounding; it seems like a step is missing. Anyway, based on this, an allocation of almost block_size_max() gets "rounded" up to the power of two, which is too big (it's the original problem again: a power of two exactly one too big for the bins). A sketch of what full rounding might look like is below.
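For what it's worth, true round-up would also mask off the low bits after adding; something like this (my sketch for discussion, with a hypothetical name — not code from tlsf.c):

```c
/* Hypothetical: round the request up to the start of the next size
** class (add, then mask), instead of only adding as tlsf.c does.
** Note this does not by itself fix the top-of-range overflow. */
static void mapping_search_rounded(size_t size, int* fli, int* sli)
{
	if (size >= SMALL_BLOCK_SIZE)
	{
		const size_t round = (1 << (tlsf_fls_sizet(size) - SL_INDEX_COUNT_LOG2)) - 1;
		size = (size + round) & ~round;
	}
	mapping_insert(size, fli, sli);
}
```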
Calling tlsf_create_with_pool with this exact size crashes on my machine; sizes 8 bytes bigger or smaller do not crash.
It crashes in insert_free_block on this line:

```c
current->prev_free = block;
```

Thread 1: EXC_BAD_ACCESS (code=2, address=0x100000019)
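For context, the surrounding code (paraphrased from insert_free_block in tlsf.c) shows why that write faults when fl is out of range:

```c
/* Paraphrased from insert_free_block in tlsf.c. When mapping_insert
** produced fl == FL_INDEX_COUNT, the blocks[fl][sl] read below is off
** the end of the table, so `current` is garbage and the write to
** current->prev_free faults. */
static void insert_free_block(control_t* control, block_header_t* block, int fl, int sl)
{
	block_header_t* current = control->blocks[fl][sl]; /* out-of-bounds read */
	block->next_free = current;
	block->prev_free = &control->block_null;
	current->prev_free = block;                        /* crash site */
	/* ... list head and bitmap updates follow ... */
}
```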