I've accidentally stumbled upon these lines in your code:
// TODO Some re-use opportunities are missed on negative sides of the block,
// but I don't really know how to fix it...
// You can check by "shaking" every vertex randomly in a shader based on its index,
// you will see vertices touching the -X, -Y or -Z sides of the block aren't connected
So I just wanted to share my findings, since I was able to solve it and can more or less explain why it's impossible with the indexing described in the paper.
I'm not sure whether this is even worth pursuing; just wanted to share. Maybe it will be valuable, and worst case it won't 😆
In the paper, the edge reuse indexing goes like this: each reuse code packs a direction nibble and a reuse-index nibble, and a cell only hands out four reuse indices (0, 1, 2, 3), so, for example, 51 means direction 5, reuse index 1.
So the 51 edge (for example) has reuse index 1, the same as 11. This can't work if you want to utilize 100% of the reuse opportunities: imagine you are in deck #0 (k == 0), then your 51 needs to reuse from 41 while your 11 needs to reuse from 81. If you visually put two cubes with the same j and k in a row, it becomes obvious.
But they will both have reuse index 1, which makes it impossible. I believe the paper just does this in a simplified manner, or it was early work. But since you already spend a whole nibble on these reuse indices, you do not use any more space or lose anything if you index the edges separately, giving each of the 12 edges of the cell its own index (one such numbering is sketched below).
To do the remap, you need to change the Transvoxel tables for the primary mesher. A simple script would do (I'm playing in JavaScript, but still).
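Roughly in this spirit, sketch only: it assumes the usual Transvoxel vertexData layout (each 16-bit code keeps the edge's two cell corners in the low byte and the reuse-direction and reuse-index nibbles in the high byte, corner bits being x = 1, y = 2, z = 4), and the per-edge numbering is just the one implied by the _reuseEdgeIndex function further down:

// Per-edge numbering (0..11), keyed by the edge's two corner indices.
const EDGE_INDEX_BY_CORNERS = new Map([
  // 0..2: the three edges meeting at the minimal corner (never stored for reuse)
  ['0-2', 0], ['0-1', 1], ['0-4', 2],
  // 3..8: the six "middle" edges
  ['1-3', 3], ['2-3', 4], ['1-5', 5],
  ['4-6', 6], ['4-5', 7], ['2-6', 8],
  // 9..11: the three edges meeting at the maximal corner (what later cells reuse)
  ['5-7', 9], ['6-7', 10], ['3-7', 11],
])

function remapVertexCode(code) {
  const lowByte = code & 0xFF               // the edge's two corner indices, one per nibble
  const c0 = lowByte & 0x0F
  const c1 = (lowByte >> 4) & 0x0F
  const direction = (code >> 12) & 0x0F     // reuse-direction nibble, kept unchanged
  const key = Math.min(c0, c1) + '-' + Math.max(c0, c1)
  const edgeIndex = EDGE_INDEX_BY_CORNERS.get(key)
  // Same corners, same direction, but a per-edge index (0..11) instead of the paper's
  // 0..3 reuse index. It still fits in a nibble, so the tables keep their size.
  return (direction << 12) | (edgeIndex << 8) | lowByte
}

// e.g. newVertexData = regularVertexData.map(row => row.map(remapVertexCode))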
Then, since edges 0, 1, 2 are not reusable, when you create a new index you just store it in cellStorage.indexes[edgeCode - 3]. When you want to retrieve it, you just refer to whatever indexing you used. I was too lazy to figure out some fancy bitwise-shift scheme, so I literally just listed all possible combinations:
// Given one of this cell's edges and a reuse direction (0bZYX), returns the edge index
// under which the preceding cell stores that vertex, or null if there is nothing to reuse.
_reuseEdgeIndex(edgeCode, reuseDirection) {
  if (edgeCode >= 9) {
    return null
  }
  if (edgeCode == 0) {
    // 0bZYX
    if (reuseDirection == 0b001) {
      return 3
    } else if (reuseDirection == 0b100) {
      return 6
    } else if (reuseDirection == 0b101) {
      return 9
    }
  }
  if (edgeCode == 1) {
    // 0bZYX
    if (reuseDirection == 0b010) {
      return 4
    } else if (reuseDirection == 0b100) {
      return 7
    } else if (reuseDirection == 0b110) {
      return 10
    }
  }
  if (edgeCode == 2) {
    // 0bZYX
    if (reuseDirection == 0b001) {
      return 5
    } else if (reuseDirection == 0b010) {
      return 8
    } else if (reuseDirection == 0b011) {
      return 11
    }
  }
  if (edgeCode == 3) {
    return 9
  }
  if (edgeCode == 4) {
    return 10
  }
  if (edgeCode == 5) {
    return 11
  }
  if (edgeCode == 6) {
    return 9
  }
  if (edgeCode == 7) {
    return 10
  }
  if (edgeCode == 8) {
    return 11
  }
  return null
}
Maybe there is some clever way to do this, but I wanted to move forward, and this solves it and is fast enough; it didn't affect any benchmarks. I later optimized it a bit, but kept the idea the same.
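For illustration, one way such an optimization could look: flattening the if-chain into a single table lookup (just a sketch, the names are made up, and this isn't necessarily what my snippet did):

// Flat lookup keyed by edgeCode * 8 + reuseDirection; -1 means "nothing to reuse".
const REUSE_EDGE_LUT = new Int8Array(12 * 8).fill(-1)
const put = (edge, dir, value) => { REUSE_EDGE_LUT[edge * 8 + dir] = value }
put(0, 0b001, 3); put(0, 0b100, 6); put(0, 0b101, 9)
put(1, 0b010, 4); put(1, 0b100, 7); put(1, 0b110, 10)
put(2, 0b001, 5); put(2, 0b010, 8); put(2, 0b011, 11)
for (let dir = 0; dir < 8; dir++) {
  put(3, dir, 9); put(4, dir, 10); put(5, dir, 11)
  put(6, dir, 9); put(7, dir, 10); put(8, dir, 11)
}

function reuseEdgeIndex(edgeCode, reuseDirection) {
  const value = REUSE_EDGE_LUT[edgeCode * 8 + reuseDirection]
  return value === undefined || value < 0 ? null : value
}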
Now I have 100% vertex reuse in every case except edges 81, 82, 83.
And by the way, when I did this, I found out that handling corners differently is not needed at all. I was able to drop corners from cellStorage, along with all the logic that differentiates corners from non-corners, because with an indexing scheme like the one above, corner indexing/reuse becomes a subset of edge reuse: every corner lies on some edge anyway (just with t == 0 or u == 0), and every edge now has its own reuse index. In the end, with the new indexing, I threw away the whole corner logic and ended up with a super small loop for the geometry:
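Very roughly, the shape of that loop (a heavily simplified sketch with made-up names: vertexCodesForCell stands in for the case-table lookup, reuseEdgeIndex is the mapping from earlier, and vertex placement / triangle emission are only hinted at in comments; the point is that corner and edge vertices go through the same reuse path):

function buildVertexIndices(size, vertexCodesForCell, reuseEdgeIndex) {
  const vertices = []                  // placeholder for the real vertex buffer
  let previousDeck = []                // cell storage for deck k-1
  let currentDeck = []                 // cell storage for deck k

  // Edges 3..11 get a storage slot; edges 0..2 are never reused by later cells.
  const storageAt = (deck, i, j) =>
    (deck[j * size + i] ??= { indexes: new Int32Array(9).fill(-1) })

  for (let k = 0; k < size; k++) {
    currentDeck = []
    for (let j = 0; j < size; j++) {
      for (let i = 0; i < size; i++) {
        const cell = storageAt(currentDeck, i, j)
        for (const { edgeCode, direction } of vertexCodesForCell(i, j, k)) {
          // Only step toward the owning cell if it actually exists (direction is 0bZYX).
          const canReuse =
            (!(direction & 1) || i > 0) &&
            (!(direction & 2) || j > 0) &&
            (!(direction & 4) || k > 0)
          const ownerEdge = canReuse ? reuseEdgeIndex(edgeCode, direction) : null

          let index = -1
          if (ownerEdge !== null) {
            const ownerDeck = direction & 4 ? previousDeck : currentDeck
            const owner = storageAt(ownerDeck, i - (direction & 1), j - ((direction >> 1) & 1))
            index = owner.indexes[ownerEdge - 3]
          }
          if (index < 0) {
            index = vertices.length
            vertices.push({ i, j, k, edgeCode })  // real code interpolates the position here
            if (edgeCode >= 3) cell.indexes[edgeCode - 3] = index
          }
          // Real code pushes `index` into the triangle index buffer here.
        }
      }
    }
    previousDeck = currentDeck           // the current deck becomes the previous one
  }
  return vertices
}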
It has 100% reuse for the edge/corner cases, so if I shake the vertices, the only cracks you can see are between the chunks themselves (I also shake the transition cells, so you may see a few more cracks there).
So in the end it is simpler and a lot shorter code. It didn't seem to affect performance, although given I'm in JavaScript, profiling doesn't make a lot of sense; I need to rewrite it in C++ first 😆 The new modified tables are the same size too.
I understand it is probably an unnecessary risk to implement any of this in the current library (great job, by the way, it is amazing!). I just accidentally stumbled upon this comment and decided to share.
I'm learning gamedev right now; I started with terrain systems, so I'm studying the paper. If you'd like to chat at any point, let me know what the preferable way to do it would be. I don't have a lot to share that you could benefit from, except maybe this one finding.
But I'd love to hear about your experience with materials, as I'm learning that topic now and struggling a bit; I feel the approach in the paper won't work if I want to blend between more complex materials, imagine grass (with a geometry shader between the vertex and fragment stages), etc.
Thanks for posting this. I would need more time to understand the solution in more detail. For now I don't consider this an important issue, so I'm not planning to fix it, but if someone finds a way to do it with no side effects, then a PR is welcome.