Atomic move operation for element reparenting & reordering #1255
Comments
First of all, thank you! I've been vocal about this issue practically forever and was part of one of the biggest discussions you've linked. As the author of various "reactive" libraries and something of a veteran of the "DOM diffing" field, I'd like to add an idea:
I understand a node can be moved from one parent to another.

On top of this, I hope whatever solution comes to mind works well with DOM diffing, so that new nodes can still pass through the usual DOM dance when the parent is changed or they become live, while removed nodes that won't land anywhere else would eventually invoke their disconnection logic. As a quick idea to signal that a node is going to be moved in an atomic way, and assuming it's also targeting a live parent, I think something like a dedicated move operation could work.
I hope this answer of mine makes sense and maybe triggers some even better idea / API. edit: on afterthought, another companion of the API should be reflected in MutationObserver, or better, MutationRecord. |
This would be a fantastic addition of functionality for web development in general and for web libraries in particular. Currently, if developers want to preserve the state of a node when updating the DOM, they need to be extremely careful not to remove that node from the DOM. Morphing (https://github.com/patrick-steele-idem/morphdom) is an idea that has developed around addressing this. I have created an extension to the original morphdom algorithm called idiomorph (https://github.com/bigskysoftware/idiomorph/), and the demo for idiomorph shows how it preserves a video in a situation when morphdom cannot. 37signals has recently integrated idiomorph into Turbo 8 & Rails (https://radanskoric.com/articles/turbo-morphing-deep-dive-idiomorph).

If you look at the details of the idiomorph demo, you will see it's set up in a particular way: namely, the video cannot change the depth in the DOM at which it is placed, nor can any of the types of the parent nodes of the video change. This is a severe restriction on what sorts of UI changes idiomorph can handle. With the ability to reparent elements, idiomorph could offer a much better user experience, handling much more significant changes to the DOM without losing state such as video playback, input focus, etc.

Note that it's not only morphing algorithms like idiomorph that would benefit from this change: nearly any library that mutates the DOM would benefit from this ability. Even virtual DOM based libraries, when the rubber meets the road, need to update the actual DOM and move actual elements around. This change would benefit them tremendously. Thank you for considering it! |
This adds some complexity to selection/range: how do we deal with shadow DOM when the host moves around and the selection is partially inside the shadow DOM? |
This is a very exciting proposal! In the Microsoft Teams Platform, we extensively use iframes to host embedded apps in the Teams web/desktop clients. When a user navigates away from an experience powered by one of these embedded apps and comes back to it later, we provide the ability to keep their iframe cached in the DOM (in a hidden state) and then re-show it when it's needed again.

To implement this functionality, we had to resort to creating the embedded app frames under the body of our page and absolutely positioning them in the right place within our UX. This approach has lots of obvious disadvantages (e.g. it breaks the accessibility tree, requires us to run a bounds-synchronization loop, etc.), and the only reason we resorted to it was that moving the iframe in the DOM would reload the embedded app from scratch, thus negating any benefits of caching the frame. This proposal would allow us to implement a much more ideal iframe caching solution! Note the location of the iframe in the DOM and its absolute positioning in this recording: |
The WHATNOT meetings that occurred after this issue was created deferred discussion about the topic. I wonder what next steps would be needed to move this issue forward. The next meeting is on March 28 (#10215). |
I hope we can get to it in the 28.3 WHATNOT. @domfarolino @past ? |
It's already on the agenda, so if the interested parties are attending we will discuss this. |
Are the imperative and declarative APIs meant to slowly replace the existing APIs over time? Or do we need to choose between one or the other because of potential overhead? |
If I understand the question, it's mainly for backwards compatibility. In some cases you might want the existing behavior or something subtle in your app relies on it, so we can't just change it under the hood. |
This would be very nice for React since we currently basically just live with things sometimes incorrectly resetting. A couple of notes on the API options:
The thing that does cause a change is the place where the move happens. But even then it's kind of random which one gets moved and which one implicitly moves by everything around it moving. We don't remove all children and then reinsert them, so sometimes things preserve state. A new API for insertion/move seems like a better option. We'd basically like to just always use the same API for all moves - which can be thousands at a time. This means that this API would have to be really fast - similar to insertBefore. An API like …, or something new like …, could work. |
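As a sketch of the "same fast API for all moves" idea above: a keyed reorder loop that prefers an atomic move when the engine exposes one (Chromium ships this as `moveBefore`) and falls back to `insertBefore` otherwise. The helper name `reorderChildren` is mine for illustration, not a framework API.

```javascript
// Reorder `parent`'s children to match `desiredOrder`, moving as few
// nodes as possible. `reorderChildren` is a hypothetical helper.
function reorderChildren(parent, desiredOrder) {
  const move = typeof parent.moveBefore === 'function'
    ? (node, ref) => parent.moveBefore(node, ref)    // atomic, state-preserving
    : (node, ref) => parent.insertBefore(node, ref); // fallback: remove + re-insert
  // Walk the desired order; a node already at the boundary is skipped,
  // so the common "nothing changed" case does no DOM work at all.
  let ref = parent.firstChild;
  for (const node of desiredOrder) {
    if (node === ref) {
      ref = ref.nextSibling;
    } else {
      move(node, ref);
    }
  }
}
```

The skip check keeps this O(moves) in the typical case, which matters if a framework issues thousands of moves per frame.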
One thing that's nice to nail down is whether re-ordering of child nodes is enough or we need to support re-parenting (i.e. parent node changing from one node to another). Supporting the latter is a lot more challenging than just supporting re-ordering. |
Definitely would prefer full re-parenting. I gave an htmx demo of a morph-based swap at GitHub where you could flip back and forth between two pages and a video keeps working: https://www.youtube.com/watch?v=Gj6Bez2182k&t=2100s The dark secret of that demo was that I had to very carefully structure the HTML in the first and second pages to make sure that the video stayed at the same depth, with the same parent element types, to keep the video playing. It would be far better for HTML authors if they could change the HTML structure entirely, just build page 1 the way they want and build page 2 the way they want, and we could swap elements into their new spots by ID. |
(For the purpose of brevity, I will begin using the SPAM acronym that we've been toying around with internally, which means "state-preserving atomic move". The most obvious example is that an iframe which gets SPAM-moved doesn't lose its document or otherwise get torn down.)
@sebmarkbage I understand your hesitation around a new subtree-associated HTML attribute, in that it would be over-broad, affecting tons of nested content that a framework might not own, possibly breaking parts of an app that don't expect SPAM moves to happen. But I'm curious whether a new DOM API really gets you out from under that over-broadness while still being useful? What would you expect such an API to do? I guess I had in mind that the imperative API would force-SPAM-move the "state-preservable" elements in the subtree that's moving, so that any nested iframes do not get their documents reset¹. But if that API would not preserve nested iframe state, then the only way it would be possible to actually preserve that iframe's state in this case is if the application took care to apply an iframe-specific HTML attribute to it, specifying that it opts into SPAM moves:
But it sounded like that option didn't sit well with you, because the application author would be sprinkling these attributes one-by-one onto random iframes without understanding the context in which the SPAM move might actually take place, by a framework way higher up the stack. So how can we best enable the scenario where an iframe nested in a moved subtree keeps its state?
But I would love to get more thoughts on the subtree side-effects stuff in general.

Footnotes
|
I don't think we can make this happen automatically based on a content attribute on an iframe. It most certainly needs to be a completely new DOM API. |
I am very much open to that; I'm just trying to consider what subtree side-effects are acceptable. That is, if … |
An attribute + DOM API could work together in this case a bit, to ameliorate some of the compat concerns. For example:

```javascript
const nodeToAtomicallyMove = document.querySelector('......');

// Never trigger atomic moves on *this* specific sub-subtree, that was built by "old" content.
nodeToAtomicallyMove.querySelector('.built-by-legacy-app').preserve = 'none';

newParent.appendAtomic(nodeToAtomicallyMove);
```

In this case, all … |
That sounds like something that could be built as a user-land library, not something that needs to be built into the browser's native API. We really need to keep this API proposal as simple & succinct as possible. |
Can you expand on why this is impossible? I can see the point why it might be preferable, but I think both directions are possible. |
and +1 to not limiting it to reordering. We'll end up just scratching the surface of the use-cases, coming back to where we started where we still need a full solution for reparenting. |
I'm also a bit at a loss as to why we'd discuss new attributes. That seems like a pretty severe layering violation? The way I see it:
|
I tend to agree with the conclusion, but I want to explain the main reason to consider things like an iframe attribute, in case it raises something else. Outside "keep iframes from reloading", it's unclear exactly what the effects of this would be. For focus, we need to blur and refocus anyway, e.g. in case you're moving the element to an … |
I think what Seb is saying is that React can decide if a move should be state-preserving, but if React added a "preserve-state" attribute to the nodes it manages, then moves performed by an older embedded application would start preserving state too, even though that application never asked for it.

Our perspective is that the mover decides the move semantics rather than the tree. So any moves done by this embedded application won't preserve state, because that is what the application was expecting, and any moves done by React would preserve state, because React was updated to signal this intent by using a novel API. |
Bikeshedding and implementation details aside, the specifics of how … will need care on the accessibility side.

Speculative implementors on all platforms, please coordinate with the accessibility DRIs for your engine. Listing some starting points here. Chrome: @aleventhal |
Chrome/Edge can keep the id stable after it moves in the DOM. This is a new thing that we do. |
I've been asked to discuss the throwing behavior in here, but there's really not much else I can add except that this new API, desired for many years, is going out in a not-so-usable way, so that user-land libraries are already required to make sense of it in its current state: https://github.com/ungap/move-before This makes me sad, because there was no real conversation around users' presented use cases, and the outcome is that "we" are landing a new API that requires mandatory guards. I fully understand that developers might need to be careful when moving nodes around, but the history of appendChild and insertBefore never cared about any of the guards or guarantees this new API wants to enforce (for good reasons), so having a companion check, instead of try/catch wrapped all over the web, would have been a better choice than just throwing out of the blue. As a library author, I own my nodes and I know what's live or not, but since my libraries also need to deal with the fact that the DOM is shared and accessible by all other libraries out there, there's nothing else I can do but wrap try/catch around this API, and I feel like nobody wins this way. edit: see lit-html adding various lines of code as opposed to doing a single operation ... those checks are also apparently not enough, so if a library like lit-html can already fail due to a missing try/catch, imagine all the other libraries out there. |
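The guard being described can be sketched in userland roughly as follows (the name `moveBeforeSafe` is illustrative; it is not the actual API of the ungap/move-before library linked above):

```javascript
// Try an atomic move; fall back to a plain insert when the move
// preconditions don't hold or the engine lacks moveBefore entirely.
function moveBeforeSafe(parent, node, ref = null) {
  if (typeof parent.moveBefore === 'function') {
    try {
      return parent.moveBefore(node, ref); // atomic when preconditions hold
    } catch {
      // Thrown when the move can't be atomic (disconnected node,
      // cross-document target, etc.) - fall through to a plain insert.
    }
  }
  return parent.insertBefore(node, ref);   // classic remove + re-insert
}
```

This is exactly the try/catch wrapper the comment argues every library will end up shipping.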
I think this is the core point here - the newer APIs went in a "higher level", ergonomic direction, which is great. But this one is going the opposite, lower-level way. |
Follow-up to the discussion here: To be blunt, with this API as-is I think it's likely the following will happen:
Assuming (4) occurs, which I think is probably likely, … If these checks can be done faster in platform code, they should be done there. Perhaps the behavior here can be revisited based on developer feedback from testing of initial implementations. |
@noamr I guess my point is that if there are no concrete performance benefits, because both user-land code and native code need to perform the same checks twice to be sure it's not going to throw, this API won't really benefit anyone much, except for those cases where state is preserved, which is surely desired, but usually that's not the main use case for node-moving operations ... you can see lit having that just on rows reordering or containers reordering; it's not that web developers should move inputs around while you're typing, if you understand what I am talking about ... I hope try/catch won't degrade performance too much, but I'd be reluctant to do on my side what inevitably happens behind the scenes anyway, after I've done all those user-land checks before calling moveBefore. |
That's a well-known slippery slope ... once stuff lands as a standard, it's already too late to change it, because at that point they will tell us "it's too late, we already have these frameworks using this API as it is" (and if we look closer, lit is already on that path, and lit has influence in this space). It happened before, it will happen again ... what I think is missing here is some consistency with newly proposed APIs ... I guess that's just my opinion though ... we have a highly forgiving HTML document with a highly easy-to-break DOM runtime implementation, if this is the direction ... and I understand the low-level API, I just don't understand why knowing when such an operation can be performed is hidden behind implementations and specs, super hard to get fully right on the client side due to the amount of steps in the proposed standard itself, all checks that won't make anything better, as the more bindings are needed, the slower it will get. Relaxing the API to fall back to the user's intent, and providing a check for when throwing is desirable, would be my pick, if only I had any influence in here as a consumer of such an API. edit: arguably, in a new world where the DOM never existed, having the move operation as the default behavior behind every single DOM operation would've been a great starting point, imho ... jokes about the DOM being slow have been around for years; adding "if it's fast, it might throw" as the new joke feels unnecessary, still imho. |
All these predictions and points are valid, but I read them as a request to provide a higher-level variant of the API at some point/soon (e.g. …). The move operation is a major new addition to the DOM by itself - so providing it at the rawest level, with predictable outcomes in terms of mutation observers etc. and without additional logic on top, is useful on its own, and doesn't preclude us from extending it later with an additional API, while the opposite doesn't have this quality. |
@WebReflection do you have a proposal for …? |
@noamr Since both sides seem to be right, and from what I can see here the major issue is throwing (as try/catch is slow), would it be possible to make the method return true/false depending on whether the move operation succeeded or failed? (Maybe the name could be changed to tryMove or something similar.) That way, I believe both ends will have what they want: no error will be thrown, the user will be able to do whatever they want if the operation failed (throw, insertBefore), no "magic" things will happen, and you will still be able to extend this functionality in the future without any risk of breaking existing code. |
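The boolean-returning variant suggested here can be sketched in userland today (`tryMoveBefore` is a hypothetical name, not a proposed spec method):

```javascript
// Wrap the throwing API once, so call sites never need try/catch.
function tryMoveBefore(parent, node, ref = null) {
  try {
    parent.moveBefore(node, ref);
    return true;  // atomic move succeeded
  } catch {
    return false; // caller decides what to do: insertBefore, throw, report...
  }
}
```

A caller could then write `if (!tryMoveBefore(p, n, r)) p.insertBefore(n, r);` without a try/catch at the call site, which is the ergonomics this comment is asking for.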
This function already returns the new node, like insertBefore does. btw, this is where this whole thing was discussed with all the browser vendors: |
But that means that the same check will have to run twice: once in userland and then again in the browser. I get the point of returning the node, so another option could be to pass a typed object as the last argument. The browser could then throw if no object is passed, or fill a property (succeeded?) with true/false if the object is supplied. You could even fill that object with the reason it failed (if throwing different errors is required, for example). Again, I'm trying to cover both sides here. |
Honestly, this was an opportunity to make common DOM operations fast by default; it's ending up as a "check what the internals do before you call this new API" story. IMHO, factor out those internals as a userland-reachable method, or allow devs to not care about those internals when the intention is to place a node in that place regardless. Everybody wins in the latter case; nobody wins in the current state. @titoBouzout … it took forever to have this in place, and I even voiced in previous discussions that the name was the right choice. I'm not waiting another 10 years to have something else; this was the time to offer something desired, great, and fast, and it's being published with no developer happy about it, or needing to check specs and duplicate checks to use it. |
Those are not internals, the check is whether you're moving two connected nodes within the same document. |
Just to confirm my understanding: the whole point here is to have the option to move a node without having to go through undesired disconnect/connect steps (which can discard state). If a node is not connected to the document that it's trying to move to, it's not possible to avoid the connect step, so an atomic move operation can't happen. To reflect this, the API throws.

The problem is for library code, where you would have to check every node that you're moving around to see if it was removed by someone else or something. In these cases, it's pointless to care whether it can be moved with or without (dis)connections, as you're really just declaring a desired DOM fragment. Any nodes that aren't already in the tree don't have any state to preserve anyway. In the same sense that the newer manipulation functions have replaced … |
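The precondition being discussed can be expressed as a userland pre-check (a sketch only; the spec's actual validity steps cover more cases, such as node types, so `canMoveAtomically` here is a hypothetical approximation):

```javascript
// True when both endpoints are connected to the same document,
// i.e. the case where a move can skip disconnect/connect steps.
function canMoveAtomically(parent, node) {
  return !!node.isConnected &&
         !!parent.isConnected &&
         node.ownerDocument === parent.ownerDocument;
}
```

A library declaring a desired fragment could branch on this check instead of wrapping every move in try/catch, at the cost of running the same logic the engine runs again internally.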
The more I think about that, the more I realize how fragile it would be wrt manipulation order. It's easy to first replace some nodes that you would later move somewhere else. A properly atomic move API absolutely needs transactions to avoid this. The easiest solution is to just make transactions commit in the next microtask. I'm pretty sure this is the same behavior that the current manipulation APIs have, but I just want to confirm that this is how … I think it's worth talking about this sooner rather than later, but I'm pretty sure this should be in a separate issue. |
Anything using … - @dead-claudia's comment from #1307 (comment):
That, or every framework will have to ship the same 5 lines of code (if that doesn't make this ridiculous slow, on which case they will restore focus/selection in other ways and skip using this api) |
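Those "same 5 lines" of focus restoration can be sketched as a helper (hypothetical; real framework code would also restore input selection ranges, and the `doc` parameter stands in for `document` so the sketch stays self-contained):

```javascript
// Move a node with plain insertBefore, then undo the focus loss
// that the intermediate removal causes.
function moveRestoringFocus(doc, parent, node, ref = null) {
  const active = doc.activeElement;  // remember focus before the move
  parent.insertBefore(node, ref);    // remove + insert blurs any focus inside
  if (active && node.contains(active)) {
    active.focus();                  // put focus back afterwards
  }
}
```

This is the boilerplate an atomic move primitive is meant to make unnecessary.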
Not adding much, but besides fully agreeing with this sentence: the hidden footgun this API is throwing at developers is that even libraries "sure enough" to move around their own nodes can't prevent other libraries from interfering with live nodes ... so, ensuring nodes are live, or haven't been moved elsewhere, when the DOM has no mechanism to provide, or enforce, node ownership, really looks like somebody overlooked the reason this API is desired in the first place: the intent is in the name, nothing else should happen ... if the intent can be clear, let it be; if it needs internal disambiguation for when it cannot be performed, let that be an internal implementation detail no web developer asked for or cared about.

Again, this API should be the new appendChild / insertBefore, not a minefield. |
From #1255 (comment):
This isn't entirely hypothetical, but the impact would be mostly limited to large structural tree updates, where most of the time is really spent recomputing layout and updating paint. In creating MithrilJS/mithril.js#2982, flattening a nested try/catch in the attribute update flow saved about 10% in performance, and the mere addition of try/catch to that section caused a roughly 20% perf drop, but only in the fast case of no attributes changing (where diffs commonly take a few milliseconds). In the slow case where attributes were frequently changing, it was barely outside the margin of error, but paint times would far exceed that anyway.

This is of course in the attribute update flow, and virtual DOM frameworks have to be able to process thousands, possibly tens of thousands, of these in a single frame. In some cases, skipping even one frame with those updates would result in noticeable perf degradation. (Some users use Mithril.js to power games, and so they'd need that kind of speed.) Conversely, a keyed list might have to move hundreds of nodes if you change the sort order, and users would tolerate some noticeable lag. So, as long as it clocks in at no more than about 10us per operation for the whole try/catch, my only complaint would be the need for that try/catch itself. |
What problem are you trying to solve?
Chrome (@domfarolino, @noamr, @mfreed7) is interested in pursuing the addition of an atomic move primitive in the DOM Standard. This would allow an element to be re-parented or re-ordered without today's side effects of first being removed and then inserted.
Here are all of the prior issues/PRs I could find related to this problem space:

- insertBefore vs appendChild and transitions #880

Problem
Without an atomic move operation, re-parenting or re-ordering elements involves first removing them and then re-inserting them. With the DOM Standard's current removal/insertion model, this resets lots of state on various elements, including iframe document state, selection/focus on `<input>`s, and more. See @josepharhar's reparenting demo for a more exhaustive list of state that gets reset.

This causes lots of developer pain, as recently voiced on X by frameworks like HTMX, and other companies such as Wix, Microsoft, and internally at Google.
This state-resetting is in part caused by the DOM Standard's current insertion & removal model. While well-defined, its model of insertion and removal steps has two issues, both captured by #808:
What solutions exist today?
One very limited, partial solution that does not actually involve any DOM tree manipulation is this shadow DOM example that @emilio posted a while back: whatwg/html#5484 (comment) (see my brief recreation of it below).
But as mentioned, this does not seem to perform any real DOM mutations; rather, the slot mutation seems to just visually compose the element in the right place. Throughout this example, the iframe's actual parent does not change.
Otherwise, we know there is some historical precedent for trying to solve this problem with WebKit's since-rolled-back "magic iframes". See whatwg/html#5484 (comment) and https://bugs.webkit.org/show_bug.cgi?id=13574#c12. We believe that the concerns from that old approach can be ameliorated by:
How would you solve it?
Solution
To lay the groundwork for an atomic move primitive in the DOM Standard, we plan on resolving #808 by introducing a model desired by @annevk, @domfarolino, @noamr, and @mfreed7, that resembles Gecko & Chromium's model of handling all script-executing insertion/removal side-effects after all DOM mutations are done, for any given insertion.
With this in place, we believe it will be much easier to separate out the cases where we can simply skip the invocation of insertion/removal side-effects for nodes that are atomically moved in the DOM. This will make us, and implementers, confident that there won't be any way to observe an inconsistent DOM state while atomically moving an element, or experience other nasty unknown side-effects.
The API shape for this new primitive is an open question. Below are a few ideas:

- `append(node, {atomic: true})`
- `replaceChild(node, {atomic: true})`
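A userland shim can emulate the option-bag shape floated here, so call sites can be written against it today (hypothetical: no engine implements `{atomic: true}`; Chromium ultimately shipped a separate `moveBefore` method instead, and `appendMaybeAtomic` is my name for the shim):

```javascript
// Emulate `append(node, {atomic: true})` on top of moveBefore/appendChild.
function appendMaybeAtomic(parent, node, opts = {}) {
  if (opts.atomic && typeof parent.moveBefore === 'function' && node.isConnected) {
    return parent.moveBefore(node, null); // a null reference appends at the end
  }
  return parent.appendChild(node);        // regular remove + append
}
```

This also illustrates the compatibility question below: the option bag silently degrades to a plain append when the atomic path is unavailable.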
Compatibility issues here take the form of code relying on insertion/removal side-effects which no longer happen during an atomic move. They vary depending on the shape of our final design.
A non-exhaustive list of additional complexities that would be nice to track/discuss before a formal design:
Anything else?
No response