TAG Feedback: isolation and the document #292
Thank you @travisleithead, and the rest of the TAG, for your comments! It's been a busy time for the relevant members of the community group, so we haven't had a chance to review this feedback in depth yet, but we will definitely do so soon! If we have any questions regarding the feedback we'll post them here for further discussion. Thanks again!
Any updates on this feedback? Apologies if these are being discussed elsewhere (do you have pointers?). I haven't been following WebXR issues lately...
A fair amount has changed since this question was asked, so we wanted to provide some updated context around it and close it out. First, it's worth noting that this question was asked against WebVR, and in the meantime we've moved to WebXR. That doesn't implicitly negate the question, but there are some differences between the APIs that change the dynamics of the interactions mentioned above. For example, when rendering content to a headset we now use a separate WebGL framebuffer allocated explicitly for the purposes of XR rendering. This simplifies the interactions with the DOM, because the new framebuffer never needs to be synchronized with it. We also chose to retain an XR-specific `requestAnimationFrame`, since the headset's refresh rate is generally independent of the main display's.

As for the suggestion that we either allow or require VR content to run in a separate context from the page's main thread: I think most people within the group will agree that it's a great idea, but also one that carries a lot of complications with it (the opening comment mentions some, such as synchronization with audio and video, as well as input). Since there's already going to be a fairly significant barrier to entry for VR content simply by virtue of the nature of realtime graphics development, we are reluctant to add even more stumbling blocks for developers creating basic VR content, and thus do not think it's appropriate to require an isolated execution context. And given that it won't be required, we similarly feel that an opt-in isolation mechanism is better left as a future addition to the API, especially once we get a better idea of how people are using it in the real world and what the actual performance bottlenecks are.

With all of that said, though, it definitely seems like there are some classes of content that would work extremely well with an "XR Worklet" mechanism, and it's one that I look forward to exploring further in the future!
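For context, here is a minimal, non-normative sketch of the pattern described above: a framebuffer allocated specifically for XR rendering, driven by the session's own `requestAnimationFrame`. `drawScene` is a hypothetical application-level helper, not part of the API.

```js
// Non-normative sketch of an immersive WebXR render loop.
// drawScene() is a hypothetical application helper, not part of the API.
async function startImmersiveSession(drawScene) {
  const canvas = document.createElement('canvas');
  const gl = canvas.getContext('webgl', { xrCompatible: true });

  // In a real page this must be called from a user gesture (e.g. a click handler).
  const session = await navigator.xr.requestSession('immersive-vr');

  // The XRWebGLLayer owns a framebuffer allocated specifically for XR
  // rendering; it is never composited with the page's DOM content.
  const xrLayer = new XRWebGLLayer(session, gl);
  session.updateRenderState({ baseLayer: xrLayer });

  const refSpace = await session.requestReferenceSpace('local');

  // XR content drives its own animation loop, paced by the headset's
  // refresh rate rather than the window's.
  session.requestAnimationFrame(function onXRFrame(time, frame) {
    session.requestAnimationFrame(onXRFrame);

    const pose = frame.getViewerPose(refSpace);
    if (!pose) return;

    gl.bindFramebuffer(gl.FRAMEBUFFER, xrLayer.framebuffer);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    for (const view of pose.views) {
      const vp = xrLayer.getViewport(view);
      gl.viewport(vp.x, vp.y, vp.width, vp.height);
      drawScene(gl, view); // hypothetical: render one eye's view
    }
  });
}
```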
I have the pleasure of responding on behalf of the TAG. First, thanks for your request for a design review. It was really interesting to read through the explainer, which is what we based our feedback on.
From an architectural view, we really only have one overarching point of feedback, and that is about the relationship of the VR rendering loop to the document. What's novel and interesting about this relationship is that WebVR is the first feature to introduce a second "view" within a single document. The first view is the content rendered by the page's document object; the new "view" is a layer obtained from an "exclusive" VR device. While it seems pretty trivial to introduce this new view, it raises many open questions, several of which (we note) have already been raised in the issue list:
- `requestIdleCallback` and the notion of content competing with the WebVR rendering loop: Feature Request: vrDisplay.requestIdleCallback #227
- `window.requestAnimationFrame` should be unaffected by the WebVR rendering loop: F2F Followup: Render Loop Proposal #188 (comment)

We should be clear that we don't fundamentally object to the model of having a second dedicated view for rendering WebVR content (in fact, it seems essential), but we are concerned with what happens to the document's view while WebVR is presenting. We foresee the very real possibility of competition for system resources between content intending to render to the document and content attempting to render to the VR device. We understand that WebVR rendering is very sensitive to performance problems, and we expect that, to achieve the level of performance necessary to integrate WebVR into existing web sites (one of the use cases in the explainer), it must be isolated in some way from the document. That would allow an implementation to dedicate its full resources to rendering the VR view rather than being sidelined by other activity happening in the document's view (e.g., ad cycling, image carousels, videos, and animation driven by CSS and by script events and timers like `requestAnimationFrame`).

Speaking of `requestAnimationFrame`, we are also hoping that by clearly defining an isolation boundary for dedicated WebVR there may yet be a way to avoid adding a new animation loop callback specifically for WebVR and to re-use the existing `requestAnimationFrame` instead.

The web's isolation primitives today are iframes and workers (including flavors of Workers like "Worklets", as used by the CSS Painting API). During our TAG discussion, one strategy for isolation we considered was to configure an iframe to run in a "VR mode". Such a mode could define what happens to that document's rendering, and could also run animation callbacks at the VR device's native rate, etc. Another strategy to achieve isolation is to use Workers, though we note that a number of missing features there (e.g., input, audio, and a synchronized animation loop driver) make that more difficult. If the set of primitives needed from the web platform is relatively restricted, you might also consider Worklets.
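To make the Worker strategy concrete, here is a rough, purely illustrative sketch assuming today's `OffscreenCanvas` / `transferControlToOffscreen()` mechanism. The file names are made up, and the VR-specific gaps called out in the comments are exactly the missing pieces noted above.

```js
// main.js (hypothetical): hand the rendering surface to a dedicated
// worker so the VR rendering loop is isolated from the document's main thread.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('vr-render-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);

// vr-render-worker.js (hypothetical): renders without competing with
// document work such as ad cycling or carousel animations.
self.onmessage = (event) => {
  const gl = event.data.canvas.getContext('webgl');

  function render() {
    gl.clearColor(0, 0, 0, 1);
    gl.clear(gl.COLOR_BUFFER_BIT);
    // ...draw both eyes' views here...

    // Missing pieces noted above: no VR-paced animation loop driver,
    // no input events, and no audio in this context, so this falls
    // back to the worker's own requestAnimationFrame where available.
    self.requestAnimationFrame(render);
  }
  self.requestAnimationFrame(render);
};
```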
We would also like to see the explainer address what should happen to a document's view when the WebVR view is active. In many cases the device itself is the VR display (as with Google Cardboard), so the document's view can't be seen by the user because the VR view takes over the screen (there is only one presentation surface). However, other devices use an accessory VR headset to present the VR experience, and in those cases there are two display surfaces available at once. There, the explainer should spell out what happens to the document's presentation. For example, does its rendering loop continue to run? This leads back to our previous concern about performance and isolation.
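One possible (non-normative) answer, sketched against the WebVR 1.1 API this review targeted, is for the page to throttle its own 2D rendering while the headset is presenting. `renderPageUI` is a hypothetical page-side function.

```js
// Sketch: pause the document's own animation work while a VR display
// is presenting, using the WebVR 1.1 vrdisplaypresentchange event.
let pageLoopHandle = null;

function pageLoop() {
  renderPageUI(); // hypothetical 2D page rendering (carousels, etc.)
  pageLoopHandle = window.requestAnimationFrame(pageLoop);
}
pageLoop();

window.addEventListener('vrdisplaypresentchange', (event) => {
  if (event.display && event.display.isPresenting) {
    // Stop competing with the VR rendering loop for frame time.
    window.cancelAnimationFrame(pageLoopHandle);
  } else {
    pageLoop();
  }
});
```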
See some additional comments on w3ctag/design-reviews#185.
Thanks again for requesting a TAG review. We hope these comments are helpful, and look forward to continued engagement during the development of WebVR!