Similar to #3363, I'm wondering if most of the CTS should just run with all features requested, with only a few specific tests checking that a feature that isn't requested is not usable.
Some random thoughts
The CTS would request fewer devices. I'm not sure how much of a win that is, but the code to cache devices exists for a reason, and if the majority of tests requested everything they'd all get the same device.
Some experimental features have broken valid, common WebGPU code. These issues aren't caught by the current CTS because the features are not requested, but they will break sites for devs running with experimental features enabled.
I'd expect most devs would want to be able to use the internet at large and their own projects even when they've enabled experimental features.
If that's not clear, imagine a feature `experimental-ray-tracing` that, when enabled, breaks `renderPassEncoder.draw(...)`. Any site that enables all features with `requestDevice({ requiredFeatures: adapter.features })` starts failing. OK, fine, that only affects people with these experimental features enabled. But that could be everyone on a Canary/Nightly/Technology Preview version of a browser, as well as playtesters, beta testers, and others.
Further, the devs themselves might be testing out `experimental-bindless`. They arguably shouldn't have to work around the previous issue if they enable all features. Yet if they don't, they can't test and have to refactor their code around browser bugs.
If the CTS surfaced this issue, it's unlikely the `experimental-ray-tracing` feature would have shipped until the issue was resolved. Just enabling all the features in one test doesn't cover this because, as the example above shows, the breakage only happens with certain usage. Turning on all the features everywhere has a higher chance of finding this broken usage.
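To make the idea concrete, here's a minimal sketch of requesting a device with everything the adapter advertises. The helper name `allFeaturesOf` is hypothetical, not part of the CTS or the WebGPU API; `adapter.features` is a set-like iterable in WebGPU, so spreading it into an array is enough.

```typescript
// Hypothetical helper (not part of the CTS): collect every feature an
// adapter advertises so it can be passed as requiredFeatures. The
// structural type here stands in for GPUAdapter so the sketch is
// self-contained outside a browser.
function allFeaturesOf(adapter: { features: Iterable<string> }): string[] {
  return [...adapter.features];
}

// In a real WebGPU environment this would be used roughly like:
//   const adapter = await navigator.gpu.requestAdapter();
//   const device = await adapter!.requestDevice({
//     requiredFeatures: allFeaturesOf(adapter!) as GPUFeatureName[],
//   });
// A CTS that did this everywhere would exercise draw() etc. with every
// experimental feature enabled, surfacing breakage like the hypothetical
// experimental-ray-tracing case above.
```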
Some checks become simpler?
I'm not sure this is true but, right now, most tests have two ways to deal with features and limits:

1. Request them on the device via `t.selectDeviceOrSkipTest('name-of-feature')`. This has to happen before the test function, in `beforeAllSubcases`.
2. Check something on the device, e.g. `t.skipIf(t.device.limits.maxColorAttachments < 5)`. This has to happen in the test function, after the device is created.

Enabling all the features would switch everything to just method 2.
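As a sketch of what method 2 alone looks like: with every feature requested at device creation, a feature check becomes the same kind of runtime check as a limit check. The `MockDevice` shape and `skipReason` function below are stand-ins for illustration, not the real `GPUDevice` interface or the CTS fixture API.

```typescript
// Stand-in for the parts of GPUDevice this sketch inspects.
interface MockDevice {
  features: Set<string>;
  limits: { maxColorAttachments: number };
}

// Method 2 style: both the feature check and the limit check happen in
// one place, against the already-created device.
function skipReason(device: MockDevice): string | null {
  if (!device.features.has('name-of-feature')) return 'feature missing';
  if (device.limits.maxColorAttachments < 5) return 'maxColorAttachments < 5';
  return null; // run the test
}
```

In the real CTS these would be `t.skipIf(...)` calls inside the test function; the point is that both kinds of check now live together, after the device exists.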
I don't know if that's simpler or not, but it would move most of the checks to one place instead of spreading them out as they often are now.