Potential invalidating patch criterion for extension algorithm #235

Open · skef opened this issue Nov 8, 2024 · 13 comments

skef (Contributor) commented Nov 8, 2024

I suspect that the lack of a specific criterion for choosing the next patch during extension will actually interfere with generality. The more information an encoder has about what the client will do, the more clever and "general" it can be about how it encodes. As I've indicated in #232, I think the considerations for glyph-keyed patches will be distinct from those for invalidating patches, so I'm only speculating about the latter here. I'm also going to skip the variable subspace problem because I'd need to think more about how to add that into this framework.

My thoughts about this have evolved primarily from thinking about "one-round-trip encodings" -- encodings such that each patch map has a patch that loads all remaining parameters, and then other patches that load various subsets of those parameters. How do you organize the overall system sensibly to allow for that?

Let's think about two per-patch measures:

  • StuffYouNeed: How many of the codepoints and features needed are loaded by the patch. For now I'm happy to think about this as the simple sum of the two, meaning that the client can easily calculate this using some set operations. (This is setting aside the nascent UVS-motivated mechanism.)
  • TotalStuff: "How much" is loaded by the patch. This is impractical for the client to measure as things stand.

If you could measure TotalStuff, it seems like a good criterion would be:

  • Choose the patches that maximize StuffYouNeed
  • Among those, choose the patch that minimizes TotalStuff, choosing arbitrarily among ties.

This is a very simple strategy and seems like a good starting point for discussion.
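
A minimal sketch of that criterion (not spec text), assuming hypothetically that each candidate patch exposes its covered codepoint/feature sets and a TotalStuff number:

```python
# Hypothetical patch records: covered codepoints/features plus a notional
# TotalStuff size. StuffYouNeed is the simple sum of needed codepoints and
# features the patch would load.

def stuff_you_need(patch, needed_codepoints, needed_features):
    return (len(patch["codepoints"] & needed_codepoints)
            + len(patch["features"] & needed_features))

def choose_patch(candidates, needed_codepoints, needed_features):
    # Maximize StuffYouNeed; among ties, minimize TotalStuff (further ties arbitrary).
    return max(candidates,
               key=lambda p: (stuff_you_need(p, needed_codepoints, needed_features),
                              -p["total_stuff"]))
```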

Now, since TotalStuff is hard to calculate, one approach would be to add a numeric field representing that number to the patch map. I think we've discussed something along these lines in the past with the expected distaste. What does it represent? How would the encoder calculate it? Do we really need to pay the cost for it in the patch table? What about patches listed multiple times? etc. I share these negative thoughts.

Happily, we don't really care about the numeric value; it's just there to break ties relative to StuffYouNeed. And I think we're not currently using the patch map order for anything. (It clearly doesn't factor into the extension algorithm, but there might be order-based size optimizations I'm not remembering. If there are, think in terms of an added numeric field to represent the TotalStuff order, and of unpacking the patch maps into those orders before analysis.) So the encoder could order the patches in the maps by "increasing TotalStuff", and the second part of the updated criterion would be:

  • Among those, choose the first patch.

So in practice the algorithm would just maintain the current patch maximizing StuffYouNeed as it goes through the list.
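
As a sketch (again, not spec text) of that single-pass form, assuming the encoder has ordered the map entries by increasing TotalStuff so that map order itself is the tiebreak:

```python
def choose_patch_by_map_order(ordered_entries, needed_codepoints, needed_features):
    """Keep the first entry that maximizes StuffYouNeed; earlier == smaller TotalStuff."""
    best, best_score = None, -1
    for entry in ordered_entries:
        score = (len(entry["codepoints"] & needed_codepoints)
                 + len(entry["features"] & needed_features))
        if score > best_score:      # strict '>' keeps the earliest entry on ties
            best, best_score = entry, score
    return best
```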

Now, going back to generality, the encoder doesn't have to think in terms of TotalStuff. It can just think in terms of the StuffYouNeed criterion and which patches have priority over which other patches when there are ties. For example, while there is necessarily a "total ordering" in the map, from the encoder's perspective there might just be a set of partial orderings arbitrarily collapsed into the map.

Now, what about #222? Well, my initial thought is that I can't think of a relevant instance where I would use that mechanism for an invalidating patch. Any given subset will include the glyphs or not, and what an invalidating patch encodes is the difference between two subsets. So it shouldn't be a factor. And if it is a factor anyway, it's because of the generality approach of the extension algorithm, which is something that can either be revisited or finessed. (In terms of finessing, we could specify a BS StuffYouNeed calculation for such patch entries with a warning against creating a map that uses it relevantly in practice.)

garretrieger (Contributor) commented Nov 9, 2024

Let's think about two per-patch measures:

  • StuffYouNeed: How many of the codepoints and features needed are loaded by the patch. For now I'm happy to think about this as the simple sum of the two, meaning that the client can easily calculate this using some set operations. (This is setting aside the nascent UVS-motivated mechanism.)
  • TotalStuff: "How much" is loaded by the patch. This is impractical for the client to measure as things stand.

If you could measure TotalStuff, it seems like a good criterion would be:

  • Choose the patches that maximize StuffYouNeed
  • Among those, choose the patch that minimizes TotalStuff, choosing arbitrarily among ties.

Agreed, this is quite similar to the approach I used in the prototype, which worked quite well in the IFT demo. For reference, a description of the criteria I used is here. In the prototype I used the total # of code points in the patch's subset definition as a proxy for TotalStuff.

This is a very simple strategy and seems like a good starting point for discussion.

Now, since TotalStuff is hard to calculate, one approach would be to add a numeric field representing that number to the patch map. I think we've discussed something along these lines in the past with the expected distaste. What does it represent? How would the encoder calculate it? Do we really need to pay the cost for it in the patch table? What about patches listed multiple times? etc. I share these negative thoughts.

Happily, we don't really care about the numeric value; it's just there to break ties relative to StuffYouNeed. And I think we're not currently using the patch map order for anything. (It clearly doesn't factor into the extension algorithm, but there might be order-based size optimizations I'm not remembering. If there are, think in terms of an added numeric field to represent the TotalStuff order, and of unpacking the patch maps into those orders before analysis.) So the encoder could order the patches in the maps by "increasing TotalStuff", and the second part of the updated criterion would be:

  • Among those, choose the first patch.

So in practice the algorithm would just maintain the current patch maximizing StuffYouNeed as it goes through the list.

Now, going back to generality, the encoder doesn't have to think in terms of TotalStuff. It can just think in terms of the StuffYouNeed criterion and which patches have priority over which other patches when there are ties. For example, while there is necessarily a "total ordering" in the map, from the encoder's perspective there might just be a set of partial orderings arbitrarily collapsed into the map.

Agreed, I like the approach of using ordering in the patch map to provide a priority signal, since that's basically free from an encoding standpoint. Definitely better than using code point count like I did in the prototype.

Concretely, here's what I'd propose for spec changes: in patch selection, when selecting within full or partial invalidation groups, mention that a client should pick the patch which has the most StuffYouNeed, but don't define exactly how that's calculated (we could make some recommendations), and then specify that any ties are broken by the actual entry ordering within the patch mapping. I'm leaning towards not explicitly defining StuffYouNeed as I think there's likely some variability here in what things specific clients might value.

As an example, let's say the client currently needs some subset of the Latin alphabet and the user is on a non-metered and fast connection. The client might reasonably decide that StuffYouNeed includes the rest of the Latin alphabet, as it's likely to be needed eventually, and would thus favour patches which contain more of Latin. Since the user's connection is fast it's worthwhile to grab that preemptively. However, if instead the user was on a metered and/or slow connection the client may instead value getting only exactly what it currently needs, to minimize byte transfer.

skef (Contributor, Author) commented Nov 9, 2024

In the prototype I used the total # of code points in the patch's subset definition as a proxy for TotalStuff.

Sounds like you're already leaning towards the map ordering system, but just for the record here it might be worth capturing the problems with using the patch map contents for this. There are at least two:

  1. Elsewhere we've already discussed the fact that a single patch can have multiple patch map entries. So at a minimum the client would need to go through and collect the contents together by patch as opposed to map entry.
  2. The unicode coverage doesn't really tell you the size. Maybe one patch is filled with ligatures and another isn't. Maybe one patch happens to have some extra glyphs that are huge. And the size doesn't necessarily express the priority. If there wind up being complex interactions between sets of patches (which would admittedly be unusual) the client could be stuck doing some complex analysis.

I don't want to claim these are normal cases, but when we get into e.g. emoji fonts I worry we could run into some of these things.
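
Returning to point 1 above, here's a small illustrative sketch (field names hypothetical) of the extra step a client would need before using map contents as a proxy: grouping entry coverage by the patch each entry resolves to.

```python
from collections import defaultdict

def coverage_by_patch(map_entries):
    """Union the codepoint coverage of all map entries that point at the same patch."""
    merged = defaultdict(set)
    for entry in map_entries:
        merged[entry["patch_uri"]] |= entry["codepoints"]
    return merged
```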

skef (Contributor, Author) commented Nov 9, 2024

Concretely, here's what I'd propose for spec changes: in patch selection, when selecting within full or partial invalidation groups, mention that a client should pick the patch which has the most StuffYouNeed, but don't define exactly how that's calculated (we could make some recommendations), and then specify that any ties are broken by the actual entry ordering within the patch mapping. I'm leaning towards not explicitly defining StuffYouNeed as I think there's likely some variability here in what things specific clients might value.

I don't think I'm in the same place, so maybe this is something to dig into over our next few meetings.

The general approach we're taking is smart encoder/simple client. Because the analysis is done up front and isn't time-sensitive, the encoder can expend as much energy as it likes. That doesn't mean that the client can't expend any extra energy, but "freedom" on the client side more or less equates to unpredictability on the encoder side. I think there's more benefit to focusing potential cleverness where we have the cycles.

As an example, let's say the client currently needs some subset of the Latin alphabet and the user is on a non-metered and fast connection. The client might reasonably decide that StuffYouNeed includes the rest of the Latin alphabet, as it's likely to be needed eventually, and would thus favour patches which contain more of Latin. Since the user's connection is fast it's worthwhile to grab that preemptively. However, if instead the user was on a metered and/or slow connection the client may instead value getting only exactly what it currently needs, to minimize byte transfer.

I don't disagree with the need for clients to change their behavior like this but the question is what level to address it on. I would handle this particular example by allowing the client to adjust the subset parameters rather than the algorithm. So it might go through the map tracking two parameter sets and then decide between them.
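
A rough sketch of that idea (names illustrative; the selection routine is injected because it stands in for whatever fixed algorithm the spec ends up stipulating):

```python
def plan_load(patch_map, needed, speculative_extra, fast_unmetered, select):
    """Run the fixed selection over both parameter sets, then pick one to act on."""
    base_choice = select(patch_map, needed)
    augmented_choice = select(patch_map, needed | speculative_extra)
    # The "executive decision" lives here, outside the extension algorithm itself.
    return augmented_choice if fast_unmetered else base_choice
```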

However, it's worth noting that if the client wants to determine the cost of pre-loading more codepoints, rather than simply making an executive decision, it can't effectively do that in the general case. If you've got table-keyed patches you can only guess at either the relative sizes or (in many cases) how many further patches you may need to load. All the problems with estimating TotalStuff accurately come back into the picture. I guess with this new system we could recommend some sort of weight based on patch sort order but it would be pretty cheesy.

So moving back up to the general question, are there cases where:

  • The client wants to trade-off between different results
  • The client has a (potentially fallible) means of measuring the cost of the different results
  • The client can't use our stipulated algorithm to determine each relevant cost, either for inherent or performance-related reasons?

skef (Contributor, Author) commented Nov 9, 2024

Thinking about possible language for client flexibility, here's a very rough draft. (I'm being really sloppy with terminology here rather than searching out all of the definitions I should be using.) This is relative to the tiebreak-by-order standard and a fixed algorithm.

Speculative loading of additional subset parameters

In some cases a client may wish to include additional codepoints, features, or axis ranges in a parameter set, e.g. if they may be needed in the near future and patching at that time could introduce noticeable latency. However, some combinations of parameters may be more expensive to load than others and the client may wish to assess those relative costs in deciding what to load. The patch map does not include the sizes of each patch, and even if it did there would only be entries for the current set of invalidating patches. Still, here are some general guidelines for judging relative costs:

When comparing among full or partial invalidation patches, the first question is whether one patch includes all desired codepoints or whether an additional patch will be needed, as the expense (in latency) of one or more round trips will almost always outweigh the expense of additional bytes in a patch. There is also no way of determining how many further patches will need to be loaded. Therefore, if each of multiple competing options requires loading additional patches, it is difficult to assess the cost beyond the fact that more codepoints will typically involve loading more data.

In cases where there are different invalidation patches that entirely satisfy respective different parameter sets, it can be assumed that patches occurring later in the map are larger than those earlier in the map. The best proxy for the relative sizes of the patches is probably the number of codepoints listed in the corresponding map entries, but this is at best a rough and unreliable heuristic and in cases where it conflicts with the map ordering the ordering should take precedence.

Although there are corner cases, glyph-keyed (and, therefore, no invalidation) patches can be considered independent of one another for purposes of speculative subset parameters. Because they are loaded independently, an encoder may not have put much thought into their ordering in the map, so the codepoint count in the map entry is the best available proxy for a patch's relative size. Because a patch may be listed in the map more than once, a thorough client might want to seek out the map entry with the largest number of listed codepoints as the proxy.

Although a client is free to use these heuristics or any others to arrive at a set of subset parameters to load, the patches it loads to satisfy that set, and the order it loads and applies the patches in, must be the same as what is specified by the extension algorithm for that set.
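
An illustrative sketch of the size proxies described in this draft (heuristics only, nothing normative; field names hypothetical): map position for invalidating patches, and the largest per-entry codepoint count for a glyph-keyed patch listed more than once.

```python
def invalidating_size_rank(entry_index):
    # Later in the map is assumed larger; when this conflicts with codepoint
    # counts, the map ordering takes precedence.
    return entry_index

def glyph_keyed_size_proxy(entries_for_patch):
    # A glyph-keyed patch may appear in several map entries; use the largest
    # listed codepoint count as its relative-size proxy.
    return max(len(e["codepoints"]) for e in entries_for_patch)
```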

garretrieger (Contributor) commented:

That draft looks reasonable for speculative loading; I don't think we currently talk about speculative loading anywhere, and we likely should. Just wanted to note, though, that for speculative loading there's a subtle distinction between:

  1. Increasing the subset definition to cover additional things and then executing the extension algorithm, and
  2. Preferring to select larger patches from the patches that match a non-inflated subset definition.

For 1, this will result in the font loading everything it needs for those additional things, whereas with 2 you may only load part of what you need for the additional things. In a mixed IFT font this could be a low-cost way to eliminate a future round trip by grabbing a larger partial invalidation patch but not yet getting any of the associated non-invalidating patches. That said, this seems like a micro-optimization; I'm not sure we actually need to get into this level of detail in a spec section on speculative loading.

Going back to the patch selection ordering problem, I've been thinking about things a bit more, and ultimately we are trying to achieve the following two outcomes (as you noted in the speculative loading draft) when a client is deciding which patch to get first:

  1. Most importantly, reduce the total number of round trips.
  2. Once the number of round trips is minimized, make a best effort to reduce the number of bytes transferred.

So when describing how a client should decide ordering, we probably want to frame it in terms of those two objectives. That should lead to fairly predictable client behaviour. I believe round trip minimization should be equivalent to finding the smallest number of invalidating patches that cover everything which is needed, since each patch is a round trip. Size minimization will be vaguer, of course, but it's far less important, so using proxies such as entry ordering or code point set sizes should be good enough. We'll probably also want to reiterate in the encoder guidance that clients will be looking to minimize round trips, so encoders should create encodings that are conducive to that end goal.

Broadly, there are two main cases I see a client encountering with invalidating patches:

  1. The client is deciding the ordering between a set of disjoint invalidating patches. In this case the selected ordering isn't particularly important. The total number of round trips and total transfer sizes won't change based on the order of selection.
  2. The client is deciding the ordering between a set of patches that overlap. Here the client can eliminate round trips if it's able to find patches which are supersets of two or more smaller patches. As an example, if a font has patches A, B, C, A + B, and A+B+C, and the client needs A and B, then the above criteria should naturally lead to selecting the A + B patch.

garretrieger (Contributor) commented:

Ah actually we do have a small section on speculative loading: https://w3c.github.io/IFT/Overview.html#reduce-requests though that could probably use a bit of fleshing out.

skef (Contributor, Author) commented Nov 14, 2024

The client is deciding the ordering between a set of disjoint invalidating patches. In this case the selected ordering isn't particularly important. The total number of round trips and total transfer sizes won't change based on the order of selection.

A client's determination that it's in this case will always be based on an assumption, correct? (That assumption being that the next iteration of the patch map will look like the current one except with the loaded patch removed.)

skef (Contributor, Author) commented Nov 14, 2024

  1. Increasing the subset definition to cover additional things and then executing the extension algorithm, and
  2. Preferring to select larger patches from the patches that match a non-inflated subset definition.

I can think about this more, but given that you can only load invalidating patches one at a time I'm not sure how different these two are. I guess you could have a weirdly encoded font that puts patches that cover more earlier in the list, but that's just straightforwardly counter-productive.

I guess I feel like a client should be judging the value of speculative loading based on what it's getting, measured in the additional parameters it will support. Whereas 2, as distinct from 1, seems to amount to "I want to load a larger next patch without knowing what that patch will give me support for."

My biggest problem making judgments about 2 is the fact that we have a unified algorithm for invalidating and non-invalidating patches, and no particular restrictions on how the patches are formed. Suppose someone encodes a font with these non-invalidating patches (among others):

1. a-z
2. A-Z
3. a-z+A-Z
4. (f)(i)

(The last containing the fi ligature. Presumably the only motivation for a subset of patches like this would be if a-z and A-Z compress together well.)

If you want to support {f, i, r}, you need to load either 1 and 4 or 3 and 4. Loading 1, 3, and 4 won't result in corruption but it's certainly counter-productive. So how is the client supposed to think about this? I'm not sure it's worth allowing combinations like 1,2,3 in terms of the complexity it adds to the decision, but this possibility seems inherent in the unified system I continue to gripe about. And anyway what is it, specifically, that makes you load 4 if you need f and i? For example, what makes 3 optional (and undesirable) if you've loaded 1 but 4 mandatory?

I'm most comfortable working this sort of thing out relative to a specific rule that we can test. If the rule amounts to "client chooses" then I worry it will be difficult to arrive at a system that is reliably and demonstrably correct given arbitrary sets of patches.

(I say "arrive at" because I don't believe the current spec handles cases like 1-4 above correctly, or even imagines such combinations in practice, even though they aren't ruled out. And I similarly don't think the invalidating patch side would deal with #222-style patches correctly. They probably don't make sense for invalidating patches but, again, we haven't ruled them out. What makes an entry an "alternative" for a set of codepoints versus an additional requirement for a set of codepoints? I don't know. My intuition is that, generally speaking, one should treat overlapping patches as "alternatives" for invalidation patches but as "all are required" for no-invalidation patches, but obviously that's not the direction we're going in. The latter would seem to rule out combinations like 1-4.)

skef (Contributor, Author) commented Nov 14, 2024

I guess I should clarify here: I'm basically saying that invalidation patch map entries should effectively be treated disjunctively and no-invalidation patch map entries should effectively be treated conjunctively. And we have discussed the two mechanisms that lead to something like this: The algorithm itself is conjunctive (1) and the editing of the patch map for invalidation patches can strip out the overlapping alternatives (2).

But then the question is, what about disjunctive no-invalidation patches? I can't recall ruling those out in our discussions and there doesn't seem to be practical guidance about avoiding them. It seems like our approach to these questions is, instead: The implementer should read the extension algorithm and deduce these rules from it. And maybe they aren't rules, anyway. We might stick to the subsets of behavior we're comfortable with (and are ensuring among the possibilities provided by the algorithm), but maybe someone could be more clever than us and mix things in a different way.

Well, maybe, but if we want to leave that sort of cleverness in the encoder open, shouldn't we make the client behavior as specific as possible? And if we don't care about leaving it open, wouldn't it be much more comprehensible to be more specific about our descriptions of the algorithm and its limitations?

But I guess I'm back to my broken record.

garretrieger (Contributor) commented:

A client's determination that it's in this case will always be based on an assumption, correct? (That assumption being that the next iteration of the patch map will look like the current one except with the loaded patch removed.)

That's right; we don't expose any information about what the patch map will look like post-application, so the client will need to make some assumptions about the expected outcome.

1. a-z
2. A-Z
3. a-z+A-Z
4. (f)(i)

(The last containing the fi ligature. Presumably the only motivation for a subset of patches like this would be if a-z and A-Z compress together well.)

If you want to support {f, i, r}, you need to load either 1 and 4 or 3 and 4. Loading 1, 3, and 4 won't result in corruption but it's certainly counter-productive. So how is the client supposed to think about this? I'm not sure it's worth allowing combinations like 1,2,3 in terms of the complexity it adds to the decision, but this possibility seems inherent in the unified system I continue to gripe about. And anyway what is it, specifically, that makes you load 4 if you need f and i? For example, what makes 3 optional (and undesirable) if you've loaded 1 but 4 mandatory?

Since these are non-invalidating patches, under the current algorithm patches 1, 3, and 4 would be loaded. This happens because all patches that match the input subset definition are loaded by the algorithm, and since they are glyph keyed, upon application they will only ever remove themselves from the mapping.
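
A small worked check of that behaviour on the example patches (illustrative only): every non-invalidating entry that intersects {f, i, r} is loaded, so 1, 3, and 4 are all fetched even though 1 and 3 overlap.

```python
entry_coverage = {
    1: set("abcdefghijklmnopqrstuvwxyz"),
    2: set("ABCDEFGHIJKLMNOPQRSTUVWXYZ"),
    3: set("abcdefghijklmnopqrstuvwxyz") | set("ABCDEFGHIJKLMNOPQRSTUVWXYZ"),
    4: {"f", "i"},                      # entry for the f+i (ligature) patch
}
needed = {"f", "i", "r"}
loaded = sorted(p for p, cov in entry_coverage.items() if cov & needed)
print(loaded)                           # -> [1, 3, 4]
```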

I'm most comfortable working this sort of thing out relative to a specific rule that we can test. If the rule amounts to "client chooses" then I worry it will be difficult to arrive at a system that is reliably and demonstrably correct given arbitrary sets of patches.

(I say "arrive at" because I don't believe the current spec handles cases like 1-4 above correctly, or even imagines such combinations in practice, even though they aren't ruled out. And I similarly don't think the invalidating patch side would deal with #222-style patches correctly. They probably don't make sense for invalidating patches but, again, we haven't ruled them out. What makes an entry an "alternative" for a set of codepoints versus an additional requirement for a set of codepoints? I don't know. My intuition is that, generally speaking, one should treat overlapping patches as "alternatives" for invalidation patches but as "all are required" for no-invalidation patches, but obviously that's not the direction we're going in. The latter would seem to rule out combinations like 1-4.)

I guess I should clarify here: I'm basically saying that invalidation patch map entries should effectively be treated disjunctively and no-invalidation patch map entries should effectively be treated conjunctively. And we have discussed the two mechanisms that lead to something like this: The algorithm itself is conjunctive (1) and the editing of the patch map for invalidation patches can strip out the overlapping alternatives (2).

This is how the algorithm currently operates: all intersecting patches are candidates for loading, and for invalidating patches it's expected that on application they will remove overlapping entries that are no longer necessary. Since non-invalidating patches don't have the capability to remove entries other than themselves, all matching non-invalidating patches will always be loaded, as in the example above.

But then the question is, what about disjunctive no-invalidation patches? I can't recall ruling those out in our discussions and there doesn't seem to be practical guidance about avoiding them. It seems like our approach to these questions is, instead: The implementer should read the extension algorithm and deduce these rules from it. And maybe they aren't rules, anyway. We might stick to the subsets of behavior we're comfortable with (and are ensuring among the possibilities provided by the algorithm), but maybe someone could be more clever than us and mix things in a different way.

Yes, currently this is implicit. If the encoder knows that all matching non-invalidating patches will be downloaded and applied, then it follows that you want your non-invalidating patches to be as disjoint as possible, to maximize efficiency by avoiding downloading data for the same glyph more than once. We could explicitly mention this in the encoder guidance so that it's clear. It's useful to keep the option of overlapping non-invalidating patches open because, in cases where you have complex relationships between glyphs, you may end up wanting to make a tradeoff and have a glyph in more than one patch, as discussed in the encoding considerations around handling shared glyphs.

The specification provides an important constraint around overlapping non-invalidating patches. Encoding requirements item 2 implies that if you have two non-invalidating patches that contain data for the same glyph, that glyph data must be the same in each patch, since item 2 requires that regardless of application order the resulting font is the same. This requirement ensures that if you do have overlapping non-invalidating patches, a compliant encoding will work correctly regardless of the order in which the client chooses to apply the patches.
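
A hypothetical encoder-side sanity check for that constraint (not spec text; glyph data is modelled here as plain bytes): any glyph carried by more than one non-invalidating patch must have identical data in each, so application order can't change the result.

```python
def overlapping_glyph_data_consistent(patches):
    """patches: iterable of dicts mapping glyph id -> glyph data (bytes)."""
    seen = {}
    for patch in patches:
        for gid, data in patch.items():
            if gid in seen and seen[gid] != data:
                return False            # same glyph, different bytes: invalid encoding
            seen[gid] = data
    return True
```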

Well, maybe, but if we want to leave that sort of cleverness in the encoder open, shouldn't we make the client behavior as specific as possible? And if we don't care about leaving it open, wouldn't it be much more comprehensible to be more specific about our descriptions of the algorithm and its limitations?

For non-invalidating patches the client behaviour is currently well defined. The main concern left to address is the original topic of this issue around providing more concrete details on the selection order of invalidating patches, since that's currently left more open-ended. What do you think about defining it in terms of minimizing round trips and then size as I mentioned above? I believe that provides enough predictability for encoders and prioritizes the right outcomes on the client side.

skef (Contributor, Author) commented Nov 14, 2024

The main concern left to address is the original topic of this issue around providing more concrete details on the selection order of invalidating patches, since that's currently left more open-ended. What do you think about defining it in terms of minimizing round trips and then size as I mentioned above? I believe that provides enough predictability for encoders and prioritizes the right outcomes on the client side.

Certainly minimizing round trips first, and then size, is the proper goal. The question is what can we say about this? The language I came up with was based on how I expect encoders will normally work, but we currently have no guidelines on how encoders should work in terms of the organization of the patch maps.

Suppose you need to support A, B, and C, and are considering throwing in D and E speculatively. You have the initial patch map. What do you know?

  1. You don't know how many round trips it will take to add support for A, B, and C, because you don't know what the updated patch map will look like: it might require an additional patch for B, and then when that's loaded there could be another patch for B. Of course, on the client side there's nothing to be done about it; if you need to render B you'll just have to load all three patches serially.
  2. Same with D and E. Picking a patch that includes all of A-E may help, but the next patch map might include an additional patch needed to add support for D or E, in which case you'll have either loaded extra data to begin with for nothing, or will need to load the subsequent patch (although you can continue with rendering while you wait for it).

So any determination about what to do at the first stage depends on the conventions you expect the encoder to follow. And there are, I strongly suspect, conventions that make the most sense for the encoder to follow. But we haven't written any of this down in the specification or in related documents, for either no-invalidation or invalidation patches (each of which would presumably have very different associated conventions).

So: absent establishing guidelines for encoders to follow, I don't think there's anything productive to say about speculative loading. Or, to put it a different way, if we are resisting establishing such guidelines to leave the possibility of more "clever" encoders open, such cleverness goes against being able to make predictions on the value of speculative loading. We'll have to choose one or the other.

garretrieger (Contributor) commented Nov 15, 2024

It's reasonable to assume that both clients and encoders will be aligned on the desire to minimize round trips. So if we firm up client patch selection by defining a more specific process for selecting between matching partial invalidation patches that aims to minimize round trips, that will guide encoder implementations toward encoding schemes that produce minimal round trips within the client framework.

Here's roughly what I'm thinking. To start, for the client we make the following assumptions:

  • If a mapping is triggered by some set of code points, then we assume that patch will meaningfully advance support for those code points (likewise for features and design space).
  • In the absence of any other info we will assume that the available patches are equivalent in terms of round trip cost. That is, picking and applying a specific patch will get you support for the associated code points/features/design space in the same number of round trips as any of the other intersecting patches.

From this it follows that if we want to minimize round trips for the current iteration, then a reasonable approach is to select the next patch such that it minimizes the number of remaining patches, where remaining patches are defined as patches whose coverage is not a subset of the selected patch's (coverage is only considered for code points/features/design space present in the target subset definition). For example, if you have patches A, B, C, and A + B, and you need something from A, B, and C, then selecting A + B results in an expected one outstanding patch (C), versus two for all other selections. Where there are ties these can be broken at the discretion of the client, but most likely it will want to select the patch that has the smallest apparent size (e.g. based on entry order in the patch map).

This framework should be straightforward on the client and provide the predictability that encoders need to produce encodings that may contain overlapping invalidating patches with an aim to reduce round trips.
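
A rough sketch of the selection rule proposed above, under the stated assumptions (field names hypothetical): pick the intersecting invalidating patch that leaves the fewest remaining patches, with ties going to the earliest map entry.

```python
def select_invalidating_patch(entries, needed):
    """entries are in patch-map order; `needed` is the target codepoint set."""
    def relevant(entry):
        # Coverage is only considered for codepoints in the target subset definition.
        return entry["codepoints"] & needed

    candidates = [e for e in entries if relevant(e)]

    def remaining_after(chosen):
        # Remaining patches: those whose relevant coverage is not a subset of
        # the chosen patch's relevant coverage.
        chosen_cov = relevant(chosen)
        return sum(1 for e in candidates
                   if e is not chosen and not relevant(e) <= chosen_cov)

    # min() keeps the earliest candidate on ties, i.e. the map-order tiebreak.
    return min(candidates, key=remaining_after) if candidates else None
```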

As for the speculative loading guidance, I think it's fine to talk about that only in terms of extending the target subset definition to cover things the client thinks might be needed in the future and not get into details of trying to optimize speculative loading based on the contents of the patch mappings.

skef (Contributor, Author) commented Nov 15, 2024

I'll open a new PR with what I see as reasonable guidelines for patch map construction (relative to the ordering idea) and we can hash out more of these questions relative to a specific proposal.
