#[auto_system] simplified system configuration #10851
-
The linkme crate cannot be used because it does not support wasm. dtolnay/linkme#6 (comment)
-
This would also be problematic with multiple
-
Your general gripes about system initialization and interdependency are not widely accepted as shared pain points. This means either you are trying to solve your problems in a way that is distinct from the way others are trying to solve these same problems, or it means you've found novel problems. Because you have not mentioned any novel problems, and because the examples you provide do not seem novel, this suggests that you're just trying to solve things in a different way; and because you are experiencing pain nobody else seems to be experiencing, I am forced to conclude that the way you've identified to solve these problems is what we know colloquially as "a bad way." If you were to share either the ways in which you've tried and failed to solve these problems, or the novel nature of the problems causing you this pain, it might help the reader take your suggestion more seriously. As is, I see an inadequate and overcomplicated set of tools to solve a problem I don't have.
-
No, not even remotely close. Custom schedules give you more "buckets" into which to throw code that'll execute at the same time. Using custom schedules without thinking too hard is answering "How early does this need to run?" with an answer like "Kind of!" instead of an answer like "stage 0.16." If we run into problems, then we solve that problem; but because we're being good ECS citizens and writing stateless systems, the vast majority of our logic isn't interrelated in such a way that it needs to give a crap about precisely when it executes. As long as "kind of early" executes before "middle of the frame" and "late in the frame", everything will go fine. If we need more fine-grained control for a specific reason, we can use something more complicated.
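For readers who haven't used this pattern, a minimal sketch of those coarse "buckets" as ordinary system sets (the `Phase` enum and the systems are made-up names, not from this thread):

```rust
use bevy::prelude::*;

// Coarse "buckets" instead of per-system ordering.
#[derive(SystemSet, Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Phase {
    Early,
    Mid,
    Late,
}

fn read_input() {}
fn apply_velocity() {}
fn check_collisions() {}
fn update_ui() {}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // The only global constraint: Early runs before Mid, Mid before Late.
        .configure_sets(Update, (Phase::Early, Phase::Mid, Phase::Late).chain())
        .add_systems(Update, read_input.in_set(Phase::Early))
        .add_systems(Update, (apply_velocity, check_collisions).in_set(Phase::Mid))
        .add_systems(Update, update_ui.in_set(Phase::Late))
        .run();
}
```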
Yeah I just solve for this by having a run-condition on the appropriate resource(s). Way easier than treating it as an execution ordering concern. I might also use an event queue if I care even less about exactly what frame the action gets resolved in. Run conditions are a really strong tool for that kind of thing.
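A minimal sketch of that run-condition pattern, assuming a hypothetical `LevelData` resource that some other system inserts later:

```rust
use bevy::prelude::*;

// Hypothetical resource that another system inserts at some unknown time.
#[derive(Resource)]
struct LevelData;

// This system only runs once LevelData actually exists, so it never has to
// care about exactly when (or in what order) the resource gets inserted.
fn use_level_data(_level: Res<LevelData>) {
    // ...
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(
            Update,
            use_level_data.run_if(|level: Option<Res<LevelData>>| level.is_some()),
        )
        .run();
}
```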
Sorry, I meant like, what do we mean in terms of behavior that makes us care about this from a system ordering perspective? The way I see it, when this happens, you have three basic scenarios:
None of those generally have any kind of cascading implications, which is part of why I am confused.
It should not be controversial for me to indicate that stateless systems are the norm and that stateful concerns regarding global resources are a code smell in the context of an ECS.
Yes, but if your motivation for writing code like this is that you want to be able to define system ordering as a set of stateful operations against global resources, you have not used an approach that helps you write good ECS systems. This is like building a hammer with which to break the wings of Pegasus so that he sticks to the road when you ride him.
-
I'm going to make the argument here that these do not really feel like concerns about system ordering; they are concerns that are unnecessarily bloating trivial system ordering problems by "artificially" inflating the number of systems you're using. There are plenty of methods and techniques for breaking out logic that do not involve writing new systems.

For the general problem of "I have some kind of operation that yields an entity, and it becomes problematic to reify that entity into a specific set of components without writing a brand new system each time" - this was also a conceptual issue I had a lot of trouble with. The solution I ended up implementing, with assistance from the community, was to derive an abstract representation of the "sorts" of component sets an activity would be able to interact with, and then to reify that representation into operations on world state using custom commands. Lemme know if that doesn't make a whole lot of sense to you; it's a big concept to try to fit into a couple of sentences, so I don't really feel like I'm doing it justice. The conceptual upshot is that it's a technique that lets us scope behavior against the world state we actually care about instead of fragmenting that concern any time we have a new "sort" of entity to worry about, which is something we generally want to avoid for more reasons than just "it makes ordering harder to deal with."

Also, just as an aside here, I am not trying to make the argument that any of these techniques are somehow "obvious," only that they're working more with the engine than against the engine.
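A compressed sketch of that custom-command idea (`SpawnLoot` and its fields are stand-in names, and the exact `Command` trait path and method name vary a bit between Bevy versions; this assumes roughly Bevy 0.12):

```rust
use bevy::ecs::system::Command;
use bevy::prelude::*;

// Abstract description of one "sort" of entity we may need to create.
struct SpawnLoot {
    position: Vec3,
}

// The reification into concrete components happens in exactly one place,
// instead of in a brand new system for every new "sort" of entity.
impl Command for SpawnLoot {
    fn apply(self, world: &mut World) {
        world.spawn((
            Transform::from_translation(self.position),
            Name::new("loot"),
        ));
    }
}

// Any system can queue the abstract operation without caring how it is applied.
fn resolve_drops(mut commands: Commands) {
    commands.add(SpawnLoot { position: Vec3::ZERO });
}
```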
Yes, but this is only disruptive if you're actually introducing a new "branch" point in the schedule graph. If these new systems have the same ordering constraints as the last one, this doesn't require revisiting any assumptions you've made up until this point or doing anything unique. You just schedule your systems the same way the original system was scheduled.

My experience is that system ordering as a concern, when it is a concern, is almost always local to some concept of a "process" or a "mechanic". I do not generally have concerns around system/behavior ordering that require systems to care about the state of a variable in the world, because that's not how I structure my code, because structuring my code that way introduces a shitload of conceptual overhead I'd rather not deal with. Because of this, when I think about system ordering, I typically think about it in terms of how a system relates to other systems, and not how or when a system interacts with its parameters, because I have made the architectural decision to always assume indeterminately ordered access to world state, because I am writing code in an engine that defaults to parallel execution with its own ideas about how to order operations against world state.
Sure, for certain values of "unintentional." I generally structure my code as if world state were a giant double buffer, because it essentially IS a giant double buffer, and I find that most behaviors don't actually need to worry about the delay, because if everything is delayed by 30 ms, then it all plays out at the same rate in the end anyway. The only scenario in which this becomes a problem is if you have a chain of operations implicitly depending on each other, but like, don't do that, and if you're forced to, you can expect it'll be a pain in the ass. Much like how in Unity, an
-
I for one feel the pain of scheduling. To quote stepancheg:
I think it's indeed very difficult to have a predictable schedule when you have even just a few systems that touch the same thing. In truth, I don't think there should be too many disparate systems mutably touching the same components/resources, so the issue with schedule predictability shouldn't show up too often, but when it does show up it's a headache.

The problem, as stepancheg rightly identified, is that the only relationship between systems is either implicit (by not declaring it) or decoupled from the system itself (declared where the systems are added to the schedule).

I think an approach where scheduling is inherent to the system would help a bit. For one, it would enable some form of composability of schedules that up to now is just not possible in bevy. But all approaches suggested by stepancheg up to now felt awkward and not really a fit for bevy. I don't really have a concrete suggestion though ¯\_(ツ)_/¯
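For context, this is roughly what "the relationship is declared away from the system itself" looks like in current Bevy (the system names here are invented):

```rust
use bevy::prelude::*;

fn apply_damage() {}
fn update_health_bars() {}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // The ordering relationship lives here, in the schedule-building code,
        // not on the systems themselves.
        .add_systems(Update, (apply_damage, update_health_bars).chain())
        .run();
}
```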
-
I struggle the most with Bevy system configurations.
When the number of systems grows past 100 and you are doing rapid prototyping (rather than, say, a slowly developed and peer-reviewed mod), bugs are unavoidable. I have already encountered several incorrectly configured systems in my still very small application.
Moreover, refactoring is hard. For example, the number of code changes needed to move a system from one file to another, or to split a system into two systems, is too high.
I posted about it before, and suggested a few approaches to fix it, for example:
- `Before`/`After` barrier RFC: Barriers with Before/After markers #10618

So far, none was accepted, unfortunately.
Here I'm describing an approach I'm using in my project.
#[auto_xxx]
TL;DR
Using a fictional example, the code looks like this:
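As a rough, hypothetical sketch (the `Score` resource and the two systems are invented for illustration, and the exact attribute argument syntax is illustrative; the attributes themselves come from the proc-macro crate described below):

```rust
use bevy::prelude::*;

#[auto_resource]
#[derive(Resource, Default)]
struct Score(u32);

#[auto_system(Update)]
fn add_score(#[before] mut score: ResMut<Score>) {
    score.0 += 1;
}

#[auto_system(Update)]
fn print_score(#[after] score: Res<Score>) {
    info!("score: {}", score.0);
}
```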
And finally, my app's `main` function looks like this:
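A sketch of that `main`, assuming a hypothetical `MyMainPlugin` that calls `apply_auto_configs` internally:

```rust
use bevy::prelude::*;

// Assumed name: the app's main plugin, which applies all collected configs.
struct MyMainPlugin;

impl Plugin for MyMainPlugin {
    fn build(&self, app: &mut App) {
        apply_auto_configs(app); // from the 7-line "framework" shown below
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(MyMainPlugin)
        .run();
}
```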
And that's it. Functions are private, there are no plugins, and systems are not added explicitly. Just add these functions, and all systems and resources are installed.
How does it work
The basic idea is this. I'm using the linkme crate to collect configurations.
"Framework" implementation is these 7 lines of code:
`AUTO_CONFIG` contains all the configurations, and `apply_auto_configs` needs to be called from my app's main plugin. `AUTO_CONFIG` is populated by proc macros. For example, for `#[auto_resource]` the generated code looks like this (first level of proc-macro expansion):
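A hypothetical first-level expansion, in the spirit described above (the `Score` resource and the name of the generated static are invented):

```rust
// Hypothetical expansion of #[auto_resource] on a Score resource:
// the original item is kept, plus one registration entry in AUTO_CONFIG.
#[derive(Resource, Default)]
struct Score(u32);

#[distributed_slice(AUTO_CONFIG)]
static AUTO_CONFIG_SCORE: fn(&mut App) = |app| {
    app.init_resource::<Score>();
};
```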
What annotations are supported

`#[auto_config]`

The most generic one: takes an `&mut App` and runs code on it. Kind of an auto-discoverable lightweight plugin.

`#[auto_plugin]`

Applied to a `Plugin`; calls `app.add_plugins()`.

`#[auto_resource]` and `#[auto_state]`

Calls `app.init_resource()` or `app.init_state()`, respectively.

`#[auto_system]`

And finally, the most useful one is `#[auto_system]`. Arguments for `#[auto_system]` are self-explanatory. But additionally, `Res` or `ResMut` parameters of the system can be marked as `#[before]` and `#[after]`:

- `#[before]` translates to `.before(ResourceTypeSet::<MyRes>::new())`
- `#[after]` translates to `.after(ResourceTypeSet::<MyRes>::new())`

So systems with `#[before]` and `#[after]` on the same resource get a dependency relationship through the shared system set `ResourceTypeSet<MyRes>`. (And the same with events.)

Potential extension: groups
In my kitchen sink implementation, all configurations are added into the same group. This could possibly be extended to support groups of configurations, for example:
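A hypothetical sketch of what grouped registration could look like (the `group` argument and the `apply_auto_configs_group` helper are invented names):

```rust
// Hypothetical only: a `group` argument on the annotation...
#[auto_system(Update, group = "client")]
fn draw_hud(/* ... */) {}

// ...and a helper that applies only the groups a given binary wants.
fn build_client(app: &mut App) {
    apply_auto_configs_group(app, "client");
    apply_auto_configs_group(app, "simulation");
}
```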
Final thoughts
With "groups" this can be used to configure Bevy and Bevy mods, but I don't think it is needed there, because the complexity of Bevy core and Bevy plugins is not in system configurations. Moverover, this setup does not support generic code (e.g. if an
#[auto_system]
needs to be instantiate twice with different type parameters).This might be useful mostly for final applications developed by Bevy users.
I'm certain Bevy needs some solution to reduce the boilerplate of system configuration. I'd say core Bevy needs it, but perhaps as a mod.
Unfortunately, it is unlikely I will be able to publish this code as an open-source project, since I don't have the time to maintain it long term.
The best I can do is either: