Is your feature request related to a problem? Please describe.
No
Describe the solution you'd like
When using multiple cameras to localize with AprilTags, create a mechanism that combines the data from all cameras configured for AprilTag detection into a single robot position reported over NetworkTables. This would reduce NetworkTables traffic and simplify the use of multiple cameras. Different techniques could be used to derive the combined position, including averaging, ambiguity-weighted averaging, closest-to-last-position, or something else. Each AprilTag pipeline definition (or the central configuration for the combining function) would need to include camera-to-robot position offsets. Flags could be added to stop publishing individual camera positions to NetworkTables. The single consolidated position could include data indicating the existence and quality of each individual camera's pose estimate.
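A minimal sketch of what one combining strategy could look like, assuming WPILib's geometry classes (`Pose3d`, `Transform3d`, `Translation3d`, `Rotation3d`). The `PoseFuser` class, `CameraMeasurement` record, and `fuse` method are hypothetical names for illustration, not part of any existing PhotonVision API. It weights each camera's translation by inverse ambiguity and keeps the rotation from the least-ambiguous solve, since averaging rotations directly is non-trivial.

```java
// Hypothetical ambiguity-weighted pose fusion; names are illustrative only.
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;
import java.util.List;
import java.util.Optional;

public class PoseFuser {
    /** One camera's AprilTag solve plus that camera's mounting offset on the robot. */
    public record CameraMeasurement(Pose3d fieldToCamera, double ambiguity, Transform3d robotToCamera) {}

    /** Combine all cameras that saw tags into a single field-to-robot pose. */
    public static Optional<Pose3d> fuse(List<CameraMeasurement> measurements) {
        Translation3d weightedSum = new Translation3d();
        double totalWeight = 0.0;
        Rotation3d bestRotation = null;
        double bestAmbiguity = Double.MAX_VALUE;

        for (CameraMeasurement m : measurements) {
            // Move each camera pose into the robot frame using its camera-to-robot offset.
            Pose3d fieldToRobot = m.fieldToCamera().transformBy(m.robotToCamera().inverse());

            // Weight translations by inverse ambiguity: lower ambiguity means more trust.
            double weight = 1.0 / Math.max(m.ambiguity(), 1e-6);
            weightedSum = weightedSum.plus(fieldToRobot.getTranslation().times(weight));
            totalWeight += weight;

            // Keep the rotation from the least-ambiguous measurement.
            if (m.ambiguity() < bestAmbiguity) {
                bestAmbiguity = m.ambiguity();
                bestRotation = fieldToRobot.getRotation();
            }
        }

        if (totalWeight == 0.0 || bestRotation == null) {
            return Optional.empty();
        }
        return Optional.of(new Pose3d(weightedSum.times(1.0 / totalWeight), bestRotation));
    }
}
```

The consolidated `Pose3d` (plus per-camera presence/quality flags) could then be published once over NetworkTables instead of one entry per camera.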
Describe alternatives you've considered
Currently this processing is done in robot code; the mechanism described above would be the suggested alternative.
Additional context
None.
This doesn't seem possible to me without hardware frame synchronization. The backend is also pretty rigidly coupled to the 1-1-1 mapping of camera to vision runner to result producer. Might be cool though!