Hello @ZwwWayne. In the proposed architecture, the LiDAR point cloud and the camera information are fused.
Could you please comment on where the output confidence value and the bounding box coordinates come from?
Do we take the bounding box and confidence information from the LiDAR point-cloud 3D detections or from the camera-image 3D detections?
What if one of the sensors fails? How do we then get the bounding box and the confidence score?
The outputs of the architecture are (y_true, y_new, y_end, y_link). How do we get the track IDs from these outputs?
Are y_new and y_end the start and end of a trajectory? Thanks in advance.
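For context on the last two questions: in many multi-object tracking pipelines, per-frame link affinities together with new/end indicators are turned into track IDs by matching current detections against existing tracks. The sketch below is only an assumption about how such (y_new, y_end, y_link) outputs could be consumed, not necessarily this paper's exact decoding step; `assign_track_ids`, its greedy matching, and the 0.5 threshold are all hypothetical.

```python
import numpy as np

def assign_track_ids(prev_ids, link_scores, next_id, thresh=0.5):
    """Hypothetical decoding of link scores into track IDs.

    prev_ids:    track IDs of the previous frame's trajectories.
    link_scores: matrix where link_scores[i][j] is the affinity between
                 previous track i and current detection j (assumed shape).
    next_id:     first unused track ID.
    Greedy matching is used here as a simple stand-in for an optimal
    assignment (e.g. Hungarian algorithm).
    """
    link = np.asarray(link_scores, dtype=float)
    n_prev, n_cur = link.shape
    ids = [-1] * n_cur
    used_prev = set()
    # Match highest-scoring (track, detection) pairs first.
    pairs = sorted(((link[i, j], i, j)
                    for i in range(n_prev) for j in range(n_cur)),
                   reverse=True)
    for score, i, j in pairs:
        if score < thresh:
            break  # remaining affinities too low to link
        if i in used_prev or ids[j] != -1:
            continue
        ids[j] = prev_ids[i]       # continue trajectory (y_link)
        used_prev.add(i)
    for j in range(n_cur):
        if ids[j] == -1:           # unmatched detection: new trajectory (y_new)
            ids[j] = next_id
            next_id += 1
    # Previous tracks with no match are treated as ended (y_end).
    ended = [prev_ids[i] for i in range(n_prev) if i not in used_prev]
    return ids, ended, next_id

# Example: two existing tracks, two detections, strong diagonal affinity.
ids, ended, next_id = assign_track_ids([0, 1], [[0.9, 0.1], [0.2, 0.8]], 2)
```

Under this reading, y_new flags detections that start a trajectory, y_end flags trajectories that terminate, and y_link carries the frame-to-frame association; the maintainer can confirm whether the paper's inference step works this way.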