
Dataset size of 0 #626

Closed
1 task done
alexeidinca-ro opened this issue Nov 27, 2023 · 1 comment
Labels
question Further information is requested

Comments


alexeidinca-ro commented Nov 27, 2023

Search before asking

  • I have searched the Supervision issues and found no similar feature requests.

Question

Hello! Hope you are well!

I am trying to run a Supervision benchmark (confusion matrix) on a custom dataset, and every time it reports a dataset size of zero and an all-zero confusion matrix. What am I doing wrong in this approach? The dataset is the test split, in YOLO format with images and annotations as .txt files, and the YAML file is the standard YOLO data.yaml.

I am attaching the notebook code and the contents of the YAML file.

data.yaml below:

train: ../ita-my-patch/train/images
val: ../ita-my-patch/valid/images
test: ../ita-my-patch/test/images

nc: 1
names: ["HH"]

Notebook code below:

import supervision as sv
from ultralytics import YOLO
import numpy as np 

dataset = sv.DetectionDataset.from_yolo(
    images_directory_path='ita-my-patch/test/images',
    annotations_directory_path='ita-my-patch/test/labels',
    data_yaml_path='data.yaml'
)

model = YOLO("runs/detect/train27/weights/best.pt")

print('dataset classes:', dataset.classes)
print('dataset size:', len(dataset))

def callback(image: np.ndarray) -> sv.Detections:
    result = model(image)[0]
    return sv.Detections.from_ultralytics(result)


confusion_matrix = sv.ConfusionMatrix.benchmark(
    dataset=dataset,
    callback=callback
)

confusion_matrix.matrix
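
As a sanity check (a minimal sketch, assuming the notebook runs from the folder that contains ita-my-patch and data.yaml), I can also verify that the relative paths resolve before building the dataset, since paths that point nowhere are a common cause of an empty dataset:

import os

# Pre-flight check: confirm the relative paths used above actually resolve
# from the current working directory before calling from_yolo.
print('cwd:', os.getcwd())
for path in ['ita-my-patch/test/images', 'ita-my-patch/test/labels', 'data.yaml']:
    print(path, '-> exists:', os.path.exists(path))

# If the images directory exists, make sure it is not empty.
if os.path.isdir('ita-my-patch/test/images'):
    print('image files found:', len(os.listdir('ita-my-patch/test/images')))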

Additional

No response

alexeidinca-ro added the question label Nov 27, 2023
SkalskiP (Collaborator) commented Nov 27, 2023

Hi @alexeidinca-ro! 👋🏻 Let me convert this question into a discussion and move it to the Q&A section.

roboflow locked and limited conversation to collaborators Nov 27, 2023
SkalskiP converted this issue into discussion #628 Nov 27, 2023

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
