I am trying to run an ONNX object detection model in C++. I loaded my model, checked the output dimensions, and got [-1, 4].
When I query the output tensor shape, the size looks weird, and I don't understand why the dimensions come out this way.
std::vector<int64_t> outputDims = outputTensorInfo.GetShape();
std::cout << "Output Dimensions: ";
for (int64_t d : outputDims) std::cout << d << ' ';
std::cout << std::endl;
A shape can have a single dimension set to -1, which means "this dimension is inferred from the rest of the data". PyTorch uses the same -1 notation in tensor shapes. Usually the batch size is the first dimension, so the interpretation of your output shape (-1, 4) is the following: the network does not know the input batch size in advance, so the first output dimension (the batch size) is also unknown and is reported as -1 (if you feed in a batch of size 32, the output batch size will be 32, too). However, the shape of each single element is known and is given as (4,).
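Here is a small sketch of that -1 notation, shown with NumPy, whose reshape semantics for -1 match PyTorch's (the data values and batch size of 32 are just made up for illustration):

```python
import numpy as np

# The model metadata reports (-1, 4): -1 is a placeholder for the batch
# dimension, which is only known once real data is fed in. In a reshape,
# -1 means "infer this dimension from the total number of elements".

raw_output = np.arange(32 * 4, dtype=np.float32)  # pretend flat output buffer
detections = raw_output.reshape(-1, 4)            # -1 is inferred as 32

print(detections.shape)  # (32, 4)
```

At inference time the actual output tensor has a concrete first dimension; the -1 only appears in the static model metadata, where the batch size cannot be known yet.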