I have some 3D objects of IKEA furniture, and I would like to sample point clouds from them and display the clouds as 2D images. In the PointNet paper (https://arxiv.org/abs/1612.00593) they used a very nice visualization:
But I can't quite figure out how they did it (images in the top row), mostly because I cannot even describe the visualization precisely. It looks like they do some sort of depth coloring, but I wonder about several things:
- How do they deal with perspective? Is there an automated way to orient the point cloud so the object is clearly visible (i.e., no degree of freedom is left unconstrained, I suppose)?
- What is the strategy for the colors?
- What exactly do they color? It looks like they use bigger squares.
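To pin down what I mean by "depth coloring", here is a minimal sketch of my current guess: project orthographically by dropping one axis and color each point by the dropped (depth) coordinate. The synthetic sphere, marker size, and colormap are my own assumptions, not taken from the paper.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, just for saving to file
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# synthetic cloud: 2500 points on a unit sphere as a stand-in for a real part
pts = rng.normal(size=(2500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# orthographic projection: simply drop the depth axis (here z)
x, y, depth = pts[:, 0], pts[:, 1], pts[:, 2]
# normalize depth to [0, 1] so it maps cleanly onto the colormap
d = (depth - depth.min()) / (depth.max() - depth.min())

fig, ax = plt.subplots(figsize=(4, 4))
ax.scatter(x, y, c=d, cmap="cool", s=9, marker="s")  # square markers
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("depth_colored.png", dpi=150)
```

Is this roughly the right idea, or is there more to it (e.g., a perspective camera instead of the orthographic drop)?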
I started with something like this:
import pandas as pd
import matplotlib.pyplot as plt

# sample 2500 points per part and flatten into a single (N, 3) array
point_clouds_per_part = assembly_step.sample_point_cloud(2500)[0]
all_points = point_clouds_per_part.reshape(-1, 3)
df = pd.DataFrame(all_points.numpy(), columns=['x', 'y', 'z'])

# normalize the depth coordinate (z) to [0, 1] for the colormap
z_normalized = (df['z'] - df['z'].min()) / (df['z'].max() - df['z'].min())

plt.figure(figsize=(10, 10))
scatter = plt.scatter(df['x'], df['y'], c=z_normalized, cmap='cool', s=50, marker='s')
plt.colorbar(scatter, label='Z coordinate')
plt.xlabel('X')
plt.ylabel('Y')
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
And well, I don't recognize much:

I tried something like:
import pyntcloud

cloud = pyntcloud.PyntCloud(df)
cloud.plot(background="black", use_as_color="y", cmap="cool", elev=20, azim=67, initial_point_size=10)
This does look a lot better (in case you wonder: it's the side part of the IKEA Applaro bench). But now it is a 3D plot, I picked arbitrary values for the rotation, and I'm not sure whether I color the points by the same logic as the authors did.
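On the arbitrary-rotation problem: one idea I've been toying with is to align the cloud to its principal axes with PCA before projecting, which fixes the orientation up to axis flips. I have no idea whether the PointNet authors do anything like this; `pca_align` below is just my own sketch.

```python
import numpy as np

def pca_align(points):
    """Rotate a (N, 3) cloud so its principal axes line up with x/y/z.

    Removes the arbitrary viewing rotation (up to sign flips of the axes):
    after alignment, axis 0 carries the largest spread, axis 2 the smallest.
    """
    centered = points - points.mean(axis=0)
    # eigenvectors of the 3x3 covariance matrix = principal directions
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    # eigh returns eigenvalues in ascending order; reverse the columns
    # so the first output axis is the direction of largest variance
    return centered @ vecs[:, ::-1]
```

With this, every cloud would at least be shown in a canonical pose regardless of how the mesh was oriented on disk.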
I would appreciate some help on how to reproduce the plots of PointNet!
I added a csv with the point clouds here https://pastebin.com/PNUMaAys for testing. In case it is relevant, this is my code for sampling the point clouds in the first place:
import torch
import trimesh

mesh = trimesh.load(path, force='mesh')
# sample_surface returns (points, face_indices); keep only the points
points = torch.tensor(trimesh.sample.sample_surface(mesh, number_of_points)[0])

