My question is very close to this one.
I installed Caffe on Google Colab, and I am trying to run this open-source CNN model; the pre-trained models and the test script can be downloaded here.
I followed these steps:
I repeated all the instructions in this notebook to install Caffe on my Google Colab.
I then downloaded the pre-trained models, including test.py, from the link mentioned above, using the following line of code:
!wget http://opensurfaces.cs.cornell.edu/static/minc/minc-model.tar.gz
I then run test.py, which includes this line of code:
net=caffe.Classifier('deploy-{}.prototxt'.format(arch),'minc-{}.caffemodel'.format(arch),channel_swap=(2,1,0),mean=numpy.array([104,117,124]))
but I get the following error:
AttributeError: module 'caffe' has no attribute 'Classifier'
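As a first diagnostic, it can help to check which file the imported module actually comes from and whether it exposes the attribute at all (a sketch using only the standard library; it is demonstrated with a stdlib module so it runs anywhere, but with caffe you would pass `'caffe'` and `'Classifier'`):

```python
import importlib

def describe_module(name, attr):
    """Report where a module was imported from and whether it has a given attribute."""
    mod = importlib.import_module(name)
    path = getattr(mod, '__file__', '(built-in)')
    return path, hasattr(mod, attr)

# With caffe you would call describe_module('caffe', 'Classifier');
# shown here with a stdlib module as a stand-in:
path, has_attr = describe_module('json', 'dumps')
print(path, has_attr)
```

If the printed path does not point at the Caffe tree you just built, the wrong `caffe` package is being picked up from sys.path.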
The code and the Caffe folder on Colab:
I see "classifier.py" in the caffe folder on my Google Drive. Is that the same thing? And if yes, how can I point the line of code above at it?
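On the classifier.py question: that file does define the Classifier class in the pycaffe sources, and if the installed caffe package does not re-export it, one workaround sketch is to load it straight from its path (the Colab path below is an assumption; adjust it to where your classifier.py actually lives, and note that classifier.py itself does `import caffe`, so the underlying `_caffe` extension still has to be importable):

```python
import importlib.util

def load_module_from_file(module_name, file_path):
    """Load a Python module directly from a file path."""
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

# Hypothetical usage on Colab (the path is an assumption):
# classifier = load_module_from_file('classifier', '/content/caffe/python/caffe/classifier.py')
# net = classifier.Classifier('deploy-googlenet.prototxt', 'minc-googlenet.caffemodel',
#                             channel_swap=(2, 1, 0), mean=numpy.array([104, 117, 124]))
```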
Thanks in advance.
This is the code I run before test.py to install Caffe on Colab:
!ls
!git clone https://github.com/BVLC/caffe.git
!cd caffe && git reset --hard 9b891540183ddc834a02b2bd81b31afae71b2153 #reset to the newest revision that worked OK on 27.03.2021 (reset must run inside the cloned repo)
# !sudo apt-cache search libhdf5-
# !sudo apt-cache search gflags
# !sudo apt --fix-broken install
!sudo apt-get install libgflags2.2
!sudo apt-get install libgflags-dev
!sudo apt-get install libgoogle-glog-dev
# !sudo apt-get install libhdf5-10 - replaced with 100
!sudo apt-get install libhdf5-100
!sudo apt-get install libhdf5-serial-dev
!sudo apt-get install libhdf5-dev
# !sudo apt-get install libhdf5-cpp-11 - replaced with 100
!sudo apt-get install libhdf5-cpp-100
!sudo apt-get install libprotobuf-dev protobuf-compiler
!find /usr -iname "*hdf5.so"
# got: /usr/lib/x86_64-linux-gnu/hdf5/serial
!find /usr -iname "*hdf5_hl.so"
!ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial.so /usr/lib/x86_64-linux-gnu/libhdf5.so
!ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial_hl.so /usr/lib/x86_64-linux-gnu/libhdf5_hl.so
#!find /usr -iname "*hdf5.h*" # got:
# /usr/include/hdf5/serial/hdf5.h
# /usr/include/opencv2/flann/hdf5.h
# Let's try the first one.
%env CPATH=/usr/include/hdf5/serial/
# note: with %env, surrounding quotes become part of the value, which caused "fatal error: hdf5.h: No such file or directory"
!sudo apt-get install libleveldb-dev
!sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
!sudo apt-get install libsnappy-dev
!echo $CPATH
%cd caffe
!ls
!make clean
!cp Makefile.config.example Makefile.config
!sed -i 's/-gencode arch=compute_20/#-gencode arch=compute_20/' Makefile.config #old cuda versions won't compile
!sed -i 's/\/usr\/local\/include/\/usr\/local\/include \/usr\/include\/hdf5\/serial\//' Makefile.config #one of the 4 things needed to fix hdf5 issues
!sed -i 's/# OPENCV_VERSION := 3/OPENCV_VERSION := 3/' Makefile.config #We actually use opencv 4.1.2, but it's similar enough to opencv 3.
!sed -i 's/code=compute_61/code=compute_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75/' Makefile.config #support for newer GPUs
!make all -j 4 # a bare -j would use all available cores, but RAM-related errors occur
import cv2
print(cv2.__version__)
import caffe
# !./data/mnist/get_mnist.sh #Yann Lecun's hosting sometimes fails with 503 error
# So we use alternative source of mnist dataset
!wget www.di.ens.fr/~lelarge/MNIST.tar.gz
!tar -zxvf MNIST.tar.gz
!cp -rv MNIST/raw/* data/mnist/
!./examples/mnist/create_mnist.sh
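Before running test.py, a sanity check I would add (a sketch; the paths are assumptions based on the default Caffe Makefile layout under /content/caffe, and if I read the Makefile targets correctly, `make all` alone does not build the Python extension — that comes from `make pycaffe`):

```python
import os

# Expected build artifacts (paths are assumptions from the default Makefile layout):
EXPECTED = [
    'build/lib/libcaffe.so',   # produced by `make all`
    'python/caffe/_caffe.so',  # produced by `make pycaffe`
]

def missing_artifacts(root, paths=EXPECTED):
    """Return the expected artifacts that do not exist under root."""
    return [p for p in paths if not os.path.exists(os.path.join(root, p))]

print(missing_artifacts('/content/caffe'))
```

An empty list suggests the build produced both the core library and the pycaffe extension; anything listed is missing.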
This is the test.py that I run. (At first I got the error "Place images to be classified in images/brick/*.jpg, images/carpet/*.jpg, ...", so I created a folder named images and put example.jpg inside, which solved it.)
#!/usr/bin/env python2
from __future__ import division
from __future__ import with_statement
from __future__ import print_function
import caffe
import numpy
import glob
import os.path
import sys
if __name__=='__main__':
    if not os.path.exists('images'):
        print('Place images to be classified in images/brick/*.jpg, images/carpet/*.jpg, ...')
        sys.exit(1)

    categories=[x.strip() for x in open('categories.txt').readlines()]
    arch='googlenet' # googlenet, vgg16 or alexnet
    net=caffe.Classifier('deploy-{}.prototxt'.format(arch),'minc-{}.caffemodel'.format(arch),channel_swap=(2,1,0),mean=numpy.array([104,117,124]))

    result={}
    for i,x in enumerate(categories):
        result[x]=[]
        for j,y in enumerate(sorted(glob.glob('images/{}/*'.format(x)))):
            z=net.predict([caffe.io.load_image(y)*255.0])[0]
            k=z.argmax()
            print(arch,y,categories[k],z[k],k==i)
            result[x].append(k==i)

    for i,x in enumerate(categories):
        print(arch,x,sum(result[x])/len(result[x]))
    print(arch,sum(sum(x) for x in result.values())/sum(len(x) for x in result.values()))
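For reference, the arguments passed to caffe.Classifier in this script amount to the following preprocessing (a numpy-only sketch of my understanding, not the actual pycaffe internals; the mean values are taken to be in the network's BGR channel order):

```python
import numpy as np

MEAN_BGR = np.array([104, 117, 124])  # per-channel mean, as passed to Classifier

def preprocess(img_rgb01):
    """Sketch of the requested preprocessing: scale [0,1] RGB to [0,255],
    swap channels to BGR (channel_swap=(2,1,0)), subtract the mean."""
    img = img_rgb01 * 255.0           # test.py multiplies the loaded image by 255
    img = img[:, :, (2, 1, 0)]        # RGB -> BGR
    return img - MEAN_BGR

print(preprocess(np.ones((2, 2, 3)))[0, 0])  # -> [151. 138. 131.]
```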
When I run it, I receive this error:
AttributeError                            Traceback (most recent call last)
in ()
     18
     19 arch='googlenet' # googlenet, vgg16 or alexnet
---> 20 net=caffe.Classifier('deploy-{}.prototxt'.format(arch),'minc-{}.caffemodel'.format(arch),channel_swap=(2,1,0),mean=numpy.array([104,117,124]))
     21
     22 result={}
AttributeError: module 'caffe' has no attribute 'Classifier'