Caffe and TensorFlow on a Dell 7559 with NVIDIA Optimus technology

802 Views Asked by ItsKalvik

I bought a Dell 7559 laptop for deep learning and installed Ubuntu 16.04 on it, but I am having trouble getting Caffe and TensorFlow to work. The laptop uses NVIDIA Optimus technology to switch between the discrete GPU and the integrated GPU to save battery. I checked the BIOS to see if I could force it to use only the discrete GPU, but there is no option for that. Using Bumblebee or nvidia-prime didn't work either. I am now running Ubuntu 16.04 with the MATE desktop environment, which prevents the black screen at boot but didn't help with the CUDA issue. I was able to install the drivers and CUDA, but when I build Caffe and TensorFlow they fail, saying that no GPU was detected. I also wasn't able to install OpenGL. I tried several versions of the NVIDIA drivers, but that didn't help. Any help would be great. Thanks.

There is 1 answer below.
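Before building anything, a driver-level sanity check can confirm whether the GPU is visible at all. This is a hedged sketch, not from the original post: the `check_gpu` helper name is my own, and it falls back to a message on machines where the NVIDIA tools are absent.

```shell
# Hypothetical diagnostic sketch: confirm the NVIDIA driver sees the GPU
# before rebuilding Caffe/TensorFlow. Falls back to a message when
# nvidia-smi is not installed or not on PATH.
check_gpu() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L                      # lists detected GPUs by index and name
  else
    echo "nvidia-smi not found: NVIDIA driver not installed or not on PATH"
  fi
}
check_gpu
```

If `nvidia-smi` cannot list the GPU here, no CUDA build will detect it either, so this isolates driver problems from build problems.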
I think Bumblebee can enable you to run Caffe/TensorFlow in GPU mode. More generally, it also allows you to run other CUDA programs on a laptop with Optimus technology.
Once you have installed Bumblebee correctly (tutorial: Bumblebee Wiki for Ubuntu), you can run Caffe on the GPU by prepending optirun to the caffe command. The same approach works for the NVIDIA DIGITS server.
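A minimal sketch of that invocation, assuming caffe and the DIGITS dev server are on the PATH. The `run_on_gpu` wrapper name is my own addition; it falls back to direct execution when optirun is absent, and the solver file name is a placeholder.

```shell
# Run a command on the discrete GPU via Bumblebee when optirun is available,
# otherwise run it directly (hypothetical run_on_gpu wrapper).
run_on_gpu() {
  if command -v optirun >/dev/null 2>&1; then
    optirun "$@"
  else
    "$@"
  fi
}

# Typical invocations (solver/path names are placeholders):
#   run_on_gpu caffe train --solver=solver.prototxt
#   run_on_gpu ./digits-devserver
run_on_gpu echo "optirun wrapper ok"
```

The fallback branch also makes the same script usable on machines without Optimus.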
In addition, Bumblebee also works on my dual-graphics desktop PC (Intel HD 4600 + GTX 750 Ti). The display on my PC is driven by the Intel HD 4600 through the HDMI port on the motherboard; the NVIDIA GTX 750 Ti is used only for CUDA programs.
In fact, on my desktop PC, "nvidia-prime" (actually invoked through the command-line program prime-select) is used to choose the GPU that drives the desktop. I have the integrated GPU connected to the display through the HDMI port and the NVIDIA GPU through a DisplayPort; currently the DisplayPort is inactive, and the display signal comes from the HDMI port.

As far as I understand, PRIME does this by modifying /etc/X11/Xorg.conf to make either the Intel integrated GPU or the NVIDIA GPU the display adapter available to X. I think the PRIME setting only makes sense when both GPUs are connected to some display, which means it does not require an Optimus link between the two GPUs as in a laptop (or, for a laptop with a hardware mux such as the Dell Precision M4600, when Optimus is disabled in the BIOS). More information about the display mux and Optimus can be found here: Using the NVIDIA Driver with Optimus Laptops
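For reference, a minimal sketch of driving nvidia-prime from the command line. prime-select is part of Ubuntu's nvidia-prime package; switching takes effect after logging out and back in, and the `current_gpu` helper name is my own.

```shell
# Query which GPU profile is currently selected, falling back gracefully
# when nvidia-prime is not installed. Switching is done with:
#   sudo prime-select nvidia    (or: sudo prime-select intel)
# followed by logging out and back in.
current_gpu() {
  if command -v prime-select >/dev/null 2>&1; then
    prime-select query            # prints the active profile, e.g. "nvidia"
  else
    echo "prime-select not installed"
  fi
}
current_gpu
```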
Hope this helps!