How to make a Python code execute on GPU and not CPU

Data Science Asked on February 7, 2021

I am doing some image pre-processing using Python 3. As far as I know, the code is executed on the CPU, which makes the process a bit slow, so it would be a good idea to run it on the GPU for faster processing, because a GPU can do such operations much faster than a CPU. Is there a way to make it run on the GPU without using a library?

Please note that what is needed is to make it work like a GPU version of Keras or TensorFlow.

One Answer

You could apply the Wolfram Language to your project. There is a free Wolfram Engine for developers, and with the Wolfram Client Library for Python you can use these functions from Python.

Either the CUDALink Overview and CUDALink Guide or the OpenCLLink Overview and OpenCLLink Guide will enable you to run code on your GPU. However, the CUDALink guide covers the built-in image processing functions, so that is probably the best place to start for your project.

from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wl, wlexpr

Start the Wolfram session.

wolfSession = WolframLanguageSession()
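
If the client library cannot locate the kernel on its own, the session can also be started with an explicit path to the WolframKernel executable. The path below is only a placeholder for your own Wolfram Engine installation.

wolfSession = WolframLanguageSession('<path to WolframKernel>')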

The CUDA package is included with the Wolfram Engine but has to be explicitly loaded.

wolfSession.evaluate(wl.Needs('CUDALink`'))

Then you can check if the engine has located one or more compatible GPUs with CUDAQ.

print(wolfSession.evaluate(wl.CUDALink.CUDAQ()))
True
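
CUDAQ returns False when no supported GPU is found, so a minimal guard along these lines (my addition, not part of the original answer) keeps the rest of the script from running against a missing device:

if not wolfSession.evaluate(wl.CUDALink.CUDAQ()):
    raise RuntimeError('CUDALink found no supported GPU; fall back to the CPU image functions.')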

There is a one-time download of the CUDA binaries when CUDAQ is first evaluated. To watch the progress of that download, you can make the first call directly in the kernel instead of through Python: run wolframscript in a terminal and evaluate the Needs and CUDAQ functions at the input prompt.

(Screenshot: wolframscript terminal session evaluating Needs and CUDAQ)

Detailed information can be returned with CUDAInformation.

print(wolfSession.evaluate(wl.CUDALink.CUDAInformation(1,'Core Count')))
160
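
If you want the complete property listing for a device rather than a single field, CUDAInformation can also be called with just the device index (an optional extra check, not shown in the original answer):

print(wolfSession.evaluate(wl.CUDALink.CUDAInformation(1)))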

Grab an image for some CUDA image function examples.

imgWolfram=wolfSession.evaluate(
    wl.Rasterize(
        wl.Import('https://i2.wp.com/oemscorp.com/wp-content/uploads/2015/07/wolfram-language-logo.png'),
        'Image'
    )
)

The image can be Exported in one of the supported Raster Image Formats or Vector Graphics Formats.

wolfSession.evaluate(
    wl.Export(
        '<path with image filename>',
        imgWolfram
    )
)

(Image: the exported Wolfram Language logo)

A few example CUDA image functions include the following; a sketch for applying them to a whole folder of images comes after the list.

  • CUDAImageConvolve

    wolfSession.evaluate(
        wl.Export(
            '<path with image filename>', 
            wl.CUDALink.CUDAImageConvolve(imgWolfram,  [[-1,0,1],[-2,0,2],[-1,0,1]])
        )
    )
    

(Image: result of CUDAImageConvolve)

  • CUDAErosion

    wolfSession.evaluate(
        wl.Export(
            '<path with image filename>', 
            wl.CUDALink.CUDAErosion(imgWolfram,6)
        )
    )
    

(Image: result of CUDAErosion)

  • CUDAOpening

    wolfSession.evaluate(
        wl.Export(
            '<path with image filename>', 
            wl.CUDALink.CUDAOpening(imgWolfram,6)
        )
    )
    

(Image: result of CUDAOpening)
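
Since the question is about pre-processing many images, one way to apply any of the functions above to a whole folder is to build the Export expression per file from Python. This is a sketch with placeholder directories, and CUDAErosion is used only as an example; swap in whichever function you need.

import os

input_dir = '<path to input images>'
output_dir = '<path to processed images>'

for name in os.listdir(input_dir):
    wolfSession.evaluate(
        wl.Export(
            os.path.join(output_dir, name),
            wl.CUDALink.CUDAErosion(
                wl.Import(os.path.join(input_dir, name)),
                6
            )
        )
    )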

Terminate the Wolfram session.

wolfSession.terminate()
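
As an aside (not part of the original answer), WolframLanguageSession can also be used as a context manager, which terminates the kernel automatically even if an error occurs part-way through:

with WolframLanguageSession() as session:
    session.evaluate(wl.Needs('CUDALink`'))
    print(session.evaluate(wl.CUDALink.CUDAQ()))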

Since you are processing many image files, you would get slightly better performance by processing them directly in wolframscript with the Wolfram Engine rather than by calling the engine from Python.

Hope this helps.

Answered by Edmund on February 7, 2021
