About

rCUDA is middleware for remote GPU virtualisation. It allows CUDA applications to run on nodes without a GPU by using a GPU located in another node. The process is transparent to applications, which do not need to be modified to run with the rCUDA middleware.
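As an illustration of this transparency, consider the ordinary CUDA program sketched below (a plain vector addition, written here only as an example; it is not taken from rCUDA). Nothing in it refers to rCUDA: the program issues standard CUDA runtime calls, and under rCUDA those same calls are serviced by a GPU in a remote node instead of a local one.

    // An ordinary CUDA program (vector addition), shown only as an example.
    // Nothing here is rCUDA-specific: the same source, unmodified, issues
    // these standard runtime calls, which rCUDA services on a remote GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vec_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host buffers filled with known values.
        float *h_a = new float[n], *h_b = new float[n], *h_c = new float[n];
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Standard CUDA allocations, copies and kernel launch.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);
        vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", h_c[0]); // expected: 3.0

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        delete[] h_a; delete[] h_b; delete[] h_c;
        return 0;
    }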

Open source/proprietary

The rCUDA middleware is a proprietary solution whose source code is not available. The binaries of rCUDA are available to any institution or company that requests them.

Architecture

rCUDA features a distributed client–server architecture, as shown in the figure. The client part of the middleware is installed on the node executing the application that requests GPU services, whereas the server side runs on the node owning the actual GPU. The architecture works as follows: the client middleware receives a CUDA request from the accelerated application, processes it, and forwards it to the server middleware. On the server side, the middleware receives the request, interprets it, and forwards it to the GPU, which executes the request and returns the results to the server middleware. Finally, the server sends the results back to the client middleware, which forwards them to the accelerated application.
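As a purely illustrative sketch of this forwarding idea (not rCUDA's actual protocol, API, or source, which is not public), a client-side stub might serialise an intercepted call and send it over a socket to the node that owns the GPU. Every name below (connect_to_gpu_server, forward_malloc, the request header layout, the placeholder address and port) is invented for this example.

    // Hypothetical illustration of client-side call forwarding. This is NOT
    // rCUDA's protocol or code; all names and the wire format are invented.
    #include <cstdint>
    #include <cstdio>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // A made-up request header: which CUDA call is being forwarded and how
    // many bytes of serialised arguments follow it.
    enum class ApiCall : uint32_t { Malloc = 1, MemcpyH2D = 2, LaunchKernel = 3 };

    struct RequestHeader {
        ApiCall  call;           // intercepted CUDA runtime call
        uint64_t payload_bytes;  // size of the serialised arguments
    };

    // Connect to the (hypothetical) server process on the GPU-owning node.
    int connect_to_gpu_server(const char* ip, uint16_t port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    // Client-side stub: instead of touching a local GPU, serialise the call,
    // send it to the server, and wait for the result of the remote execution.
    bool forward_malloc(int fd, uint64_t bytes, uint64_t* remote_handle) {
        RequestHeader hdr{ApiCall::Malloc, sizeof(bytes)};
        if (write(fd, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr)) return false;
        if (write(fd, &bytes, sizeof(bytes)) != (ssize_t)sizeof(bytes)) return false;
        // The server would run the real cudaMalloc and reply with an opaque
        // handle that stands in for the device pointer on the client side.
        return read(fd, remote_handle, sizeof(*remote_handle)) ==
               (ssize_t)sizeof(*remote_handle);
    }

    int main() {
        // 192.0.2.1 is a documentation-only address; the port is arbitrary.
        int fd = connect_to_gpu_server("192.0.2.1", 9999);
        if (fd < 0) { printf("no GPU server reachable\n"); return 1; }
        uint64_t handle = 0;
        if (forward_malloc(fd, 1 << 20, &handle))
            printf("remote allocation handle: %llu\n", (unsigned long long)handle);
        close(fd);
        return 0;
    }

The sketch only conveys the request/response shape of the exchange described above; it says nothing about how the real middleware encodes calls or which transport it uses.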

Watch the demo video