\begin{abstract}
Recent computing platforms combine CPUs with different types of
accelerators, such as Graphics Processing Units ({\it GPUs}), to cope
with the increasing computational demand of complex real-time
applications. NVIDIA GPUs are composed of hundreds of computing
elements called {\it CUDA cores} that provide fast computation for
parallel applications.
However, GPUs are not designed to support real-time execution, as
their main goal is to maximize the throughput of their resources.
Supporting real-time execution on NVIDIA GPUs involves not only
achieving predictable computation times but also optimizing the usage
of the CUDA cores.
In this work, we present the design and implementation of {\it
PRUDA} (Predictable Real-time CUDA), a programming platform that
manages GPU resources and thereby decides when and where a
real-time task is executed. PRUDA is written in {\sf C} and provides
different mechanisms to manage task priorities and allocation on
the GPU. It provides tools that help a designer properly implement
real-time schedulers on top of CUDA.
\end{abstract}
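
% The following minimal sketch is illustrative only: it is not PRUDA's
% actual interface, and all identifiers (rt\_task, s\_hi, s\_lo) are
% hypothetical. It shows the standard CUDA stream-priority mechanism on
% which priority management of real-time GPU tasks can be built.
\begin{verbatim}
/* Sketch: mapping two tasks to CUDA streams with different
 * priorities using the CUDA runtime API (compile with nvcc). */
#include <cuda_runtime.h>

__global__ void rt_task(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] = buf[i] * 2.0f;   /* placeholder workload */
}

int main(void)
{
    int lo, hi;   /* hi is numerically lower but means higher priority */
    cudaDeviceGetStreamPriorityRange(&lo, &hi);

    cudaStream_t s_hi, s_lo;
    cudaStreamCreateWithPriority(&s_hi, cudaStreamNonBlocking, hi);
    cudaStreamCreateWithPriority(&s_lo, cudaStreamNonBlocking, lo);

    int n = 1 << 20;
    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    /* Pending blocks of the higher-priority stream are scheduled
     * ahead of pending blocks of the lower-priority stream. */
    rt_task<<<(n + 255) / 256, 256, 0, s_lo>>>(d, n);
    rt_task<<<(n + 255) / 256, 256, 0, s_hi>>>(d, n);

    cudaStreamSynchronize(s_hi);
    cudaStreamSynchronize(s_lo);
    cudaFree(d);
    cudaStreamDestroy(s_hi);
    cudaStreamDestroy(s_lo);
    return 0;
}
\end{verbatim}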