% presentation/jrwtc2019/texs/abstract.tex
\begin{abstract}
Recent computing platforms combine CPUs with different types of
accelerators, such as Graphics Processing Units ({\it GPUs}), to cope
with the increasing computational power required by complex real-time
applications. NVIDIA GPUs are composed of hundreds of computing
elements, called {\it CUDA cores}, that provide fast computations for
parallel applications. However, GPUs are not designed to support
real-time execution, as their main goal is to achieve maximum
throughput for their resources. Supporting real-time execution on
NVIDIA GPUs therefore involves not only achieving timely, predictable
computations but also optimizing the usage of the CUDA cores.
In this work, we present the design and implementation of
{\it PRUDA} (Predictable Real-time CUDA), a programming platform that
manages GPU resources and thereby decides when and where a real-time
task is executed. PRUDA is written in {\sf C} and provides different
mechanisms to manage task priorities and allocation on the GPU. It
offers tools that help a designer properly implement real-time
schedulers on top of CUDA.
\end{abstract}