Commit 7586d6c1 ("pruda description"), authored 5 years ago by zahoussem; parent 18e2b2eb.
1 changed file: README.md (24 additions, 20 deletions)
# PRUDA : Real-time programming interface on top of CUDA
PRUDA is a set of programming tools and mechanisms to control scheduling within the GPU. It also provides implementations of the following real-time scheduling policies:

- Fixed Priority (FP): preemptive and non-preemptive
- Earliest Deadline First (EDF): preemptive and non-preemptive
- EDF-Gang scheduling: the GPU is considered as a multiprocessor architecture

Details about each scheduling policy are given in a dedicated section. First, we describe the PRUDA functionalities and structures. We also show how a scheduling policy can be easily implemented with PRUDA.
Additionally, PRUDA aims not to modify the CUDA programming style. Therefore, the PRUDA user can reuse already developed CUDA kernels. To keep the user free of signature constraints, PRUDA must be compiled together with the user kernel source code. Nevertheless, PRUDA also provides dynamic configuration possibilities with a fixed kernel signature: (int, int *, int *, int *, int *). PRUDA can handle memory copy operations implicitly, and also supports CUDA unified memory.
# Prerequisites
PRUDA is a platform built on top of CUDA for real-time systems. Therefore, you will need:
- Programming experience with CUDA
- Basic knowledge about real-time systems
- A C++ compiler
- The NVIDIA NVCC compiler
# PRUDA and GPU handling:
## The GPU in the eye of PRUDA:
A GPU is composed of one or several streaming multiprocessors (SMs) and one or several copy engines (CEs). Streaming multiprocessors are
...
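On the host side, the SM and copy-engine counts of a device can be queried through the standard CUDA runtime call `cudaGetDeviceProperties`. A sketch (requires a CUDA-capable machine; this is generic CUDA usage, not a PRUDA API):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // properties of device 0
    // multiProcessorCount: number of SMs; asyncEngineCount: copy engines
    printf("SMs: %d, copy engines: %d\n",
           prop.multiProcessorCount, prop.asyncEngineCount);
    return 0;
}
```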
## CUDA operations:
PRUDA allows a kernel to execute within a single SM
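One common way to restrict a kernel to a single SM, sketched below, is to read the hardware SM identifier from the `%smid` special register via inline PTX and have blocks resident on any other SM exit immediately. This is a generic CUDA technique and an assumption about how such pinning can be done, not necessarily PRUDA's internal mechanism:

```cuda
// Read the hardware SM id of the current block (inline PTX).
__device__ unsigned smid() {
    unsigned id;
    asm("mov.u32 %0, %%smid;" : "=r"(id));
    return id;
}

// Hypothetical wrapper: only blocks resident on `target_sm` do work;
// blocks scheduled on other SMs return immediately.
__global__ void pinned_kernel(int target_sm, int n, int *in, int *out) {
    if (smid() != (unsigned)target_sm) return;  // not our SM: bail out
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        out[i] = in[i] + 1;
}
```

Enough blocks must be launched to occupy every SM, since the hardware, not the programmer, decides where each block lands.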
# PRUDA scheduling tools and policies
## Single core strategy for non preemptive schedulers
## Single core strategy for preemptive schedulers
## Multicore strategy for GANG preemptive schedulers