
Contention-Aware GPU Partitioning and Task-to-Partition Allocation for Real-Time Workloads / Zahaf, H.-E.; Sanudo Olmedo, I. S.; Singh, J.; Capodieci, N.; Faucou, S. - (2021), pp. 226-236. (Paper presented at the 29th International Conference on Real-Time Networks and Systems, RTNS 2021, held in fra in 2021) [10.1145/3453417.3453439].

Contention-Aware GPU Partitioning and Task-to-Partition Allocation for Real-Time Workloads

Singh, J.; Capodieci, N.;
2021

Abstract

To satisfy their timing constraints, modern real-time applications require massively parallel accelerators such as General-Purpose Graphics Processing Units (GPGPUs). Generation after generation, the number of computing clusters available in new GPU architectures steadily increases; investigating suitable scheduling approaches is therefore mandatory. Such approaches map different, concurrent compute kernels onto the GPU computing clusters, grouping those clusters into schedulable partitions. In this paper we propose novel techniques for defining GPU partitions; these allow us to define suitable task-to-partition allocation mechanisms, where tasks are GPU compute kernels with different timing requirements. The mechanisms account for the interference that GPU kernels experience when running in overlapping time windows, and we also present an effective, simple way to quantify the magnitude of this interference. We demonstrate the efficiency of the proposed approaches against classical techniques that treat the GPU as a single, non-partitionable resource.
2021
29th International Conference on Real-Time Networks and Systems, RTNS 2021
fra
2021
226
236
Zahaf, H. -E.; Sanudo Olmedo, I. S.; Singh, J.; Capodieci, N.; Faucou, S.
Files for this product:
No files are associated with this product.

Creative Commons License
The metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Supporto Iris

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1251455
Citations
  • Scopus: 4
  • Web of Science: 1