
DCT-Former: Efficient Self-Attention with Discrete Cosine Transform / Scribano, C.; Franchini, G.; Prato, M.; Bertogna, M. - In: JOURNAL OF SCIENTIFIC COMPUTING. - ISSN 1573-7691. - 94:(2023), pp. 1-25. [10.1007/s10915-023-02125-5]

DCT-Former: Efficient Self-Attention with Discrete Cosine Transform

C. Scribano; G. Franchini; M. Prato; M. Bertogna
2023

Abstract

Since their introduction, Transformer architectures have emerged as the dominant architectures for both natural language processing and, more recently, computer vision applications. An intrinsic limitation of this family of “fully-attentive” architectures arises from the computation of the dot-product attention, which grows both in memory consumption and in number of operations as O(n^2), where n is the input sequence length, thus limiting the applications that require modeling very long sequences. Several approaches have been proposed in the literature to mitigate this issue, with varying degrees of success. Our idea takes inspiration from the world of lossy data compression (such as the JPEG algorithm) to derive an approximation of the attention module by leveraging the properties of the Discrete Cosine Transform. An extensive set of experiments shows that our method requires less memory for the same performance, while also drastically reducing inference time. Moreover, we believe that the results of our research may serve as a starting point for a broader family of deep neural models with a reduced memory footprint. The implementation will be made publicly available at https://github.com/cscribano/DCT-Former-Public.
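As a rough illustration of the idea described in the abstract, the sketch below computes scaled dot-product attention over DCT-compressed keys and values. It assumes that the Discrete Cosine Transform is applied along the sequence axis and that only the first m low-frequency coefficients are retained, so the attention cost drops from O(n^2) to O(n*m); the function names, the choice of compressing only K and V, and the truncation strategy are illustrative assumptions, not the authors' exact DCT-Former formulation.

# Illustrative sketch only, not the authors' exact method. Assumption: an
# orthonormal DCT-II is applied along the sequence axis of K and V and
# truncated to the first m coefficients, so attention costs O(n*m), not O(n^2).
import numpy as np
from scipy.fft import dct

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dct_compressed_attention(q, k, v, m):
    """q, k, v: (n, d) arrays; m: number of retained DCT coefficients, m << n."""
    n, d = q.shape
    k_c = dct(k, type=2, norm="ortho", axis=0)[:m]   # compressed keys, (m, d)
    v_c = dct(v, type=2, norm="ortho", axis=0)[:m]   # compressed values, (m, d)
    scores = q @ k_c.T / np.sqrt(d)                  # (n, m) instead of (n, n)
    return softmax(scores, axis=-1) @ v_c            # (n, d)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, m = 1024, 64, 128
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    print(dct_compressed_attention(q, k, v, m).shape)  # (1024, 64)

The details of how DCT-Former selects and applies the transform may differ; the published code at the GitHub link above is the authoritative reference.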
Files in this record:
File: 2203.01178.pdf (Open Access from 16/03/2024)
Type: Author's version, revised and accepted for publication
Size: 1.17 MB
Format: Adobe PDF

Creative Commons License
The metadata in IRIS UNIMORE is released under a Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright violation, please contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1295556
Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science: 9