A Study on the Performance of Base-m Polynomial Selection Algorithm Using GPU


Durmus O., ÇABUK U. C., DALKILIÇ F.

International Conference on Artificial Intelligence and Applied Mathematics in Engineering (ICAIAME), Antalya, Türkiye, 20-22 April 2019, vol. 43, pp. 509-517

  • Publication Type: Conference Paper / Full-Text Paper
  • Volume: 43
  • DOI Number: 10.1007/978-3-030-36178-5_40
  • City of Publication: Antalya
  • Country of Publication: Türkiye
  • Page Numbers: pp. 509-517
  • Keywords: Polynomial selection, General number field sieve, Graphical processing unit (GPU), Heterogeneous computing, RSA
  • Dokuz Eylül University Affiliated: Yes

Abstract

Factorization of large integers has been considered a challenging problem in computer science and engineering since the early days of computer technology. Despite comprehensive efforts, no deterministic polynomial-time algorithm has been reported; in fact, the complexity class of the problem has not yet been settled. A fast and robust polynomial-time algorithm for this problem would increase the processing capabilities of current systems. At the same time, there are hesitations within the community due to the potential security threats such an algorithm could create. The asymptotically fastest algorithm known so far for factoring large integers is the general number field sieve (GNFS). Its performance depends on the selection of "good" polynomials, which requires a dedicated selection procedure. Another significant performance factor is, of course, the power of the processing hardware and its peripherals. This article examines and discusses the impact of heterogeneous computing, using a graphics processing unit (GPU) instead of a central processing unit (CPU), on the performance of polynomial selection and, hence, of factoring large integers. Further, the article presents implementation details and a comparative performance evaluation of the Base-m polynomial selection method for choosing "good" polynomials. Accordingly, the GPU is found to be more effective for larger numbers with more roots, while the CPU appears more effective for smaller numbers with fewer roots, possibly due to the overhead of GPU processing.
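
For context on the method named in the abstract, the sketch below illustrates the textbook base-m construction: for a target N and degree d, one picks m near N^(1/d) and reads the base-m digits of N off as the polynomial coefficients, so that f(m) ≡ 0 (mod N), as GNFS requires. This is a minimal plain-Python illustration under that standard convention, not the authors' GPU implementation; the function names (integer_nth_root, base_m_polynomial) and the degree-4 toy input are illustrative assumptions only.

    # Minimal sketch of the standard base-m polynomial selection step,
    # assuming the common convention m = floor(N^(1/d)).
    # Not the authors' GPU code; names and toy input are illustrative.

    def integer_nth_root(n, k):
        """Return floor(n ** (1/k)) via integer-only Newton iteration."""
        x = 1 << ((n.bit_length() + k - 1) // k)  # overestimate of the root
        while True:
            y = ((k - 1) * x + n // x ** (k - 1)) // k
            if y >= x:
                return x
            x = y

    def base_m_polynomial(n, d):
        """Expand n in base m = floor(n^(1/d)).

        Returns (m, coeffs), where coeffs[i] is the coefficient of x^i,
        so that f(m) = n, i.e. f(m) ≡ 0 (mod n).
        """
        m = integer_nth_root(n, d)
        coeffs, rem = [], n
        for _ in range(d + 1):
            coeffs.append(rem % m)
            rem //= m
        # For realistic sizes the expansion fits in d + 1 digits.
        assert rem == 0 and sum(c * m**i for i, c in enumerate(coeffs)) == n
        return m, coeffs

    if __name__ == "__main__":
        # Toy example: a degree-4 polynomial for n = 2**64 + 1.
        m, coeffs = base_m_polynomial(2**64 + 1, 4)
        print("m =", m)                              # 65536
        print("coefficients (x^0 .. x^4):", coeffs)  # [1, 0, 0, 0, 1]

The digit-extraction loop is embarrassingly parallel across many candidate values of m, which is what makes a GPU offload of this selection step attractive; the paper's comparison of CPU and GPU behaviour concerns exactly that trade-off between parallel throughput and per-kernel overhead.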