International Conference on Artificial Intelligence and Applied Mathematics in Engineering (ICAIAME), Antalya, Türkiye, 20-22 April 2019, vol. 43, pp. 509-517
Factorization of large integers has been regarded as a challenging problem in computer science and engineering since the early days of computing. Despite comprehensive efforts, no deterministic polynomial-time algorithm has been reported; indeed, the complexity class of the problem has not yet been settled. A fast and robust polynomial-time algorithm for this problem would extend the processing capabilities of current systems. At the same time, there is hesitation within the community because of the security threats such an algorithm could create. The asymptotically fastest algorithm known to date for factoring large integers is the general number field sieve (GNFS). Its performance depends on the selection of "good" polynomials, which requires a dedicated selection procedure. Another significant performance factor is the power of the processing hardware and its peripherals. This article examines and discusses the impact of heterogeneous computing, using a graphics processing unit (GPU) instead of a central processing unit (CPU), on the performance of polynomial selection and, consequently, of factoring large integers. Further, the article presents implementation details and a comparative performance evaluation of the Base-m method for selecting "good" polynomials. The GPU is found to be more effective for larger numbers with more roots, while the CPU appears more effective for smaller numbers with fewer roots, possibly due to the overhead of the GPU processing procedures.
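As a rough illustration of the classical Base-m construction referred to above (not the paper's GPU implementation), the following Python sketch picks m near the d-th root of N and reads the base-m digits of N off as polynomial coefficients, so that f(m) = N and hence f(m) ≡ 0 (mod N). The degree d and the toy value of N are assumptions chosen only for the example.

```python
# Minimal sketch of classical Base-m polynomial selection (illustrative only;
# the degree d and the toy N below are assumptions, not the paper's parameters).

def integer_root(n, k):
    """Largest integer x with x**k <= n, found by binary search."""
    lo, hi = 1, 1 << (n.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def base_m_polynomial(N, d):
    """Pick m ~ N**(1/d) and expand N in base m; the digits become the
    coefficients of f(x), so f(m) = N and therefore f(m) ≡ 0 (mod N)."""
    m = integer_root(N, d)
    coeffs, r = [], N
    while r > 0:
        coeffs.append(r % m)   # coefficient of x**len(coeffs)
        r //= m
    return coeffs, m           # coeffs[i] is the coefficient of x**i

if __name__ == "__main__":
    N = 2 ** 128 + 1           # toy number; real GNFS targets are far larger
    d = 5                      # a common degree choice for ~100-digit numbers
    coeffs, m = base_m_polynomial(N, d)
    assert sum(c * m ** i for i, c in enumerate(coeffs)) == N
    print("m =", m)
    print("coefficients (low degree to high):", coeffs)
```

The second GNFS polynomial is then simply g(x) = x - m, and both share the root m modulo N; the paper's comparison concerns how efficiently candidate polynomials of this kind can be generated and evaluated on a CPU versus a GPU.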