A Comparative Study of Digital Hardware Acceleration Techniques for AI Workloads
Authors: Karthik Wali
DOI: https://doi.org/10.37082/IJIRMPS.v8.i4.232594
Short DOI: https://doi.org/g9q39z
Country: USA
Abstract: Rapid progress in artificial intelligence (AI) and machine learning (ML) has placed heavy demands on modern hardware architecture. Traditional CPUs cannot efficiently meet the performance requirements of AI workloads, so hardware accelerators such as GPUs, FPGAs, and ASICs have come to the forefront. Each of these technologies offers distinct advantages in power efficiency, processing ability, and flexibility. This paper discusses and compares different techniques for accelerating AI workloads in hardware. We explain how the approaches differ architecturally, how they perform, and how each method suits different AI model architectures. The most important figures of merit evaluated include computational throughput, power consumption, latency, and scalability. The paper also covers emerging trends, such as neuromorphic computing and quantum-based acceleration, and their role in shaping the future of AI processing. The comparison can help AI researchers and engineers select the appropriate acceleration technique for their application needs.
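The figures of merit named in the abstract can be combined into simple first-pass comparisons. The sketch below is illustrative only and is not taken from the paper: the device names, throughput, and power numbers are hypothetical placeholders, and energy efficiency is computed as throughput per watt, a common screening metric for accelerators.

```python
# Illustrative sketch: ranking accelerators by one figure of merit from the
# abstract (throughput per unit power). All numbers are hypothetical, not
# measured results from the paper.

def perf_per_watt(throughput_tops: float, power_w: float) -> float:
    """Energy efficiency: throughput (TOPS) divided by power draw (W)."""
    return throughput_tops / power_w

# Hypothetical accelerator profiles: (throughput in TOPS, power in watts).
devices = {
    "CPU":  (2.0, 150.0),
    "GPU":  (120.0, 300.0),
    "FPGA": (30.0, 50.0),
    "ASIC": (90.0, 75.0),
}

# Rank devices from most to least energy-efficient.
ranking = sorted(devices, key=lambda d: perf_per_watt(*devices[d]), reverse=True)
for name in ranking:
    tops, watts = devices[name]
    print(f"{name}: {perf_per_watt(tops, watts):.2f} TOPS/W")
```

With these placeholder numbers the ranking follows the pattern the paper examines: fixed-function ASICs lead on efficiency, FPGAs trade some efficiency for reconfigurability, GPUs trade it for programmability, and CPUs trail on raw AI throughput.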
Keywords:
Paper Id: 232594
Published On: 2020-07-10
Published In: Volume 8, Issue 4, July-August 2020