High-performance computing technology is evolving rapidly. One important trend is the introduction of architectures with reduced-precision arithmetic into general-purpose computing. For example, NVIDIA's Tesla V100 GPU, with its Tensor Cores, delivers 112 TFLOPS of 16-bit computing capability, roughly 8 times its single-precision throughput, providing new opportunities and challenges for further performance optimization of many applications. At the same time, experience with machine learning and with preconditioning in iterative algorithms shows that many algorithms are relatively robust to reduced precision. How to design efficient and effective approximate-computing algorithms that best exploit the computing power of mixed-precision architectures has therefore become an important research topic in high-performance computing.
This project focuses on approximate computation for widely used linear-algebra algorithms with high accuracy requirements (Gaussian elimination, iterative methods, etc.) on two computing platforms (GPU and FPGA). The work includes evaluating the precision of different algorithms in various applications, exploiting the current IEEE 754 floating-point standard, and exploring potential alternative floating-point formats.
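As one concrete illustration of this idea, a classic mixed-precision scheme is iterative refinement: solve the system cheaply in low precision, then correct the result using residuals computed in high precision. The sketch below is a minimal NumPy version (float32 as the "fast" precision standing in for FP16 hardware, float64 as the working precision); the function name and tolerances are illustrative, not part of the project description, and a real implementation would reuse the low-precision LU factorization rather than calling `solve` repeatedly.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Mixed-precision iterative refinement (sketch).

    Solves A x = b by computing an initial solution in float32,
    then refining it with float64 residuals.
    """
    A32 = A.astype(np.float32)
    # Initial solve entirely in low precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual in high precision
        # Correction solve in low precision (a real code would reuse the LU factors).
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x

# Example: a well-conditioned diagonally dominant system.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true
x = mixed_precision_solve(A, b)
```

For well-conditioned matrices, a few refinement steps recover close to full float64 accuracy even though all factorization work happens in the lower precision — the motivation for running such solvers on reduced-precision hardware like Tensor Cores.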
1. Familiarity with C/C++ programming. Experience in CUDA programming is a bonus.
2. Interest in programming and performance optimization on GPU and FPGA
3. Strong background in computer architecture and numerical computation
Keywords: scientific computing, uncertainty quantification
Mentors: Profs. Wei Xue (Tsinghua University) and Haohuan Fu (Tsinghua University)