Optimizing Intensive Data Computations Using GPU
Modern high-throughput, genome-wide biological data offer an unprecedented opportunity to explore the causes of disease. However, the computational challenges in genomic data analysis are growing. The data can have a very large number of features but only a small number of samples, which can cause important information to be missed; the noise and redundancy in the data hinder the mining of useful information; and the data are often subject to missing values that may be important and expressive. Fast optimization methods are therefore necessary, and various machine learning techniques have been proposed to overcome these challenges. We propose a kernel sparse-coding-based classification approach to classify biological sample data. Matrix and tensor factorization techniques are proposed to reduce the dimensionality of the biological data. Many linear models, such as support vector machines, have been surveyed. Moreover, a nearest-border technique is proposed for the large-scale multi-class classification problem. Strategies for handling missing values are explored and new methods are proposed. We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. New optimization methods are also proposed for sparse representation.
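As a minimal illustration of the kind of embarrassingly parallel workload targeted by the Monte Carlo case study, the sketch below estimates pi by independent random sampling. The function name and the use of vectorized NumPy are our own stand-ins, not part of the proposed system; on a GPU, the same pattern would map each sample (or batch of samples) onto a CUDA thread.

```python
import numpy as np

def monte_carlo_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling points uniformly in the unit square.

    Every sample is independent, so the hit test below is
    data-parallel: a GPU kernel could evaluate one sample per
    thread. Here the same pattern is expressed with vectorized
    NumPy as a CPU stand-in.
    """
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    inside = (x * x + y * y) <= 1.0  # element-wise hit test
    return 4.0 * inside.mean()       # fraction inside quarter circle

print(monte_carlo_pi(1_000_000))
```

Because each sample contributes independently to the final reduction, scaling this pattern to the 256-512 GPU cores mentioned below is a matter of partitioning the sample count across threads and summing partial results.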
Use of FutureSystems
FutureGrid will provide access to a GPU processing environment.
Scale of Use
A couple of days per week, using 256-512 GPU cores with CUDA support.