Wednesday, October 19, 2016

Random Projections for Scaling Machine Learning in Hardware

Continuing our Mapping ML to Hardware series, here is a way to produce random projections that differs from the approach we take at LightOn.
Random projections have recently emerged as a powerful technique for large-scale dimensionality reduction in machine learning applications. Crucially, the randomness can be extracted from sparse probability distributions, enabling hardware implementations with little overhead. In this paper, we describe a Field-Programmable Gate Array (FPGA) implementation alongside a Kernel Adaptive Filter (KAF) that is capable of reducing computational resources by introducing a controlled error term, achieving higher modelling capacity for given hardware resources. Empirical results involving classification, regression and novelty detection show that a 40% net increase in available resources and improvements in prediction accuracy are achievable for projections which halve the input vector length, enabling us to scale up hardware implementations of KAF learning algorithms by at least a factor of 2. The execution time of our random projection core is shown to be an order of magnitude lower than that of a single-core central processing unit (CPU), and the system-level implementation on an FPGA-based network card achieves a 29x speedup over the CPU.
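
As a rough illustration of the two ingredients the abstract names, here is a minimal NumPy sketch: a sparse random projection in the Achlioptas/Li style (one standard way to draw the randomness from a sparse distribution; the paper's exact distribution and FPGA datapath may well differ) feeding a small kernel least-mean-squares filter, a common kernel adaptive filter. The sparsity level s = 3, the function names, and the toy regression target are all assumptions made for illustration, not details from the paper.

```python
import numpy as np

def sparse_projection(d_in, d_out, s=3, seed=0):
    """Sparse random projection matrix (Achlioptas / Li-style sketch,
    not the paper's FPGA core). Entries come from {+1, 0, -1} with
    probabilities {1/(2s), 1 - 1/s, 1/(2s)}; with s = 3, two thirds of
    the matrix is zero, so a hardware datapath needs few multipliers."""
    rng = np.random.default_rng(seed)
    R = rng.choice([-1.0, 0.0, 1.0], size=(d_out, d_in),
                   p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    # Scale so that E[||Rx||^2] = ||x||^2 (Johnson-Lindenstrauss style).
    return R * np.sqrt(s / d_out)

class KLMS:
    """Minimal kernel least-mean-squares filter, one common KAF; the
    abstract does not specify which kernel adaptive filter is used."""
    def __init__(self, eta=0.5, gamma=1.0):
        self.eta, self.gamma = eta, gamma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        # f(x) = sum_i a_i * exp(-gamma * ||c_i - x||^2)
        return sum(a * np.exp(-self.gamma * np.sum((c - x) ** 2))
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, y):
        err = y - self.predict(x)
        self.centers.append(x)               # each sample grows the dictionary,
        self.coeffs.append(self.eta * err)   # which is why input length matters
        return err

# Halve the input vector length (as in the paper's experiments), then
# learn a toy nonlinear regression on the projected inputs.
d_in, d_out = 128, 64
rng = np.random.default_rng(1)
R = sparse_projection(d_in, d_out)
w = rng.standard_normal(d_in)
filt = KLMS()
for _ in range(200):
    x = rng.standard_normal(d_in)
    y = np.sin(w @ x / np.sqrt(d_in))        # hypothetical target function
    filt.update(R @ x, y)                    # the KAF only ever sees 64-d vectors
```

The hardware appeal is visible in the projection matrix itself: with s = 3, two thirds of its entries are zero and the rest share a single magnitude, so applying it reduces to sparse signed accumulation rather than full multiply-accumulates, while the kernel filter's per-sample cost shrinks with the halved input length.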



