Thursday, December 31, 2015

Nuit Blanche in Review (December 2015)


 

This is the last Nuit Blanche in Review for 2015 (the previous one was in November) and there are still a few days before things become hectic again: What Are You Waiting For ?
So it looks like we began to use the term The Great Convergence back in January, and look what happened: it's already the end of the year. As recently as a few days ago, we had this, which makes me wonder if the tag should be ML, CS, or both ML and CS. Anyway, this got me inspired to write about The Fall of Everything Else, and I realized we needed to accelerate things, such as making some hardware more relevant by majorly increasing the size of the datasets that hardware produces (Hamming's Time: Making Hyperspectral Imaging Mainstream). To help in this endeavor, you may want to visit that site. Next year or in a day, whichever comes faster, I'll begin to do the same for genomics, stay tuned. In the meantime, you may want to read this Sunday Morning Insight: 10x Not 10% by Ken Norton.
We recently reached our 4.5 millionth page view in total, and had about 270,000 unique visits this year. That's about 740 visits per day, or about 1,000 unique visits per weekday. Most importantly, looking back to 2008, daily unique visits have grown by an average of roughly 80 per day each year for the past 8 years. Linear growth is fine.
 Here are some in-depth insights for subjects covered by Nuit Blanche.
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, December 30, 2015

k-Means Clustering Is Matrix Factorization

While we knew this already (see the Advanced Matrix Factorization Jungle Page), Christian really wanted to get to the bottom of it in writing. Thank you !
 
This closed-form solution makes it look more like a subspace clustering algorithm; from the Jungle page:
 
Subspace Clustering: A = AX  with unknown X, solve for sparse/other conditions on X  

 
 
We show that the objective function of conventional k-means clustering can be expressed as the Frobenius norm of the difference of a data matrix and a low rank approximation of that data matrix. In short, we show that k-means clustering is a matrix factorization problem. These notes are meant as a reference and intended to provide a guided tour towards a result that is often mentioned but seldom made explicit in the literature.
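For a quick numerical check of that statement, here is a minimal sketch (mine, not from Christian's notes; it assumes scikit-learn and a toy Gaussian dataset): the k-means objective returned by the solver coincides with the Frobenius-norm residual ||X - Z M||_F^2, where Z is the cluster-indicator matrix and M the matrix of centroids, i.e. the residual of a rank-k factorization of the data matrix.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data: n points in d dimensions (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Cluster-indicator matrix Z (n x k): Z[i, j] = 1 iff point i belongs to cluster j.
Z = np.zeros((X.shape[0], k))
Z[np.arange(X.shape[0]), km.labels_] = 1.0

# Centroid matrix M (k x d); Z @ M is a rank-(at most k) approximation of X.
M = km.cluster_centers_

obj_kmeans = km.inertia_                                   # sum of squared distances
obj_factorization = np.linalg.norm(X - Z @ M, 'fro') ** 2  # ||X - Z M||_F^2

print(obj_kmeans, obj_factorization)  # the two numbers agree up to floating point
```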
Related:

 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, December 29, 2015

A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction

 
 
From the paper:

Another novel aspect of our theory is a translation invariance result that formalizes the idea of the features becoming more translation-invariant with increasing network depth (see, e.g., [12]–[14], [17], [18]).


A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction by Thomas Wiatowski, Helmut Bölcskei

Deep convolutional neural networks have led to breakthrough results in practical feature extraction applications. The mathematical analysis of such networks was initiated by Mallat, 2012. Specifically, Mallat considered so-called scattering networks based on semi-discrete shift-invariant wavelet frames and modulus non-linearities in each network layer, and proved translation invariance (asymptotically in the wavelet scale parameter) and deformation stability of the corresponding feature extractor. The purpose of this paper is to develop Mallat's theory further by allowing for general convolution kernels, or in more technical parlance, general semi-discrete shift-invariant frames (including Weyl-Heisenberg, curvelet, shearlet, ridgelet, and wavelet frames) and general Lipschitz-continuous non-linearities (e.g., rectified linear units, shifted logistic sigmoids, hyperbolic tangents, and modulus functions), as well as pooling through sub-sampling, all of which can be different in different network layers. The resulting generalized network enables extraction of significantly wider classes of features than those resolved by Mallat's wavelet-modulus scattering network. We prove deformation stability for a larger class of deformations than those considered by Mallat, and we establish a new translation invariance result which is of vertical nature in the sense of the network depth determining the amount of invariance. Moreover, our results establish that deformation stability and vertical translation invariance are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities. This offers an explanation for the tremendous success of deep convolutional neural networks in a wide variety of practical feature extraction applications. The mathematical techniques we employ are based on continuous frame theory.
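To make the structure concrete, here is a minimal sketch of the kind of layered feature extractor the paper analyzes: convolution with a filter bank, a Lipschitz non-linearity (here the modulus), and pooling by sub-sampling in each layer. The random filters below are only stand-ins; the paper works with structured semi-discrete frames (wavelets, Weyl-Heisenberg, curvelets, ...), which this toy code does not construct.

```python
import numpy as np

def layer(signal, filters, nonlinearity=np.abs, subsample=2):
    """One layer in the spirit of the paper: convolve with each atom of a
    filter bank, apply a Lipschitz non-linearity pointwise (here the modulus),
    then pool by sub-sampling."""
    outputs = []
    for h in filters:
        y = nonlinearity(np.convolve(signal, h, mode='same'))
        outputs.append(y[::subsample])
    return outputs

def extract_features(signal, filter_banks, output_filter):
    """Propagate the signal through the layers and collect features by
    low-pass filtering (here a crude moving average) every intermediate output."""
    features = [np.convolve(signal, output_filter, mode='same')]
    current = [signal]
    for filters in filter_banks:
        current = [out for s in current for out in layer(s, filters)]
        features += [np.convolve(s, output_filter, mode='same') for s in current]
    return np.concatenate(features)

# Toy example: two layers of random filters as stand-ins for a structured frame.
rng = np.random.default_rng(0)
x = rng.normal(size=256)
banks = [[rng.normal(size=9) for _ in range(4)] for _ in range(2)]
phi = np.ones(16) / 16.0
print(extract_features(x, banks, phi).shape)   # one long feature vector
```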
 
 
Earlier:
 

Deep Convolutional Neural Networks Based on Semi-Discrete Frames by Thomas Wiatowski, Helmut Bölcskei

Deep convolutional neural networks have led to breakthrough results in practical feature extraction applications. The mathematical analysis of these networks was pioneered by Mallat, 2012. Specifically, Mallat considered so-called scattering networks based on identical semi-discrete wavelet frames in each network layer, and proved translation-invariance as well as deformation stability of the resulting feature extractor. The purpose of this paper is to develop Mallat's theory further by allowing for different and, most importantly, general semi-discrete frames (such as, e.g., Gabor frames, wavelets, curvelets, shearlets, ridgelets) in distinct network layers. This allows to extract wider classes of features than point singularities resolved by the wavelet transform. Our generalized feature extractor is proven to be translation-invariant, and we develop deformation stability results for a larger class of deformations than those considered by Mallat. For Mallat's wavelet-based feature extractor, we get rid of a number of technical conditions. The mathematical engine behind our results is continuous frame theory, which allows us to completely detach the invariance and deformation stability proofs from the particular algebraic structure of the underlying frames.
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Biological screens from linear codes: theory and tools

 


Biological screens from linear codes: theory and tools by Yaniv Erlich, Anna Gilbert, Hung Ngo, Atri Rudra, Nicolas Thierry-Mieg, Mary Wootters, Dina Zielinski, Or Zuk

Molecular biology increasingly relies on large screens where enormous numbers of specimens are systematically assayed in the search for a particular, rare outcome. These screens include the systematic testing of small molecules for potential drugs and testing the association between genetic variation and a phenotype of interest. While these screens are "hypothesis-free," they can be wasteful; pooling the specimens and then testing the pools is more efficient. We articulate in precise mathematical ways the type of structures useful in combinatorial pooling designs so as to eliminate waste, to provide light weight, flexible, and modular designs. We show that Reed-Solomon codes, and more generally linear codes, satisfy all of these mathematical properties. We further demonstrate the power of this technique with Reed-Solomon-based biological experiments. We provide general purpose tools for experimentalists to construct and carry out practical pooling designs with rigorous guarantees for large screens.
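For readers who want to see the idea in action, here is a toy sketch of a Reed-Solomon pooling design (my own illustration with made-up parameters, not the authors' tool, which will live at the URL below): specimens are identified with low-degree polynomials over GF(q), each specimen goes into one pool per evaluation point, and a single positive specimen is recovered by intersecting the positive pools, assuming noiseless tests.

```python
from itertools import product

# Toy Reed-Solomon pooling design over GF(q), q prime (parameters made up):
# each specimen is identified with a polynomial of degree < k and is placed
# into one pool per evaluation point, namely pool (i, p(i) mod q).  Two
# distinct specimens share at most k-1 pools, so a single positive specimen
# is pinned down by intersecting the pools that test positive.
q, k = 5, 3                              # 125 specimens, only q*q = 25 pools
messages = list(product(range(q), repeat=k))

def codeword(msg):
    """Evaluate the message polynomial sum_j msg[j] * x^j at x = 0..q-1 (mod q)."""
    return [sum(c * pow(x, j, q) for j, c in enumerate(msg)) % q for x in range(q)]

# pools[(i, v)] = set of specimen indices assigned to pool (i, v)
pools = {(i, v): set() for i in range(q) for v in range(q)}
for s, msg in enumerate(messages):
    for i, v in enumerate(codeword(msg)):
        pools[(i, v)].add(s)

# Simulate a noiseless screen with a single positive specimen.
true_positive = 77
positive_pools = [key for key, members in pools.items() if true_positive in members]

# Decode by intersecting the membership lists of the positive pools.
candidates = set(range(len(messages)))
for key in positive_pools:
    candidates &= pools[key]
print(candidates)   # {77}: recovered from 25 pooled tests instead of 125 individual ones
```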
  A free web tool to design and guide experiments will eventually be at: http://pooling.teamerlich.org/

Previous posts of interest:

 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, December 28, 2015

Representation and Coding of Signal Geometry

Petros just mentioned it on Twitter; here it is: Representation and Coding of Signal Geometry by Petros T Boufounos, Shantanu Rane, Hassan Mansour

Approaches to signal representation and coding theory have traditionally focused on how to best represent signals using parsimonious representations that incur the lowest possible distortion. Classical examples include linear and non-linear approximations, sparse representations, and rate-distortion theory. Very often, however, the goal of processing is to extract specific information from the signal, and the distortion should be measured on the extracted information. The corresponding representation should, therefore, represent that information as parsimoniously as possible, without necessarily accurately representing the signal itself.
In this paper, we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances and their inner products. For that goal, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also demonstrate that it is possible to design the embedding such that it represents different ranges of distances with different precision. These embeddings also allow the computation of kernel inner products with control on their inner product-preserving properties. Our results provide a broad framework to design and analyze embeddings, and generalize existing results in this area, such as random Fourier kernels and universal embeddings.
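As a reminder of how such randomized embeddings behave in the simplest (unquantized, Johnson-Lindenstrauss-like) case, the sketch below projects unit-norm signals with a random Gaussian matrix and measures how well pairwise inner products are preserved; the paper's universal and distance-range-adaptive embeddings go well beyond this toy check.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 1000, 100, 50          # ambient dimension, embedding dimension, #signals

X = rng.normal(size=(N, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm signals

A = rng.normal(size=(m, n)) / np.sqrt(m)        # random Gaussian embedding
Y = X @ A.T

G, G_emb = X @ X.T, Y @ Y.T                     # pairwise inner products
print("max inner-product distortion:", np.abs(G - G_emb).max())  # roughly O(1/sqrt(m))
```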
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Ten Lectures and Forty-Two Open Problems in the Mathematics of Data Science, Afonso S. Bandeira

Afonso mentioned on his Twitter and on his blog the release of the lecture notes for a course he gave this past fall. I liked the notes very much and giggled at this part:

It is known that 43 ≤ R(5) ≤ 49. There is a famous quote in Joel Spencer's book [Spe94] that conveys the difficulty of computing Ramsey numbers:
"Erdős asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R(5) or they will destroy our planet. In that case, he claims, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R(6). In that case, he believes, we should attempt to destroy the aliens."
Here are the notes: Ten Lectures and Forty-Two Open Problems in the Mathematics of Data Science by Afonso S. Bandeira

Preface
These are notes from a course I gave at MIT on the Fall of 2015 entitled "18.S096: Topics in Mathematics of Data Science". These notes are not in final form and will be continuously edited and/or corrected (as I am sure they contain many typos). Please use at your own risk and do let me know if you find any typo/mistake. Part of the content of this course is greatly inspired by a course I took from Amit Singer while a graduate student at Princeton. Amit's course was inspiring and influential on my research interests. I can only hope that these notes may one day inspire someone's research in the same way that Amit's course inspired mine. These notes also include a total of forty-two open problems (now 41, as in the meanwhile Open Problem 1.3 has been solved [MS15]!). This list of problems does not necessarily contain the most important problems in the field (although some will be rather important). I have tried to select a mix of important, perhaps approachable, and fun problems. Hopefully you will enjoy thinking about these problems as much as I do! I would like to thank all the students who took my course, it was a great and interactive audience! I would also like to thank Nicolas Boumal, Ludwig Schmidt, and Jonathan Weed for letting me know of several typos. Thank you also to Nicolas Boumal, Dustin G. Mixon, Bernat Guillen Pegueroles, Philippe Rigollet, and Francisco Unda for suggesting open problems.

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, December 27, 2015

Sunday Morning Insight: 10x Not 10% by Ken Norton

Here is Ken Norton's outstanding video featuring his 10x Not 10%: Product Management by Orders of Magnitude presentation (the link goes to Ken's long-form essay version of that presentation). You may notice certain themes mentioned here on Nuit Blanche before and highlighted below. He mentions betting on trends, while we call them the steamrollers. One should notice that while Ken marvels at the images of Pluto downloaded a few hours earlier, the talk was given fifteen days before The Second Inflection Point in Genome Sequencing occurred. Let us note that we wondered back in 2014 when that second inflection point would take place, and that it occurred about a year later. In all, there are certainly elements of the strategy described by Ken that we are trying to accomplish at LightOn. Enjoy the video !


  
 

Related:




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, December 24, 2015

Why are deep nets reversible: A simple theory, with implications for training


Why are deep nets reversible: A simple theory, with implications for training by Sanjeev Arora, Yingyu Liang, Tengyu Ma

Generative models for deep learning are promising both to improve understanding of the model, and yield training methods requiring fewer labeled samples.
Recent works use generative model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying "proof of correctness" for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input. Furthermore, these models are complicated.
The current paper takes a more theoretical tack. It presents a very simple generative model for RELU deep nets, with the following characteristics: (i) The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is A then the reverse transformation is A^T. (This can be seen as an explanation of the old weight tying idea for denoising autoencoders.) (ii) Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers. Under this assumption ---which is experimentally tested on real-life nets like AlexNet--- it is formally proved that feed forward net is a correct inference method for recovering the hidden layer given the input.
The generative model suggests a simple modification for training: use the generative model to produce synthetic data with labels and include it in the training set. Experiments are shown to support this theory of random-like deep nets; and that it helps the training.
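The core claim is easy to test on a single layer: with i.i.d. Gaussian weights, applying the transpose of the forward map (with the right scaling) approximately inverts ReLU(Wx). The sketch below is a one-layer, bias-free illustration of that statement, not the paper's full generative model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 20000            # input dimension, number of hidden units (m >> n)

x = rng.normal(size=n)       # layer we want to recover
W = rng.normal(size=(m, n))  # random weights, in line with the paper's assumption

h = np.maximum(W @ x, 0.0)   # forward pass: ReLU(Wx)

# Reverse pass: transpose of the forward map, up to a scaling factor.
# For i.i.d. N(0,1) weights, E[W^T ReLU(Wx)] = (m/2) x, hence the 2/m factor.
x_hat = (2.0 / m) * (W.T @ h)

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.3f}")   # shrinks as m grows
```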
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

The exponential advantage of distributed and deep representations

Here are some of the papers/preprints related to the following slides, extracted from the Deep Learning NIPS 2015 Tutorial by Geoff Hinton, Yoshua Bengio and Yann LeCun. I also added a few papers that came afterwards to get a big-picture view of what sort of architecture ought to be used for maximum information embedding. Enjoy !

 

 
 
 

 
Deep Learning by Ian Goodfellow and Aaron Courville and Yoshua Bengio 
 
 
On the number of response regions of deep feed forward networks with piece-wise linear activations by Razvan Pascanu, Guido Montufar, Yoshua Bengio

This paper explores the complexity of deep feedforward networks with linear pre-synaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piecewise linear functions based on computational geometry. We look at a deep rectifier multi-layer perceptron (MLP) with linear output units and compare it with a single layer version of the model. In the asymptotic regime, when the number of inputs stays constant, if the shallow model has $kn$ hidden units and $n_0$ inputs, then the number of linear regions is $O(k^{n_0} n^{n_0})$. For a $k$ layer model with $n$ hidden units on each layer it is $\Omega(\lfloor n/n_0 \rfloor^{k-1} n^{n_0})$. The number $\lfloor n/n_0 \rfloor^{k-1}$ grows faster than $k^{n_0}$ when $n$ tends to infinity or when $k$ tends to infinity and $n \geq 2n_0$. Additionally, even when $k$ is small, if we restrict $n$ to be $2n_0$, we can show that a deep model has considerably more linear regions than a shallow one. We consider this as a first step towards understanding the complexity of these models and specifically towards providing suitable mathematical tools for future analysis.
 

On the Number of Linear Regions of Deep Neural Networks by Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio
We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have. Deep networks are able to sequentially map portions of each layer's input-space to the same output. In this way, deep models compute functions that react equally to complicated patterns of different inputs. The compositional structure of these functions enables them to re-use pieces of computation exponentially often in terms of the network's depth. This paper investigates the complexity of such compositional maps and contributes new theoretical results regarding the advantage of depth for neural networks with piecewise linear activation functions. In particular, our analysis is not specific to a single family of models, and as an example, we employ it for rectifier and maxout networks. We improve complexity bounds from pre-existing work and investigate the behavior of units in higher layers.
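One way to get a feel for these counts is to enumerate activation patterns of a small network on a dense grid of inputs: each pattern corresponds to one linear region of the piecewise-linear map. The sketch below does this for a 1-D input with randomly drawn weights; note that random weights generally do not realize the worst-case constructions behind the bounds above, so this only illustrates how regions can be counted, not the exponential gap itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def count_regions(x_grid, weights, biases):
    """Count distinct ReLU activation patterns of a small fully-connected net
    over a dense 1-D grid; each pattern is one linear region of the map."""
    patterns = set()
    for x in x_grid:
        a = np.array([x])
        bits = []
        for W, b in zip(weights, biases):
            z = W @ a + b
            bits.extend((z > 0).astype(int).tolist())
            a = np.maximum(z, 0.0)
        patterns.add(tuple(bits))
    return len(patterns)

x_grid = np.linspace(-3, 3, 20001)
width, depth = 8, 3

# Deep net: `depth` hidden layers of `width` units each.
deep_W = [rng.normal(size=(width, 1))] + [rng.normal(size=(width, width)) for _ in range(depth - 1)]
deep_b = [rng.normal(size=width) for _ in range(depth)]

# Shallow net with the same total number of hidden units.
shallow_W = [rng.normal(size=(width * depth, 1))]
shallow_b = [rng.normal(size=width * depth)]

print("deep net regions   :", count_regions(x_grid, deep_W, deep_b))
print("shallow net regions:", count_regions(x_grid, shallow_W, shallow_b))
```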
 

When Does a Mixture of Products Contain a Product of Mixtures? by  Guido F. Montufar, Jason Morton

We derive relations between theoretical properties of restricted Boltzmann machines (RBMs), popular machine learning models which form the building blocks of deep learning models, and several natural notions from discrete mathematics and convex geometry. We give implications and equivalences relating RBM-representable probability distributions, perfectly reconstructible inputs, Hamming modes, zonotopes and zonosets, point configurations in hyperplane arrangements, linear threshold codes, and multi-covering numbers of hypercubes. As a motivating application, we prove results on the relative representational power of mixtures of product distributions and products of mixtures of pairs of product distributions (RBMs) that formally justify widely held intuitions about distributed representations. In particular, we show that a mixture of products requiring an exponentially larger number of parameters is needed to represent the probability distributions which can be obtained as products of mixtures.
 

FitNets: Hints for Thin Deep Nets by Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio

While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.

On the Expressive Power of Deep Learning: A Tensor Analysis by Nadav Cohen, Or Sharir, Amnon Shashua

It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.
In its basic form, our main theoretical result shows that the set of polynomially sized rank-1 decomposable tensors has measure zero in the parameter space of polynomially sized HT decomposable tensors. In deep learning terminology, this amounts to saying that besides a negligible set, all functions that can be implemented by a deep network of polynomial size, require an exponential size if one wishes to implement (or approximate) them with a shallow network. Our construction and theory shed new light on various practices and ideas employed by the deep learning community, and in that sense bear a paradigmatic contribution as well.
 
 
 
How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances? by Senjian An, Farid Boussaid, Mohammed Bennamoun (attendant slides and video from ICML 2015)
This paper investigates how hidden layers of deep rectifier networks are capable of transforming two or more pattern sets to be linearly separable while preserving the distances with a guaranteed degree, and proves the universal classification power of such distance preserving rectifier networks. Through the nearly isometric nonlinear transformation in the hidden layers, the margin of the linear separating plane in the output layer and the margin of the nonlinear separating boundary in the original data space can be closely related so that the maximum margin classification in the input data space can be achieved approximately via the maximum margin linear classifiers in the output layer. The generalization performance of such distance preserving deep rectifier neural networks can be well justified by the distance-preserving properties of their hidden layers and the maximum margin property of the linear classifiers in the output layer.

Training Very Deep Networks  by Rupesh K. Srivastava, Klaus Greff, Juergen Schmidhuber
Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
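The gating mechanism is simple enough to write down directly. Below is a minimal forward-pass sketch of a highway layer, y = H(x)·T(x) + x·(1 − T(x)), with the transform-gate bias initialized negative so that a deep stack starts out close to the identity; the weights and dimensions are made up for illustration, and no training is performed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_H, b_H, W_T, b_T):
    """One highway layer: y = H(x) * T(x) + x * (1 - T(x)).
    H is the usual non-linear transform; T is the transform gate that decides,
    per unit, how much transformed signal to pass versus copying the input."""
    H = np.tanh(W_H @ x + b_H)        # candidate transformation
    T = sigmoid(W_T @ x + b_T)        # gate in (0, 1)
    return H * T + x * (1.0 - T)

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=d)

# Negative gate bias: early on, every layer behaves close to the identity,
# which is what lets information flow through very deep stacks.
params = [(0.1 * rng.normal(size=(d, d)), np.zeros(d),
           0.1 * rng.normal(size=(d, d)), -2.0 * np.ones(d)) for _ in range(50)]

h = x
for W_H, b_H, W_T, b_T in params:
    h = highway_layer(h, W_H, b_H, W_T, b_T)
print(np.linalg.norm(h - x) / np.linalg.norm(x))   # stays moderate after 50 layers
```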


 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, December 23, 2015

What Are You Waiting For ?

 
 
For different reasons, the winter break/solstice is a good thing for discoveries and momentous firsts.

 What are you waiting for ?
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

What Happens to a Manifold Under a Bi-Lipschitz Map?

 


What Happens to a Manifold Under a Bi-Lipschitz Map? by Armin Eftekhari, Michael B. Wakin

We study geometric and topological properties of the image of a smooth submanifold of $\mathbb{R}^{n}$ under a bi-Lipschitz map to $\mathbb{R}^{m}$. In particular, we characterize how the dimension, diameter, volume, and reach of the embedded manifold relate to the original. Our main result establishes a lower bound on the reach of the embedded manifold in the case where $m \le n$ and the bi-Lipschitz map is linear. We discuss implications of this work in signal processing and machine learning, where bi-Lipschitz maps on low-dimensional manifolds have been constructed using randomized linear operators.
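To connect this with the randomized linear operators mentioned in the abstract, the following sketch samples points from a simple smooth submanifold of R^n (a circle), maps them with a random linear operator into a lower-dimensional space, and estimates the empirical bi-Lipschitz constants from pairwise distances. The paper's results on reach, volume, and diameter are of course not captured by this toy check.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 50, 20, 300

# Sample points from a smooth 1-D submanifold of R^n (a circle embedded in the
# first two coordinates), then map them to R^m with a random linear operator.
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
X = np.zeros((N, n))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random linear map R^n -> R^m
Y = X @ A.T

# Empirical bi-Lipschitz constants over all pairs of sampled points:
# alpha * ||x - x'|| <= ||Ax - Ax'|| <= beta * ||x - x'||
D_X = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
D_Y = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
mask = D_X > 0
ratios = D_Y[mask] / D_X[mask]
print(f"empirical alpha = {ratios.min():.3f}, beta = {ratios.max():.3f}")
```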
 
 h/t Laurent Jacques
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, December 22, 2015

Streaming Kernel Principal Component Analysis

Sketching and random features together, what's not to like ?

Streaming Kernel Principal Component Analysis by Mina Ghashami, Daniel Perry, Jeff M. Phillips

Kernel principal component analysis (KPCA) provides a concise set of basis vectors which capture non-linear structures within large data sets, and is a central tool in data analysis and learning. To allow for non-linear relations, typically a full $n \times n$ kernel matrix is constructed over $n$ data points, but this requires too much space and time for large values of $n$. Techniques such as the Nyström method and random feature maps can help towards this goal, but they do not explicitly maintain the basis vectors in a stream and take more space than desired. We propose a new approach for streaming KPCA which maintains a small set of basis elements in a stream, requiring space only logarithmic in $n$, and also improves the dependence on the error parameter. Our technique combines together random feature maps with recent advances in matrix sketching, it has guaranteed spectral norm error bounds with respect to the original kernel matrix, and it compares favorably in practice to state-of-the-art approaches.
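Here is a rough sketch of the two ingredients combined, assuming NumPy only: random Fourier features as the explicit (approximate) kernel feature map, streamed through a Frequent Directions sketch so that only a small number of rows is ever kept. This is my own simplified pairing of the two tools for illustration, not the authors' algorithm or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, D, gamma):
    """Random Fourier features approximating the RBF kernel exp(-gamma*||x-y||^2)."""
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def frequent_directions(Z, ell):
    """Stream the rows of Z into a sketch B of at most 2*ell rows with B^T B ~ Z^T Z."""
    B = np.zeros((2 * ell, Z.shape[1]))
    nxt = 0
    for row in Z:
        B[nxt] = row
        nxt += 1
        if nxt == 2 * ell:                       # buffer full: shrink
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            shrunk = np.sqrt(np.maximum(s**2 - s[ell - 1] ** 2, 0.0))
            B = shrunk[:, None] * Vt
            nxt = ell
    return B

# Toy data with non-linear (circular) structure.
t = rng.uniform(0, 2 * np.pi, size=2000)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(2000, 2))

Z = random_fourier_features(X, D=200, gamma=1.0)   # explicit approximate feature map
B = frequent_directions(Z, ell=20)                 # small sketch, seen one row at a time

# The top right-singular vectors of B approximate the kernel PCA basis; here we
# just check how well the sketch captures the feature-space covariance.
err = np.linalg.norm(Z.T @ Z - B.T @ B, 2) / np.linalg.norm(Z.T @ Z, 2)
print(f"relative spectral-norm error of the sketch: {err:.3f}")
```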
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
