Monday, August 31, 2009

CS: Compressed sensing reconstruction of a string signal from interferometric observations of the cosmic microwave background

If you thought some of the previous posts were too esoteric, watch out! We're now talking about doing inverse problems on strings :-). Enjoy.

Compressed sensing reconstruction of a string signal from interferometric observations of the cosmic microwave background by Yves Wiaux, Gilles Puy, Pierre Vandergheynst. The abstract reads:
We propose an algorithm for the reconstruction of the signal induced by cosmic strings in the cosmic microwave background (CMB), from radio-interferometric data at arcminute resolution. Radio interferometry provides incomplete and noisy Fourier measurements of the string signal, which exhibits sparse or compressible magnitude of the gradient due to the Kaiser-Stebbins (KS) effect. In this context the versatile framework of compressed sensing naturally applies for solving the corresponding inverse problem. Our algorithm notably takes advantage of a model of the prior statistical distribution of the signal fitted on the basis of realistic simulations. Enhanced performance relative to the standard CLEAN algorithm is demonstrated by simulated observations under noise conditions including primary and secondary CMB anisotropies.

Sunday, August 30, 2009

Imaging with Nature (part 3)

In the comments section of the previous entry on Imaging with Nature, I had the following exchange with one of the readers. The reason I am reproducing it here is that, when trying to explain compressive sensing, the notion of random sampling as understood in a compressive sensing setting is often misunderstood:

Anonymous said...

Imagine a Demon (a relative of Maxwell's demon?) with complete knowledge of the entire scene, cameras, mathematics, optics, and a tiny paint brush.

Could such a demon touch up the scene in a subpixel manner such that the normal imaging system sees the same image as before, while the random imager sees something wildly different? It seems to me that it could. The touched up scene would be highly improbable I suppose.

I find this fascinating, as much as I understand this.


To what I said:

I don't think it would, but the medium being random, there would be little control over the capability for superresolution.


Anonymous then said...

Is this demon interesting? I don't want to waste your time!

It seems to me that such a demon could defeat superresolution. To the demon, nothing is random, he knows everything, past and future. His only power is his ability to paint very fine detail in such a way that the detail looks like one thing at one resolution and a completely different thing at another resolution.

Salvador Dali made paintings like this years ago.

I really want to understand this.

Thanks,

Michael


I then responded:

Michael,

Thanks for your comment, I am sure a lot of people are asking themselves similar questions, so this is a good exercise.

In effect, while the random medium is indeed random, the calibration step I am mentioning is really about having a full characterization of it. So, in effect, we are building the demon. Once we know everything about the random medium we can use the medium to perform the imaging.

Compressed sensing tells you that if the scene of interest is sparse, then the random medium (known once it has been characterized) and a certain number of measurements made through this medium allow one to recover the scene of interest perfectly. Only the physics will put some constraints on what you can see.

Hope this answers your question; if not, let me know.



Thank you Michael!

As Laurent Jacques rightly pointed out recently, this type of approach parallels the in-situ measurements found in the work of Lawrence Carin, Dehong Liu, and Ya Xue (In Situ Compressive Sensing (short version)).
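To make the exchange above a little more concrete, here is a minimal numerical sketch (my own toy illustration, with arbitrary dimensions and not tied to any particular hardware): a sparse scene is observed through a fully characterized random medium Phi, and it is recovered from far fewer measurements than pixels with a simple Orthogonal Matching Pursuit loop.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 5                         # pixels, measurements, nonzero pixels

    # The "demon we built": a random medium that has been fully characterized.
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)

    # A sparse scene and its compressive measurements through the medium.
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = Phi @ x

    # Orthogonal Matching Pursuit: greedily pick the column of Phi most
    # correlated with the residual, then least-squares on the current support.
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

With 80 measurements of a 256-pixel scene the recovery is essentially exact; the point is that Phi only needs to be known, not designed.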

Is your brain pulsing blood in order to work ?


The latest arxiv blog entry features an article entitled The Thermodynamics of Human Reaction Times by Moscoso del Prado. The blog entry makes a claim about the speed at which information travels in the brain (you need to read the update by Moscoso del Prado in the blog entry itself). The information processing seems to be in the realm of about 60 bits per second. This is fascinating, as this number is close to the 30 bits/second rate found in the information transferred through acoustic mud-pulsing in the Measurement While Drilling techniques used by companies like Schlumberger to transfer information from the tip of the drill bit all the way up to the operator at the surface. Should we expect a similar phenomenon in the brain, where blood is being slightly pulsed ? At the very least, this would definitely provide some type of basis for using fMRI as a way of defining information activity in the brain.

Saturday, August 29, 2009

Imaging with Nature (part 2)


In the comments section of the previous entry, Laurent Jacques highlighted the fact that if one were to use clouds as a mechanism for performing imaging, one would need to figure out the cloud Point Spread Function (PSF). He is totally right. As in the Random Lens Imager case, the calibration step that determines the PSF is essential, and here the cloud PSF calibration needs to happen very fast indeed. Compressive sensing gives a way of reducing the number of trials needed to determine the full PSF, as can be seen in [2].

In a Lidar mode, each measurement goes at the speed of light, and so even if one has to perform many of them, the calibration is expected to go faster than most turbulence of interest in the cloud. Now the problem is to scan the whole cloud fast enough. Can the reduction in the number of samples afforded by compressive sensing enable a rapid enough scan ? That is the question to answer.

While we are on the subject of using Nature as an imaging tool,
Luc Arnold, for instance, uses the Moon as a mirror to detect vegetation on Earth [1]. Since we know the Moon's surface with a certain accuracy, how can we use this information to obtain a better Moon PSF and eventually better images of Earth ? Let us note that this type of investigation (imaging Earth with the Moon accurately) is of little interest to the astronomy folks, since for them the Moon-Earth setup is only a stand-in for detecting chemical elements on exoplanets. They therefore do not care about the details of the Moon's surface, since they will not know it in real observations.



Friday, August 28, 2009

Imaging with Nature


The reason Russia has so much expertise in space (they flew several space stations) stems from the Cold War era. During that time, the Russians had Salyut space stations manned with cosmonauts, where cameras would fill half of the station. The reason they had to man the cameras is that the Earth is always covered with clouds, and the Russians did not have much computing power to automatically discard cloud-covered images. I talked about this in the following entry: Competing with the human element

The point is that we generally look at clouds as an impediment to imaging. With compressive sensing however, I am wondering the following: Could we use clouds as part of an imaging process ?

The light from the sun bounces from the ground onto the cloud and then some of it bounces back to the ground (cloud albedo).

Could we use some means of determining all the features of these clouds, use that information to evaluate the cloud albedo, and then use the cloud albedo to determine the PSF of the system illuminated by sunlight ?

The idea comes from the very intriguing paper: The Design of Compressive Sensing Filter by Lianlin Li, Wenji Zhang, Yin Xiang, Fang Li where one could use the different layers of the ionosphere to produce compressed measurements.

I know it sounds a little crazy, but I need to convince myself that it cannot be done by looking at the cloud radiation albedo numbers, so I need to dig into this a little further.

Thursday, August 27, 2009

The scent of a glass of wine.

As some of you have noted, I am performing a little self-experiment of wearing a Nose Clip While Eating (No-C-WE) to see how much weight I can lose [1]. I am currently in a phase of not doing this, to see how the weight loss fares.

Here is a small side-effect of interest. Before this self-experimentation, when I would go out, I would get a little tipsy after drinking less than a quarter of a glass of Champagne or red wine. Yes, I know! One fourth of a glass is small, but invariably, my brain would make me feel I was in a different operating mode. My speech would be a little slow, but not to the point that people around me would notice. I would notice it though. In effect, very early in the drinking, I would know and accept that I was not in shape to drive or do anything that required some form of focused control. Do remember though that the amounts we are talking about are well below most legal drinking limits (except where there is full prohibition).

During this period of self-experimentation, I have mentioned that most foods' scent went away and that I ate less as a result. Wine drinking took a stranger turn. I tried this several times and noticed the same effect. With the nose clip on, I can drink about four full glasses of wine and feel perfectly OK. By OK, I mean that I can talk and have a good, thoughtful conversation with no obvious side effect. However, as soon as I would walk away from the dinner table, I would not walk straight, nor would my movements be as controlled as I thought they should be. My brain literally would not understand why it was unable to do things right.

Is this the same effect that happens to social drunks or alcoholics ? i.e. does getting accustomed to the taste of a certain spirit put the brain in a mode equivalent to that of a person who drinks with a nose clip on ? Is the mechanism that makes us tipsy scent-related ? If so, how can we put it back in people who have lost it ?

[1] The idea came from Seth Roberts' series of self-experiments.

CS as a disruptive technology ?

If one were to think of compressive sensing as an enabling technology (be it in imaging or elsewhere), what specific technology using compressive sensing could be considered disruptive ? One way to help think about how to answer this question is a list of three questions (from here):
  • Is there a large population of people who historically have not had the money, equipment or skill to do this thing for themselves?
  • Are there customers at the low end of the market who would be happy to purchase a product with less (but good enough) performance if they could get it at a lower price?
  • Is the innovation disruptive to all of the significant incumbents in the industry?

Tuesday, August 25, 2009

CS: TomoPIV meets Compressed Sensing

Interesting! Testing a physical measurement process as if it were a compressed sensing implementation and checking it against various conditions (null space property, ...) is what this paper undertakes: TomoPIV meets Compressed Sensing by Stefania Petra and Christoph Schnörr. The abstract reads:
We study the discrete tomography problem in Experimental Fluid Dynamics - Tomographic Particle Image Velocimetry (TomoPIV) - from the viewpoint of compressed sensing (CS). The CS theory of recoverability and stability of sparse solutions to underdetermined linear inverse problems has rapidly evolved during the last years. We show that all currently available CS concepts predict an extremely poor worst case performance, and a low expected performance of the TomoPIV measurement system, indicating why low particle densities only are currently used by engineers in practice. Simulations demonstrate however that slight random perturbations of the TomoPIV measurement matrix considerably boost both worst-case and expected reconstruction performance. This finding is interesting for CS theory and for the design of TomoPIV measurement systems in practice.


Credit & Licence: Mick Petroff; Tip Thanks: James Holmes (Cairns). Via APOD.

Sunday, August 23, 2009

CS: Phase transitions for greedy sparse approximation algorithms, Exponential bounds implying construction of CS mat, Fast Bayesian Matching Pursuit


Jared Tanner just sent me this:
Two new compressed sensing manuscripts are available. Items 1 and 6 on my webpage publication list: http://www.maths.ed.ac.uk/~tanner/
Item 6 was submitted about a year ago, but I've only recently posted it online; it gives finite dimensional bounds for Gaussian matrices and l1-regularization. Item 1 is a new paper whose results quantify what has actually been proven about CoSaMP, Subspace Pursuit and Iterated Hard Thresholding. Rather than saying something like "the algorithm works if the RIP constant with sparsity 4k is less than 1/10", we cast the results in terms of the phase transition framework using Gaussian matrices.

Here is item 1: Phase transitions for greedy sparse approximation algorithms by Jeffrey Blanchard, Coralia Cartis, Jared Tanner and Andrew Thompson. The abstract reads:

A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven using the ubiquitous Restricted Isometry Property (RIP) [9] to have optimal-order uniform recovery guarantees. However, it is unclear when the RIP-based sufficient conditions on the algorithm are satisfied. We present a framework in which this task can be achieved; translating these conditions for Gaussian measurement matrices into requirements on the signal's sparsity level, size and number of measurements. We illustrate this approach on three of the state-of-the-art greedy algorithms: CoSaMP [27], Subspace Pursuit (SP) [11] and Iterated Hard Thresholding (IHT) [6]. Designed to allow a direct comparison of existing theory, our framework implies that IHT, the lowest of the three in computational cost, also requires fewer compressed sensing measurements than CoSaMP and SP.

and item 6: Exponential bounds implying construction of compressed sensing matrices, error-correcting codes and neighborly polytopes by random sampling, by Jared Tanner and David L. Donoho. The abstract reads:

In [12] the authors proved an asymptotic sampling theorem for sparse signals, showing that n random measurements permit to reconstruct an N-vector having k nonzeros provided
n > 2 · k · log(N/n) (1 + o(1));
reconstruction uses ℓ1 minimization. They also proved an asymptotic rate theorem, showing existence of real error-correcting codes for messages of length N which can correct all possible k-element error patterns using just n generalized checksum bits, where
n > 2e · k · log(N/n) (1 + o(1));
decoding uses ℓ1 minimization. Both results require an asymptotic framework, with N growing large. For applications, on the other hand, we are concerned with specific triples k,n, N. We exhibit triples (k, n,N) for which Compressed Sensing Matrices and Real Error-Correcting Codes surely exist and can be obtained with high probability by random sampling. These derive from exponential bounds on the probability of drawing ‘bad’ matrices. The bounds give conditions effective at finite-N, and converging to the known sharp asymptotic conditions for large N. Compared to other finite-N bounds known to us, they are much stronger, and much more explicit.
Our bounds derive from asymptotics in [12] counting the expected number of k-dimensional faces of the randomly projected simplex T^(N−1) and cross-polytope C^N. We develop here finite-N bounds on the expected discrepancy between the number of k-faces of the projected polytope AQ and its generator Q, for Q = T^(N−1) and C^N.
Our bounds also imply existence of interesting geometric objects. Thus, we exhibit triples (k, n,N) for which polytopes with 2N vertices can be centrally k-neighborly.
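As a quick back-of-the-envelope check of what these conditions ask for, here is a small sketch (my own, purely illustrative, and ignoring the o(1) terms) that finds the smallest n satisfying n > 2·k·log(N/n) and n > 2e·k·log(N/n) for a given pair (k, N):

    import numpy as np

    def smallest_n(k, N, c):
        # Smallest n with k < n <= N and n > c * k * log(N / n); the o(1) term is ignored.
        for n in range(k + 1, N + 1):
            if n > c * k * np.log(N / n):
                return n
        return None

    k, N = 10, 10000
    print("CS matrices           :", smallest_n(k, N, 2.0))       # n > 2 k log(N/n)
    print("error-correcting codes:", smallest_n(k, N, 2 * np.e))  # n > 2e k log(N/n)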

Thanks Jared for the heads-up!

Philip Schniter, Lee C. Potter, Justin Ziniel just made available a new version of Fast Bayesian Matching Pursuit code. From the website:

An updated version 1.2 software package has been made available for those who would like to explore how the algorithm works in MATLAB. The updated version now includes support for complex-valued signals and noise.


PS: I am still away.

Friday, August 21, 2009

CS: Post-Doc at Lawrence Berkeley Labs.

Stefano Marchesini wants to let y'all know about this Post-Doc fellowship opportunity at Lawrence Berkeley labs:

Physicist Postdoc Fellow (X-ray Coherent Imaging and Scattering)
Req Number:

The Advanced Light Source Division (ALS) at Lawrence Berkeley National Laboratory (Berkeley Lab) offers an exciting postdoctoral fellow opportunity to conduct experiments at the ALS and develop experimental and theoretical methods for x-ray coherent imaging and scattering. This position reports to the Coherent Imaging lead scientist within the Experimental Systems Group.

Key responsibilities include:
  • Develop experimental and theoretical methods for x-ray coherent imaging, holography and general scattering.
  • Communicate work results via reports and oral presentations, including participation in meetings, reviews, conferences, and publications in refereed journals.


Qualifications include:
  • Ph.D. in Optics, Physics or Engineering within the last 2 years
  • Experience with complex experimental apparatuses such as synchrotron beamlines and/or optical measurements
  • Familiarity with signal processing (phase retrieval, compressive sensing) and/or familiarity with experimental techniques of synchrotron radiation research is advantageous.


Note : This is a one-year term appointment with the possibility of renewal after one year based on demonstrated performance and availability of funds.

Thanks Stefano !
I'll add it to the Compressive Sensing Jobs page later.


Thursday, August 20, 2009

Random Recoiling

I recently came across several papers/articles or blog entries that make me recoil in some fashion or another, here is a batch:
  • NSF Workshop: Electronic Design Automation-Past, Present, and Future. Discussions include some blog entries including mine.
  • I wonder if this Panalogy Principle is a good way of thinking about manifold signal processing.
  • It looks like the Gaussian nature of Brownian motion is not verified after all.
    "...In both sets of experiments, there were many features in full agreement with Einstein and the bell-shaped curve; but there were also features in significant disagreement. In those cases, the beads moved much farther than the common curve could predict. In those extreme displacements, diffusion behavior was not Gaussian, the researchers report. The behavior was exponential.These large displacements happen less often, but when they do occur, they are much bigger than we previously thought possible," Granick said..."
  • Here is a document written by some Danish mathematicians on how much mathematics there is in space-related projects. If they had waited a few more months, they would have included compressive sensing as an encoding mechanism, but then again they needed to talk to the folks at ESA, not JPL. I was a little underwhelmed by the problems they found that math was solving. The whole process of integration and verification (I&V) is a mind-boggling process, for instance, as one needs to make sure that the models developed for each part behave somewhat nicely when connected to each other. It is not as interesting as developing trajectories, but it's a more difficult problem in the end. I would not be surprised to find out that one of the reasons most space projects go over budget has to do, in part, with this mismatch.
  • In the world of nuclear weaponry, here is an interesting inverse problem revolving around the following question: how do you establish trust by letting an adversarial party look at your nuclear weapon without opening it up ? The process should allow the querying party to use an instrumentation that supports some sort of inverse problem (is the warhead there?), while at the same time the inverse problem should not be good enough to give too much information about how the weapon is assembled. This is all the more important as the querying party includes countries that do not have nuclear weapons. The issue is that you do not want to be proliferating by allowing inspection. I recall a story that was told to us at the nuclear engineering seminar back at Texas A&M about a visit by the U.S. to the (then) U.S.S.R. on this verification business. Back then, before 1990, the U.S. team was asked to verify whether a series of missiles had nuclear warheads in them. Actually, the issue was whether the missiles under inspection had more than one warhead, as the same missile could carry one or more warheads. Of course, the U.S. team could not come within a pre-specified number of yards of the missiles. The Soviet military officer in charge asked the U.S. team to pick one of the missiles at random and do their measurement at once. Recall we were in the middle of the Cold War, so the whole thing was pretty serious and ceremonious, as this exchange was probably the first one since the beginning of the Cold War. But as soon as the U.S. team decided on the missile to be remotely inspected, one of the guards jumped and smiled, to the surprise of the visiting U.S. team. After the test had been performed, the person in charge of the U.S. delegation asked why the guard had behaved this way. The head of the Soviet team responded that the guards had a betting pool and that this one soldier had won. Sometimes you think the game at play is about one thing when in fact it really is about another.

Tuesday, August 18, 2009

CS: Adding to the Big Picture in Compressive Sensing

Laurent Jacques kindly sent me the following:

I was just surfing on your very handy Big Picture page. Around the Section 3, "CS Measurements", I saw that you have listed the zoology of the different *RIP.

If you want one more, in the paper that I co-wrote with David Hammond and M. Jalal Fadili about Dequantizing CS, we used the RIP_p to characterize the performance of the BPDQ decoder. RIP_p is there just the "sandwiching" of the Lp norm (with p >= 2) of the random projection of sparse signals x, i.e. Phi x, between their L2 norm multiplied by the common (1 ± delta)^(1/2) factors, so that RIP_2 is the common RIP.


I am going to add it to the Big Picture. As I have said before, I look forward to having more of this type of interaction, so that the Big Picture really becomes a living document on Compressive Sensing. According to the stats, that page has an average of 100 visitors per day!
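For readers curious about what such a "sandwiching" property looks like numerically, here is a small sketch (mine, not from the BPDQ paper, and restricted to the ordinary p = 2 case) that draws a Gaussian matrix and checks how tightly ||Phi x||_2 / ||x||_2 concentrates around 1 over randomly drawn k-sparse vectors:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k, trials = 512, 128, 8, 2000

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)

    ratios = []
    for _ in range(trials):
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

    ratios = np.array(ratios)
    # Empirical sandwich: (1 - delta)^(1/2) <= ||Phi x||_2 / ||x||_2 <= (1 + delta)^(1/2).
    # This only lower-bounds the true RIP_2 constant, which is a supremum over all supports.
    delta_hat = max(1.0 - ratios.min() ** 2, ratios.max() ** 2 - 1.0)
    print("ratio range:", ratios.min(), ratios.max(), " empirical delta_2:", delta_hat)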

Monday, August 17, 2009

CS: Content Based Retrieval, Bioluminescence Tomography, Recovering sparse signals with non-convex penalties and DC programming

Today we have three new entries at the Rice Compressive Sensing repository. I haven't had time to read them; I'll do that during my break.
Image databases, medical records and geographical information systems contain data that is intrinsically correlated, i.e. elements within a single record show a high degree of correlation. Content based retrieval is a common technique for querying such databases. The query specifies an image or components that the record is expected to contain or be similar to. We propose a technique for compact storage of such correlated data that is used for content based retrieval. Our method utilizes the machinery of compressive sensing, which allows an underdetermined system of equations to be approximately solved by l1-minimization if the data is a sparse linear combination of an appropriate set of basis vectors. Such sparsity is seen in these correlated databases. If the sparsity is high or if some distortion is permitted in the retrieved data, the data can be retrieved by a reconstruction operation with a constant storage cost independent of the number of records stored. If exact retrieval is needed, some additional storage is required for each record, much smaller than the size of the original record. We illustrate the performance of this method with a database of remote sensing images.

Through restoration of the light source information in small animals in vivo, optical molecular imaging, such as fluorescence molecular tomography (FMT) and bioluminescence tomography (BLT), can depict biological and physiological changes observed using molecular probes. A priori information plays an indispensable role in tomographic reconstruction. As a type of a priori information, the sparsity characteristic of the light source has not been sufficiently considered to date. In this paper, we introduce a compressed sensing method to develop a new tomographic algorithm for spectrally-resolved bioluminescence tomography. This method uses the nature of the source sparsity to improve the reconstruction quality with a regularization implementation. Based on verification of the inverse crime, the proposed algorithm is validated with Monte Carlo-based synthetic data and the popular Tikhonov regularization method. Testing with different noise levels and single/multiple source settings at different depths demonstrates the improved performance of this algorithm. Experimental reconstruction with a mouse-shaped phantom further shows the potential of the proposed algorithm.


This paper considers the problem of recovering a sparse signal representation according to a signal dictionary. This problem could be formalized as a penalized least-squares problem in which sparsity is usually induced by a ℓ1-norm penalty on the coefficients. Such an approach known as the Lasso or Basis Pursuit Denoising has been shown to perform reasonably well in some situations. However, it was also proved that non-convex penalties like the pseudo ℓq-norm with q < 1 or SCAD penalty are able to recover sparsity in a more efficient way than the Lasso. Several algorithms have been proposed for solving the resulting non-convex least-squares problem. This paper proposes a generic algorithm to address such a sparsity recovery problem for some class of non-convex penalties. Our main contribution is that the proposed methodology is based on an iterative algorithm which solves at each iteration a convex weighted Lasso problem. It relies on the family of non-convex penalties which can be decomposed as a difference of convex functions. This allows us to apply difference of convex functions programming which is a generic and principled way for solving non-smooth and non-convex optimization problem. We also show that several algorithms in the literature dealing with non-convex penalties are particular instances of our algorithm. Experimental results demonstrate the effectiveness of the proposed generic framework compared to existing algorithms, including iterative reweighted least-squares methods.
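This is not the authors' DC-programming code, but the flavor of "solve a convex weighted Lasso at each iteration" can be conveyed with the familiar reweighted-l1 scheme (the DC algorithm one obtains for a log-sum penalty), with a plain ISTA inner solver. Everything below is a toy sketch of mine, with arbitrary parameters, not their implementation:

    import numpy as np

    def weighted_lasso_ista(A, y, w, lam, n_iter=500):
        # ISTA for min_x 0.5 * ||A x - y||_2^2 + lam * sum_i w_i |x_i|
        L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        return x

    rng = np.random.default_rng(2)
    m, n, k = 60, 200, 6
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    w, eps, lam = np.ones(n), 0.1, 1e-3
    for _ in range(5):                                # each outer step is a convex weighted Lasso
        x = weighted_lasso_ista(A, y, w, lam)
        w = 1.0 / (np.abs(x) + eps)                   # reweighting induced by the non-convex log-sum penalty
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))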


Credit: NASA / JPL / SSI, Shadow on Saturn's ring thanks to the equinox.

Thursday, August 13, 2009

Post-doctoral / Internship position in Multiscale/Multirate Image Processing with Applications in the Geosciences

Laurent Duval, a reader of this blog, has a postdoc position available (which does not require an understanding of compressive sensing):

Subject: Job: Post-doctoral / Internship position in Multiscale/Multirate Image Processing with Applications in the Geosciences

Organization: IFP
Location: Rueil-Malmaison, France
Deadline: Open until filled (beginning 2nd semester 2009)
Duration: 12 months
Gross salary: from ~2400 euros

Context:
IFP has an opening for a post-doctoral position in its Technology, Computer Science and Applied Mathematics Department. IFP is located in Rueil-Malmaison, France, near Paris. The position offers the possibility of collaboration with the Signal and Communications group at University Paris-Est.

IFP is a world-class public-sector research and training center, aimed at developing the technologies and materials of the future in fields of energy, transport and the environment. It provides public and industry stakeholders with innovative solutions for a smooth transition to the more efficient, more economical, cleaner and sustainable energies and materials of the future.

IFP fosters knowledge transfer between long-term fundamental research, applied research and industrial development in keeping with the recommendations of the Barcelona European Council held in March 2002. IFP is funded both by a State budget and by resources provided by private French and foreign international partners.

More information on the Web :
http://www.ifp.fr/
http://www.ifp.com/
http://www-igm.univ-mlv.fr/LabInfo/equipe/signal
http://www-syscom.univ-mlv.fr/~pesquet/

Topic:
The topic proposed for this post-doctoral position is focused on the analysis of geophysical data and their filtering with the help of multiscale/multirate image processing algorithms. Historically, the complexity of seismic data and its interpretation have contributed to the development of several efficient signal processing tools such as the wavelet transform.

In certain seismic data however, different wave types mixed together cannot be separated easily by standard random noise filtering schemes. In order to remedy this issue, efforts have been underway to use models that can be partially matched with data in order to allow for adaptive identification or subtraction.

The aim of the proposed work is to develop innovative techniques for multiscale/ multirate data/model matching. The eventual goal is to exploit simultaneously model and sparse features in the transformed domain with the recently developed directional wavelets and filter banks, based on local multiscale attributes.

While the proposed subject is focused on seismic applications, it is strongly related to more general issues in model based signal processing and detection theory, found in many areas of engineering and science.

Related references:
-C. Chaux et al., 2006, IEEE Trans. Image Processing 15(8) 2397-2412, doi: 10.1109/TIP.2006.875178
Image Analysis Using a Dual-Tree M-Band Wavelet Transform

-A. Droujinine, 2006, J. Geophys. Eng. 3 59-81, doi: 10.1088/1742-2132/3/1/008
Multi-scale geophysical data analysis using the eigenimage discrete wavelet transform

-J. Gauthier et al., 2009, IEEE Trans. Signal Processing, doi: 10.1109/TSP.2009.2023947
Optimization of Synthesis Oversampled Complex Filter Banks


Qualifications:
(1) A PhD degree in Electrical Engineering (signal or image processing,
computer vision, Computer Science, Applied Mathematics), or other related
experience;
(2) Programming skills with MATLAB and C/C++;
(3) Excellent skills in signal/image analysis;
(4) Knowledge in Geophysics is desirable but not required;
(5) Knowledge in wavelets and filter banks is highly desirable.

Application procedure:
Candidates should send an application letter with a PDF detailed CV, together with a list of publications, a PDF copy of their PhD Thesis and at least two reference letters.

Documents should be sent at: laurent(dot)duval(at)ifp.fr

For further information, please contact:
Laurent Duval
IFP, R1130R
1 et 4 avenue de Bois-Preau
F-92852 Rueil-Malmaison Cedex
Tel: +33 1 47 52 61 02
Tel: +33 1 47 52 70 12

Wednesday, August 12, 2009

CS: Sparse Recovery with Pre-Gaussian Random Matrices


Sparse Recovery with Pre-Gaussian Random Matrices by Simon Foucart, Ming-Jun Lai. The abstract reads:
We show that a matrix whose entries are independent copies of a symmetric pre-Gaussian random variable possesses, with overwhelming probability, a Modified Restricted Isometry Property in q-quasinorms for 0 < q ≤ 1/3. We then prove that, if the matrix of an underdetermined linear system of equations satisfies this property, then the sparsest solution of the system can be found using l_q-minimization.


Credit: 808caver, Moonbow over Venus, when light reflected from the Moon shines on rain at night.

Tuesday, August 11, 2009

CS: Update on "A Simple Compressive Sensing Algorithm for Parallel Many-Core Architectures", Saturn Equinox


Jerome Darbon just let me know the following
We have just updated our report "A Simple Compressive Sensing Algorithm for Parallel Many-Core Architectures", Alexandre Borghi, Jerome Darbon, Sylvain Peyronnet, Tony Chan and Stanley Osher. We have added more experiments including time results for the newest Nvidia GPUs and multi-cores of Intel. You might be interested by these new results.

The new version is available here:
or as a webpage here:

This is very interesting as we now have information about what happens beyond some toy models. The first version of this paper was covered earlier when it came out. Let us note that other authors and teams have since used GPUs to perform the compressive sensing reconstruction (see the second part of the Compressive Sensing Hardware page). From that list, there are four:

2.1 UCLA GPU/Multicore solver.

2.2 University of Calgary GPU solver.

2.3 Graz University of Technology GPU solver

2.4 University of Wisconsin Implementation of SpaRSA on a GPU


Let us note that the SpaRSA paper is specifically designed around implementing a non-greedy solver on the GPU. One should also keep in mind, as I mentioned earlier this year, that there is always the Jacket product allowing MATLAB-to-GPU programming. The paper of Jerome and co-workers covers multicore CPUs, an area of investigation left untouched by the other efforts, even though it looks like this type of approach is bound for a bright future. Thanks Jerome for the heads-up!

By the way, today is Saturn Equinox, an event that takes place every fifteen years, which means two things:





Image Credit: NASA/JPL/Space Science Institute.

Monday, August 10, 2009

CS: A Short Note on Compressed Sensing, Non-Iterative Reweighted-Norm Least-Squares Local l_0 Minimization, CVPR09 and a talk

Today, we have two papers, some conference papers and a presentation; here we go:

In this short note, we propose another demonstration of the recovery of sparse signals in Compressed Sensing when their support is partially known. In particular, without very surprising conclusion, this paper extends the results presented recently in [VL09] to the cases of compressible signals and noisy measurements by slightly adapting the proof developed in [Can08].

Laurent also mentions on his page that if somebody finds an error he would be happy to get your feedback.



We present a non-iterative algorithm for computing sparse solutions to underdetermined M×N linear systems of equations. The algorithm computes a solution which is a local minimum of the l_0 norm (number of nonzero values) obtained from the l_1 norm (sum of absolute values) minimum. At each step, it uses reweighted-norm least-squares minimization to compute the l_p norm for values of p decreasing from 2 to 0. The result is similar to the l_1 solution, but uses less computation (solution of ten M×M systems of equations), and there are no convergence issues.
Andy draws a distinction between iteration and recursion. I think his approach is different from the current reweighted l_p strategies. I look forward to someone implementing this algorithm.
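I cannot vouch for this being Andy's exact procedure (the note stresses that it is non-iterative and has no convergence issues), but the reweighted-norm least-squares idea itself can be sketched generically: repeatedly compute a weighted minimum-norm solution whose weights emulate an l_p penalty, with p lowered from 2 toward 0. This FOCUSS-style toy sketch is mine, not the paper's algorithm:

    import numpy as np

    rng = np.random.default_rng(3)
    m, n, k = 50, 150, 5
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    eps = 1e-6
    x = A.T @ np.linalg.solve(A @ A.T, y)            # start at the minimum-l2 solution (p = 2)
    for p in np.linspace(2.0, 0.0, 21):              # lower p gradually toward 0
        # Weighted minimum-norm solution x = W A^T (A W A^T)^+ y with W_ii = |x_i|^(2 - p),
        # so that the weighted l2 norm emulates the l_p norm.
        W = (np.abs(x) + eps) ** (2.0 - p)
        G = (A * W) @ A.T                            # A diag(W) A^T
        z, *_ = np.linalg.lstsq(G, y, rcond=None)
        x = W * (A.T @ z)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))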


The CVPR '09 papers that are on the web are listed here.

Frank Curtis will make a presentation at Lehigh University, Bethlehem, PA on 19-21 August 2009 with the title: A Sequential Quadratic Programming Method for Nonsmooth Optimization
The description of the talk:

Algorithms for the solution of smooth, constrained optimization problems have enjoyed great successes in recent years. In particular, the framework known as sequential quadratic programming (SQP) has been studied and applied to a variety of interesting applications. Similarly, there has been a great deal of recent work on the solution of nonsmooth, unconstrained optimization applications. One approach that has been successful in this context is gradient sampling (GS) — a method that, unlike the many variations of bundle methods, only requires the computation of gradients during the solution process. In this talk, we combine elements of SQP and GS to create an algorithm for nonsmooth, constrained optimization and illustrate the potential for such an approach on illustrative test problems in eigenvalue optimization and compressed sensing.


Credit: NASA/JPL/Space Science Institute, Punching through the F Ring, Released: August 7, 2009 (PIA 11662). Something is punching through Saturn's F ring. As it turns out, Saturn's equinox comes around every fifteen years, and this geometry allows one to see in greater detail things that are going through the rings. Via Bad Astronomy Blog. This is interesting, as we recently saw comets hitting Jupiter at a fifteen-year interval. I know this sounds stupid, but has anyone evaluated whether the angle of inclination of this "thing" is the same as that of the other "thing" that hit Jupiter two weeks ago ?

Saturday, August 08, 2009

Ensemble Averaging Love



Well, now that the Netflix competition is over, the folks at Netflix think they can up the ante on the suspense that occurred in the first contest by holding a second contest. Woohoo! It's all fine and dandy, but I keep having mixed feelings about putting all these nerdy brains to work on a problem where one matches one's taste to others through proxies such as movies. There is a tougher challenge at hand, and one you can directly impact: people matching. As it so happens, Markus Frind, the guy who started Plenty of Fish, is looking for a person with a Ph.D. to help out with the referral engine for his hugely successful free dating site. Think of it as the Craigslist of love. Can you really handle finding Luuuuvvvv ? As the Chinese say, may you live in interesting times....


PS: for those of you not accustomed to the word, Luuuuvvv is sometimes pronounced the same as love in certain parts of the U.S. where country music is typically a hallmark.

Video: LeeAnn Rimes, Blue. LeeAnn first sang this song when she was 13. Many people remember the first time they heard that song.

Thursday, August 06, 2009

CS: An announcement and a plea. Videos of CS particle Filter results

I am going to be offline starting next Tuesday till the end of the month. The automatic submission system of Blogger allows me to post some entries in advance. If you want to submit a job description, a paper or a code at a certain date in the second part of August, I can write the entry now and it will show up at the appropriate time on the blog. You just need to give me the address where the job description/paper or code will be located and the date at which you want an announcement to be made on the blog. Thank you.

David, a commenter on the previous entry, rightly pointed out that the video of Rich Baraniuk's presentation on Compressive Sensing featuring the MIT Random Lens Imager by Rob Fergus, Antonio Torralba, and William T. Freeman has been pulled by the author of the video. If you are that person, please, pretty please, put it back on YouTube for the enlightenment of the community.





In case it doesn't show up again, I think I am going to have to make a video myself, and that, my friends, would surely be an abomination.


While checking Youtube, I found the following two videos using Compressive Sensing and Particle Filter. If somebody knows the owner or the paper used to get these results, I'll be glad to make a larger mention of it on the blog. Here they are:



Description: Compressive tracking of two targets using the Compressive Particle Filter. We used a CS matrix with each element drawn iid from N(0,1). The total number of measurements is 1000. The circles denote the approximate target size.





Description: Video results of the Compressive Particle Filter. The frames are 144x176. We used 1000 measurements and a CS matrix with each element drawn iid from N(0,1). The circle denotes the estimated size of the target.

Wednesday, August 05, 2009

CS: Blind Deconvolution and a postdoc.

In the past, I have mentioned that one of the big issues when one is trying to arrange a Compressive Sensing system (or any system for that matter) is calibration. But in compressive sensing it takes on particular importance, as the mixing or multiplexing is supposedly random. Let's take the example of the MIT Random Lens Imager (by Rob Fergus, Antonio Torralba, and William T. Freeman) featured in one of Rich Baraniuk's presentations on Compressive Sensing:



(this video and others can be found on the Compressive Sensing Hardware page [Update: It looks like the video was removed])

After having constructed such a system, one is bound to go through a lengthy calibration process of taking pictures of known elements and recording their response on the sensor, with the eventual expectation that with enough such calibration photos one would be able to build the Point Spread Function or the Camera Response Function. The MIT paper takes the right point of view that the PSF should also be sparse and tries to devise this PSF for a small part of the sensor (they use only a 32x32 section of the CMOS/CCD, see figure 7). One wonders how one could speed up this process now that we have faster reconstruction algorithms, as the MIT report came out in 2006. This issue of finding the (here random) mixing system is known in the literature as blind deconvolution. I did a search on this keyword in my e-mail box that contains the daily dump of my webcrawler and found the following papers that I may or may not have covered:

Non-Iterative Valid Blind Deconvolution of Sparsifiable Images using an Inverse Filter by Andrew Yagle. The abstract reads:

We propose a new non-iterative algorithm for the blind deconvolution problem of reconstructing both a sparsifiable image and a point-spread function (PSF) from their valid 2D convolution. No support constraint is needed for either the image or the PSF, nor is non-negativity of the image. The only requirements are that the PSF be modelled as having a finite support inverse or equalizing filter (IF), and that the product of the (known) numbers of nonzero pixels in this inverse filter and the sparsified image be less than the total number of image pixels. The algorithm requires only the solution of a large linear system of equations and a small rank-one matrix decomposition.

and also:

Let us note that this is not the same case studied by Yaron Rachlin and Dror Baron in The Secrecy of Compressive Sensing Measurements. In that case, the inputs AND the mixing matrix are unknowns.
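To see why sparsity helps on the calibration side as well, here is a toy sketch of mine (unrelated to the MIT or Yagle papers, with made-up dimensions): each row of the unknown mixing system is observed through random calibration images, and when those rows are sparse, far fewer calibration shots than unknowns suffice to identify them, using the same greedy recovery one would use for imaging.

    import numpy as np

    rng = np.random.default_rng(4)
    n, K, row_sparsity, n_rows = 400, 60, 5, 4       # pixels, calibration shots, nonzeros per row, sensor rows

    # Unknown mixing system with sparse rows (a sparse PSF, roughly speaking).
    A = np.zeros((n_rows, n))
    for j in range(n_rows):
        A[j, rng.choice(n, row_sparsity, replace=False)] = rng.standard_normal(row_sparsity)

    X = rng.standard_normal((n, K))                  # K random calibration images
    Y = A @ X                                        # sensor responses to the calibration set

    def omp(M, b, k):
        # Greedy recovery of a k-sparse vector v from b = M v.
        support, r = [], b.copy()
        for _ in range(k):
            support.append(int(np.argmax(np.abs(M.T @ r))))
            v, *_ = np.linalg.lstsq(M[:, support], b, rcond=None)
            r = b - M[:, support] @ v
        out = np.zeros(M.shape[1])
        out[support] = v
        return out

    # Each sensor row a_j satisfies Y[j, :] = X^T a_j, so it can be identified from only K = 60 shots.
    A_hat = np.vstack([omp(X.T, Y[j], row_sparsity) for j in range(n_rows)])
    print("calibration error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))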


I also found this job on the interwebs: a postdoc at the University of Edinburgh, Scotland. I am adding it to the Compressive Sensing Jobs page:

University of Edinburgh School of Engineering: Research Associate in "Compressed Sensing SAR"

Vacancy details

* Vacancy Reference: 3011316
* Department: School of Engineering
* Job Title: Research Associate in "Compressed Sensing SAR"
* Job Function: Academic
* Job Type: Full Time
* Live Date: 29-Jul-2009
* Expiry Date: 20-Aug-2009
* Salary Scale: £20,226 - £23,449
* Internal job: No. Anybody can apply for this position.
* Further Information: Further Information
* Conditions Of Employment: View Conditions of Employment

School of Engineering: Research Associate in "Compressed Sensing SAR"

This post is to investigate compressed sensing coding techniques for use in Synthetic Aperture Radar (SAR) remote sensing applications.

The purpose of "Compressed sensing SAR" is to provide SAR imaging with a superior compression system that has the capability of practically transmitting large quantities of SAR data to the receiving station over restricted bandwidth with minimal on board computation. The communication or data storage bottleneck is a key factor currently limiting coverage and/or resolution in SAR instruments. Our unique approach to addressing this problem is to utilize the new concept of "compressed sensing" to remove this bottleneck. Compressed sensing has the potential to provide a compression scheme that is bandwidth efficient while requiring minimal on-board processing. The project will explore the feasibility of such a technique and design and evaluate a proof of concept compression system. In addition, you will have the opportunity to work closely with other members of the sparse representations research group within the Institute for Digital Communications.

You will have a PhD (or have recently submitted) or equivalent in engineering, computer science or related discipline, and have a strong background in signal processing. The appointment will be for two years starting on 1st October 2009 or as soon as possible thereafter.

Fixed Term: 2 years

Tuesday, August 04, 2009

CS: Sparsity in Random Matrix Theory, Approximation of functions of few variables in high dimensions, YALL1, KSVD-Box v11


Going back from "applied" math to "pure" mathematics: On the role of sparsity in Compressed Sensing and Random Matrix Theory by Roman Vershynin. The abstract reads:
We discuss applications of some concepts of Compressed Sensing in the recent work on invertibility of random matrices due to Rudelson and the author. We sketch an argument leading to the optimal bound Ω(N^(−1/2)) on the median of the smallest singular value of an N × N matrix with random independent entries. We highlight the parts of the argument where sparsity ideas played a key role.
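A quick numerical sanity check of that statement (my own sketch, not from the paper): draw Gaussian N × N matrices and look at the median of the smallest singular value; multiplied by sqrt(N), it should stay bounded away from zero as N grows.

    import numpy as np

    rng = np.random.default_rng(5)
    for N in (50, 100, 200, 400):
        smin = [np.linalg.svd(rng.standard_normal((N, N)), compute_uv=False)[-1]
                for _ in range(100)]
        # sigma_min ~ N^(-1/2) means sqrt(N) * median(sigma_min) should be roughly constant.
        print(N, np.sqrt(N) * np.median(smin))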
Following up on some elements shown in the third Paris Lecture of Ron DeVore, here is the more substantive paper: Approximation of functions of few variables in high dimensions by Ron DeVore, Guergana Petrova, Przemyslaw Wojtaszczyk. The abstract reads:

Let f be a continuous function defined on Ω := [0, 1]^N which depends on only ℓ coordinate variables, f(x_1, . . . , x_N) = g(x_{i_1}, . . . , x_{i_ℓ}). We assume that we are given m and are allowed to ask for the values of f at m points in Ω. If g is in Lip1 and the coordinates i_1, . . . , i_ℓ are known to us, then by asking for the values of f at m = L^ℓ uniformly spaced points, we could recover f to the accuracy |g|_Lip1 L^(−1) in the norm of C(Ω). This paper studies whether we can obtain similar results when the coordinates i_1, . . . , i_ℓ are not known to us. A prototypical result of this paper is that by asking for C(ℓ) L^ℓ (log_2 N) adaptively chosen point values of f, we can recover f in the uniform norm to accuracy |g|_Lip1 L^(−1) when g ∈ Lip1. Similar results are proven for more general smoothness conditions on g. Results are also proven under the assumption that f can be approximated to some tolerance ε (which is not known) by functions of ℓ variables.
I note that the authors make a connection to the Junta problem as discussed by Dick Lipton recently and mentioned here.

Also, following up on yesterday's release of a dictionary learning technique (without the code yet), today we have two new upgrades of a dictionary learning tool and a reconstruction solver:

  • Ron Rubinstein has released a new version of the K-SVD dictionary learning technique. From the page:

    KSVD-Box v11: Implementation of the K-SVD and Approximate K-SVD dictionary training algorithms, and the K-SVD Denoising algorithm. Requires OMP-Box v9.

    Update (August 3 2009): KSVD-Box v11 has been released, and adds 1-D signal handling!


  • Yin Zhang just released YALL1 version beta-6 that can solve 2 more models for A*A' = I (with bug fixing). From the page:


    This Matlab code can currently be applied to the following eight (8) L1-minimization
    problems:

    • (BP) min ||Wx||w,1 s.t. Ax = b
    • (L1/L1) min ||Wx||w,1 + (1/ν)||Ax - b||1
    • (L1/L2) min ||Wx||w,1 + (1/2ρ)||Ax - b||2^2
    • (L1/L2con) min ||Wx||w,1 s.t. ||Ax - b||2 ≤ δ
    • (BP+) min ||x||w,1 s.t. Ax = b and x ≥ 0
    • (L1/L1+) min ||x||w,1 + (1/ν)||Ax - b||1 s.t. x ≥ 0
    • (L1/L2+) min ||x||w,1 + (1/2ρ)||Ax - b||2^2 s.t. x ≥ 0
    • (L1/L2con+) min ||x||w,1 s.t. ||Ax - b||2 ≤ δ, x ≥ 0
    where A is m by n with m less than n, and the solution x (or its representation Wx) is supposed to be (approximately) sparse. The data (A,b) can be real or complex, and the signal x can also be complex in the cases of no nonnegativity constraint. A unitary sparsifying basis W is allowed and the 1-norm for x (or Wx) can be weighted by a vector w ≥ 0. The capacity for solving models (L1/L2con) and (L1/L2con+) has been added to version beta-6 for A*A' = I.



  • Image credit: NASA, ESA, and H. Hammel (Space Science Institute, Boulder, Colo.), and the Jupiter Impact Team. If you recall the Another Space Odyssey entry, well, this picture is that of this impact as viewed by Hubble. Via the Bad Astronomy blog.

    Monday, August 03, 2009

    CS: Online Sparse Coding and Matrix Factorization, GraDes, TSW-CS





    Today, we have a potential game changer in dictionary learning, which, as we all know, is part of compressive sensing since it allows one to reconstruct sparse signals after they have been incoherently acquired. The article is Online Learning for Matrix Factorization and Sparse Coding by Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro and features a somewhat simple algorithm that seems to scale to very large datasets. The abstract of the paper reads:
    Sparse coding—that is, modelling data vectors as sparse linear combinations of basis elements—is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets.

    One of the authors, Julien Mairal, mentioned to me that an implementation should shortly be available. I have added the paper (and soon-to-follow code) to the dictionary section of the Big Picture in Compressive Sensing.
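    While waiting for their code, here is a deliberately simple batch sketch of the matrix factorization problem: alternate between sparse coding against the current dictionary and a least-squares dictionary update. This is classical alternating minimization of my own writing, with arbitrary parameters, not the authors' online stochastic-approximation algorithm:

        import numpy as np

        rng = np.random.default_rng(6)
        d, n_atoms, n_samples, k = 20, 40, 500, 3

        # Synthetic data: each signal combines k atoms of a hidden dictionary.
        D_true = rng.standard_normal((d, n_atoms))
        D_true /= np.linalg.norm(D_true, axis=0)
        codes = np.zeros((n_atoms, n_samples))
        for j in range(n_samples):
            codes[rng.choice(n_atoms, k, replace=False), j] = rng.standard_normal(k)
        Y = D_true @ codes

        D = rng.standard_normal((d, n_atoms))
        D /= np.linalg.norm(D, axis=0)
        lam = 0.05
        for _ in range(30):
            # Sparse coding step: a few ISTA iterations on all samples at once.
            L = np.linalg.norm(D, 2) ** 2
            A = np.zeros((n_atoms, n_samples))
            for _ in range(50):
                Z = A - D.T @ (D @ A - Y) / L
                A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
            # Dictionary update step: least-squares fit, then renormalize the atoms.
            D = Y @ A.T @ np.linalg.pinv(A @ A.T)
            D /= np.linalg.norm(D, axis=0) + 1e-12
        print("relative residual:", np.linalg.norm(Y - D @ A) / np.linalg.norm(Y))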

    Rahul Garg let me know that the GraDes algorithm is now available here. It was featured in Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property by Rahul Garg and Rohit Khandekar and mentioned here. As Laurent Jacques and the authors mentioned later, GraDes is pretty much an IHT algorithm. I have added it to the reconstruction section of the Big Picture in Compressive Sensing page.
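    For orientation, the basic iteration behind IHT-type methods such as GraDes is only a couple of lines: a gradient step on the data-fit term followed by hard thresholding to the k largest entries. The sketch below is generic and mine (step size and sparsity level chosen by hand, not the RIP-based choices made in GraDes):

        import numpy as np

        rng = np.random.default_rng(7)
        m, n, k = 100, 300, 8
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = Phi @ x_true

        x = np.zeros(n)
        mu = 1.0 / np.linalg.norm(Phi, 2) ** 2       # conservative step size
        for _ in range(300):
            g = x + mu * Phi.T @ (y - Phi @ x)       # gradient step on 0.5 * ||y - Phi x||_2^2
            x = np.zeros(n)
            idx = np.argsort(np.abs(g))[-k:]         # sparsification: keep the k largest entries
            x[idx] = g[idx]
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))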

    Finally, we have Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing by Lihan He and Lawrence Carin. The abstract reads:
    Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.


    The TSW-CS code is part of the Bayesian Compressive Sensing Code of Larry Carin's group at Duke.

    Credit: World Science Festival 2009: Bobby McFerrin Demonstrates the Power of the Pentatonic Scale from World Science Festival on Vimeo.

    Sunday, August 02, 2009

    CS: Short Stats

    So far there have been 511 entries on Compressive Sensing in this blog. If you are coming to the blog, the statistics on the right-hand side show that it has about 467 subscribers reading it through an RSS feed, while another 182 people receive every entry by e-mail. About 300 to 500 people come to the site on a daily basis thanks to search engines and other sources. In short, a little more than 1,000 people are interested in learning what is up on the subject of Compressive Sensing.

    After an initial spike of interest, about 50 people now come to the Compressive Sensing hardware page weekly. Every week, we also have on average:
    I am even surprised that some people e-mailing me know about the Big Picture in Compressive Sensing page but not the blog!
