Saturday, December 31, 2011

Thank You and Happy New Year

In particular, let me thank the following people who made these "nuits blanches" in 2011 more entertaining:

Yves Wiaux, Stephen Becker, Jong Chul Ye, Sergey Ten, Sylvain Gigan, Tom Kaye, Art Anderson, Justin Ziniel, Jerome Gilles, Son Hua, Bo Zhang, Mateusz Malinowski, Patrick Gill, Henrik Ohlsson, Jason T. Parker, Volkan Cevher, Philip Schniter, Jason Laska, Mark Davenport, Florent Krzakala, Lenka Zdeborova, Marc Mézard, François Sausset, Yifan Sun, Thong Do, Gael Varoquaux, Matthieu Puigt, Ming Yan, Laurent Duval, Peter Gerstoft, Kiryung Lee, Yoram Bresler, Aswin C. Sankaranarayanan, Daniel Reetz, Dror Baron, Olivier Grisel, Hervé Jégou, Danny Bickson, Joel Tropp, Piotr Indyk, Gongguo Tang, Silvio Ventres, Marco Duarte, Vicente Malave, Zeno Ganter, Tianyi David Zhou, Ulugbeck Kamilov, Mark Tygert, Jared Tanner, Bob Sturm, Henry Pfister, Andreas Tillman, Mike Wakin, Mark Davenport, Genevera Allen, Xiaobo Qu, Jake Hofman, Rebecca Willett, Martin Jaggi, Nematollah K. Batmanghelich, Ori Katz, Zai Yang, Ivan Papusha, Hyrum Anderson, Greg Charvat, David San Segundo, Tibault Reveyrand, Laurent Jacques, Eric Tramel, Daniel Reetz, Alain K., Dick Gordon, Cable Kurwitz, Emily Allstot, Remi Gribonval, Max Little, Angshul Majumdar, Dirk Lorenz, Julien Mairal, Jort Gemmeke, Krzysztof, Ilan, Andrew McGregor, Piotr Indyk, Laurent Daudet, Namrata Vaswani, Jim Fowler, Rishi Gupta, Thomas, Ramesh Raskar, Zhilin Zhang, Petros Boufounos, Thomas Arildsen, R. H. Sri Hari, Ravi Kiran B., Pierre Vandergheynst, Emmanuel Candes, Gabriel Peyre, Gilles Chardon, Sebastien Popoff, Geoffroy Lerosey, Borhan Sannadaji, Steven Vandeput, Maria Jafari, Stefano Marchesini, Alessandro Foi, Kun Qiu, Vladimir Kofman, Ori Shental, Emil Sidky, Varun AV, Tim Lin, Mohammadreza (Reza) Mahmudimanesh and Alejandro Weinstein



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Inception and Philosophy






We're the 1%

Thank you Stan.



Friday, December 30, 2011

Entangled-photon compressive ghost imaging


Here is the last paper of the year: Entangled-photon compressive ghost imaging by Petros Zerom, Kam Wai Clifford Chan, John C. Howell, and Robert W. Boyd. The abstract reads:
We have experimentally demonstrated high-resolution compressive ghost imaging at the single-photon level using entangled photons produced by a spontaneous parametric down-conversion source and using single-pixel detectors. For a given mean-squared error, the number of photons needed to reconstruct a two-dimensional image is found to be much smaller than that in quantum ghost imaging experiments employing a raster scan. This procedure not only shortens the data acquisition time, but also suggests a more economical use of photons for low-light-level and quantum image formation.

Since it is behind a paywall, the following presentations might provide some additional insight:










It's not a spinning disk either ...

Remember how, a year and a half ago, I mentioned that the 'thing' that enables one to render compressive measurements universal and timeless should not be called a light modulating device or a rotating collection of masks, but rather a Randomized Carousel?



Mad Men 'The Carousel' from Emilio on Vimeo.



Well, it looks like we have an instance of one: Spinning disk for compressive imaging by H. Shen, L. Gan, N. Newman, Y. Dong, C. Li, Y. Huang, and Y. C. Shen. The abstract reads:

We report the first, to the best of our knowledge, experimental implementation of a spinning-disk configuration for high-speed compressive image acquisition. A single rotating mask (i.e., the spinning disk) with random binary patterns was utilized to spatially modulate a collimated terahertz (THz) or IR beam. After propagating through the sample, the THz or IR beam was measured using a single detector, and THz and IR images were subsequently reconstructed using compressive sensing. We demonstrate that a 32-by-32 pixel image could be obtained from 160 to 240 measurements in both the IR and THz ranges. This spinning-disk configuration allows the use of an electric motor to rotate the spinning disk, thus enabling the experiment to be performed automatically and continuously. This, together with its compact design and computational efficiency, makes it promising for real-time imaging applications


While the paper is behind a paywall, here are the videos showing some results of reconstruction using:

  • TV-min nonlinear algorithm
  • TV-min
  • MMSE linear estimation (a toy sketch of this linear estimator is given below)


This will be added to the compressive sensing hardware page shortly.
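Since the post mentions MMSE linear estimation alongside the TV-min reconstructions, here is a quick toy MATLAB sketch of what such a linear estimate looks like in a single-detector setting. Caveat emptor: this is my own illustration, not the authors' code; the signal covariance R and the binary masks are synthetic stand-ins, and the estimator is just the textbook Wiener/MMSE formula xhat = R*Phi'*(Phi*R*Phi' + sigma^2*I)^(-1)*y.

% Toy linear MMSE (Wiener) reconstruction from single-detector compressive
% measurements y = Phi*x + noise. Purely illustrative.
N = 32*32;                      % number of "pixels"
M = 200;                        % number of measurements (cf. the 160 to 240 used in the paper)
sigma = 0.01;                   % noise standard deviation
%
% crude prior: exponentially decaying correlations between pixel indices
[ii,jj] = meshgrid(1:N,1:N);
R = 0.99.^abs(ii-jj);
%
% draw a signal from this prior and measure it with random binary masks
x   = chol(R + 1e-8*eye(N),'lower')*randn(N,1);
Phi = double(rand(M,N) > 0.5);
y   = Phi*x + sigma*randn(M,1);
%
% linear MMSE estimate: a single linear solve, hence its appeal for real-time use
xhat = R*Phi'*((Phi*R*Phi' + sigma^2*eye(M))\y);
disp(norm(x - xhat)/norm(x))    % relative reconstruction error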

Credit: AMC, Mad Men.

Thursday, December 29, 2011

Sparse- and Low-Rank Approximation Wiki

Stephen Becker let me know of this new wiki he set up: the Sparse- and Low-Rank Approximation Wiki. From the introduction:
"...Welcome to the sparse- and low-rank approximation wiki. This wiki has information on solvers and problems that arise in these fields (and subfields, such as compressed sensing). Everyone is welcome and encouraged to edit this wiki; it runs on the same software as wikipedia. Please use common-sense and standard etiquette when editing. The first step to editing is to create an account. For an idea of where to start editing, see pages that are wanted. If you have an implementation of an algorithm, please link to it! There are many excellent lists of related material on the web. This wiki is not trying to duplicate this information, but rather provide a less comprehensive but more detailed listing of available methods. The goal is that this website becomes a useful tool for practitioners searching for the ideal algorithm, and for researchers wanting to develop new methods..."
I have started adding a few resources; don't hesitate to participate. Thanks, Stephen, for the initiative.




Wednesday, December 28, 2011

Compressive MUSIC, Forward/Backward Compressive Subspace Fitting Implementations

Jong Chul Ye alerted me to this new resource:
".....Hi Igor,

....I would like to bring your attention to our website with compressive sensing joint sparse recovery software packages for multiple measurement vector (MMV) problems.
The current version of package includes three algorithms to address MMV problems:
  1. Compressive MUSIC (CS-MUSIC)
  2. Forward Compressive Subspace Fitting (forward CSF)
  3. Backward Compressive Subspace Fitting (backward CSF)
More specifically, Compressive MUSIC identifies part of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we showed that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal L0-bound with a finite number of snapshots. The theoretical analysis of Compressive MUSIC will appear in the January 2012 issue of IEEE Trans. on Information Theory.
While these types of hybrid algorithms are optimal for critically sampled cases, they have limitations in exploiting redundant sampling to improve noise robustness. To address this issue, we recently introduced a novel subspace fitting criterion that extends the generalized MUSIC criterion so that it exhibits near-optimal behaviors for various sampling conditions. In addition, the subspace fitting criterion leads to two alternative forms of compressive subspace fitting (CSF) algorithms with forward and backward support selection (forward CSF and backward CSF), which significantly improve the noise robustness.
Related manuscripts can be downloaded from the following links:
[1] J. M. Kim, O. K. Lee, and J. C. Ye. Compressive MUSIC: A Missing Link between Compressive Sensing and Array Signal Processing. IEEE Trans. Inf. Theory, Jan. 2012 [preprint is here]
[2] J. M. Kim, O. K. Lee, and J. C. Ye, 2011. Noise Robust Joint Sparse Recovery using Compressive Subspace Fitting.  arXiv:1112.3446v1 [cs.IT]
If you have any question, please feel free to contact me.
Happy New Year !
-Jong..."

Thanks, Jong. This webpage will be featured in both the Big Picture in Compressive Sensing and the Matrix Factorization Jungle page. The earlier mention "Compressive MUSIC with optimized partial support for joint sparse recovery by Jong Min Kim, Ok Kyun Lee, Jong Chul Ye [no code]" will now be replaced with a link to the page where the code resides.
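For readers unfamiliar with the MMV setting these packages address, here is a toy MATLAB sketch of the problem: several snapshots share a common sparse support and are observed through the same sensing matrix. The recovery step below uses a plain simultaneous OMP purely for illustration; it is not CS-MUSIC or CSF, for which you should use Jong's package above.

% Toy multiple measurement vector (MMV) problem: the rows of X are jointly
% sparse and we observe Y = A*X. Simultaneous OMP is used here as a simple
% stand-in for the CS-MUSIC / CSF algorithms in the package.
n = 128; m = 40; k = 8; L = 10;          % dimension, measurements, row sparsity, snapshots
A = randn(m,n);
A = A./repmat(sqrt(sum(A.^2,1)),m,1);    % unit-norm columns
p = randperm(n); supp = sort(p(1:k));
X = zeros(n,L); X(supp,:) = randn(k,L);
Y = A*X;
%
S = []; R = Y;
for it = 1:k
    [~,idx] = max(sum(abs(A'*R),2));     % column most correlated with all residuals
    S  = union(S,idx);
    Xs = A(:,S)\Y;                       % joint least squares on the selected rows
    R  = Y - A(:,S)*Xs;
end
isequal(S(:),supp(:))                    % 1 if the common support was recovered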


Tuesday, December 27, 2011

Best Papers of 2011 ?

I put these two questions on Quora, but you can answer them here in the comments section as well. Both outlets allow for anonymous commenting:

What papers have been the most interesting in Compressive Sensing in 2011?

Is there any Computer Science view on Compressive Sampling?

I answered that question on Quora. If somebody has a more intuitive answer for the computer science and engineering folks (not the TCS folks obviously), please give it on Quora or edit/upvote this one:

Let's use the concept of hash functions and see if that helps.

In this instance, think of one key that is run through different hash functions, yielding several hashes. The goal of a hash function is, among other things, to reduce the size of the information from the initial key: each hash is smaller, bit-wise, than the initial key.

The hash functions used in Compressive Sensing are linear functionals of the key. Random numbers are used to produce these linear functionals.
It is generally difficult, if not impossible, to get the original key back from one hash, but with Compressive Sensing it is possible to recover a key from the different hashes produced by several hash functions.
When you look at the details, you'll find that the number of hash functions needed to recover the initial key is small if the initial key has some regularity (it has a lot of the same character in it; for a vector representing that key, we say that it is sparse, i.e., it has a lot of zeros in it).
The number of hashes (each produced with a different hash function) needed to recover the initial key (and never have a collision with another key) depends on the "regularity" of the key, not on the large dimensional space in which the key resides.
This is why the field is called Compressive Sensing: if you were to sense keys through different simple hash functions, and the keys have some sort of regularity (sparsity), then the number of hash functions needed to recover these keys is proportional to the regularity of the key, not to the high dimensional space in which the key lives. The compression comes from the small number of hashes needed to uniquely determine a key. Compressive Sensing also provides the machinery to recover that key from those hashes.
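To make the analogy concrete, here is a toy MATLAB sketch (mine, not part of the Quora answer): the key is a sparse vector, each hash is a random linear functional of it, and a simple greedy decoder recovers the key from far fewer hashes than the ambient dimension.

% The "key" is a k-sparse vector of length n; each of the m hashes is a
% random linear functional of it. Recovery uses orthogonal matching pursuit,
% one of the simplest compressive sensing decoders.
n = 256; k = 8;                     % ambient dimension and regularity (sparsity) of the key
m = 96;                             % number of hashes: a small multiple of k*log(n), not of n
p = randperm(n);
x = zeros(n,1); x(p(1:k)) = randn(k,1);      % the sparse key
Phi = randn(m,n)/sqrt(m);                    % each row is one random hash function
y = Phi*x;                                   % the m hashes
%
xhat = zeros(n,1); S = []; r = y;
for it = 1:k
    [~,idx] = max(abs(Phi'*r));     % key coordinate most correlated with the residual
    S  = union(S,idx);
    xS = Phi(:,S)\y;                % least squares on the selected coordinates
    r  = y - Phi(:,S)*xS;
end
xhat(S) = xS;
disp(norm(x - xhat)/norm(x))        % small with high probability: 96 hashes determine the key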


Monday, December 26, 2011

Bilateral Random Projections

Reminded of it thanks to Sergey, here is the simple but powerful:


Bilateral Random Projections by Tianyi Zhou and Dacheng Tao. The abstract reads:
Low-rank structure have been profoundly studied in data mining and machine learning. In this paper, we show a dense matrix X’s low-rank approximation can be rapidly built from its left and right random projections Y_1 = XA_1 and Y_2 = X^T A_2, or bilateral random projection (BRP). We then show power scheme can further improve the precision. The deterministic, average and deviation bounds of the proposed method and its power scheme modification are proved theoretically. The effectiveness and the efficiency of BRP based low-rank approximation is empirically verified on both artificial and real datasets.
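For reference, the rank-r approximation described in the abstract (and implemented in the snippet below) is, as I understand it, simply

$$ L \;=\; Y_1 \big( A_2^{T} Y_1 \big)^{-1} Y_2^{T}, \qquad Y_1 = X A_1, \quad Y_2 = X^{T} A_2, $$

assuming the r x r matrix $A_2^{T} Y_1$ is invertible.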


Here is a small and simple implementation of it.


clear
% X is an m x n matrix that is (approximately) low rank,
% so that a rank-r bilateral random projection should capture it well
m  = 100;
n  = 20;
rk = 5;          % rank of the low-rank part of X
r  = 5;          % target rank of the approximation
X  = randn(m,rk)*randn(rk,n) + 1e-6*randn(m,n);
%
% random projection matrices
A1 = randn(n,r);
A2 = randn(m,r);
%
% bilateral random projections (the paper's power scheme, exponent q, is omitted here)
Y1 = X*A1;
A2 = Y1;         % reuse the left sketch as the right projection matrix
Y2 = X'*A2;
A1 = Y2;         % reuse the right sketch as the left projection matrix
Y1 = X*A1;       % recompute the left sketch with the updated A1
%
% rank-r approximation L = Y1 * inv(A2'*Y1) * Y2'
L = Y1*((A2'*Y1)\Y2');
%
% compare both svds: the leading singular values should agree
svd(X)
svd(L)
It will be included shortly in the Matrix Factorization Jungle, but it is also the main tool used in GoDec. The video of Tianyi Zhou talking about GoDec and Bilateral Random Projections can be found here.




Friday, December 23, 2011

The Trial That Wasn't

An anonymous reader wrote in the comments section of yesterday's entry that we should pay attention to the discussion and attendant comments taking place on Nature's site:
"...Three-dimensional technique on trial
Critics take a hard look at ankylography, a proposed method for revealing molecular structures from single pictures.
Eugenie Samuel Reich..."
Can we make this a more dramatic title please? Nature needs to sell more paper, and no, 10^15 seconds is not a femtosecond (what's 30 orders of magnitude among friends?).

Before we get into the listing of what could possibly be wrong or right, here is why I am a little uneasy about this. The authors make their solvers available for all to try. Let us imagine that the solver naturally enforces a sparsity assumption that the authors themselves have not recognized. Where does the burden of proof lie? With the original authors, who need to make a connection with what their solver is really doing, or with the critics? I am not sure I know, but it may well be unraveled by trying the solver out on many different cases. Let us check the substance of the comments. The initial critics make no mention whatsoever of a sparsity issue/constraint: you cannot have a negative result ("this algo does not work") if you have not done some minimal research on all known possibilities. Compressive sensing has been in existence since 2004, and not mentioning a sparsity-based assumption is becoming unacceptable. Here is a more insightful comment from none other than Stefano Marchesini, who makes the following point:

The problem in my view is that at short wavelengths the diffracted intensity drops quickly at high angles, while long wavelengths lack the penetration required to perform 3D imaging of most materials, limiting the applications of this technique.
We found that computational focussing was insufficient to provide the true 3D structure of a general object [1]:
"It should be noted that this computational focusing does not constitute 3D imaging, but is simply the propagation of a 2D coherent field. The optical transfer function (OTF) for this imaging system is the Ewald surface, and in this situation with coherent illumination the integrated intensity of the image does not change with defocus (a consequence of Parseval's theorem and the invariance of the diffraction intensities with defocus). That is, it is unlikely that numerical defocusing of a complicated object could give results that could be as easily interpreted as for the pyramid-membrane test object used here. This situation is unlike partially-coherent imaging in a microscope, where out-of-focus objects contribute less power to the image and some optical sectioning can be carried out." 1
Having said that, it may be possible to apply further constraints, for example if the object is sufficiently sparse [2], to extend the technique.

[1] H. N. Chapman et al. "High-resolution ab initio three-dimensional x-ray diffraction microscopy"

[2] Compressive Holography, David J. Brady, Kerkil Choi, Daniel L. Marks, Ryoichi Horisaki, and Sehoon Lim, Optics Express, Vol. 17, Issue 15, pp. 13040-13049 (2009)

I think Stefano makes one of the most salient points here: the distinction between investigating science with or without priors will become more pronounced in the future. Two days ago, we had an imaging technique that reconstructed an image based on some sort of inpainting:


"....
Sparsity-based super-resolution
In mathematical terms, the bandwidth extrapolation problem underlying sub-wavelength imaging corresponds to a non-invertible system of equations which has an infinite number of solutions, all producing the same (blurred) image carried by the propagating spatial frequencies.
That is, after measuring the far field, one can add any information in the evanescent part of the spectrum while still being consistent with the measured image. Of course, only one choice corresponds to the correct sub-wavelength information that was cut off by the diffraction limit. The crucial task is therefore to extract the one correct solution out of the infinite number of possibilities for bandwidth extension. This is where sparsity comes into play. Sparsity presents us with prior information that can be exploited to resolve the ambiguity resulting from our partial measurements, and identify the correct bandwidth extrapolation which will yield the correct recovery of the sub-wavelength image...". 
At some point, people will ask if this is legal, in about the same way we have seen the debate between Frequentists and Bayesians. I think we may have a very similar issue here (unrecognized as it may be).

Kevin Raines, one of the original authors, wrote a long note (I have highlighted what I thought was an important point):

I would like to thank the community for providing vigorous feedback on our new imaging technique `ankylography'. We sincerely appreciate the debate and remain confident that our work will withstand further analysis and replication.
I will be presenting comments upon this debate that may not reflect the views of my co-authors as I have not been a member of Prof. Miao's research group for some time and am currently a graduate student in the Department of Applied Physics at Stanford University.
Before offering my comments, I want to clarify that our ankylography paper reports the discovery of a new method for coherent diffraction imaging using single or very few two-dimensional, spherical diffraction patterns to obtain a three-dimensional image. To support this claim, we provided:

1. Numerical evidence. We presented a series of non-trivial reconstructions, including a reasonable noise model, of objects of sufficient complexity to be of scientific interest: a glass nano-particle and a polio virion.
2. Experimental evidence. The experiment we performed was simple, but it verified the experimental feasibility of the principle of ankylography and showed that the technique tolerated noise at experimental levels. Since then, several experimental demonstrations that represent more complex and practical situations of ankylography have been published.

3. Theoretical Analysis. Our theoretical analysis developed a mathematical intuition for the validity of the principle of ankylography. It is important to note that none of the authors are mathematicians and that we did not intend to present a complete mathematical theory of ankylography. We accept that there are scaling limitations to ankylography (as we identified in our paper). However, it is our goal and stated aim to work within the limitations to develop a robust imaging technique.

I will now comment briefly upon the two recent articles that aired some criticisms of ankylography.
1. Fundamental limits of ankylography due to dimensional deficiency by Wei. This paper analyses ankylography in terms of channel capacity. I recommend the vast body of theoretical and numerical work that has been done in compressed sensing for a counter viewpoint on information measures in imaging to those interested. In my view, channel capacity is not the best information measure for ankylography, and there certainly are other measures. Moreover, it is not just the scaling – in the limiting sense – that is important to ankylography: the details of the scaling are vital. That is, the most important case in ankylography is that of relatively small samples; we do not intend to reconstruct objects of size, say, 10^4 in each dimension.
2. Non-uniqueness and instability of ankylography by Wang et al. Our detailed numerical protocols were not followed in this work, and therefore it is not surprising that the authors obtained their disappointing results. Since the paper only appears to reflect the authors' interpretation/implementation of their version of ankylography, it does not diminish our work in any way.
I will now comment upon this news report by Eugenie Samuel Reich. Overall, it is an excellent article, and I commend Nature and the author for their good work. I have three suggestions:

1. It is worth noting that several further experimental demonstrations of ankylography have recently been published:
i. C.-C. Chen, H. Jiang, L. Rong, S. Salha, R. Xu, T. G. Mason and J. Miao. Three-dimensional imaging of a phase object from a single sample orientation using an optical laser. Phys. Rev. B 84, 224104 (2011).
ii. M. D. Seaberg, D. E. Adams, E. L. Townsend, D. A. Raymondson, W. F. Schlotter, Y. Liu, C. S. Menoni, L. Rong, C.-C. Chen, J. Miao, H. C. Kapteyn and M. M. Murnane. Ultrahigh 22 nm resolution coherent diffractive imaging using a desktop 13 nm high harmonic source. Opt. Express 19, 22470-22479 (2011).
2. The article states that "Miao has since made clear that the technique does not work on objects larger than 15 x 15 x 15 volume pixels, a size dependent on the resolution of the imaging technology". I wish to clarify that that limitation only applies to the simple demonstration code using a simplified algorithm. We do have more complex code, explained in detail in our paper, that will work on substantially larger sized objects. Also, we did not "train" our code on any particular type of structure, and implemented only very general constraints even in the more complex algorithms. So I would suggest replacing "the technique" with "the simple demonstration code" in the news article. By way of example, the source code for the ankylographic reconstruction of a simulated sodium silicate glass structure with 25 × 25 × 25 voxels has been posted at http://www.physics.ucla.edu/research/imaging/Ankylography. Even at these modest sizes that we obtain with relatively simple reconstruction codes, we have found that there are scientifically interesting samples that can be explored, which is why I am perplexed by, and disagree with, Marchesini's quote in the news report. Perhaps it was not clear that the limitation of 15^3 was only for the very simple matlab demonstration code? In any case, by my analysis, ankylography is quickly advancing upon its promise of becoming a useful imaging tool.

3. The article also states that "Despite the appeal of ankylography, many researchers were perplexed by what they see as a violation of the basic physical principle that you cannot get complete, 3D information from a single flat picture". Ankylography requires a spherical diffraction pattern. In practice, one can obtain a spherical diffraction pattern from a flat detector – if it is large enough – through a mathematical mapping that we describe in detail in our paper. In fact, in our original paper, we indicated that spherical detectors are to be preferred. Additionally, in the case of a flat detector, it would have to be much larger than the sample size. The paragraph continues "In particular, the picture will provide incomplete information about the interior of a subject, and critics argue that many possible 3D structures could generate the same image". In ankylography, the sample under view must be semi-transparent to the illuminating, coherent beam, and we discuss uniqueness in our paper. In my view, only in cases of very large sample sizes will uniqueness potentially become a problem, but even then only pathologically (i.e. with small probability). So, for a non-specialist, a useful thought experiment might be to consider imaging a transparent grain of rice with a large detector the size, say, of a queen mattress at a good distance. Mathematically, the detector would make a measurement that maps onto a sphere in a three-dimensional space, despite the fact that the measurement is made in two dimensions.
Finally, in response to Marchesini's comment above: I agree that the photon statistics may present a challenge for ankylography in certain applications. For example, the scattering from a single protein will likely be too weak to use ankylography directly (which is why we don't claim this application in our original paper). In our paper, we made precise calculations about the photon statistics and incorporated the noise due to finite photon number into our simulations, using the standard Poisson distribution. However, we wish to limit the debate here to the general feasibility of ankylography assuming that the diffraction pattern with a reasonable signal-to-noise ratio is obtainable.
Kevin, nice answer, but please get a webpage on the interwebs.

In all, I really don't see that much of an issue, as the code is available for all to try. There may be a hidden enforcement of sparsity in this solver and nobody can really figure it out... yet. At some point, instead of waiting seven years for some mathematical proof, I suggest any enterprising researcher look into how robust this code is with respect to the sparsity of the sample. If there is a phase transition, it'll be a paper and an easier way for the math folks to anchor this "empirical result" into some new research direction or some older concepts.

Credit Photo: NASA, Comet Lovejoy, ISS030-E-014350 (21 Dec. 2011) --- Comet Lovejoy is visible near Earth’s horizon in this nighttime image photographed by NASA astronaut Dan Burbank, Expedition 30 commander, onboard the International Space Station on Dec. 21, 2011.


Thursday, December 22, 2011

Everyone knows this is impossible: Is Ankylography an instance of Compressive Sensing?

It's a tale we have heard before: reviewers tell us in their infinite wisdom that "everyone knows this is impossible". You can always doubt some paper, but in this particular case, the authors gave access to their code. If you claim it is impossible without trying that code, then maybe you're just lazy. The ever outstanding ArXiv blog points us to the particular story of Ankylography. In short, the authors propose an algorithm that can perform a 3D reconstruction of an object based on the diffraction pattern observed on a spherical shell. Examples and the attendant code are available here. I learned very quickly about the intricacies of the system by reading this well done REU report, where we get to see that the algorithm fills a 3D volume based on data gathered from a CCD or a CMOS sensor, from which some linear curve interpolation has been performed. At some point, somebody is going to make the connection with sparsity and low rank systems, I am sure, wink wink...



The algorithm

All the images I have seen from these papers could be construed as sparse or low rank. What would be interesting is to provide this group with some sense as to how a phase transition like that of Donoho-Tanner might make their solvers unable to perform the reconstruction.
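For anyone tempted to run that experiment, here is a toy MATLAB sketch of how an empirical Donoho-Tanner style diagram is usually produced. It is mine, and it uses a plain OMP decoder on Gaussian matrices as a stand-in; probing the ankylography code would mean swapping in their solver for the inner recovery step and their measurement geometry for the Gaussian model.

% Empirical phase transition: probability of exact sparse recovery as a
% function of the undersampling ratio m/n and the sparsity ratio k/m.
n = 100; trials = 10;
deltas = 0.1:0.1:0.9;
rhos   = 0.1:0.1:0.9;
success = zeros(numel(rhos),numel(deltas));
for i = 1:numel(deltas)
    m = round(deltas(i)*n);
    for j = 1:numel(rhos)
        k = max(1,round(rhos(j)*m));
        for t = 1:trials
            A = randn(m,n);
            p = randperm(n); x = zeros(n,1); x(p(1:k)) = randn(k,1);
            y = A*x;
            % recovery step (OMP as a stand-in; replace with the solver under study)
            S = []; r = y; xhat = zeros(n,1);
            for it = 1:k
                [~,idx] = max(abs(A'*r)); S = union(S,idx);
                xs = A(:,S)\y; r = y - A(:,S)*xs;
            end
            xhat(S) = xs;
            success(j,i) = success(j,i) + (norm(x - xhat) < 1e-4*norm(x));
        end
    end
end
imagesc(deltas,rhos,success/trials); axis xy; colorbar
xlabel('m/n'); ylabel('k/m'); title('empirical probability of exact recovery')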


The newest paper is:  Potential and Challenge of Ankylography by Jianwei Miao, Chien-Chun Chen, Yu Mao, Leigh S. Martin, Henry C. Kapteyn. The abstract is here:
The concept of ankylography, which under certain circumstances enables 3D structure determination from a single view[1], had ignited a lively debate even before its publication[2,3]. Since then, a number of readers requested the ankylographic reconstruction codes from us. To facilitate a better understanding of ankylography, we posted the source codes of the ankylographic reconstruction on a public website and encouraged interested readers to download the codes and test the method[4]. Those who have tested our codes confirm that the principle of ankylography works. Furthermore, our mathematical analysis and numerical simulations suggest that, for a continuous object with array size of 14x14x14 voxels, its 3D structure can usually be reconstructed from the diffraction intensities sampled on a spherical shell of 1 voxel thick[4]. In some cases where the object does not have very dense structure, ankylography can be applied to reconstruct its 3D image with array size of 25x25x25 voxels[4]. What remains to be elucidated is how to extend ankylography to the reconstruction of larger objects, and what further theoretical, experimental and algorithm developments will be necessary to make ankylography a practical and useful imaging tool. Here we present our up-to-date understanding of the potential and challenge of ankylography. Further, we clarify some misconceptions on ankylography, and respond to technical comments raised by Wei[5] and Wang et al.[6] Finally, it is worthwhile to point out that the potential for recovering 3D information from the Fourier coefficients within a spherical shell may also find application in other fields.
The original paper is: Three-dimensional structure determination from a single view by Kevin S. Raines, Sara Salha, Richard L. Sandberg, Huaidong Jiang, Jose A. Rodriguez, Benjamin P. Fahimian, Henry C. Kapteyn, Jincheng Du, Jianwei Miao. The abstract reads:
The ability to determine the structure of matter in three dimensions has profoundly advanced our understanding of nature. Traditionally, the most widely used schemes for 3D structure determination of an object are implemented by acquiring multiple measurements over various sample orientations, as in the case of crystallography and tomography (1,2), or by scanning a series of thin sections through the sample, as in confocal microscopy (3). Here we present a 3D imaging modality, termed ankylography (derived from the Greek words ankylos meaning 'curved' and graphein meaning 'writing'), which enables complete 3D structure determination from a single exposure using a monochromatic incident beam. We demonstrate that when the diffraction pattern of a finite object is sampled at a sufficiently fine scale on the Ewald sphere, the 3D structure of the object is determined by the 2D spherical pattern. We confirm the theoretical analysis by performing 3D numerical reconstructions of a sodium silicate glass structure at 2 Angstrom resolution and a single poliovirus at 2 - 3 nm resolution from 2D spherical diffraction patterns alone. Using diffraction data from a soft X-ray laser, we demonstrate that ankylography is experimentally feasible by obtaining a 3D image of a test object from a single 2D diffraction pattern. This approach of obtaining complete 3D structure information from a single view is anticipated to find broad applications in the physical and life sciences. As X-ray free electron lasers (X-FEL) and other coherent X-ray sources are under rapid development worldwide, ankylography potentially opens a door to determining the 3D structure of a biological specimen in a single pulse and allowing for time-resolved 3D structure determination of disordered materials.

Wednesday, December 21, 2011

Sparsity-based single-shot sub-wavelength coherent diffractive imaging

Thanks to Sylvain Gigan for the heads-up on this one. We have covered it before, but there seem to be additional findings (simpler solvers, ...).


We present the experimental reconstruction of sub-wavelength features from the far-field intensity of sparse optical objects: sparsity-based sub-wavelength imaging combined with phase-retrieval. As examples, we demonstrate the recovery of random and ordered arrangements of 100 nm features with the resolution of 30 nm, with an illuminating wavelength of 532 nm. Our algorithmic technique relies on minimizing the number of degrees of freedom; it works in real-time, requires no scanning, and can be implemented in all existing microscopes - optical and non-optical.
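To see the idea in one dimension, here is a toy MATLAB sketch (mine, not the authors' algorithm): a few well-separated spikes are observed only through their low spatial frequencies, and a sparsity prior, enforced here with a simple greedy solver, recovers detail that zero-filling the missing band cannot.

% Toy 1D bandwidth extrapolation with a sparsity prior: recover a sparse
% spike train from its lowest Fourier coefficients only.
n = 256; cutoff = 20;
x = zeros(n,1); x([40 90 140 190 240]) = 1 + rand(5,1);   % well-separated spikes
F = fft(eye(n))/sqrt(n);
keep = [1:cutoff+1, n-cutoff+1:n];      % measured band: |frequency| <= cutoff
A = F(keep,:);
y = A*x;
%
x_zf = real(A'*y);                      % zero-filled inverse: the blurred, diffraction-limited view
%
S = []; r = y; x_sp = zeros(n,1);       % sparsity-based recovery (greedy solver as a stand-in)
for it = 1:8                            % a few more atoms than the number of spikes, to be safe
    [~,idx] = max(abs(A'*r)); S = union(S,idx);
    xs = A(:,S)\y; r = y - A(:,S)*xs;
end
x_sp(S) = real(xs);
[norm(x - x_zf)/norm(x), norm(x - x_sp)/norm(x)]   % the sparsity-based estimate is far more accurate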
Much could be gathered from the Supplemental Information section, but I cannot seem to locate it. However, I note the following from their paper:

.....It is in fact a universal scheme for recovering information beyond the cut-off of the response function of  a general system, relying only on a priori knowledge that the information is sparse in a known basis. As an exciting example, we have recently investigated the ability to utilize this method for recovering the actual shape of very short optical pulses measured by a slow detector [36]. Our preliminary theoretical and experimental results indicate, unequivocally, that our method offers an improvement by orders of magnitude beyond the most sophisticated deconvolution methods. 
Looks like FROG may have a competitor. If you recall, FROG tries to perform a certain kind of convolution/deconvolution so that, in Rick Trebino's words:

In order to measure an event in time, you must use a shorter one. But then, to measure the shorter event, you must use an even shorter one. And so on. So, now, how do you measure the shortest event ever created?

More can be found on this hardware-based convolution/deconvolution solution in the following entries:
Eventually, while the methods may look like they are competing, it would be interesting to find out how, theoretically, FROG provides additional constraints, thereby allowing recovery of less-than-sparse signals.


An earlier paper from that group (and behind a paywall):

We demonstrate experimentally pulse-shape reconstruction at resolution that significantly exceeds the photodiode inherent resolution limit. The knowledge that pulses are inherently sparse enables us to retrieve data that is otherwise hidden in the noise.


Pre-publication Peer Review and Lazy Science Reporting

So it looks like, again, a not-so-serious paper made it through "peer review" (Fukushima: Alarmist Claim? Obscure Medical Journal? Proceed With Caution). Apparently, the lesson from all this is that you now have to pay attention to the ranking of the journal where a paper appeared. Let us recall the Lancet study on MMR which, taken at face value, was responsible for hundreds of thousands of parents forgoing their kids' MMR vaccination, with disastrous consequences. Instead, I propose we should avoid "science reporting" altogether, or at least avoid reading lazy journalists:

How do you figure these out? A good way of spotting these offending reporters is their need to balance the subject out with other studies, thereby framing the works as coming from opposing sides:

....As it turns out, the authors, Joseph Mangano and Janette Sherman, published a version of this study in the political newsletter Counterpunch, where it was quickly criticized. The critics charged that the authors had cherry-picked federal data on infant deaths so they would spike around the time of the Fukushima disaster. Passions over nuclear safety further muddied the debate: both researchers and critics had activist baggage, with the researchers characterized as anti-nuke and the critics as pro-nuke.

Here is the thing that gets to me. It doesn't matter whether you debunk it; it's all part of a theater play where two sides are fighting some epic battle. It doesn't matter if the data is cherry-picked (let's not even talk about the quite common misunderstanding that goes with statistics and its attendant p-values). A party that shows the study to be wrong gets labeled as being on the opposing team. The investigating reporter cannot read the blog entry; it's too difficult, too many numbers at play. Let us continue reading that post:
As Scientific American's Michael Moyer writes: "The authors appeared to start from a conclusion—babies are dying because of Fukushima radiation—and work backwards, torturing the data to fit their claims"
And the only way to learn the truth is through the trustworthiness of another journalist. There is a nice herd mentality at play here: while this particular subject is a godsend for journalists because the issue has been well polarized over the years, what happens when one of the two sides does not exist?
So how did such a seemingly flawed study wind up in a peer-reviewed journal?

Part of the answer at the end of the post seems to be to check the journal's ranking. Wow. No, the only way to figure out if some science is right is to check the scientific argument, not to trust pre-publication peer review or a journal ranking. The reason pre-publication peer review seems to exist is solely to give a compass to lazy journalists and science reporters who do not know what science is: another good reason to make it go away. Post-publication peer review is the way to go (and yes, I am making the statement that if the physicist and media darling Brian Cox expressed the view that pre-publication peer review is the only way to do Science, then he does not understand Science).

Some good data about peer-review can be found here.



Tuesday, December 20, 2011

Two Post-Docs in Paris

Jerome Bobin let me know of these two postdoc positions:


Two post-doctoral positions (18 months), signal and image processing at CEA Saclay.
Two postdoctoral positions will be open at CEA Saclay in the area of signal and image processing to work in the data analysis group of Jean-Luc Starck. The candidates will work on sparse representations of multivalued (multi/hyper-spectral, etc ...) data for different applications such as (semi)-blind source separation, target detection/identification and compressed sensing.
Candidates should have a PhD in signal/image processing or applied mathematics. Previous experience in sparse representations (wavelets, curvelets, etc ...) and source separation is preferred but experience in related areas is also suitable.
The positions are funded for 18 months. The gross income salary will be 34,000€ annually (~2,260€), and will be adjusted according to experience and family situation. 
The usual funding support of any French institution (medical insurance, etc) is included. Applicants are requested to send a CV, a list of publications and a brief statement of research interests. This material together with two letters of reference should be sent by the closing date to jbobin@cea.fr
The closing date for receipt of applications is :
  • March 1st, 2012 for the first post-doctoral position
  • September, 1st 2012 for the second post-doctoral position





Monday, December 19, 2011

Compressive Sensing This Week

Today we have a few papers and presentations. First, I just found some slides by Gabriel Peyré on Compressed Sensing.

Next, you'd think the following paper is not about compressive sensing, and you'd be very wrong: 4%! With a Hadamard transform, I wonder if this number can go down?


Abstract. A compressive sensing method combined with decomposition of a matrix formed with image frames of a surveillance video into low rank and sparse matrices is proposed to segment the background and extract moving objects in a surveillance video. The video is acquired by compressive measurements, and the measurements are used to reconstruct the video by a low rank and sparse decomposition of matrix. The low rank component represents the background, and the sparse component is used to identify moving objects in the surveillance video. The decomposition is performed by an augmented Lagrangian alternating direction method. Experiments are carried out to demonstrate that moving objects can be reliably extracted with a small amount of measurements.
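For readers curious about the low-rank plus sparse step itself, here is a toy MATLAB sketch of a standard inexact augmented Lagrangian / alternating direction scheme for the decomposition M = L + S, applied to a rank-one background plus a small moving object. It operates on a fully sampled frame matrix rather than on compressive measurements, so it only illustrates the separation idea, not the algorithm of the paper.

% Toy low-rank + sparse decomposition (robust PCA style) via inexact ALM.
nf = 50; np = 400;                            % frames, pixels per frame
B  = repmat(rand(np,1),1,nf);                 % static background (rank one)
Fg = zeros(np,nf);
for t = 1:nf, Fg(20+2*t:29+2*t,t) = 1; end    % a small "object" moving across the frames
M  = B + Fg;
%
lambda = 1/sqrt(max(np,nf));
mu = 1.25/norm(M); rho = 1.5;
L = zeros(np,nf); S = zeros(np,nf); Y = zeros(np,nf);
for iter = 1:100
    [U,Sig,V] = svd(M - S + Y/mu,'econ');     % low-rank update: singular value thresholding
    L = U*max(Sig - 1/mu,0)*V';
    T = M - L + Y/mu;                         % sparse update: entrywise soft thresholding
    S = sign(T).*max(abs(T) - lambda/mu,0);
    Y = Y + mu*(M - L - S);                   % dual update
    mu = min(mu*rho,1e7);
    if norm(M - L - S,'fro') < 1e-7*norm(M,'fro'), break; end
end
fprintf('effective rank of L: %d, entries in S above 1e-3: %d (true object pixels: %d)\n', ...
        sum(svd(L) > 1e-3*norm(L)), nnz(abs(S) > 1e-3), nnz(Fg))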




Weighted Measurement Reuse for Compressive Video Sensing Reconstruction by Ivan Lee. The abstract reads:
Compressive sensing is an emerging technique which samples signals with sparsity below Nyquist sampling rate. For video compression, encoding individual frames independently using compressive sensing is inefficient since inter-frame similarities in temporal domain are not utilised. This paper investigates the performance of weighted compressive video sensing, which brings the function of predicted frames and bi-directional predicted frames in traditional video codecs to compressive video sensing. The weighted compressive video sensing applies only to the decoder to reconstruct video frames, without increasing the computational complexity for compressive video sensors. This paper examines weighted measurement reuse on different scenarios with previous frame only, subsequent frame only, and both previous and subsequent frames.


Ori Katz let me know of the following compressive sensing hardware: Compressive laser ranging by Wm. Randall Babbitt, Zeb W. Barber, and Christoffer Renner
Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

Noise Robust Joint Sparse Recovery using Compressive Subspace Fitting by Jong Min Kim, Ok Kyun Lee, Jong Chul Ye. The abstract reads:
We study a multiple measurement vector (MMV) problem where multiple signals share a common sparse support set and are sampled by a common sensing matrix. Although we can expect that joint sparsity can improve the recovery performance over a single measurement vector (SMV) problem, compressive sensing (CS) algorithms for MMV exhibit performance saturation as the number of multiple signals increases. Recently, to overcome these drawbacks of CS approaches, hybrid algorithms that optimally combine CS with sensor array signal processing using a generalized MUSIC criterion have been proposed. While these hybrid algorithms are optimal for critically sampled cases, they are not efficient in exploiting the redundant sampling to improve noise robustness. Hence, in this work, we introduce a novel subspace fitting criterion that extends the generalized MUSIC criterion so that it exhibits near-optimal behaviors for various sampling conditions. In addition, the subspace fitting criterion leads to two alternative forms of compressive subspace fitting (CSF) algorithms with forward and backward support selection, which significantly improve the noise robustness. Numerical simulations show that the proposed algorithms can nearly achieve the optimum.

This paper proposes a compressed sensing (CS) framework for the acquisition and reconstruction of frequency-sparse signals with chaotic dynamical systems. The sparse signal is acting as an excitation term of a discrete-time chaotic system and the compressed measurement is obtained by downsampling the system output. The reconstruction is realized through the estimation of the excitation coefficients with principle of impulsive chaos synchronization. The l1-norm regularized nonlinear least squares is used to find the estimation. The proposed framework is easily implementable and creates secure measurements. The Henon map is used as an example to illustrate the principle and the performance.


Prior image constrained compressed sensing: Implementation and performance evaluation by Pascal Thériault Lauzier, Jie Tang, and Guang-Hong Chen. The abstract reads:
Purpose: Prior image constrained compressed sensing (PICCS) is an image reconstruction framework which incorporates an often available prior image into the compressed sensing objective function. The images are reconstructed using an optimization procedure. In this paper, several alternative unconstrained minimization methods are used to implement PICCS. The purpose is to study and compare the performance of each implementation, as well as to evaluate the performance of the PICCS objective function with respect to image quality.
Methods: Six different minimization methods are investigated with respect to convergence speed and reconstruction accuracy. These minimization methods include the steepest descent (SD) method and the conjugate gradient (CG) method. These algorithms require a line search to be performed. Thus, for each minimization algorithm, two line searching algorithms are evaluated: a backtracking (BT) line search and a fast Newton-Raphson (NR) line search. The relative root mean square error is used to evaluate the reconstruction accuracy. The algorithm that offers the best convergence speed is used to study the performance of PICCS with respect to the prior image parameter α and the data consistency parameter λ. PICCS is studied in terms of reconstruction accuracy, low-contrast spatial resolution, and noise characteristics. A numerical phantom was simulated and an animal model was scanned using a multirow detector computed tomography (CT) scanner to yield the projection datasets used in this study.
Results: For λ within a broad range, the CG method with Fletcher-Reeves formula and NR line search offers the fastest convergence for an equal level of reconstruction accuracy. Using this minimization method, the reconstruction accuracy of PICCS was studied with respect to variations in α and λ. When the number of view angles is varied between 107, 80, 64, 40, 20, and 16, the relative root mean square error reaches a minimum value for α ≈ 0.5. For values of α near the optimal value, the spatial resolution of the reconstructed image remains relatively constant and the noise texture is very similar to that of the prior image, which was reconstructed using the filtered backprojection (FBP) algorithm.
Conclusions: Regarding the performance of the minimization methods, the nonlinear CG method with NR line search yields the best convergence speed. Regarding the performance of the PICCS image reconstruction, three main conclusions can be reached. (1) The performance of PICCS is optimal when the weighting parameter of the prior image parameter is selected to be near α = 0.5. (2) The spatial resolution measured for static objects in images reconstructed using PICCS from undersampled datasets is not degraded with respect to the fully-sampled reconstruction for α near its optimal value. (3) The noise texture of PICCS reconstructions is similar to that of the prior image, which was reconstructed using the conventional FBP method.
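For reference, the unconstrained PICCS objective the abstract alludes to is commonly written in the following form (my paraphrase of the usual formulation; the paper's exact notation may differ), where $x_P$ is the prior image, $\Psi_1$ and $\Psi_2$ are sparsifying transforms (typically spatial gradients), $A$ is the forward projection operator and $y$ the measured projection data:

$$ \min_{x} \;\; \alpha\, \| \Psi_1 (x - x_P) \|_1 \;+\; (1-\alpha)\, \| \Psi_2\, x \|_1 \;+\; \frac{\lambda}{2}\, \| A x - y \|_2^2 $$

At $\alpha \approx 0.5$, the prior-image term and the conventional sparsity term are weighted about equally, which is the regime the abstract reports as optimal.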

Laurent let me know that just before MIA2012, Justin Romberg will give a course on compressive sensing in Lyon, the capital city of gastronomy:


Lecturer : Justin Romberg (Georgia Institute of Technology, Atlanta)
Outline of the school :“The dogma of signal processing maintains that a signal must be sampled at a rate at least twice its highest frequency in order to be represented without error. However, in practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of valuable sensing resources. Over the past few years, a new theory of “compressive sensing” has begun to emerge, in which the signal is sampled (and simultaneously compressed) at a greatly reduced rate.
Compressive sensing is also referred to in the literature by the terms: compressed sensing, compressive sampling, and sketching/heavy-hitters.” [by courtesy of: http://dsp.rice.edu/cs]
  • Basis decompositions and frames (3-4 hours)
      • fundamentals of basis and frame decompositions
      • the discrete cosine transform and applications to image/video compression
      • the lapped orthogonal transform
      • wavelets
      • thresholding for noise reduction
  • Sparsest decomposition from a dictionary (3-4 hours)
      • omp and basis pursuit for selecting atoms
      • uncertainty principles and sparsest decomposition
      • the “spikes+sines” dictionary
      • general unions of orthobases
  • Introduction to compressive sampling and applications (2 hours)
  • Recovering sparse vectors from linear measurements (6 hours / 1 day)
      • review of classical least-squares theory: the svd, pseudo-inverse, stability analysis, regularization
      • sparse recovery conditions: l1 duality, sparsest decomposition revisited (with random support)
      • the restricted isometry property and sparse recovery: l1 for perfect recovery from noise-free measurements, l1 stability, l2 stability
  • Random matrices are restricted isometries (2 hours)
  • Optimization (6 hours / 1 day)
      • conjugate gradients
      • log-barrier methods
      • first-order l1 solvers
      • greedy algorithms and iterative thresholding
      • recursive least-squares
      • the Kalman filter
      • dynamic l1 updating
  • Low-rank recovery (2 hours)
Dates: 9-13/01/2012
Local contact: Paulo Goncalves



I forgot to mention the following presentation last time: here is Classification With Invariant Scattering by Joan Bruna, Stéphane Mallat, Laurent Sifre, Irène Waldspurger. At the same meeting, there was also:







