Computational Information Geometry Wonderland

Information geometry and computation with applications to machine learning and computer vision

Statistical Region Merging (SRM), Consensus Region Merging (CRM)

I have been interested in image segmentation for over a decade, and from time to time, I extend our initial technique, called Statistical Region Merging (SRM, PAMI 2004).

Running time is usually a constraint, and many approaches have been designed with this in mind (the usual suspects being mean shift and normalized cuts).


I have been concerned with the following problem: can we design a segmentation algorithm that always (smoothly) improves with time?

I will present a first attempt (together with an initial theoretical analysis) at the 2nd IAPR Asian Conference on Pattern Recognition (ACPR 2013).

Here is the poster, with the paper.

It is fast and delivers a soft-contour segmentation (whereas SRM delivers a hard-contour segmentation).

[Figure: CRM segmentation examples]


Total Jensen divergences, total Bregman divergences

Here are the slides corresponding to the recently introduced total Jensen divergences:
arXiv 1309.7109v1.
The paper gives a principled geometric technique for building conformal divergences.

Higher-order Chi distances for affine exponential families

Here are the slides for the paper On the Chi square and higher-order Chi distances for approximating f-divergences

Conformal divergences, and total Jensen divergences


Total Jensen divergence induced by a convex functional F: instead of taking the ordinate (vertical) distance, we orthogonally project the point ((pq)_\alpha, F((pq)_\alpha)), where (pq)_\alpha denotes the convex combination of p and q with skew factor \alpha, onto the line passing through (p, F(p)) and (q, F(q)). This geometrically designed divergence is invariant under rotations of the axes and gives rise to a conformal divergence:

{\mathrm{tJ}}_\alpha(p:q) = \rho_J(p,q)\, J_\alpha(p:q), where
\rho_J(p,q)=\sqrt{\frac{1}{1+\frac{\Delta_F^2}{\langle \Delta,\Delta \rangle}}}
is symmetric and independent of the skew factor \alpha, with \Delta_F=F(q)-F(p) and \Delta=q-p.
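
Below is a minimal Python sketch evaluating tJ_\alpha directly from the formula above. The skew Jensen divergence convention used here, J_\alpha(p:q) = \alpha F(p) + (1-\alpha) F(q) - F(\alpha p + (1-\alpha) q), is an assumption of this sketch (the paper may interpolate the other way), and the quadratic generator is a hypothetical choice for illustration:

```python
# Sketch: total skew Jensen divergence tJ_alpha(p:q) = rho_J(p, q) * J_alpha(p:q)
# for a convex generator F on R^d, following the formula above.
# Assumed convention: J_alpha(p:q) = alpha F(p) + (1 - alpha) F(q) - F(alpha p + (1 - alpha) q).
import numpy as np

def total_jensen(F, p, q, alpha=0.5):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    # Ordinary skew Jensen divergence: vertical gap at the interpolated point
    J = alpha * F(p) + (1 - alpha) * F(q) - F(alpha * p + (1 - alpha) * q)
    # Conformal factor rho_J: symmetric in (p, q), independent of alpha
    delta_F = F(q) - F(p)
    delta = q - p
    rho = np.sqrt(1.0 / (1.0 + delta_F ** 2 / np.dot(delta, delta)))
    return rho * J

# Example with a (hypothetical) quadratic generator F(x) = <x, x>
F = lambda x: float(np.dot(x, x))
print(total_jensen(F, [1.0, 0.0], [0.0, 2.0], alpha=0.3))
```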

Here is the technical report:
http://arxiv.org/pdf/1309.7109v1.pdf

Approximating f-divergences (using Chi type divergences)

When an exponential family has an affine natural parameter space, we can calculate the chi-square distance, and more generally the higher-order chi distances, in closed form. We can then cascade this formula with a Taylor expansion of the f-generator to get a numerical approximation scheme for f-divergences.


The details are reported in:

On the Chi square and higher-order Chi distances for approximating f-divergences

http://arxiv.org/abs/1309.3029
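
To make this concrete, here is a minimal Python sketch on the Poisson family, whose natural parameter space is the whole real line and hence affine. The closed-form expression \chi^2(p_{\theta_1}:p_{\theta_2}) = e^{F(2\theta_2-\theta_1)+F(\theta_1)-2F(\theta_2)} - 1 and the chi-square convention used below are assumptions of this sketch and should be checked against the paper:

```python
# Sketch: closed-form Pearson chi-square distance on the Poisson family,
# whose natural parameter space R is affine (theta = log(lambda), F(theta) = exp(theta)).
# Assumed convention: chi2(p:q) = sum_k (q(k) - p(k))^2 / p(k);
# assumed closed form: exp(F(2*theta2 - theta1) + F(theta1) - 2*F(theta2)) - 1.
import numpy as np
from scipy.stats import poisson

def chi2_poisson_closed_form(lam1, lam2):
    t1, t2 = np.log(lam1), np.log(lam2)
    F = np.exp  # Poisson log-normalizer
    return np.exp(F(2 * t2 - t1) + F(t1) - 2 * F(t2)) - 1

def chi2_poisson_numerical(lam1, lam2, kmax=100):
    k = np.arange(kmax)
    p, q = poisson.pmf(k, lam1), poisson.pmf(k, lam2)
    return np.sum((q - p) ** 2 / p)

print(chi2_poisson_closed_form(3.0, 5.0))  # closed form
print(chi2_poisson_numerical(3.0, 5.0))    # numerical check (should agree)
```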

A first post on the new system

Hi there! This blog is a continuation of the Computational Information Geometry Wonderland blog.

I no longer need to update the software myself, and I can easily use \LaTeX too!

Let us see… The Chernoff statistical distance is defined by:

\displaystyle C(P_1, P_2) = -\log \min_{\alpha\in(0,1)} \int_{x\in\mathcal{X}} p_1^{\alpha}(x)\, p_2^{1-\alpha}(x)\, \mathrm{d}\nu(x) \geq 0.

See Frank Nielsen: An Information-Geometric Characterization of Chernoff Information. IEEE Signal Process. Lett. 20(3): 269-272 (2013)
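
As a quick numerical illustration (a brute-force sketch, not the closed-form geometric characterization derived in the paper), here is Python code that estimates the Chernoff information between two univariate Gaussians by minimizing the skewed Bhattacharyya coefficient over \alpha; the Gaussian parameters are hypothetical:

```python
# Brute-force sketch: Chernoff information between two univariate Gaussians.
# Assumption: generic densities handled by numerical integration; the paper
# instead characterizes the optimal alpha geometrically for exponential families.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def chernoff_information(mu1, sigma1, mu2, sigma2):
    # Skewed Bhattacharyya coefficient c_alpha = int p1^alpha(x) p2^(1-alpha)(x) dx
    def coeff(alpha):
        integrand = lambda x: (norm.pdf(x, mu1, sigma1) ** alpha
                               * norm.pdf(x, mu2, sigma2) ** (1 - alpha))
        return quad(integrand, -np.inf, np.inf)[0]

    # C(P1, P2) = -log min_{alpha in (0,1)} c_alpha
    res = minimize_scalar(coeff, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return -np.log(res.fun), res.x

C, alpha_star = chernoff_information(0.0, 1.0, 2.0, 1.5)
print(f"Chernoff information: {C:.4f} (optimal alpha: {alpha_star:.4f})")
```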

Please update your links!