Saturday, January 31, 2009

The Incidence Matrix: A

The incidence matrix will be denoted as 'A' throughout the rest of this project. The importance of this matrix is that it is the combinatorial gradient operator and its transpose is the combinatorial divergence operator. These operators will be defined in detail in a future post.
Now, the incidence matrix is defined as:

\[
A_{e_{ij} v_k} =
\begin{cases}
+1 & \text{if } i = k \\
-1 & \text{if } j = k \\
0 & \text{otherwise}
\end{cases}
\]

where eij is the edge adjacent to nodes i and j, and vk is node k.

So, by defining the columns as eij and the rows as vk, we end up with a matrix that includes a diagonal of 1s, since for every k = i the value is +1. From there, for every node j adjacent to k = i, the value in the matrix is -1. The orientation for the edges here is assigned by always taking i = k for node k, so the adjacent node is just j.
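To make the orientation rule concrete, here is a minimal NumPy sketch. The 4-node graph and its edge list are made up purely for illustration (it is not the graph used on this blog), and it follows the rows-as-nodes, columns-as-edges layout described above.

```python
import numpy as np

# Made-up 4-node example graph; each edge (i, j) is oriented from i to j,
# matching the rule above: +1 at node i and -1 at node j.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n_nodes = 4

# Rows indexed by the nodes vk, columns by the edges eij.
A = np.zeros((n_nodes, len(edges)), dtype=int)
for col, (i, j) in enumerate(edges):
    A[i, col] = +1
    A[j, col] = -1

print(A)
# As a side note, multiplying this matrix by its transpose gives back the
# (unweighted) Laplacian matrix from the earlier posts.
print(A @ A.T)
```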

To demonstrate, I have made the incidence matrix for the graph we have been using throughout the blog.
The incidence matrix is always symmetrical.

What I don't understand though is what new information this provides over the adjacency matrix. I realize that it is represented differently by having a diagonal row of 1s, but don't both matrices just show the adjacent nodes of every node?

Thursday, January 29, 2009

Manipulation to Laplacian Matrix

The previous Laplacian matrix assumed all the edges were the same weight, hence the matrix placed a 1 (or -1) at every edge. To accommodate the edge weights, we change the definition of the matrix as follows:

\[
L_{ij} =
\begin{cases}
d_i & \text{if } i = j \\
-w_{ij} & \text{if } v_i \text{ and } v_j \text{ are adjacent} \\
0 & \text{otherwise}
\end{cases}
\]

where di is the degree of vertex i (with weighted edges this becomes the sum of the weights of the edges incident to vi) and wij is the weight of the edge adjacent to i and j.
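As a quick sanity check on this definition, here is a small NumPy sketch. The graph and its weights are made up for illustration; they are not the graph used on this blog.

```python
import numpy as np

# Made-up weighted graph given as (i, j, wij) triples.
weighted_edges = [(0, 1, 2.0), (0, 2, 0.5), (1, 2, 1.0), (2, 3, 3.0)]
n_nodes = 4

L = np.zeros((n_nodes, n_nodes))
for i, j, w in weighted_edges:
    L[i, j] = -w      # -wij for adjacent vertices
    L[j, i] = -w      # the matrix stays symmetrical
    L[i, i] += w      # di accumulates the weights of the edges incident to vi
    L[j, j] += w

print(L)  # every row sums to zero, as expected of a Laplacian
```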

Considering the graph we've been using throughout this blog, the Laplacian matrix using the new definition is:



Thursday, January 22, 2009

The Laplacian Matrix

The Laplacian matrix is significant because the Dirichlet problem uses it. More information will be posted once I understand why it is needed. This is just a step towards the complete post explaining the Dirichlet problem.

So, the Laplacian matrix is a representation of a graph. It can represent both directed and undirected graphs, but directed graphs are outside the scope of this project, so they will not be discussed. Mathematically, the Laplacian matrix is the difference of two matrices: the degree matrix and the adjacency matrix.

The degree matrix is a diagonal nxn matrix that holds information about the degree of each node (vertex) in the graph. The prerequisite for the implementation of this matrix is that the graph must be labelled. Taking the node with label i, we write its degree in location aii of the degree matrix.

[The degree of a vertex is the number of edges adjacent to it.]

The following image (taken from Wikipedia) demonstrates a degree matrix.


Now, the adjacency matrix is a representation of all the adjacent vertices of a vertex. It is also an nxn matrix. Element aij is 1 if vertices i and j are adjacent, otherwise it is 0. Therefore, an adjacency matrix must always be symmetrical. The following image (also from Wikipedia) shows a graph and its respective adjacency matrix.


So, in order to complete the Laplacian matrix, we subtract the adjacency matrix from the degree matrix. The following image (again, Wikipedia) shows the Laplacian matrix for the same graph used in the above 2 images.


This matrix can also be obtained directly using the following definition:

\[
L_{ij} =
\begin{cases}
\deg(v_i) & \text{if } i = j \\
-1 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j \\
0 & \text{otherwise}
\end{cases}
\]
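To tie the two constructions together, here is a small NumPy sketch for a made-up 4-node graph (not the Wikipedia example) that builds the degree and adjacency matrices and subtracts them; the result matches the element-wise definition above.

```python
import numpy as np

# Made-up undirected graph given as an adjacency matrix:
# aij = 1 if vertices i and j are adjacent, 0 otherwise.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Degree matrix: diagonal nxn matrix with the degree of each vertex.
D = np.diag(A.sum(axis=1))

# Laplacian matrix: degree matrix minus adjacency matrix. This reproduces the
# direct definition (degrees on the diagonal, -1 for adjacent pairs, 0 elsewhere).
L = D - A
print(L)
```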


The Dirichlet Problem

The Dirichlet problem is still very confusing, so I will slowly be building upon it until I eventually reach a solid understanding and write a complete, thorough post on it.

Random Walker in a nutshell

Random Walker is an image segmentation algorithm based on the statistical process of a random walk, which gives the likelihood of a walker at a certain point taking each possible next step. Random Walker then assigns each pixel to the label of the seed it is most likely to reach, i.e. the one with the highest probability.

To understand this more easily, we can make the analogy of an electric circuit. Each pixel in the image would be represented as a node in the circuit. Each node is connected to its neighbouring nodes by a "resistor". These resistor values can differ depending on what the segmentation determinant is (intensity, colour, brightness, etc.). Of course, an equation is derived for each of these determinants.
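I have not worked through the weighting equations yet, but one common choice in the literature (listed here only as an example, not necessarily the one this project will end up using) turns the intensity difference between two neighbouring pixels into a conductance:

\[
w_{ij} = \exp\!\left( -\beta \, (g_i - g_j)^2 \right)
\]

where gi and gj are the intensities of the two pixels and beta is a free parameter. A large intensity difference then gives a small weight, i.e. a large "resistance" between the two pixels.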

Once the resistor values are in place, we need to take each seed (represented by a voltage source), one at a time, and figure out the potential between the voltage source and the node, going through all the resistors in between. Analogously, we would be finding which seed each pixel belongs to based on the numeric values between the pixel and each seed, and assigning the pixel to the seed which produces the greatest value (potential).

To find these values, we use the combinatorial Dirichlet problem (as far as I understand). The boundary conditions for this problem would be the location of the pixel being examined and the location of the seed being examined.
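Based on my current understanding (which may well change), the potentials come from fixing the seed nodes to known values and solving a linear system built from the Laplacian for all the remaining nodes at once. Here is a rough NumPy sketch with a made-up 4-node graph and made-up seeds; the variable names and example values are mine, not taken from any reference.

```python
import numpy as np

# Made-up weighted graph: the weights play the role of conductances
# (1 / resistance) between neighbouring pixels.
weighted_edges = [(0, 1, 2.0), (0, 2, 0.5), (1, 2, 1.0), (2, 3, 3.0)]
n = 4

# Weighted Laplacian, built the same way as in the earlier post.
L = np.zeros((n, n))
for i, j, w in weighted_edges:
    L[i, j] -= w
    L[j, i] -= w
    L[i, i] += w
    L[j, j] += w

# Seeds (boundary conditions): node 0 is held at potential 1, node 3 at 0.
seeds = {0: 1.0, 3: 0.0}
marked = list(seeds)
free = [k for k in range(n) if k not in seeds]

# Solve L_UU * x_U = -L_UM * x_M for the unseeded (free) nodes,
# where M are the marked (seed) nodes and U the rest.
x_M = np.array([seeds[k] for k in marked])
L_UU = L[np.ix_(free, free)]
L_UM = L[np.ix_(free, marked)]
x_U = np.linalg.solve(L_UU, -L_UM @ x_M)

print(x_U)  # potentials at the unseeded nodes, all between 0 and 1
```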