# Edge Detection with a basic convolution filter visualized in Excel

…because one picture says more than 1000 words…

http://ntucsie2007.wikidot.com/convolution-filter-visualized

## What is Edge Detection?

Edge detection filters work essentially by looking for contrast in an image. This can be done in a number of different ways; convolution filters do it by applying a negative weight on one side of the kernel and a positive weight on the other. This has the net effect of trending toward zero where neighboring values are the same, and growing in magnitude where contrast exists.

Source:

http://www.codeproject.com/cs/media/edge_detection.asp
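The "negative weight on one side, positive on the other" idea can be sketched in a few lines of plain Python (an illustrative example, not code from HW4_3-5), sliding a 1-D kernel `[-1, 0, 1]` across a row of pixel values:

```python
def edge_response(row, kernel=(-1, 0, 1)):
    """Convolve a 1-D pixel row with a small edge kernel (valid region only)."""
    k = len(kernel)
    return [sum(row[i + j] * kernel[j] for j in range(k))
            for i in range(len(row) - k + 1)]

flat = [50, 50, 50, 50, 50]   # uniform region: no contrast
step = [10, 10, 10, 90, 90]   # sharp step: strong contrast

print(edge_response(flat))    # [0, 0, 0]  - identical neighbors cancel
print(edge_response(step))    # [0, 80, 80] - large values around the step
```

Where the row is flat, the negative and positive taps cancel exactly; where there is a step, the response jumps.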

Edge detection is the process of finding sharp contrasts in intensities in an image. This process significantly reduces the amount of data in the image, while preserving the most important structural features of that image. Canny Edge Detection is considered to be the ideal edge detection algorithm for images that are corrupted with white noise.

http://cnx.org/content/m13218/latest/

## Excellent Tutorial in Edge Detection

http://www.gamedev.net/reference/programming/features/imageproc/page2.asp

## Convolution Kernel Java Applet

http://micro.magnet.fsu.edu/primer/java/digitalimaging/processing/convolutionkernels/index.html

## Convolution Kernel Examples

The default kernel is a standard Sobel edge detector. This consists of two filters - one vertical, one horizontal - that are applied separately and then added together. For HW4_3-5 we implemented only one kernel (to make it easier).

http://www.websupergoo.com/helpie/source/2-effects/convolution.htm
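The two-filter Sobel scheme can be sketched as follows (plain Python; the kernels are the standard Sobel pair, and the combination shown uses the gradient magnitude rather than a plain sum - this is an illustrative sketch, not the HW4_3-5 code):

```python
import math

GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]     # responds to horizontal intensity changes
GY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]   # responds to vertical intensity changes

def apply_mask(img, mask, r, c):
    """Weighted sum of the 3x3 neighborhood centered at (r, c)."""
    return sum(img[r + i - 1][c + j - 1] * mask[i][j]
               for i in range(3) for j in range(3))

def sobel(img):
    """Gradient magnitude for interior pixels (border pixels are skipped)."""
    h, w = len(img), len(img[0])
    return [[math.hypot(apply_mask(img, GX, r, c), apply_mask(img, GY, r, c))
             for c in range(1, w - 1)]
            for r in range(1, h - 1)]

# a vertical edge: left half dark, right half bright
img = [[0, 0, 100, 100]] * 4
print(sobel(img))   # strong uniform response along the vertical edge
```

Note the loops run from 1 to h-1 and w-1: the mask center is placed over each interior pixel, which is exactly the border-skipping behavior discussed later in these notes.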

## Convolution Filter Boundary Processing

The approach we implemented for HW4_3-5 is a centered, zero boundary superposition:

An important issue that arises in the convolution process centers on the fact that the convolution kernel will extend beyond the borders of the image when it is applied to border pixels. One technique commonly utilized to remedy this problem, usually referred to as **centered, zero boundary superposition**, is simply to ignore the problematic pixels and to perform the convolution operation only on those pixels that are located at a sufficient distance from the borders. This method has the disadvantage of producing an output image that is smaller than the input image.

http://micro.magnet.fsu.edu/primer/java/digitalimaging/processing/convolutionkernels/index.html
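Why ignoring the border pixels shrinks the output can be shown with a quick size calculation (an assumed illustration, not from the cited page): a k x k kernel centered on a pixel needs k//2 valid neighbors on every side, so each dimension loses k - 1 pixels.

```python
def valid_output_shape(image_shape, kernel_size):
    """Output size when border pixels the kernel cannot cover are dropped."""
    h, w = image_shape
    return (h - kernel_size + 1, w - kernel_size + 1)

print(valid_output_shape((480, 640), 3))   # (478, 638)
print(valid_output_shape((480, 640), 5))   # (476, 636)
```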

## What is convolution and how to process boundary pixels

Convolution involves the multiplication of a group of pixels in the input image with an array of pixels in a convolution mask or convolution kernel. The output value produced in a spatial convolution operation is a weighted average of each input pixel and its neighboring pixels in the convolution kernel. This is a linear process because it involves the summation of weighted pixel brightness values and multiplication (or division) by a constant function of the values in the convolution mask.

http://micro.magnet.fsu.edu/primer/java/digitalimaging/processing/convolutionkernels/index.html
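A worked single-pixel example of that weighted sum (the pixel values and sharpening mask below are assumed, not taken from the cited page):

```python
# 3x3 neighborhood of the pixel being processed (center value 50)
neighborhood = [[10, 10, 10],
                [10, 50, 10],
                [10, 10, 10]]

# a common sharpening mask; its entries sum to 1
sharpen = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

# output pixel = sum of (pixel value * corresponding mask entry)
output = sum(neighborhood[i][j] * sharpen[i][j]
             for i in range(3) for j in range(3))
print(output)   # 5*50 - 4*10 = 210
```

The operation is linear: only multiplications by mask constants and a final summation, exactly as the excerpt above describes.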

### Boundary processing…

Boundary effects on the convolution are more difficult. We have two choices: (1) ignore pixels where the convolution filter goes beyond the edge of the image, or (2) do the best job you can with the pixels near the boundary. We choose the second. For pixels on the corners of the image, for example, we only have about 1/4 of the neighbors to use in the convolution that we have for pixels where the full filter can be used. The sum over those neighboring pixels should be normalized by the actual number of pixels used in the sum.

The method we use is: (1) for every pixel, use all possible pixels in the convolution, staying within the source image, but normalize as if we had used the entire filter; and (2) then make a second pass for the boundary pixels, adjusting the normalization upwards by the inverse of the fraction of the filter pixels that were actually used at each destination pixel. The first part gives values that are too small for convolutions near the boundary; the second pass increases these pixel values to their correct normalization, depending on exactly which row and column the pixel is in. Doing the normalization this way avoids overflow in the destination pixels. The result has no visible boundary pixel artifacts in the convolution for typical grayscale images.

http://www.leptonica.com/convolution.html
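That two-pass normalization can be sketched in plain Python for a simple 3x3 box (averaging) filter - an assumed illustration of the scheme described above, not Leptonica's actual code (which folds both passes into an efficient implementation):

```python
def box_filter_two_pass(img, k=3):
    """Average filter with the two-pass boundary normalization described above."""
    h, w = len(img), len(img[0])
    half, full = k // 2, k * k
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # pass 1: sum all in-bounds neighbors, but normalize by the
            # FULL filter size, as if every tap had been used
            total, used = 0, 0
            for i in range(-half, half + 1):
                for j in range(-half, half + 1):
                    rr, cc = r + i, c + j
                    if 0 <= rr < h and 0 <= cc < w:
                        total += img[rr][cc]
                        used += 1
            out[r][c] = total / full
            # pass 2 (boundary pixels only): scale up by the inverse of the
            # fraction of filter taps actually used
            if used < full:
                out[r][c] *= full / used
    return out

uniform = [[60] * 5 for _ in range(5)]
print(box_filter_two_pass(uniform))   # every pixel stays ~60.0, even corners
```

On a uniform image the corners use only 4 of the 9 taps, so pass 1 yields 240/9; pass 2 multiplies by 9/4, restoring 60 - no boundary artifacts.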

## Visualization of Sliding the Mask over an Image

http://www.pages.drexel.edu/~weg22/edge.html

## Sobel Operator (Convolution Mask)

**Important to notice**

The center of the mask is placed over the pixel you are manipulating in the image, and the I and J values are used to move the file pointer so you can multiply, for example, pixel (a22) by the corresponding mask value (m22). It is important to notice that pixels in the first and last rows, as well as the first and last columns, cannot be manipulated by a 3x3 mask. This is because when placing the center of the mask over a pixel in the first row (for example), part of the mask will fall outside the image boundaries.

http://www.pages.drexel.edu/~weg22/edge.html

## Edge Detection

http://zone.ni.com/devzone/cda/tut/p/id/2752

## Introduction to edge detection

http://en.wikipedia.org/wiki/Edge_detection

http://library.wolfram.com/examples/edgedetection/

## Edge Detector Comparison

http://marathon.csee.usf.edu/edge/edge_detection.html

## Canny Edge Detection Tutorials

http://www.pages.drexel.edu/~weg22/can_tut.html

http://www.pages.drexel.edu/~nk752/Research/cannyTut2.html

## Open Question

Which algorithm (Sobel, Canny, Laplace) did we use for HW4_3-5 (edge detection)?