For Fall 2024, I am taking an image processing class. One assignment explores filters and frequencies.

# Part 1

This is the source image:

## Part 1.1

I used finite difference kernels to detect edges naively:

### Gradient magnitude

The gradient magnitude is computed by taking the gradient (in our discrete case, approximating the x and y partial derivatives with the finite difference filters) and then taking the norm via the Pythagorean theorem. It measures how strongly the image changes at each pixel.
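As a minimal sketch of what this looks like in code (assuming a grayscale image as a 2D float array; the threshold value is an illustrative guess, not the one I tuned):

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_magnitude(img):
    """Approximate the gradient with finite differences, then take its norm."""
    dx = np.array([[1.0, -1.0]])    # finite difference in x
    dy = np.array([[1.0], [-1.0]])  # finite difference in y
    gx = convolve2d(img, dx, mode="same", boundary="symm")
    gy = convolve2d(img, dy, mode="same", boundary="symm")
    return np.sqrt(gx**2 + gy**2)   # Pythagorean norm per pixel

def edge_map(img, threshold=0.1):
    """Binarize the gradient magnitude to get an edge image."""
    return gradient_magnitude(img) > threshold
```

The threshold trades off noise against missed edges, which is exactly why the naive version looks grainy.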

## Part 1.2

We now blur the image before applying the difference filters; smoothing suppresses high-frequency noise, so the edges come out cleaner.

First, we blur the image:

Then convolve with the finite difference filters:

Combined:

The difference from before is that the fine details get filtered out, so the edge detection gives us a much smoother result that captures the larger-scale changes in the image.
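A sketch of the blurred pipeline, with the Gaussian kernel built as an outer product of 1D Gaussians (the size and sigma here are illustrative assumptions, not my actual settings):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    g1d = np.exp(-ax**2 / (2 * sigma**2))
    g2d = np.outer(g1d, g1d)          # separable 2D Gaussian
    return g2d / g2d.sum()            # normalize so brightness is preserved

def smoothed_gradient_magnitude(img, size=9, sigma=1.5):
    blurred = convolve2d(img, gaussian_kernel(size, sigma),
                         mode="same", boundary="symm")
    dx = np.array([[1.0, -1.0]])
    dy = np.array([[1.0], [-1.0]])
    gx = convolve2d(blurred, dx, mode="same", boundary="symm")
    gy = convolve2d(blurred, dy, mode="same", boundary="symm")
    return np.sqrt(gx**2 + gy**2)
```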

### Associativity of Convolutions

We can also convolve the finite difference filter with the Gaussian blur first, and then convolve the result with the image, to get the same picture:

Combined:

You can see the result is the same as above because convolution is associative, so we are free to combine the two small filters first! (It may look slightly different because I used slightly different thresholds for them.)
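This equivalence is easy to check numerically: folding the Gaussian into the difference filter gives a derivative-of-Gaussian (DoG) filter, and one convolution with it matches the two-step path. A sketch with placeholder parameters:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g2d = np.outer(g, g)
    return g2d / g2d.sum()

dx = np.array([[1.0, -1.0]])
G = gaussian_kernel()
dog_x = convolve2d(G, dx)               # combine the two small filters first

rng = np.random.default_rng(0)
img = rng.random((32, 32))              # any image works; noise is a fine test

# Path 1: blur the image, then take the x difference.
a = convolve2d(convolve2d(img, G), dx)
# Path 2: convolve the image once with the combined DoG filter.
b = convolve2d(img, dog_x)

print(np.allclose(a, b))                # prints True
```

Combining the filters is also cheaper: one pass over the image instead of two.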

# Part 2

## Part 2.1

I created an unsharp mask filter by subtracting the Gaussian blur kernel from the identity kernel, which leaves a high-pass filter whose output gets added back to sharpen the image.

Here is what it looks like.

The Gaussian part is hard to see in my notebook's rendering.
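A sketch of building the whole unsharp mask as a single kernel: subtract the Gaussian from a unit impulse to get a high-pass filter, then add `alpha` copies of it back to the impulse (`alpha` is a hypothetical strength parameter I'm introducing here for illustration):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g2d = np.outer(g, g)
    return g2d / g2d.sum()

def unsharp_kernel(size=9, sigma=1.5, alpha=1.0):
    G = gaussian_kernel(size, sigma)
    identity = np.zeros_like(G)
    identity[size // 2, size // 2] = 1.0   # unit impulse (identity kernel)
    # identity + alpha * (identity - G)  ==  (1 + alpha) * identity - alpha * G
    return (1 + alpha) * identity - alpha * G

def sharpen(img, **kw):
    return convolve2d(img, unsharp_kernel(**kw), mode="same", boundary="symm")
```

Note the kernel still sums to 1, so flat regions are untouched; only edges get boosted (and can overshoot, which is what makes the beams look thicker).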

Here are comparison images, in order: blurred, unsharp mask applied to the blurred image, original.

You can notice, especially on the scaffolding in the Taj Mahal picture, that the sharpening made the dark beams look thicker. The gradients between details are also larger, so pixels contrast with each other more than in the original image.

## Part 2.2

We will now make hybrid images that look like one thing from far away and another up close!

To do this, I applied a low-pass filter (Gaussian blur) to one image and a high-pass filter to the other (subtracting the Gaussian-blurred image from the original), then averaged them. Here they are with their Fourier transforms:
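The construction can be sketched like this (the sigma values are placeholder assumptions; in practice the cutoffs need tuning per image pair):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g2d = np.outer(g, g)
    return g2d / g2d.sum()

def blur(img, sigma):
    size = int(6 * sigma) | 1             # odd kernel covering about +/-3 sigma
    return convolve2d(img, gaussian_kernel(size, sigma),
                      mode="same", boundary="symm")

def hybrid(far_img, near_img, sigma_low=6.0, sigma_high=2.0):
    low = blur(far_img, sigma_low)                # what you see from afar
    high = near_img - blur(near_img, sigma_high)  # fine detail seen up close
    return (low + high) / 2                       # average the two components
```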

### Pikachu and Panda

This one came out well.

### Cat + Human

### Between expressions (happy, surprised)

### Me when I was young vs now

This one is a bit of a fail because my poses don't line up, so the low frequencies dominate even at a close distance.

## Part 2.3

The stack illustrations are boosted by 20x because they were otherwise nearly all black, even after scaling to the min and max pixel values. So there will occasionally be washed-out pixels in the figures.

### An orange or an apple?

Now I will replicate this paper's technique of creating an Orapple (orange + apple).

First I will build the Gaussian and Laplacian stacks (progressively applying Gaussian blurs, then taking the differences between consecutive blurred images).
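A sketch of building the stacks (unlike pyramids, stacks keep every level at full resolution; the level count and sigma here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g2d = np.outer(g, g)
    return g2d / g2d.sum()

def gaussian_stack(img, levels=5, sigma=2.0):
    stack = [img]
    for _ in range(levels - 1):           # each level blurs the previous one
        stack.append(convolve2d(stack[-1], gaussian_kernel(sigma=sigma),
                                mode="same", boundary="symm"))
    return stack

def laplacian_stack(img, levels=5, sigma=2.0):
    g = gaussian_stack(img, levels, sigma)
    # Differences of consecutive Gaussian levels; keep the last blur as the base
    # so the stack sums back to the original image.
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]
```

A nice sanity check is that summing the Laplacian stack telescopes back to the original image exactly.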

Here are the Orapple's Gaussian and Laplacian stacks:

Here is the Gaussian stack of the mask that we will use to combine the Laplacian stacks:
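The blend step itself can be sketched like this: at every level, the mask's Gaussian stack softly weights the two Laplacian stacks, and summing the levels collapses the result back into one image (helper functions are repeated so the snippet stands alone; parameters are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g2d = np.outer(g, g)
    return g2d / g2d.sum()

def gaussian_stack(img, levels, sigma=2.0):
    stack = [img]
    for _ in range(levels - 1):
        stack.append(convolve2d(stack[-1], gaussian_kernel(sigma=sigma),
                                mode="same", boundary="symm"))
    return stack

def laplacian_stack(img, levels, sigma=2.0):
    g = gaussian_stack(img, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]

def blend(img_a, img_b, mask, levels=5):
    la = laplacian_stack(img_a, levels)
    lb = laplacian_stack(img_b, levels)
    gm = gaussian_stack(mask, levels)     # soft mask, one per frequency band
    return sum(m * a + (1 - m) * b for m, a, b in zip(gm, la, lb))
```

Blurring the mask more at the coarse levels is what hides the seam: low frequencies get a wide transition band, high frequencies a narrow one.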

### Futuristic Space Station + Space Cliff

I used this technique to merge two space photos:

The source images:

Laplacian stack of the merged images:

This is the Gaussian stack of the mask I use to blend them:

Here are each image's Gaussian stack, Laplacian stack, and masked Laplacian stack used to make the blended image, in order:

### Lion + Panda

Here I used a custom mask around the panda's face.

### Mt. Fuji + Mountain

Here I used a horizontal mask to try blending Mt. Fuji and a cliff, but it didn't turn out so well…