Data Science Asked by Jeremy Barnes on February 18, 2021
In my class I have to create an application using two classifiers to decide whether an object in an image is an example of the phylum Porifera (sea sponge) or some other object.
However, I am completely lost when it comes to feature extraction techniques in Python. My advisor convinced me to use images, which haven't been covered in class.
Can anyone direct me towards meaningful documentation or reading or suggest methods to consider?
Some frequently used techniques for feature extraction from images are binarizing and blurring.
Binarizing: converts the image into an array of 1s and 0s by thresholding the pixel values while reducing the image to a 2D array (a small example of this is shown below, after the grayscale matrix). Grayscaling can also be used: it gives you a numerical matrix of the image with one intensity value per pixel, and a grayscale image takes much less space when stored on disk.
This is how you do it in Python:
from PIL import Image
%matplotlib inline
# Load an example image
image = Image.open("xyz.jpg")
image  # in a Jupyter notebook, this displays the image inline
Example Image:
Now, convert it to grayscale:
im = image.convert('L')  # 'L' mode = 8-bit grayscale
im
This will return the grayscale image:
And the underlying matrix can be seen by running this:
import numpy as np
np.array(im)
The array would look something like this:
array([[213, 213, 213, ..., 176, 176, 176],
[213, 213, 213, ..., 176, 176, 176],
[213, 213, 213, ..., 175, 175, 175],
...,
[173, 173, 173, ..., 204, 204, 204],
[173, 173, 173, ..., 205, 205, 204],
[173, 173, 173, ..., 205, 205, 205]], dtype=uint8)
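To get the binarized (1s and 0s) version mentioned at the start, you can simply threshold this grayscale array. Here is a minimal sketch with NumPy, assuming an arbitrary threshold of 128 chosen purely for illustration (in practice you would pick it from the histogram below, or with a method such as Otsu's):
import numpy as np

im_array = np.array(im)                           # grayscale values in the range 0-255
threshold = 128                                   # arbitrary cut-off, for illustration only
binary = (im_array > threshold).astype(np.uint8)  # 1 where the pixel is brighter than the threshold, else 0
binary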
Now, use a histogram plot and/or a contour plot to have a look at the image features:
from pylab import *

# create a new figure and use a gray colour map
figure()
gray()

# show contours, with the origin in the upper left corner
im_array = array(im)  # pixel values as a NumPy array
contour(im_array, origin='image')
axis('equal')
axis('off')

# histogram of the pixel intensities
figure()
hist(im_array.flatten(), 128)
show()
This returns a contour plot and an intensity histogram, which look something like this:
Blurring: a blurring algorithm takes a weighted average of the neighbouring pixels, so each pixel incorporates some of the surrounding colour. This smooths out fine-grained noise, which makes the larger contours easier to see and helps in understanding the features and their relative importance.
And this is how you do it in Python:
from PIL import ImageFilter

# Gaussian blur on the grayscale image
p = image.convert("L").filter(ImageFilter.GaussianBlur(radius=2))
p.show()
And the blurred image is:
So, these are some of the ways in which you can do feature engineering on images. For more advanced methods, you need to understand the basics of computer vision and neural networks, as well as the different types of filters, their significance, and the math behind them.
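As a small taste of those filters, PIL already ships a few fixed convolution kernels. Here is a minimal sketch that applies the built-in edge-detection filter to the grayscale image from above; this is just one illustrative choice among many possible filters:
from PIL import ImageFilter

edges = image.convert("L").filter(ImageFilter.FIND_EDGES)  # built-in edge-detection kernel
edges.show()
The bright pixels in the result mark sharp intensity changes, which is roughly the kind of information more sophisticated feature extractors build on.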
Correct answer by Dawny33 on February 18, 2021
This great tutorial covers the basics of convolutional neural networks, which are currently achieving state-of-the-art performance in most vision tasks:
http://deeplearning.net/tutorial/lenet.html
There are a number of options for CNNs in Python, including Theano and the libraries built on top of it (I found Keras to be easy to use).
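To make the Keras option a bit more concrete, here is a rough sketch of what a tiny CNN for this two-class (sponge vs. not-sponge) problem could look like with a recent Keras; the 64x64 grayscale input shape and the layer sizes are arbitrary choices for illustration, not tuned recommendations:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Tiny binary classifier: sponge vs. not-sponge
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 1)),  # 64x64 grayscale input (arbitrary size)
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),  # probability that the image shows a sponge
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
You would then train it with model.fit on arrays of resized grayscale images and 0/1 labels.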
If you prefer to avoid deep learning, you might look into OpenCV, which can learn many other types of features, like Haar cascades and SIFT features.
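If you go the OpenCV route, extracting SIFT keypoints and descriptors looks roughly like this (a sketch: depending on your OpenCV build the factory function may be cv2.SIFT_create or cv2.xfeatures2d.SIFT_create, and the contrib package may be required):
import cv2

# Detect SIFT keypoints and compute their 128-dimensional descriptors
img = cv2.imread("xyz.jpg", cv2.IMREAD_GRAYSCALE)  # same example file as above
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints))
if descriptors is not None:
    print(descriptors.shape)  # (number of keypoints, 128)
The descriptors can then be fed to a classical classifier, for example after aggregating them with a bag-of-visual-words step.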
Answered by jamesmf on February 18, 2021
As Jeremy Barnes and jamesmf said, you can use any machine learning algorithm to deal with the problem. They are powerful and can identify the features automatically; you just need to feed the algorithm the correct training data. Since you need to work on images, convolutional neural networks will be a better option for you.
This is a good tutorial for learning about convolutional neural networks. You can also download the code and modify it according to your problem definition. You will need to learn Python and the Theano library for the processing, and you will find good tutorials for those too.
Answered by Arun Sooraj on February 18, 2021