Deep Learning Neural Network

A neural network is a set of interconnected neurons (nodes with activation functions) arranged in layers that are evaluated sequentially to relate an input to an output. This example implements a multi-layer perceptron (MLP) that is trained with backpropagation.

Advantages: Effective when the relationship between inputs and outputs is nonlinear. No prior knowledge or specialized equation structure is required, although the choice of network architecture can affect the quality of the result.

Disadvantages: Neural networks do not extrapolate well outside of the training domain. They may also take longer to train because the parameter weights are adjusted iteratively to minimize a loss (objective) function. It is more challenging to explain the outcome of the training, and changes in initialization or in the number of epochs (iterations) may lead to different results. Too many epochs may lead to overfitting, especially if there are excess parameters beyond the minimum needed to capture the input-to-output relationship.

from sklearn.neural_network import MLPClassifier

# three hidden layers with 10, 30, and 10 neurons
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, max_iter=200,
                    activation='relu', hidden_layer_sizes=(10, 30, 10),
                    random_state=1, shuffle=True)

# XA, yA: training features and labels; XB: test features (from the course example)
clf.fit(XA, yA)
yP = clf.predict(XB)
assess(yP)   # course helper that reports classification performance
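The variables XA, yA, XB and the assess function are defined elsewhere in the course workflow. For a standalone run, one option is to substitute synthetic data and a scikit-learn accuracy score for assess; the sketch below makes that assumption (the make_blobs data is hypothetical, not the course dataset).

# Hypothetical stand-in data to make the snippet above runnable on its own
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_blobs(n_samples=500, centers=2, n_features=2, random_state=1)
XA, XB, yA, yB = train_test_split(X, y, test_size=0.2, random_state=1)

clf.fit(XA, yA)
yP = clf.predict(XB)
print('Accuracy:', accuracy_score(yB, yP))  # stand-in for the assess(yP) helper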

Optical Character Recognition with Neural Network in Python

Optical character recognition (OCR) is a technology that enables the recognition of text characters in digital images. This technology can be used to automatically convert scanned documents, pictures, or other digital images that contain text into machine-readable text.

Neural networks are a class of machine learning algorithms often used for OCR. They are inspired by the structure and function of the human brain and are composed of many interconnected "neurons" that process and transmit information.

In the context of OCR, a neural network can be trained to recognize text characters in images by learning from a large dataset of labeled images. The network is presented with many examples of each character, and uses these examples to learn the visual patterns that are associated with each character.

Once the network has been trained, it can be used to make predictions on new images. Given an image of a text character, the network will analyze the visual patterns in the image and make a prediction about which character is most likely to be present. Here is an example of how this might be implemented in Python using the TensorFlow and Keras libraries with a Convolutional Neural Network (CNN).

# Import necessary libraries
import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# Load the dataset of 8x8 images of handwritten digits
X, y = load_digits(return_X_y=True)
X = X/16.0                                # pixel values in load_digits range from 0 to 16
y = tf.keras.utils.to_categorical(y, 10)  # one-hot encode the 10 digit classes
X = X.reshape(-1, 8, 8, 1)                # add a single channel dimension

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Create a neural network model
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3),
          input_shape=X.shape[1:],
          activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(120, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train,
          validation_data=(X_test,y_test),
          epochs=30, batch_size=128)
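Once training finishes, the model can be checked on the held-out test set and used for prediction. A minimal sketch that continues the code above (the evaluate call and argmax decoding are illustrative additions, not part of the original example):

# Check accuracy on the test set and predict the digit in the first test image
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test accuracy:', round(acc, 3))

probs = model.predict(X_test[:1])              # class probabilities for one image
print('Predicted digit:', probs.argmax())
print('Actual digit:   ', y_test[0].argmax())  # y_test is one-hot encoded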

In this code, a neural network model is created with the Sequential class from the tensorflow.keras.models module. The model is composed of several layers: a convolutional layer that extracts visual features from the images, a pooling layer that reduces the feature maps, and dense layers that use these features to make predictions. The model is then compiled and trained on labeled images (the X_train and y_train variables) and can make predictions on new data (the X_test variable) with the predict method. Another example uses the scikit-learn MLPClassifier and displays a digit and its prediction from the test set.

from sklearn import datasets
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np

from sklearn.neural_network import MLPClassifier
classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, max_iter=200,
                           activation='relu', hidden_layer_sizes=(10, 30, 10),
                           random_state=1, shuffle=True)

# The digits dataset
digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

# Split into train and test subsets (50% each)
X_train, X_test, y_train, y_test = train_test_split(
    data, digits.target, test_size=0.5, shuffle=False)

# Learn the digits on the first half of the digits
classifier.fit(X_train, y_train)

# Test on a random sample from the second half of the data
n = np.random.randint(n_samples//2, n_samples)
print('Predicted: ' + str(classifier.predict(digits.data[n:n+1])[0]))
print('Actual:    ' + str(digits.target[n]))

# Show number
plt.imshow(digits.images[n], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()

Neural Network Architectures

There are many different types of neural network architectures, each with its own strengths and weaknesses. Some common types of neural network architectures include feedforward neural networks, convolutional neural networks, recurrent neural networks, and autoencoders.

Feedforward neural networks are the simplest type of neural network architecture. They consist of an input layer, one or more hidden layers, and an output layer. The neurons in the input layer receive the input data, and each neuron in the hidden and output layers performs a computation based on the inputs it receives from the neurons in the previous layer. The output of the network is the result of this computation. Feedforward neural networks are commonly used for regression and classification tasks.
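As a concrete illustration (separate from the examples above), a small feedforward network for a 10-class problem might be sketched in Keras as follows; the input size and layer widths are arbitrary choices for illustration.

import tensorflow as tf

# Minimal feedforward (fully-connected) network: input -> two hidden layers -> output
ffn = tf.keras.models.Sequential([
    tf.keras.Input(shape=(64,)),                     # e.g. a flattened 8x8 image
    tf.keras.layers.Dense(32, activation='relu'),    # hidden layer 1
    tf.keras.layers.Dense(16, activation='relu'),    # hidden layer 2
    tf.keras.layers.Dense(10, activation='softmax')  # class probabilities
])
ffn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])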

Convolutional neural networks (CNNs) are a type of neural network architecture that is specifically designed to process data that has a grid-like structure, such as an image. They consist of an input layer, one or more convolutional layers, and one or more fully-connected layers. The convolutional layers are responsible for extracting features from the input data, and the fully-connected layers are responsible for making predictions based on those features. CNNs are commonly used for image classification and other tasks that involve processing images.

Recurrent neural networks (RNNs) are a type of neural network architecture that is designed to process sequential data, such as time series data or natural language. They consist of an input layer, one or more hidden layers, and an output layer. The hidden layers contain "memory" cells that can retain information from previous time steps, allowing the network to make predictions based on the entire sequence of inputs. RNNs are commonly used for tasks such as language translation and speech recognition.
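A minimal recurrent-network sketch in Keras, assuming sequences of 20 time steps with 8 features each (hypothetical shapes chosen only for illustration):

import tensorflow as tf

# Simple recurrent network for sequence classification
rnn = tf.keras.models.Sequential([
    tf.keras.Input(shape=(20, 8)),                  # 20 time steps, 8 features each
    tf.keras.layers.LSTM(32),                       # LSTM cells retain information across steps
    tf.keras.layers.Dense(1, activation='sigmoid')  # e.g. a binary prediction for the sequence
])
rnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])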

Autoencoders are a type of neural network architecture that is used for unsupervised learning. They consist of an input layer, one or more hidden layers, and an output layer. The goal of an autoencoder is to learn a compressed representation of the input data in the hidden layers, and then to use this representation to reconstruct the original input data in the output layer. Autoencoders are commonly used for dimensionality reduction and data denoising.
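A minimal autoencoder sketch in Keras, assuming 64-dimensional inputs (such as the flattened 8x8 digits) compressed to a 2-dimensional code; the layer sizes are illustrative.

import tensorflow as tf

# Encoder compresses the input; decoder reconstructs it from the compressed code
autoencoder = tf.keras.models.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(16, activation='relu'),    # encoder
    tf.keras.layers.Dense(2, activation='relu'),     # compressed representation
    tf.keras.layers.Dense(16, activation='relu'),    # decoder
    tf.keras.layers.Dense(64, activation='sigmoid')  # reconstruction of the input
])
# Trained to reproduce its own input: inputs and targets are the same data
autoencoder.compile(optimizer='adam', loss='mse')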


MATLAB Live Script


✅ Knowledge Check

1. Which of the following statements about neural networks is INCORRECT?

A. They are inspired by the structure and function of the human brain.
Incorrect. Neural networks are indeed inspired by the structure and function of the human brain.
B. Neural networks are particularly useful for extrapolation outside of the training domain.
Correct. Neural networks do not extrapolate well outside of the training domain.
C. They can effectively model nonlinear relationships in data.
Incorrect. One of the main advantages of neural networks is their ability to model nonlinear relationships in data.
D. Neural networks require iterative training processes to adjust parameter weights.
Incorrect. Neural networks indeed require iterative processes to adjust their weights, usually through algorithms like Backpropagation.

2. Which of the following architectures is specifically designed for processing data with a grid-like structure, like images?

A. Recurrent Neural Networks (RNNs)
Incorrect. RNNs are designed to process sequential data, not necessarily grid-like structures.
B. Feedforward Neural Networks
Incorrect. Feedforward neural networks are the most basic type of neural network architecture and are not specifically designed for any particular type of data structure.
C. Convolutional Neural Networks (CNNs)
Correct. CNNs are specifically designed to process data that has a grid-like structure, such as images.
D. Autoencoders
Incorrect. Autoencoders are used for unsupervised learning, typically for dimensionality reduction and data denoising, but not specifically for grid-like structures.
