Artificial Neural Network (ANN) Basics

  • This lesson explains how Artificial Neural Networks work and introduces neurons, layers, and weighted connections.
  • Biological vs Artificial Neuron

    Aspect       Biological Neuron              Artificial Neuron
    ---------    ----------------------------   ------------------------------------------
    Function     Transmits electrical signals   Processes input data
    Components   Dendrites, soma, axon          Input values, weights, activation function
    Signal       Electrical impulse             Mathematical output
    Learning     Strength of synapses           Adjusting weights & bias

    Analogy Diagram:

    Input (X1, X2, ...) --> Weights (W1, W2, ...) --> Summation + Bias --> Activation Function --> Output


    Structure of ANN

    • Input Layer: Receives raw features (X1, X2, …)

    • Hidden Layers: Perform computations, extract patterns

    • Output Layer: Produces prediction (Y)

    Diagram (simplified 3-layer ANN):

    Input Layer → Hidden Layer(s) → Output Layer

    • Each connection has a weight (w)

    • Each neuron has a bias (b)

    • Activation function introduces non-linearity
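    The points above can be sketched in NumPy: each layer contributes one weight matrix (one weight per connection) and one bias vector (one bias per neuron). The layer sizes below are illustrative choices, not values from the lesson:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed layer sizes for illustration: 3 inputs, 4 hidden neurons, 1 output
n_in, n_hidden, n_out = 3, 4, 1

# One weight per connection: shape (inputs_to_layer, neurons_in_layer)
W1 = rng.normal(size=(n_in, n_hidden))   # input  -> hidden weights
W2 = rng.normal(size=(n_hidden, n_out))  # hidden -> output weights

# One bias per neuron
b1 = np.zeros(n_hidden)
b2 = np.zeros(n_out)

# Total learnable parameters: 3*4 + 4 + 4*1 + 1 = 21
n_params = W1.size + b1.size + W2.size + b2.size
print(n_params)  # 21
```

    Training an ANN means adjusting exactly these weights and biases.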


    Weights & Bias

    • Weights (w): Determine the importance of each input

    • Bias (b): Helps the neuron shift the activation function for better fit

    Neuron output formula:

    Z = Σ (Xᵢ · Wᵢ) + b
    Y = f(Z)    (activation function)
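    In code, this formula is a dot product plus a bias, followed by the activation. A minimal NumPy sketch (the input, weight, and bias values are illustrative, not from the lesson):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b):
    # Z = sum(x_i * w_i) + b, then Y = f(Z)
    z = np.dot(x, w) + b
    return sigmoid(z)

# Illustrative values
x = np.array([1.0, 2.0])
w = np.array([0.4, -0.2])
b = 0.3
print(round(float(neuron_output(x, w, b)), 4))  # Z = 0.3, so Y = sigmoid(0.3) ≈ 0.5744
```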


    Forward Propagation

    • Forward propagation is the process of computing the network's output from its inputs, layer by layer.

    Steps:

    1. Multiply input by weights

    2. Add bias

    3. Apply activation function (ReLU, Sigmoid, etc.)

    4. Pass output to next layer

    Example (1 neuron):

    X = [2, 3], W = [0.5, 0.7], b = 0.1

    Z = 2*0.5 + 3*0.7 + 0.1 = 3.2

    Output Y = sigmoid(3.2) ≈ 0.96
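    The worked example above can be reproduced step by step in Python (same numbers as in the text):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([2.0, 3.0])
W = np.array([0.5, 0.7])
b = 0.1

# Steps 1-2: multiply inputs by weights, then add the bias
Z = np.dot(X, W) + b   # 2*0.5 + 3*0.7 + 0.1 = 3.2

# Step 3: apply the activation function
Y = sigmoid(Z)         # ≈ 0.96

# Step 4: in a full network, Y would be passed on to the next layer
print(round(float(Z), 2), round(float(Y), 2))  # 3.2 0.96
```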


    Network Architecture

    • Shallow ANN: 1 hidden layer → simpler problems

    • Deep ANN: Multiple hidden layers → complex problems

    Common Components:

    Component             Description
    -------------------   -----------------------------
    Layers                Input, hidden, output
    Neurons per layer     Determines learning capacity
    Activation function   Sigmoid, ReLU, Tanh, etc.
    Loss function         MSE, cross-entropy, etc.
    Optimizer             Gradient descent, Adam, etc.

    Example: Simple ANN with Keras (XOR Classification)

This example builds and trains a small neural network with TensorFlow and Keras to solve the XOR classification problem. The Sequential model has one hidden layer with ReLU activation and a sigmoid output layer. It is compiled with the Adam optimizer and binary cross-entropy loss, trained for 500 epochs, and then evaluated on the same four points.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
import numpy as np

# Dataset (XOR problem)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Step 1: Define model
model = Sequential()
model.add(Input(shape=(2,)))                 # Input layer: 2 features
model.add(Dense(4, activation='relu'))       # Hidden layer
model.add(Dense(1, activation='sigmoid'))    # Output layer

# Step 2: Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Step 3: Train model
model.fit(X, y, epochs=500, verbose=0)

# Step 4: Evaluate
loss, accuracy = model.evaluate(X, y, verbose=0)
print("Accuracy:", accuracy)
  • Expected output (may vary slightly between runs):

    Accuracy: 1.0

    • The ANN learns the non-linear XOR pattern, which no linear model can solve.
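    To see why a linear model fails here, the sketch below trains a plain logistic regression (a single neuron with no hidden layer) on the same XOR data using gradient descent. This is an illustrative NumPy implementation, separate from the Keras example; any linear decision boundary can classify at most 3 of the 4 XOR points correctly:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Single neuron, no hidden layer: a purely linear decision boundary
w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(5000):
    p = sigmoid(X @ w + b)   # predicted probabilities
    grad_z = p - y           # gradient of binary cross-entropy w.r.t. Z
    w -= lr * (X.T @ grad_z) / len(y)
    b -= lr * grad_z.mean()

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
accuracy = (preds == y).mean()
print("Linear model accuracy:", accuracy)  # 0.5 here; a linear boundary caps at 0.75
```

    The symmetry of XOR leaves the gradients balanced, so the weights never find a separating line; a hidden layer is what lets the network bend the boundary.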