# Neural Network Architecture Visualization

[Run the Neural Network Visualization Fullscreen](https://dmccreary.github.io/Digital-Transformation-with-AI-Spring-2026/sims/neural-network-visualization/main.html)
## About This MicroSim
This interactive visualization demonstrates how information flows through a neural network. Students can configure the network architecture, visualize forward propagation, and understand how activations change as data moves through layers.
## Iframe Embedding

```html
<iframe src="https://dmccreary.github.io/Digital-Transformation-with-AI-Spring-2026/sims/neural-network-visualization/main.html"
        height="652px"
        width="100%"
        scrolling="no">
</iframe>
```
## How to Use
- Start Animation: Click "Start" to watch data flow through the network
- Configure Architecture: Adjust hidden layers and nodes per layer
- Randomize Inputs: Click "Random Inputs" to see different activation patterns
- Hover Nodes: Hover over any node to see its activation and weights
- Observe Patterns: Watch how connection colors indicate positive/negative weights
## Neural Network Components
| Component | Description |
|---|---|
| Input Layer | Receives initial data values (fixed at 4 nodes) |
| Hidden Layers | Intermediate processing layers (configurable 1-4 layers) |
| Output Layer | Final predictions (fixed at 3 nodes) |
| Weights | Learned parameters connecting nodes |
| Activations | Node output values (0-1 range shown) |
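These components determine the network's parameter count: each pair of adjacent layers contributes one weight per connection plus one bias per destination node. A minimal Python sketch of that count (the function name and example hidden-layer sizes are ours; the 4-input/3-output layout matches the table):

```python
def count_parameters(layer_sizes):
    """Total trainable parameters in a fully connected network.

    Each adjacent layer pair (n_in, n_out) contributes
    n_in * n_out weights plus n_out biases.
    """
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# 4 inputs, two hidden layers of 5 nodes each, 3 outputs:
print(count_parameters([4, 5, 5, 3]))  # (4*5+5) + (5*5+5) + (5*3+3) = 73
```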
## Understanding the Visualization

### Node Colors
- Gray → Low activation (close to 0)
- Blue → High activation (close to 1)
### Connection Colors
- Green → Positive weight (increases activation)
- Red → Negative weight (decreases activation)
- Thickness → Magnitude of weight
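This is a sign-and-magnitude encoding. A minimal sketch of one plausible mapping (not the sim's actual source code; the RGB values and the `max_weight` scale are illustrative assumptions):

```python
def connection_style(weight, max_weight=2.0):
    """Map a weight to a display color and stroke thickness.

    The sign picks the hue (green for positive, red for negative);
    the magnitude, capped at max_weight, sets the line width.
    """
    color = (0, 180, 0) if weight >= 0 else (200, 0, 0)  # RGB triple
    thickness = 1 + 4 * min(abs(weight) / max_weight, 1.0)
    return color, thickness

print(connection_style(1.5))   # thick green line
print(connection_style(-0.2))  # thin red line
```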
## Forward Propagation

The animation shows how data flows through the network:

1. Input values are set in the input layer
2. Each hidden layer computes a weighted sum of the previous layer's activations
3. An activation function (ReLU-like) determines each node's output
4. The process continues until the output layer is reached
## Mathematical Foundations

For a node \(j\) in layer \(l\), the activation is computed as:

\[
a_j^{(l)} = \sigma\left( \sum_i w_{ij}^{(l)} \, a_i^{(l-1)} + b_j^{(l)} \right)
\]

Where:

- \(a_i^{(l-1)}\) = activation of node \(i\) in the previous layer
- \(w_{ij}^{(l)}\) = weight connecting node \(i\) to node \(j\)
- \(b_j^{(l)}\) = bias term
- \(\sigma\) = activation function
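A minimal NumPy sketch of this computation applied layer by layer (a stand-alone example, not the sim's code; the layer sizes follow the sim's 4-input/3-output layout, the weights and biases are random stand-ins for learned values, and ReLU plays the role of \(\sigma\), matching the "ReLU-like" note above):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """ReLU activation: max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

def forward(x, weights, biases):
    """Propagate x through each layer: a = relu(W @ a + b)."""
    a = x
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)
    return a

sizes = [4, 5, 5, 3]  # input, two hidden layers, output
weights = [rng.normal(size=(n_out, n_in))
           for n_in, n_out in zip(sizes, sizes[1:])]
biases = [rng.normal(size=n_out) for n_out in sizes[1:]]

x = rng.random(4)                    # four input activations
print(forward(x, weights, biases))   # three output activations
```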
## Learning Objectives
After using this tool, students should be able to:
- Understand (Bloom's L2): Explain how information flows through network layers
- Apply (Bloom's L3): Trace information flow through a neural network
- Analyze (Bloom's L4): Relate network architecture to parameter count
## Lesson Plan

### Activity 1: Architecture Exploration (10 minutes)
- Start with 1 hidden layer, 3 nodes
- Gradually increase complexity
- Observe how parameter count changes
- Discuss the implications for training
### Activity 2: Activation Patterns (15 minutes)
- Run the animation with different input values
- Identify which connections have the most impact
- Discuss why some neurons "die" (stay at 0); the sketch below shows the mechanism
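The "dying" behavior follows directly from a ReLU-style activation: if a node's weighted sum is negative for every input it receives, its output is pinned at 0 and it contributes nothing downstream. A tiny illustration (the weights and bias are contrived to force the effect):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

w = np.array([0.1, -0.2, 0.05, 0.1])
b = -5.0  # a large negative bias dominates every weighted sum

# For inputs in [0, 1], w @ x can never exceed 0.25, so the
# node's output relu(w @ x + b) is 0 for every input it sees:
for x in np.random.default_rng(1).random((3, 4)):
    print(relu(w @ x + b))  # always 0.0 -> a "dead" node
```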
## Discussion Questions
- How does adding hidden layers affect the network's representational power?
- Why might a network with more parameters be harder to train?
- What happens when many weights are negative?
## Network Architecture Guidelines
| Use Case | Recommended Architecture |
|---|---|
| Simple patterns | 1 hidden layer, 4-8 nodes |
| Moderate complexity | 2 hidden layers, 8-16 nodes |
| Complex relationships | 3+ hidden layers, 32+ nodes |
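To make the trade-off concrete, the sketch below computes total parameter counts for one representative network from each row, keeping this sim's 4 inputs and 3 outputs (the specific hidden sizes are illustrative picks within the recommended ranges):

```python
def count_parameters(sizes):
    """Weights plus biases across all adjacent layer pairs."""
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

for label, sizes in [("simple",   [4, 6, 3]),
                     ("moderate", [4, 12, 12, 3]),
                     ("complex",  [4, 32, 32, 32, 3])]:
    print(label, sizes, count_parameters(sizes))
# simple   [4, 6, 3]         -> 51
# moderate [4, 12, 12, 3]    -> 255
# complex  [4, 32, 32, 32, 3] -> 2371
```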
## Related Concepts
- Chapter 1: Digital Transformation and AI Foundations
- Perceptron
- Deep Learning
- Backpropagation
## References

- Nielsen, M. (2015). *Neural Networks and Deep Learning*. Determination Press.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). *Deep Learning*. MIT Press.
- 3Blue1Brown. *Neural Networks* video series. https://www.3blue1brown.com/topics/neural-networks
## Self-Assessment Quiz
Test your understanding of neural network architecture.
Question 1: What are the three main types of layers in a basic neural network?
- A) Data, Storage, and Output layers
- B) Input, Hidden, and Output layers
- C) Memory, Processing, and Display layers
- D) Forward, Backward, and Lateral layers
Answer
B) Input, Hidden, and Output layers - Neural networks consist of input layers that receive data, hidden layers that process information, and output layers that produce predictions or classifications.
Question 2: What do the connection colors (green vs. red) indicate in the visualization?
- A) Temperature of the computer
- B) Positive weights (green) vs. negative weights (red)
- C) Speed of processing
- D) Memory usage
Answer
B) Positive weights (green) vs. negative weights (red) - Green connections increase the activation of the connected node, while red connections decrease it, showing how information is transformed through the network.
Question 3: What happens during "forward propagation" in a neural network?
- A) The network deletes all data
- B) Data flows from input through hidden layers to produce an output
- C) The network moves backward in time
- D) Users manually enter each value
Answer
B) Data flows from input through hidden layers to produce an output - Forward propagation is the process where input values are transformed through successive layers by multiplying by weights and applying activation functions.
Question 4: Why does adding more hidden layers typically increase a network's capabilities?
- A) More layers always make the network faster
- B) Additional layers allow the network to learn more complex patterns and representations
- C) Extra layers reduce the need for training data
- D) More layers decrease the parameter count
Answer
B) Additional layers allow the network to learn more complex patterns and representations - Each hidden layer can learn increasingly abstract features, enabling deep networks to model complex relationships that shallow networks cannot.
Question 5: What is an "activation" in the context of neural networks?
- A) The power switch for the computer
- B) The output value of a node after applying a non-linear function to its inputs
- C) The brand name of a neural network
- D) The user clicking a button
Answer
B) The output value of a node after applying a non-linear function to its inputs - Activation values (shown as node colors from gray to blue in the visualization) represent how strongly each node responds to its inputs, enabling the network to learn non-linear patterns.