Network Stack | HTTP and TCP/IP
Gamified networking lesson connecting OSI, TCP/IP, MTU, and deployment from GitHub Pages to AWS EC2
- Mission: Ship One Request From GitHub Pages to EC2
- Deployment Story: What the Request Is Actually Doing
- Packet Delivery Challenge
- OSI vs TCP/IP
- Deployment Path Map
- Transport Layer: Why TCP Is Usually the Deployment Default
- Network, Data Link, and Physical: The Internet Trip to AWS
- MTU Boss Fight: What Happens to Data as It Moves Down the Layers?
- Nginx as the Deployment Checkpoint
Mission: Ship One Request From GitHub Pages to EC2
This lesson treats networking like a deployment quest. Your frontend is hosted on GitHub Pages. Your backend lives on AWS EC2, usually behind Nginx, often inside Docker, and it may talk to a SQL database. The mission is simple:
A browser click must become a real deployed request, survive the internet, reach the backend, and return useful data.
Deployment Loadout
- GitHub Pages serves the static frontend: HTML, CSS, and JavaScript.
- `fetch()` creates the HTTP request in the browser.
- DNS turns a domain name into an IP address.
- TLS/HTTPS protects the request in transit.
- TCP keeps the bytes ordered and reliable.
- IP routes packets across networks to AWS.
- Ethernet/Wi-Fi moves frames across each local hop.
- Nginx receives traffic on ports `80` and `443` and forwards it to the right app.
- Docker keeps backend services packaged and repeatable.
- Flask or Spring handles the API route and business logic.
- SQL stores or retrieves persistent data.
Win Condition
By the end of this notebook, you should be able to explain:
- How one deployed request moves down the stack on the client and back up on the server.
- Why the OSI model is a teaching model while TCP/IP is the model used more directly in deployment.
- How MTU limits packet size and causes large data to be split into multiple TCP segments and IP packets.
- Why deployment tools such as DNS, TLS, Nginx, Docker, and cloud networking matter in real systems.
Deployment Story: What the Request Is Actually Doing
Use this notebook as a trace of one real deployment path:
- A student opens a GitHub Pages frontend.
- The page runs JavaScript and calls `fetch("https://flask.opencodingsociety.com/api/users")`.
- DNS resolves the API domain to a public server IP.
- TLS negotiates encryption so the browser can safely talk to the server.
- TCP opens a reliable connection to destination port `443`.
- IP routes the packets across home Wi-Fi, the ISP, and internet routers toward AWS.
- The EC2 machine receives the traffic and Nginx acts as the reverse proxy.
- Nginx forwards the request to the correct internal service, such as Flask on `localhost:8587`.
- The backend may call a SQL database and build a JSON response.
- The response travels back through the same stack in reverse.
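The first step of that trace can be sketched with Python's standard library. Before DNS, TCP, or TLS do anything, the browser derives a hostname, a destination port, and a request path from the URL passed to `fetch()`. This is a minimal illustration of that parsing step, not what a browser literally runs:

```python
from urllib.parse import urlsplit

# The same URL the frontend passes to fetch().
url = "https://flask.opencodingsociety.com/api/users"

parts = urlsplit(url)
host = parts.hostname                                          # what DNS will resolve
port = parts.port or (443 if parts.scheme == "https" else 80)  # TCP destination port
path = parts.path or "/"                                       # what HTTP asks for

print("DNS will resolve:", host)
print("TCP will connect to port:", port)
print("HTTP request line:", f"GET {path} HTTP/1.1")
```

Every later layer in the trace consumes one of these three values: DNS takes the host, TCP takes the port, and HTTP takes the path.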
Where Deployment Bugs Show Up
- Application bugs: wrong route, wrong JSON, CORS mistakes, authentication issues.
- Transport bugs: blocked ports, broken handshakes, timeouts, retransmissions.
- Network bugs: wrong DNS records, bad security groups, unreachable IPs.
- Data link or physical issues: unstable Wi-Fi, damaged cables, packet loss on a hop.
That is why deployment is not only a coding problem. It is a networking problem too.
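The bug categories above can be turned into a rough triage table. This is a hedged classroom aid, not a real diagnostic tool: the symptom strings and layer labels are illustrative, and real debugging is messier than a dictionary lookup.

```python
# Rough triage table: map a deployment symptom to the layer to inspect first.
# The groupings mirror the bug categories above.
LAYER_FOR_SYMPTOM = {
    "404 on a known route":      "Application (routes, Nginx location blocks)",
    "CORS error in the console": "Application (headers, preflight handling)",
    "connection timed out":      "Transport/Network (ports, security groups)",
    "connection refused":        "Transport (nothing listening on that port)",
    "name not resolved":         "Application (DNS records)",
    "intermittent slow loads":   "Link/Physical (Wi-Fi quality, packet loss)",
}

def first_layer_to_check(symptom: str) -> str:
    """Return a starting point for debugging, defaulting to top-down."""
    return LAYER_FOR_SYMPTOM.get(symptom, "Start at Application and work down")

print(first_layer_to_check("connection refused"))
print(first_layer_to_check("something weird"))
```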
Packet Delivery Challenge
This is the actual learning game for the lesson. Drive one request from frontend deployment to backend deployment and answer networking checkpoints that appear along the way.
How to Play
- Use WASD or the arrow keys to move the request packet across the deployment map.
- Touch each checkpoint, then answer the question in the side panel to keep the request alive.
- You have 3 integrity points. Wrong answers damage the packet.
- Use the AI Debugger Hint button whenever you need help with OSI vs TCP/IP, MTU, routing, or Nginx.
- Finish the board, then complete the short post-run quiz built into the game.
The deployment run includes its own post-run quiz, so the lesson stays playable both on localhost and in production builds.
OSI vs TCP/IP
The OSI model is excellent for teaching because it separates responsibilities into 7 layers. The TCP/IP model is better for describing real internet deployments because software and hardware stacks are usually implemented closer to its 4 layers.
| OSI Layer | TCP/IP Match | Deployment Example |
|---|---|---|
| 7. Application | Application | fetch(), HTTP, DNS, API routes |
| 6. Presentation | Application | TLS encryption, JSON encoding, compression |
| 5. Session | Application | WebSocket upgrades, session state, login sessions |
| 4. Transport | Transport | TCP, ports 443 and 80, reliability |
| 3. Network | Internet | IP addressing, routing to AWS |
| 2. Data Link | Link | Ethernet frames, Wi-Fi, MAC addresses |
| 1. Physical | Link | Fiber, copper, radio signals |
The Contrast That Matters
- OSI asks: “What job does each layer do?”
- TCP/IP asks: “How does the internet stack actually move traffic?”
- OSI is more granular.
- TCP/IP is more practical for deployment diagrams, packet captures, and debugging.
When students treat them as identical, they usually get confused about Layers 5 and 6. In practice, those jobs still exist, but they are commonly discussed inside the Application layer in TCP/IP.
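The table above can be expressed as data, which makes the collapse of Layers 7-5 into one Application layer concrete. A minimal sketch:

```python
# The OSI-to-TCP/IP table as a dictionary: 7 OSI layers collapse into 4.
OSI_TO_TCPIP = {
    7: "Application",  # HTTP, DNS, fetch()
    6: "Application",  # TLS, JSON encoding, compression
    5: "Application",  # login sessions, WebSocket upgrades
    4: "Transport",    # TCP, ports 443 and 80
    3: "Internet",     # IP addressing and routing
    2: "Link",         # Ethernet/Wi-Fi frames, MAC addresses
    1: "Link",         # fiber, copper, radio signals
}

# TCP/IP is coarser: three OSI layers share a single Application layer.
grouping = {name: [n for n, m in OSI_TO_TCPIP.items() if m == name]
            for name in ("Application", "Transport", "Internet", "Link")}
print(grouping)
```

Running this shows `Application` holding OSI layers 7, 6, and 5, which is exactly the point students miss when they treat the two models as identical.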
Deployment Path Map
Use this as the simplified visual path for the lesson:
GitHub Pages browser fetch()
-> HTTP / DNS / TLS
-> TCP
-> IP routing across the internet
-> Ethernet or Wi-Fi hop
-> AWS EC2 public interface
-> Nginx reverse proxy
-> Dockerized Flask or Spring app
-> Database if needed
-> Response back to browser
Read the Map in Both Models
- OSI 7-5 roughly matches TCP/IP Application in this lesson.
- OSI 4 matches TCP/IP Transport.
- OSI 3 matches TCP/IP Internet.
- OSI 2-1 roughly matches TCP/IP Link.
Transport Layer: Why TCP Is Usually the Deployment Default
For a typical web app deployment, the browser and server usually use TCP because web traffic needs reliability.
What TCP Adds
- Ports so the browser can find services like HTTPS on `443`.
- Sequence numbers so the server can reorder segments correctly.
- Acknowledgments so sender and receiver know what arrived.
- Retransmission so lost segments can be sent again.
- Flow control so a fast sender does not overwhelm a slower receiver.
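The sequence-number idea can be sketched in a few lines: segments may arrive out of order, and the receiver uses their byte offsets to rebuild the original stream. Real TCP also handles acknowledgments, retransmission, and flow-control windows; this toy only shows reordering.

```python
import random

message = b'{"user":"avery","action":"deploy"}'

# Split into 8-byte segments tagged with a byte offset (the "sequence number").
segments = [(i, message[i:i + 8]) for i in range(0, len(message), 8)]

# Simulate out-of-order arrival across the internet.
arrived = segments[:]
random.shuffle(arrived)

# The receiver sorts by sequence number and rebuilds the byte stream.
rebuilt = b"".join(data for offset, data in sorted(arrived))
print("stream rebuilt correctly:", rebuilt == message)  # True
```

Without the offsets, the receiver would have no way to tell which scrambled chunk came first, which is why "reorder segments correctly" appears in the list above.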
Deployment Connection Example
- Browser source: a random high-numbered port such as `53144`.
- Destination: port `443` on `flask.opencodingsociety.com`.
- Nginx receives the TCP traffic and proxies it to an internal app port such as `8587`.
Without transport reliability, your deployed app would randomly corrupt JSON, cut responses short, or drop requests under ordinary internet conditions.
Network, Data Link, and Physical: The Internet Trip to AWS
Network Layer
At Layer 3, the TCP segment becomes part of an IP packet. Routers read the destination IP and decide the next hop toward AWS. This is where cloud deployment meets the public internet.
Data Link Layer
At Layer 2, each hop wraps the IP packet in a local frame. The frame changes as the packet moves from laptop to router, router to ISP, and through more network equipment. MAC addresses are local-hop addresses, not end-to-end addresses.
Physical Layer
At Layer 1, the frame becomes signals: radio over Wi-Fi, light over fiber, or electrical pulses over copper. Every deployment still depends on physical transport, even if cloud tooling hides it from you.
Why Students Should Care
If DNS and code look correct but the request still fails, the missing clue may be lower in the stack: packet loss, unstable Wi-Fi, blocked ports, or a routing problem.
payload = '{"user":"avery","action":"deploy","note":"Ship this request from GitHub Pages to EC2 with enough bytes to force segmentation."}' * 15  # repeated so the data actually exceeds one MTU
MTU = 1500
IP_HEADER = 20 # IPv4 without options
TCP_HEADER = 20 # TCP without options
ETHERNET_HEADER = 14
ETHERNET_FCS = 4
TLS_OVERHEAD = 25 # rough teaching estimate, varies in real traffic
payload_bytes = len(payload.encode('utf-8'))
application_bytes = payload_bytes + TLS_OVERHEAD
max_tcp_payload = MTU - IP_HEADER - TCP_HEADER
segments = []
remaining = application_bytes
segment_number = 1
while remaining > 0:
tcp_payload = min(max_tcp_payload, remaining)
ip_packet_size = tcp_payload + IP_HEADER + TCP_HEADER
ethernet_frame_size = ip_packet_size + ETHERNET_HEADER + ETHERNET_FCS
segments.append({
'segment': segment_number,
'app_or_tls_bytes': tcp_payload,
'tcp_header': TCP_HEADER,
'ip_header': IP_HEADER,
'ip_packet_total': ip_packet_size,
'ethernet_frame_total': ethernet_frame_size,
})
remaining -= tcp_payload
segment_number += 1
print('Original app payload bytes:', payload_bytes)
print('Payload after adding rough TLS overhead:', application_bytes)
print('MTU:', MTU)
print('Max TCP payload per packet at this MTU:', max_tcp_payload)
print()
print('How the request is packaged:')
for row in segments:
print(row)
print()
print('Layer-by-layer view for one full-sized segment:')
print({
'Application data': max_tcp_payload,
'TCP segment total': max_tcp_payload + TCP_HEADER,
'IP packet total': max_tcp_payload + TCP_HEADER + IP_HEADER,
'Ethernet frame total': max_tcp_payload + TCP_HEADER + IP_HEADER + ETHERNET_HEADER + ETHERNET_FCS,
})
MTU Boss Fight: What Happens to Data as It Moves Down the Layers?
MTU is the largest IP packet, headers included, that a network link can carry without fragmentation. A common Ethernet MTU is 1500 bytes.
Common Classroom Numbers
- Ethernet MTU: 1500 bytes
- IPv4 header: about 20 bytes
- TCP header: about 20 bytes
- Remaining TCP payload: about 1460 bytes
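These classroom numbers can be checked with two lines of arithmetic, plus one step further: how many full-sized segments a response of a given size needs. The 100 KB response size here is an illustrative assumption.

```python
import math

MTU = 1500          # common Ethernet MTU
IPV4_HEADER = 20    # IPv4 header without options
TCP_HEADER = 20     # TCP header without options

# Maximum TCP payload per packet at this MTU (the MSS in the simple case).
mss = MTU - IPV4_HEADER - TCP_HEADER
print("max TCP payload:", mss)  # 1460

# A hypothetical 100 KB API response needs this many full-sized segments:
response_bytes = 100_000
print("segments needed:", math.ceil(response_bytes / mss))  # 69
```

This is the same arithmetic the simulation code above performs per segment; here it is reduced to the two numbers worth memorizing.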
Down the Stack
- Application creates data such as JSON.
- Presentation/Session may add TLS and connection-state behavior.
- Transport splits large data into TCP segments.
- Network wraps each segment in an IP packet.
- Data Link wraps each packet in an Ethernet or Wi-Fi frame.
- Physical sends the frame as signals.
Back Up the Stack on the Server
- The EC2 network interface receives the bits.
- The operating system checks the frame and extracts the IP packet.
- IP delivers the TCP segment to the right socket.
- TCP reorders segments if needed and rebuilds the byte stream.
- TLS is decrypted.
- Nginx and then the backend application read the HTTP request.
Why MTU Matters in Deployment
- Large API responses are split across multiple TCP segments.
- If MTU assumptions are wrong, performance suffers or packets may fragment.
- VPNs, tunnels, and cloud networking layers can reduce the effective path MTU.
- A deployed app may be logically correct but still feel slow because packet sizing is inefficient.
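The tunnel point can be made concrete with a quick sketch. The 50-byte tunnel overhead below is an illustrative assumption; real VPN and encapsulation costs vary by protocol.

```python
import math

MTU = 1500
IP_HEADER = 20
TCP_HEADER = 20
TUNNEL_OVERHEAD = 50   # illustrative VPN/tunnel header cost; varies in practice

# Usable TCP payload per packet, with and without the tunnel eating into the MTU.
plain_payload = MTU - IP_HEADER - TCP_HEADER
tunnel_payload = MTU - TUNNEL_OVERHEAD - IP_HEADER - TCP_HEADER

response = 1_000_000   # hypothetical 1 MB API response
print("segments without tunnel:", math.ceil(response / plain_payload))   # 685
print("segments inside tunnel: ", math.ceil(response / tunnel_payload))  # 710
```

Same application, same response, same code: the tunnel alone adds roughly 25 extra packets per megabyte, which is one way a logically correct deployment ends up slower than expected.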
Nginx as the Deployment Checkpoint
Nginx is where the abstract networking lesson becomes a real deployment architecture.
- It listens on standard public ports like `80` and `443`.
- It can terminate HTTPS.
- It can answer CORS preflight requests before the app sees them.
- It routes traffic to the right internal service or container.
- It can support WebSocket upgrades for live features.
- It hides internal app ports from the public internet.
That makes Nginx a practical bridge between the internet-facing layers and your deployed backend code.
server {
# ==============================================================================
# REVERSE PROXY: DEPLOYED ENTRY POINT FOR NETWORK TRAFFIC
# ==============================================================================
# Browser -> DNS -> public IP -> Nginx on EC2 -> internal app port
#
# Teaching path for one HTTPS API request:
# GitHub Pages frontend
# -> browser fetch()
# -> DNS lookup for flask.opencodingsociety.com
# -> TCP connection to port 443
# -> TLS encryption
# -> IP routing across the internet
# -> Nginx reverse proxy on EC2
# -> Flask app on localhost:8587
#
# OSI vs TCP/IP reminder:
# OSI 7-5 ~= TCP/IP Application
# OSI 4 = TCP/IP Transport
# OSI 3 = TCP/IP Internet
# OSI 2-1 ~= TCP/IP Link
# ==============================================================================
listen 80;
listen [::]:80;
server_name flask.opencodingsociety.com;
# ==============================================================================
# WEBSOCKET UPGRADE: SESSION-LIKE BEHAVIOR ON TOP OF TCP
# ==============================================================================
# Regular HTTP is request/response.
# WebSocket starts as HTTP, then upgrades the existing TCP connection so both
# sides can send messages whenever they want.
# ==============================================================================
location /socket.io/ {
proxy_pass http://localhost:8500;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# ==============================================================================
# MAIN APP TRAFFIC: NGINX FORWARDS TO THE INTERNAL BACKEND
# ==============================================================================
location / {
proxy_pass http://localhost:8587;
proxy_http_version 1.1;
proxy_set_header Host $host;
# Preserve enough origin information for app logging and CORS decisions.
add_header "Access-Control-Allow-Credentials" "true" always;
set $cors_origin "";
if ($http_origin ~* "^https://(.*\.)?opencodingsociety\.com$") {
set $cors_origin $http_origin;
}
if ($http_origin = "https://open-coding-society.github.io") {
set $cors_origin $http_origin;
}
add_header "Access-Control-Allow-Origin" "$cors_origin" always;
if ($request_method = OPTIONS) {
add_header "Access-Control-Allow-Methods" "GET, POST, PUT, DELETE, OPTIONS, HEAD" always;
add_header "Access-Control-Max-Age" 600 always;
add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Origin, X-Requested-With, Content-Type, Accept" always;
return 204;
}
}
}