Week 1–2: Hardware & Baseline Inference

Craig Nielsen

March 2, 2026

Let's take some notes on the hardware.


Raspberry Pi 5 vs Jetson Nano vs Jetson Orin Nano Super

Quick Verdict

Jetson Orin Nano Super Developer Kit is the clear winner for AI vision + quantization — nearly 2× the performance of the standard Orin Nano at the same price point.


Full Comparison

| Feature | Raspberry Pi 5 | Jetson Nano (2019) | Jetson Orin Nano Super |
|---|---|---|---|
| Price | ~€70–80 | ~€100–150 | ~€250 (devkit) |
| CPU | Cortex-A76 × 4 | Cortex-A57 × 4 | Cortex-A78AE × 6 |
| GPU | VideoCore VII | 128-core Maxwell | 1024-core Ampere |
| AI Performance | ~0.1 TOPS | ~0.5 TOPS | 67 TOPS |
| CUDA Generation | — (no CUDA) | Maxwell (old) | Ampere (modern) |
| TensorRT | — | TRT 8 (limited) | TRT 10 (full) |
| INT8 Quantization | via TFLite only | Partial | ✅ Full |
| FP16 / BF16 | — | FP16 only | ✅ Both |
| RAM | 4–8GB LPDDR5 | 4GB LPDDR4 | 8GB LPDDR5 |
| JetPack Version | N/A | 4.x (EOL soon) | 6.x (current) |
| PyTorch support | ✅ CPU only | Aging builds | ✅ Modern builds |
| Power | 5–7W | 10W | 7–25W |
| Camera / CSI | ✅ 2× MIPI CSI | ✅ 2× MIPI CSI | ✅ 2× MIPI CSI |
| Storage | microSD | microSD | microSD + NVMe M.2 |
| GPIO | 40-pin | 40-pin | 40-pin |

Why Orin Nano Super Dominates for Your Use Case

1. 67 TOPS — Not 40, Not 0.5

The "Super" revision nearly doubles the standard Orin Nano's AI compute. For context:

  • 134× more AI performance than the old Jetson Nano
  • 670× more than Raspberry Pi 5
  • Real-time multi-stream vision inference becomes trivial
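As a quick sanity check, those multipliers fall straight out of the headline TOPS figures quoted above (vendor peak numbers, not measured throughput):

```python
# Sanity-check the speedup claims from the headline TOPS figures above.
# TOPS are vendor peak numbers, not measured throughput.
tops = {
    "Raspberry Pi 5": 0.1,
    "Jetson Nano (2019)": 0.5,
    "Jetson Orin Nano Super": 67.0,
}

def speedup(target: str, baseline: str) -> float:
    """Ratio of headline TOPS between two boards."""
    return tops[target] / tops[baseline]

print(round(speedup("Jetson Orin Nano Super", "Jetson Nano (2019)")))  # 134
print(round(speedup("Jetson Orin Nano Super", "Raspberry Pi 5")))      # 670
```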

2. Ampere GPU = Modern Quantization Stack

  • Full INT8 + FP16 + sparse inference support
  • TensorRT 10 with latest optimizations
  • Directly mirrors the workflow on A100/H100 in the cloud — your skills transfer cleanly
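To make "full INT8 support" concrete, here is a minimal sketch of symmetric per-tensor quantization, the scale/round/clamp arithmetic that INT8 inference engines such as TensorRT build on. This is pure Python for illustration, not the TensorRT API:

```python
# Minimal sketch of symmetric per-tensor INT8 quantization: the arithmetic
# that INT8 inference engines rely on, shown in pure Python (this is NOT
# the TensorRT API, just the underlying scale/round/clamp math).

def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    """Map floats to int8 codes using one scale derived from max |value|."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0                      # symmetric: zero-point is 0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats from int8 codes."""
    return [code * scale for code in q]

codes, scale = quantize_int8([0.5, -1.5, 0.25, 2.0])
print(codes)  # [32, -95, 16, 127]
```

In practice the scale comes from a calibration pass over representative data rather than a single tensor's max, but the storage and compute win is the same: one byte per value instead of four.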

3. Developer Kit = Ready Out of the Box

  • Carrier board, power supply, ports, and camera connectors all included
  • Flash JetPack via NVIDIA SDK Manager and you're running
  • All NVIDIA tutorials, DeepStream examples, and community answers target this exact hardware

4. Future-Proof Ecosystem

  • JetPack 6 = Ubuntu 22.04, CUDA 12, cuDNN 9
  • Same product family as Orin NX, AGX Orin — quantized TensorRT models scale up seamlessly
  • Old Jetson Nano is stuck on Ubuntu 18.04 / CUDA 10 — a dead end

Model Performance Estimates (Real-time = 30 FPS)

| Model | RPi 5 | Jetson Nano | Orin Nano (40T) | Orin Nano Super (67T) |
|---|---|---|---|---|
| YOLOv8n (INT8) | ~8 FPS | ~15 FPS | ~120 FPS | ~200+ FPS |
| YOLOv8s (FP16) | ~3 FPS | ~8 FPS | ~60 FPS | ~100+ FPS |
| MobileNetV3 | ~20 FPS | ~35 FPS | ~200 FPS | ~330+ FPS |
| ResNet-50 (INT8) | ~5 FPS | ~12 FPS | ~100 FPS | ~165+ FPS |
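These figures are estimates; on real hardware the honest check is to time your own end-to-end pipeline (capture + preprocess + inference + postprocess) and convert per-frame latency against the 30 FPS target:

```python
# Convert measured per-frame latency into an FPS verdict against the
# 30 FPS real-time target used in the table above.

def fps_from_latency_ms(latency_ms: float) -> float:
    """Frames per second implied by an end-to-end per-frame latency."""
    return 1000.0 / latency_ms

def meets_realtime(latency_ms: float, target_fps: float = 30.0) -> bool:
    return fps_from_latency_ms(latency_ms) >= target_fps

print(fps_from_latency_ms(5.0))  # 200.0 -> comfortable real-time headroom
print(meets_realtime(120.0))     # False -> ~8 FPS, Pi-class territory
```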

Final Verdict on Devices

| Option | Verdict |
|---|---|
| RPi 5 | Fine for non-AI tasks; underpowered for vision |
| Jetson Nano (old) | Avoid — aging ecosystem, dead end |
| Jetson Orin Nano 8GB | Good, but superseded |
| Jetson Orin Nano Super | Best-in-class edge AI for prototyping — modern stack, 67 TOPS, devkit ready |

If you're serious about prototyping and deploying quantized vision models, the Jetson Orin Nano Super Developer Kit is the definitive choice. The ~€100 premium over a bare Orin Nano module gets you everything you need to start immediately, with headroom to push serious models.

Why get the "Jetson Orin Nano Super" dev kit?


What is a Carrier Board?

Think of it like a motherboard for the Jetson module. The Jetson SOM (System on Module) is essentially just the brain — CPU, GPU, RAM — packed onto a small board with no ports or connectors of its own. It's useless alone. The carrier board gives it everything needed to actually function.

This is exactly why the Developer Kit is the right buy for prototyping — it includes the carrier board, power supply, cooling, and all connectors bundled. The module alone is just a chip with nowhere to go.


The Parts

SOM (System on Module)

The Jetson chip itself. A small PCB with the processor, GPU, RAM, and storage controller. Plugs into the carrier board via a high-density connector. This is what you're paying for performance-wise — 67 TOPS lives here.

Carrier Board

The base board the SOM sits on. It breaks out all the SOM's signals into usable ports and connectors:

| Port / Feature | What It's For |
|---|---|
| USB-A ports | Keyboard, mouse, peripherals |
| USB-C | Power or data |
| Gigabit Ethernet | Network, SSH access |
| 2× M.2 NVMe slots | Fast SSD storage |
| microSD slot | Boot drive or extra storage |
| 40-pin GPIO header | Sensors, LEDs, I2C/SPI devices |
| 2× MIPI CSI connectors | Direct camera module input |
| Fan header | Active cooling control |
| DC barrel jack | Power input |

MIPI CSI Camera Connectors

Flat ribbon-cable connectors for dedicated camera modules (e.g. IMX219, IMX477). Much higher framerates and lower latency than USB webcams — essential for real-time vision pipelines.
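On JetPack, CSI cameras are usually read through GStreamer's `nvarguscamerasrc` element. A sketch of assembling such a pipeline string (the resolution and framerate here assume an IMX219-class sensor and are illustrative, not verified against your camera's modes):

```python
# Sketch: assemble a GStreamer pipeline string for a CSI camera on JetPack.
# nvarguscamerasrc is JetPack's CSI camera source element; the width/height/
# fps defaults assume an IMX219-class sensor and are illustrative only.

def csi_pipeline(sensor_id: int = 0, width: int = 1280,
                 height: int = 720, fps: int = 60) -> str:
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},"
        f"framerate={fps}/1 ! nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink"
    )

print(csi_pipeline())
```

A string like this is typically handed to OpenCV via `cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)`, keeping the capture-and-convert work on the GPU path (`memory:NVMM`) until the final handoff.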

40-pin GPIO Header

Same form factor as a Raspberry Pi header. Communicate with external hardware over I2C, SPI, UART, or digital I/O. Useful for triggering hardware or reading sensors alongside your vision pipeline.

M.2 NVMe Slot

Slot for a fast SSD. Important for storing large model weights, fast dataset loading during development, and a much snappier OS than running off microSD.

Cooling (Fan + Heatsink)

At 67 TOPS the Orin Nano Super runs hot under sustained inference load. The included active cooler keeps it from throttling during long training or inference sessions.
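A practical way to watch for throttling is to log `tegrastats`, Jetson's built-in utility, and pull out GPU load and temperature. The exact field layout varies across JetPack releases, so treat these regexes as assumptions to adjust against your own output:

```python
import re

# Sketch: extract GPU load and temperature from a tegrastats output line to
# watch for thermal throttling. The field layout varies across JetPack
# releases, so the regexes are assumptions -- check them against your output.

def parse_tegrastats(line: str) -> dict:
    stats = {}
    m = re.search(r"GR3D_FREQ (\d+)%", line)       # GR3D = the GPU engine
    if m:
        stats["gpu_util_pct"] = int(m.group(1))
    m = re.search(r"GPU@([\d.]+)C", line)          # GPU junction temperature
    if m:
        stats["gpu_temp_c"] = float(m.group(1))
    return stats

sample = "RAM 3051/7620MB ... GR3D_FREQ 87% ... CPU@48.2C GPU@51.6C"
print(parse_tegrastats(sample))  # {'gpu_util_pct': 87, 'gpu_temp_c': 51.6}
```

If utilization drops while temperature is pinned near its limit during a long run, the cooler is losing and the board is throttling.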


Mental Model

[ Jetson SOM ] ← the brain
      ↕ plugs into
[ Carrier Board ] ← the body
      ↕ connects to
[ Cameras / Storage / Network / Peripherals ]

In production you'd swap the devkit carrier for a custom or third-party board designed for your product's form factor — smaller, ruggedized, only the ports you need. For prototyping, the devkit carrier gives you everything in one.