The Unseen World: Why 32×32 Thermal Pixels Are More Powerful Than You Think
You hold in your hand a brand new smartphone, its camera boasting an almost absurd 108 megapixels—108 million points of light captured to form a stunningly detailed image. Next to it, you see an ad for an entry-level thermal imager. Its resolution? A seemingly pathetic 32 by 32 pixels. That’s not a typo. It captures a grand total of 1,024 pixels.
In a world of 4K televisions and gigapixel photography, this feels like a technological joke. It triggers an immediate, visceral reaction: “pixel anxiety.” Why is this technology so primitive? Is a picture with fewer pixels than a 1980s video game icon even useful?
This anxiety, while understandable, is based on a fundamental misunderstanding. Comparing a thermal imager’s resolution to a regular camera’s is like comparing the horsepower of a cargo ship to that of a Formula 1 car. Both are powerful, but they are designed for vastly different tasks, and judging them by the same metric is meaningless. The truth is, those 1,024 thermal pixels, when understood and used correctly, can reveal more about the world around you than millions of visual ones. To understand why, we first need to bust a few myths and dive into the fascinating physics of seeing the unseen.

Myth #1: Thermal Cameras “See Heat” Through Walls
The most common misconception is that thermal imagers have a magical ability to “see heat” as if it were a substance, allowing them to peer through solid objects. The reality is both more scientific and more interesting.
A thermal imager does not see heat. It sees infrared radiation, a spectrum of light invisible to our eyes. Everything in the universe with a temperature above absolute zero (-273.15°C) constantly emits this infrared light, and hotter objects simply emit more of it. Your camera is essentially a light meter for a color of light we can’t perceive.
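That “hotter objects emit more” relationship is sharply non-linear. By the Stefan-Boltzmann law, the power a surface radiates grows with the fourth power of its absolute temperature. Here is a minimal sketch in Python; the temperatures and emissivity value are illustrative assumptions, not measurements from any real survey:

```python
# Radiated power per unit area via the Stefan-Boltzmann law: M = emissivity * sigma * T^4
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_power(temp_celsius: float, emissivity: float = 0.95) -> float:
    """Watts radiated per square meter by a surface at the given temperature."""
    temp_kelvin = temp_celsius + 273.15
    return emissivity * SIGMA * temp_kelvin ** 4

wall = radiated_power(20.0)   # ordinary room-temperature drywall
patch = radiated_power(60.0)  # drywall warmed by a hot-water pipe behind it
print(f"wall: {wall:.0f} W/m^2, warm patch: {patch:.0f} W/m^2 "
      f"({patch / wall:.2f}x as much radiation)")
```

A patch just 40°C warmer radiates roughly two-thirds more energy per square meter, and that difference in invisible light is exactly what the camera measures.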
This is why a thermal camera absolutely cannot see through walls. It’s still just a camera capturing light. When you point it at a wall, it can’t see the hot water pipe inside. What it sees is the surface of the drywall being warmed up by the pipe through a process called conduction. You’re seeing the thermal shadow of the pipe on the wall’s surface, not the pipe itself. It’s a crucial distinction: you are always, without exception, reading the temperature of the surface you are pointed at.

The Heart of the Matter: Why Thermal Pixels Are So Big and “Few”
So, if it’s just a camera, why the low resolution? The answer lies in the fundamentally different way a thermal sensor works. Your phone camera has a CMOS sensor that captures photons of visible light. A thermal imager has a microbolometer array.
Let’s break that down. A microbolometer is, in essence, a microscopic thermometer. The “array” in your 32×32 imager is a grid of 1,024 of these tiny thermometers, all suspended in a vacuum on a silicon chip. When infrared radiation from the scene hits one of these micro-thermometers, it physically heats up by a minuscule fraction of a degree. This temperature change alters its electrical resistance, which is then measured, processed, and assigned a color or shade on your screen.
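The calibration from resistance change to temperature happens inside the sensor module, but the final step, turning a grid of temperature readings into shades on a screen, is simple enough to sketch. The frame below is mock data standing in for what a 32×32 array might report; the numbers are invented for illustration:

```python
import numpy as np

# Mock 32x32 frame of per-pixel temperatures (deg C), standing in for the values
# a camera derives from each micro-thermometer's change in resistance.
rng = np.random.default_rng(0)
frame = rng.normal(loc=21.0, scale=0.4, size=(32, 32))  # a mostly uniform wall
frame[10:14, 18:22] += 15.0                             # a warm patch behind the drywall

# Map each temperature to a 0-255 palette index between the coldest and hottest pixel.
# This is the "assign a color or shade" step; real cameras do equivalent scaling in firmware.
lo, hi = frame.min(), frame.max()
palette_index = np.round(255 * (frame - lo) / (hi - lo)).astype(np.uint8)

print(f"scene range: {lo:.1f} to {hi:.1f} deg C")
print("hottest pixel at row/column", np.unravel_index(frame.argmax(), frame.shape))
```

Notice that the picture is always rescaled to the scene in front of it: the same warm patch can look dramatic on a cool wall and nearly invisible next to a radiator.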
This process is vastly more complex and physically delicate than simply counting photons. Making these physical detectors smaller and packing more of them onto a chip is an immense engineering and physics challenge, which is why high-resolution thermal sensors are astronomically expensive, often reserved for military and scientific applications. At the entry level, 32×32 or 80×60 is the current pinnacle of affordability.

Rethinking Resolution: What Can 1,024 Pixels Actually Show You?
Here is the central argument: for most diagnostic tasks, you don’t need millions of pixels. You need just enough pixels to reveal a pattern.
Think about it. To know that a breaker in your electrical panel is dangerously hot, you don’t need to be able to read the amperage number printed on it. You just need to see a distinct, bright “blob” of heat where there should be none. To find a draft, you don’t need to see the individual dust motes in the air; you need to see the large, sweeping pattern of cold air intrusion across a window frame.
This highlights a key concept: the further away you are from an object, the larger the area each of your precious pixels has to cover. This is often called the “spot size.” If you’re three meters away from a wall, a single pixel on a 32×32 camera might be “seeing” an area several centimeters wide. This is why these cameras are fantastic for scanning large surfaces like walls, ceilings, and entire machines for significant anomalies. It’s also why they are not well-suited for inspecting tiny components on a circuit board from a distance—the pixel would be larger than the component itself, averaging out its temperature with its cooler surroundings and rendering the hot spot invisible.
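If you know the lens’s field of view, the spot size is easy to estimate: take the width of the scene at your working distance and divide it by the number of pixels across. The sketch below assumes a 33° horizontal field of view, a plausible figure for a small sensor rather than the specification of any particular camera:

```python
import math

def spot_size_cm(distance_m: float, fov_deg: float = 33.0, pixels_across: int = 32) -> float:
    """Approximate width, in centimeters, of the patch of surface seen by one pixel."""
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return 100.0 * scene_width_m / pixels_across

for d in (0.5, 1.0, 3.0):
    print(f"{d:.1f} m away -> each pixel covers about {spot_size_cm(d):.1f} cm")
```

At half a meter each pixel covers under a centimeter; at three meters it covers more than five. The same camera that maps a drafty window from across the room cannot resolve a single resistor from that distance.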
The low-resolution imager’s power lies in revealing large-scale patterns and significant temperature differences, which happen to be the most common problems in buildings and machinery.
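In practice, “revealing a pattern” usually comes down to Delta-T: the temperature difference between a spot and its surroundings, rather than any absolute value. Here is a minimal sketch of that idea, using an invented frame and an arbitrary 10°C threshold; real inspection guidelines set their own thresholds for different equipment:

```python
import numpy as np

def hot_spots(frame: np.ndarray, delta_t: float = 10.0) -> np.ndarray:
    """Boolean mask of pixels at least delta_t degrees above the scene median."""
    return frame > (np.median(frame) + delta_t)

# Invented 32x32 frame: a mostly uniform panel with one overheating breaker.
rng = np.random.default_rng(1)
frame = rng.normal(loc=30.0, scale=0.5, size=(32, 32))
frame[5:8, 12:15] += 25.0

mask = hot_spots(frame)
print(f"{mask.sum()} of {mask.size} pixels flagged as anomalous")
```

Even a blob of a few pixels is enough to say “this breaker is far hotter than its neighbors, investigate it,” which is the question the tool exists to answer.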

The Master Key to Accuracy: Why Emissivity Matters More Than Pixels
Now for the most critical point. You could have a thermal camera with a million pixels, but if you don’t understand the concept of emissivity, the beautiful, detailed image it produces will be a complete work of fiction.
As we’ve discussed, objects radiate infrared energy. Emissivity is a measure of how efficiently they do so, on a scale of 0 to 1. A perfect blackbody (a theoretical ideal) has an emissivity of 1.0. A perfect thermal mirror would have an emissivity of 0.
Matte, non-metallic objects (wood, paint, brick, skin) are high-emissivity (around 0.95). They are very good at radiating their own energy, so the camera gets a true reading. Shiny, reflective objects (polished steel, aluminum) are low-emissivity (often below 0.1). They are terrible at radiating their own energy and instead act like mirrors, reflecting the infrared radiation of everything around them.
An uneducated user with a high-resolution camera who points it at a shiny electrical bus bar will see a confusing, often cool, temperature reading that is mostly a reflection of the ceiling or their own face. An educated user with a 32×32 camera knows this reading is false. They will either adjust the camera’s emissivity setting to a very low number or, better yet, place a piece of high-emissivity electrical tape on the bar to get a reliable reading.
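The reason the shiny bar reads cool falls out of the radiometric mix the camera actually receives: part emitted by the surface itself, part reflected from the surroundings. The simplified sketch below ignores atmospheric effects and assumes the reflected background sits at a single known temperature; the 80°C bar and 22°C room are invented numbers for illustration:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def apparent_reading_c(surface_c: float, reflected_c: float, emissivity: float) -> float:
    """Temperature a camera left at emissivity = 1.0 would report for this surface."""
    t_surface = surface_c + 273.15
    t_reflected = reflected_c + 273.15
    # What the sensor receives: energy emitted by the surface plus reflected background energy.
    received = emissivity * SIGMA * t_surface**4 + (1.0 - emissivity) * SIGMA * t_reflected**4
    return (received / SIGMA) ** 0.25 - 273.15

# A polished bus bar actually at 80 deg C in a 22 deg C room:
print(f"bare metal (e ~ 0.10): {apparent_reading_c(80.0, 22.0, 0.10):.1f} deg C apparent")
print(f"with tape  (e ~ 0.95): {apparent_reading_c(80.0, 22.0, 0.95):.1f} deg C apparent")
```

The bare metal reports around 30°C, a dangerously reassuring number for a component at 80°C, while the taped patch reads within a few degrees of the truth.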
In the world of thermography, an understanding of emissivity is infinitely more valuable than a high pixel count. A low-resolution image of accurate data is a diagnostic tool. A high-resolution image of inaccurate data is just noise.

A Tool, Not a Toy
Let’s return to our initial anxiety. The feeling that 1,024 pixels is “not enough” comes from our experience with visual cameras, where the goal is to replicate reality with aesthetic detail. But a thermal imager is not a camera in that sense. It is a data visualization tool. Its purpose is not to create a beautiful picture, but to reveal the hidden patterns of energy that govern the comfort, safety, and efficiency of our world.
You are not buying a “bad” camera; you are buying a highly specialized instrument. By ceasing to count its pixels and instead learning to speak its language—the language of Delta-T, spot size, and emissivity—you transform it. It ceases to be a low-resolution camera and becomes a high-information data-gathering device. In this unseen world, the knowledge of the operator, not the number of pixels, is what grants true vision.