Image Classification
The Image Classification node enables you to classify images using ONNX models directly within Node-RED. It supports both pre-trained and custom models, allowing you to identify objects, detect scenes, or categorize images without requiring an external AI service.
This node is ideal for computer vision tasks such as image labeling, content moderation, or feature recognition at the edge.
Inputs
General
- Property: input
- Type: object, buffer, string, or tensor.
- Description: The input image or tensor to classify. See the Details section for supported input formats.
Model Selection
- model: Path to a local ONNX model file or the name of a model to download from Hugging Face.
- type: Data type used when loading the model (only applicable when using a model name). Supported types include q8 (default, quantized Int8), fp16 (Float16), fp32 (Float32), and others.
Note: When a model name is provided, the node automatically downloads and caches it locally if it is not already available.
Configuration
- topK: The number of top predictions to return. This can be set manually or passed dynamically via a message property.
- threshold: Minimum confidence score (0.0–1.0) required for predictions to be included in the output. Predictions below this score are filtered out. This value can also be provided dynamically through a message property.
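As a minimal sketch of dynamic configuration, the Function node below sets both options on the message before it reaches the classifier. The property names msg.topK and msg.threshold are assumptions for illustration; use whichever message properties the node is actually configured to read.

```javascript
// Illustrative sketch: set classification options dynamically.
// The property names (msg.topK, msg.threshold) are assumed here --
// match them to the properties configured on the classifier node.
msg.topK = 5;        // return at most the 5 best predictions
msg.threshold = 0.2; // filter out predictions scoring below 0.2
return msg;
```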
Outputs
- Property: payload
- Type: object or array
- Description: Contains the classification results returned by the model. The structure of the output depends on the model used.
Details
Supported Input Formats
The node supports multiple input formats depending on the model’s requirements:
- Buffer — Binary image data, typically from a file or camera input.
- Base64 string — Base64-encoded image data.
- Jimp Image Object — An image object (e.g., the output of node-red-contrib-image-tools).
- Tensor — A pre-processed tensor object in the following format:

```json
{
  "data": [0.0, 0.1, 0.2, ...],
  "type": "float32",
  "dim": [1, 3, 224, 224]
}
```
TIP: If the model supports batching, the input can be an array of images in one of the supported formats.
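As a minimal sketch (assuming a model that expects a 1×3×224×224 Float32 input), a Function node can wrap pre-processed pixel data in the tensor format shown above before passing it to the classifier:

```javascript
// Sketch: wrap normalized CHW pixel data in the tensor input format.
// Assumes msg.payload holds a flat array of 1*3*224*224 float values.
msg.payload = {
    data: msg.payload,
    type: "float32",
    dim: [1, 3, 224, 224]
};
return msg;
```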
Model Selection
The model property defines which ONNX model to use. You can either:
- Provide a local path (for example, /data/models/resnet50.onnx), or
- Specify a model name available on Hugging Face (for example, MobileNet-v3-Large).
When a model name is used, the node automatically downloads and caches it locally for reuse.
Model Type Options
- auto — Automatically selects the most suitable type.
- fp32 — Standard 32-bit floating-point model.
- fp16 — Half-precision 16-bit floating-point model.
- int8 — 8-bit integer quantized model.
- uint8 — 8-bit unsigned integer model.
- q8 — Quantized Int8 model (default).
- q4 — Quantized Int4 model.
- q4f16 — Mixed Int4/Float16 quantized model.
- bnb4 — BNB4 quantized model.
Configuration Options
- topK: Defines how many top predictions to return in the output. Use this to limit results to the most relevant classes.
- threshold: Filters predictions by their confidence score. Only predictions above the threshold are included. For example, with a threshold of 0.5, only the "golden retriever" prediction from the sample output below would be returned.
Example Output
```json
[
  {
    "label": "golden retriever",
    "score": 0.9812
  },
  {
    "label": "labrador retriever",
    "score": 0.0143
  },
  {
    "label": "cocker spaniel",
    "score": 0.0021
  }
]
```

Each object in the output array includes:
- label: The predicted class name.
- score: The confidence score for that prediction.
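As a usage sketch, a downstream Function node can act on these results; this assumes the array arrives in msg.payload sorted by descending score, as in the example above:

```javascript
// Sketch: forward only confident classifications downstream.
// Assumes msg.payload is an array of { label, score } objects
// sorted by descending score, as in the example output above.
const [top] = msg.payload;
if (top && top.score > 0.9) {
    node.status({ text: `${top.label} (${top.score.toFixed(2)})` });
    return msg; // pass the message on
}
return null;    // drop low-confidence results
```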
Notes
- The node supports any ONNX-compatible image classification model, such as ResNet, MobileNet, or Vision Transformer (ViT).
- Quantized models (q8, int8) are recommended for edge deployments due to improved performance and lower memory usage.
- Ensure that your ONNX model is trained for image classification and compatible with ONNX Runtime.
- When using a Hugging Face model name, ensure network connectivity during the first run so that the model can be downloaded and cached locally.