NunchakuFluxDiTLoader NoneType Object Is Not Subscriptable: How to Fix It

You're deep into a ComfyUI workflow, trying to get that perfect FLUX generation, and then it happens. A red box. A crash. The dreaded NunchakuFluxDiTLoader 'NoneType' object is not subscriptable error.

It's frustrating. Honestly, it’s one of those bugs that makes you want to walk away from the computer for a bit. But the fix is usually simpler than it looks. You've basically got a situation where the Nunchaku node is looking for a piece of data—like a model weight or a configuration key—and finding absolutely nothing instead.

In Python-speak, "subscriptable" just means something you can put square brackets after, like a list or a dictionary. When the loader says a 'NoneType' object isn't subscriptable, it means it tried to do something like model_data['transformer'] but model_data was actually None.
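
You can reproduce the underlying Python error in isolation, with no ComfyUI involved at all:

```python
# Simulate a loader scan that found nothing and returned None.
model_data = None

try:
    # The loader internally does a lookup like this on the scan result.
    transformer = model_data['transformer']
except TypeError as err:
    print(err)  # 'NoneType' object is not subscriptable
```

Every fix below, therefore, comes down to the same thing: making sure the scan step hands back a real model dictionary instead of None.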

Why NunchakuFluxDiTLoader Breaks

Most of the time, this happens because of how the Nunchaku engine handles SVDQuant models. Nunchaku isn't just a standard loader; it’s a high-performance inference library designed by the MIT HAN Lab to squeeze 4-bit performance out of FLUX.1 models.

If you feed it the wrong file type, it chokes.

I've seen this happen most often when people try to load a raw .safetensors file that wasn't properly prepared for the Nunchaku backend. Unlike the standard Load Diffusion Model node, the NunchakuFluxDiTLoader expects a specific structure. If you’re pointing it at a single transformer file when it expects a folder or a merged weight set, it returns None during the initial scan. Then, when the code tries to "subscript" that result to find the layers, the whole thing falls apart.

The Model Path Mismatch

Check your model_path immediately. If you are using the Nunchaku custom nodes in ComfyUI, you shouldn't necessarily be pointing it at your standard models/unet folder unless you’ve put the SVDQuant-specific files there.

Wait. Are you using the latest version?

GitHub issues from late 2025 and early 2026 suggest that a lot of users ran into this after a ComfyUI update. There was a specific bug in version 1.0.1 of the Nunchaku nodes where the loader would fail to find the model's internal dictionary if the attention parameter was set to flash-attention2 on a GPU that didn't fully support it.

Common Triggers

  • Missing Config Files: SVDQuant models often need a companion .json or config file in the same directory.
  • Incompatible CUDA: If your CUDA version doesn't match the Nunchaku wheel, the backend might fail to initialize, returning None to the node.
  • Wrong Node for the Job: Trying to load a standard FP8 or dev-version FLUX model through the Nunchaku loader instead of the Nunchaku-quantized version.
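
A pre-flight check can catch the first trigger before the loader ever runs. This is a minimal sketch, and the companion filename config.json is an assumption; the exact layout varies between SVDQuant releases, so adjust it to match your model's files:

```python
from pathlib import Path


def check_model_dir(model_path: str) -> list[str]:
    """Return a list of problems found before handing the path to the loader."""
    problems = []
    path = Path(model_path)
    if not path.exists():
        problems.append(f"model path does not exist: {path}")
        return problems
    # SVDQuant releases commonly ship a companion JSON config next to the
    # weights. The filename "config.json" is an assumption -- check the
    # layout of your specific model release.
    config = (path.parent / "config.json") if path.is_file() else (path / "config.json")
    if not config.exists():
        problems.append(f"missing companion config: {config}")
    return problems
```

Running a check like this before queueing the workflow turns a cryptic NoneType crash into a readable message.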

Step-by-Step Fixes

Don't just reinstall everything. That's a waste of time. Try these targeted fixes first.

1. The Merge Node Trick

If you're looking at a folder full of multiple .safetensors files (like the transformer blocks), you can't just point the loader at one of them. You often need to use the Nunchaku Merge Node. This takes that messy folder and presents it to the DiT loader as a single, subscriptable object.

2. Check Your Attention Settings

Inside the NunchakuFluxDiTLoader node, there's a dropdown for attention.

  • Change it from flash-attention2 to nunchaku-fp16.
  • Sometimes Flash Attention fails to initialize silently on certain 30-series or 40-series mobile GPUs, leading to the NoneType error.

3. Verify the SVDQuant Format

Nunchaku is picky. It specifically wants models quantized with the SVDQuant technique. If you're trying to use a GGUF or a standard bitsandbytes 4-bit model, it won't work. You’ll get that subscriptable error because the loader is searching for SVD-specific keys in the file header that just aren't there.
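
You can inspect those header keys yourself without loading any weights. The .safetensors format starts with an 8-byte little-endian length followed by that many bytes of JSON, so a few lines of standard library code are enough. Which key names mark a checkpoint as SVDQuant is model-dependent, so treat the filtering at the end as illustrative:

```python
import json
import struct


def read_safetensors_keys(path: str) -> list[str]:
    """Read tensor names from a .safetensors header without loading weights."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian uint64 giving the JSON header length.
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional bookkeeping entry, not a tensor.
    return [k for k in header if k != "__metadata__"]


# Example: peek at what the loader will see. The substrings below are
# guesses at quantization-related names, not a documented SVDQuant contract.
# keys = read_safetensors_keys("flux-svdq.safetensors")
# print([k for k in keys if "qweight" in k or "smooth" in k])
```

If the listing looks like a plain FP16/FP8 FLUX transformer, the Nunchaku loader was never going to accept it.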

Dealing with "Node Does Not Exist"

Sometimes the NoneType error is actually a symptom of the node failing to import correctly. If you look at your terminal and see "NunchakuFluxDiTLoader import failed," the subscriptable error is just the downstream result of ComfyUI trying to execute a ghost node.

Go to your custom_nodes/ComfyUI-nunchaku folder and run the install script manually: python install.py, or pip install -r requirements.txt.

The Nunchaku backend requires specific wheels that don't always install automatically through the ComfyUI Manager. If those wheels are missing, the loader object never gets created properly.
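
You can confirm from Python whether the wheel is actually present. This sketch assumes the backend installs under the package name nunchaku:

```python
import importlib.util


def backend_available(package: str = "nunchaku") -> bool:
    """Check whether a package can be imported, without importing it.

    find_spec returns None when the wheel was never installed, which is
    the usual reason the loader object never gets created.
    """
    return importlib.util.find_spec(package) is not None


# if not backend_available():
#     print("Nunchaku wheel missing -- rerun the install script.")
```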

Actionable Next Steps

If you're still stuck, do these three things right now:

  1. Check the Terminal: Look for the "Traceback" text. If the error mentions latent_image['samples'], it means your VAE or your latent input is the problem, not the loader itself.
  2. Update the Backend: Ensure your nunchaku python package matches your torch version. If you updated Torch recently, you must update the Nunchaku wheel.
  3. Pathing: Ensure your model_path points directly at the model file, and that the file ends in .safetensors.
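
For step 2, here's a quick way to gather the pair of versions you need to compare against the wheel's release notes. The package names used here are the usual PyPI names, which may differ in your environment:

```python
import importlib.metadata


def report_versions() -> dict[str, str]:
    """Collect the installed versions that must line up for the backend to load."""
    versions = {}
    for pkg in ("torch", "nunchaku"):
        try:
            versions[pkg] = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            versions[pkg] = "not installed"
    return versions
```

Paste the output of this into any bug report too; it's the first thing maintainers ask for.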

Getting FLUX to run at these speeds is incredible, but the tech is still a bit "bleeding edge." A little patience with the configuration goes a long way.