Maybe I am butting in where I'm not needed, and maybe I am misunderstanding whether (and what) the misunderstanding is here.
Here is what happens..
When you play live, your level/volume will be much louder than "bedroom level." Even if you go direct, the signal will be amplified. If two patches differ by some small amount at bedroom level, that difference becomes far more obvious as the level comes up; an offset that reads as a barely-noticeable 0.2-2 dB at ~50 dB SPL can land like a 3-6 dB gap at ~100 dB SPL, partly because the ear's sensitivity changes with level and partly because the nonlinear stages downstream don't treat every patch the same. Where you may not really discern a difference at low volume, at "stage/live volume" those differences WILL become (very) apparent. I made this mistake (once), and it was not fun.
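As a rough illustration of why "small" offsets matter (just dB arithmetic, nothing Axe-specific): a level difference in dB corresponds to a multiplicative amplitude ratio, so even a 2 dB patch mismatch is a real signal-level gap, and it only gets harder to ignore as everything gets louder.

```python
import math

def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level difference in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

def amplitude_ratio_to_db(ratio: float) -> float:
    """Convert a linear amplitude ratio back to a dB difference."""
    return 20 * math.log10(ratio)

# A 2 dB mismatch is already a ~26% amplitude difference;
# a 6 dB mismatch is roughly double the amplitude.
for db in (0.2, 2.0, 3.0, 6.0):
    print(f"{db:4.1f} dB -> amplitude x{db_to_amplitude_ratio(db):.3f}")
```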
So, what Scott is saying (once again.. I think) is that you should take your cleanest of clean patches.. as this will be the most dynamic (uncompressed), cutting sound.. and use that as your "target volume/SPL." Raise/lower all of your other patches to match it, verifying along the way that you are not clipping anywhere. You should now have a set of patches at *relative SPL unity*. If you then play at a lower volume, they still hold up under scrutiny, since any differences shrink as the volume drops. By setting everything up with the loudest scenario in mind, ANY scenario at that level or lower will, by default, give the same even result (whereas the inverse is absolutely NOT true!).
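That leveling pass can be sketched in a few lines, assuming you can capture a short clip of each patch at the same playing intensity. The patch names and the measured levels below are hypothetical, purely for illustration: measure each patch's RMS level in dB, then compute the trim needed to match the clean reference.

```python
import math

def rms_db(samples):
    """RMS level of a block of samples, in dB relative to full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def trims_to_reference(patch_levels_db, reference_patch):
    """dB trim to apply to each patch so it matches the reference patch's level."""
    target = patch_levels_db[reference_patch]
    return {name: round(target - level, 1) for name, level in patch_levels_db.items()}

# Hypothetical measured levels (dB); "Clean" is the loud, dynamic reference.
levels = {"Clean": -12.0, "Crunch": -9.5, "Lead": -7.8}
print(trims_to_reference(levels, "Clean"))
# Negative trims mean the hotter, compressed patches come DOWN to meet the clean.
```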
Personally, I find that what ccroyalsenders wrote
"I find that after unifying everything, I always go back and dial overdriven tones up 2-3 dB to account for the ear's perception of the more compressed, driven tones as 'quieter' than the punchy cleans."
can often be true. To check this, I have placed a mic in the room, set levels, and then played through my settings (not with the Axe).. and then jammed with the band. Listening back, I can get a decent idea of any little tweaks I might need to make. Experience makes those tweaks easier and more accurate.