You're fundamentally misunderstanding the way the input sensitivity works.
Very possible. To put it another way, my idea right now is this:
You're standing in front of a Fender amp with a low-output single-coil Strat and a high-output, humbucker-loaded Les Paul. If you measure the peak transients of each, you get (made-up values) 1.5 V on the Strat and 4.5 V on the LP.
When you plug into your Fender, you're sending it either 1.5 V or 4.5 V on hard-picked transients, depending on the guitar. The amp's topology, gain staging, and settings determine how that behaves, but you're likely to start clipping (overloading one of the tubes and getting distortion) sooner with the higher-output pickups, since they're putting out about 3x the voltage swing.
Now you plug into your Axe-Fx III and adjust the input sensitivity. Here you're adjusting the range the ADCs listen at, to get the largest possible signal relative to the noise. For the LP, you set it to read +/-5 V so your transients just tickle the red, and for the Strat you set it to +/-2 V to do the same.
The end result is that inside the Axe-Fx III, after adjusting sensitivity, both guitars produce a signal at roughly the same level (call it 0 dB; in practice it could be higher or lower, but similar to each other for the purposes of this thought experiment). Both signals now enter the Fender amp model at that same level, so where the 3x voltage swing previously ate into your headroom, both guitars are now hitting the model at a similar amplitude.
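Putting the made-up numbers above into dB terms (my arithmetic only, nothing from Fractal): a 3x voltage swing is about 9.5 dB, but once each guitar is scaled against its own ADC range, both land within a couple of dB of full scale.

```python
from math import log10

def db(ratio):
    """Voltage ratio expressed in decibels."""
    return 20 * log10(ratio)

strat_peak, lp_peak = 1.5, 4.5    # made-up transient peaks (V)
strat_range, lp_range = 2.0, 5.0  # made-up ADC full-scale settings (V)

print(f"raw difference: {db(lp_peak / strat_peak):.2f} dB")        # ~9.54 dB
print(f"Strat at ADC:   {db(strat_peak / strat_range):.2f} dBFS")  # ~-2.50 dBFS
print(f"LP at ADC:      {db(lp_peak / lp_range):.2f} dBFS")        # ~-0.92 dBFS
```

So a difference that started out at ~9.5 dB shrinks to under 2 dB after each sensitivity adjustment, which is the "equalization" I'm describing.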
But I do remember reading that the input sensitivity is configured so that it doesn't change the input signal level, it just lowers the noise floor, so maybe that is my fundamental misunderstanding.
And if that's the case, my question becomes: does that mechanism still preserve what was originally a 3x difference in transient amplitude, or does it equalize the two to some degree? If it equalizes them, it would be feeding the amp models a larger signal than they'd see in reality, which would explain people getting preamp distortion at lower settings than they're used to in the real world.
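For what it's worth, here's a sketch of how a level-compensated trim *could* work, which would match what I remember reading (all numbers made up, and this is my guess at the mechanism, not Fractal's documented design): an analog gain before the ADC is undone by an equal-and-opposite digital gain after it, so the level inside the box never changes; only the fixed converter noise moves relative to the signal.

```python
from math import log10

signal_v = 1.5       # Strat transient peak (made-up, from above)
adc_noise_v = 0.001  # fixed ADC noise floor (made-up)

for analog_gain in (1.0, 3.0):       # trim off vs. trimmed up ~9.5 dB
    at_adc = signal_v * analog_gain  # what the converter actually sees
    in_box = at_adc / analog_gain    # digital gain undoes the analog trim
    snr_db = 20 * log10(at_adc / adc_noise_v)  # noise is fixed at the ADC
    print(f"gain {analog_gain}x: level in box {in_box} V, SNR {snr_db:.1f} dB")
```

Under that scheme the in-box level stays at 1.5 V either way (so the 3x difference between guitars is preserved and the amp models see realistic relative levels), while the SNR improves by the full ~9.5 dB of trim.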
Or are they just not used to hearing the sound clinically, through headphones or a recording, because normally they'd be at stage volume?