A control would be a huge advantage, but you wouldn't need to use it if you didn't want to. Even Apple Music has a bunch of global EQs that can be helpful for a quick fix. An alternative could be a Fractal FRFR. Obviously the player is THE most important part, but the whole idea of modeling without a zero point is problematic. If everyone starts from a random place you can still get great results, but there's no reason not to add one.
Again, if you're LISTENING to an audio demo of someone else's Axe-Fx preset over the SAME SPEAKERS that you PLAY that preset through with your Axe-Fx, then you've controlled for the playback system. And if you're LISTENING to an audio demo of someone else's Axe-Fx preset in the SAME ROOM that you PLAY that preset in with your Axe-Fx, then you've controlled for the acoustic space. Understand? In both instances, the speakers and acoustic space you're using to hear the demo are the same speakers and acoustic space you're using to play the preset with your Axe-Fx, thus the speakers and room are controlled for because they're identical. Again... the primary variables that are NOT controlled for in that instance are the guitar, pickups, and playing style.
If your goal is for everyone to be able to hear exactly the same thing, the same way, on any playback system in any room, well, that's an entirely different discussion. It's also impossible. Trust me, if there were a system that allowed perfect translation of audio from one listening environment to any other listening environment, engineers wouldn't need to listen to their mixes on various playback systems and in different listening environments to ensure that their recordings translate properly.