I don't know how speech-in-noise recognition tests (and similar) really work or what they show about our ability to hear, but after more than a decade as a sound/music producer I do know how sound works.
Each millisecond of a sound contains a set of frequencies you need to hear in order to perceive it. If, for whatever reason, you hear some of those frequencies well and others badly, or you catch most but not all of them, your ability to perceive the sound degrades. It's the difference between hearing a voice in front of you and hearing it through a wall.
Word recognition, in essence, is the sum of those per-millisecond spectra: how well you hear the frequencies of each letter, vowel, and syllable of each word determines whether you can form an idea of what you just heard.
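That framing can be sketched numerically. The toy below (my illustration, not a clinical model; the sample rate, frame length, tone frequencies, and 500 Hz cutoff are all assumed) splits a synthetic "vowel" into short frames, takes the spectrum of each frame, then attenuates the higher frequencies the way high-frequency hearing loss might, and reports how much of the spectral content survives:

```python
import numpy as np

fs = 8000                       # sample rate in Hz (assumed)
t = np.arange(fs) / fs          # 1 second of audio
# Toy "vowel": a 200 Hz fundamental plus harmonics at 400 and 800 Hz.
signal = (np.sin(2 * np.pi * 200 * t)
          + 0.5 * np.sin(2 * np.pi * 400 * t)
          + 0.25 * np.sin(2 * np.pi * 800 * t))

frame_len = 256                 # ~32 ms frames at 8 kHz
n_frames = len(signal) // frame_len
frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)

# Magnitude spectrum of each short frame (one spectrum per "slice" of time).
spectra = np.abs(np.fft.rfft(frames, axis=1))
freqs = np.fft.rfftfreq(frame_len, d=1 / fs)

# Simulate high-frequency hearing loss: attenuate everything above 500 Hz.
heard = spectra.copy()
heard[:, freqs > 500] *= 0.1

fraction = heard.sum() / spectra.sum()
print(f"fraction of spectral content still heard: {fraction:.2f}")
```

The point of the sketch is only that losing part of the spectrum in each frame removes part of the information every frame carries, which is the mechanism behind the intelligibility loss described above.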
So I believe it's impossible to see improvement in word recognition without improvement in audiograms, or in the frequency spectrum more generally. Let's hope further trials show something like this.