As shipped, the speech recognition engine responds to any speaker, but it makes too many mistakes to be useful in practice. To get acceptable accuracy, the engine must be trained for a particular user, or more precisely for a particular voice. Training can be performed for several speakers, and it is possible to switch between them. Because training also adapts to characteristics of the microphone, it is sometimes recommended to keep separate speaker trainings for different microphones. When the engine misrecognises a word, the training for that word should be repeated to improve accuracy in the future.
Currently, training and corrective training can only be performed with ViaVoice Dictation. xvoice cannot yet perform engine training itself, but it can use an already trained engine (speakers are switched either through ViaVoice Dictation or by editing a text configuration file). Retraining the engine on misrecognised words is likewise not possible from within xvoice.
In principle, the SDK makes it possible to write an application that performs engine training. There is always scope for a capable programmer to work on xvoice...