Windows Phone 8 and Text-to-Speech

Speech recognition and text-to-speech are useful features that developers can use in their Windows Phone apps to prompt the user for input via speech and to read text back to the user.

Text-to-Speech in a Windows Phone App

To use text-to-speech in a Windows Phone app, the following capability must be enabled in the WMAppManifest.xml file.

<Capability Name="ID_CAP_SPEECH_RECOGNITION" />

There are two ways in which text-to-speech can be implemented in your Windows Phone app:

1. Providing the input to the speech synthesizer as simple text.

2. Using a Speech Synthesis Markup Language (SSML) file as input to the speech synthesizer.

1. Providing the input to the Speech Synthesizer via simple text

To integrate text-to-speech in your app, instantiate the SpeechSynthesizer class and call its SpeakTextAsync method, passing the text for the phone to speak.

public async System.Threading.Tasks.Task CallTextToSpeech()
{
    SpeechSynthesizer speech = new SpeechSynthesizer();
    await speech.SpeakTextAsync("Welcome to developerpublish.com");
}
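As a quick usage sketch, you could call this method from a button's Click event handler on your page (the SpeakButton_Click handler name below is hypothetical):

// Hypothetical Click handler in the page's code-behind (requires System.Windows).
private async void SpeakButton_Click(object sender, RoutedEventArgs e)
{
    await CallTextToSpeech();
}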

These examples use the phone's default text-to-speech voice.
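If you want something other than the default voice, you can pick one of the voices installed on the phone. Below is a minimal sketch, assuming at least one English female voice is installed; it queries InstalledVoices.All and passes the selection to SetVoice before speaking.

// Requires: using System.Linq; and using Windows.Phone.Speech.Synthesis;
public async System.Threading.Tasks.Task CallTextToSpeechWithVoice()
{
    SpeechSynthesizer speech = new SpeechSynthesizer();

    // Look for an installed English female voice; fall back to the default voice if none is found.
    var voice = InstalledVoices.All
        .FirstOrDefault(v => v.Language.StartsWith("en") && v.Gender == VoiceGender.Female);

    if (voice != null)
    {
        speech.SetVoice(voice);
    }

    await speech.SpeakTextAsync("Welcome to developerpublish.com");
}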

2. Speech Synthesis Markup Language File

Instead of a simple text string, you can also use SSML (Speech Synthesis Markup Language) to define the speech for your app.

An SSML file is a simple XML file with predefined tags such as speak and voice that are used to define the speech.

Below is a sample SSML file.

<?xml version="1.0" encoding="utf-8" ?>
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice gender="male">
    Welcome to MobileOSGeek.com
  </voice>
  <voice gender="female">
    This is a Speech Synthesis Markup Language File sample
  </voice>
</speak>

The speak element is the root of the SSML document; its xml:lang attribute defines the default speaking language and its xmlns attribute defines the namespace for the SSML synthesizer.

The voice element defines the speaking voice to be used for the text it contains.

More details on the Speech Synthesis Markup Language and its supported tags can be found on the Speech Synthesis Markup Language (SSML) Version 1.0 page at W3.ORG.

Using the SSML File with the Speech Synthesizer

The final step is to integrate the speech defined in the SSML file into your C# code. Create an instance of the SpeechSynthesizer, build a URI pointing to the SSML file, and pass it to the SpeakSsmlFromUriAsync method.

public async Task CallTextToSpeechusingSSML()
{
    SpeechSynthesizer speech = new SpeechSynthesizer();
    string path = "ms-appx:///SSMLSample.xml";
    Uri uri = new Uri(path, UriKind.RelativeOrAbsolute);
    await speech.SpeakSsmlFromUriAsync(uri);
}

When we run the application, the device picks up the text from the SSML file and reads it out for us.
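Note that for the ms-appx URI above to resolve, the SSML file needs to be included in the project (typically with its Build Action set to Content). If you prefer not to ship a separate XML file, the synthesizer can also speak SSML supplied as a plain string. Below is a minimal sketch using SpeakSsmlAsync; the markup is just an inline version of the sample file above.

public async System.Threading.Tasks.Task CallTextToSpeechUsingSsmlString()
{
    SpeechSynthesizer speech = new SpeechSynthesizer();

    // Build the SSML markup in code instead of loading it from a content file.
    string ssml =
        "<speak version=\"1.0\" xmlns=\"http://www.w3.org/2001/10/synthesis\" xml:lang=\"en-US\">" +
        "<voice gender=\"female\">This markup was built in code.</voice>" +
        "</speak>";

    await speech.SpeakSsmlAsync(ssml);
}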
