The key to the Speak N Spell actually working the way it had to was the speech synthesis. At the time the program started, real-time speech synthesis was considered an impossible task at any level of technology, let alone in a "toy". But by putting the heads of two technologists, Richard Wiggins and Larry Brantingham, together, the appropriate compromises were made to give voice to the product.

Introduction

The state of the art for implementing real-time speech synthesis in a single IC was well defined – it was impossible. These claims were based on the use of the higher-performance IC technology known as NMOS. All we had available to us in TI’s Consumer Group was a lower-performance, much cheaper IC technology known as PMOS. Not surprisingly, PMOS was significantly slower than NMOS. But when all was accomplished, the collaborative effort of Richard and Larry got the LPC-10 speech synthesis algorithm running using PMOS. What follows in this chapter is a summary of the tradeoffs and compromises made in order to make the Speak N Spell happen.

A quick summary of the compromises:

  • Sample rate
  • Frame rate
  • Multiplier size and data word size
  • Number of coefficients
  • Bits assigned to each coefficient
  • Other tricks

Before discussing the compromises and tradeoffs, it is best to give a quick introduction to the concept of LPC here.

Linear predictive coding

After some discussions and analysis (see Richard Wiggins’ Engineering Notebook in Appendix 2) we settled on linear predictive coding.

Appendix 2 is Richard’s complete set of entries into his Engineering Notebook on the Speak N Spell. Here is an excerpt from page 3 of Richard’s Engineering Notebook that does a good job of summarizing his thought process.

“The term linear predictive coding (LPC) refers to the highly successful representation of a human speech waveform as the output of an all-pole filter excited by either periodic pulses (for voiced speech) or by white noise (for unvoiced speech). In these systems, the compressed parameters are the filter coefficients (usually 10 or 12 numbers) and the excitation description (energy level, voiced/unvoiced parameter, and if voiced, a pitch period). This technique has had much recent success in communications systems where good quality speech transmission has been demonstrated at 2400 bits/second. In such systems, the LPC parameters are determined from segments or “frames” of digitized speech, usually about 20 milliseconds in length. Hence a description of the waveform is sent about 50 times a second, with each description being about 48 bits in length.”
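The numbers in the excerpt fit together as simple arithmetic: frames of roughly 20 milliseconds give about 50 frames per second, and at about 48 bits per frame this works out to the 2400 bits/second figure quoted. A few lines of Python make the bookkeeping explicit (values taken directly from the excerpt, not from the Speak N Spell itself):

```python
# Bit-rate arithmetic implied by the LPC description above.
frame_length_ms = 20                            # each analysis frame is ~20 ms
frames_per_second = 1000 // frame_length_ms     # -> 50 frames per second
bits_per_frame = 48                             # coefficients + excitation description

bit_rate = frames_per_second * bits_per_frame
print(frames_per_second, "frames/s ->", bit_rate, "bits/s")  # 50 frames/s -> 2400 bits/s
```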

The particular solution was called LPC-10 as it had ten coefficients. This was determined to be sufficient for our 8 kHz sample rate. As I remember, the reasoning for 10 coefficients was that the number of coefficients for LPC needed to be the sample rate (in kHz) plus two; in this case 8 + 2 = 10. I also seem to remember another reason for 10 coefficients: the idea of needing two coefficients for each formant and two more for the general form of the speech envelope. Figure 1 shows the simplified block diagram for the synthesis algorithm. There were two inputs to the actual filter: the filter coefficients and the excitation description.
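The synthesis side described above reduces to an all-pole (IIR) filter driven by an excitation signal – periodic pulses for voiced frames, white noise for unvoiced. A minimal sketch in Python, with placeholder coefficient values (not actual Speak N Spell data), shows the structure:

```python
def lpc_synthesize(a, excitation):
    """All-pole synthesis filter: y[n] = x[n] - sum_k a[k] * y[n-1-k].

    `a` holds the predictor coefficients (10 of them for LPC-10);
    `excitation` is the input: pitch pulses (voiced) or noise (unvoiced).
    """
    order = len(a)
    history = [0.0] * order              # past outputs y[n-1] .. y[n-order]
    output = []
    for x in excitation:
        y = x - sum(a[k] * history[k] for k in range(order))
        history = [y] + history[:-1]     # shift the delay line
        output.append(y)
    return output

# Placeholder 10-coefficient filter (illustrative only, a single decaying pole).
coeffs = [-0.9] + [0.0] * 9

# Voiced excitation at an 8 kHz sample rate: one pulse every 100 samples,
# i.e. an 80 Hz pitch period.
voiced = [1.0 if n % 100 == 0 else 0.0 for n in range(400)]
samples = lpc_synthesize(coeffs, voiced)
```

Each pulse rings through the filter and decays until the next pitch pulse arrives, which is exactly the voiced-speech behavior the excerpt describes.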

Source:  OpenStax, The speak n spell. OpenStax CNX. Jan 31, 2014 Download for free at http://cnx.org/content/col11501/1.5