Listing 1. Beginning of the class named SquareWave.
import java.io.*;
import java.nio.*;
import java.util.*;

public class SquareWave extends AudioSignalGenerator02{

  public SquareWave(AudioFormatParameters01 audioParams,
                    String[] args,
                    byte[] melody){
    super(audioParams, args, melody);
  }//end constructor

Beginning of the getMelody method

This method returns an array containing three seconds of monaural audio data for a square wave at 1000 Hz.

Listing 2 shows the beginning of the overridden getMelody method. (Recall that an abstract version of this method is inherited from the class named AudioSignalGenerator02 -- see Listing 9.)

The code in Listing 2 is essentially the same as the corresponding WhiteNoise code from the earlier module. Therefore, I won't discuss it further in this module.

Listing 2. Beginning of the getMelody method.
byte[] getMelody(){
  //Recall that the default for channels is 1 for mono.
  System.out.println("audioParams.channels = " + audioParams.channels);

  //Each channel requires two 8-bit bytes per 16-bit sample.
  int bytesPerSampPerChan = 2;

  //Override the default sample rate. Allowable sample rates are 8000,
  // 11025, 16000, 22050, and 44100 samples per second.
  audioParams.sampleRate = 8000.0F;

  //Set the length of the melody in seconds.
  double lengthInSeconds = 3.0;

  //Create an output data array sufficient to contain the melody at
  // "sampleRate" samples per second, "bytesPerSampPerChan" bytes per
  // sample per channel, and "channels" channels.
  melody = new byte[(int)(lengthInSeconds*audioParams.sampleRate*
                          bytesPerSampPerChan*audioParams.channels)];
  System.out.println("melody.length = " + melody.length);
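
Listing 2 ends before any audio values are written into the array. As a preview of what such code must accomplish, here is a rough sketch of how a loop might fill the array with a 1000 Hz square wave at the 8000 samples-per-second rate set above. This is not necessarily the code used in this module, and the names freq, samplesPerHalfCycle, and amplitude are illustrative only.

//Rough sketch only: fill the melody array with a 1000 Hz square wave.
double freq = 1000.0;
//Number of samples in one half-period of the wave (4 at 8000 sps).
int samplesPerHalfCycle = (int)(audioParams.sampleRate/freq/2.0);
short amplitude = 8000;//arbitrary volume, well below Short.MAX_VALUE
for(int sample = 0; sample < melody.length/2; sample++){
  //Toggle between +amplitude and -amplitude every half cycle.
  short value = ((sample/samplesPerHalfCycle) % 2 == 0)
                ? amplitude : (short)-amplitude;
  //Store the 16-bit value as two bytes, most significant byte first.
  melody[2*sample] = (byte)(value >> 8);
  melody[2*sample + 1] = (byte)(value & 0xFF);
}//end for loop
return melody;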

Required audio data format

As you learned in earlier modules, an object of the AudioPlayOrFile01 class accepts an object of the AudioFormatParameters01 class, an audio data array of type byte[], and a String object containing a file name. It uses that information either to play the data in the audio array immediately or to write it into an audio output file of type AU.

Normally the audio data array must be formatted in a specific way, as partially defined by the contents of the AudioFormatParameters01 object. In the case of white noise, however, there is no order or organization to the bytes of audio data. Therefore, our only requirement was to ensure that the proper number of signed random byte values was used to populate the array.
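
For instance, once the melody array has been created with the correct length, the white-noise case could be handled with a single call to the nextBytes method of the java.util.Random class. This is an illustrative sketch, not necessarily the code used in the earlier module:

//Illustrative only: fill the entire array with signed random byte values.
new java.util.Random().nextBytes(melody);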

That is not the case in this module. Our audio data is organized, and we must meet the required format for the audio array.

Given the values that we are using in the AudioFormatParameters01 object, the format requirements for monaural and stereo are shown below. (Note that in both cases, each audio value must be a signed 16-bit value decomposed into a pair of 8-bit bytes.)

Monaural, channels = 1

For mono, each successive pair of bytes in the array must contain one audio value. The element with the lower index must contain the most significant eight bits of the 16-bit audio value.
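
In code, the decomposition of one 16-bit audio value into such a big-endian byte pair might look like the following sketch. The names audioValue and pair are illustrative, not taken from the original listings:

short audioValue = 12345;           //one signed 16-bit audio sample
byte[] pair = new byte[2];
pair[0] = (byte)(audioValue >> 8);  //most significant 8 bits, lower index
pair[1] = (byte)(audioValue & 0xFF);//least significant 8 bits, higher index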

Stereo, channels = 2

For stereo, alternating pairs of bytes must each contain one audio value in the same byte order as for mono. One pair of bytes is routed to the left speaker and the other pair of bytes is routed to the right speaker (almost) simultaneously.
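
As a hypothetical sketch, and assuming the common convention that the left-channel pair occupies the first two bytes of each four-byte frame, the interleaving of one left/right sample pair might look like this (the variable names are illustrative only):

short leftValue = 8000;    //sample destined for the left speaker
short rightValue = -8000;  //sample destined for the right speaker
byte[] frame = new byte[4];//one stereo frame: left pair, then right pair
frame[0] = (byte)(leftValue >> 8);   //left, most significant byte
frame[1] = (byte)(leftValue & 0xFF); //left, least significant byte
frame[2] = (byte)(rightValue >> 8);  //right, most significant byte
frame[3] = (byte)(rightValue & 0xFF);//right, least significant byte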





Source: OpenStax, Accessible objected-oriented programming concepts for blind students using java. OpenStax CNX. Sep 01, 2014. Download for free at https://legacy.cnx.org/content/col11349/1.17