The reconstruction can only occur by means of a filter that cancels out all spectral images except for the one directly coming from the original continuous-time signal. In other words, the canceled images are those having frequency components higher than the Nyquist frequency, defined as $\frac{{F}_{\mathrm{s}}}{2}$ . The condition required by the sampling theorem is equivalent to saying that no overlaps between spectral images are allowed. If such superimpositions were present, it would not be possible to design a filter that eliminates the copies of the original spectrum. In case of overlapping, a filter that eliminates all frequency components higher than the Nyquist frequency would produce a signal that is affected by aliasing . The concept of aliasing is well illustrated in the Aliasing Applet , where a continuous-time sinusoid is subject to sampling. If the frequency of the sinusoid is too high compared to the sampling rate, we see that the waveform reconstructed from the samples is not the original sinusoid, as it has a much lower frequency. We are all familiar with aliasing as it shows up in moving images, for instance when the wagon wheels in western movies appear to spin backward. In that case, the sampling rate is given by the frame rate , or number of pictures per second, and has to be related to the spinning velocity of the wheels. This is one of several stroboscopic phenomena.
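This identity of samples can be checked numerically. The following sketch (plain Java; the values $F_s = 44100$ Hz and $f = 43100$ Hz are hypothetical choices, with $f$ deliberately above the Nyquist frequency) compares the samples of a high-frequency cosine with those of its low-frequency alias at $F_s - f = 1000$ Hz:

```java
public class AliasDemo {
    public static void main(String[] args) {
        double Fs = 44100.0;      // sampling rate [Hz]
        double f = 43100.0;       // sinusoid frequency, above Fs/2 = 22050 Hz
        double fAlias = Fs - f;   // 1000 Hz: the frequency we actually perceive
        double maxDiff = 0.0;
        // samples of cos(2*pi*f*n/Fs) coincide with samples of cos(2*pi*(Fs-f)*n/Fs),
        // because cos(2*pi*n - x) = cos(x) for every integer n
        for (int n = 0; n < 64; n++) {
            double high = Math.cos(2 * Math.PI * f * n / Fs);
            double low = Math.cos(2 * Math.PI * fAlias * n / Fs);
            maxDiff = Math.max(maxDiff, Math.abs(high - low));
        }
        System.out.println("max sample difference: " + maxDiff);
    }
}
```

Since the two sequences of samples are identical, no filter applied after sampling can recover the original 43100 Hz sinusoid.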
In the case of sound, in order to become aware of the consequences of the $2\pi $ periodicity of discrete-time signal spectra (see [link] ) and of violations of the condition of the sampling theorem, we examine a simple case. Let us consider a sound that is generated by a sum of sinusoids that are harmonics (i.e., integer multiples) of a fundamental. The spectrum of such a sound displays peaks corresponding to the fundamental frequency and to its integer multiples. To give a concrete example, imagine working at the sampling rate of $44100$ Hz and summing $10$ sinusoids. From the sampling theorem we know that, in our case, we can represent without aliasing all frequency components up to $22050$ Hz. So, in order to avoid aliasing, the fundamental frequency should be lower than $2205$ Hz. The Processing (with Beads library) code reported in table [link] implements a generator of sounds formed by $10$ harmonic sinusoids. To produce such sounds it is necessary to click on a point of the display window. The x coordinate varies with the fundamental frequency, and the window shows the spectral peaks corresponding to the generated harmonics. When we click on a point whose x coordinate is smaller than $\frac{1}{10}$ of the window width, we still see ten distinct spectral peaks. Otherwise, we violate the sampling theorem and aliasing enters our representation.
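The position at which each aliased harmonic reappears can be predicted by folding its frequency back into the baseband $[0, \frac{{F}_{\mathrm{s}}}{2}]$ . A minimal sketch (plain Java; the fundamental of $3000$ Hz is a hypothetical value chosen to violate the $2205$ Hz bound):

```java
public class HarmonicAliasing {
    // fold a frequency into the baseband [0, Fs/2]: the position where it
    // appears in the spectrum of the sampled signal
    static double alias(double f, double Fs) {
        double r = f % Fs;
        return (r > Fs / 2) ? Fs - r : r;
    }

    public static void main(String[] args) {
        double Fs = 44100.0; // sampling rate [Hz]
        double f0 = 3000.0;  // fundamental above the 2205 Hz limit -> aliasing
        for (int k = 1; k <= 10; k++) {
            System.out.printf("harmonic %2d: %6.0f Hz -> appears at %6.0f Hz%n",
                              k, k * f0, alias(k * f0, Fs));
        }
    }
}
```

For instance, the 8th harmonic at $24000$ Hz folds back to $44100 - 24000 = 20100$ Hz, so the upper harmonics land at spurious positions interleaved with the correctly represented ones.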
Aliasing test: Applet to experience the effect of aliasing on sounds obtained by summation of 10 sinusoids in harmonic ratio
import beads.*; // import the beads library
import beads.Buffer;
import beads.BufferFactory;

AudioContext ac;
PowerSpectrum ps;
WavePlayer wavetableSynthesizer;
Glide frequencyGlide;
Envelope gainEnvelope;
Gain synthGain;

int L = 16384;      // buffer size
int H = 10;         // number of harmonics
float freq = 10.00; // fundamental frequency [Hz]
Buffer dSB;

void setup() {
  size(1024, 200);
  frameRate(20);
  ac = new AudioContext(); // initialize AudioContext and create buffer
  frequencyGlide = new Glide(ac, 200, 10); // initial freq, and transition time
  dSB = new DiscreteSummationBuffer().generateBuffer(L, H, 0.5);
  wavetableSynthesizer = new WavePlayer(ac, frequencyGlide, dSB);
  gainEnvelope = new Envelope(ac, 0.0); // standard gain control of AudioContext
  synthGain = new Gain(ac, 1, gainEnvelope);
  synthGain.addInput(wavetableSynthesizer);
  ac.out.addInput(synthGain);
  // short-time Fourier analysis
  ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
  sfs.addInput(ac.out);
  FFT fft = new FFT();
  sfs.addListener(fft);
  ps = new PowerSpectrum();
  fft.addListener(ps);
  ac.out.addDependent(sfs);
  ac.start(); // start audio processing
  gainEnvelope.addSegment(0.8, 50); // attack envelope
}

void mouseReleased() {
  println("mouseX = " + mouseX);
}

void draw() {
  background(0);
  text("click and move the pointer", 800, 20);
  frequencyGlide.setValue(float(mouseX)/width*22050/10); // set the fundamental frequency
  // the 10 factor is empirically found
  float[] features = ps.getFeatures(); // from Beads analysis library
  // it will contain the PowerSpectrum:
  // array with the power of 256 spectral bands
  if (features != null) { // if any features are returned
    for (int x = 0; x < width; x++) {
      int featureIndex = (x * features.length) / width;
      int barHeight = Math.min((int)(features[featureIndex] * 0.05 * height), height - 1);
      stroke(255);
      line(x, height, x, height - barHeight);
    }
  }
}

public class DiscreteSummationBuffer extends BufferFactory {
  public Buffer generateBuffer(int bufferSize) { // Beads generic buffer
    return generateBuffer(bufferSize, 10, 0.9f); // default values
  }
  public Buffer generateBuffer(int bufferSize, int numberOfHarmonics, float amplitude) {
    Buffer b = new Buffer(bufferSize);
    for (int k = 0; k <= numberOfHarmonics; k++) { // additive synthesis
      for (int i = 0; i < b.buf.length; i++) {
        b.buf[i] = b.buf[i] + (float)Math.sin(i*2*Math.PI*freq*k/b.buf.length)/20;
      }
    }
    return b;
  }
  public String getName() { // mandatory method implementation
    return "DiscreteSummation";
  }
}
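The generateBuffer method above fills the wavetable with a sum of sinusoids. The same computation can be reproduced outside Processing and Beads with a plain array; this is a sketch under the same values used in the listing (size $16384$, $10$ harmonics, freq $= 10$), not part of the Beads API:

```java
public class WavetableDemo {
    // fill a table with a sum of equal-weight harmonics, mirroring what
    // generateBuffer computes (the k = 0 term is skipped: sin(0) contributes nothing)
    static float[] generate(int size, int numberOfHarmonics, float freq) {
        float[] buf = new float[size];
        for (int k = 1; k <= numberOfHarmonics; k++) {
            for (int i = 0; i < size; i++) {
                buf[i] += (float) Math.sin(i * 2 * Math.PI * freq * k / size) / 20;
            }
        }
        return buf;
    }

    public static void main(String[] args) {
        float[] table = generate(16384, 10, 10.0f);
        System.out.println("table length: " + table.length);
        System.out.println("first sample: " + table[0]); // all sines start at zero
    }
}
```

Because freq is 10, each "harmonic" k actually completes $10k$ periods over the table, so the sound produced by the WavePlayer is ten times higher than the Glide frequency; this is why draw() divides the target frequency by 10.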