IEEE ©1998 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.



Developing Real-Time Digital Audio Effects for Electric Guitar in an Introductory Digital Signal Processing Class

Mauro J. Caputi, Member, IEEE


Abstract - This paper describes the challenges of an innovative addition to an existing introductory digital signal processing lecture class: the development of real-time digital audio effect algorithms. The basic course structure is discussed, and the software, hardware, and two audio effect algorithms are described. The quality of the effect algorithms is compared with that of a commercially available effects unit at auditions held during a student project demonstration day.


I. Introduction

Over the past two decades there has been a tremendous amount of work done in the area of digital audio. Many common products such as CD players, electronic music synthesizers, PC sound cards, and home stereo equipment use digital signal processing (DSP) techniques. Of course, all of these products perform digital audio signal processing in real-time, requiring high sample rates (typically 44.1 kHz), fast hardware processors, efficient optimized code, and large amounts of memory. Several Electrical Engineering departments, such as those at Rutgers and Rose-Hulman, have incorporated audio topics in their DSP courses, typically using separate lecture and laboratory courses.

An introductory DSP elective course at Hofstra University has recently been enhanced by involving the undergraduate Electrical Engineering students in the development of real-time audio algorithms. Due to the inherent complexities involved, incorporating real-time audio DSP within an existing introductory DSP lecture class is a challenging innovation for an instructor. A first course ought to primarily emphasize DSP fundamentals such as the sampling theorem, the discrete Fourier Transform, difference equations, z-transforms, FIR and IIR filters, and other theoretical underpinnings of the field. All of this fundamental theory can be taught without the use of any computers or hardware at all. However, seeing how DSP principles work in an actual computer application is a great learning tool for students, who can realize the tangible and audible results of classroom work with satisfying immediacy.

For instructional purposes, a great deal of excellent application work can be accomplished using non real-time simulation of DSP algorithms. This has proven to be very popular for several reasons. Many fine software products (MATLAB, for example) are available in low-cost Student Editions which provide specialized DSP routines to help simplify the programming burden. No hardware other than a PC is required, and simulation is an excellent visual aid for enhancing student insight into DSP techniques. The main focus of the course can still be acquiring an understanding of the significance of DSP fundamentals. Any algorithms that are simulated do not necessarily have to be the most efficient, since demonstrating how the theory works is the main focus, not how fast a matrix can be inverted in real-time, for example.

But even though non real-time simulation is very effective, I still wanted to include a real-time audio DSP component in the course, to increase the level of student involvement in applications. Granted, pre-recorded audio files can be used as input to simulation routines and the results can be stored in a file for later playback. But this is not nearly as cool as playing an electric guitar connected to a DSP card and hearing the immediate results of an audio effect through loudspeakers.

Students become very enthusiastic about taking the DSP course knowing that they will apply their knowledge to real-time electric guitar effects. The sound of an electric guitar is commonly modified to fit the tone of many different styles of music such as rock, blues, jazz, and pop. Audio effects such as flanging, chorus, phasing, and delay are all used to enhance the electric guitar sound. In the past, these effects were implemented using only analog electronic methods. Today, using very affordable DSP hardware, these effects can be implemented extremely well using digital audio techniques.

The main difficulty in doing real-time DSP is that it increases the "effort burden" of work by several orders of magnitude. The course now has to be expanded to include topics such as DSP hardware architecture, C and/or Assembly language, using compiler or assembler software, and algorithm coding techniques. One can easily lose sight of the DSP fundamentals if the course transforms into a Microprocessor Hardware course. Trying to write and implement a digital filter routine would be hampered by long cycles of coding, debugging and re-coding to the point where the original focus on digital filter principles would be lost.

In the real world, real-time DSP algorithms are implemented in C and assembler language. Factors such as processing speed, memory constraints, compiler effectiveness, and hardware configurations all constrain the code to be as efficient and optimal as possible. Much more emphasis is placed on these factors than on the underlying theory itself. It is probably accurate to say that if the main emphasis is on producing efficient optimal code, a full understanding of the underlying DSP theory is not completely essential.

A separation of these two issues becomes apparent. Engineers well trained in DSP theory are needed to apply the proper theory in formulating a solution to a particular application area. Other engineers or computer scientists well trained in numerical algorithms and efficient coding practices are needed to effectively implement the solution in good working code. Combining both of these areas of expertise provides the solution that leads to actual working products.


II. Course Structure and Pedagogy

The DSP course is an elective for senior-level Electrical Engineering students. The prerequisite is linear system theory, including both Fourier and Laplace transform methods. It is a one-semester (fifteen-week), three-credit-hour lecture course. Most of the lecture periods are held in a regular classroom setting, but a few are held in the Digital Audio Signal Processing laboratory, a special-purpose laboratory space developed under the supervision of the author and dedicated primarily to the real-time processing needs of the DSP course. During these periods, the students are guided through several tutorials, gaining experience with the two main software packages used in the course.

The course begins with Sampling and Reconstruction of analog signals. It proceeds to a thorough discussion of the Discrete-Time Fourier Transform, with an emphasis on frequency resolution and windowing effects. The students are not only taught how to calculate values using the necessary equations, but are also presented with an integrated understanding of the concepts behind the equations. This enables them to conduct a more meaningful and significant frequency response analysis. Discrete-time systems and z-transforms are introduced next, building on their previous linear system theory background. These topics are drawn together through a detailed presentation of several equivalent mathematical descriptions of discrete-time Transfer Functions.

After these important fundamentals are covered, the main DSP flavor of the course emerges. FIR filtering is explained using the Window method, examining various types of window functions. IIR filtering is covered next using the bilinear transformation method. Butterworth, Chebyshev I and II, and Elliptic filters are examined. Various implementations of FIR and IIR filters are discussed along with detailed coverage of the pros and cons of each filter type.
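As an illustration of these two design paths (not part of the course materials, and written here in Python/SciPy rather than the MATLAB-style tools the students actually use), a window-method FIR design and a bilinear-transform IIR design can each be sketched in a few lines:

```python
# Illustrative sketch only: a lowpass FIR filter via the window method and a
# lowpass Butterworth IIR filter via the bilinear transformation.
# The sampling rate, cutoff frequency, and filter orders are arbitrary example values.
import numpy as np
from scipy import signal

fs = 44100.0     # sampling rate (Hz)
fc = 2000.0      # desired cutoff frequency (Hz)

# FIR, window method: truncated ideal lowpass impulse response shaped by a Hamming window.
fir_taps = signal.firwin(numtaps=101, cutoff=fc, window="hamming", fs=fs)

# IIR: 4th-order analog Butterworth prototype mapped to the z-domain by the
# bilinear transformation (performed internally by SciPy).
b, a = signal.butter(N=4, Wn=fc, btype="low", fs=fs)

# Apply both filters to one second of white noise and compare the outputs.
x = np.random.randn(int(fs))
y_fir = signal.lfilter(fir_taps, 1.0, x)   # feedforward only
y_iir = signal.lfilter(b, a, x)            # feedforward and feedback
```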

The course culminates in the material on Digital Audio Effects. The fundamental concepts previously covered are now expanded as the theory behind the currently used set of audio effects - delays, echoes, comb filters, flanging, chorusing, phasing, and parametric equalizers - is explained using both time-domain and frequency-domain methods.

Student progress is assessed using homework problems, simulation projects, real-time projects, and a final oral presentation. Students work individually on homework problems which involve a mix of "by hand" analysis and computer analysis. The homework problems allow students to gain the skills needed to work on the various projects.

The simulation and real-time projects are performed by groups of three students (or two students if absolutely necessary). These projects involve an in-depth analysis of an existing DSP algorithm or system, the design of a new algorithm, or the modification of an existing algorithm. Typically, a combination of several different concepts, covered by two or more different homework assignments, is needed to fully work through and complete a project.

One simulation project combines Sampling, Aliasing, and Discrete-Time Fourier Transform concepts in order to analyze an analog signal composed of a sum of an unknown number of sinusoids. The students are to determine the exact number of sinusoidal terms along with the approximate values of their frequencies and amplitudes. This is a challenging, open-ended project in that many different choices of parameter sets can lead to the correct results. Each group produces one project report, including a mandatory, intensely descriptive Analysis of Results section documenting their understanding of the work.
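A rough sketch of the kind of analysis involved is shown below; the code and every signal parameter in it are hypothetical illustrations, not the actual assignment. The idea is to sample the signal, apply a window to control leakage, and read the number of sinusoids, their frequencies, and their amplitudes off the peaks of the DFT magnitude.

```python
# Hypothetical example: estimate the sinusoidal content of a sampled signal
# from its windowed DFT. The three sinusoids below stand in for the "unknown" signal.
import numpy as np

fs = 8000.0                # sampling rate (Hz), assumed high enough to avoid aliasing
N = 4096                   # number of samples analyzed
t = np.arange(N) / fs

x = (1.00 * np.sin(2 * np.pi * 440 * t)
     + 0.50 * np.sin(2 * np.pi * 1200 * t)
     + 0.25 * np.sin(2 * np.pi * 2500 * t))

w = np.hanning(N)                       # window to reduce spectral leakage
X = np.fft.rfft(x * w)
f = np.fft.rfftfreq(N, d=1 / fs)

# Scale so that a peak height approximates the sinusoid's amplitude
# (compensating for the window's coherent gain).
mag = np.abs(X) * 2.0 / (N * np.mean(w))

# Local maxima above a small threshold reveal how many sinusoids are present.
peaks = [(f[k], mag[k]) for k in range(1, len(mag) - 1)
         if mag[k - 1] < mag[k] > mag[k + 1] and mag[k] > 0.05]
for freq, amp in peaks:
    print(f"sinusoid near {freq:7.1f} Hz, amplitude approximately {amp:.2f}")
```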

The real-time projects are conducted in a similar fashion as the simulation projects. Two of the real-time projects are discussed in Section V. The course ends with a Student Project Demonstration Day in which each project group makes a final oral presentation and gives a live demonstration of one of their audio effects. This is discussed further in Section VI.


III. A Software Solution For Real-Time DSP

Enhancing the introductory DSP course with real-time guitar audio effects projects had to be done in such a way as to keep the additional "effort burden" described in Section I down to a minimum. Several software packages including DSPower from SignaLogic and DSP Workshop from The MathWorks are promising CAD tool alternatives to conventional C and Assembly language programming.

Another CAD tool software program, Block Diagram, developed by Hyperception, Inc., was selected for use in the DSP course. It was chosen due to its many features [2] that reduce the "effort burden". Block Diagram provides a graphical design environment that allows complex DSP algorithms to be visually programmed.

Visual DSP programming is a way of developing DSP algorithms and systems graphically, rather than textually, by simply connecting functional block icons together with point-and-click methods in the same manner as schematics are designed with CAD software. Reusable software components, called blocks, are used as the basis for algorithm development. Block diagrams built from these blocks may also be inserted into other block diagrams and used in a hierarchical fashion.

By arranging and connecting icons (blocks), signal processing algorithms can be programmed, adjusted, and tested. The icons represent input, processing, output, and display functions. Each block has parameters associated with it that can be modified before or even during execution to control the block's function by simply double clicking on the block's icon. Many functions related to DSP and general engineering are provided, and the user can add new function blocks using any Windows-compatible C compiler. Entire screens can shrink to a single block, and multiple levels of this hierarchy are supported, simplifying the management of complex DSP systems.

An important feature of Block Diagram is RIDE: the Real-time Integrated Development Environment. RIDE supports real-time signal processing that allows DSP algorithms designed visually to be run and tested in real-time on several industry-standard DSP boards. This streamlines the algorithm design and test process by eliminating manual code creation, loading, and debug steps. Algorithms can be easily modified by graphically changing the block diagram and/or modifying block parameters and running again. Custom real-time blocks can be user-generated with the supplied Block Generator application.

The real-time support is transparent to the user when designing block diagrams. The Block Diagram/RIDE application uses a Windows DSP board driver to communicate with the DSP board, download real-time code to it, control algorithm execution, and upload data from the DSP boards for displays and file output. The user does not need to get involved with any of these operations.

The real-time DSP board driver allows for asynchronous communication between the DSP board and the PC. This means that the DSP board may execute algorithms at full speed without concern for the speed of the inherently slower PC. In situations where the PC needs data from the DSP board for display or file output, the PC effectively polls data as fast as it can. This asynchronous transfer provides an efficient mechanism for combining real-time DSP processing with high-level simulation functions, such as displaying and storing data, which are inherently slower processes.


IV. A Hardware Solution For Real-Time DSP

There are many fine DSP hardware boards available for real-time algorithm development. One DSP board that is excellent for audio DSP and is supported by Block Diagram/RIDE is the TIGER 31/PC plug-in board for PC ISA systems, developed by DSP Research, Inc. [1]. This board is a high-performance, flexible DSP development platform based on the TMS320C31 32-bit floating-point processor from Texas Instruments.

The boards used in our Digital Audio Signal Processing laboratory have been configured to run at 50 MHz with 1 Mbyte of zero-wait-state SRAM. Analog I/O is provided by a Crystal Semiconductor CS4216 stereo CD-quality A/D-D/A converter. The CS4216 has software-programmable sample rates up to 50 kHz. Input can be switched among the microphone, line, and telephone interfaces; output is line level or telephone. This board has the essential features needed to run high-quality digital audio effects for electric guitar.


V. Real-Time Electric Guitar Audio Effect Projects

Gaining access to high-quality professional digital audio effects is not easy. Intense competition among music industry companies results in virtually all professional audio DSP algorithms remaining closely guarded trade secrets. There is a good deal of information in the public domain on audio effects, but gathering it requires a diligent literature search.

Fortunately, two excellent and comprehensive sources of digital audio effects information have recently been published. The first source is found in the introductory DSP textbook by Orfanidis [3]. In the Signal Processing Applications chapter, he provides an informative section on several digital audio effects such as echo, flanging, reverberation, and compression. In another section he thoroughly discusses the design of IIR parametric equalizer filters. He also lists a large set of references on digital audio effects for further review.

The second source is a comprehensive three-part paper by Dattorro, which is intended to serve as a tutorial point of reference for digital audio effects that are frequently used by the electronic music industry. In Part One [4], reverberation, musical filtering, and Chamberlin filter topology algorithms are discussed. In Part Two [5], linear interpolation, all-pass interpolation, and white chorus effects including flanger algorithms are discussed. Part Three will be published at a later date.

There are four main guitar effects projects that the students currently concentrate on in the DSP course: 1) delay+echo, 2) flanging, 3) chorusing, and 4) parametric equalization. The theoretical foundations of these effects are developed in [3-5], and the students produce Block Diagram/RIDE worksheets to implement them in real-time. Brief descriptions of the delay+echo and flanging effects are given below.

A. Delay+Echo Effect

The delay effect [3] is a very simple time-based guitar effect generated using the following FIR (finite impulse response) difference equation

$$y(n) = x(n) + a\,x(n-D).$$

The direct signal $x(n)$ is added to a single attenuated and delayed copy of itself, $a\,x(n-D)$. The delay $D$ represents the round-trip travel time from the source to a reflecting surface. The coefficient $a$ is a measure of the reflection and propagation losses, such that $|a| \le 1$.

The echo effect is a multiple delay effect. The direct signal is now added to several attenuated and delayed copies of itself. For example, using three successive delays, the echo difference equation becomes

$$y(n) = x(n) + a\,x(n-D) + a^2\,x(n-2D) + a^3\,x(n-3D).$$

If an infinite number of successive delays is added, then using z-transforms and a geometric series summation [3], the transfer function becomes $H(z) = 1/(1 - a\,z^{-D})$, and the echo difference equation can be represented recursively as

$$y(n) = x(n) + a\,y(n-D).$$

This IIR (infinite impulse response) difference equation roughly imitates the reverberating nature of a room.
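For readers who prefer code to block diagrams, the three difference equations above can be sketched offline in a few lines of Python; the sampling rate, delay, and gain values below are assumed examples, and this is not the real-time Block Diagram/RIDE implementation used in the course.

```python
# Minimal offline sketch of the delay and echo difference equations.
import numpy as np

def delay_fir(x, D, a):
    """Single delay: y(n) = x(n) + a*x(n-D)."""
    y = np.copy(x)
    y[D:] += a * x[:-D]
    return y

def echo_fir(x, D, a, taps=3):
    """Finite echo: y(n) = x(n) + a*x(n-D) + a^2*x(n-2D) + ..."""
    y = np.copy(x)
    for k in range(1, taps + 1):
        if k * D < len(x):
            y[k * D:] += (a ** k) * x[:-(k * D)]
    return y

def echo_iir(x, D, a):
    """Recursive echo: y(n) = x(n) + a*y(n-D)."""
    y = np.copy(x)
    for n in range(D, len(x)):
        y[n] += a * y[n - D]
    return y

# Example: a 150 ms echo with feedback gain 0.6 at a 29.4 kHz sampling rate.
fs = 29400
D = int(0.150 * fs)
x = np.random.randn(fs)          # stand-in for one second of guitar signal
y = echo_iir(x, D, a=0.6)
```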

Fig. 1 shows a Block Diagram/RIDE worksheet displaying an enhanced Delay+Echo effect. The left half of the diagram is the Delay effect. The monophonic direct guitar signal from the A/D block is sent to the Delay block. The Delay slider is used to set the amount of delay between 0 and 250 msec (0 - 7350 samples using a 29.4 kHz sampling rate). The direct and delayed signals are summed together using the Blend Hierarchy block. The Blend slider controls the mix of direct (dry) and delayed (wet) signals.


Fig. 1. Block Diagram/RIDE implementation of a Delay/Echo effect.

The right half of the diagram is the Echo effect. The output of the Add block is sent to the delay (Echo) block and gain (Echo Depth) blocks, and then fed back to the Add block. The Echo slider controls the amount of echo delay (0 - 150 msec), and the Echo Depth slider controls the echo feedback gain (0 - 0.9). The output of the Add block is also sent to a Gain (Volume) block and then to the D/A block. The Volume slider controls the output level.

The block diagram within the Blend Hierarchy block is shown in Fig. 2.


Fig. 2. Blend Hierarchy block.

This block diagram is condensed into a hierarchy block to simplify the appearance of the main diagram of Fig. 1. The output of the Blend algorithm is

$$y(n) = (1-g)\,x_{dry}(n) + g\,x_{wet}(n),$$

where $x_{dry}(n)$ is the dry (direct) input and $x_{wet}(n)$ is the effect (delayed) input. The Gain Value range is $0 \le g \le 1$, so that $g = 0$ produces a totally dry output, $g = 1$ produces a totally delayed output, and $g = 0.5$ produces a 50/50 blended output of dry and delayed inputs.
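Expressed in code, the blend is a simple linear crossfade; the function and symbol names below are mine for illustration, not the names of the Hyperception block parameters.

```python
def blend(dry, wet, g):
    """y(n) = (1 - g)*dry(n) + g*wet(n), with 0 <= g <= 1 (0 = all dry, 1 = all wet)."""
    return (1.0 - g) * dry + g * wet
```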

The following sound files (WAV format) are provided to demonstrate the Block Diagram/RIDE Delay+Echo effect. To keep the size of the audio files manageable, all of the recordings in this paper were made with the following specifications: 22,050 Hz sampling rate, 8-bit mono. This reduced both the size and quality of the recordings compared to the common audio standard of 44,100 Hz sampling rate, 16-bit stereo. However, the differences between un-effected and effected audio sound files are sufficiently distinct for the purpose of making audible comparisons.

The sound below is a muted single string thump played twice to allow you to hear what the Echo effect does to an electric guitar signal.

String Thump with Echo

The first sound below is an electric guitar musical piece played dry with no effects. The second sound is the same as the first, only this time adding the Delay effect. The glitches and distortions are due to the unfortunate circumstance of a faulty hard-disk controller. However, for comparison purposes, the two files serve their purpose.

Guitar with NO Delay

Guitar with Delay

The first sound below is another electric guitar musical piece played dry with no effects. The second sound is the same as the first, only this time adding the Echo effect. Again, despite the glitches, a sufficient comparison can be made.

Guitar with NO Echo

Guitar with Echo

B. Flanging Effect

The flanging effect is similar to the delay effect, except that the amount of delay now varies dynamically with time [3, 5]. The basic FIR difference equation is

$$y(n) = x(n) + a\,x(n - d(n)),$$

where $d(n)$ is a periodically varying delay value between 0 and 2 msec, varied at a low frequency no greater than 1 Hz. Flanging uses very short delays, while the previous delay and echo effects use much longer delays. The shorter delays of flanging give rise to comb filtering. It is the variation of the comb filter response via the varying delay that gives flanging its sound [6]. For a good discussion of comb filtering, see [3]. The flanging effect produces a pronounced whooshing sound, similar to a jet aircraft on takeoff. The flanging effect was originally produced by playing music through two reel-to-reel tape players simultaneously and periodically slowing down one tape by manually pressing on the flange of the tape reel.

Fig. 3 shows the Block Diagram/RIDE implementation of the Flanging effect. Several slider controls are included to enable the Block Diagram/RIDE effect to mimic the features of a Yamaha FX770 digital guitar effects processor, which serves as a standard for effects comparison.


Fig. 3. Block Diagram/RIDE implementation of the Flanging effect.

The monophonic direct guitar signal from the A/D block passes through the Add block to the Variable Delay block. The LFO Hierarchy block is a Low Frequency Oscillator which produces the variable delay $d(n)$. A portion of the delayed signal is fed back through the Gain block to the Add block to enhance the sound quality of the flanging effect [5].

The direct and effected signals are summed together using a Blend Hierarchy block, then sent to a Gain (Volume) block and on to the D/A block.

The block diagram within the LFO Hierarchy block is shown in Fig. 4. The shape of the LFO waveform is important to the sound of a flanger [6]. Several types such as triangle, square, sinusoidal and other waveshapes are used in practice. The LFO waveform implemented in Fig. 4 is sinusoidal.


Fig. 4. LFO Hierarchy block.

The output of the LFO algorithm is

$$d(n) = D_0 + A \sin(2\pi f_{LFO}\, n\, T),$$

where $T = 1/f_s$ is the sampling interval.

$D_0$ is the midpoint of the variable delay value (in samples), controlled by the Range slider. The Range value is in msec and uses the "Time Delay (ms)", "fs (Hz)", and "1/(2 x 1000)" Gain blocks to convert msec to samples.

$A$ is the amplitude of the sinusoidal portion of the LFO, controlled by the Depth slider.

$f_{LFO}$ is the frequency of the sinusoidal portion of the LFO, controlled by the Speed slider.
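A minimal offline sketch of the flanger core is given below, again in Python with assumed parameter values. The real-time worksheet additionally includes the feedback path and Blend control described above, and the delay-line interpolation choices of [5]; here plain linear interpolation is used for the fractional delay.

```python
# Hypothetical flanger sketch: y(n) = x(n) + a*x(n - d(n)), with a sinusoidal LFO
# d(n) = D0 + A*sin(2*pi*f_LFO*n/fs) and linear interpolation of the fractional delay.
import numpy as np

def flanger(x, fs, range_ms=2.0, depth=0.9, speed_hz=0.5, a=0.7):
    D0 = range_ms * fs / (2 * 1000)      # midpoint of the delay, in samples
    A = depth * D0                        # amplitude of the sinusoidal variation
    n = np.arange(len(x))
    d = D0 + A * np.sin(2 * np.pi * speed_hz * n / fs)

    y = np.copy(x)
    for i in range(len(x)):
        k = int(np.floor(d[i]))           # integer part of the delay
        frac = d[i] - k                   # fractional part
        if i - k - 1 >= 0:
            xd = (1 - frac) * x[i - k] + frac * x[i - k - 1]   # x(n - d(n))
            y[i] = x[i] + a * xd
    return y

fs = 29400
x = np.random.randn(2 * fs)               # stand-in for two seconds of guitar signal
y = flanger(x, fs)
```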

The following sound files (WAV format) are provided to demonstrate the Block Diagram/RIDE Flanging effect. The sound below is just a single chord played once to allow you to hear what the Flanging effect does to an electric guitar signal.

Single Chord Flanging

The first sound below is an electric guitar musical piece played dry with no effects. The second sound is the same as the first, only this time adding the Flanging effect. As before, the glitch in the second sound is due to a faulty hard-disk controller. However, for comparison purposes, the two files serve their purpose.

Guitar with NO Flanging

Guitar with Flanging


VI. Student Project Demonstration Day

On the last day of the semester, the class holds a student project demonstration day. Every project group makes a final oral presentation of one of their audio effect projects. Each student in the group presents some portion of the project, consisting of a brief overview of the effect, an explanation of the theoretical aspects of the algorithm, a description of the Block Diagram/RIDE worksheets, an explanation of each of the user controls, and extensive auditions of how the effect sounds using inputs from a "live" electric guitar as well as previously recorded music on CD.

The high-quality sound of the student-designed effects compared very favorably with the FX770 digital guitar effects unit donated by Yamaha. Professionals working in the DSP industry have attended the demonstrations, providing valuable feedback on the students' work. All presentations are videotaped, serving as an excellent record of how well the class performed. At a later date, the class reviews the videotape, and the merits of each presentation are critiqued.


VII. Conclusions

Student response to the course has been very favorable. They like not having any exams or quizzes to study for, but they are certainly challenged by the projects, in that they have to understand the material well enough to write comprehensive descriptive Analysis sections in their project reports. They also appreciate the balance of theory, computer simulation, and real-time development that the course offers.

As previously mentioned in Section I, schools such as Rutgers and Rose-Hulman have separate lecture and laboratory courses. An important step in expanding the DSP content in the Electrical Engineering program at Hofstra University would be to develop a second DSP course, with a main objective of DSP hardware programming using either assembler or C. Students could then focus their attention on issues such as processing speed, memory constraints, compiler effectiveness, and hardware configurations, as described in Section I. This would be an excellent complement to the existing DSP theory course, equipping students for entry-level hardware programming positions in the DSP industry.


Acknowledgements

The author gratefully acknowledges the support from Hyperception, DSP Research, Texas Instruments, Yamaha, and Hofstra University that made possible the development of this new DSP theory course and special-purpose Digital Audio Signal Processing laboratory.

The author would like to thank Todd Larsen for performing his original compositions on electric guitar for the audio samples used in this paper. Todd played a Fender Stratocaster Plus with three Lace Sensor pickups, modified from the original three Gold pickups to the new combination of a Red pickup in the neck position, Silver in the middle position, and Blue in the bridge position. The electric guitar was fed directly into the A/D input of the TIGER 31 DSP board.

The author would like to thank Gene Klimov for his expertise and help in producing this paper in html format and for his work as the "acting network administrator" of the DSP laboratory.

Finally, the author would like to thank the editor for her helpful suggestions and the reviewers for their excellent feedback and comments. Their input has served to strengthen both the content and clarity of this paper.


References

[1] TIGERtrax Newsletter, DSP Research, Inc., vol. 7, no. 3, 1995. For more information see DSP Research.

[2] B. G. Carlson, "Hypersignal for Windows Block Diagram DSP development environment", presented at the 1994 DSPx Exposition & Symposium. For more information see Hyperception.

[3] S. J. Orfanidis, Introduction to Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1996.

[4] J. Dattorro, "Effect Design: Part 1 Reverberator and Other Filters", J. Audio Eng. Soc., vol. 45, pp. 660-684, Sep. 1997. For more information see the Audio Engineering Society.

[5] J. Dattorro, "Effect Design: Part 2 Delay-Line Modulation and Chorus", J. Audio Eng. Soc., vol. 45, pp. 764-788, Oct. 1997.

[6] Comments submitted by Reviewer 1.


Author Contact Information

Mauro J. Caputi
Hofstra University
Department of Engineering
104 Weed Hall
Hempstead, NY 11549-1330
USA
Phone: (516) 463-5549
Fax: (516) 463-4939
E-mail: eggmjc@hofstra.edu
Web Page: http://www.hofstra.edu/HCLAS/Engineering/high/who/caputi.html


Author Biography

Mauro J. Caputi is an Assistant Professor in electrical engineering at Hofstra University. He received the Ph.D. from Virginia Tech in 1991, the M.S. from Virginia Tech in 1984, and the B.S. from Manhattan College in 1981, all in electrical engineering. Prior to his current post at Hofstra, he was an Assistant Professor at the University of the Pacific, and an Instructor at Virginia Tech. Dr. Caputi's current research interests include digital audio signal processing, random signal analysis, and educational improvements in analog circuits and electronics.