Apparently there is a nice Wikipedia entry at http://en.wikipedia.org/wiki/Direct_digital_synthesis. Also Analog has a nice tutorial here:
http://www.analog.com/UploadedFiles/Tutorials/450968421DDS_Tutorial_rev12-2-99.pdf
The following has been suggested:
* Use a longer sine lookup table instead of interpolation.
* Store only 1/4 cycle of the sine wave in the lookup table and use bit operations on the address and output to map the 1/4 cycle onto the full wave.
* Use a dual-port RAM for the lookup table to simultaneously generate sine and cosine (useful for digital radio applications).
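The quarter-wave idea can be sketched in a few lines. This is a minimal Python model, not hardware: the table size, phase width, and function names are illustrative assumptions, and the one-LSB offset in the mirrored address is a common hardware simplification, not part of the original suggestion.

```python
import math

# Illustrative parameters (assumptions, not from the post):
QUARTER_BITS = 10              # quarter-wave table holds 2**10 entries
AMP = 2**15 - 1                # peak amplitude for a 16-bit signed output

# Quarter-cycle table: sin(x) for x in [0, pi/2)
quarter_lut = [round(AMP * math.sin(math.pi / 2 * i / 2**QUARTER_BITS))
               for i in range(2**QUARTER_BITS)]

def sine_from_quarter(phase, phase_bits=QUARTER_BITS + 2):
    """Map a full-cycle phase word onto the quarter-wave table.

    The top two phase bits select the quadrant; bit operations mirror
    the address in quadrants 2 and 4 and negate the output for the
    second half-cycle (the mirror is off by one LSB of phase, a usual
    hardware simplification)."""
    phase &= (1 << phase_bits) - 1
    quadrant = phase >> QUARTER_BITS           # top two bits
    index = phase & (2**QUARTER_BITS - 1)      # low bits index the table
    if quadrant & 1:                           # descending quadrants: mirror address
        index = (2**QUARTER_BITS - 1) - index
    value = quarter_lut[index]
    return -value if quadrant & 2 else value   # negative half-cycle: negate output
```

In hardware the mirror is an XOR of the address bits with the second-from-top phase bit, and the negation is a conditional two's complement on the output, so the cost over a full-table lookup is a handful of gates.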
An inverse sinc filter is a common way to equalize the spectral droop caused by the zero-order-hold nature of the DAC. Typically a simple FIR filter with a few taps (fewer than 10) can 'lift' the high-frequency response of the signal to compensate for this rolloff. This page describes one way to do it:
http://www.maxim-ic.com/appnotes.cfm/an_pk/3853
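The droop and its compensation are easy to check numerically. Here is a small Python sketch; the sin(x)/x droop formula is standard for a zero-order hold, but the 3-tap coefficients below are an illustrative choice (not taken from the Maxim app note), so treat the numbers as a demonstration of the idea rather than a design.

```python
import math

def zoh_droop(f_norm):
    """Amplitude rolloff of a zero-order-hold DAC at normalized
    frequency f/fs: sin(pi*f/fs) / (pi*f/fs)."""
    if f_norm == 0:
        return 1.0
    x = math.pi * f_norm
    return math.sin(x) / x

def fir_response(taps, f_norm):
    """Magnitude response of an FIR filter at normalized frequency f/fs."""
    w = 2 * math.pi * f_norm
    re = sum(t * math.cos(w * n) for n, t in enumerate(taps))
    im = sum(t * math.sin(w * n) for n, t in enumerate(taps))
    return math.hypot(re, im)

# A symmetric 3-tap with unity DC gain; the negative outer taps
# boost the high end, partially cancelling the droop.
taps = [-1/16, 1 + 2/16, -1/16]

for f in (0.1, 0.2, 0.3, 0.4):
    droop = zoh_droop(f)
    lifted = droop * fir_response(taps, f)
    print(f"f={f:.1f}*fs  droop={droop:.3f}  compensated={lifted:.3f}")
```

At 0.4 fs the raw droop is about -2.4 dB; even this crude 3-tap lift brings the combined response much closer to flat, which is why only a few taps are usually needed.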
Some more description in another post:
Interpolation in a DDS is usually handled differently than in, say, an interpolation filter. Normally it is done with a Taylor polynomial, which yields much better results than a linear interpolator. The usual problem with a Taylor polynomial is that it requires derivatives of the function. In a DDS, though, the derivatives of the sine and cosine functions are the very same sines and cosines (and their opposites). So with a BRAM-based lookup table with two read ports, you can read both sine and cosine at the same time, which means you effectively also have, for free, the first (and second and third, etc.) derivatives of the outputs.
So then with little hardware you can make a first-order Taylor, and with a bit more you could even make a second-order, although this is rarely necessary. I’ll send you a Xilinx paper that explains this. It’s by Chris Dick and Fred Harris and called “Direct Digital Synthesis – Some Options for FPGA Implementation”.
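The first-order Taylor trick described above is just sin(θ+δ) ≈ sin θ + δ·cos θ, with sin θ and cos θ both read from the dual-port LUT and δ the residual phase below the table resolution. A minimal Python sketch (table size and names are assumptions for illustration):

```python
import math

# Illustrative coarse table resolution (an assumption, not from the post).
LUT_BITS = 10
TWO_PI = 2 * math.pi

# In hardware these would be the two read ports of one BRAM-based LUT.
sin_lut = [math.sin(TWO_PI * i / 2**LUT_BITS) for i in range(2**LUT_BITS)]
cos_lut = [math.cos(TWO_PI * i / 2**LUT_BITS) for i in range(2**LUT_BITS)]

def sine_taylor(phase_frac):
    """phase_frac in [0, 1): the coarse bits index the LUTs, and the
    residual phase delta multiplies the cosine -- the sine's own
    derivative -- for a first-order Taylor correction."""
    coarse = int(phase_frac * 2**LUT_BITS)
    delta = TWO_PI * (phase_frac - coarse / 2**LUT_BITS)  # residual angle, radians
    # sin(theta + delta) ~= sin(theta) + delta * cos(theta)
    return sin_lut[coarse] + delta * cos_lut[coarse]
```

The residual error is roughly δ²/2, so with a 1024-entry table the worst case is about 2e-5 (around 16 fractional bits), versus the much larger error a plain table lookup of the same size would give.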
Quite a nice summary:
For high-precision sinusoids in FPGAs with multipliers, I’d try dusting off the technique from the vintage 1971
Tierney/Rader/Gold paper [Ref 1] and doing something like:
– Upper two {three} phase bits used for quadrant {octant} folding
– Next N phase bits look up a ‘coarse’ IQ value (coarse phase index, yet precise amplitude)
– Next M phase bits look up a ‘fine’ IQ value (residual rotation)
– Complex multiply rotates the coarse IQ by the fine IQ
Figure six of their paper has a nice graphical summary of the technique.
The beauty of this scheme is that it is an exact computation, not an approximation; I haven’t worked out the error terms for 18×18 or 36×36 multipliers, but I’d expect you could easily do a computation to twenty-something bits of precision with two comfortably-fit-in-BRAM sized lookup tables and one complex multiply.
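The coarse/fine rotation can be sketched in a few lines of Python. This model skips the quadrant-folding step and uses floating point, so the bit widths and names are illustrative assumptions; it does show why the method is exact: rotating by the coarse angle and then the fine angle is, by the angle-addition identity, exactly a rotation by their sum, so the only errors are table quantization and multiplier rounding.

```python
import math

# Illustrative split of a 16-bit phase word (an assumption, not from the post).
C_BITS, F_BITS = 8, 8
TWO_PI = 2 * math.pi

# Coarse table: IQ at phase steps of 1/2**C_BITS of a cycle.
coarse = [(math.cos(TWO_PI * i / 2**C_BITS), math.sin(TWO_PI * i / 2**C_BITS))
          for i in range(2**C_BITS)]
# Fine table: residual rotations spanning one coarse step.
fine = [(math.cos(TWO_PI * j / 2**(C_BITS + F_BITS)),
         math.sin(TWO_PI * j / 2**(C_BITS + F_BITS)))
        for j in range(2**F_BITS)]

def iq(phase):
    """phase: (C_BITS + F_BITS)-bit integer phase word for one cycle.
    Returns (I, Q) = (cos, sin) of the corresponding angle."""
    ci, fj = phase >> F_BITS, phase & (2**F_BITS - 1)
    (cc, cs), (fc, fs) = coarse[ci], fine[fj]
    # Complex multiply rotates the coarse IQ by the fine residual rotation.
    return cc * fc - cs * fs, cc * fs + cs * fc
```

Two 256-entry tables cover a 16-bit phase; a fixed-point version would store the tables at the multiplier width and fit comfortably in BRAM, with one complex multiply (four real multiplies, or three with the usual trick) per sample.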
Their actual implementation with 1970s-era TTL took some shortcuts to conserve hardware, e.g. approximating the fine cosine values as ~1.0. [Ref 2] is a great DDS reference that reprints that early paper, along with summaries of other sine computation methods [Ref 3, Ref 4].
[Ref 1] "A Digital Frequency Synthesizer", Tierney/Rader/Gold, IEEE Transactions on Audio and Electroacoustics, March 1971
[Ref 2] "Direct Digital Frequency Synthesizers", Kroupa (ed.), IEEE Press, 1996
[Ref 3] "The Optimization of Direct Digital Frequency Synthesizer Performance in the Presence of Finite Word Length Effects", Nicholas/Samueli/Kim, Proceedings of the 42nd Annual Frequency Control Symposium, 1988
[Ref 4] "Methods of Mapping from Phase to Sine Amplitude in Direct Digital Synthesis", Vankka, IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, March 1997