Reply by Jamie, March 17, 2013
John Larkin wrote:

> On Sun, 17 Mar 2013 10:19:23 GMT, nico@puntnl.niks (Nico Coesel) wrote:
>
>>rickman <gnuarm@gmail.com> wrote:
>>
>>>On 3/16/2013 8:56 AM, Nico Coesel wrote:
>>>
>>>>rickman<gnuarm@gmail.com> wrote:
>>>>
>>>>>That may be. The OP was talking about debugging the sinc reconstruction and I've been thinking a little bit about just how useful that is. Is there a downside to sinc reconstruction, other than the work required?
>>>>
>>>>It is useable if you have at least 5 samples per period. So that is 0.2fs. The whole problem though is not the number of samples per period. According to sampling theory the signal is there but it just needs to be displayed properly so the operator can see a signal instead of some 'random' dots. With the proper signal reconstruction algorithm you can display signals up to the Nyquist limit (0.5fs).
>>>>
>>>>>I was thinking an aliased signal might interfere with this, but now that I give it some thought, I realize they are two separate issues. If you have an aliased tone, it will just be a tone in the display whether you use sinc reconstruction or not.
>>>>
>>>>In my design I used a fixed samplerate (250MHz) and a standard PC memory module. 1GB already provides for more than 2 seconds of storage for 2 channels. That solves the whole interference issue and it allows the use of a proper anti-aliasing filter. With polynomial approximation I could reconstruct a signal even when it's close to the Nyquist limit. I tested it and I could get it to work for frequencies up to 0.45fs.
>>>
>>>I'm not following. Are you saying you need a long buffer of data in order to reconstruct the signal properly?
>>
>>You need about 10 samples extra at the beginning and end to do a proper reconstruction. Lots of audio editing software does exactly the same BTW. Using a fixed samplerate solves a lot of signal processing problems but also dictates that a lot of processing needs to be done in hardware to keep the speed reasonable.
>
> Interesting stuff can be done with a really long record. You can do signal averaging of a periodic waveform with no trigger. Our new monster LeCroy scope can take a long record of a differential PCI Express lane (2.5 Gbps NRZ data), simulate a PLL data recovery loop of various dynamics, and plot an eye diagram, again without any trigger. Well, if it doesn't crash.
Oh, you have that problem too? Our LeCroy goes belly up now and then for no apparent reason. It appears to me there is some intermittent hardware-to-software issue.

It reminds me of the days when Visual Basic was first pushed on us, back in Windows 3.x. A serious app designed to operate fabric-cutting machines for intricate designs would simply fault a plug-in component: it would get stuck on some missed signal from the hardware and then time out or hit a stack overflow. The app was built from VB controls that simply were not resource friendly or properly managed. On top of that, the app cost clients upwards of $10k.

I was offered a job at the place where this app was developed; I was allowed to see it in operation and watched its random failures. They wanted me to join the debugging team to resolve it and move forward. I declined the offer.

Jamie
Reply by John Larkin, March 17, 2013
On Sun, 17 Mar 2013 10:19:23 GMT, nico@puntnl.niks (Nico Coesel) wrote:

>rickman <gnuarm@gmail.com> wrote:
>
>>On 3/16/2013 8:56 AM, Nico Coesel wrote:
>>> rickman<gnuarm@gmail.com> wrote:
>>>
>>>> That may be. The OP was talking about debugging the sinc reconstruction and I've been thinking a little bit about just how useful that is. Is there a downside to sinc reconstruction, other than the work required?
>>>
>>> It is useable if you have at least 5 samples per period. So that is 0.2fs. The whole problem though is not the number of samples per period. According to sampling theory the signal is there but it just needs to be displayed properly so the operator can see a signal instead of some 'random' dots. With the proper signal reconstruction algorithm you can display signals up to the Nyquist limit (0.5fs).
>>>
>>>> I was thinking an aliased signal might interfere with this, but now that I give it some thought, I realize they are two separate issues. If you have an aliased tone, it will just be a tone in the display whether you use sinc reconstruction or not.
>>>
>>> In my design I used a fixed samplerate (250MHz) and a standard PC memory module. 1GB already provides for more than 2 seconds of storage for 2 channels. That solves the whole interference issue and it allows the use of a proper anti-aliasing filter. With polynomial approximation I could reconstruct a signal even when it's close to the Nyquist limit. I tested it and I could get it to work for frequencies up to 0.45fs.
>>
>>I'm not following. Are you saying you need a long buffer of data in order to reconstruct the signal properly?
>
>You need about 10 samples extra at the beginning and end to do a proper reconstruction. Lots of audio editing software does exactly the same BTW. Using a fixed samplerate solves a lot of signal processing problems but also dictates that a lot of processing needs to be done in hardware to keep the speed reasonable.
Interesting stuff can be done with a really long record. You can do signal averaging of a periodic waveform with no trigger. Our new monster LeCroy scope can take a long record of a differential PCI Express lane (2.5 Gbps NRZ data), simulate a PLL data recovery loop of various dynamics, and plot an eye diagram, again without any trigger. Well, if it doesn't crash.

--

John Larkin                  Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
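As a rough illustration of that kind of trigger-less averaging, here is a minimal numpy sketch. It is illustrative only: the autocorrelation-based period estimate and every name in it are assumptions, not how the LeCroy actually does it, and it assumes the period is very close to an integer number of samples.

    import numpy as np

    def autocorr(x):
        # FFT-based one-sided linear autocorrelation (fast enough for long records)
        n = len(x)
        f = np.fft.rfft(x, 2 * n)
        return np.fft.irfft(f * np.conj(f))[:n]

    def triggerless_average(x, fs, min_period_s):
        """Estimate the period from the autocorrelation peak, then fold and average."""
        x = x - np.mean(x)
        acf = autocorr(x)
        lag0 = int(min_period_s * fs)                  # ignore lags shorter than this
        period = lag0 + int(np.argmax(acf[lag0:len(x) // 2]))
        n_periods = len(x) // period
        folded = x[:n_periods * period].reshape(n_periods, period)
        return folded.mean(axis=0), period

    # 1 MHz square wave buried in noise, sampled at 250 MS/s with no trigger
    fs = 250e6
    t = np.arange(1_000_000) / fs
    noisy = np.sign(np.sin(2 * np.pi * 1e6 * t)) + 0.5 * np.random.randn(len(t))
    avg, period = triggerless_average(noisy, fs, min_period_s=0.5e-6)   # period -> 250 samples

With 4000 periods folded onto each other, the noise on the averaged trace drops by roughly a factor of 60, and no trigger point was ever needed.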
Reply by Nico Coesel, March 17, 2013
rickman <gnuarm@gmail.com> wrote:

>On 3/16/2013 8:56 AM, Nico Coesel wrote:
>> rickman<gnuarm@gmail.com> wrote:
>>
>>> That may be. The OP was talking about debugging the sinc reconstruction and I've been thinking a little bit about just how useful that is. Is there a downside to sinc reconstruction, other than the work required?
>>
>> It is useable if you have at least 5 samples per period. So that is 0.2fs. The whole problem though is not the number of samples per period. According to sampling theory the signal is there but it just needs to be displayed properly so the operator can see a signal instead of some 'random' dots. With the proper signal reconstruction algorithm you can display signals up to the Nyquist limit (0.5fs).
>>
>>> I was thinking an aliased signal might interfere with this, but now that I give it some thought, I realize they are two separate issues. If you have an aliased tone, it will just be a tone in the display whether you use sinc reconstruction or not.
>>
>> In my design I used a fixed samplerate (250MHz) and a standard PC memory module. 1GB already provides for more than 2 seconds of storage for 2 channels. That solves the whole interference issue and it allows the use of a proper anti-aliasing filter. With polynomial approximation I could reconstruct a signal even when it's close to the Nyquist limit. I tested it and I could get it to work for frequencies up to 0.45fs.
>
>I'm not following. Are you saying you need a long buffer of data in order to reconstruct the signal properly?
You need about 10 samples extra at the beginning and end to do a proper reconstruction. Lots of audio editing software does exactly the same BTW. Using a fixed samplerate solves a lot of signal processing problems but also dictates that a lot of processing needs to be done in hardware to keep the speed reasonable.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
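To make the "extra samples at each end" point concrete, here is a minimal windowed-sinc interpolation sketch (purely illustrative; the names, the Hann taper and the 10-tap half-width are assumptions, not Nico's actual code). Because the kernel reaches about 10 input samples to either side of every output point, a displayed window needs roughly that much margin at both ends of the record.

    import numpy as np

    TAPS = 10   # kernel half-width in input samples -> the margin needed at each end

    def sinc_interp(x, t_out):
        """Evaluate a uniformly sampled record x at fractional sample positions t_out."""
        y = np.zeros(len(t_out))
        for i, t in enumerate(t_out):
            n = np.arange(int(np.floor(t)) - TAPS + 1, int(np.floor(t)) + TAPS + 1)
            n = n[(n >= 0) & (n < len(x))]                 # kernel gets truncated near the ends
            k = t - n                                      # offsets, in samples
            taper = 0.5 * (1 + np.cos(np.pi * k / TAPS))   # Hann taper over +/-TAPS
            y[i] = np.sum(x[n] * np.sinc(k) * taper)
        return y

    # A tone at 0.21*fs: barely 5 samples per period, yet it reconstructs cleanly
    x = np.sin(2 * np.pi * 0.21 * np.arange(64))
    t_fine = np.linspace(TAPS, 63 - TAPS, 500)             # stay TAPS samples away from each edge
    y = sinc_interp(x, t_fine)

Output points closer than TAPS samples to either end would be computed from a truncated kernel, which is exactly why a reconstruction over a display window wants around 10 spare samples before and after it.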
Reply by rickman, March 16, 2013
On 3/16/2013 8:56 AM, Nico Coesel wrote:
> rickman<gnuarm@gmail.com> wrote:
>
>> That may be. The OP was talking about debugging the sinc reconstruction and I've been thinking a little bit about just how useful that is. Is there a downside to sinc reconstruction, other than the work required?
>
> It is useable if you have at least 5 samples per period. So that is 0.2fs. The whole problem though is not the number of samples per period. According to sampling theory the signal is there but it just needs to be displayed properly so the operator can see a signal instead of some 'random' dots. With the proper signal reconstruction algorithm you can display signals up to the Nyquist limit (0.5fs).
>
>> I was thinking an aliased signal might interfere with this, but now that I give it some thought, I realize they are two separate issues. If you have an aliased tone, it will just be a tone in the display whether you use sinc reconstruction or not.
>
> In my design I used a fixed samplerate (250MHz) and a standard PC memory module. 1GB already provides for more than 2 seconds of storage for 2 channels. That solves the whole interference issue and it allows the use of a proper anti-aliasing filter. With polynomial approximation I could reconstruct a signal even when it's close to the Nyquist limit. I tested it and I could get it to work for frequencies up to 0.45fs.
I'm not following. Are you saying you need a long buffer of data in order to reconstruct the signal properly?

--

Rick
Reply by Nico Coesel, March 16, 2013
rickman <gnuarm@gmail.com> wrote:

>On 3/15/2013 6:06 AM, Nico Coesel wrote:
>> rickman<gnuarm@gmail.com> wrote:
>>
>>> On 3/14/2013 12:03 PM, Nico Coesel wrote:
>>>>
>>>> It's a factor 16 attenuation you don't need to do in hardware. So if the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable with 2 relays) you save quite some circuitry. 8 bits is probably more than enough. It will be hard to get the response so flat that more than 8 bits actually adds accuracy to the readout.
>>>
>>> I've worked with 8 bit scopes and the vertical clearly shows steps which I find interfere with making reasonable measurements. That's why I said
>>
>> That has more to do with how the software shows the signal. Interpolation can solve a lot.
>
>That may be. The OP was talking about debugging the sinc reconstruction and I've been thinking a little bit about just how useful that is. Is there a downside to sinc reconstruction, other than the work required?
It is useable if you have at least 5 samples per period. So that is 0.2fs. The whole problem though is not the number of samples per period. According to sampling theory the signal is there but it just needs to be displayed properly so the operator can see a signal instead of some 'random' dots. With the proper signal reconstruction algorithm you can display signals up to the Nyquist limit (0.5fs).
>I was thinking an aliased signal might interfere with this, but now that I give it some thought, I realize they are two separate issues. If you have an aliased tone, it will just be a tone in the display whether you use sinc reconstruction or not.
In my design I used a fixed samplerate (250MHz) and a standard PC memory module. 1GB already provides for more than 2 seconds of storage for 2 channels. That solves the whole interference issue and it allows the use of a proper anti-aliasing filter. With polynomial approximation I could reconstruct a signal even when it's close to the Nyquist limit. I tested it and I could get it to work for frequencies up to 0.45fs.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
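Nico doesn't say which polynomial scheme he used; as a stand-in, here is a small piecewise-cubic (Catmull-Rom) interpolator plus the kind of test he describes, i.e. interpolate a sampled tone at a chosen fraction of fs and measure the error against the ideal waveform. All names here are illustrative, and a plain cubic like this shows noticeable amplitude error that close to Nyquist, so whatever he used to reach 0.45fs was presumably more elaborate.

    import numpy as np

    def catmull_rom(x, t):
        """Piecewise-cubic (Catmull-Rom) interpolation of samples x at fractional indices t."""
        i = np.clip(np.floor(t).astype(int), 1, len(x) - 3)
        u = t - i
        p0, p1, p2, p3 = x[i - 1], x[i], x[i + 1], x[i + 2]
        return 0.5 * (2 * p1 + (-p0 + p2) * u
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u ** 2
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * u ** 3)

    # Test harness: how well does the interpolator rebuild a tone at f = 0.45*fs?
    f_rel = 0.45
    x = np.sin(2 * np.pi * f_rel * np.arange(200))      # the sampled record
    t = np.linspace(2, 196, 2000)                       # fractional positions to reconstruct
    err = catmull_rom(x, t) - np.sin(2 * np.pi * f_rel * t)
    print("worst-case reconstruction error:", round(np.max(np.abs(err)), 3))

Sweeping f_rel from 0.1 to 0.45 with a harness like this is a quick way to see where any chosen interpolator starts to fall apart.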
Reply by John Devereux, March 16, 2013
rickman <gnuarm@gmail.com> writes:

> On 3/15/2013 4:34 AM, John Devereux wrote:
>> rickman<gnuarm@gmail.com> writes:
>>
>>> I've worked with 8 bit scopes and the vertical clearly shows steps which I find interfere with making reasonable measurements. That's why I said 2 spare bits out of 12. There is also a need for zooming in on a portion of a captured trace. At 8 bits all you see is the steps. With a full 12 bits you have a little bit of extra resolution so you can actually get a bit of detail.
>>
>> I agree a 12 bit (or 16 bit!) scope would be nice. LeCroy make one I think but it is very expensive.
>>
>> The situation with the standard 8 bits is not quite as bad as you portray in a higher end scope. They can sample at the full maximum digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it so that each point plotted at lower sweep rates represents the average of hundreds of samples potentially. The noise at 20GSPS smears out the steps then the averaging smooths out the noise. Or something like that. Anyway the result is much better than you would think from the 8 bit input.
>
> You say that the 16 bit converters are expensive, then talk about using a 20 GHz 8 bit ADC. Is that not expensive, not to mention the clocking, the board for the high speed signals and the power supply to make all this happen?
Yes, it is very expensive. I did say "high end scope", by which I mean "really expensive". They usually need to go to GHz anyway, so already have the high speed digitizer. At lower bandwidths they can utilise the excess samples to increase the apparent resolution.
> I can't imagine this is actually a better approach to designing a scope with a stated goal of 20-25 MHz bandwidth. I would like to see at least 300 MHz, but the OP says 25 is good enough.
Absolutely, to me the only point of a 25MHz scope would be if it was higher resolution, 16+ bit ideally. Otherwise you may as well just use one of those cheap USB gadgets.

A "dynamic signal analyser" that goes above 100kHz seems to be missing from the market AFAIK. So it could do good spectrum analysis, evaluate noise, servo loops, have a tracking generator and plot filter responses, that sort of thing.

--

John Devereux
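For a flavour of what the filter-response / tracking-generator part of such an instrument boils down to, here is a minimal stepped-sine sketch. The assumptions are mine throughout: a real analyser would drive a DAC and read back an ADC, whereas here the device under test is just a simulated one-pole low-pass.

    import numpy as np

    def one_pole_lowpass(x, fc, fs):
        """Stand-in DUT: a one-pole low-pass with a corner of roughly fc."""
        a = np.exp(-2 * np.pi * fc / fs)
        y = np.zeros_like(x)
        for i in range(1, len(x)):
            y[i] = a * y[i - 1] + (1 - a) * x[i]
        return y

    def stepped_sine_response(dut, freqs, fs, cycles=20):
        """Tracking-generator style sweep: drive a sine, recover complex gain with a single-bin DFT."""
        h = []
        for f in freqs:
            n = int(round(cycles * fs / f))           # samples per analysis block
            f_bin = cycles * fs / n                   # nudge the tone exactly onto a DFT bin
            t = np.arange(2 * n) / fs
            drive = np.sin(2 * np.pi * f_bin * t)
            resp = dut(drive)[n:]                     # analyze the second block: transient has settled
            ref = np.exp(-2j * np.pi * f_bin * t[n:])
            h.append(np.dot(resp, ref) / np.dot(drive[n:], ref))
        return np.array(h)

    fs = 1e6
    freqs = np.logspace(3, 5, 21)                     # 1 kHz to 100 kHz
    h = stepped_sine_response(lambda x: one_pole_lowpass(x, fc=10e3, fs=fs), freqs, fs)
    print(np.round(20 * np.log10(np.abs(h)), 1))      # magnitude response in dB

The single-bin correlation also returns phase, so the same loop would serve for servo-loop (Bode) measurements.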
Reply by rickman, March 15, 2013
On 3/15/2013 6:06 AM, Nico Coesel wrote:
> rickman<gnuarm@gmail.com> wrote:
>
>> On 3/14/2013 12:03 PM, Nico Coesel wrote:
>>>
>>> It's a factor 16 attenuation you don't need to do in hardware. So if the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable with 2 relays) you save quite some circuitry. 8 bits is probably more than enough. It will be hard to get the response so flat that more than 8 bits actually adds accuracy to the readout.
>>
>> I've worked with 8 bit scopes and the vertical clearly shows steps which I find interfere with making reasonable measurements. That's why I said
>
> That has more to do with how the software shows the signal. Interpolation can solve a lot.
That may be. The OP was talking about debugging the sinc reconstruction and I've been thinking a little bit about just how useful that is. Is there a downside to sinc reconstruction, other than the work required?

I was thinking an aliased signal might interfere with this, but now that I give it some thought, I realize they are two separate issues. If you have an aliased tone, it will just be a tone in the display whether you use sinc reconstruction or not.

In fact, the *very* low end scope I was using may have had a limitation in the display itself! 8 bits is 256 steps. That shouldn't be too big a problem.

--

Rick
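A quick numerical check of that "separate issues" point, as a sketch: a tone above Nyquist produces exactly the same sample set as its alias below Nyquist, so any reconstruction, sinc or otherwise, can only ever draw the aliased tone.

    import numpy as np

    n = np.arange(1000)                           # sample index, with fs normalized to 1
    tone_above = np.cos(2 * np.pi * 0.7 * n)      # input at 0.7*fs
    alias_below = np.cos(2 * np.pi * 0.3 * n)     # its alias at 0.3*fs
    print(np.allclose(tone_above, alias_below))   # True: the samples are indistinguishable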
Reply by rickman, March 15, 2013
On 3/15/2013 4:34 AM, John Devereux wrote:
> rickman<gnuarm@gmail.com> writes:
>
>> I've worked with 8 bit scopes and the vertical clearly shows steps which I find interfere with making reasonable measurements. That's why I said 2 spare bits out of 12. There is also a need for zooming in on a portion of a captured trace. At 8 bits all you see is the steps. With a full 12 bits you have a little bit of extra resolution so you can actually get a bit of detail.
>
> I agree a 12 bit (or 16 bit!) scope would be nice. LeCroy make one I think but it is very expensive.
>
> The situation with the standard 8 bits is not quite as bad as you portray in a higher end scope. They can sample at the full maximum digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it so that each point plotted at lower sweep rates represents the average of hundreds of samples potentially. The noise at 20GSPS smears out the steps then the averaging smooths out the noise. Or something like that. Anyway the result is much better than you would think from the 8 bit input.
You say that the 16 bit converters are expensive, then talk about using a 20 GHz 8 bit ADC. Is that not expensive, not to mention the clocking, the board for the high speed signals and the power supply to make all this happen?

I can't imagine this is actually a better approach to designing a scope with a stated goal of 20-25 MHz bandwidth. I would like to see at least 300 MHz, but the OP says 25 is good enough.

--

Rick
Reply by Tim Williams, March 15, 2013
"John Devereux" <john@devereux.me.uk> wrote in message 
news:87620tvz6t.fsf@devereux.me.uk...
> The situation with the standard 8 bits is not quite as bad as you portray in a higher end scope. They can sample at the full maximum digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it so that each point plotted at lower sweep rates represents the average of hundreds of samples potentially. The noise at 20GSPS smears out the steps then the averaging smooths out the noise. Or something like that. Anyway the result is much better than you would think from the 8 bit input.
It also reduces aliasing. Mine has a "high res" mode which does this -- only works below a certain range, of course.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://seventransistorlabs.com
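A minimal sketch of what such a "high res" mode amounts to (illustrative names and numbers only): average blocks of M consecutive high-rate samples into one displayed point. The boxcar acts as a crude low-pass ahead of the decimation, which is also why it knocks down aliasing, and with noise or dither present each factor of 4 in M buys roughly one extra effective bit.

    import numpy as np

    def high_res(record, m):
        """Boxcar-average a record in blocks of m samples (the 'high res' decimation)."""
        n = (len(record) // m) * m
        return record[:n].reshape(-1, m).mean(axis=1)

    # Toy demo: a slow ramp spanning ~100 LSB, with ~0.7 LSB rms noise, quantized to 8 bits
    fs = 20e9                                          # 20 GS/s digitizer, as discussed above
    t = np.arange(2_000_000) / fs
    ideal = 100 * t / t[-1]
    codes = np.round(ideal + 0.7 * np.random.randn(len(t))).clip(0, 255)
    plotted = high_res(codes, m=256)                   # 256:1 -> roughly 4 extra effective bits
    print("rms error: raw %.2f LSB, averaged %.3f LSB"
          % (np.std(codes - ideal), np.std(plotted - high_res(ideal, 256))))

The averaged trace sits well inside one 8-bit step of the ideal ramp, which is the smoothing effect being described above.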
Reply by Nico Coesel, March 15, 2013
rickman <gnuarm@gmail.com> wrote:

>On 3/14/2013 12:03 PM, Nico Coesel wrote:
>> rickman<gnuarm@gmail.com> wrote:
>>
>>> On 3/12/2013 4:11 PM, Nico Coesel wrote:
>>>> rickman<gnuarm@gmail.com> wrote:
>>>>>
>>>>> I would like to hear from others about why the front end is the hard part. Exactly how do the attenuators work? Does the amp remain set to a given gain and the large signals are attenuated down to a fixed low range?
>>>>
>>>> You need to use capacitive dividers which need adjustment. In my design I used one varicap (controlled by a DAC) to do all the necessary adjustments for several ranges. Nowadays you could use a 12 bit ADC so you wouldn't need a variable gain amplifier. Another trick to get a programmable range is to vary the reference voltage of the ADC. I think I re-did the design of the front-end about 3 or 4 times.
>>>
>>> I don't get how a 12 bit ADC solves the attenuator problem. That is only 2 bits more than what I would like to see in a front end. Once a
>>
>> It's a factor 16 attenuation you don't need to do in hardware. So if the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable with 2 relays) you save quite some circuitry. 8 bits is probably more than enough. It will be hard to get the response so flat that more than 8 bits actually adds accuracy to the readout.
>
>I've worked with 8 bit scopes and the vertical clearly shows steps which I find interfere with making reasonable measurements. That's why I said
That has more to do with how the software shows the signal. Interpolation can solve a lot.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------