
DIY PC Oscilloscope

Started by Unknown March 9, 2013
On 3/16/2013 8:56 AM, Nico Coesel wrote:
> rickman <gnuarm@gmail.com> wrote:
>
>> That may be. The OP was talking about debugging the sinc reconstruction
>> and I've been thinking a little bit about just how useful that is. Is
>> there a downside to sinc reconstruction, other than the work required?
>
> It is usable if you have at least 5 samples per period, i.e. 0.2fs. The
> whole problem, though, is not the number of samples per period. According
> to sampling theory the signal is there; it just needs to be displayed
> properly so the operator sees a signal instead of 'random' dots. With a
> proper signal-reconstruction algorithm you can display signals up to the
> Nyquist limit (0.5fs).
>
>> I was thinking an aliased signal might interfere with this, but now that
>> I give it some thought, I realize they are two separate issues. If you
>> have an aliased tone, it will just be a tone in the display whether you
>> use sinc reconstruction or not.
>
> In my design I used a fixed sample rate (250 MHz) and a standard PC
> memory module. 1 GB already provides more than 2 seconds of storage for
> 2 channels. That solves the whole interference issue and allows a proper
> anti-aliasing filter. With polynomial approximation I could reconstruct
> a signal even when it's close to the Nyquist limit. I tested it and got
> it to work for frequencies up to 0.45fs.
I'm not following. Are you saying you need a long buffer of data in order to reconstruct the signal properly? -- Rick
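[The near-Nyquist reconstruction being discussed can be sketched in a few lines of NumPy. This is a hedged illustration of plain Whittaker-Shannon (sinc) interpolation, not Nico's polynomial implementation; the 0.4fs test tone, record length, and upsample factor are all invented for the demo.]

```python
import numpy as np

def sinc_reconstruct(samples, upsample=16):
    """Whittaker-Shannon interpolation: evaluate the band-limited
    signal implied by the samples on a finer time grid."""
    n = np.arange(len(samples))
    t = np.arange(len(samples) * upsample) / upsample
    # Each output point is a sinc-weighted sum over all input samples.
    return np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

# A test tone at 0.4*fs: only 2.5 raw samples per period, so the raw
# dots look scattered on screen, yet the reconstruction recovers the sine.
f = 0.4                          # cycles per sample (fs = 1)
n = np.arange(64)
raw = np.sin(2 * np.pi * f * n)

fine = sinc_reconstruct(raw)
t = np.arange(64 * 16) / 16.0
ideal = np.sin(2 * np.pi * f * t)

# Compare away from the record edges, where truncation error is small.
mid = (t > 10) & (t < 54)
err = np.max(np.abs(fine[mid] - ideal[mid]))
print(f"max mid-record reconstruction error: {err:.4f}")
```

[At integer sample times the sum collapses to the raw sample itself, since sinc(k) is zero for every nonzero integer k; the interpolation only fills in between the dots.]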
rickman <gnuarm@gmail.com> wrote:

>[snip discussion of sinc reconstruction near the Nyquist limit]
>
>I'm not following. Are you saying you need a long buffer of data in
>order to reconstruct the signal properly?
You need about 10 samples extra at the beginning and end to do a proper
reconstruction; lots of audio editing software does exactly the same, BTW.
Using a fixed sample rate solves a lot of signal-processing problems, but
it also dictates that a lot of the processing be done in hardware to keep
the speed reasonable.

--
Failure does not prove something is impossible; failure simply indicates
you are not using the right tools...
nico@nctdevpuntnl (punt=.)
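[The "~10 samples extra at each end" guard band can be illustrated with a truncated sinc sum: near the record edges the interpolation kernel reaches samples that don't exist, so the error there is far larger than in the middle. The tone frequency, record length, and margins below are assumed for the demo.]

```python
import numpy as np

def sinc_interp(samples, t):
    """Truncated Whittaker-Shannon sum over a finite record."""
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

f = 0.3                              # cycles per sample (fs = 1)
raw = np.sin(2 * np.pi * f * np.arange(128))

t = np.arange(0, 128, 0.25)          # 4x oversampled evaluation grid
err = np.abs(sinc_interp(raw, t) - np.sin(2 * np.pi * f * t))

# The error grows near the ends of the record, where the sinc tails
# want samples we don't have; a guard band of extra samples fixes it.
edge = err[t < 5].max()
mid = err[(t > 15) & (t < 113)].max()
print(f"edge error {edge:.3f} vs mid-record error {mid:.4f}")
```

[This is why an implementation would acquire slightly more than it displays and discard the margins, exactly as Nico describes.]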
On Sun, 17 Mar 2013 10:19:23 GMT, nico@puntnl.niks (Nico Coesel) wrote:

>rickman <gnuarm@gmail.com> wrote:
>[snip]
>
>You need about 10 samples extra at the beginning and end to do a
>proper reconstruction. Lots of audio editing software does exactly the
>same BTW. Using a fixed samplerate solves a lot of signal processing
>problems but also dictates a lot of processing needs to be done in
>hardware to keep the speed reasonable.
Interesting stuff can be done with a really long record. You can do signal
averaging of a periodic waveform with no trigger. Our new monster LeCroy
scope can take a long record of a differential PCI Express lane (2.5 Gbps
NRZ data), simulate a PLL data-recovery loop of various dynamics, and plot
an eye diagram, again without any trigger. Well, if it doesn't crash.

--
John Larkin
Highland Technology Inc
www.highlandtechnology.com
jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
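[The trigger-less averaging trick can be sketched: estimate the fundamental from the FFT of the long record, then fold every sample into a phase bin of that period and average. All parameters here (1 kSa/s, a 13.7 Hz tone, the noise level) are invented for the demo; this is not LeCroy's algorithm.]

```python
import numpy as np

rng = np.random.default_rng(0)

# A long record of a noisy periodic waveform; no trigger anywhere.
fs = 1000.0                          # sample rate, Hz (illustrative)
f0 = 13.7                            # true fundamental, "unknown" to us
t = np.arange(200_000) / fs          # 200 s of data
clean = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
record = clean + rng.normal(0.0, 1.0, t.size)   # noise swamps each sample

# Step 1: estimate the fundamental from the FFT peak of the whole record.
spec = np.abs(np.fft.rfft(record))
freqs = np.fft.rfftfreq(record.size, 1.0 / fs)
f_est = freqs[np.argmax(spec[1:]) + 1]          # skip the DC bin

# Step 2: fold each sample into a phase bin of the estimated period and
# average -- equivalent to averaging thousands of triggered sweeps.
nbins = 100
bins = (((t * f_est) % 1.0) * nbins).astype(int)
avg = (np.bincount(bins, weights=record, minlength=nbins)
       / np.bincount(bins, minlength=nbins))

print(f"estimated fundamental: {f_est:.3f} Hz")
```

[In a real record the tone won't land exactly on an FFT bin, so you would interpolate the spectral peak (or track phase drift) before folding; the long record is what makes the frequency estimate sharp enough for the fold to stay phase-coherent.]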
John Larkin wrote:

>>[snip quoted discussion of reconstruction guard bands]
>
> Interesting stuff can be done with a really long record. You can do signal
> averaging of a periodic waveform with no trigger. Our new monster LeCroy scope
> can take a long record of a differential PCI Express lane (2.5 gbps NRZ data),
> simulate a PLL data recovery loop of various dynamics, and plot an eye diagram,
> again without any trigger. Well, if it doesn't crash.
Oh, you have that problem too? Our LeCroy goes belly up now and then for no
apparent reason. It appears to me there is some random hardware-to-software
issue.

It reminds me of when Visual Basic was first foisted on us, back in the
Windows 3.x days. A serious app designed to operate fabric-cutting machines
for intricate designs would simply fault a plug-in component: it would get
stuck on some missed signal from the hardware and then time out or overflow
the stack. The app was built from VB controls that simply were not
resource-friendly or properly controlled. On top of that, the app cost
clients upwards of $10k. I was offered a job where it was developed, was
allowed to see it in operation, and watched its random failures; they wanted
me to join the debugging team to resolve it and move forward. I declined
the offer.

Jamie