QPSK abandoned for BPSK
The noise performance of the QPSK decoder wasn't what I was hoping for, even after several iterations. I pretty quickly threw out the Costas loop, because loss of phase lock was the weakest link in the chain, and moved to a system with a tight bandpass filter at the carrier frequency, measuring zero crossings of the carrier against a local oscillator. But the noise performance was still not impressive at 300 baud on a 3kHz carrier.
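To illustrate the zero-crossing idea, here is a minimal sketch (the names, sample rate, and structure are mine, not the actual receiver code): time each upward zero crossing of the carrier, compare it against where an ideal LO at the carrier frequency would place it, and take a circular mean of the per-cycle phase estimates.

```python
import math

FS = 48000.0   # sample rate (an assumption for this sketch)
FC = 3000.0    # the 3kHz carrier mentioned above

def zero_crossing_phase(samples, fs=FS, fc=FC):
    """Estimate carrier phase by timing upward zero crossings
    against where an ideal LO at fc would place them."""
    period = fs / fc                          # carrier period in samples
    phases = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < 0.0 <= b:                      # upward crossing between i-1 and i
            t = (i - 1) + (-a) / (b - a)      # crossing time by linear interpolation
            frac = (t % period) / period      # position within one LO cycle
            phases.append(-frac * 2.0 * math.pi)  # LO leads the crossing by the phase
    # circular mean of the per-cycle estimates
    s = sum(math.sin(p) for p in phases)
    c = sum(math.cos(p) for p in phases)
    return math.atan2(s, c)

# a clean carrier with a 90-degree offset should measure close to pi/2
sig = [math.sin(2 * math.pi * FC * n / FS + math.pi / 2) for n in range(480)]
```

Averaging over many carrier cycles is what makes this usable at all in noise; a single crossing is far too jittery on its own.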
I had hoped to go back and attempt recovery of a damaged QPSK code, correcting the data phase on the broken code bits, since we know what they should be according to the code. But there was no visible commonality between what was happening on the phase information carrying the code and the phase information carrying the data.
So for now I have moved back to a two-phase BPSK system that interleaves the code and the payload symbol by symbol, in order to better understand how far the code can be pushed against noise.
Improved noise performance
Currently I use a 4.8kHz carrier and 1200 baud symbols. Here is the performance today:
The RED line is the best decode seen at the correct match offset; the BLUE line is the best decode across all other (wrong) offsets. The PURPLE line is the mean decode at the correct match offset over the 100 runs. GREEN is the "quality cutoff", which basically keeps us away from the blue line's false hits: because of the properties of the code, it is extremely unlikely that noise alone will get near the green line.
Here is a close-up on the noisy end of things:
So far, in terms of detecting the code, we can on average do so when the input energy is 82% noise (-15dB SNR). We can still detect codes ~1% of the time even at 92% noise (-21dB SNR). Here is a further close-up (this one is over 1000 runs, not 100),
suggesting you can still get good recoveries ~0.1% of the time at 95% noise (-25.5dB SNR). And note that we are now measuring whole-system performance here, including the demodulator, rather than just damaging the code bits directly.
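As a sanity check on those figures, here is a tiny helper (mine, not the measurement code) that converts a "percent noise" figure into SNR in dB, under the assumption that the percentage describes the noise's share of the combined signal-plus-noise amplitude; on that reading it roughly reproduces the -21dB and -25.5dB numbers above.

```python
import math

def snr_db_from_noise_fraction(noise_frac):
    """SNR in dB when noise makes up noise_frac of the total amplitude
    and the signal is the remaining (1 - noise_frac).  Assumption: the
    quoted percentages are amplitude fractions, not power fractions."""
    signal_frac = 1.0 - noise_frac
    return 20.0 * math.log10(signal_frac / noise_frac)

# snr_db_from_noise_fraction(0.92) is about -21.2
# snr_db_from_noise_fraction(0.95) is about -25.6
```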
Current BPSK receiver
Here is the receiver for the current BPSK method:
I spent several days meddling with the QPSK version before arriving at the carrier zero-crossing method for phase detection. The original plan -- a symbol sampler running from a locked LO -- hung around causing lots of problems. The indications of a change in symbol, detected by zero crossings, were variously delayed by both the phase itself and the filters used, making it difficult to convert their jittery indications into a guide for the symbol recovery clock. The result was double-sampled bits.
The code is the symbol clock
Because of this I eventually realized that no symbol clock was needed. By running the correlator at the sample frequency and sampling symbols at a fixed period, the false-match rejection properties of the code let it "discover" the correct phase and offset from the behaviour of the correlator output. And when the code is recovered best, the data interleaved with it will be recovered best too. This sounds power-unfriendly, but after the first offset is found, you don't run the correlator again until enough time has passed for 256 symbols to be acquired; at that point you should still be locked from the first 256 symbols, and if not, you run the correlator a few dozen times to find lock again.
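The clockless idea can be sketched like this (toy code length and values, not the real 256-symbol system): slide the correlator across the soft symbol stream at every offset and let the code's false-match rejection pick out the correct alignment.

```python
def best_offset(code, received):
    """Slide the code over the received soft symbols and return the
    offset with the strongest correlation magnitude.  Sketch only:
    the real correlator runs continuously at the sample rate."""
    n = len(code)
    best, best_score = 0, float("-inf")
    for off in range(len(received) - n + 1):
        score = abs(sum(c * r for c, r in zip(code, received[off:off + n])))
        if score > best_score:
            best, best_score = off, score
    return best

# toy example: an 8-symbol code buried at offset 5 in an otherwise silent stream
code = [1, 1, 1, -1, -1, 1, -1, 1]
received = [0.0] * 5 + [float(c) for c in code] + [0.0] * 5
# best_offset(code, received) finds offset 5
```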
Also of note: what is stored in the ringbuffer is a weighted average of the last four phase results. This folds in phase information from multiple carrier cycles of the same symbol, helping to reduce the effect of noise on the decode.
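Something like the following, where the weights are illustrative guesses (the real weighting may well differ) and phase wraparound is ignored for brevity:

```python
from collections import deque

WEIGHTS = (4, 3, 2, 1)   # newest result weighted heaviest (an assumption)

class PhaseSmoother:
    """Weighted average over the last four raw phase results, standing in
    for what gets stored in the ringbuffer.  Illustrative sketch only."""
    def __init__(self):
        self.hist = deque(maxlen=4)

    def push(self, phase):
        """Add a raw phase result; return the smoothed value to store."""
        self.hist.appendleft(phase)
        total = sum(w * p for w, p in zip(WEIGHTS, self.hist))
        return total / sum(WEIGHTS[:len(self.hist)])
```

A real implementation would also need to handle the 2-pi wraparound when averaging phases; this sketch assumes consecutive results are already close together.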
The code is the data!
Recovering the code in heavy noise is interesting, but how useful is it if the data bits interleaved with it have been subjected to the same battering, without the properties of the code to protect them? We could use the autosyncing properties of the code to try to get some payload signal gain through averaging, but I think I have seen where this is headed now... the code IS the signalling system for the payload data. That means throwing out the interleaved-data concept.
A '0' can be signalled as the normal code, and a '1' as a time-reversed code. Because this is intended for low-data-rate communication at VERY low signal levels compared to noise, it's okay that we are reduced to about 1 byte/sec, which is what this works out to at 1200 baud. Basically, this carries over all of the great robustness qualities of the code to the payload data itself.
So what is the point, compared to just blasting 128 symbol times of the same-phase carrier, which is a hell of a lot simpler?
- Three high-accuracy results: "no result", a '0', or a '1'. The carrier-only method will happily return a bogus result if there is noise energy at the carrier frequency -- whereas if the code method ever claims a '0' or a '1', you can be almost certain it is genuine
- "Fuzzy" robust damage-tolerant signal detection, better performance than a simple threshold comparator
- Automatic bit sync ("bit clock recovery") with almost no chance of wrong sync, and sync recaptured each symbol; bit sync for the carrier-only version in high noise is unreliable
- Absolute phase polarity can be recovered from a single symbol despite BPSK's 180-degree lock ambiguity -- you can even lose lock one or more times inside the symbol and still recover the code with absolute phase; the carrier-only concept needs coding at a higher level to determine absolute recovered phase, in turn needing multiple correct symbols recovered without losing lock
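The three-way decision in the first bullet might look something like this (the cutoff value and names are illustrative, not the actual decoder): correlate one code-length block against the code and its time reverse, and only claim a bit when the winner clears the quality cutoff.

```python
def decide_bit(code, symbols, cutoff=0.75):
    """Return '0', '1', or None for one code-length block of soft symbols.
    cutoff is the fraction of a perfect correlation required to claim a bit
    (an illustrative stand-in for the green "quality cutoff" line)."""
    n = len(code)
    fwd = abs(sum(c * s for c, s in zip(code, symbols)))            # normal code -> '0'
    rev = abs(sum(c * s for c, s in zip(reversed(code), symbols)))  # reversed code -> '1'
    score, bit = max((fwd, '0'), (rev, '1'))
    if score < cutoff * n:
        return None            # below the quality line: report nothing at all
    return bit

# toy 8-symbol code (the real system uses a much longer one)
code = [1, 1, 1, -1, -1, 1, -1, 1]
# decide_bit(code, [float(c) for c in code]) gives '0'
# decide_bit(code, [float(c) for c in reversed(code)]) gives '1'
# decide_bit(code, [0.1] * 8) gives None: noise-like input is rejected
```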