Channel: KA7OEI's blog

Making a "Word Metronome" for pacing of speech


Figure 1:
The completed "Word Metronome".  There are two recessed
buttons on the front and the lights are on the left side.
Click on the image for a larger version.
One of the things that my younger brother's job entails is to provide teaching materials - and this often includes some narration.  To assure consistency - and to fall within the required timeline - such presentations must be carefully designed in terms of timing to assure that everything is said that should be said within the window of the presentation itself.

Thus, he asked me to make a "word metronome" - a stand-alone device that would provide a visual cue for speaking cadence.  The idea wasn't to make the speech robotic and staccato in its nature, but rather providing a mental cue to provide pacing - something that is always a concern when trying to make a given amount of material fit in a specific time window:  You don't want to go too fast - and you certainly don't want to be too slow and run over the desired time and, of course, you don't want to randomly change your rate of speech over time - unless there's a dramatic or context-sensitive reason to do so.

To be sure, there are likely phone apps to do this, but I tend to think of a phone as a general-purpose device, not super-well suited for most of the things done with it, so a purpose-built, simple-to-operate device with visual indicators on its side that could just sit on a shelf or desk (rather than a phone, which would have to be propped up) couldn't be beat in terms of ease-of-use.

Circuitry:

The schematic of the Word Metronome is depicted in Figure 2, below:

Figure 2:
Schematic of the "Word Metronome"
(As noted in the text, the LiIon "cell protection" board is not included in the drawing).
Click on the image for a larger version.

This device was built around the PIC16F688, a 14 pin device with a built-in oscillator.  This oscillator isn't super-accurate - probably within +/-3% or so - but it's plenty good for this application.

One of the complications of this circuit is that of the LEDs:  Of the five LEDs, three are of the gallium nitride "blue-green" type (which includes "white" LEDs) and the other two are high-brightness red and yellow - and this mix of LED types poses a problem:  How does one maintain consistent brightness over a varying supply voltage?

As seen in Figure 3, below, this unit is powered by a single lithium-ion cell, which can have a voltage ranging from 4.2 volts while on the charger to less than 3 volts when it is (mostly) discharged.  What this means is that the brightness - at least for the gallium nitride types of LEDs - can range from "more than enough to light it" to "so dim that you may need to strike a match to see if it's lighting up".  For the red and yellow LEDs, which need only a bit above two volts, this isn't quite the issue, but if one used a simple dropping resistor, the LED brightness would still change dramatically over the range of voltages seen during the battery's discharge curve.

As one of the goals of this device was to have the LEDs be both of consistent brightness - and dimmable - a different approach was required, and this meant several extra bits of circuitry and a bit of attention to detail in the programming.

The Charge Pump:

Perhaps the most obvious feature of this circuit is the "Charge Pump".  Popularized by the well-known ICL7660 and its many (many!) clones, this circuit uses a "flying capacitor" to step up the voltage - specifically, that surrounding Q1 and Q2.  In software - at a rate of several kHz - a pulse train is created, and its operation is thus:

  • Let us start by assuming that pin RC4 is set high (which turns off Q1) and pin RA4 is set low (which turns off Q2.)
  • Pin RA4 is set high, turning on Q2, which drags the negative side of capacitor C2 to ground.  This capacitor is charged to nearly the power supply voltage (minus the "diode drop") via D1 when this happens.
  • Pin RA4 is then set low, and Q2 is turned off.
  • At this point nothing else is done for a brief moment, allowing both transistors to turn themselves off.  This very brief pause is necessary as pulling RC4 low the instant RA4 is set low would result in both Q1 and Q2 being on for an instant, causing "shoot through" - a condition where the power supply is momentarily shorted out when both transistors are on, resulting in a loss of efficiency.  This "pause" need only be a few hundred nanoseconds, so waiting for a few instruction cycles to go by in the processor is enough.
  • After just a brief moment, pin RC4 is pulled low, turning on Q1, which then drags the negative side of C2 high.  When this happens, the positive side of C2 - which is already charged to (approximately) the power supply voltage - is at a potential well above that of the power supply.
  • This higher voltage flows through diode D3 and charges capacitor C4, which acts as a reservoir:  This voltage on the positive side of C4 is now a volt or so less than twice the battery voltage.
  • Pin RC4 is then pulled high, turning off Q1.
  • There is a brief pause, as described above to prevent "shoot through", before we set RA4 high and turn Q2 on for the next cycle.
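The steps above can be sanity-checked with a minimal, idealized C model of one pump cycle.  The 3.7 volt cell voltage and 0.6 volt diode drops are assumed round numbers, not values from the article:

```c
#include <assert.h>
#include <math.h>

/* Idealized model of one cycle of the flying-capacitor charge pump.
 * While Q2 grounds the negative side of C2, the capacitor charges to
 * (Vbat - Vdiode) through D1; when Q1 then lifts the negative side to
 * Vbat, the positive side sits at Vbat + (Vbat - Vdiode), and the
 * reservoir capacitor C4 receives that, less one more drop through D3. */
static double charge_pump_vout(double v_bat, double v_diode)
{
    double v_c2   = v_bat - v_diode;   /* C2 charged via D1 (Q2 on)    */
    double v_peak = v_bat + v_c2;      /* negative side lifted (Q1 on) */
    return v_peak - v_diode;           /* transferred via D3 into C4   */
}
```

For a 3.7 volt cell this predicts about 6.2 volts on C4 - "a volt or so less than twice the battery voltage", as described below.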

It is by this method that we generate a voltage several volts higher than that of the battery voltage, and this gives us a bit of "headroom" in our control of the LED current - and thus the brightness.

Current limiter:

Transistors Q3 and Q4 form a very simple current limiter:  In this case it is "upside-down" from the more familiar configuration as it uses PNP transistors - something that I did for no particular reason as the NPN configuration would have been just fine.

Figure 3:
Inside the "Word Metronome".  The 18650 LiIon cell is on
the right - a cast-off from an old computer battery pack.  The
buttons on the board are in parallel with those on the case and
were used during initial construction/debugging.
Click on the image for a larger version.

This circuit works by monitoring the voltage across R3:  If this voltage exceeds the turn-on threshold of Q3 - around 0.6 volts - Q3 will turn on, and when it does it pulls the base voltage of Q4, provided by R5, toward Q4's emitter, turning off Q4.  By this action, the current will come to equilibrium at that which results in about 0.6 volts across R3 - and in this case, Ohm's law tells us that 0.6 volts across 47 ohms implies (0.6/47=0.0128 amps) around 13 milliamps:  At room temperature, this current was measured to be a bit above 14 milliamps - very close to that predicted.
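That equilibrium point can be expressed directly from Ohm's law - a trivial sketch using the component values above (the 0.6 volt Vbe is the usual rule-of-thumb approximation):

```c
#include <assert.h>

/* Current at which the limiter reaches equilibrium:  The current that
 * develops one Vbe (~0.6 V) across the sense resistor R3 (47 ohms). */
static double limiter_current_amps(double v_be, double r_sense_ohms)
{
    return v_be / r_sense_ohms;
}
```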

With the current being limited, the power supply voltage has very little effect on the current through the LEDs, which means that it didn't matter whether the LED was of the 2 or 3 volt type, or what the state of charge of the battery was:  The most that could ever flow through an LED, no matter what, was 14 milliamps.

With the current fixed in this manner, brightness could be adjusted using PWM (Pulse Width Modulation) techniques.  In this method, the duty cycle ("on" time) of the LED is varied to adjust the brightness.  If the duty cycle is 100% (on all of the time) the LED will be at maximum brightness, but if the duty cycle is 50% (on half of the time) the LED will be at half-brightness - and so on.  Because the current is held constant by the limiter circuit no matter what, we know that the only thing that affects the brightness of the LED is the duty cycle.
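Since the peak current is pinned by the limiter, the average LED current - and thus perceived brightness - scales directly with duty cycle.  A one-line sketch of that relationship:

```c
#include <assert.h>

/* With the limiter holding the "on" current constant (~13 mA), the
 * average LED current is simply the duty cycle times that peak. */
static double avg_led_current_ma(double duty_cycle, double peak_ma)
{
    return duty_cycle * peak_ma;
}
```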

LED multiplexing:

The final aspect of the LED drive circuitry is the fact that the LEDs are all connected in parallel, with transistors Q5-Q9 being used to turn them on.  When wiring LEDs in parallel, one must make absolutely sure that each LED is of the exact same type, or else the one with the lowest forward voltage will consume the most current.

In this case, we definitely do NOT have same-type LEDs (they are ALL different from each other) which means that if we were to turn on two LEDs at once, it's likely that only one of them would illuminate:  That would certainly be the case if, say, the red and blue LEDs were turned on:  With the red's forward voltage being in the 2.5 volt area, the voltage would be too low for the green, blue or white to even light up.

What this means is that only ONE LED must be turned on at any given instant - but this is fine, considering how the LEDs are used.  The red, yellow or green are intended to be on constantly to indicate the current beat rate (100, 130 or 160 BPM, respectively) with the blue LED being flashed to the beat (and the white LED flashing once per minute) - but by blanking the "rate" LED (red, yellow or green) when we want to flash the blue or white one, we avoid the problem altogether.
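The "only one LED at a time" rule amounts to a small priority function - a sketch of the blanking logic just described (the names are mine, not from the actual firmware):

```c
#include <assert.h>

enum led { LED_NONE, LED_RED, LED_YELLOW, LED_GREEN, LED_BLUE, LED_WHITE };

/* Pick the single LED allowed to conduct right now:  The once-per-minute
 * white flash and the per-beat blue flash pre-empt (blank) the steady
 * "rate" LED, so no two different LED types are ever on together. */
static enum led select_led(enum led rate_led, int beat_flash, int minute_flash)
{
    if (minute_flash) return LED_WHITE;
    if (beat_flash)   return LED_BLUE;
    return rate_led;
}
```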

Battery charging:

Not shown in the schematic is the USB battery charging circuit.  Implementing this was very easy:  I just bought some LiIon charger boards from Amazon.  These small circuit boards came with a small USB connector (visible in the video, below) and a chip that controlled both charging and "cell protection" - that is, they would disconnect the cell if the battery voltage got too low (below 2.5-2.7 volts) to protect it.  Since its use is so straightforward - and covered by others - I'm only mentioning it in passing.

Software:

Because of its familiarity to me, I wrote the code for this device in C using the "PICC" compiler from CCS (Custom Computer Services).  As is my practice, this code was written for the "bare metal", meaning that it interfaces directly with the PIC's built-in peripherals, and porting it to other platforms would require a bit of work.

The unit is controlled via two pushbuttons, using the PIC's own pull-up resistors.  One button primarily controls the rate while the other sets the brightness level between several steps, and pressing and holding the rate button will turn it off and on.  When "off", the processor isn't really off, but rather the internal clock is switched to 31 kHz and the charge pump and LED drivers are turned off, reducing the operating current of the processor to a few microamps at most.

Built into the software, there is a timer that, if there is no button press within 90 minutes or so, will cause the unit to automatically power down.  This "auto power off" feature is important as this device makes no noise and it would be very easy to accidentally leave it running.
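The auto power-off can be modeled as a once-per-second countdown that any button press reloads - a sketch under the article's 90 minute figure (the structure and names here are assumptions, not the actual firmware):

```c
#include <assert.h>

#define AUTO_OFF_SECONDS (90u * 60u)   /* ~90 minutes of inactivity */

struct metronome {
    unsigned seconds_left;  /* countdown to auto power-off */
    int      powered;       /* 1 = running, 0 = asleep     */
};

/* Called once per second:  A button press reloads the timer; with no
 * activity the count runs down and the unit powers itself off. */
static void tick_1s(struct metronome *m, int button_pressed)
{
    if (!m->powered)
        return;
    if (button_pressed) {
        m->seconds_left = AUTO_OFF_SECONDS;
        return;
    }
    if (m->seconds_left > 0)
        m->seconds_left--;
    if (m->seconds_left == 0)
        m->powered = 0;
}
```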

Below is a short (wordless!) video showing the operation of the "Word Metronome" - enjoy!

 


This page stolen from ka7oei.blogspot.com




Using an ATX computer power supply to run KiwiSDRs - and as a general purpose 5 and 12 volt supply


At the Northern Utah WebSDR (link) we run a number of KiwiSDR receivers.  These receivers, which are inherently broadband (10 kHz to 30 MHz) allow a limited number of users to tune across the bands, allowing reception on frequencies that are not covered by the WebSDR servers.

At present there are six of these receivers on site:  Three are connected to the TCI-530 Omnidirectional antenna (covering 630-10 meters - 2200 meters is included via a separate E-field whip), two are on the east-pointing log-periodic beam antenna (which covers 40-10 meters) and the newest is connected to the northwest-pointing log-periodic beam antenna (which covers 30-10 meters).

Figure 1:
Power supply in a PC case!
The PC case housing the power supply was repurposed -
because, why not?
Click for larger version
The power requirements of a KiwiSDR are modest, being on the order of 600-800 mA, but the start-up current can briefly exceed 1.25 amps.  Additionally, they do not start up reliably if the voltage "ramps up" rather slowly - a problem often exacerbated by the fact that the extra current that they draw upon power-up can cause a power supply to "brown out".

Up to this point we had been running 5 KiwiSDRs:  Three of them were powered by a pair of 5 volt, 3 amp linear power supplies that are diode-ORed together to form a 6 amp power supply, and the other two KiwiSDRs were powered from a heavily-filtered 5-volt, 3 amp switching power supply.

In recent months, the dual 3 amp linear supply had become problematic, not being able to handle the load of the three KiwiSDRs, so we had to power down KiwiSDR #3.  With the recent installation of the northwest-pointing log periodic antenna, we were also looking toward installing another KiwiSDR for that antenna and we were clearly out of power supply capacity.

Using an ATX supply as a general-purpose power supply - it's not just the green wire!

If you look around on the Web, you'll see suggestions that you just "ground the green wire" to turn on an ATX supply, at which point you may use it as a general-purpose supply.  While grounding the green wire does turn it on, it's not as simple as that - particularly if you leave the power supply unattended.

For example, what if there is a brief short on the output while you are connecting things, or what if the mains power browns out (or turns off) for just the "wrong" amount of time?  These sorts of things do happen, and they can "trip out" the power supply such that it may never restart on its own.

We couldn't afford for this to happen - so you'll see, below, how we remedied this.

Putting together another power supply:

With six KiwiSDRs, the power supply requirements were thus:

  • 5 amps continuous, making the assumption that a KiwiSDR's average current consumption would be about 830 mA - a number with generous overhead.
  • 9 amps on start-up, presuming that each KiwiSDR would briefly consume 1.5 amps upon power-up, again a value with a bit of overhead.
  • The power supply must not exhibit a "slow" ramp-up voltage as the KiwiSDRs did not "like" that.
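The first two numbers above can be sanity-checked with a bit of arithmetic - six receivers at the assumed per-unit currents:

```c
#include <assert.h>

/* Aggregate current for n identical loads, in amps. */
static double total_amps(int n_units, double amps_each)
{
    return n_units * amps_each;
}
```

Six units at 830 mA each comes to just under 5 amps continuous, and six at 1.5 amps each comes to 9 amps at start-up - matching the requirements above.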

In looking around for a power supply on which to base the design, the obvious choice was a computer-type ATX power supply.  Fortunately, I have on hand a large number of 240 watt ATX supplies with active power factor correction which are more than capable of supplying the current demands, being rated for up to 22 amps on the 5 volt output - more than enough headroom, as I would be using less than half of that, at least with the currently-planned usage.

Circuit description:

Refer to the schematic in Figure 2 for components in the description.

Added filtering:

While these power supplies were already known to be adequately RF-clean from their wide use with the WebSDR servers (important for a receive site!), because we would be conducting the DC outputs outside the box - and to receivers - I felt it important that additional filtering be added.  Having scrapped a number of PC power supplies in the past, I rummaged around in my box of random toroids and found two that had probably come from old PC power supplies, wound with heavy wire consisting of 4 or 5 strands in parallel.  These inductors measured in the tens of microhenries - enough for HF filtering when used with additional outboard capacitance.

These filter networks were constructed using old-fashioned phenolic terminal lug strips.  These consist of a row of lugs to which components are soldered - typically with one or two of the lugs used for mounting, and also "grounding".  Rather than mount these lugs using a drill and screw, they were soldered to the steel case itself - something easily done by first sanding a "bare" spot on the case to remove any paint or oxide and then using an acid-core flux - cleaning it up afterwards, of course!

The heavier components (inductors, capacitors) were mechanically secured using RTV (silicone) adhesive to keep them from moving around - and to prevent the possibility of the inductor's wire from touching the case.

Looking at the schematic you may note that C202, C302, C501, C502 and C503 are connected to a "different" ground than everything else.  While - at least for this power supply - the "common" (black) wire is internally connected to the case, it's initially assumed that this lead - which comes from the power supply - may be a bit "noisy" in terms of RF energy, so these points are RF-bypassed to the case of the power supply.  This may have been an unneeded precaution, but it was done nonetheless.

Connectorizing and wiring the power supply:

The ATX power connector was extracted from a defunct PC motherboard to allow the power supply itself to be replaced in the future if needed.  On this connector, all of the pins corresponding with the 5 volt (red wires), 12 volt (yellow wires) and ground (black wires) were bonded together to form three individual busses and heavy (12 AWG) wires were attached to each:  This was done to put as many of the wires emerging from the power supply in parallel with each other as possible, to minimize resistive losses.

The green wire (the "power" switch) and purple wire (the 5 volt "standby") were brought out separately as they would be used as well - and the remainder of the pins (3.3 volt, -12 volt, -5 volt, "power good", etc.) were flooded with "hot melt" glue to prevent anything from touching anything else that it shouldn't.

The 5 volt supply was split two ways - each going to its own L/C filter network (L501, L502, C502, C503, C504, C505) as shown in the schematic, this being done to reduce the total current through the inductor - both to minimize resistive losses, but also to reduce the magnetic flux in each inductor, something that could reduce its effective inductance.

Although I don't have immediate plans to use the 12 volt supply, a similar filter (L503, C506, C507) was constructed for the 12 volt supply lead.  On the output side of the 12 volt filter, a 3 amp self-resetting thermal fuse (F501) was installed to help limit the current should a fault occur. 

About the self-resetting fuses:

These fuses - which look like capacitors - operate by having a very low resistance when "cold".  When excess current flows, they start to get warm - and if too much current flows, they get quite hot (somewhere around 200°F, 93°C) and their internal resistance skyrockets, dropping the current to a fraction of its original value:  It's this current flow and the resulting heat that keeps the resistance high.

It's worth noting that these fuses don't "disconnect" the load - they just reduce the current considerably to protect whatever is connected to them.  Since, when "blown", they are hot, they must be mounted "in the clear", away from nearby objects that could be damaged by the heat - and also to prevent lowering of their trip current by trapping heat or being warmed by another component - such as another such fuse.

It should be noted that if the outputs - either 5 or 12 volts - are "hard shorted", the thermal fuse may not react quickly enough to prevent the power supply from detecting an overcurrent condition and shutting down.  As an output short is not expected to be a "normal" occurrence, this behavior is acceptable - but it will require that the power supply be restarted to recover from shutdown, as described below.

In the case of the KiwiSDRs, they are connected with fairly long leads (about 6 feet, 2 meters) which often have enough resistance to keep a fault current below the power supply's overcurrent limit:  Rather than allowing the full current of the power supply (which could be more than 20 amps) to flow through and burn up this cable, the fuse will trip as it should, protecting the circuit.  To "reset" the fuse, the current must be removed completely for long enough for the device to cool - something that is done with the 5 volt supplies, as we'll see below.


The controller:

As mentioned earlier, if you look on the web, you'll see other power supply projects that use an ATX power supply as a benchtop power source and most of those suggest that one simply connect the green (power on) wire to ground to turn it on - but this isn't the whole story.  In testing the power supply, I noticed two conditions in which doing this wouldn't be enough:

  • Shorting a power supply output.  If the output of a good-quality ATX power supply is shorted, it will immediately shut down - and stay that way until the mains power is removed (for a minute or so) or the power supply is "shut off" by un-grounding the green wire for a few seconds before reconnecting to "restart" the power supply.
  • Erratic mains power interruption.  It was also observed that if the mains power was removed for just the right amount of time, the power supply would also shut down and would not restart on its own.  It took the same efforts as recovering from an output short to restart the power supply.

Since this power supply would be at the WebSDR site - an unmanned location in rural, northern Utah - it would require additional circuitry to make this power supply usable.

Fortunately, an ATX power supply has a second built-in power supply that is independent of the main one - the "standby" power supply.  This is a low-power 5 volt supply that is unaffected by what happens to the main supply (e.g. not controlled by the power switch and not affected if it "trips off") and can be used to power a simple microcontroller-based board that can monitor and sequence the start-up of the main power supply.  For this task I chose the PIC16F688, a 14 pin microcontroller with A/D conversion capability and a built-in clock oscillator.

As seen in the schematic, the "5 volt standby" is diode-ORed (D601, D602) with the main power supply (12 volts) so that the controller always gets power - from either the 5 volt standby or the 12 volt output - whenever mains is applied.  R603 and capacitor C602 provide a degree of protection to the voltage regulator should some sort of "glitch" appear on the 12 volt supply - possibly due to the 5 volt load being abruptly disconnected (or connected).  The 5 and 12 volt supplies are "co-regulated" in the sense that it's really only the 5 volt output that is being regulated well:  The 12 volt output is pretty much a fixed ratio to the 5 volt and doesn't really have much in the way of separate regulation.

It should be noted that when operating from the standby +5 volt power source, the voltage from U2 (the 5 volt regulator) is on the order of 3 volts or so (drop through D602 and U2) but this is comfortably above the "brownout" threshold of the PIC, which is around 2.5 volts, so there isn't really a worry that the low-voltage brownout detector will trigger erroneously and prevent start-up.  If it had, I would have simply moved the cathode side of D602 to the +5V side of U2.

Figure 3: 
Inside the case!
Top right:  12 volt supply filtering and thermal fuse
Upper-middle:  Dual 5 volt filtering
Lower middle:  Controller board with FET switches
and thermal fusing.
The ATX power supply is in the lower-left corner.
Click on the image for a larger version.
Because the PIC microcontroller can monitor the 12 volt supply (via R601/R602) it "knows" when the main ATX supply is turned off.  Through the use of an NPN transistor (Q401) - the collector of which can be used to "ground" the green "power on" line, the controller can turn the main power supply on and off as follows:
  • When the microcontroller starts up, it makes sure that the ATX "power on" wire is turned off (e.g. un-grounded).  This is done by the microcontroller turning off Q401.
  • After a 10 second delay, it turns on the power supply by turning on Q401.

It also monitors the power supply to look for a fault.  If either the 5 or 12 volt output is shorted or faults out, both power supply outputs (but not the 5 volt "standby" output) disappear.

  • If, while running, the monitored 12 volt supply (via R601/R602 and "12V V_MON") drops below about half the voltage (e.g. trips out) the "power on" wire is turned off using Q401, disabling the ATX power supply.
  • A 10 second delay is imposed before attempting to turn the power supply back on.
  • Once the power supply is turned back on, monitoring of the voltage resumes.

In practice, if there is a "hard" short on the output, the power supply will attempt to restart every 10 seconds or so, but remember that a short on an output could occur with ANY sort of power supply, so this isn't a unique condition.
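The start-up and restart behavior described above amounts to a two-state supervisor.  This is a sketch of that logic, not the actual PIC firmware; the 6 volt fault threshold stands in for "about half" of 12 volts:

```c
#include <assert.h>

enum psu_state { PSU_OFF_DELAY, PSU_ON };

struct supervisor {
    enum psu_state state;
    unsigned       delay_s;   /* seconds before (re)enabling the supply     */
    int            q401_on;   /* 1 = Q401 grounds the green "power on" wire */
};

/* Called once per second with the measured 12 volt rail. */
static void supervisor_tick(struct supervisor *s, double v12)
{
    switch (s->state) {
    case PSU_OFF_DELAY:                /* supply held off; wait out delay   */
        if (s->delay_s > 0 && --s->delay_s == 0) {
            s->q401_on = 1;            /* ground green wire:  supply on     */
            s->state   = PSU_ON;
        }
        break;
    case PSU_ON:
        if (v12 < 6.0) {               /* rail collapsed:  fault detected   */
            s->q401_on = 0;            /* release green wire:  supply off   */
            s->delay_s = 10;           /* try again in 10 seconds           */
            s->state   = PSU_OFF_DELAY;
        }
        break;
    }
}
```

With a hard short on the output, this loop naturally produces the "attempt a restart every 10 seconds or so" behavior noted above.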

5 volt output sequencing and monitoring:

The other function of the controller is to sequence and monitor the 5 volt outputs.  As mentioned earlier, it was noted that the KiwiSDRs do not "like" a slow voltage ramp-up, so a FET switch is employed to effect a rapid turn-on - and since there are two separately-filtered 5 volt busses, there are two such switches.  In order to reduce the peak current caused when the load is suddenly connected, each of these busses is turned on separately, with a 10 second delay between the two.

The N-channel FET switches (Q203, Q303) are each controlled by an NPN transistor (Q201, Q301) being turned on by the microcontroller which, in turn, "pulls" the base of a PNP transistor (Q202, Q302) low via a base resistor (R202/R302), turning it on - and other resistors (R203, R303) assure that these transistors are turned off as needed.

With the emitter of the PNP connected to the 12 volt supply, the gate voltage of the FET is approximately 7 volts higher than the source voltage, assuring that it is turned on with adequately low resistance.  Capacitors (C201, C301) are connected between the FETs' gates and sources to suppress any ringing that might occur when the power is turned on/off and as a degree of protection against gate-source voltage spikes, while the 47k resistors (R207/R307) assure that the FETs get turned off.

The use of P-channel FETs was considered but, unless special "logic level" threshold devices were used, having only 5 volts between the gate and source wouldn't have turned them fully "on" unless the -5 or -12 volt supply from the power supply was also used.  While this would certainly have been practical, N-channel FETs are more commonly available.

Figure 2: 
Schematic of the ATX controller with power supply filtering, voltage monitoring, and control.
See the text for a description.
Click on the image for a larger version.

In series with the 5 volt supply and the FET's source is a 5 amp self-resetting thermal fuse to limit current.  Should an overload (more than 5-ish amps) occur on the output bus, this fuse will heat up and go to high resistance, causing the output voltage to drop.  If this occurs, the microcontroller, which is using its A/D converter to look at the voltage divider on the outputs (R205/R206 for the "A" channel, R305/R306 for the "B" channel) will detect this dip in voltage and immediately turn off the associated FET.  After a wait of at least 10 seconds - for the fault to be cleared (in the event that it is momentary) and to allow the thermal fuse to cool off and reset - the power will be reconnected.  If there continues to be a fault, the reset time is lengthened (up to about 100 seconds) between restart attempts.
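The lengthening retry interval might look something like this - a doubling backoff from 10 seconds capped at 100 is my assumption; the article says only that the wait grows from 10 to about 100 seconds:

```c
#include <assert.h>

/* Delay before the next reconnection attempt:  10 s after the first
 * fault, doubling with each consecutive fault, capped at 100 s.
 * (The exact progression is an assumption - see text.) */
static unsigned restart_delay_s(unsigned consecutive_faults)
{
    unsigned d = 10;
    while (consecutive_faults-- > 0) {
        d *= 2;
        if (d >= 100) return 100;
    }
    return d;
}
```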

Finally, the status of the power supply is indicated by a 2-lead dual-color (red/green) LED (LED701) mounted to be visible from the front panel.  During power supply start-up it flashes red, during the time delay to turn on the power supplies it is yellow, when operation is normal it is green - and if there is a fault, it is red.  Optionally, another LED (LED702) can be mounted to be visible:  This LED is driven with an algorithm that causes it to "breathe" (fade on and off - and on, and off...) to indicate that "something" is working.  I simply ran out of time, so I didn't install it.

* * *

This power supply was put together fairly quickly, so I didn't take as many pictures as I usually would - and I omitted taking pictures of the back panel where the power supply connections are made.  Perhaps it's just as well:  While I used a good-quality screw-type barrier strip, it was mounted to a small piece of 1/4" (6mm) thick plywood that was epoxied into the rectangular hole where one would normally connect peripherals to the motherboard.

As you would expect, the terminals are color-coded (using "Sharpies" on the wood!) and appropriately labeled.  While not pretty, it's functional!

(Comment:  The photo in Figure 3 was taken before I added the circuit to control the "Power On" wire (e.g. Q401) and the diode-OR power (D601, D602) - and it shows the dual-color LED on the board during testing.)

If you are interested in the PIC's code, drop me a note.




An ultrasonic superheterodyne receive converter (e.g. "Bat Listener")


In the mid 90s I decided to throw together what I called a "Bat Listener" - a simple receiver used to convert ultrasonic sound down to the audible range.

Figure 1:
The exterior of the ultrasonic receiver, complete with fancy
labeling!
Click on the image for a larger version.

Two types of circuit:

Frequency division

There are several ways to do this, the simplest being the "divider" type which digitally converts ultrasonic frequencies to audible by integer division of the input to a lower frequency.

The problem with this simple approach is that it does not preserve the amplitude (loudness) of the original sound since it must take the input signal, amplify/convert it to a series of logic-level pulses - which loses any amplitude reference - and do a brute-force digital division.  Additionally, if there are multiple signals present, for the most part only the strongest one will be converted down.

Clearly, one cannot "tune" this type of circuit:  A signal at 40 kHz will always be divided down by a fixed integer amount.  Let's say that the circuit digitally divides by 32:  That 40 kHz signal will appear at 1.25 kHz.

Additionally, the direct "A-B" frequency difference between ultrasonic signals is lost, instead becoming "(A-B)/N" where "N" is the division ratio.  In other words, the absolute frequency differences between signals are not preserved.
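A quick numeric illustration of both problems, using the divide-by-32 example from above (the heterodyne comparison anticipates the approach described next):

```c
#include <assert.h>

/* A divider-type converter maps every input to f/N; a heterodyne
 * converter maps it to |f - f_LO|, preserving absolute differences. */
static double divided_hz(double f_in, unsigned n)
{
    return f_in / n;
}

static double mixed_hz(double f_in, double f_lo)
{
    return f_in > f_lo ? f_in - f_lo : f_lo - f_in;
}
```

Two signals at 40 and 45 kHz - 5 kHz apart - end up only 156.25 Hz apart after division by 32, but remain 5 kHz apart after mixing against a common oscillator.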

Frequency conversion:

I chose, instead, to build a heterodyning receiver to convert the input frequency to a lower one.  This can preserve the amplitude and frequency relationships  - plus it is fully tunable, allowing one to choose the frequency range to convert to audible sounds - and since it is a simple conversion, multiple signals present will also be preserved.

When it comes to frequency conversion, there are two ways:  The simplest - direct conversion - would involve mixing a variable oscillator with the incoming signal and filtering/amplifying the resulting audio.  This has the advantage of being the easiest, and it is the method described in this article:

     April, 2006 QST article, "A Home-made Ultrasonic Power Line Arc Detector" (link)

While I could have easily built something like this, as I'm sometimes wont to do I decided to make it a bit more complicated, constructing a superheterodyne converter.

While a direct-conversion receiver simply mixes an oscillator with the desired signal to cause a frequency conversion, a superheterodyne receiver operates like a conventional AM or FM radio:  The desired signal is first converted to an IF (Intermediate Frequency) - and this IF is then converted to audio.  The advantage of the superheterodyne scheme is that filtering may be applied at the IF to limit the receive bandwidth - and since the IF is fixed, its width remains constant over the tuning range, just like that in a conventional radio/receiver.

Circuit description

Figure 2:
Schematic diagram of the superheterodyne ultrasonic receiver.
See text for a circuit description.
Click on image for a larger version.

As noted above, this circuit is more complicated than it needs to be, so make of it what you will!

VCO:

The heart of the unit is U1, the VCO (Voltage Controlled Oscillator), which uses the venerable CD4046 PLL chip.  Often used for frequency synthesis, we are using (only) the oscillator portion, which provides a linearly-tuned and fairly stable frequency source, adjusted by the voltage applied via R101 (and scaling resistor R102).  The values were chosen to provide an approximate frequency range of 125 to 185 kHz (more on this later) to allow tuning of input signals from (ostensibly) 0 to about 60 kHz.  The actual tuning range is closer to 115-190 kHz, providing a bit of margin at the band edges.

The only critical component here is C101, which should be a frequency-stable capacitor.  I used a polystyrene capacitor, but an NP0 (a.k.a. C0G) or silver-mica type could be used instead.  When I reverse-engineered this device, I noted that the marked capacitance value was unreadable, but back-of-the-envelope calculations indicate that a value of about 150 pF should be in the ballpark.

R103, connected to the "R1" pin of U1, sets the approximate center frequency range while R104, connected to the "R2" pin, sets the lowest frequency - which is important, since we want to constrain the tuning to 125-185 kHz.  Additionally, the low end of the tuning range was further refined by R102 on the "ground" side of the tuning potentiometer, which sets the minimum voltage that may be applied to the "VCOIN" pin.

The VCO output, a square wave, is buffered by U2, a hex inverter, and several sections are used to provide both a VCO signal and its inverted version to drive the mixer.

While the 4000 series CMOS chips used throughout this receiver will happily run from 3-15 volts, they are operated from a regulated 5 volt supply - mainly to improve frequency stability, to provide a nice, stable voltage for a few other low-level circuits, and to isolate them from the main battery supply, which will vary a bit, particularly at higher receive volumes:  This variance, if it got back into the earlier stages, could cause instability of the receiver in the form of "motorboating" or some other type of feedback.

BFO:

Another circuit is the BFO (Beat Frequency Oscillator), used to convert the IF signal back down to audio - both processes that we'll discuss shortly.  This uses an inexpensive 500 kHz ceramic resonator to form an oscillator around section U2C, the signal being buffered by U2B.  This signal is divided by two using U3A - one half of a 4013 dual flip-flop - and then divided by two again using U3B, yielding a stable 125 kHz signal.  As with the VCO, two phases of this signal (normal and inverse) are available, this time using the "Q" and "!Q" outputs of the 4013.

Input signal path:

J1, a disconnect-type 3.5mm stereo jack, is wired so that an internally-mounted electret "capsule" microphone is connected by default.  This microphone element (M301) is of the "2 wire" type of electret microphone in which a bias voltage is applied to the same pin from which audio is drawn - this voltage being applied via R301 from the 5 volt regulated supply.  The specific make/model of this electret element is unknown as it was selected from a small collection to find the best performer at ultrasonic frequencies.

At some point in the future, I'll replace this with a more modern MEMs microphone as described in THIS article:  Improving my ultrasonic sniffer for finding power line arcing by using MEMs microphones - link.

The signal from the microphone is applied to U4A, which is wired as a unity-gain buffer.  For this, an LM833 is used - an inexpensive, low-noise dual op amp:  An LM358 or many other types may be used here as well - just make sure that it is fairly low noise:  I'd avoid the LM1458 here as it is quite noisy by comparison!

Section U4B amplifies the signal voltage by 10 (20 dB) and this signal is applied via R305 to a simple L/C high-pass filter consisting of C303, C304, L301 and L302 - the latter two components being inexpensive 18 millihenry inductors.  Certainly, a high-pass filter could have been constructed around U4B, but I chose not to do that for some reason.

Figure 3:
Inside the ultrasonic receiver, constructed on
prototype board and having been modified
several times over the years.
Click on the image for a larger version.

In simulation, the C303/C304/L301/L302 filter has a -3dB roll-off at about 23 kHz; it's down by 10dB at about 19.5 kHz, by 20dB at about 16 kHz and by 40 dB at 9 kHz - and with the values shown, it's flat to within 1 dB between about 24 and 100 kHz.
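Since my original notes (and the capacitor values) are long gone, here's a minimal sketch of how such a ladder can be sanity-checked in simulation.  The 18 mH inductors are from the article; C303 = C304 = 2.7 nF is a guess that lands the corner near the quoted 23 kHz, and the 600 ohm source / 100k load impedances are likewise assumptions:

```python
import numpy as np

# Known from the article: L301 = L302 = 18 mH.
# Assumed (NOT from the article): C303 = C304 = 2.7 nF, 600 ohm source, 100k load.
L, C, Rs, Rl = 18e-3, 2.7e-9, 600.0, 100e3

def series(Z):
    """ABCD matrix of a series impedance."""
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt(Z):
    """ABCD matrix of a shunt impedance."""
    return np.array([[1, 0], [1 / Z, 1]], dtype=complex)

def gain(f):
    """Source-EMF-to-load voltage gain of the assumed C-L-C-L high-pass ladder."""
    w = 2 * np.pi * f
    M = series(1/(1j*w*C)) @ shunt(1j*w*L) @ series(1/(1j*w*C)) @ shunt(1j*w*L)
    A, B, Cp, D = M.ravel()
    return abs(Rl / (A * Rl + B + Rs * (Cp * Rl + D)))

for f in (9e3, 16e3, 23e3, 50e3):
    print(f"{f/1e3:5.0f} kHz: {20*np.log10(gain(f)/gain(100e3)):6.1f} dB re 100 kHz")
```

With different assumed capacitor values the corner moves, but the general shape - a steep low-side skirt and a flat ultrasonic passband - is what matters here.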

The output of the filter is amplified by U5B - and then even more by U5A (which has a bit of roll-off from C307) to yield a whole lot of gain.  It's very possible that I over-did the gain here, but unless the signal source is quite close, there is no noted clipping on the output of U5A.

It's worth noting that a mid-supply voltage is created using R309/R310 to provide a "virtual ground" for the op amps; to maintain stability, it is heavily filtered by C306 and C302, each located near the respective op amp shown on the diagram.

Mixer and band-pass filter:

It is this next section that may seem unfamiliar to some - the use of a CMOS analog switch as a signal mixer.  For this, a CD4066 is used, which consists of four separate analog switches.  The filtered and amplified ultrasonic input signal from U5A is applied via C308 to pins 2 and 10 of U6A/U6D.  When the respective signals on the control pins "VCO_A" and "VCO_B" go high, the switches are activated - and because VCO_A and VCO_B are inverses of each other, each of these switches is closed in turn.  The result is that the input signal is chopped up at the rate of the 125-185 kHz VCO, producing two mixing products.

For example, let's assume that a 40 kHz signal is present on the input that we wish to hear.  If the VCO is tuned 40 kHz above the 125 kHz IF (again, more on that momentarily) - to a frequency of 165 kHz - the switching action of U6A and U6D produces both the sum (165 + 40 = 205 kHz) and the difference (165 - 40 = 125 kHz).
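The arithmetic of this frequency plan can be sketched as a toy calculation (just illustrating the math above - not part of the receiver, of course):

```python
IF = 125e3  # fixed intermediate frequency, set by T301 and the 125 kHz BFO

def mixing_products(f_in, f_vco):
    """A switching mixer produces the sum and difference frequencies."""
    return f_vco + f_in, abs(f_vco - f_in)

def vco_for(f_in):
    """To hear a given input frequency, tune the VCO that far above the IF."""
    return IF + f_in

f_vco = vco_for(40e3)                 # 165 kHz to hear a 40 kHz signal
print(mixing_products(40e3, f_vco))   # sum (rejected) and difference (passed by T301)
```

The same two-liner also shows why the tuning range works out: a 115-190 kHz VCO against a 125 kHz IF covers inputs from roughly 0 to 65 kHz.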

T301 is a filter/transformer that passes only the 125 kHz signal - the difference signal in this case.  This transformer consists of two separate windings, each resonated using its internal capacitors and the externally-added 820 pF capacitors on each winding (e.g. C309/C310) to "pad" it down to 125 kHz.  This forms a fairly wide (8-10 kHz) filter that rejects signals outside the immediate vicinity of its 125 kHz frequency.  Because this filtering is at a fixed frequency, it does not vary with input tuning which means that its bandwidth is constant over frequency.

Of all of the components in this device, this transformer is unique:  It was originally a 262.5 kHz IF transformer from a 1970s/1980s Philco (Ford) AM-only car radio.  While I could have certainly used the original 262.5 kHz frequency, when I built this I decided to pad it down to 125 kHz using C309/C310  - a frequency that is conveniently 1/4th of the 500 kHz resonator.

It's been so long since I built this, I don't recall why I didn't simply divide the 500 kHz by two and readjust that transformer to 250 kHz.  Practically speaking, I could have also up-converted to 455 kHz and used either transformers or ceramic filters from a modern AM radio as 455 kHz ceramic resonators were certainly available at the time - but I didn't do that.

Each half of T301 has a center tap to which a bias voltage is applied via R315 to assure that the voltage on these switches sits in the middle of the supply range, away from the protection diodes on the 4066's I/O pins, which could cause clipping/distortion should they be allowed to conduct if the signal voltage got too near the ground or supply rails.  To prevent coupling between the two halves of the transformer via the center tap, R314/C311 were added, the resistor providing isolation and the capacitor bypassing the remainder of the signal.  Practically speaking, being able to adjust the bias voltage was unnecessary as a simple resistive voltage divider setting the bias at 2.5 volts (1/2 the supply voltage) would have been just fine.

On the "other" side of the transformer is the other half of U6 (e.g. U6B/U6C) - this time, clocked from the fixed 125 kHz oscillator.  From this, the signal - previously converted up to 125 kHz is now converted back down to audio.

Post-mixer amp/LPF:

The output of the down-converting mixer is applied to U7B via R316 - a 1k resistor - and a 0.001uF capacitor, which together form a simple R/C low-pass filter to attenuate any high-frequency leakage from the mixer.  Because the mixing process itself is a bit lossy (about 25% efficient) - as is transformer/filter T301 - U7B boosts the signal by a factor of 10 (20dB) and then applies it to U7A, which is configured as a variable-gain amplifier section.  The output of this is then boosted again by U8, an LM386, which is capable of driving headphones or even a small speaker.
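As a quick check of that R/C filter, the corner frequency works out well above the audio band but low enough to begin knocking down the mixer's clock products and their harmonics (a back-of-envelope calculation, not from the article):

```python
import math

R, C = 1_000, 1e-9   # R316 (1k) and the 0.001 uF capacitor
fc = 1 / (2 * math.pi * R * C)
print(f"corner frequency: {fc/1e3:.0f} kHz")   # roughly 159 kHz
```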

A few comments about the design:

Originally, the circuit lacked U7 entirely; it was added when the gain of U8 (the audio amplifier), by itself, was found to be inadequate.  Since U7 was "patched" into place, this explains the odd gain distribution:  If I were rebuilding this from scratch, I'd certainly not need two post-mixer amplifier sections and could likely have eliminated one full dual op-amp package.  As it is, I may add a "high/low" gain switch somewhere around U5 to allow reducing the gain somewhat in the presence of possibly-high ultrasonic signal levels, preventing clipping ahead of the band-pass filter which would surely degrade overall performance.

If I were to build this again I would likely use a 455 kHz IF, instead.  While not as plentiful, 455 kHz ceramic resonators are available to use for the BFO as are either transformer or ceramic-based band-pass filters.  I would also likely reconfigure U4B or U5 to perform the high-pass filter function rather than using harder-to-find inductors.

Again, I built this unit in the mid 1990s and have since lost my original notes, but I do recall that I modified it a few times since, simply tacking changes onto the old circuit rather than completely revising it.

Use as a longwave receiver:

While primarily intended to "hear" ultrasonic sounds such as those produced by bats, insects, leaking pipes, arcing power lines, etc., it is just a longwave radio receiver connected to a microphone:  If one connects a few tens of feet/meters of wire to J1 - and provides an Earth/ground reference to its shield connection - one can easily tune in the high-power transmitters used for submarine communications (around 20-30 kHz) plus the WWVB time signal at 60 kHz.  This must, of course, be done away from man-made noise sources such as power lines.

Alternatively, I have used a loop of about 1 foot (30cm) diameter of a dozen or so turns of wire along with a 10uF capacitor in series (to block DC from R301) and been able to hear such signals - even in suburbia - but with this arrangement you'll also likely hear plenty of similar signals from the myriad switching supplies that likely inhabit your house as well!

Final comments:

The reader should be under no illusion that this is an optimized circuit or that I would do it this way again:  It was assembled fairly quickly to suit a need and to test a few random ideas, just to see if they would work.  Will I rebuild it at some point?  I don't know - it works as it should, so I don't plan to re-make something that is currently fit for purpose.

While I've heard very few bats with this - probably due to the deficiencies of the electret microphone at ultrasonic frequencies (which explains the future switch to MEMS-type microphones) - I've used it to find powerline noise (arcs are noisy at ultrasonic) and to test longwave receive antennas.

This page stolen from ka7oei.blogspot.com

[End]


Using an inexpensive PT2399 music reverb/effects board as an audio delay (for repeater use)


Figure 1:
Inexpensive PT2399-based audio delay board
as found on the usual Internet sites.
Click on the image for a larger version.

In an earlier blog post (Fixing the CAT Systems DL-1000 and PT-1000 repeater audio delay boards - LINK) I discussed the modification of a PT2399-based audio delay line for use with the CAT-1000 repeater controller - and I also hinted that it would be possible to take an inexpensive, off-the-shelf PT2399-based audio effects board and convert it into a delay board. 

Why might one use an audio delay in an amateur radio repeater?  There are several possibilities:

  • The muting of DTMF ("Touch Tone") signals.  Typically, it takes a few tens of milliseconds to detect such signals, and delaying the audio means that they can be muted "after" they are detected.
  • Reducing the probability of cutting off the beginning of incoming transmissions due to the slow response of a subaudible tone decoder.  By passing COS-squelched audio through the delay - but gating it after the delay - one may still get the benefits of a tone squelch, yet prevent the loss of the beginning of a transmission.  This is particularly important on cascaded, linked systems where it may take some time for the system to key up from end-to-end.
  • The suppression of the squelch noise burst at the end of a transmission.  By knowing "before-hand" when an input signal goes away, one can mute the delayed audio such that the noise burst is eliminated.
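The delay-plus-gating idea behind all three uses can be illustrated with a simple ring buffer.  This is only a conceptual sketch (sample rate, delay length and the class itself are arbitrary inventions for illustration), not a model of the PT2399's internals:

```python
class DelayGate:
    """Fixed audio delay with a mute acting on the *delayed* stream:
    muting 'now' suppresses audio that entered the buffer delay_samples ago."""

    def __init__(self, delay_samples):
        self.buf = [0.0] * delay_samples
        self.idx = 0

    def process(self, sample, mute=False):
        out = 0.0 if mute else self.buf[self.idx]  # oldest sample (or silence)
        self.buf[self.idx] = sample                # overwrite with newest
        self.idx = (self.idx + 1) % len(self.buf)  # advance circularly
        return out

# e.g. 200 ms of delay at an 8 kHz sample rate would be 1600 samples
dg = DelayGate(1600)
```

A DTMF or COS detector that reacts a few tens of milliseconds "late" can still assert `mute` in time, because the offending audio is only then emerging from the buffer.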

Making good on the threat in the previous article, I reverse-engineered one of the PT2399-based boards available from Amazon and EvilBay and here I present the modifications needed to use one of these boards as a general-purpose audio delay.

The board:

The PT2399 boards (the chip may have another prefix in front of the number, such as "AD2399" or "CD2399") are typically built exactly from the manufacturer's data sheet, and one of those found on the Internet for less than US$10 is depicted in Figure 1.

This board is surprisingly well-built, with plenty of bypassing of the voltage supply rails and a reasonable layout.  Despite the use of small surface-mount resistors, it is fairly easy to modify given a bit of care.  Most of the components have visible silkscreen markings, making it easy to correlate the reverse-engineered circuit (see below) with the on-board components. 

Figure 2:
Schematic diagram of the audio delay board, with modification instructions.
This diagram is reverse-engineered from the board depicted in Figure 1.
Click on the image for a larger version.

It should be noted that a few of the components do not have visible silkscreen markings (perhaps located under the components themselves?) and these are marked in the circuit diagram and the board layout diagram (below) with letters such as "CA", "CB", "RA", etc.

Figure 3: 
Board layout showing component designations of the board in Figure 1.
Note that some of the components have no silkscreen markings and are labeled with letters
that have been arbitrarily marked as "CA", "CB", "RA", etc.
Click on the image for a larger version.

This circuit is the "bog standard" reverb circuit from the app note - but it requires modification to be used as a simple audio delay as follows:

  • The output audio needs to be pulled from a different location (pin 14 rather than pin 15):
    • Remove R22, the 5.6k resistor in series with the output capacitor marked "CC".
    • A jumper needs to be placed between the junction of the (former) R22 and capacitor "CC" and pin 14 of the IC as depicted in Figure 4, below.
  • The feedback adjustments for the reverb need to be disabled and this involves the removal of capacitors C15 and C17.

Figure 4:
The modified PT2399 board, showing the jumper on pin 14
and the two flying resistors on the potentiometer, now used
for delay adjustment.  Note the deleted C15 and C17.
Click on the image for a larger version.

At this point the board is converted to a delay-only board, with the amount of delay fixed at approximately 200 milliseconds by R27's stock value of 15k.  This amount of delay is quite reasonable for use on a repeater to provide the aforementioned functions.

Optional delay adjustment:

By removing the need to be able to adjust the amount of echo/reverb, we have freed the 50k potentiometer, "RA", to be used as a delay adjustment as follows:

  • Remove R27, the 15k resistor and replace this with a 47k resistor.  This is most easily done by using a 1/4 or 1/8 watt through-hole resistor and soldering one end directly to pin 6 and the other to ground, using the middle "G" pin along the edge of the board.
  • Remove R21 and using a 1/4 or 1/8 watt leaded 4.7k resistor, solder one end across where R21 went (to connect the wiper of potentiometer "RA") to pin 6 of the IC.
  • The 4.7k resistor (and parallel 47k resistor) sets the minimum resistance at about 4.3k while the maximum resistance is set by the parallel 47k resistor and the 50k potentiometer in series with the 4.7k resistor at about 25.3k.  These set the minimum and maximum delay attainable by adjustment of the potentiometer.
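The quoted endpoints can be verified with a quick parallel-resistance calculation:

```python
def par(a, b):
    """Parallel combination of two resistances."""
    return a * b / (a + b)

r27, r_series, pot = 47e3, 4.7e3, 50e3
r_min = par(r27, r_series)         # pot at zero:  about 4.3k
r_max = par(r27, r_series + pot)   # pot at full:  about 25.3k
print(round(r_min), round(r_max))
```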

Of course, one may also use surface-mount resistors rather than through-hole components, using jumper wires.

This modification provides a delay adjustable via the potentiometer from around 80 milliseconds to a bit more than 300 milliseconds.  It's worth noting, however, that if you do NOT require a variable delay, using fixed resistors may offer better reliability than an inexpensive potentiometer of unknown quality - something to consider if the board is to be located at a remote repeater site.

If variable delay is not required, one would omit the 4.7k resistor at R21/"RA" and, instead of replacing R27 with a 47k resistor, use a fixed resistor with the value chosen for the desired amount of delay as indicated in the following table:

Table 1:  The amount of audio delay versus the resistance of R27.  Also shown is the internal clock frequency (in MHz) within the chip itself and the THD (distortion) on the audio caused by the delay chip.  As expected, longer delays imply lower precision in the analog-digital-analog conversion which increases the distortion somewhat.  This data is from the PT2399 data sheet.
Delay (ms)    Resistance (R27)    Clock frequency (MHz)    Distortion (%)
342           27.6k               2.0                      1.0
273           21.3k               2.5                      0.8
228           17.2k               3.0                      0.63
196           14.3k               3.5                      0.53
171           12.1k               4.0                      0.46
151           10.5k               4.5                      0.41
136.6          9.2k               5.0                      0.36
124.1          8.2k               5.5                      0.33
113.7          7.2k               6.0                      0.29
104.3          6.4k               6.5                      0.27
 97.1          5.8k               7.0                      0.25
 92.2          5.4k               7.5                      0.25
 86.3          4.9k               8.0                      0.23
 81.0          4.5k               8.5                      0.22
 75.9          4.0k               9.0                      0.21

The table above shows example resistances to attain certain amounts of delay, but standard resistor values may be used and the resulting delay interpolated from the values shown.
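That interpolation is easy to do numerically.  A sketch using the table's data points (Python purely for illustration; in practice you'd just eyeball between adjacent rows):

```python
import numpy as np

# (R27 in kilohms, delay in ms) from the PT2399 data sheet, sorted ascending in R
r_k  = [4.0, 4.5, 4.9, 5.4, 5.8, 6.4, 7.2, 8.2, 9.2, 10.5, 12.1, 14.3, 17.2, 21.3, 27.6]
d_ms = [75.9, 81.0, 86.3, 92.2, 97.1, 104.3, 113.7, 124.1, 136.6, 151, 171, 196, 228, 273, 342]

def delay_for(r27_kohm):
    """Linearly interpolated delay for an arbitrary (e.g. standard-value) resistor."""
    return float(np.interp(r27_kohm, r_k, d_ms))

print(f"10k -> {delay_for(10.0):.0f} ms")   # between the 9.2k and 10.5k rows
print(f"15k -> {delay_for(15.0):.0f} ms")   # between the 14.3k and 17.2k rows
```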

While not specified in the data sheet, the amount of delay will vary slightly with temperature, so it is recommended that the resistor be chosen such that the delay remains adequate for the task even with a slight variance.

Comment: 

If this is to be powered from a 12 volt supply, it's suggested that one place a resistor in series with the "+" input to provide additional decoupling of the power supply.  The (possible) issue is that the 470uF input capacitor ("CA" on the diagram) will couple power supply noise/ripple into the ground of the audio delay board and associated audio leads, potentially resulting in circulating currents (a ground loop) which can induce noise.  Additionally, an added series resistance provides a modicum of extra protection against power supply related spikes.

The board itself draws less than 50 milliamps, and as long as at least 8 volts is present on the input of U4, the 5 volt regulator, everything will be fine.  A 1/4-watt 47 ohm resistor (any value from 33 to 62 ohms will work) will do nicely.
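As a quick sanity check of those values (this just restates the numbers above):

```python
i_max, v_supply, v_reg_min = 0.050, 12.0, 8.0   # 50 mA draw, 12 V in, 8 V minimum at U4

r_limit = (v_supply - v_reg_min) / i_max        # largest safe series resistance
print(f"max series R: {r_limit:.0f} ohms")      # 80 ohms, so 33-62 ohms has headroom

for r in (33, 47, 62):
    print(f"{r} ohms -> {v_supply - i_max * r:.2f} V at the regulator input")
```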



[END]



An LCD Retrofit and color display for the SI 4031 Communications Test Set


Figure 1: 
The front panel and original green monochrome screen
of the 4031.  A close look shows the "blistering" on the
screen protectors due to delamination, making the
display more difficult to read.
Click on the image for a larger version.
The Schlumberger SI 4031 is an early-to-mid 1990s vintage communications test set (a.k.a. "Service Monitor") - a device that is designed to test both receivers and transmitters used in the telecommunications industry.  The 4031's frequency range is 400 kHz to 999.9999 MHz, making it useful as a general-purpose piece of test equipment, particularly for the testing of amateur radio gear.

As you would expect from a device of the 1990s, the original display used a CRT (Cathode Ray Tube) based monitor operating at something "close" to PAL horizontal and vertical scan rates.  While the CRT monitor in this unit is still in reasonable shape - aside from requiring a "re-cap" (e.g. replacement of electrolytic capacitors) - I decided to take on the challenge of fitting a more "modern" LCD-type display, perhaps gaining a minor savings in both weight and power consumption.

Note that this requires no electrical modification of the 4031 itself and only minor mechanical changes to mount the LCD panel and its related hardware.  (This may also work for the 4032, a version of this unit that covers up to 2 GHz - see below for comments.)

Is it "PAL"

While the pedants would say that a monochrome-only signal cannot be PAL, the reference is, instead, to the horizontal and vertical scan rates of 15.625 kHz and 50 Hz, respectively, which are close to those found in the PAL system as used in Europe.  As is typical for non-consumer gear and test equipment, the horizontal and vertical synchronization signals are brought out independently of each other and of the video, each represented as a TTL signal.

Figure 2:
The horizontal sync pulse train showing 25%
D.C. pulses at 15.625 kHz, TTL level.
Click on the image for a larger version.
The video display generator of the 4031 is interesting in that it uses a UPD7220A graphics controller to facilitate interaction with the CPU (e.g. access memory, produce characters, etc.) but has two separate display RAMs (8k x 16 bits):  one is accessed by the UPD7220A while the other - copied from the first during the vertical interval - is used for pixel read-out, the latter function being done with a combination of "glue logic" and programmable logic devices.

The forgiving CRT monitor

One nice feature of a CRT monitor is that it can be quite forgiving of deviations from standard video applied to it.  Many - but not all - all-in-one sync decoder chips used in CRT monitors are happy with taking horizontal and vertical signals that are "close" to some standard - but not exact - and lock onto it satisfactorily.  Such is the case with the 4031:  While there are separate horizontal and vertical synchronization signals, neither is quite standard, but it's "close" enough for the old monitor.

Figure 3: 
The vertical sync, showing a 10% duty cycle
pulse at about 50 Hz.
Click on the image for a larger version.

For example, the horizontal synchronization signal is simply an uninterrupted 25% duty cycle pulse train occurring at the horizontal sweep rate of about 15.625 kHz (e.g. 16uSec long) while the vertical synchronization is a 50.08 Hz 10% duty cycle (e.g. 2 msec long) pulse train.  Unlike sync signals found in other applications, the horizontal signal does not contain any sort of blanking (suppression of pulses) during the vertical interval.

Within the 4031's original CRT monitor, the horizontal and vertical synchronization signals are handled completely separately (by a TDA2593 and TDA1170, respectively), so the fact that they are non-standard is irrelevant.

Unfortunately, any modern LCD display device that is expecting a PAL-like signal (in terms of timing) isn't going to be happy with separate, non-standard synchronization inputs.

Initial attempts:

Initially, I was hoping that an off-the-shelf LCD monitor display like the 7", 4:3 aspect CLAA070MA0ACW with a driver board could be made to work with these signals with no other hardware, but my work was thwarted by the fact that its VGA input - which might have handled separate horizontal and vertical sync signals - would not function at PAL video rates, only VGA rates, which have roughly twice the horizontal scan frequency.  While it may have been possible to modify the code on this board and re-flash it with one of the "customized" versions found in corners of the Internet, I chose not to do this.

I then attempted to make a simple analog sync combiner circuit and apply the signal to the composite video input, but found this to be unstable - plus there was the fact that the video display board itself did not have the capability of setting the horizontal and vertical size to fully-fill the screen to the edges - something desirable to make the active screen area fully-fit the window on the front and also align with the buttons along the bottom of the screen.

After a bit more research, I decided to get a GBS-8200 video converter board (Version 4.0), a relatively inexpensive digitizing board designed to convert the myriad video formats from CRT-based arcade video games and computer sources to VGA, which could then be fed to a standard monitor or the CLAA070MA0ACW display driver board.  As such, I presumed that it would be far more forgiving of variations from standard video signaling - and I was, fortunately, correct.

Sync (re)processor:

While I was originally hopeful that I could simply apply the horizontal and vertical sync inputs to the GBS-8200, the non-standard sync timing (pulse width, lack of a gap of horizontal sync pulses during the vertical interval) did not produce stable results, so a simple circuit had to be devised to modify the sync signal:  This basic circuit is shown below.

Figure 4:
Diagram of the sync processor itself.
This circuit will produce a sync to which the GBS-8200 board can lock.  The single video output
is connected to the RGB input of the GBS-8200 to produce a monchrome (single color)
display as seen in Figure 6.
Click on the image for a larger version.

This circuit works as follows:  The horizontal and vertical sync pulses are input to and buffered by sections of a 74HC14, Schmitt-trigger inverters which serve to "clean up" the input signals as necessary.  An inverted version of the vertical sync pulse holds U3, a 4017 counter, in reset until a vertical interval occurs.

Figure 5:
The circuit in Figure 4 built
on a prototyping board, the
results seen in Figure 6.
Click for a larger image.

During the vertical pulse, U3 - the counter - is clocked by the horizontal sync and on the 5th count the counting is stopped, setting the input of U2b, a 4011 NAND gate wired as a simple inverter, high.  The output of this gate is combined with a "re-inverted" copy of the vertical sync to produce a new version of the vertical sync that is about 225 microseconds long rather than the original 2 milliseconds, as depicted in Figure 7 (below).

FWIW, I used the 4011 NAND gate because I found a rail of them, but I couldn't find any 74HC00 which would have worked fine - albeit with a different pin-out.  Similarly, a standard CMOS 4017 counter would have been fine as well.  I would, however, recommend using only the 74HC14 (or 74HCT14) as it's plenty fast for the video data and it has fairly "strong" outputs (e.g. source/sink currents) as compared to the older and slower 40106 hex Schmitt inverter.

Note that while it would theoretically be possible to use a one-shot analog timer to generate a new, shorter pulse, doing so would result in visible jitter of the video signal (I tried - it did!) as that timing would neither be consistent nor precisely synchronous with the horizontal timing:  The use of the horizontal sync to "re-time" the duration of this new vertical pulse assures that the timing of the new pulse is synchronous with both sets of pulses and completely jitter-free.

This new, re-timed vertical sync pulse is then applied to U2a which gates it with the horizontal sync:  The output is then inverted to produce a composite sync signal (see Figure 7, below) that, while not exactly up to PAL standards, is "close enough" for the GBS-8200 video converter to be happy.
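The retiming logic can be modeled in software to see why the result is jitter-free: the new pulse always ends on a horizontal edge.  A simplified simulation at 1 µs resolution (the exact width depends on the phasing of the vertical pulse against the horizontal train, and the gating here is a simplification of the U2a/U2b logic):

```python
import numpy as np

t = np.arange(20000)          # one 20 ms frame in 1 us steps
h = (t % 64) < 16             # 15.625 kHz horizontal sync, 25% duty (16 us high)
v = t < 2000                  # original 2 ms (10% duty) vertical pulse

# The 4017 is held in reset outside the vertical pulse and clocked by H edges
# within it; the re-timed vertical pulse ends on the 5th horizontal edge.
count, prev_h = 0, False
v_new = np.zeros_like(v)
for i in t:
    if not v[i]:
        count = 0
    elif count < 5:
        v_new[i] = True
        if h[i] and not prev_h:   # rising edge of horizontal sync
            count += 1
    prev_h = h[i]

composite = ~(h | v_new)      # gated and inverted composite sync (simplified)
print(int(v_new.sum()), "us re-timed pulse vs", int(v.sum()), "us original")
```

Because the pulse is terminated by a counted horizontal edge rather than an analog time constant, its end never wanders relative to the horizontal train.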

Elsewhere in the diagram may be seen inverter sections U1d-U1f:  These are configured as buffers to condition the TTL video input and provide a drive signal to the video input of the GBS-8200.

Suitable for a monochrome image!

The circuit in Figure 4 is sufficient, by itself, to drive the GBS-8200 and produce a stable VGA version of the 4031's video signal.

Figure 6:
The monochrome output from the GBS-8200 board using the
sync processor seen in figures 4 and 5 via an external monitor.
Click on the image for a larger version.
The "VID_OUT" signal may be connected to the Red, Green and Blue video inputs of the GBS-8200 and the input potentiometers adjusted for a single color:  White will result if the individual channels' gains are set equally, but green, yellow or any other color is possible by adjustment of these controls.

Figure 6 shows the result of that:  The VGA output from the GBS-8200 was connected to an old 4:3 computer monitor that I had kicking around, producing a beautiful, stable, monochrome signal.

Full-color output from the 4031

The SI 4031's video output is a single TTL signal, meaning that there is not even any brightness information, making it capable of monochrome only.  Fortunately, it is possible to simulate context-sensitive color screens with the addition of a bit of extra circuitry and firmware as described below.

The portion of this circuit used for processing the sync pulses is based on that shown in Figure 4:  A few reassignments of pins were done in the sync re-timer, but the circuit/function is the same.  What is different is the addition of U5, a quad 74HC4066 analog switch and U6, a PIC16F88 microcontroller, and a few other components.

How it works:

The video signal is buffered by U1d-U1f and applied to R1, a 200 ohm potentiometer, the wiper of which feeds Q1, a unity-gain follower that buffers the somewhat high-impedance video from R1 to a source impedance of a few ohms - and, more importantly, a constant output under varying load.  The "bottom" end of R1 is connected to U5c, one section of the 74HC4066, which, if enabled, will shunt some of the video signal to ground, reducing its intensity as adjusted via R1.  Via diode D1, this line is also connected to a pin of the microcontroller - the "MARK" pin - more on this later.

Figure 7:
Top (red) trace: The composite sync from the
circuit of Figures 4 & 8.  Bottom (yellow) trace:
The original vertical sync pulse  for comparison.
Click on the image for a larger version

The output of Q1 is then applied to U5a, U5b and U5d via 100 ohm resistors.  These analog switches selectively pass the video to the Red, Green or Blue channels of the monitor under microcontroller control.  At the output of each of these switches is a resistor and diode in series (e.g. D2/R6) connected to an output pin of the microcontroller:  If a pin is driven low, the diode drop plus the series resistance of the 33 ohm resistor (e.g. R6), the 100 ohm resistor (e.g. R3) and the microcontroller's output transistor reduce the amplitude on that channel, providing a means of brightness control.

I'd originally intended to place emitter-follower video drivers (e.g. the circuit of Q1) on each of the R, G, and B outputs, but the very short lead length to the input of the GBS-8200 - and the ability to adjust the RGB input gain via its three potentiometers eliminated this requirement as additional losses could be easily compensated.

Figure 8:
Added to the sync processor of Figure 4, above, is a PIC16F88 used to analyze the video from the 4031
and "colorize" the resulting image. 
See the text for information as to how this works.
Click on the image for a larger version.

With the combination of the three 4066 gates, the "!BRITE" pin, and the three "dim" pins (e.g. "!R_DIM", "!G_DIM" and "!B_DIM") over two dozen distinctly different colors and brightness levels may be generated under processor control.

The magic of the microcontroller

U6, a PIC16F88 microcontroller, is clocked at 20 MHz, its fastest rated speed.  Because its job is to operate the four switches comprising U5 - and the three "dim" pins on the video lines - it must "know" a bit about the video signal from the 4031:

  • The "!V_SYNC" pin gets a conditioned sample of vertical sync from the output of U1a:  It is via this signal that U6 "knows" when the scan restarts at line one.
  • The "!H_SYNC" signal from the output of U1b is applied to pin RB0, which is configured to trigger an interrupt on the falling edge (the beginning) of the horizontal sync.
  • The "!VID" signal is applied to pin RA4, which is the input of Timer 0 within the microcontroller:  This is used to analyze the content of lines of video, as the timer is able to "count" the number of times that the video goes from low to high on these scan lines - in other words, a sort of "pixel count".

In operation, the start of each horizontal sync pulse triggers an interrupt in the microcontroller.  If this coincides with the start of the vertical interval, the line count is restarted.
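The per-line bookkeeping can be sketched on the host like this - the class, the line numbers and the overall structure are illustrative stand-ins, not lifted from the actual firmware:

```python
# Host-side sketch of the per-line bookkeeping.  The names, line numbers
# and structure here are illustrative, not taken from the real firmware.
LINES_OF_INTEREST = {4, 7, 10, 14}   # hypothetical scan lines whose counts are kept

class SyncProcessor:
    def __init__(self):
        self.line = 0
        self.counts = {}

    def h_sync_interrupt(self, timer0_count, v_sync_active):
        """Models the falling-edge H-sync interrupt:  'timer0_count' stands
        in for the hardware counter of low-to-high video transitions
        accumulated during the line just completed (which the real ISR
        reads and resets as its very first action)."""
        if v_sync_active:                # start of the vertical interval:
            self.line = 0                # restart the line count
            return
        self.line += 1
        if self.line in LINES_OF_INTEREST:
            self.counts[self.line] = timer0_count   # save this line's "pixel count"

# Simulate one field in which line N happens to contain N transitions:
sp = SyncProcessor()
sp.h_sync_interrupt(0, True)
for n in range(1, 20):
    sp.h_sync_interrupt(n, False)
print(sp.counts)     # -> {4: 4, 7: 7, 10: 10, 14: 14}
```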

Video content analysis:

Figure 9:
Mounted inside the 4031, the sync processor board is on the
far left, the six pins of the ICSP (In Circuit Serial
Programming) connector being easily accessed.  The buttons
and controls for the other two boards are also accessible.
Click on the image for a larger version.

Visual inspection of each of the screens on the 4031 will reveal that they contain consistent - but unique - attributes.  Most obvious is the title of the screen located near the top, but other content may be present midway down the screen - or very near the bottom - which may be used to reliably identify exactly which screen is being displayed, having determined the "pixel count" for certain lines on each of these screens beforehand.

For each subsequent horizontal sync pulse and corresponding interrupt, the count contained within hardware timer 0 is read - and the timer is immediately reset.  For a number of specific scan lines, their unique counts are stored in RAM.

Attention to detail is required!

Determining the pixel count consistently requires a bit of care in the coding.  As mentioned, this count is based on an interrupt-driven routine that reads the content of hardware timer 0 - but this also means that the code must be written in a way that guarantees that the time between the start of the horizontal sync pulse (and subsequent entry into the interrupt service routine) and the read and reset of timer 0 is as consistent as possible, considering the asynchronicity of the timing of this interrupt and the CPU clock.

What this implies is that reading this timer and resetting it must not only be done in an interrupt, but must also be the first thing done within the interrupt function, prior to any other actions - particularly any conditional instructions that could cause this timing to vary, resulting in inconsistent pixel counts - something that would preclude the use of anything other than a quickly-responding interrupt.  Another implication is that this must be the only interrupt that is enabled, as preemption by another one would surely disrupt the timing.

Immediately following this action is the setting of the color and brightness attributes:  A copy of the current content of the port/pin registers is ANDed to remove the brightness/color bits and then ORed with the pre-calculated color/brightness bit mask before being written back, so that any changes in these attributes occur to the left of the visible pixels in the scan line.
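In code, that read-modify-write amounts to a simple mask operation - sketched here with made-up bit positions (the real pin mapping is whatever the schematic dictates):

```python
# Illustrative bit positions - the real pin assignments are in the
# schematic and are NOT assumed here.
R_EN, G_EN, B_EN, BRITE = 0x01, 0x02, 0x04, 0x08
COLOR_MASK = R_EN | G_EN | B_EN | BRITE   # the bits the ISR is allowed to touch

def apply_attributes(port, color_bits):
    """AND off the old color/brightness bits, then OR in the
    pre-calculated ones - other pins on the port are left untouched."""
    return (port & ~COLOR_MASK) | (color_bits & COLOR_MASK)

# e.g. a port whose upper bits are in use, switched to green-only:
print(hex(apply_attributes(0x19, G_EN)))   # -> 0x12: the 0x10 bit survives
```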

A limitation of this hardware/software is that it is likely not possible to satisfactorily set different colors horizontally, along a scan line - it is possible only to change the color of complete scan lines:  To do this would, at a minimum, require extremely precise timing within the interrupt service routine, adding complexity to the code - and it's not certain that satisfactory results would even be possible.   To do it "properly" would certainly require more complicated hardware - possibly including the regeneration of another clock from the horizontal pixel rate - but doing this would be complicated by the fact that the pixel read-out rate is asynchronous with the sync as noted later.

Using the pixel counts:

At the beginning of the vertical interval, outside any interrupts, the previously-determined counts of low-to-high transitions are analyzed via a series of conditional statements and a variable is set indicating the operating "mode" of the 4031.  This "mode" information is then applied to another look-up table to determine the color to be used for that screen.
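A sketch of that matching step follows - the screen names, line numbers and counts below are entirely hypothetical stand-ins for the values actually measured from the 4031:

```python
# Hypothetical line-count signatures - the real values were measured
# from the 4031's screens and are not reproduced here.
SIGNATURES = {
    "RF_MAIN": {4: 13, 7: 9,  12: 21},
    "AF_MAIN": {4: 13, 7: 11, 12: 18},
    "DUPLEX":  {4: 17, 7: 9,  12: 21},
}
SCREEN_COLORS = {"RF_MAIN": "yellow", "AF_MAIN": "cyan", "DUPLEX": "green"}

def identify_screen(counts):
    """Match the per-line transition counts captured during the last
    field against each known screen's signature."""
    for mode, sig in SIGNATURES.items():
        if all(counts.get(line) == n for line, n in sig.items()):
            return mode
    return None          # unknown screen -> firmware falls back to white

mode = identify_screen({4: 13, 7: 11, 12: 18})
print(mode, "->", SCREEN_COLORS.get(mode, "white"))   # -> AF_MAIN -> cyan
```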

One complication is that, like other analog video, that coming from the 4031 is interlaced, meaning that for certain scan lines - particularly those with diagonal elements - the pixel count may vary for a given scan line.  Unlike "true" video, the sync pulses from the 4031 contain no obvious timing offset (e.g. "serrations" in the sync) to offset by half a line or identify the specific video field, but with an analog monitor this wasn't really much of an issue as it would simply paint a line on the screen in about the right place, anyway.

For most screens, simply looking at the pixel counts of between four and six lines - most of them on lines 4 through 15 - was enough to uniquely identify a screen, but others - particularly the "Zoom" screens - sometimes require an even greater number of pixel counts to reliably and uniquely identify the screen.

In particular, differentiating between the "SINAD" and "RMS-FLT" Zoom screens was problematic as both resulted in the same pixel counts for all of the lines usable for unique identification:  The only way to detect the difference was due to the fact that for some lines, the pixel count for the "SINAD" screen would vary due to the aforementioned video field difference - or possibly due to interaction between the asynchronicity of the pixel clock, the CPU clock, and the way counts are registered on a counter input without hardware prescaling.  It was the fact that it varied that allowed it to be reliably differentiated from the "RMS-FLT" screen, which had a very consistent pixel count.
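The "it varies" test can be sketched very simply - the sample counts below are made up purely to illustrate the idea:

```python
# Sketch:  distinguish two screens whose nominal pixel counts are
# identical by whether a chosen line's count jitters between fields.
# The sample values here are invented for illustration.
def varies(samples):
    """True if the pixel count for a line changed across recent fields."""
    return len(set(samples)) > 1

sinad_line   = [24, 25, 24, 26, 24]   # jittery - interlace/asynchronous sampling
rms_flt_line = [24, 24, 24, 24, 24]   # rock steady

print("SINAD" if varies(sinad_line) else "RMS-FLT")       # -> SINAD
print("SINAD" if varies(rms_flt_line) else "RMS-FLT")     # -> RMS-FLT
```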

Coloration of the screen:

Many screens on the 4031 have different sections.  For many screens, the upper section contains the configured parameters (e.g. frequency, RF signal level, etc.) while the lower portion of the screen shows the measured values or an oscilloscope display:  Simply by knowing which screen type is being displayed and the current line number, those sections can be colored differently from other portions.

Deciding what color to make what is a purely aesthetic choice, so I did what I thought looked good.  Because about two-dozen different colors are possible, I chose the brightest colors for the most commonly-used screen segments, setting these colors by the function to which they were related.

Finally, all screens have, along the bottom, a set of labels for the buttons below the bottom of the screen:  These may be colored separately as well - and I chose gray (a.k.a. "dim white").

Analyzing the video to determine "pixel counts":

When writing the firmware, a few simple tools were included - notably some variables, hard-coded at compile time, that would display the pixel counts.  If, for example, one needed to determine the pixel count for line #14, the pixel count display variable would be loaded with the count for line 14.  The oscilloscope screen capture in Figure 10 shows such a capture:  The left-most pulse is 4 units long, followed by a single-unit pulse (meaning "10"), followed by a 2 unit long pulse with three more pulses - for a pixel count of 13.

Figure 10:
An example of the "pixel" count:  The 4-unit
wide pulse followed by one pulse represents 10
and the 2-unit wide pulse followed by 3 pulses
represent a pixel count of 13 on the selected line.
Click on the image for a larger version.

Another variable may be set to visually identify which scan line is being counted:  When the scan line being counted occurred, the "MARK" pin would be set high, causing an on-screen indication of which line was being inspected - a "sanity check" and visual reference to know exactly which line was being checked.

During the vertical interval, pin "RB3" would then be strobed with a series of pulses to indicate the pixel count - a "long" pulse lasting four CPU cycles followed by pulses of 1 CPU cycle, each to indicate the "tens" digit (if any) and a shorter pulse of two CPU cycles followed by the requisite number of 1 CPU-cycle pulses to indicate the "ones" digits. 

Using an oscilloscope triggered on the signal on RB3 (pin 9), these pulses could be read visually and, by switching between the different screens on the 4031, the "pixel count" of this line for the various screens could be determined:  Repeating this for several different scan lines allows unique identification of all screens.  In the event of false detection of a mode, this "pixel count" output can also be configured (via a "#define" statement) to show the number of the currently-detected mode to aid in debugging.
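The pulse scheme lends itself to a small encode/decode sketch (pulse widths given in CPU cycles, as in the text):

```python
def encode(count):
    """Encode a pixel count as a list of pulse widths (in CPU cycles):
    a 4-cycle marker then one 1-cycle pulse per 'tens', followed by a
    2-cycle marker then one 1-cycle pulse per 'ones'."""
    tens, ones = divmod(count, 10)
    return [4] + [1] * tens + [2] + [1] * ones

def decode(pulses):
    """Recover the count from the pulse train - essentially what one
    does by eye when reading the train off the oscilloscope."""
    assert pulses[0] == 4                # train must begin with the 4-cycle marker
    split = pulses.index(2)              # position of the 2-cycle "ones" marker
    tens = split - 1                     # 1-cycle pulses between the markers
    ones = len(pulses) - split - 1       # 1-cycle pulses after the "ones" marker
    return tens * 10 + ones

print(encode(13))               # -> [4, 1, 2, 1, 1, 1]
print(decode(encode(13)))       # -> 13
```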

Comment:

In producing this firmware, I have only one version of the 4031 (with the Duplex option) available to me.  Different versions of the 4031 - and the 4032 - may have "other" screens not included in the analysis, or a slightly different layout/labeling that will foil the analysis of the scan line.
As the screen-analysis firmware is written, if the scan line analysis doesn't find a match to something it already "knows" about, that screen's text will be displayed in the default color of white.

At present this "scan line analysis" can only be done by setting certain variables in the source code and recompiling - but this was made easier by the inclusion of the "ICSP" connector (visible in Figure 9) to allow in-circuit programming, while the unit is operating.  In theory, it may be possible to come up with some sort of user-interactive means of setting individual screens' colors which could be used to set the colors on screens of different firmware versions or with features that I don't have in my 4031, but this would require significantly more work on the firmware.

Figure 11:
The 4031 with the retrofit LCD operational.
This isn't a perfect photo because it's very difficult to take
a picture of an operational electronic display!
Click on the image for a larger version.

Color mode selection:

With the CRT monitor gone, there is no need for an "intensity" control but, rather than leave a hole in the front panel, a momentary switch was fitted in this position.  Connected between ground and pin RB7 - using the processor's internal pull-up resistor - this switch is monitored for both "short" and "long" button presses.

A "short" press (less than 1/2 second) toggles between "bright" and "dim" using the same color scheme, but a "long" press (1.5-2 seconds) changes to the next color mode.  At the time of this writing, the color modes are:

  • Full-color screens.  The screens are colored according to mode and context as described above.
  • Green.  All components of the screen are green.
  • Yellow.  Like above, but yellow.
  • Cyan.  Like above, but cyan.
  • Pink.  Like above, but pink.
  • White.  Like above, but white.

In some instances (e.g. high ambient light) selecting a specific color (green or yellow) may improve readability of the screen.  These settings selected by the switch are saved in EEPROM (10 seconds after the mode was last changed) so that they are retained following a power-cycle.
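The press-length logic might look something like this - the "ignored" middle band and the millisecond tick handling are my assumptions; only the 1/2-second and 1.5-2 second thresholds come from the text:

```python
# Sketch of the press-length classification.  The thresholds follow the
# text; the in-between "ignored" band is an assumption on my part.
SHORT_MAX_MS = 500        # under 1/2 second  -> toggle bright/dim
LONG_MIN_MS  = 1500       # 1.5-2 seconds     -> next color mode

def classify_press(held_ms):
    """Classify a button press by how long (in ms) RB7 was held low."""
    if held_ms < SHORT_MAX_MS:
        return "toggle_brightness"
    if held_ms >= LONG_MIN_MS:
        return "next_color_mode"
    return "ignored"      # presses between the thresholds do nothing (assumed)

print(classify_press(200))    # -> toggle_brightness
print(classify_press(1700))   # -> next_color_mode
```

In the real firmware the result would then be written to EEPROM after a 10-second settling delay, as described above.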

* * * * * * *

The hardware

Several bits of hardware are required to do this conversion and, if you are of the ilk to build your own circuits, nothing is particularly difficult.  Personally, I spent at least as much time making brackets and pieces and mounting the hardware in the 4031 as I did writing the firmware.

Sync processor:

At a minimum, the "simple" sync processor mentioned above (Figure 4) is required to provide a synchronization pulse that is recognizable by the converter board.  If one doesn't wish to have different color modes available, this is certainly an option.

Having said that, the "colorized" 4031 afforded by the circuit described in Figure 8 is quite nice - perhaps a bit of an extravagance.  If the 4031 had originally been equipped with a color monitor, I can imagine it looking something like the images in the "Gallery" section of this article, below.

GBS-8200:

Figure 12:
The GBS-8200 video converter board.  This is "V4.0" of
the GBS-8200 which includes an on-board voltage regulator
allowing it to run from 5-12 volts.
Click on the image for a larger version.

There appear to be several versions of the GBS-8200 around - possibly from different manufacturers - and some of these are designed to be operated from a 5 volt supply ONLY, but many have on-board voltage converters allowing them to be operated from 5 to 12 volts:  The version that I have is the "V4.0" board with a "5-12 volt" input, which eliminates the need for yet another voltage conversion step.  If you look carefully at the photo of the GBS-8200, the inductor for the buck converter is visible near the upper right-hand corner of the board, between the power connector and the white video-out connector marked "P12" - but the silkscreened "DC 5V-12V" is also a big give-away!

This board, readily available via EvilBay and Amazon for well under US$40, is specifically designed to take a wide variety of RGB video formats - typically from 70s-90s video games and computers - and convert them to VGA.  There are several connectors for video input along the bottom edge of the photo:  Three phono plugs for component video, an input on a VGA connector and, next to the VGA connector, two white headers for cables:  The unit that I purchased included a cable that plugs into the header between the VGA input and the three potentiometers.

At the top of the board, the VGA connector outputs the converted video - but there is also a white header next to it with these same signals.  As mentioned elsewhere, I simply soldered the six wires (R, G, B, H, V, and Ground) to the board, at this white header as I didn't happen to have another male HD-15 cable in my collection of parts.

This device can accept YUV and RGB inputs - and the latter can have either separate or composite sync inputs.  As the sync signals from the 4031 are non-standard, the sync processor described above must produce a composite sync and the GBS-8200 must be switched to the "RGBS" mode (using the "mode select" button), where the composite sync is fed into the "H-Sync" input and the "V-Sync" input is grounded.

The RGB inputs to the GBS-8200 come from the 4031 - either as a single video source connected to all three inputs, in the case of the "simple" (monochrome) version of the sync processor, or from the RGB lines of the color version.  On board the GBS-8200 are three potentiometers, visible in the photo above (near the lower-left corner), that are used to scale the input levels of the RGB signals to provide color tint/balance as desired.  In the lower-right corner can be seen the buttons used to configure the GBS-8200.

External monitor:

The use of the GBS-8200 has an interesting implication:  It would be perfectly reasonable to use an external display with a VGA input (or a VGA-to-HDMI converter) with the 4031.  This has the obvious advantage of being larger - and the possibility of being placed conveniently when making adjustments where the 4031 itself may be too distant or awkwardly placed.  Additionally, it offers the possibility of displaying to a larger group of people (e.g. teaching) and of being digitized and recorded, as was done with the images at the bottom of this article.

Simply connecting a monitor to the VGA output of the GBS-8200 in parallel with the built-in LCD monitor would work - perhaps even via a short, permanent cord mounted to the rear (somewhere?) or hanging out of the 4031, should this be frequently required.  With a short (8", 20cm) "extension" cable permanently connected, any degradation caused by having an unterminated cable (when the external monitor was not connected) could likely be ignored given the rather low resolution of this display - as could the slight diminution in brightness when two monitors were connected at the same time (e.g. "double terminating").  Practically speaking, a buffer amplifier could be built to isolate the R, G, B and sync signals (using the simple emitter-follower circuit of Q1 seen in Figure 8) to feed the external monitor.

Because there's no obvious place on the back panel to mount such a connector - and since I don't envision the frequent need for it - I did not so-equip my '4031.

Navigating the GBS-8200's menus

The four buttons used to configure the board are seen on the corner of the board at the top of the photo above.  Initially, the GBS-8200's menu system may be in Chinese, but the 4th menu allows the selection of either English or Chinese and it is changed to English with the following button-presses:

  • Menu - > UP -> Menu -> Menu 

At this point the text is now in English.

Other screens include:

  • "Display" - Which sets the output resolution:  A setting other than 640x480 is suggested.
  • "Geometry" - Which sets the position and sizes, along with how the blanking interval is to be treated.  Suggested initial settings are:
    • H position: 94
    • V position: 26
    • H size 56
    • V size: 66
    • Clamp st:  83
    • Clamp sp: 94
  • "Picture" - Which sets other display properties.  A setting of 50 is suggested for Brightness, Contrast and Saturation and a value of 05 is suggested for Sharpness.

The CLAA070MA0ACW display:

This is a 7" diagonal VGA screen of 4:3 aspect ratio and is available with a driver circuit board on EvilBay for around US$50.  Be sure that you get the version with the display controller board and not just the bare display panel, by itself. 

This unit is rated to operate from about 6 to 12 volts, and it comes with both an infrared remote and a small daughter board and interconnect cable that replicates the functions of the remote:  The remote is not required for this project as the daughter board and its pushbuttons will suffice.

Figure 13:
The driver board supplied with the CLAA070MA0A0ACW
LCD panel.  At the top is the VGA input while the TTL
to the panel is at the bottom, the back-light power connector
being in visible in the lower-right corner of the board.
Click on the image for a larger version.

The LCD panels themselves appear to be "pulls" from some consumer product (perhaps a portable DVD player?) as they show evidence of having been previously mounted, but the price is reasonable and their size is precisely what's needed in lieu of the 4031's CRT:  Being of 4:3 aspect ratio and a few millimeters larger than the window on the front of the 4031 in both axes, they are a perfect fit.  It's possible that one could find a newer 16:9 panel that would fit horizontally in the available space, but it would likely leave a gap above and below the screen.

This unit will accept composite analog, HDMI and VGA, but it is VGA that we require, fed from the GBS-8200 via a short cable:  I constructed a very short (3", 7.5cm) cable, soldering one end directly to the GBS-8200 board itself (I could find only one 15 pin HD connector), just long enough to reach the VGA input connector of the display.  If desired, one could install a switch/distribution amplifier and provide a VGA connector to feed an external display - or likely get away with "double terminating" it, as noted elsewhere.

This LCD came with a small board taped to the back of the display that is used to adapt to the flat ribbon cable supplied with the unit, used to connect to the display controller board via the "TTL OUT" connector:  This PC board should be glued to the back of the LCD panel with RTV or other rubberized glue (but not cyanoacrylate!) to mechanically secure it, or else it is likely to work its way loose and tear the cable from the LCD panel.  When connecting to the "TTL OUT" connector on the main driver board, one must carefully lift up the locking lever (the black plastic piece that runs its width) on the back of the connector, slide in the cable, and push the lever back down.  The cable itself isn't marked as to which way is "up", but putting it in upside-down won't damage anything - you'll simply see nothing on the screen:  Mark this cable when you determine its proper orientation.

There is also a short cable provided for powering the LCD panel's back light:  You won't likely see anything on the panel if this is not connected!

Figure 14: 
The original screen protector with EMI shield, held in place
with 10 screws and two brass angle pieces around its
perimeter.  This holds the front bezel in place.
Click on the image for a larger version.

Mounting the LCD panel:

The display is mounted "upside-down" (the wider portion of the metal border around the LCD panel being on top) to clear mechanical obstructions around the front panel of the 4031.  Fortunately, this display orientation can be accommodated via a menu on the display driver board as follows:

  • Select the "Function" menu
  • Go to "Mode"
  • Use the up/down buttons to select "SYS2"

The ONLY modification required of the 4031 to use the LCD display is mechanical.  Unlike the original CRT module - which was mounted in a large cavity behind the front panel - the LCD itself is mounted to the front panel of the 4031 while the other circuit boards (sync processor, GBS-8200, CLAA070MA0ACW controller board) are mounted in the cavity formerly occupied by the CRT.

Figure 15:
The original screen protector (center) and copies, sitting atop
the laser cutter.  These were cut from 0.060" thick poly-
carbonate plastic.
Click on the image for a larger version.

Front screen protector: 

On the 4031s that I have, the CRT is protected by a plastic sheet containing embedded metal mesh for RFI/EMI shielding - which didn't actually seem to be grounded, anyway.

Unfortunately, over the years this sheet tends to de-laminate and "bubble", making viewing the screen rather difficult, so I cut a replacement from 0.060" polycarbonate using a laser cutter.  The use of polycarbonate over other types of clear plastic (like acrylic) is recommended due to its resiliency:  It can be bent nearly in half without breaking and is likely to withstand the occasional impact from the cable of a connector or bolt without cracking.  Acrylic, on the other hand - unless it is quite thick - would crack with such abuse.  For convenience, the dimensions of this screen protector are shown below.

While the original screens had EMI/RFI mesh embedded within them, these replacements will not.  The "need" for such shielding may be debated, but it's worth noting that many similar pieces of equipment have no such shielding.  I did a bit of searching around for plastic windows with embedded mesh but, other than a few random surplus pieces here and there, a reliable source could not be found.

Figure 16:
The dimensions of the screen protector - just in case
you might want to make your own!
Click on the image for a larger version.

One possible saving grace is the nature of the CRT versus the LCD:  A CRT has the potential (pun intended) to cause EMI owing to the fact that its surface is bombarded by a rapidly-changing electron beam that varies at MHz rates - and this can radiate a significant E-field.

The LCD, on the other hand, is a flat panel with low voltage and backed by a grounded metal plate, so the opportunity for it to radiate extraneous RF is arguably reduced.

Removing the front panel:

The front face of the 4031 comes off as a unit by removing the "Intensity" control knob, the two screws on either side that hold it into the unit's frame (the "second" screws from the top/bottom) and carefully unplugging three ribbon cables.  Inspection reveals that the screen protector is, itself, mounted to a bezel held in by several screws.

In my 4031, the original (de-laminated) front screen protector is extricated by removing the ten small screws around its perimeter - noting the way the pieces of brass angle that may be included are mounted - which allows it and the front bezel to come out:  It looks to me like this screen protector may have been replaced in the past and could be of slightly different construction than what was provided from the factory - but this is only a guess.

Figure 17:
After fully-tapping the 2.3mm screws, these aluminum angle
pieces with slots were attached to the aluminum bars seen
in Figure 14.  It is into these bars that the LCD panel, with
attached brackets, mount.
Click on the image for a larger version.

Removing the front screen protector will reveal two aluminum bars on either side - each with metal "finger stock" on the "inside" of the screen area - mounted to the front panel by countersunk screws hidden by the bezel that holds the screen cover.  Inspection will reveal that there are three holes along these bars that are not tapped all of the way through.  I removed these bars, purchased a 2.3mm tap and completed the threads so that I could insert 2.3mm x 6mm screws from the "other" (back) side.  It would have been about as easy to have drilled entirely new holes and tapped them for 4-40 screws (or your favorite metric equivalent) and, in retrospect, I probably should have done so.

Using scrap pieces of aluminum, a pair of angle brackets were fashioned, held to the aluminum bars by the newly-tapped screws in those bars as seen in Figure 17.

To accommodate the momentary switch, I had to file away a portion of the bracket and bar on the left side ("behind" that seen in Figure 17 and thus not visible) as well as countersink the back side of the plastic lens bezel so that it would accommodate the mounting hardware of the momentary switch and sit flush.

Into the brackets, slots were cut with a saw - also visible in Figure 17 - and it is into those that the angle pieces - now attached to the LCD - slide to allow adjustment of depth and very slight adjustment of axial rotation.  The LCD was located about 3/8"(10mm) behind the polycarbonate lens for clearance to protect the LCD panel itself should something be dropped on it - like a cable, RF connector or tool.

Figure 18:
The two brackets and new screen protector mounted in the
front panel assembly of the 4031.
Click on the image for a larger version.

As seen in the pictures, there is no obvious way to mount the display itself, so sections of right-angle aluminum were cut and glued to the back of the display using "Shoe Goo" (a resilient rubber adhesive), using the mounts fabricated to hold the display in position (described below) as a positioning guide:  It's likely that RTV (silicone) would have worked as well, but I would not use an inflexible adhesive like epoxy or cyanoacrylate ("Super Glue").

As this is done, it's very important to make sure that these brackets are installed correctly so that the display is both centered and square with the 4031's window:  I recommend actually mounting the display in place while the adhesive sets so that it perfectly fits the mechanical environment and there is no stress on the display itself as screws are tightened when it is mounted. When I did this, I put some "painters tape" on the front of the display and lightly marked it so that I could precisely set the horizontal and vertical position of the display with reference to the front bezel before the glue set.

Electrically connecting to the 4031:

Figure 19:
Two aluminum angle pieces with holes were glued to the back
of the LCD panel, now mounted in the front panel.
Click on the image for a larger version.

The connection of the original monitor to the 4031 is via an industry-standard 14 connection IDC ribbon cable/connector connected to the monitor and an exact duplicate was ordered from Digi-Key (P/N:  H1CXH-1436G-ND).  On this cable are the ground, power, sync and video connections as follows:

  • 1, 2:  +15 volts
  • 3-6:  Not connected
  • 7:  Vertical sync (positive-going pulse, TTL level, 50 Hz)
  • 8:  Ground
  • 9:  Horizontal sync (positive-going pulse, TTL level, 15.625 kHz)
  • 10:  Ground
  • 11:  Video  (positive-going, TTL level)
  • 12:  Ground
  • 13, 14:  Not connected

It's perhaps easiest to empirically determine these pins by stripping a small amount of insulation from the ends of the wires and using a combination of volt/ohmmeter and oscilloscope to positively identify them - the ground pins being identifiable by plugging in the other end of the cable and checking continuity to the chassis with the unit powered down, and then (carefully!) verifying them with the unit powered up, being very careful to avoid connecting the +15 volt wires to anything else.  Once identified, the wires marked as "not connected" were trimmed back slightly, the two +15 volt and three ground wires were (separately!) connected in parallel, and the wires themselves colored using markers to aid in later identification.

Mounting the boards:

Figure 20:
The "stack-up" of the boards on the mounting sled.  Hidden
by the ribbon cable is the sync processor, above that is
the GBS-8200 with its output VGA connector and above
that is the LCD controller with 4-button daughter board.
At the bottom, on the sled, may be seen the 7812
regulator used to drop the 15 volt supply to 12 volts.
Click on the image for a larger version.

A "sled" about 6" (155mm) wide and about 4.75" (120mm) tall - designed to be mounted to the left-hand wall (as viewed from the front panel) inside the enclosure - was constructed from a sheet of scrap aluminum and, on it, the sync processor board, the GBS-8200 and the LCD controller were mounted using an assortment of stand-offs.  The different shapes and sizes of these boards complicated matters, so I had to be creative, resorting to mounting the LCD controller - and its daughter board (with pushbuttons and infrared receiver) - to a piece of glass-epoxy PCB material that was, itself, held in place with stand-offs, seen in Figure 20 as the board on the very top.

While I happen to have a bunch of stand-offs in my parts bins, I could have just as easily mounted the boards using long screws or "allthread" along with an assortment of nuts and washers.  These days, a more elegant custom mount could also be 3D-printed to hold these boards in place, although the metal "sled" and stand-offs offer a solid electrical connection to the chassis that may aid in RFI shielding and mitigation.

The only critical things in mounting are to provide access to the ICSP connector and R1 ("gray" adjust) on the sync processor board, the buttons on the GBS-8200, and the buttons on the daughter board on the LCD controller:  All of these should be accessible with just the top cover of the 4031 removed, without needing to disassemble anything else as depicted in Figure 21.

Figure 21:
Installed and powered-up, the stack-up of boards and
connected LCD panel.  All controls - and the ICSP
connector - are accessible simply by removing the top cover
of the 4031.
Click on the image for a larger version.

Into this "sled" were pressed self-retaining "PEM" nuts and it is mounted at four points in the same slots (using 8-32 screws) on the left side of the frame that were used to mount the original CRT monitor.

Powering the boards:

As noted above, the GBS-8200 is available in a version that may operate from 5-12 volts.  Similarly, the LCD panel's board can also accommodate up to 12 volts - but the 4031 supplies 15 volts.  During development, I ran both boards on 15 volts directly with no issues, but I noted that 16 volt electrolytic capacitors were used on the inputs, so 15 volts would be pushing their maximum ratings.

Despite having no issues, I decided not to take a chance, so I added a 7812 voltage regulator, bolting it to the aluminum "sled" for heat-sinking (see Figure 20) and powering both the GBS-8200 and LCD panel from it.  As seen from the diagram above, the sync processor includes its own regulator (a 7805) and it may be powered from either 12 or 15 volts.

Overall results

Figure 22:
Under the shield of the "Monitor Control" board is R16, the
"width" adjustment that may be used to optimize video quality.
Click on the image for a larger version.

The results of all of this work look quite good, as can be seen in the picture gallery below, but there are slight visual artifacts owing to the fact that the VGA conversion is from a device (the '4031) that does not have its pixel clock synchronized with the sampling clock of the GBS-8200 - or even the horizontal sync pulse.  The inevitable result is - if you look closely - that you may see some slight "glitching" on the leading or trailing edges of vertical lines.

This effect can be reduced somewhat by adjusting the read-out pixel clock from the 4031's Monitor Control board.  Located on this board, under the shield, is potentiometer R16.  Nominally set to 11.0 MHz (as monitored at test point "Mp10") the frequency of this clock output may be reduced by turning this potentiometer slightly clockwise, reducing the effects of this aliasing somewhat by slowing the read-out of each video line and thus increasing the "width" of the display.

If this adjustment is done, it should be done iteratively:  If it is set too low, the next line will begin before the current line has finished drawing, causing the left edge of the screen to appear along the far right edge.  By adjusting the "Horizontal Width" on the GBS-8200, some of this overlap can be moved off the right edge of the screen, so a balance between this and a low clock frequency must be found.  The approximate frequency set by R16 after this adjustment is between 7.75 and 8.0 MHz.
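Numerically, the "widening" is easy to see:  The number of pixels per line is fixed, so the time taken to read out a line - and hence its drawn width - scales inversely with the pixel clock.  A minimal sketch using the frequencies mentioned above:

```python
# Relative horizontal width of a drawn line vs. the read-out pixel clock.
# The pixel count per line is fixed, so the time taken to read out a line
# (and hence its width on the VGA raster) scales as 1/f_clock.
def relative_width(f_nominal_mhz, f_adjusted_mhz):
    """Width of the drawn line relative to the nominal clock setting."""
    return f_nominal_mhz / f_adjusted_mhz

# Nominal 11.0 MHz vs. the ~8.0 MHz arrived at after adjustment:
print(relative_width(11.0, 8.0))  # lines are drawn roughly 1.4x "wider"
```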

As mentioned earlier, trying to set a color horizontally across a scan line is not really practical:  As we have seen, the pixel read-out clock is a free-running oscillator that is not synchronous with any of the video sync pulses, so there is no "easy" way to synchronize a clock signal to set color attributes along the scan line from the video information alone.  To do so would require a sample of the pixel clock itself from the Monitor Control board!

In theory, it may be possible to tie the internal pixel clock to an already-existing clock signal on the Monitor Control board (e.g. the 8 MHz clock) to allow this and to reduce the "glitching" that is sometimes visible:  This modification is open to investigation.




Photo gallery

The following are screen captures obtained by first connecting a VGA-to-HDMI converter to the VGA output of the GBS-8200 board, and then connecting the HDMI output to a USB3 capture device meaning that the image is re-sampled several times in the process, accumulating geometrical artifacts. 


Figure 23:
The main "RX FM" screen.  The top portion is colored as light magenta to indicate an RX-FM screen while the center portion is colored in yellow.  The "soft" buttons on the bottom of the screen are given the attribute of a "gray" color.
Click on the image for a larger version.


Figure 24:
The TX FM screen, the top portion color-coded as light-cyan.
Click on the image for a larger version.


Figure 25:
The "duplex" screen, the top portion color-coded as light-green.
Click on the image for a larger version.

Figure 26:
The "oscilloscope" screen.  Because it is an "RX FM" screen, the top portion is colored with light-magenta, while the portion containing the scope trace is colored light yellow.
Click on the image for a larger version.

Figure 27:
The analyzer display, color coded as light cyan as it's one of the "TX FM" modes.
Click on the image for a larger version.

Figure 28:
The Modulation Monitor "Zoom" screen, color coded as light magenta as it's one of the "RX FM" modes.
Click on the image for a larger version.

 

Video captured from the 4031:

Here is a short video, captured from the output of the GBS-8200, as the various screens are selected on the 4031:

 

At the end of the video, the monochrome modes (green, yellow, etc.) are selected in sequence.

Remember:  The video on the LCD mounted in the 4031 looks quite a bit better than is represented in the video - not only because it's a smaller screen, but the capturing of the video from the VGA output added yet another stage of analog digitization/degradation - plus there are artifacts from the YouTube video compression as well.

* * * * * * * * * * * * * * * * 

Why use a PIC?

One might ask, "Why did you do this with a PIC rather than an Arduino or a Raspberry Pi?"

First, I've been using PIC microcontrollers since the early 1990s, making good use of the CCS "PICC" compiler (LINK) for much of this time:  This compiler is capable of producing fairly tight and compact code and I'm very familiar with it.  The PIC16F88 was chosen because it has the necessary hardware peripherals, it's easy to use, has plenty of RAM, program space and speed for this task, and is still available in DIP (and SMD) packages - a real plus in these days of "supply chain" issues.

The code running on the PIC uses interrupts and, as such, the same function could likely be performed on a lower-end Arduino UNO, as that processor sports similar hardware capabilities - but it's unlikely that this could be done using the typical Arduino IDE sketch environment, which does not, by default, lend itself to latency-critical interrupt processing.  You would have to get much closer to the "bare metal" and implement lower-level interrupts and some careful coding (possibly in mixed "C" and assembly) in order to have the code operate fast and consistently enough to do the pixel counting.

Finally, a Raspberry Pi - if you can get one - would be overkill:  You would still need to interface the same signals (sync, video), but to 3.3 volt logic, and you would still need the same hardware (analog switches, etc.) to modify the video attributes - not to mention the time-critical code on a non-realtime operating system to do the pixel counting!

Where can I get the code?

You may find the source code (for the CCS "PICC" compiler - I used version 5.018) and a compiled .HEX file for the PIC16F88 at the following links:

The .HEX code above is suitable for "burning" into a PIC16F88, and I use the PicKit3 programmer's ICSP (In Circuit Serial Programming) for this:  It's possible to reprogram the device in a powered-up 4031 - but because the code is written to detect when the ICSP is connected, it won't resume normal operation until the cable is disconnected.

As mentioned before, I have only one version of the 4031, so if your device has "different" screen signatures that result in pixel counts that don't match what's in the code, that screen will be rendered with white text.  Due to the complexity of the screen detection via pixel counting, making the recognition of the screen an automated process so that one could provide user-defined configurations would require a significant addition to the code - and likely the need for much more code space.

With the information provided it should be possible to apply this technique using other hardware platforms/microcontrollers - provided that one has either the speed to reliably count pixels at MHz rates and/or is able to get close enough to the "bare metal" of the processor to use on-chip peripherals to aid in the task.  In either case, close attention to the way the code operates - possibly a bit of optimization - will likely be required to pull off this task.

Final comments:

The most obvious change in the appearance of the 4031 after the modification - other than the colorized screen - is that of readability.  Clearly, the replacement of the degraded screen protector improved things considerably!

One advantage of the CRT - assuming that it is in good condition - is that it can be very bright, meaning that the LCD is at a slight disadvantage where high ambient light might be an issue:  In this case, one of the available "monochrome" modes may help.

The most obvious disadvantage of the LCD is that unlike the CRT, which has essentially a Lambertian emission profile from its surface (e.g. it radiates hemispherically from the plane of the surface of the CRT), the LCD, by its very nature, has a comparatively reduced viewing angle.

When faced with viewing difficulties one would, in practice, simply relocate or reposition the 4031 so that it was more favorably oriented - and in some instances switching to one of the large "Zoom" screens may help when reading from a distance and/or awkward angle:  If you wish to do so, you could take advantage of the ability to use an external LCD monitor (small 7" units are fairly inexpensive) as described above.

Installing an LCD panel - with a blemish-free screen protector - and having "colorized" screens is a nice "refresh" of the 4031, particularly if you have been dealing with an ailing CRT for which there is no modern, drop-in equivalent.

* * * * * * * * * * *

This page stolen from ka7oei.blogspot.com


[END]



Exploring the NDK 9200Q7 10 MHz OCXO (Oven-controlled Crystal Oscillator)


Figure 1:
The NDK 9200Q7 OCXO.  This unit, pulled from
used equipment, is slightly "shop-worn" but still
serviceable.  The multi-turn tuning potentiometer
is accessible via the hole at the lower-left.
Click on the image for a larger version
The NDK 9200Q7 (pictured) is an OCXO (Oven-Controlled Crystal Oscillator) that occasionally appears on EvilBay or surplus sites.  While not quite as good a performer as the Isotemp 134-10 (see the 17 October, 2017 Blog entry, "A 10 MHz OCXO" - Link) it's been used for a few projects requiring good frequency stability, including:

  • The 146.620 Simulcast repeater system.  One of these is used at each transmitter site, which are held 4 Hz apart to eliminate "standing nulls" - and they have stayed put in frequency for over a decade. (This system is described in a series of previous blog entries starting with "Two Repeaters, One System - Part 1" - Link).
  • 10 GHz transverter frequency reference.  One of the local amateurs used one of these units to hold his 10 GHz frequency stable and it did so fairly well, easily keeping it within a hundred Hz or so of other stations:  This was good enough to allow him to be easily found and tuned in, even when signals were weak.

At least some of these units were pulled from scrapped VSAT (Very Small Aperture SATellite) terminals so they were designed for both stability and the ability to be electronically tuned to "dial in" the frequency precisely.

Testing and experience shows that, given 10-15 minutes to thermally stabilize, these units are perfectly capable of holding the frequency to better than 1 part in 10⁸ - or about 1 Hz at 100 MHz - and since any of these units that you find today are likely to be 25-30 years old, the intrinsic aging of the quartz crystal itself is going to be well along its asymptotic curve to zero.
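To put "1 part in 10⁸" in concrete terms, the corresponding frequency offset is just the carrier frequency multiplied by the fractional stability.  A minimal sketch (the 10.368 GHz figure below is simply a representative amateur microwave frequency, not a value from this article):

```python
# Frequency offset corresponding to a given fractional stability.
def offset_hz(carrier_hz, fractional=1e-8):
    """Worst-case frequency offset (Hz) for a fractional stability."""
    return carrier_hz * fractional

print(offset_hz(10e6))      # ~0.1 Hz at the 10 MHz output itself
print(offset_hz(100e6))     # ~1 Hz at 100 MHz
print(offset_hz(10.368e9))  # ~100 Hz when multiplied up to 10 GHz
```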

Figure 2:
The bottom of the OCXO, annotated to show the various
connections.
Click on the image for a larger version.

Using this device

In its original application, this device was powered from a 12-15 volt supply, but if you were to simply apply power and give it 5-15 minutes to warm up, you would probably be disappointed in its accuracy:  With nothing applied to its external tuning input, there is nothing to pull it anywhere close to its intended frequency.

Because of the need for it to be electrically tuned, this device is actually a VCXO (Voltage-Controlled Crystal Oscillator) as well and, as such, it has a "Tune" pin, identified in Figure 2.  Nominally, the tuning voltage was probably between 0 and 10 volts but, unless a voltage is applied, this pin will naturally drift close to zero volts, the result being that at 10 MHz, it may be a dozen or two Hz low in frequency.

Adding a resistor

The easiest "fix" for this - to make it operate "stand-alone" - is to apply a voltage to the pin.  If your plans include locking this to an external source - such as making your own GPSDO (GPS Disciplined Oscillator) - then one need simply apply this tuning voltage from a DAC (Digital-to-Analog Converter) or filtered PWM output, but if you wish to use this oscillator in a stand-alone configuration - or even as an externally-tuned oscillator - a bit of modification is in order.

Figure 3:
This shows the 10k resistor added between the internal 5 volt
source and the "TUNE" pin to allow "standalone" operation.
Click on the image for a larger version.
The OCXO may be disassembled easily by removing the small screw on each side and carefully un-sticking the circuit board from the insulation inside.  Once this is done, you'll see that there are two boards:  The one on the top is part of the control board for the heater/oven while the bottom houses some of the oscillator components.

Contained within the OCXO is a 78L05 five-volt regulator which is used to provide a voltage reference for the oven and also, likely, a stable source of power for the oscillator - and we can use this to our advantage rather than needing to regulate an external source which, itself, would be prone to thermal changes.

Figure 3 shows the addition of a single 10k resistor on the top board, connecting the "TUNE" pin to the output of this 5 volt regulator.  With this resistor added, the "TUNE" pin is automatically biased to a temperature-stable (after warm-up) internal voltage reference, allowing this OCXO to be used in a "standalone" configuration with no external connection to the "TUNE" pin:  It can then be used as-is as a good 10 MHz reference, using the onboard multi-turn potentiometer to precisely set the frequency of operation.

Figure 4:
More pictures from inside the OCXO
Click on the image for a larger version.
Another advantage of adding the internal 10k resistor is that it's easy to reduce the TUNE sensitivity to an external voltage:  This value isn't critical, with anything from 1k to 100k likely being usable.  Testing shows that, by itself, the oscillator is quite stable and varying the TUNE voltage will adjust it by well over 10 Hz above and below 10 MHz.

The internal 10k resistor also makes it easy to narrow the electronic tuning range.  In many cases a much narrower tuning range will suffice, so a resistor of 100k (or greater) can be used in series with the TUNE pin, between it and an external tuning voltage, acting as a voltage divider.  Doing this will reduce the tuning range and it can also improve overall stability, since much of the tuning voltage will then be derived from the oscillator's already-stable 5 volt internal source.  The stability of the OCXO itself is such that even with a 10-ish:1 reduced tuning range due to a series 100k resistor, there is still far more external adjustment range than really necessary to tune the OCXO and handle a wide range of external temperatures.

The actual value of the added internal resistor is unimportant and could be selected for the desired tuning/voltage ratio based on the external series tuning resistor and the impedance of the tuning voltage.
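The effect of the added resistor can be checked with simple divider arithmetic:  The TUNE pin sits at the superposition of the internal 5 volt source (through the 10k) and the external voltage (through the 100k), and the external tuning sensitivity is reduced by the divider ratio.  A minimal sketch using the resistor values from the text:

```python
# TUNE pin voltage with the internal 10k pull-up to the stable 5 V rail
# and an external series resistor to an external tuning voltage.
def tune_pin_voltage(v_ext, r_ext=100e3, v_int=5.0, r_int=10e3):
    # Node equation: (v_int - v)/r_int + (v_ext - v)/r_ext = 0
    return (v_int / r_int + v_ext / r_ext) / (1.0 / r_int + 1.0 / r_ext)

def sensitivity(r_ext=100e3, r_int=10e3):
    """External tuning sensitivity, dV_tune/dV_ext."""
    return r_int / (r_int + r_ext)

# Standalone (external source at 0 V, or think of the pin as mostly
# held up by the internal reference):
print(round(tune_pin_voltage(0.0), 2))   # ~4.55 V
print(round(1 / sensitivity(), 1))       # ~11:1 reduction ("10-ish:1")
```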

When reassembling the OCXO, take care that the insulation inside the can is as it was at the time of disassembly to maximize thermal stability and, of course, be sure that the hole in the can lines up with the multi-turn potentiometer!

Operating conditions

Figure 5:
Even more pictures from inside the OCXO.
Click on the image for a larger version.
The "official" specifications of this OCXO are unknown, but long-term use has shown that it will operate nicely from 12-15 volts - and it will even operate from a 10 volt supply, although the reduced heater power at 10 volts causes warm-up to take longer and there may not be sufficient thermal input for the oven to maintain temperature at extremely low (<15F, <-9C) temperatures unless extra insulation is added (e.g. foam around the metal case.)

It is recommended that, if one uses it stand-alone, the voltage source for this device be regulated:  While the on-board 5 volt regulator provides a stable reference without regard to the supply voltage, the amount of thermal input from the oven will change with voltage - more power and faster heating at higher voltage.  While you might think that this wouldn't affect a closed-loop system, it actually does, owing to internal thermal resistance and the fact that, due to loss to the environment, there will always be a thermal gradient between the heater, the temperature-sensitive circuitry, and the outside world - and changing the operating voltage, and thus the amount of heater power, will subtly affect the frequency.

Finally, this oscillator - like any quartz crystal oscillator that you are likely to find - is slightly affected by gravity:  Changing the orientation (e.g. turning it sideways, upside-down, etc.) of this oscillator affects its absolute frequency by a few parts in 10⁸, so if you are interested in absolute accuracy and stability, it's best to do the fine-tuning adjustment with it oriented in the same way that it will be used - and keep it in that orientation.

* * * * * * * * *

This page stolen from ka7oei.blogspot.com

[End]


A 2 meter band-pass cavity using surplus "Heliax"


The case for filtering

If you operate a repeater - or even a simplex radio such as a Packet node - that is located at a "busy" radio site, you'll no doubt be aware of the need for cavity-based filtering.

In the case of a repeater, the need is obvious:  Filtering sufficiently "strong" to keep the transmit signal out of the receiver, and also to remove any low-level noise produced by the transmitter that might land on the receive frequency.

Figure 1:
Close-in responses of various filter combinations
Yellow:  Duplexer-only
Magenta:  Bandpass-only
Cyan:  Duplexer + Bandpass
Click on the image for a larger version.

In the case of a packet or simplex node of some sort, a simple "pass" cavity is often required at a busy site not only to prevent its receiver from being overloaded by off-frequency signals, but also to be a "good neighbor" and prevent low-level signals from your transmitter from getting into other users' receivers - not to mention preventing those "other" signals from getting back into your transmitter to generate spurious signals in their own right.

Comments:

In this discussion, a "band pass" filter refers to one passing ONLY a narrow range of frequencies around that of interest (and, in the case of a coaxial resonator, at odd multiples of its lowest resonant frequency) - but nothing else.
It is HIGHLY RECOMMENDED that anyone attempting to construct this type of filter get and learn to use a NanoVNA:  Even the cheapest units (approximately $50US) when properly set up will be capable of the sorts of measurements depicted in this article.

A Band-Pass/Band-Reject (BpBr) duplexer may not be what you think!

A common misconception is that a typical repeater duplexer - even though it will have "band pass" written on its label, or in its specifications - has a true "band pass" response.

Figure 1 shows a typical example.  The yellow trace shows the response of a typical 2 meter duplexer where we can see a peak in response at the "pass" frequency and a rather deep notch at the frequency that we wish to reject.

The problem becomes more apparent when we look over a broader frequency range.  Figure 2 shows the same hardware, but over a span of about 30 MHz to 1 GHz.

Figure 2:
The same as in Figure 1 except over a wider
frequency range showing the lack of off-
frequency rejection of a "BpBr" duplexer
(Yellow) that is significantly mitigated by the
addition of a band-pass filter (Cyan)
Click on the image for a larger version.

Keeping an eye on the yellow trace, you'll note that over most of the frequency range there is very little attenuation.  What this means is that the "BpBr" filter doesn't exhibit a true pass response once you get more than a few MHz away from the design frequency.

I've actually had arguments with long-time repeater owners who disagreed with this assertion, but hadn't actually "swept" a duplexer over a wide frequency range:  These days, with the availability of inexpensive test equipment like a NanoVNA, there's no good excuse for not determining this for yourself!

For more about this, see the related article linked here.

Why is this a problem?

In the "old days", radios that you would use at a repeater site were typically cast-off mobile radios - and even if you had a purpose-built repeater, it was typically based on a mobile design.  These radios typically used a bank of narrowband (often helical) filter elements, each tuned to the frequency of interest - or, if several frequencies were used, the system planners often placed them near each other so that they could be covered by the receivers' narrow filters without undue attenuation.

Modern radios are "broadband" in nature meaning that they often have rather wide receiver front-end filters:  It is not practical to have electronically-tuned filters that are anywhere near as narrow as the Helical filters of the past which means that they simply lack the filtering to reject strong, off-frequency signals.

When a modern radio is dropped in place of an old radio, disappointment is often the result:  The "new" radio may seem less sensitive than the old one - or it might seem that sensitivity varies over time.  In reality, the "new" radio may well be overloaded by those off-frequency signals that the old radio easily filtered.  What's worse is that the precise nature of this overload condition may be masked by the use of subaudible tones or digital tone squelch - and if this is a digital audio system, there may be no obvious clues at all as to the problem at hand.

To be sure, if the radio in question can operate in carrier-squelch analog mode, the usual techniques to determine overload (Iso-Tee measurements, injection of a weak carrier while observing SNR, etc.) can still be used.

A simple pass cavity:

While not a panacea, the use of a simple pass-only cavity can go a long way to diagnose - even solve - some chronic overload issues - particularly if these have arisen when gear was replaced.

Suitable pass cavities are readily available for purchase - new, from a number of suppliers and used, from auction sites - they are also pretty easy to make from copper and aluminum tubing - if you have the tools.  Because of the rather broad nature of a typical pass cavity, temperature stability is usually not much of an issue in that its peak could drift hundreds of kHz and only affect the desired signal by a fraction of a dB.

Another thing that could be used to make reasonable-performance pass cavities is larger-diameter hardline or "Heliax".  Ideally, something on the order of 1-5/8" or larger would be used owing to its relative stiffness and unloaded "Q".  One could use either air or foam dielectric cable, the main difference being that the "Q" of the foam cable will be slightly lower and the cavity itself will be somewhat shorter.

Figure 3:
Cutting the (air core) cable to length
Click on the image for a larger version.

The "Heliax cavity" described can be built with simple hand tools, and it uses a NanoVNA for tuning and final adjustment.   While its performance will not be as good as a larger cavity, it will - in many cases - be enough to attenuate strong, out-of-band signals that can degrade receiver performance.

Using 1-5/8" "Heliax":

The "cavity" described uses 1-5/8" air-core "Heliax" - and it is necessary for the inner conductor to be hollow to accommodate the coupling capacitors.  Most - but not all - cable of this size and larger has a hollow center conductor.  Cables of larger diameter than 1-5/8" should work fine - and are preferred - but smaller than this may not be practical, both for reasons of unloaded "Q" and because the center conductor may not be hollow, or its inside diameter may not accommodate the coupling capacitors described later on.

Preparing the "shorted" end:

For 2 meters, a piece 18" long was cut.  For the air-dielectric cable, it's recommended that one cut it gently with a hand saw rather than a power tool, as the latter can "snag" and damage the center conductor.

Figure 4:
The "shorted" end of the stub with the slits bent to the middle
and soldered to the center conductor.
Click on the image for a larger version.

For the "cold" (e.g. shorted) end, carefully (using leather gloves) remove about 3/4" (19mm) of the outer jacket and then clean the exposed copper shield with a wire brush, abrasive pad and/or sand paper.  With this done, use a pair of tin snips to cut slots about 1/2" (12mm) deep and 1/4" (6mm) wide around the perimeter.  Once this is done, use a pair of needle nose pliers to remove every other tab, resulting in a "castellated" series of slots.  At this point, using a pair of diagonal pliers or a knife, cut away some of the inner plastic dielectric so that it is about 1/2" (12mm) away from the end of the center conductor.

Now, clean the center conductor so that it is nice and shiny and then bend the tabs that were cut inwards so that they touch the center conductor.  Using a powerful soldering iron or soldering gun - and, perhaps, a bit of flux - solder the shield tabs to the center conductor all of the way around.  It's best to do this with the section of coax lying on its side so that hot solder/metal pieces don't end up inside the coax - particularly if air-core cable is used.  If you used acid-core flux, carefully remove it before proceeding.

With one end of the cable shorted you can trim back any protruding center conductor and file any sharp edges - again taking care to avoid getting bits of metal inside the cable or embedded in the foam.

Preparing the "business" end:

Figure 5:
The "coupling tubes" soldered in place which
receive the wires for coupling in/out.
Click on the image for a larger version.
At this point, the chunk of coax should be trimmed again, measuring from the point where the center conductor is soldered to the shield:  For air-core, trim it to 17" (432mm) exactly and for foam core, trim it to 16-1/8" (410mm).  Again, using a sharp knife and gloves, remove about 3/4" (19mm) of the outer jacket and, again, clean the outer conductor so that it is bright and shiny.
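As a sanity check, these trim lengths are in the neighborhood of an electrical quarter-wave on 2 meters, shortened somewhat by the loading of the coupling capacitors.  A rough sketch, assuming typical velocity factors of about 0.92 for air-dielectric and 0.81 for foam cable - these are assumed values, so check your cable's datasheet:

```python
C = 299_792_458  # speed of light, m/s

def quarter_wave_inches(f_mhz, velocity_factor):
    """Unloaded quarter-wave resonator length in inches."""
    wavelength_m = C / (f_mhz * 1e6)
    return (wavelength_m / 4) * velocity_factor / 0.0254

# Assumed velocity factors: ~0.92 air-dielectric, ~0.81 foam.
print(round(quarter_wave_inches(146.0, 0.92), 1))  # ~18.6" (cut: 17")
print(round(quarter_wave_inches(146.0, 0.81), 1))  # ~16.4" (cut: 16-1/8")
```

In both cases the actual trim length is a bit shorter than the unloaded quarter-wave, consistent with the capacitive loading at the open end.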

Making coupling capacitors:

We now need to make two capacitors to couple the energy from the "in" and "out" connectors to the center resonator and for this, I cut two 3"(75mm) long pieces of RG-6 foam TV coaxial cable and from each of these pieces, I removed and kept the center conductor and dielectric - removing any foil shield and then stripping about 1/2"(12mm) of foam from one end of each piece.

At this point, you'll need some small copper tubing:  I used some 1/4" O.D. soft-drawn tubing, cutting two 2" (50mm) lengths and carefully straightening them out.  To cut this, I used a rotary pipe cutting tool which slightly swaged the ends - but this worked to advantage:  As necessary, I opened up the end cut with the rotary tool just enough that it allowed the inner dielectric of the RG-6 to slide in and out.

Comment:

The use of 1/4"(6mm) O.D. copper pipe and RG-6 center conductor/dielectric isn't terribly critical:  A different-sized copper or brass pipe could be used as long as two parallel pieces will fit inside the center conductor of the Heliax - and that the chosen center conductor and dielectric of the coax you use to make the capacitor will fit somewhat snugly inside it.

Figure 6:
The PC Board plate soldered to the end of the
coax.
Click on the image for a larger version.

Using a hot soldering iron or gun, solder the two straightened pieces of tubing together, in parallel, making sure that the ends of the tubing that you adjust to snugly fit the outside diameter of the piece of RG-6 are at the same end.  Once this is done, insert the two parallel pieces of tubing inside the Heliax's center conductor and solder them, the ends flush with the end of the center conductor.

Making a box:

On the "business" (non-shorted) end of the piece of cable we need to make a simple box to which we can mount the RF connectors with good mechanical stability.  For the 1-5/8" cable, I cut a 3" (75mm) square piece of 0.062" (1.58mm) thick double-sided glass-epoxy circuit board material and, using a ruler, drew lines on it from the opposite corners to form an "X" to find the center.  Using a drill press, I then used a 1-3/4" (45mm) hole saw to cut a hole in the middle of this piece of circuit board material, using a sharp utility knife to de-burr the edges and to enlarge it slightly so that it would snugly fit over the outside of the cable shield:  You will want to carefully pick the size of hole saw to fit the cable that you use - and it's best that it be slightly undersized and enlarged with a blade or file than oversized and loose.

Figure 7:
Bottom side of the solder plate showing the
connection to the coax.
Click on the image for a larger version.

After cleaning the outside of the coaxial cable and both sides of the circuit board material, solder it to the (non-shorted) end on both sides of the board, almost flush with just enough of the shield protruding through the top to solder it.  For this, a bit of flux is recommended, using a high-power soldering iron or gun - and it's suggested that it first be "tacked" into place with small solder joints to make sure that it is positioned properly.

When positioning the box, rotate it such that the two "capacitor tubes" that were soldered into the center conductor are parallel with one of the sides of the square - this to allow symmetry to the connectors.

Adding sides and connectors:

With the base of the box in place, cut four sides, each being 1-3/8"(40mm) tall and two of them being 3"(75mm) long and the other two being 2-1/2"(64mm) long.  First, solder the two long pieces to the top, using the shorter pieces inside to space and center them - and then solder the shorter pieces, forming a five-sided (base plus four sides) box atop the piece of cable. As seen in the photo, the "short" sides are parallel to the two tubes in the center conductor.

Figure 8:
Inside the box with coupling/tuning stubs and
lines and stiffening bar installed.
Click on the image for a larger version.

As can be seen in the picture, BNC connectors were used as they were convenient, but "N" type, SMA or even UHF connectors could be used - but the use of BNC connectors will be described.

The BNC connectors were mounted on opposite sides of the box, approximately 3/8" to the left of the center line and 3/4" from the bottom.  As can be seen in the photo, the connectors were mounted in the "short" wall of the box such that our "RG-6" capacitors more or less line up with the capacitor tubes.

Now, insert the ends of the RG-6 center conductor into the "capacitor tubes" and, bending the top in an "L" shape, solder the end with the exposed center conductor to the coaxial connectors.

Preliminary adjustment:

At this point we are ready to do some preliminary tuning - and this will require a NanoVNA or similar:  It is presumed that the builder will have familiarity with the NanoVNA to make S11 VSWR and S12 insertion loss measurements on an instrument that has been properly calibrated at the frequency range in question.

Setting the NanoVNA to measure both VSWR and through-loss over a span of 130-160 MHz, connect it to the cavity:  If all goes well, the peak of the pass response will be somewhere in the 130-140 MHz range.

Adjusting the center frequency and passband response is an iterative process as reducing the coupling by pulling out the capacitors (the RG-6 center conductor) will also increase the frequency.  Practically speaking, only about 3/8"-1/2"(9-12mm) of center conductor is needed at most to attain optimal coupling so don't be afraid to pull out more and more of the capacitors.

A bit of experimentation is suggested here to get the "feel" of the adjustment - and here are a few pointers:

Figure 9:
The band-pass filter sitting against a Sinclair
Q2220E 2-meter Duplexer - a good combination
for receiver protection at a busy repeater site!
Click on the image for a larger version.

  • Lowest SWR is obtained when the coupling capacitors are identical.  If the SWR isn't below 1.5:1, try pulling out or pushing in one of the capacitors slightly to determine the effect - but move it only about 1/16" in each iteration.  Generally speaking, pulling one out slightly is the same as pushing the other in slightly in terms of reducing VSWR.
  • The passband response will be narrower the less of the RG-6 center conductor is in the capacitor tubes - but the insertion loss will also go up.
  • The frequency will go up the more the passband response is narrowed by reducing the coupling capacitors.
  • With the lengths given (e.g. 17" for air core, 16-1/8" for foam core) the passband will be within the 2 meter band with the amount of coupling that will yield about 0.5-0.6 dB insertion loss.  To a degree, you can "tune" the center frequency of the cavity by adjusting the coupling.  It is recommended that you first tune for the desired bandpass response - and then tune it to frequency:  See below for additional comments.
  • It is recommended that you use as little coupling as needed to obtain the desired response.  For example, if the cavity is "over coupled", the insertion loss will be about 0.5dB, and this will go up only very slightly as the coupling is reduced and the response is narrowed - but at some point the insertion loss will start to climb as the passband is further narrowed.

As mentioned above, it's recommended that the approximate passband width be set with the capacitors and if all goes well, the pass frequency can be adjusted with just the adjustment to the coupling with only a slight change in overall bandwidth.  If, however, the desired "narrowness" results in a pass frequency above that which is desired, a simple "tab" capacitor can be constructed as shown in the photo.
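As a sanity check on starting lengths, the unloaded quarter-wave resonance of a shorted coaxial stub can be estimated from its velocity factor.  Here is a minimal sketch (the 0.82 velocity factor for foam dielectric is an assumed, typical value - check your cable's data sheet):

```python
# Unloaded quarter-wave length of a shorted coaxial stub.  Capacitive
# loading from the coupling (and any tab capacitor) pulls the actual
# resonance lower, so the physical cavity ends up a bit shorter.
C = 299_792_458.0  # speed of light, m/s

def quarter_wave_inches(freq_hz, velocity_factor):
    """Quarter wavelength inside the cable, in inches."""
    metres = velocity_factor * C / (4.0 * freq_hz)
    return metres / 0.0254

# ~146 MHz with an assumed foam-dielectric velocity factor of 0.82
length_in = quarter_wave_inches(146e6, 0.82)
print(round(length_in, 1))  # 16.6 (inches)
```

This lands near the 16-1/8" foam-core length given above, the difference being the capacitive loading that shortens the resonant length in practice.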

Figure 10:
The "close-in" response of the band-pass cavity.
With the current settings providing a bit less than 0.5dB of
attenuation at the center, its rejection at the edges of the
U.S. 2 meter band (144-148 MHz) is a bit over 8 dB.
Click on the image for a larger version.
This capacitor consists of two parts:  A 3/8" (10mm) wide, 3/4" (20mm) long piece of copper or brass sheet is soldered to the center conductor.  The addition of this piece, alone, may lower the center frequency, and bending the tab up and down provides a degree of fine-tuning.  If the center frequency is still too high, another 3/8" wide, 3/4" long piece can be soldered to the shield of the coax next to it and bent so that it and the first piece form the two plates of a simple capacitor, allowing an even greater reduction in the resonant frequency of the cavity.

With the preliminary tuning done, a bit of reinforcement of the box is suggested:  A strip of copper circuit board material 3/8"-1/2" wide is soldered between the inside walls that carry the RF connectors.  This strip minimizes flexing of those walls due to stresses on the connected cables - flexing which can change the orientation of the coupling capacitors and cause slight detuning.

With this reinforcement in place, do a final tweaking of the bandpass filter's tuning.

Final assembly:

It's strongly suggested that the shorted end of the cavity be covered to prevent debris and insects from entering either the center conductor or, especially, the space between the shield and center conductor.  This may be done using electrical tape or RTV (Silicone) adhesive.

Figure 11:
A wider sweep showing the rejection at and below the FM
broadcast band and up through 225 MHz.  This
filter, by itself, provides over 40 dB rejection at 108 MHz
Click on the image for a larger version.
Similar protection should be done to the top of the box:  A piece of brass or copper sheet - or a piece of PC board material could be tack-soldered into place - or even some aluminum furnace tape could be used:  The tuning should be barely affected - if at all - by the addition of this cover, but it is worth verifying this with a simple test-fit of the cover.

Final comments:

While the performance will vary depending on the coupling and tuning, the prototype, tuned for a pass response at 146.0 MHz, performed as follows:

  • Insertion loss at resonance:  <0.5dB
  • -3dB points:  -88 kHz and +92 kHz
  • -10dB points:  -2.5 MHz and +2.75 MHz
  • -20dB points:  -7.6 MHz and +11 MHz
  • 2:1 VSWR bandwidth:  600 kHz
  • Loss <=108 MHz:  40dB or greater

More detail about the response of this cavity filter may be seen in figures 10 and 11.  In the upper-left corner of each figure may be found the measured loss and VSWR at each of the on-screen markers.
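From the measured -3 dB points, the loaded Q of the filter can be estimated directly - a quick calculation using the figures listed above:

```python
# Loaded Q estimated from the measured -3 dB bandwidth: Q = f0 / BW
f0 = 146.0e6            # pass frequency, Hz
bw_3db = 88e3 + 92e3    # total -3 dB bandwidth from the figures above, Hz
q_loaded = f0 / bw_3db
print(round(q_loaded))  # 811
```

A loaded Q in the 800 range is quite respectable for a filter built from scrap coaxial cable.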

If a higher insertion loss can be tolerated, the measured bandwidths will be narrower.  Depending on the situation, an extra dB or two of path loss may be a reasonable trade-off for improved off-frequency rejection - particularly on a noisy site where the extra loss won't result in a degradation of system sensitivity due to the elevated noise floor.

As with any cavity-type filter, there is a bit of fragility in terms of frequency stability with handling.  If, after tuning, this - or any cavity filter - is dropped or strongly jarred, the tuning should be re-checked and adjusted as necessary.

There's no reason why this cavity couldn't be used for transmitting, although using the materials described (e.g. the center conductors of RG-6) I would limit the power to 10-15 watts without additional testing.

As it is, this band-pass filter - in conjunction with a conventional 2-meter duplexer - can provide a significant reduction in off-frequency energy that could degrade receiver performance.  As can be seen in Figure 2, the pass cavity may still pass energy from odd-order (3rd, 5th) harmonics that may fall within commercial/70cm and TV broadcast frequencies - but the addition of a VHF low-pass filter - perhaps even the VHF side of a VHF/UHF mobile diplexer - would eliminate these responses.

To be a good neighbor on a busy site it's strongly recommended that a pass cavity also be installed on the transmit side, along with a ferrite isolator (e.g. circulator with dummy load) to deal with signals that may enter into the transmitter's output stage and mix, causing intermodulation distortion and interference - both to your own receiver and those of others. 

"I have 'xxx' type of cable - will it work?"

The dimensions given in this article are approximate, but should be "close-ish" for most types of air and foam dielectric cable.  While I have not constructed a band-pass filter with much smaller cable like 1/2" or 3/4", it should work - albeit with somewhat lower performance (e.g. a not-as-narrow band-pass with higher losses) - and it may still be useful.

Because of the wide availability of tools like the NanoVNA, constructing this sort of device is much easier than it once was:  Such an instrument allows one to characterize both the insertion loss and the frequency response, as well as experimentally determine what is required to make use of whatever large-ish coaxial cable you might have on hand.

* * * * * *

Future article: 

I have constructed several effective notch-type cavities from this type of coaxial cable - including one that is designed to attenuate 144.39 MHz APRS energy in a 147 MHz receiver - but since there are relatively few articles about pass-type cavities constructed in this way, I decided to post this one first.

Related articles:

  • Second Generation Six-Meter Heliax Duplexer by KF6YB - link  - This article describes a notch type duplexer rather than pass cavities, but the concerns and construction techniques are similar.
  • When Band-Pass/Band-Reject (Bp/Br) Duplexers really aren't bandpass - link - This is a longer, more in-depth discussion about the issues with such devices and why pass cavities should be important components in any repeater system.

 

This article was stolen from ka7oei.blogspot.com

[END]


"CQ CQ - Calling all dielectric welders!" (Or, those strange curvy things seen on a 10 meter waterfall)


 If one owns a receiver with a waterfall display, the increased cluttering of the 12 and 10 meter bands with weird "swooping" signals could not have gone unnoticed.  Take, for example, this recent snapshot of the lower portion of 10 meters from the waterfall of WebSDR #5 at the Northern Utah WebSDR (Link)

Figure 1:
10 Meters as seen on a beam antenna pointed toward Asia showing QRM from a large number of different sources - presumably dielectric heaters/welders/seamers.  These things radiate badly enough that they should have their own callsigns, right?
Click on the image for a larger version.

 

In looking at this spectral plot - which comes from an antenna oriented to the Northwest (toward Asia and the Pacific) one could be forgiven for presuming that someone had somehow connected a can of "Silly String" to their coax and was squirting noodles into the ionosphere!

What, specifically, are we looking at?

Across the entire spectrum plot one can see these "curved" signals, some of them - like that near the bottom, just above the cursor at 28374 kHz - are quite strong while there are many, many others that are much weaker, cluttering the background.  These signals contrast with normal SSB and CW signals - the former being seen clustered around 28500 and the latter around 28100 kHz - which are more or less straight lines as these represent transmissions with stable frequencies.

What are these from?  The general consensus is that these are from "ISM" (Industrial, Scientific and Medical) devices that nominally operate between 26957 kHz and 27283 kHz.  Clearly, the waterfall plot shows many devices operating outside this frequency range.

What sort of devices are these?  Typically they are used for RF heating - most often for dielectric sealers of plastic items such as bags, blister packs - but they could also be used in the manufacture of items that require some sort of energetic plasma (e.g. sputtering metal, etching) in any number of industrial processes.

Where are they coming from?

The simple answer is "everywhere" - but in terms of sheer number of devices, it's more likely that much of the clutter on these bands originates in Asia.  Consider the above spectral plot from an antenna located in Utah pointed at Asia - but then consider the plot below, taken at about the same time from an antenna that is pointed east, across the continental U.S. and Canada - WebSDR #4 at the Northern Utah WebSDR (link):

Figure 2:
10 Meters on a beam pointed toward the U.S.
Click on the image for a larger version.

 

To be absolutely fair, this was taken as the 10 meter band was starting to close across the U.S., but it shows the very dramatic difference between the two antennas' directionality, hinting at a geographical locus for many of these signals.

Further proof of the overseas origin of these signals can be seen in the following plot:

Figure 3:
Spectrum from AM demodulation of some of the signals of Figure 1 showing 50/100 Hz mains energy.
Click on the image for a larger version.

This plot was taken by setting the WebSDR to AM and setting for maximum bandwidth, tuning onto a frequency where several of these "swoops" seen in Figures 1 and 2 are recurring and then, using a virtual audio cable, feeding the result directly into the "Spectran" program (link).

As expected this plot shows a bit of energy at the mains harmonic frequencies of 120, 240 and 360 Hz owing to the fact that this antenna points into slightly-noisy power lines operating at the North American 60 Hz frequency - but on this plot you can also see energy at 50 and 100 Hz, indicative of a lightly-filtered power supply operating from 50 Hz power mains - something that is NOT present anywhere in North America.
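The same sort of check can be done programmatically: demodulate the signal as AM, then compare the relative energy near 50 Hz versus 60 Hz in the resulting audio.  A rough sketch of the idea (the function name is my own, and a synthetic capture stands in for the WebSDR audio):

```python
import numpy as np

def relative_power_near(audio, rate, freq_hz, tol_hz=2.0):
    """Fraction of total spectral power within +/- tol_hz of freq_hz."""
    windowed = audio * np.hanning(len(audio))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(audio), 1.0 / rate)
    band = (freqs >= freq_hz - tol_hz) & (freqs <= freq_hz + tol_hz)
    return power[band].sum() / power.sum()

# Synthetic stand-in for the demodulated audio: 50 Hz hum plus noise
rate = 8000
t = np.arange(rate * 2) / rate
rng = np.random.default_rng(0)
audio = 0.2 * np.sin(2 * np.pi * 50 * t) + 0.01 * rng.standard_normal(len(t))

p50 = relative_power_near(audio, rate, 50.0)
p60 = relative_power_near(audio, rate, 60.0)
print(p50 > p60)  # True: the hum energy clusters at 50 Hz, not 60 Hz
```

On a real capture, a clear 50/100 Hz cluster with little 60/120 Hz energy points at a 50 Hz mains country, exactly as the Spectran plot shows.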

Based on other reports (IARU "Intruder Watch", etc.) a lot of these devices seem to be located in Asia - namely China and surrounding countries where one is more likely to experience lax enforcement of spurious radiation of equipment that is manufactured/sold in those locales.

Why the "swoop", "curve" or "fishook" appearance seen in Figure 1?  If these devices were crystal controlled and confined to the nominal 26957 kHz to 27283 kHz ISM frequency range, we probably wouldn't see them in the 10 meter amateur band at all, but many of these devices - likely "built to cost" simply use free-running L/C oscillators that are accurate to within 10-15% or so:  As these oscillators - which are likely integral to the power amplifier itself (perhaps self-excited) - warm up, and as the industrial processes itself proceeds (e.g. plastic melts, material cures, glue dries) the loading on the RF output of this device will certainly change, and this results in an unstable frequency.
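A quick bit of arithmetic shows why a free-running "27 MHz" oscillator with that kind of tolerance lands in the amateur bands - a sketch assuming the nominal 27.12 MHz ISM center frequency:

```python
# How far can a "27 MHz" free-running oscillator wander with 5-15% error?
nominal = 27.12e6  # nominal ISM band center, Hz (assumed typical)
for tol in (0.05, 0.10, 0.15):
    lo, hi = nominal * (1 - tol), nominal * (1 + tol)
    print(f"{tol:.0%} tolerance: {lo / 1e6:.2f} to {hi / 1e6:.2f} MHz")
# Even 5% error reaches 28.48 MHz - comfortably inside the 10 meter band
```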

Why do they radiate?

Ideally, the RF would be confined to the working area and, in the past, reputable manufacturers of such equipment would employ shielding of the equipment and filtering of power and control leads to keep the RF within.  But again, such equipment is often "built to cost", and such filtering and shielding - which is not necessary for the device to merely function - is often omitted.

Can we find and fix these?

In the U.S. and parts of Europe such sources are occasionally tracked down and the RF interference mitigated - either voluntarily or with "help" from the local regulator - but the simple fact is that the intermittent nature of these sources - and the fact that they radiate on frequencies prone to good propagation when the sun is favorable - makes them very difficult to localize.  If the signal source is coming from halfway around the world, there's likely nothing that you can do other than point your directional antenna the other way!

If it so-happens that you can hear such a signal at your location at all times of the day - regardless of propagation - you may be in luck:  There may be a device within a short distance (a few miles/km) of your location - and perhaps you can make a visit and help them solve the problem.

* * * * * * * 

Related article:

 

This page stolen from ka7oei.blogspot.com


[End]


Injection locking cheap crystal "can" oscillators to an external source


Figure 1:
The two generic "can" oscillators tested - both having been
found in my "box of oscillators".
Click on the image for a larger version.

Sometimes one comes across a device with one of those cheap crystal "can" oscillators that is "close" to frequency - but not close enough.  Perhaps this device is used in a receiver, or maybe it's used for clock generation or clock recovery.  Such oscillators are available on a myriad of frequencies - although too-often not exactly the right one!

What if we want to "nail" this oscillator to an external (perhaps GPS-derived) reference?  If this oscillator were variable, the task would be simplified, but finding a "VCXO" (Voltage-Controlled Crystal Oscillator) on the frequency of interest is sometimes not even possible.

What if there were a way to externally lock a bog-standard crystal oscillator to an external source?

To answer this question, I rummaged through my box of crystal oscillators (everyone has such a box, right?) and grabbed two of them:  A standard 4 MHz oscillator and a 19.440 MHz oscillator that has an "enable" pin.

Comment:  

This article refers to standard, quartz crystal oscillators and not MEMs or "Programmable" oscillators where the internal High-Q resonating element likely has no direct relationship with the synthesis-derived output frequency.

Injection locking

This is what it sounds like:  Take a signal source of the desired frequency - typically very close to that of the oscillator that you are trying to nail to frequency - and inject it into the circuitry to lock the two together.

This technique is ancient:  It accounts for the fact that pendulum-type metronomes set "close" to the same tempo on a (wobbly) table will eventually synchronize with each other, and it is the very technique used in the days of analog TV, when receivers synchronized their vertical and horizontal oscillators to the sync pulses of the incoming signal.

It's still used these days, one notable example being the means by which an Icom IC-9700's internal oscillator may be externally locked to an external 49.152 MHz source (see: http://www.leobodnar.com/shop/index.php?main_page=product_info&products_id=352 ) - and this is done by putting a known-stable source of 49.152 MHz "very near" the unit's built-in oscillator.

Injection-locking a discrete-component crystal oscillator is relatively simple:  It's sometimes just a matter of placing a wire near the circuitry with the resonant element (e.g. near the crystal or related capacitors) and the light capacitive coupling will cause it to "lock" to the external source - as long as it's "close" to the oscillator's "natural" frequency.
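The narrowness of the achievable lock range follows from the classic Adler approximation, in which the one-sided lock range scales with the oscillator frequency and the injected-to-oscillating amplitude ratio, and inversely with the resonator's Q.  A sketch (the Q values and injection ratio below are illustrative assumptions, not measurements from the oscillators tested here):

```python
def adler_lock_range_hz(f0_hz, q_loaded, injection_ratio):
    """One-sided lock range per Adler's approximation:
    delta_f = f0 / (2 * Q) * (V_injected / V_oscillator)."""
    return f0_hz / (2.0 * q_loaded) * injection_ratio

# A 4 MHz crystal with an assumed loaded Q of 20,000 and 1% injection:
print(adler_lock_range_hz(4e6, 20_000, 0.01))  # 1.0 (Hz) - very narrow

# The same injection into a low-Q (Q=50) L/C oscillator locks far more readily:
print(adler_lock_range_hz(4e6, 50, 0.01))      # 400.0 (Hz)
```

This is why a high-Q crystal oscillator locks only over tens of Hz while a free-running TV sweep oscillator locked easily.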

Getting a signal inside the oscillator

Injection locking often needs only a small amount of external signal to be applied to the circuit in question - particularly if it's inserted in the feedback loop of the resonant circuit, but what about a "crystal can" oscillator that is hermetically sealed inside a metal case?

Figure 2:
Schematic depiction of power supply rail
to get the external signal "into" the can.
Click on the image for a larger version.
Because, in many cases, opening the can would compromise the seal of the oscillator and expose the quartz element to air and degrade it, this isn't really an option.  Another possibility would be to magnetically couple an external signal into the circuitry, but owing to a combination of its small size and the fact that these devices are typically in ferrous metal cans, this isn't likely to work, either.

So what else can one do to get a sample of our external signal inside?

Power rail injection

The most obvious "input" is via the power supply rail.  Fortunately - or unfortunately, depending on how you look at it - these oscillators often have built-in bypass capacitors on their power rails, putting a low-ish impedance on the power supply input - but this impedance isn't zero. 

Figure 3:
Top - The signal riding on the voltage rail
Bottom - The locked output of the oscillator
Click on the image for a larger version.
A simple circuit to do this is depicted in Figure 2.  It works by decoupling the power supply via L1 and C2 and heavily "modulating" it, via Q1, with the signal to which the oscillator is to be locked.  For the test circuit seen in Figures 2 and 4, L1 and L2 were 10uH molded chokes, C1 and C2 were 0.1uF capacitors and Q1 was a 2N3904 or similar NPN transistor.

When an external signal is applied to Q1 via C2 (I used +13dBm of RF from a signal generator) Q1 will conduct on the positive excursions of the input waveform, dragging the power supply voltage to the oscillator down with it.  With this simple circuit, Q1 has to dissipate quite a bit of power (the current was about 500 mA), likely because the bypass capacitance within the oscillator is being shunted, causing a significant amount of power to be lost.

This circuit has room for improvements - namely, it's likely that one could better-match the collector impedance of Q1 with the (likely) much lower impedance at the V+ terminal of the oscillator - possibly using a simple matching circuit (L/C, transformer, etc.) to drive it more efficiently.

Figure 4:
The messy test circuit depicted in Figure 2, used to inject the
external signal into the "can" oscillator via the power pin.
Click on the image for a larger version.

Despite its simplicity, the circuit in Figure 2 allowed me to inject an external signal source into the oscillator and, over a relatively narrow frequency range (15 Hz for the 4 MHz oscillator, 60 Hz for the 19.44 MHz oscillator), lock it externally.

The oscillogram in Figure 3 shows the resulting waveforms.  The top trace (red) is the AC-coupled power supply rail of the oscillator showing about 2 volts of RF imposed on it, while the bottom trace shows the square-ish wave output of the oscillator.  Using a dual-trace scope, it was easy to spot when the input and output signals were on the same frequency - and locked - as they did not "slide" past each other.

As you might expect, the phase relationship between the two signals will vary a bit, depending whether one is at the low or high frequency end of the lock range and with changes in amplitude, so this - like about any injection-locking scheme - shouldn't be confused with a true "phase lock".

Is the lock range wide enough?

The "gotcha" here is that these are inexpensive oscillators, likely with 50-100 ppm stability/accuracy ratings meaning that they are going to drift like mad with temperature and applied power supply voltage.  What this also means is that these oscillators are not likely to be "dead on" frequency, anyway.

To a degree, their frequency can be "tuned" by varying the power supply voltage:  A 5-volt rated "can" oscillator will probably work reliably over a 3.5-5.5 volt range, often changing frequency by a hundred Hz or so.  The 19.44 MHz oscillator moved by more than 1.5 kHz across this range, but it never got closer than 2 kHz above its nominal frequency - which is consistent with the often-loose frequency-accuracy specifications of these devices, not to mention temperature effects!
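Putting numbers to this: the 19.44 MHz unit's 2 kHz minimum offset corresponds to roughly 100 ppm - right at the edge of a typical "can" oscillator's tolerance spec, and far wider than the lock ranges measured here:

```python
f_nominal = 19.44e6   # Hz
offset_hz = 2000.0    # closest the oscillator came to nominal (from the text)
ppm_error = offset_hz / f_nominal * 1e6
print(round(ppm_error, 1))  # 102.9 (ppm) - typical of a "100 ppm" part

lock_range_hz = 60.0  # measured power-rail lock range for this oscillator
print(offset_hz > lock_range_hz)  # True: the offset swamps the lock range
```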

If your oscillator is "close enough" to the desired frequency at some voltage - and it is otherwise pretty stable, this may be a viable technique, but other than that, it may just be a curiosity.  If one chooses an oscillator with better frequency stability/tolerance specifications - like a TCXO - this may be viable, but testing would be required to determine if a TCXO's temperature compensation would even work properly if the power supply voltage were varied/modulated with an external signal.

"Enable" pin injection

Figure 5:
Schematic depicting applying an external signal via the
"enable" pin.  The amplitude of the external signal must
have a peak-to-peak voltage that is a significant percentage
of the power supply voltage.
Click on the image for a larger version.
Many of these "can" oscillators have (or may be ordered with) an "enable" pin which turns them on and off - and unlike the power supply pin, this pin typically has fairly low parasitic capacitance, so it can provide a way "in" for the external frequency reference.  Figure 5 shows how this can be done.

For this circuit, resistors Ra and Rb (which may be between 1k and 10k, each) bias the "enable" pin somewhere around the threshold voltage and capacitively couple the signal - in this case, a +13dBm signal from a signal generator which had about 2 volt peak-to-peak swing.  If a logic-level signal is available, one can dispense with the bias resistors and the capacitor and drive it directly.
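For reference, the mid-point bias produced by equal-value Ra and Rb is simply the resistive-divider voltage - a trivial sketch assuming a 5 volt rail and a threshold near half the supply:

```python
def divider_volts(v_supply, r_top, r_bottom):
    """Unloaded output voltage of a simple resistive divider."""
    return v_supply * r_bottom / (r_top + r_bottom)

# Equal 10k resistors (Ra = Rb) center the enable pin at mid-supply,
# letting a ~2 volt peak-peak AC-coupled signal swing through the threshold.
print(divider_volts(5.0, 10_000, 10_000))  # 2.5
```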

Note that some oscillators have a built in pull-up or pull-down resistor which can affect biasing and the selection of resistors should reflect that:  If its specs note that the pin may be left open to enable (or disable) the oscillator, this will certainly be the case.  If a pull-up resistor is present, the value of the corresponding external pull-down resistor will have to be experimentally determined, or "Rb"(in Figure 5) may be made variable using a 10k-100k trimmer potentiometer.

The 19.44 MHz oscillator shown in Figure 1 has such an enable pin and, by injecting the 2 volt peak-peak signal from the external source into it, the oscillator would reliably lock over a 900 Hz range.  Some degree of locking was noted even when the signal was quite low (around 250 mV peak-peak), but the lock range was dramatically reduced.  For the widest lock range it's expected that a swing equal to that of the supply rail would be used.

The precise mechanism by which this works is unknown:  Does the "enable" pin actually turn the oscillator on and off, does it simply gate the output of the oscillator while it continues to run or is it that this signal gets into the onboard circuitry and couples into the oscillator's feedback loop?  I suspect that it is, in most cases, the former as the "enable" pin often reduces power consumption significantly which would explain why it seems to work reasonably well - at least with the oscillators that were tested.

If the oscillator itself is "gated"(e.g. turned on/off) by the "enable" pin, then this is precisely the mechanism that we would want to inject an external signal into the oscillator.  In looking at the output waveform, however, I suspect that the answer to this question isn't that simple:  If it were simple logic gating one would expect to see the output waveform of the oscillator gated - and mixing - with the external signal once the latter was outside the "lock" range - but this was not the case for the oscillator tested.  I suspect that there might be some sort of filtering or debouncing in the gating circuit, but based on the ease by which locking was accomplished using this oscillator, there was clearly enough of the external signal getting into the oscillator portion itself to cause it to lock readily.

As noted previously, while the lock range was about 900 Hz, the oscillator itself was about 2.5 kHz high, anyway, so it could not be brought precisely onto the nominal frequency.  Again, it may be possible to do this with a TCXO equipped with an "enable" pin, but testing would be required for any specific oscillator to determine if this is viable.

"Locked" performance

The testing of spectral purity using either of these methods was only cursorily checked by tuning to the output of the oscillator with a general-coverage receiver and feeding the resulting audio into the Spectran program to see a waterfall display.  This configuration allows both the absolute frequency and the lock range to be measured with reasonable accuracy.

It can also tell us a little bit about spectral purity:  If there was a terrible degradation in phase noise, it would likely show up on the waterfall display - but when solidly locked, no such degradation was visible.

Although it wasn't tested, it's also likely that locking the oscillator - particularly via the "enable" pin - could be used to "clean up" an external source that is somewhat spectrally "dirty", owing to the rather limited lock range and high "Q" of the "can" oscillator.  This is most likely useful for higher-frequency components (e.g. those farther from the carrier than a few kHz) rather than close-in, low-frequency phase noise - a region in which the most inexpensive oscillators are unlikely to offer anything resembling stellar performance, anyway.

Harmonic locking?

One thing that I did not try (because I forgot to do so) was harmonic locking - that is, the injection of a signal that is an integer fraction of the oscillator frequency (e.g. 1 MHz for the 4 MHz oscillator) - perhaps something to try later?

Is this useful for anything?

I had wondered for some time if it would be possible to lock one of these cheap oscillators to an external source, and the answer appears to be "yes".  Unfortunately, most crystal oscillators have accuracy and temperature stability specifications loose enough that their natural frequency variance will exceed the likely lock range - unless one gets a particularly stable and accurate oscillator.

If one presumes that the oscillator to be "tamed" is good enough then yes, it may be practical to lock it to an external source - particularly via the "enable" pin.  In many cases, such oscillators don't have this feature as they need to be active all of the time so it may be necessary to replace it with one that has an "enable" pin - and then one must hope that the replacement will, in fact, be stable/accurate enough and also capable of being locked externally - something that must be tested on the candidate device.

So the answer is a definite "Yes, maybe!"


This page stolen from ka7oei.blogspot.com

[END]


Exploring the "1-930MHz 2W RF Broadband Power Amplifier Module for FM Radio HF VHF Transmission" found on EvilBay


Figure 1:  The amplifier - with heat sink


On EvilBay, you can find a number of sellers of a device described as:

 "1-930MHz 2W RF Broadband Power Amplifier Module for FM Radio HF VHF Transmission".  

This unit has SMA connectors for both input and output and is constructed on a circuit board and heat sink that measures just a bit under 2" square (50mm x 50mm).

As is so often the case with these sorts of things, the sellers likely have no idea what this actually is - and their listings are often sparse on details other than general operating parameters.

In the case of the device depicted in Figure 1, the parameters given in the listing are:

Type: RF Amplifier
Module: RF Broadband Power Amplifier Module
Dimensions: 50*50*15mm (L*W*H)
Working voltage: 12V (DC)
Frequency: 1-930MHz
Working current: 300--400mA (determined by output power)
Type 1: 1-930MHz 2W
Working frequency: 1-930MHz

There are a few "tells" here that this data was simply copied from some source - notably the line beginning with "Type 1", which probably meant something to the original supplier, in the original Chinese, but likely means nothing at all to anyone else.  Unfortunately, you are unlikely to get more information than this from an EvilBay listing - and it hardly counts as detailed technical information about how the device actually works.

What is it, really?

From the picture in Figure 1, it's apparent that there are two active devices, but what are these - and will identifying these devices give a clue as to how one might really want to use one of them?

With a bit of magnification and Google-Foo, I was able to determine the nature of both of the active devices - and reverse-engineer a schematic diagram, below in Figure 2.

Figure 2:  Reverse-engineered schematic diagram and component layout of the amplifier

This amplifier is about as simple as it gets:  A broadband MMIC with approximately 20 dB of gain is coupled into the gate of a VHF/UHF N-channel MOSFET amplifier - which itself has 10-15 dB of gain - with no matching. 

What this means is that it will take just a few milliwatts input to obtain about a watt of RF output across the intended frequency range - the precise amount of drive depending on the frequency, the supply voltage, and the desired output power.
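In round numbers, the required drive follows from simple dB bookkeeping - a sketch assuming 1 watt (+30 dBm) out and a total gain somewhere between 30 and 35 dB (the 20 dB MMIC plus 10-15 dB from the MOSFET):

```python
def dbm_to_mw(dbm):
    """Convert power in dBm to milliwatts."""
    return 10.0 ** (dbm / 10.0)

p_out_dbm = 30.0  # 1 watt output
for gain_db in (30.0, 32.5, 35.0):
    drive_mw = dbm_to_mw(p_out_dbm - gain_db)
    print(f"{gain_db:.1f} dB total gain -> {drive_mw:.2f} mW of drive")
# 30 dB of gain needs 1.00 mW of drive; 35 dB needs just 0.32 mW
```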

A 5 volt regulator (U2) provides about 1.68 volts of gate bias on Q1 while supplying U1 with a stable 5 volt supply (at about 90 milliamps).  With no drive, the total current consumption is likely to be in the area of 130-150mA, but it could exceed 500mA at higher operating voltages and saturated power output levels.

Looking at the active devices:

Taking a step back, let's look at each device a bit closer - starting with U1, the MMIC on the input.  This device is the Qorvo SBB20892 MMIC (datasheet here:  https://www.mouser.com/datasheet/2/412/SBB2089Z_Data_Sheet-1314913.pdf ).

Inspecting this data sheet, we can see that it's rated for operation from 50 to 850 MHz - although these types of devices typically have no problem operating at much lower frequencies (even down to DC), and they can usually operate at quite a bit higher frequencies than specified, albeit with a bit of roll-off in gain and output power capability.  This means that this stage of the amplifier should have no problem covering the stated 930 MHz range - or even higher.

Looking at the output stage, Q1, we see that it's a Mitsubishi RD01MUS2B RF N-channel MOSFET (datasheet here:  https://www.mitsubishielectric.com/semiconductors/content/product/highfrequency/siliconrf/discrete/rd01mus2b.pdf ) which is nominally a 7.2 volt, 1 watt transistor.  This device has an SMD marking code of "KB861".

Right away you'll spot a bit of disparity between the EvilBay listing and the manufacturer's specifications - the former stating 2 watts at 12 volts.  Taking a close look at the specifications in the data sheet we can see that we should easily (and safely) be able to get about a watt out over the range of at least 100 to 930 MHz (and likely down to a few MHz) with a drain voltage of 7.2 volts on this device - perhaps a bit more or less, as there is no attempt at impedance matching on the output of this amplifier.

Looking further at the specifications, you might also note that the maximum drain-source voltage of this transistor is 25 volts.  If it is operated at 12 volts into a highly reactive load, the peak drain voltage can be expected to reach or exceed twice the supply voltage - and this does not take into account the drain current, which has an absolute maximum rating of 600 mA and could also be exceeded.

What we can conclude from this is that operating at 12 volts or greater - particularly under conditions where the load to which the amplifier is connected might be mismatched (e.g. high VSWR) - is probably not a good idea!

The device overall:

The comment on the schematic relating to inductors L3 and L4 on the drain of Q1 should also not escape the reader's attention:  Together, these have a DC resistance of a bit more than an ohm.  With an expected drain current of 300-400 mA in normal operation, one can expect at least a half-volt of drop across these two components - which actually works to our advantage in reducing the effective supply voltage a bit.

Finally, looking closely at the data sheet you'll note that there are graphs showing operation at a 10 volt drain voltage (about 11 volts supply, considering the drop across L3 and L4) with outputs exceeding 2 watts.  If you feel that you really need more than 1 watt - or wish to have a bit of extra headroom for 1 watt operation (e.g. to preserve linearity) - then operating at that voltage (10-10.5 volts) may be possible, with the caution that you may be sacrificing reliability.

Considerations of linearity and stability:

This amplifier will generate significant harmonics, so it should never be connected directly to an antenna!  At a power output of 1 watt, if its second harmonic is about 25 dB down (a reasonable value), that still represents several milliwatts of harmonic energy - easily enough to carry for several miles/kilometers line-of-sight.
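The harmonic-power arithmetic above can be sketched in a few lines of Python (the 1 watt output and 25 dB suppression figures are the example's values, not measurements of this particular module):

```python
from math import log10

def harmonic_power_w(fundamental_w: float, suppression_db: float) -> float:
    """Power remaining in a harmonic that is `suppression_db` below the fundamental."""
    return fundamental_w * 10 ** (-suppression_db / 10)

def dbm(watts: float) -> float:
    """Convert watts to dBm for comparison with receiver-level figures."""
    return 10 * log10(watts * 1000)

p2 = harmonic_power_w(1.0, 25)                    # 1 W output, 2nd harmonic 25 dB down
print(f"{p2 * 1000:.1f} mW ({dbm(p2):.0f} dBm)")  # 3.2 mW (5 dBm)
```

A few milliwatts may not sound like much, but it is comparable to the output of a low-power beacon transmitter - hence the need for a low-pass filter.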

Particularly when this amplifier is operated from a supply of greater than 8 volts, care should be taken that the output is resistive (nominally 50 ohms).  Now this may sound pretty easy as antennas and filters are nominally 50 ohms, but one should consider frequencies other than that on which the amplifier may be operating:  Being an inexpensive device from EvilBay, it's hard to be sure of the quality of the components that one would use to make it operate in a stable manner (capacitors, board layout, inductors) - and since this amplifier has a rather high gain of around 30dB, it may not be unconditionally stable.

Take, for example, this amplifier being used to boost the output of an exciter on the amateur 6 meter band - around 50 MHz.  We should assure that at 50 MHz the load (low-pass filter plus antenna) provides a reasonable match to 50 ohms.  What is not easily knowable with this sort of device is how it will behave at other frequencies:  Below 50 MHz, the match (SWR) will be terrible because the antenna is out of its design range - and above 50 MHz, the SWR will also be terrible, not just because of the antenna but because the low-pass filter itself will start to reflect energy.  In other words, the amplifier sees a good match only at the antenna's design frequency - and a terrible one everywhere else.

While an ideal amplifier wouldn't really care about off-frequency mismatches, a poorly-designed amplifier - or one that has not been designed to be intrinsically stable under all load conditions - might be prone to oscillation at some unknown frequency if it is connected to a load that presents to it just the right conditions that its built-in instability may cause oscillation.

This last point - the possibility of an amplifier oscillating at a frequency other than that at which it is intended to operate - can be difficult to diagnose.  In the worst case, the amplifier will die randomly; in the best case, it will produce lower power than expected and, possibly, spurious outputs related to the mixing of the desired frequency with that at which it is oscillating.  If the frequency of the rogue oscillation is above the cut-off of the low-pass filter, you may not even be able to detect the untoward behavior unless you do a broadband analysis by probing the amplifier's output directly - a process that could, itself, change the results!

This sounds like a lot of conjecture, hassle and trouble - and sometimes it is - but there are a few things that one can do to make the device work more reliably and also detect that something may be amiss.

  • Do not operate it at a higher voltage than needed to obtain the desired output power.  In the case of this device, 1 watt is about all you should reasonably expect in terms of long-term reliability.  Period.
  • If, under certain conditions, you see the power output randomly fluctuating - but the input drive and power supply voltage is constant - you likely have a spurious oscillation occurring within the amplifier.  A redesign of the filtering to change the off-frequency characteristics (e.g. the impedance well above the cut-off frequency of a low-pass filter, for example) may improve things:  Consider the use of a diplexer-type circuit with the "other" port (e.g. that which passes frequencies other than the desired) terminated.
  • A reasonable question would be:  "If I blow up Q1, where can I get another RD01MUS2B transistor to replace it?"  The answer - albeit a bit glib - is to simply buy another of these amplifier modules:  Unless you buy a lot of them at once, it will probably be cheaper to get another amplifier than just that transistor!

 * * *

This page stolen from ka7oei.blogspot.com


[End]


Characterizing spurious (Harmonic) responses of the SDRPlay RSP1a (and other models)


The SDRPlay RSP2pro (left) and RSP1a receivers (right)
The SDRPlay RSP1a is a popular Software Defined Radio (SDR).  This device, connected to and powered by the computer via a USB cable, covers from VLF through UHF and the low microwave frequencies.

This receiver shares a similar internal architecture with devices such as the RTL-SDR dongle and the AirSpy in that an analog frequency converter (mixer) precedes the analog-to-digital converter.  In the case of the SDRPlay, the frequency to which the receiver is tuned is (usually) converted to baseband I/Q signals, with the "center" frequency at zero Hz (DC).1

Note:

For the purposes of this discussion, there is no difference between the RSP1a and the other receivers in the product lineup (e.g. RSPDuo, RSPdx and the discontinued RSP1, RSP2 and RSP2pro) in terms of harmonic response across the 2-30 MHz range:  They all have about the same 12 MHz and 30 MHz cut-off frequencies on their input filtering - the properties that govern this behavior.

Imperfect mixers

By its nature, a frequency mixer is a non-linear device.  Ideally, the two frequencies applied to a mixer would yield just two more - the sum and difference.  For example, if we applied a 5 MHz signal and a 1 MHz signal to a mixer, it would output both the sum of 6 MHz and the difference of 4 MHz - and this is true, but there's more to the story.

In our example - with a real-world mixer, we will also get additional products - including those related to the harmonics of the local oscillator and the applied signal.  Because of this, we will see weaker signals at:

  • 11 MHz (2 * 5 MHz + 1 MHz) 
  • 9 MHz (2 * 5 MHz - 1 MHz) 
  • 7 MHz (5 MHz + 2 * 1 MHz) 
  • 3 MHz (5 MHz - 2 * 1 MHz) 
  • And so on.
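These low-order products can be enumerated with a short sketch.  (This is a simplification:  A real mixer produces each product with a different - and generally decreasing - amplitude, which simple frequency arithmetic cannot show.)

```python
def mixer_products(lo_mhz, sig_mhz, max_total_order=3):
    """Enumerate mixing products m*LO +/- n*sig up to a given total
    order m+n.  Only the frequencies are computed - not the amplitudes."""
    products = set()
    for m in range(1, max_total_order):
        for n in range(1, max_total_order - m + 1):
            products.add(m * lo_mhz + n * sig_mhz)
            products.add(abs(m * lo_mhz - n * sig_mhz))
    return sorted(products)

# 5 MHz LO with a 1 MHz signal:  the desired 4/6 MHz pair plus the
# weaker higher-order spurs listed above.
print(mixer_products(5, 1))  # [3, 4, 6, 7, 9, 11]
```

Raising `max_total_order` reveals the "and so on" - an ever-denser comb of ever-weaker products.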

Typically, these "other" signals will be quite a bit weaker than the original - but they will still be present, possibly at a high enough level to cause issues such as spurious signals - a problem with both receivers and transmitters.  Typically, these are tamed by proper design of the mixer, proper selection of frequencies and careful filtering around the mixer to limit the energy of these "extra" signals.

SDRPlay:  Poor harmonic response suppression on 80 meters and below

ANY receiver will experience spurious responses related to mixing products.  Typically, filtering is employed to remove or minimize such responses, but for a wide-bandwidth receiver such as an SDR this is complicated by the fact that covering wide swaths of spectrum would ideally require a large number of overlapping filters.  An example of a radio - albeit of a different architecture - is the Icom IC-7300, which has nine overlapping band-pass filters covering 160 through 10 meters.  While the '7300 has many filters largely because it is a "direct sampling"2 type of SDR, good filtering in the signal path of any type of receiver - SDR or "HDR" (Hardware Defined Radio - an "old school" analog type) - is always a good idea.

In the case of the RSP1a, this was not done - likely for practical reasons of economics3:  There are just three filters used to cover all of the "HF" amateur bands from 160 through 10 meters:  One covering up to 2 MHz, another covering 2-12 MHz and a third covering 12-30 MHz.  This information is covered in the RSP1a technical information document (https://www.sdrplay.com/wp-content/uploads/2018/01/RSP1A-Technical-Information-R1P1.pdf )

The sensitivity to harmonics was tested with the RSP1a's local oscillator (but not necessarily the virtual receiver) tuned to 3.7 MHz 4.  For reasons related to symmetry, it is odd harmonics that elicit the strongest response, which means that the receiver will respond to signals around (3.7 MHz * 3) = 11.1 MHz.  "Because math", this spurious response is inverted spectrally - which is to say that a signal 100 kHz above 11.1 MHz - at 11.2 MHz - will appear 100 kHz below 3.7 MHz, at 3.6 MHz.

In other words, the response to spurious signals follow this formula:

Apparent signal = Center frequency + ((Center frequency * 3) - Spurious signal)

Where:

  • Center frequency = The frequency to which the local oscillator on the RSP is tuned.  In the example above, this is 3.7 MHz.
  • Spurious signal = The frequency of spurious signal which is approximately 3x the center frequency.  In the example above, this is 11.2 MHz.
  • Apparent signal = Lower frequency where signal shows up.   In the example above, this is 3.6 MHz.
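As a quick sketch, the formula above can be coded directly, using the example's 3.7 MHz tuning and 11.2 MHz spurious signal:

```python
def apparent_signal_mhz(center_mhz: float, spurious_mhz: float) -> float:
    """Frequency at which a signal near 3x the LO appears, per the formula
    above.  Note the spectral inversion:  A spur above 3x the LO frequency
    appears below the tuned frequency."""
    return center_mhz + (center_mhz * 3 - spurious_mhz)

# LO at 3.7 MHz, real signal at 11.2 MHz:
print(round(apparent_signal_mhz(3.7, 11.2), 3))  # 3.6
```

The rounding is only to tidy floating-point residue; the arithmetic itself is exact for the purpose.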

In our example - a tuned frequency of 3.7 MHz - the 3rd harmonic falls within the passband of the RSP1a's built-in 2-12 MHz filter, meaning that the measured response at 11.2 MHz reflects the response of the mixer itself:  The 2-12 MHz filter has little effect on an 11 MHz signal and, according to the RSP documentation, doesn't really "kick in" until north of 13 MHz.

In other words, when tuned in the area around 80 meters, you will also be able to see the strong SWBC (Shortwave Broadcasting) signals of the 25 meter band around 11 MHz.

How bad is it?

Measurements were taken at a number of frequencies and the amount of attenuation is indicated in the table below.  These values are from measurement of a recent-production RSP1a and spot-checking of a second unit using a calibrated signal generator and the "HDSDR" program:

LO Frequency (MHz)   Measured Attenuation at 3X LO   Approx "S" Units
  2.1                21 dB (@  6.3 MHz)               3.5
  2.5                21 dB (@  7.5 MHz)               3.5
  3.0                21 dB (@  9.0 MHz)               3.5
  3.7                21 dB (@ 11.1 MHz)               3.5
  4.1                23 dB (@ 12.3 MHz)               3.8
  4.5                30 dB (@ 13.5 MHz)               5
  5.0                39 dB (@ 15.0 MHz)               6.5
  5.5                54 dB (@ 16.5 MHz)               9
  6.0                54 dB (@ 18.0 MHz)               9
  6.5                66 dB (@ 19.5 MHz)               11
 12.0                21 dB (@ 36.0 MHz)               3.5
 12.5                21 dB (@ 37.5 MHz)               3.5
 13.5                22 dB (@ 40.5 MHz)               3.7
 14.5                26 dB (@ 43.5 MHz)               4.3
 15.5                31 dB (@ 46.5 MHz)               5.2
 16.5                35 dB (@ 49.5 MHz)               5.8
 17.5                39 dB (@ 52.5 MHz)               6.5
 18.5                43 dB (@ 55.5 MHz)               7.2
 19.5                46 dB (@ 58.5 MHz)               7.7
 20.5                50 dB (@ 61.5 MHz)               8.3
 21.5                53 dB (@ 64.5 MHz)               8.8

Interpretation:

  • In the above chart, an "S" unit is based on the IARU standard of 6 dB per S unit, which is reflected in programs like SDRUno, HDSDR and many others.
  • Below the cutoff frequency of the relevant filter (nominally 12 MHz for receive frequencies in the range of 2 to 12 MHz, nominally 30 MHz for receive frequencies in the range of 12 to 30 MHz) the harmonic response is limited to that of the mixer itself, which is 21 dB.
  • We can see that on the 2 to 12 MHz segment, the attenuation doesn't exceed 40 dB (the low end of what I would call "OK, but not great") until one gets above about 5 MHz, and it doesn't reach the "goodish" range (50 dB or more) until north of about 5.5 MHz - which is borne out by the filter response charts published by SDRPlay.
  • On the 12 to 30 MHz band, the filter has practically negligible effect until one gets above about 14 MHz (20 meters); it reaches the "OK, but not great" range by about 18 MHz, and it doesn't really get "goodish" until north of 20.5 MHz.  What this means is that strong 6 meter signals may well appear in the 16.5 to 17.5 MHz range as frequency-inverted representations.
  • If there is a relatively strong signal source in the area of the 3rd harmonic response, it will likely appear at the lower receive frequency where the attenuation of the filter is less than 40 dB or so.  The severity of this response will, of course, depend on the strength of that signal, the amount of attenuation afforded by the filters at that frequency, and the amount of noise and other signals present in the range of the fundamental frequency response.
Based on the above data, we can deduce the following:
  • In the 2-12 MHz range, below approx. 4 MHz, the 12 MHz cut-off of the filter has negligible effect in reducing harmonic response.
    • What this means is that signals from 6-12 MHz will appear more or less unhindered (aside from the 21 dB reduction afforded by the mixer) when the local oscillator of the receiver is tuned between 2 and 4 MHz.
    • The 3rd harmonic response across 2-4 MHz - which is the 6-12 MHz frequency range - can contain quite a few strong signals and noise sources.
  • In the 12-30 MHz range, below about 14 MHz, the 30 MHz cut-off of the filter has negligible effect in reducing harmonic response.
    • Signals from 36-40 MHz will appear with just 21-26 dB attenuation when tuned in the range of 12-13.5 MHz.
    • Fortunately, in most cases there are few signals in the 36-40 MHz range that are likely to be an issue when tuning in the 12-13.5 MHz range.

80 meter example:

Connecting the RSP1a to a known-accurate signal generator set to -40 dBm, the signal level at 3.6 MHz was measured.  Maintaining the same generator level, the generator was then retuned to 11.2 MHz and the resulting signal was measured to be 21 dB (a bit more than 3 "S" units) lower than that at 3.6 MHz.

What this means is that a "20 over S-9" signal at 11.2 MHz will show up as an S-9 signal at 3.6 MHz, and an S-9 signal at 11.2 MHz will appear at around S-6.  In other words, even a "weak-ish" signal at the 3rd harmonic will show up at the lower frequency.
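The S-unit arithmetic in this example can be sketched as follows, using the 6 dB-per-S-unit convention noted above:

```python
S_UNIT_DB = 6  # IARU convention, used by SDRUno, HDSDR and many other programs

def db_to_s_units(db: float) -> float:
    """Express a level difference in S-units at 6 dB per unit."""
    return db / S_UNIT_DB

# The mixer-only attenuation from the table, expressed in S-units:
print(db_to_s_units(21))  # 3.5

# A "20 over S-9" signal at the 3rd-harmonic frequency, knocked down
# 21 dB by the mixer, lands about 1 dB below S-9 - i.e. roughly S-9:
print(20 - 21)  # -1
```

This is why even modest signals near 3x the tuned frequency remain plainly visible on the waterfall.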

 

Filtering is the key:

While these spurious responses may not be too much of a problem for the casual user, additional filtering will be necessary for the RSP1a to perform on par with a modern SDR receiver from one of the major manufacturers.

Unfortunately, the built-in filtering of the RSP1a - lacking octave filters (or similar) - is not sufficient in the 80 meter case mentioned above:  But what about 60 or 40 meters?

The table above answers this question.  In the case of 60 meters - with the receiver tuned to 5.3 MHz - the 3rd harmonic lands at 15.9 MHz.  Based on measurements of the receiver, the response to signals around 15 MHz - which corresponds to the 19 meter Shortwave Broadcast band - will be a bit more than 40 dB down from the 60 meter response, with about 20 dB of that due to the roll-off of the 2-12 MHz filter.  Because this frequency range is inhabited by very strong shortwave broadcasters, they are likely to still be quite audible around 60 meters.

The situation is a bit better for 40 meters where the 3rd harmonic is around the 15 meter band.  There, the 2-12 MHz filter knocks signals down by 50dB or more, putting them about 70dB below the 40 meter response - on par with about any respectable receiver!

What this means is that for amateur bands below 40 meters it is suggested that additional filtering be applied.

The best solution - and recommended for any software-defined radio (or even older "hardware-defined radios") - is a band-pass filter designed for the specific amateur band in question.  This will not only significantly attenuate the harmonic response, but it will also reduce the total amount of RF energy entering the receiver, reducing the probability of overload.  The obvious down-side is that it reduces the flexibility of the receiver:  Unless you change or remove the filter, you won't be able to receive signals well outside its design range.

Another possibility is to add a low-pass filter that is designed to cut off signals above the band of interest.  For example, if you have a filter that cuts off sharply above 8 MHz, you will be able to tune 80-40 meters and get reasonable attenuation of the 3rd harmonic response across this entire frequency range.
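That reasoning can be reduced to a quick check:  An external low-pass filter helps only if it passes the tuned frequency while blocking the 3rd-harmonic response.  (The 8 MHz cut-off is the hypothetical value from the text, and a real filter's skirt rolls off gradually rather than abruptly.)

```python
def lpf_covers_band(tune_mhz: float, lpf_cutoff_mhz: float = 8.0) -> bool:
    """True if an external low-pass filter with the given cut-off passes the
    tuned frequency but blocks its 3rd-harmonic spurious response."""
    return tune_mhz < lpf_cutoff_mhz and tune_mhz * 3 > lpf_cutoff_mhz

for mhz in (1.9, 3.7, 5.3, 7.1, 14.2):   # 160, 80, 60, 40 and 20 meters
    print(mhz, lpf_covers_band(mhz))
```

For 80, 60 and 40 meters the check passes; 160 meters fails only because its 3rd harmonic (5.7 MHz) is below the cut-off (the RSP's own 0-2 MHz filter handles that band anyway), and 20 meters fails because you simply can't tune through an 8 MHz low-pass filter.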

In the case of 160 meters the RSP1a will automatically select the 0-2 MHz low-pass filter and the 3rd harmonic response will be a respectable 50-ish dB down, depending on frequency.

On 20 meters - where the 3rd harmonic is around 42 MHz - the "12-30 MHz" filter will be selected, but the published response of this filter shows that at 42 MHz its attenuation will be quite limited.  Practically speaking, it is unlikely that there will be any signals in this frequency range so "only" 20-30dB of attenuation is unlikely to cause a problem in most cases, but one should be aware of this.

What can be done:

In short, none of the currently-made SDRPlay receivers - by themselves - will offer very good performance in terms of harmonic rejection between 2 and 5 MHz and it will be particularly bad on the 80 meter band where strong 25 meter SWBC signals can appear:  It is interesting that the ARRL review of the RSPdx (Link here) didn't catch this issue.

It is unfortunate that the designers of the SDRPlay receivers did not add at least one additional low-pass filter in the signal path to quash what is a rather strong response in the 2-6 MHz range - particularly on 80 meters, one of the most popular bands.  A low-pass filter with a cut-off frequency of 6 MHz (with attenuation becoming significant above 7 MHz) would ameliorate the harmonic response when tuning across this band.  The problem is made even worse by the fact that even an antenna that isn't particularly resonant at the harmonic-response frequencies (e.g. an 80 meter antenna) will likely do quite a decent job of receiving signals in the 11-12 MHz area.

The only real "fix" for this is to install additional filtering between the SDRPlay receiver and the antenna.  If single-band operation is all that is desired, the best choice will be a band-pass filter designed for the frequency range in question5 - but unless you are dedicating the receiver just for that one band, this isn't really desirable.

A more flexible solution would be to use a low-pass filter.  As we noted above, the 12 MHz roll-off of the built-in (2-12 MHz) filter just doesn't do much at 20 meters, but if we had a filter that cut off, say, at 8 MHz, we could use it for 80, 60 and 40 meters.

The obvious down-side for this is that if you are tuning all over the HF spectrum, you'd have to manually remove or bypass any such filtering when you tuned beyond the range that the added filter would pass.

 

Footnotes:

  1. The receivers mentioned at the beginning of the article (SDRPlay, AirSpy HF, RTLSDR, etc.) have analog-to-digital converters that cover only a portion of the HF spectrum, using a frequency mixer to convert a range of frequencies from the range of interest to a lower frequency, which is then fed into the converter.  Limiting the amount of spectrum being ingested by the receiver - particularly when appropriate filtering is used - can improve performance, reduce cost, and especially reduce the total amount of data, allowing a modest computer (older PC, Raspberry Pi) to be used with it.
  2. A "direct sampling" type of receiver - such as that found in the Icom IC-7300, IC-7610, the KiwiSDR and the RX-888 (when used at HF) and others like them simply "inhale" large swaths of spectrum all at once.  Because the analog-to-digital converter itself has a limited amount of total RF signal power that it can handle, radios like the Icoms have filtering that allow the passage of only the (relatively) small portion of the HF spectrum around that to which the receiver is tuned, reducing the probability of overload from strong signals on frequencies well away from those of interest.  Other direct-sampling receivers such as the KiwiSDR and RX-888 do not have band-specific filtering as they are intended to be able to receive multiple frequencies across the entire HF spectrum at once and as such, much more care is required in implementation to prevent overload/distortion.
  3. In the case of the (currently-produced) RSP receivers, the filtering varies depending on model:  In the case of the RSP1a, it has a band-pass filter that covers 2-12 MHz while others have used just a 12 MHz low-pass - the former being capable of rejecting AM broadcast band (e.g. mediumwave) signals from the input of the receiver when tuned to HF, and the latter not:  Some units additionally have a separate filter that is designed to remove just AM broadcast-band signals.  The situation described in this article - the reception of signals around 11 MHz when tuned to 80 meters - is related to the fact that the 2-12 MHz filter represents a 6:1 frequency range which means that over the lower portion of this spectrum, the 12 MHz cut-off of this filter cannot possibly cut off responses to the third harmonic, hence the issue described here.
  4. If you are using a program like SDRUno, it may not be readily apparent to what frequency the receiver's local oscillator is tuned.  If set to "Zero IF" mode, the local oscillator will be tuned to the same place as the center of the waterfall display - typically indicated by a slight line at the "zero Hz" frequency.  By default, one cannot directly tune the local oscillator ("Zero IF" frequency) in SDRUno.  If you use the "HDSDR" program by I2PHD (et al), you can independently tune the local oscillator and the frequency of the virtual receiver.
  5. SDRPlay receivers are currently in use at a number of WebSDRs around the world as the "acquisition device" (e.g. receiver).  In most cases these receivers - because they are used only on specific amateur bands - are preceded by a band-pass filter for the band they cover, completely eliminating the issue noted in this article.  It was during testing at one of these WebSDRs - a receiver on 80 meters - that, even though the band was "dead" in the middle of the day, signals were being strongly received across the 80 meter band that should not have been there at all - and these were quickly realized to be the result of a harmonic response in the front end.  These responses were then verified and quantified using two other RSP1a receivers during the preparation of this article.

* * *

This page stolen from ka7oei.blogspot.com

[End]


It *is* possible to have an RF-quiet home PV (solar) electric system!


Figure 1:  Half of the array on my garage - the other half is
on the west-facing aspect.
There's a bit of shade in the morning around the end of June,
but it detracts little during the peak solar production
of the day - the hours on either side of "local" noon.
Click on the image for a larger version
For the past several years, an incremental nemesis of amateur radio operation on the HF bands has been solar power - and the cover article of the April 2016 issue of QST magazine, "Can Home Solar Power and Ham Radio Coexist?" (available online HERE), brings this point home.



Personally, I thought that the article was a bit narrow in its scope, with an unsatisfying conclusion (e.g. "The QRM is still there after a lot of effort and expense, but I guess that it's OK") - but this impression is understandable owing to the constraints of the medium (magazine article) and the specific situation faced by the author.

Solar power need not cause QRM:

I can't help but wonder if others that read the article presumed that amateur radio and home solar were incompatible - but I know from personal experience that this is NOT necessarily the case:  There are configurations that will not produce detectable QRM on amateur bands from 160 meters and higher.

Before I continue, let me state a few things important to the context of this article:

  • Expertise in HF radio interference and home solar installations seems to be mutually exclusive - which is to say that you will be hard-pressed to find anyone who is familiar with aspects of both.  This means that within the solar industry itself, you are unlikely to find anyone who can offer useful advice in putting together a system that will not contribute to the crescendo of electrical noise.
  • I have noted that many installers (at least in my area) will strongly pressure their potential customers to use microinverter-based systems - and this was my experience as well:  From the very start of the process, I was adamant that the design of my system would be series-string, using Sunny Boy inverters that were known to me to be RF-quiet.  If your installer will not work with you toward your goals, consider a different company.
  • Designing an "RF-quiet" system as described here may incur a trade-off in available solar production, as the use of microinverters can eke out additional efficiency when faced with issues such as shading and complicated roofs that present a large number of aspects with respect to insolation (e.g. the amount of light energy that can be converted to electricity).  Only by analyzing the proposed systems appropriate for your situation can you reasonably predict the magnitude of this trade-off and whether or not you find it acceptable.
  • What is presented here is my own experience and that of other amateur radio operators with similar PV (PhotoVoltaic) systems.  The scope of this experience is necessarily limited:  When spending tens of thousands of dollars, one will understandably "play it safe" and pick a known-good configuration.
  • I will be the first to admit that there are likely other "safe" (low RF noise) combinations of PV equipment that can be demonstrated to be "clean" in terms of radio frequency interference.  I have anecdotally heard of other configurations and systems but, since I have not looked at them first-hand, I am not willing to make recommendations that could result in the outlay of a large amount of money.  For this reason, please don't ask me a question like "What about inverter model 'X' - does it cause RFI?" as I simply cannot answer from direct experience.

An example system:

The system at my house consists of two series-string Sunny Boy grid-tie inverters:  I can unequivocally state that this system, which has both a SB 5000TL-US-22 (5 kW) and an SB3.8-1SP-US-40 (3.8kW) does not cause any detectable RF interference on any HF frequency or 160 meters - and I have yet to detect any interference on 6 meters, 2 meters or 70cm.  Near the LF and lower MF band (2200 and 630 meters, respectively) some emissions from these inverters can be detected - but none of the switching harmonics (about 16 kHz) land within either of these bands.  

Figure 2: 
One of two inverters in the garage. 
The Ethernet switch (upper right) produces
more RF noise than the inverter!
Click on the image for a larger version
This PV system is very simple:  I have a detached garage with a north-south ridge line meaning that the roof faces east and west.  While this orientation may seem to be less than ideal compared to a south-facing roof, it actually produces equal or greater power during the summer than a south-facing roof - and there are two usable surfaces onto which one can place panels (east and west) whereas one would typically not place any panels on a north-facing roof.  This means that one may be able to put twice as many panels on a symmetrical east-west facing roof than a south-facing roof.


Simple roof configuration can equal low noise:

The "simple" roof also has another advantage:  All panels on the faces are oriented the same and a larger number of panels may simply be wired in series.

This simple fact means that known-quiet series-string inverters may be used and known noise-generating components may be omitted from the system - namely, many models of "microinverters" and optimizers.  Both of these devices - despite being very different in their operation - are installed on a "per panel" basis and able to adjust the overall contribution of each panel to maximize the energy input of the entire solar power system.

Having each panel individually optimized for output power sounds like a good idea - and in most cases it is - but this nicety should be weighed against the project's goals:  Considering that the panels themselves represent a rather small portion of the overall system cost, efficiency losses can often be offset by simply adding more panels.  To be fair, it is not always possible to "add more panels" to make up for lost production - but that limitation must be weighed against a major goal, which is to produce a "noise free" PV system.

The options have changed:

Since the 2016 article was written, the number of options for series-string inverters has significantly increased and the prices have gone down, allowing options to be considered now that may have been dismissed at that time.  Take the article as an example.

From the photographs accompanying the article, there appear to be two different aspects of panels:  A large array consisting of 30 panels, all seeming to face the same direction, and a smaller array of 8(?) panels.  (There also appears to be an array of 4 panels, but let us presume that this is an independent energy system.)

Assuming that each panel is rated for 300 watts (likely higher than a circa-2016 panel) and that one would wish to limit the maximum open-circuit potential to about 450 volts, this implies the use of at least four MPPT circuits:  The 8 panel array and three arrays consisting of 10 series panels, each.  The maximum output of this system would theoretically be about 11.4 kW - but since one can optimistically expect to attain only about 80% of this value in a typical installation the use of an inverter system capable of 10 kW, as stated in the article, is quite reasonable.
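A minimal sketch of the sizing arithmetic above (the 300 watt panel rating and 80% yield are the article's assumptions, and the 42 volt per-panel open-circuit figure is a hypothetical value chosen only to illustrate the string-length check):

```python
PANELS = 30 + 8           # the main array plus the smaller 8-panel array
PANEL_WATTS = 300         # assumed rating - likely higher than a circa-2016 panel
DERATE = 0.80             # optimistic fraction of nameplate actually attained

theoretical_kw = PANELS * PANEL_WATTS / 1000
print(theoretical_kw, round(theoretical_kw * DERATE, 2))  # 11.4 9.12

MAX_STRING_VOC = 450      # open-circuit voltage limit chosen in the text
PANEL_VOC = 42            # hypothetical per-panel open-circuit voltage
print(MAX_STRING_VOC // PANEL_VOC)  # 10
```

The ~9.1 kW expected yield is why a 10 kW inverter system is a reasonable fit, and the 10-panel string limit is why the 30-panel array divides into three series strings.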

Back in 2016, it would be reasonable to have a 10kW series string inverter with two MPPT inputs representing two separate inputs that could be independently optimized.  If such an inverter were used, this would mean that one input would have just 8 panels and the other would have all 30 panels on the main array - not particularly desirable in terms of balancing.  While all 30 of the panels in the larger array would ostensibly be producing the same output, snow, leaves and shading might cause the loss of efficiency should certain parts be thus impaired.

Having already ruled out the optimizing of each panel independently in the interest of having a "known-quiet" system, we might want to split things up a bit.  As an example, a single 10kW inverter with two MPPT inputs could be replaced with a pair of 5 kW inverters, each with 3 MPPT inputs and having a total of six independent DC inputs allowing the 8 panels of the isolated roof to be optimized together and the remaining 30 panels being divided into 5 arrays of about 6 panels, each.

The 2016 article did not mention the price of the system, but a reasonable estimate for that time would be around US$35000 - and it was mentioned, in passing, that the cost of RFI mitigation might have been about 10% of the total system cost, implying about $3500 - roughly the cost of two Sunny Boy SB5.0 5 kW series-string inverters, each with three MPPT inputs.

Replicating success:

At least two other local amateur radio operators used the same recipe for low-noise PV systems:  Series-string Sunny Boy grid-tie inverters - specifically the SB 3800TL, SB 5000TL and SB3.8s.  In none of these cases could any RFI attributable to the inverter be detected - and the only noise found at all was with a portable shortwave receiver held within a few inches of the display.

What is known not to be quiet:

From personal experience I know for certain that microinverters such as the older Enphase M190 can be disastrous for HF, VHF and UHF reception.  As noted in the QST article, the Enphase power optimizers (model number not mentioned) also caused QRM.

Figure 3:
The two Tesla Powerwalls, gateway and electrical sub-
panels for the system located remotely on the east wall
of the house.
Click on the image for a larger version

Additionally, it has been observed that the Solaredge inverters - particularly coupled with optimizers - have caused tremendous radio frequency interference:  The aforementioned April, 2016 QST article about solar RFI deals with this very combination.

It probably won't work in all cases.

Compared to some installations that I have seen, my system - and the one in the 2016 article - are very simple cases, and there are a number of practical limitations, which include:

  • A "minimum" array size limitation.  Taking the Sunny Boy SB5.0 as an example, there is a 90 volt minimum input which means that one would (very conservatively) want at least four 60-cell panels on each circuit.  This limitation may affect what areas on a roof may be candidates for placement of solar panels, reducing the total system capacity as compared to what might be possible with individually-optimized panels.
  • Systems with complicated shading.  If there are a number of trees - or even antennas and structures - portions of sub-strings may be shaded, reducing output, and compared to individually-optimized panels, series strings are at a disadvantage here - but careful selection of sub-string geometry can help.  For example, if a tower shades a series of panels during the period of highest production, placing all of those panels on one particular string can confine the degradation - but this sort of design consideration will require careful analysis of each situation.

Final words:

The design, configuration and layout of a home (or any) PV system is more complicated than depicted here, and any system being considered would have to take such factors into account.  While I am certain that there are other ways to make an "RF Quiet" PV system, this article was intended to be limited to configurations and equipment with which I have direct experience.

Again, it is unlikely that you will find a "solar professional" who thoroughly understands RFI issues and knows which types of equipment are RF-quiet, so it is up to you, as the potential recipient of QRM, to do the research.

Other articles at this blog on related topics:


This page stolen from ka7oei.blogspot.com


[End]

Modifying an "O2-Cool" battery fan to (also) run from 12 volts


A blog posting about a fan?  Really?

Why not!

Figure 1:
The modified fan on my cluttered workbench, running
from 13 volts.
The external DC input plug is visible on the lower left.
Click on the image for a larger version.

This blog post is less about a fan and more an example of using a low-cost buck-type converter to efficiently power a device intended for a lower voltage than might be available - in this case, a device (the fan) that expects 3 volts.  In many cases, "12" volts (which may be anything from 10 to 15 volts) will be available from an existing power source (battery, vehicle, power supply) and it would be nice to be able to run everything from that.

Background

Several years ago I picked up a 5" battery-operated DC fan branded "O2 Cool" that has come in handy occasionally when I needed a bit of airflow on a hot day.  While self-contained - using two "D" cells - it can't run from a common external power source such as 12 volts.

Getting 3 volts

Since this fan uses 3 volts, an obvious means of powering it from 12 volts would be to simply add a dropping resistor - but I wasn't really a fan of this idea (pun intended!) as it would be very wasteful of power, and doing so would effectively defeat the "high/low" speed switch - which itself switches in a 2.2 ohm resistor.

The problem is that the fan itself pulls 300-400 mA on high speed.  If I were to drop the voltage resistively from 12 volts (e.g. a 9 volt drop) and if we assume a 300mA current, we would need to add (9/0.3 = ) 30 ohms of resistance.  The "low" switch inserts a 2.2 ohm resistor and adding this amount to 30 ohms would result in a barely noticeable difference in speed, effectively turning it into a single-speed fan.
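The arithmetic above can be double-checked in a couple of lines - working in millivolts and milliamps keeps the numbers tidy:

```python
# Checking the resistive-dropper arithmetic above (12 V in, 3 V fan,
# 300 mA on high speed - the figures given in the text).
r_drop = (12_000 - 3_000) / 300             # mV / mA -> 30 ohms

# The "low" switch adds only 2.2 ohms on top of that, so the
# high/low speed difference nearly vanishes:
low_speed_fraction = 2.2 / (r_drop + 2.2)   # about 0.07 - a ~7% change
```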

Fortunately, there's an answer:  An inexpensive buck converter board.  The board that I picked - based on the MP1584 chip - is plentiful on both EvilBay and Amazon, typically for less than US$2 each.  These operate at a switching frequency of about 1 MHz and aren't terribly prone to causing radio interference, having also been used to power 5 volt radios from 12 volts without issues.

These buck converters can handle as much as 24 volts on the input and up to 3 amps - more than enough for our purpose - and can be adjusted to output about any voltage that is at least 4 volts lower than the input - including the nominal 3 volts that we need for the fan.

An additional advantage is the efficiency of this voltage conversion.  These devices are typically 80% efficient or better, meaning that our 300 mA at 3 volts (about 0.9 watts of power) would translate to less than 100 mA at 12 volts (a bit more than a watt).  Contrast this with our hypothetical resistive dropper:  We would be burning nearly 3 watts in the 30 ohm resistor alone!
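Putting numbers to that comparison - a sketch using the text's figures, with the 80% efficiency as the stated assumption:

```python
# Power budget: ~80%-efficient buck converter vs. the hypothetical
# 30-ohm resistive dropper, using the figures from the text.
i_fan = 0.3                         # amps drawn by the fan on high speed
p_fan = 3.0 * i_fan                 # ~0.9 W actually delivered to the fan
p_buck_in = p_fan / 0.80            # total draw at 80% efficiency
i_buck_in = p_buck_in / 12.0        # input current from the 12 V source
p_resistor = (12.0 - 3.0) * i_fan   # heat wasted in the 30-ohm dropper
# i_buck_in is under 100 mA; p_resistor is nearly 3 W of pure waste.
```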

Implementation

One of my goals was to retain the ability of this fan to run at 3 volts as it can still be convenient to have this thing run stand-alone from internal power.  Perhaps overkill, but to do this I implemented a simple circuit using a small relay to switch to the buck converter when external power was present and internal power when it was not, rather than parallel the buck converter across the battery.

If I never intended to use the internal "D" cells ever again I would have dispensed with the relay entirely and not needed to make the slight modifications to the switch board mentioned below.  In this case I would have had plenty of room in the case and freedom to place the components wherever I wished.  In lieu of the ballast of the battery to hold the fan down and stable, I would have placed some weight in the case (some bolts, nuts, random hardware) to prevent it from tipping over.

The diagram of this circuitry is shown below:

Figure 2:
Diagram of the finished/modified fan.
On the left, J1 is the center-positive coaxial power connector with diode D1 and self-resetting
thermal fuse F1 to protect against reverse polarity.  The relay selects the source of power.
Click on the image for a larger version.

The original parts are the High/Low switch, the battery and the fan itself on the right side of the schematic, with the added circuits being the jack (J1), the self-resetting fuse (F1), D1, R1, the buck converter and the relay (RLY).

How it works:

When no external power is applied, the relay (RLY) is de-energized and via the "NC"(Normally-Closed) contacts, the battery is connected to the High/Low switch and everything operates as it originally did.

External power is applied via "J1", a coaxial power jack wired with the center pin positive:  The connector that I used happens to have a 2.5mm diameter center pin and expects an outer shell diameter of 5.5mm.  There's nothing special about this jack except that I happened to have it on hand.

When power is applied, the relay is energized and the switch is disconnected from the battery but is now connected, via the "NO"(Normally Open) contacts, to the OUT+ terminal of the buck converter.  

Ideally, a small 12 volt relay would be used, but the smallest relay that I found in my junk box was a 5 volt unit, requiring that the coil voltage be dropped.  Measuring the relay coil's resistance as 160 ohms, I knew that it required about 30 mA (5/160 ≈ 0.03) and that from 12 volts we'd need to drop (12 - 5 =) 7 volts.  The resistance needed to drop 7 volts at that current is therefore (7/0.03 ≈) 233 ohms - but since I was more likely to operate it from closer to 13 volts much of the time, I chose the next higher standard value of resistance, 270 ohms, to put in series for R1.
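The same arithmetic, worked exactly - note that rounding the coil current up to 30 mA is what gives 233 ohms; the raw numbers give about 224, and either way 270 ohms leaves plenty of coil voltage:

```python
# Dropping a 12-13 V supply down to a 5 V relay coil measured at 160 ohms.
i_coil = 5.0 / 160.0                      # 31.25 mA (rounds to ~30 mA)
r_ideal = (12.0 - 5.0) / i_coil           # 224 ohms for a 12 V supply
# With the chosen 270-ohm standard value and a ~13 V supply, the
# coil still sees enough voltage to pull in reliably:
v_coil = 13.0 * 160.0 / (160.0 + 270.0)   # about 4.8 V across the coil
```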

Figure 3:
Modification of the switch board.  The button is
the positive battery terminal and traces are cut to
isolate it to allow relay switching.
Click on the image for a larger version.
The diode D1 is a standard 1 amp diode - I used a 1N4003 as it was the first thing that I found in my parts bin, but almost any diode rated for 1 amp or greater could be used instead.  Placed reverse-biased across the input of the buck converter, it will conduct if the voltage is accidentally reversed, causing the self-resetting thermal fuse F1 to "blow" and protect the converter.  I selected a thermal fuse rated at several times the expected operating current - a device that would handle 500-800 mA before it would open.

Modification to the switch board

The High/Low switch board also houses the positive battery contact, but since we must disconnect the battery when running from external power, a slight modification is required:  A few traces were cut and a jumper wire added to isolate the tab that connects to the positive end of the battery, as seen in Figure 3.

Figure 4:
The top of the battery board. The
connection to the Batt+ is made by soldering to
the tab.
Click on the image for a larger version.
Near the top of the photo in Figure 3 we see that the end of the 2.2 ohm resistor has been separated from the battery "+" connector (the round portion) and also along the bottom edge where it connects to the switch.  Our added jumper then connects the resistor to the far end of the switch where the trace used to go, and the yellow wire goes off to the "common" contact of the relay.

In Figure 4 we can see the top of the board with the 2.2 ohm resistor - but we also see the wire (white and green) that connects to one of the tabs for the Battery + button on the bottom of the board:  The wire was connected on this side to keep it out of the way of the round battery tab and the "battery +" connection.

The mechanical parts

For a modification like this, there's no need to make a circuit board - or even use prototyping boards.  Because we are cramming extra components into an existing box, we have to be a bit clever about where we put things, as we have only limited choices.

Figure 5:
Getting ready to install the connector after
a session of drilling and filing.
Click on the image for a larger version.
In the case of the coaxial power connector, there was only one real choice for its location:  On the side opposite the power switch, near the front, because if it were placed anywhere else it would interfere with the battery or with the fan itself as the case was opened.

Figure 5 shows the location of this connector.  Inside the box, it is located between two bosses and there is just enough room to mount it.  To mount it, small holes were drilled into the case at the corners of the connector and a sharp pair of flush-cut diagonal nippers was used to open a hole.  From there it was a matter of filing and checking until the dimensions of the hole afforded a snug fit for the connector.

Figure 6:
A close-up of the buck converter board with the
attached wires and BATT- spring terminal.
The tiny voltage adjustment potentiometer is
visible near the upper-left corner of the board.
Click on the image for a larger version.
Wires were soldered to the connector before it was pressed into the hole, and to hold it in place I used "Shoe Goo" - a rubber adhesive - as I have had good luck with it in terms of adhesion:  I could have used cyanoacrylate ("Super" glue) or epoxy, but I have found that those bonds tend to be more brittle with rapid changes of temperature, shock or - most applicable here - flexing, something that Shoe Goo is made to tolerate.

Because this jack is next to the battery minus (-) connector, a short wire was connected directly to it, and another wire was run to the location - in the adjacent portion of the case - where the buck converter board would be placed.

Figure 6 shows the buck converter board itself in front of the cavity in which it will be placed, next to the negative battery "spring" connector.  Diode D1 is soldered on the back side of this board and along the right edge, the yellow self-resetting fuse is visible.  Like everything else the relay was wired with flying leads as well, with resistor R1 being placed at the relay for convenience.

Figure 7:
The relay, wired up with the flying leads.
Click on the image for a larger version.

Figure 7 shows the wiring of the relay.  Again, this was chosen for its size - but any SPDT relay that will fit in the gap and not interfere mechanically with the battery should do the job.

The red wire - connected to the resistor - comes from the positive connector of the jack and the "IN+" of the buck converter board; the orange wire is the common connection of the High/Low switch; the white/violet comes from the "OUT+" of the buck converter and goes to the N.O. (Normally Open) contact on the relay; the white/green goes to the N.C. (Normally Closed) relay contact; and the black is the negative lead attached to the coil.

Everything in its place

Figure 8 shows the internals of the fan with the added circuitry.  Shoe Goo was again employed to hold the buck converter board and the relay in place while the wires were carefully tucked into rails that look as though they were intended for this!

Now it was time to test it out:  I connected a bench power supply to the coaxial connector, set the voltage to 10 volts - enough to reliably pull in the relay - and set the fan to low speed.  At this point I adjusted the (tiny!) potentiometer on the buck converter board for an output of 3.2 volts - about that which could be expected from a very fresh pair of "D" cells.

Figure 8:
Everything wired and in its final locations.  On the far left is
the switch board.  To the left of the hinge is the relay with the
buck converter on the right side of the hinge.  The jack and
negative battery terminal is on the far right of the case.
Click on the image for a larger version.
The result was a constant fan speed as I varied the bench supply from 9 to 18 volts, indicating that the buck converter was doing its job.

The only thing left to do was to make a power cord to keep with the fan.  As is my wont, I tend to use Anderson Power Pole connectors for my 12 volt connections and I did so here.

As I also tend to do, I attached two sets of connectors to the end of the power cord - the idea being that I would not "hog" DC power connections and would leave somewhere to plug something else in.  While the power cord for the fan itself was just 22 gauge wire, I used heavier wire (#14 AWG) between the two Anderson connectors so that I could still run high-current devices through them.

* * *

Does it work?

Of course it does - it's a fan!

The relay switches over at about 8.5 volts making the useful voltage range via the external connector between 9 and 16 volts - perfect for use with an ostensibly "12 volt" system where the actual voltage can vary between 10 and 14 volts, depending on the battery chemistry and type.

Figure 9:
The fan, folded up with power cord.
The two connectors and short section of heavy
conductor can be just seen.
Click on the image for a larger version.
Without the weight of the two "D" batteries, the balance of the fan is slightly precarious and it is prone to tip forward.  This could be fixed by leaving batteries in the unit, but that is not desirable for long-term storage, as leakage is the likely result.  Alternatively, one may place some ballast in the battery compartment (a large bolt wrapped in insulation, a rag, paper towel, etc.) or simply place something (perhaps a rock) on top.  In practice, since the fan is typically placed on a desktop, it is often tilted slightly upward, which shifts the center of gravity in our favor - and this, plus the thrust from the airflow, prevents tipping.




[End]


A solid state replacement for an old radio's "vibrator" (Wards Airline 62-345)


Figure 1:
The front of the Wards Airline 62-345 with its rather
distinctive "telephone dial" tuning dial.
It's powered up and running from 12 volts!
Click on the image for a larger version.
Quite some time ago - a bit more than a decade - a friend of mine came to me with an old "farm" radio - a Wards Airline 62-345.  This radio - from the 1930s - was designed to run from a 6 volt, positive-ground battery system such as one might find in tractors and cars of that vintage.

How high voltage was made from low voltage DC in the 30's

As the technology of the time dictated, this radio has what's called a "vibrator" inside - essentially a glorified buzzer - that is used as a voltage chopper along with a transformer to convert the 6 volts from the battery to the 130-150 volts needed for the plates of the tubes within.  Not only did this vibrator do the chopping for the high voltage, but it also performed the duty of synchronously rectifying the AC waveform from the transformer, as the pulses from it would naturally be in sync with the motion of the moving reed - briefly connecting the output of the transformer to the input of the high voltage DC supply when the voltage waveform from it was at the correct polarity.

These devices, as you would expect, don't have a particularly long lifetime as they are constantly buzzing, making and breaking electrical contact and causing a small bit of arcing - something that will inevitably wear them out.  Even if the contacts were in good shape, the many decades of time that have passed will surely cause these contacts to become oxidized - particularly since these devices are in rubber-sealed cans (to minimize noise and vibration) and the out-gassing of these materials is likely of no help in their preservation.

Figure 2:
The chassis of the radio.  The vibrator is in its original
can in the far right corner.
Click on the image for a larger version.
Such was the case with this radio.  Often, the judicious application of percussive repair (e.g. whacking with a screwdriver) can get them going and if the contacts are just oxidized, they will often clean themselves and work again - at least for a while.  In this case, no amount of whacking seemed to result in reliable operation, so a modern, solid-state approach was needed.

The solid-state replacement

As mentioned earlier, the job of the vibrator was to produce a chopped DC waveform, apply it to a transformer for "upping" the voltage and then use a separate set of contacts to perform synchronous rectification - and our solid-state replacement would need to do just that.  That last part - rectification - was easy:  Just two, modern diodes would do the job - but chopping the DC would require a bit more circuitry.

The owner of this radio also had a few other things in mind:  He converted it from 6 volts, positive ground, to 12 volts, negative ground, so that it could be readily operated from this more common power scheme.  The change to 12 volt filaments required a bit of work, but since all of the tubes were indirectly heated, the filament supply could be rearranged - though some tubes had to be changed to accommodate different filament voltages and currents as follows:

  • Oscillator and detector:  This was originally a 6D8 (6.3v @ 150mA) and it was replaced with a 6A8 (6.3V @ 300mA).  Other than filament current, these tubes are more or less the same.
  • IF Amplifier: The original 6S7 (6.3v @ 150mA) was retained.
  • 2nd Detector/AVC/1st Audio:  The original 6T7 (6.3V @ 150mA) was retained.
  • AF Output:  The original 1F5 (2.0v @ 150mA) was replaced with a 6K6 (6.3v @ 400mA).  The latter is a pentode, requiring a bit of rewiring and rebiasing to replace the original triode.
  • Magic Eye tube: The original 6N5 (6.3v @ 150mA) was replaced with a 6E5 (6.3v @ 300mA) - which is also a bit more sensitive than the 6N5, giving a bit more deflection.

The 6T7 (150mA), 6A8 (300mA) and the #47 dial lamp (6.3v @ 150mA) are wired in parallel on the low side with one end of the filament grounded while the 6K6 (400mA), 6S7 (150mA) and 6E5 (300mA) are wired in parallel on the high side with one end of the filament connected to +12 volts.  You might notice a current imbalance here (600mA on the low side with 850mA on the high side) but this is taken care of with the addition of 30 ohms of resistance between the midpoint of the filament string and ground to sink about 200mA getting us "close enough".
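A quick tally of the filament currents described above shows why 30 ohms at the midpoint brings things into balance:

```python
# Filament-string balance check, currents in mA (from the tube list above).
low_side  = 150 + 300 + 150   # 6T7 + 6A8 + #47 dial lamp = 600 mA
high_side = 400 + 150 + 300   # 6K6 + 6S7 + 6E5 = 850 mA
excess    = high_side - low_side   # 250 mA more on the high side
# The 30-ohm resistor from the ~6 V midpoint to ground sinks roughly:
i_balance = 6.0 / 30.0 * 1000      # 200 mA - "close enough"
```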

He also did some additional rebiasing and other minor modifications - particularly for the rewiring of the AF output from the original 1F5 to a 6K6, swapping a triode for a pentode - which was then wired as a triode.  The total current consumption of the radio at 13 volts is 1.6 amps - a bit more than half of that being the filament and pilot lamp circuits - meaning that about 10 watts of power is being used/converted by the vibrator supply and consumed by the idle current of the audio output and other tubes.

The other issue with the 6 to 12 volt conversion is that of the primary of the high voltage transformer:  This transformer is center-tapped with that connection going to the "hot" side of the battery (which was originally at -6 volts) - but what this really means is that there's about 12 volts from end-to-end on the transformer at any instant.  We can deal with this difference simply by driving the transformer differently:  Rather than having the center tap "hot" with the DC voltage and alternatively grounding one end or the other as the vibrator did we can simply disconnect the transformer's center tap altogether and alternately apply 12 volts to either end, reversing the connection electronically.

This feat is done using an "H" bridge - an array of four transistors that will do just what we need when driven properly:  Apply 12 volts to one side and ground the other - or flip that around, reversing the polarity.

Consider the schematic below:

Figure 3:
Solid state equivalent of a vibrator supply.  This version uses an "H" bridge, suitable for
the conversion of a 6 volt radio to 12 volt operation as detailed in the text.
Click on the diagram for a larger version.

This diagram shows a fairly simple circuit.  For the oscillator we are using the venerable CD4011 quad CMOS NAND gate with the first two sections wired to produce a square wave with a frequency somewhere in the 90-150 Hz region - the precise value not being at all critical.  The other two sections (U1c and U1d) take the square wave and produce two versions, inverted from each other.
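The schematic values for the oscillator's timing components aren't given in the text, so the R and C below are purely hypothetical - but the common rule-of-thumb for a two-gate CMOS astable, f ≈ 1/(2.2·R·C), shows how easily values land in the stated 90-150 Hz region:

```python
# Hypothetical timing values for a two-gate CD4011 astable oscillator.
# f ~= 1 / (2.2 * R * C) is the usual rule-of-thumb approximation;
# neither R nor C here is taken from the actual schematic.
R = 100e3      # ohms (assumed)
C = 0.047e-6   # farads (assumed)
f = 1.0 / (2.2 * R * C)   # works out to roughly 97 Hz
```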

Figure 4:
The top (component side) of the circuit.  This is built on a
piece of phenolic prototype board.
Click on the image for a larger version.
The section of interest is the "H" bridge consisting of transistors Q1 through Q4, wired as two sets of complementary-pair Darlington transistors.  Here's how it works:

  • Let us say that the output of U1c is high.  This causes the output of U1d to be low as it's wired as a logic inverter.
  • The output of U1c being high will cause the top transistor (Q1 - a PNP Darlington) to be turned OFF, but at the same time the bottom transistor of this pair, Q2, will be turned ON, causing the connection marked "PIN 1" to be grounded.
  • At the output of U1d - being low - we see that the bottom of this pair of transistors, Q4, is turned OFF, but the top transistor Q3 is turned ON causing V+ (12 volts) to appear at the connection marked "PIN 5".
  • In this way, the low-voltage primary of the transformer has 12 volts across it.
  • A moment later - because of the oscillator - the output of U1c goes low:  This turns off Q2 and turns on Q1 - and since this also causes the output of U1d to go high this, in turn, turns off Q4 and turns on Q3.  All of this causes "PIN 5" to now be grounded and "PIN 1" to be connected to V+ - thus applying the full 12 volts to the transformer in reverse polarity.
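The switching sequence in the list above can be condensed into a small truth-table sketch.  The transistor names follow the schematic; only the on/off logic is modeled, not the actual biasing:

```python
# Logic-level model of the H-bridge switching described above.
def h_bridge(u1c_high):
    """Return the state of PIN 1 and PIN 5 for a given U1c output."""
    u1d_high = not u1c_high           # U1d is wired as an inverter
    q2_on = u1c_high                  # NPN Q2 conducts when driven high
    q3_on = not u1d_high              # PNP Q3 conducts when driven low
    pin1 = "GND" if q2_on else "V+"   # Q1 (PNP) takes over when Q2 is off
    pin5 = "V+" if q3_on else "GND"   # Q4 (NPN) takes over when Q3 is off
    return pin1, pin5

# One half-cycle, then the other - the polarity across the
# transformer primary reverses:
half_cycle_a = h_bridge(True)    # ('GND', 'V+')
half_cycle_b = h_bridge(False)   # ('V+', 'GND')
```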

Also shown are D1 and D2, the solid-state replacements for the synchronous rectifier of the original vibrator.  While this could be a pair of high-voltage diodes (>=400 volts) we simply used half of a full-wave bridge rectifier from a junked AC-powered switching supply.  Finally, resistor R3 and capacitor C2 form a filter to keep switching noise and high-voltage spikes out of the power supply of U1 to prevent its destruction - a sensible precaution!

Now some of you might be concerned about "shoot through" - the phenomenon when both the "upper" transistors (Q1, Q3) might be on - if only for an instant - at the same time as the "lower" transistors (Q2, Q4) as the switching is done.  While this may happen to a small extent, it has negligible effect:  This circuit is efficient enough that no heat sinking is required on transistors Q1-Q4 and they get only barely warm at all.  Were I to build it again I might consider ways to minimize shoot-through, but this would come at the expense of simplicity which, itself, is a virtue - and since this circuit works just fine, would probably be not worth the effort.

Figure 5:
The bottom (wired side) of the circuit with flying leads
connecting to the original base socket.
Click on the image for a larger version.

These days one might consider building this same type of circuit using MOSFETs instead of Darlington transistors (e.g. P-channel for Q1 and Q3, N-channel for Q2 and Q4) and this should work fine - but the Darlington transistors were on hand at the time that this circuit was built and are very easily driven by U1 - and the bipolar transistors are, at least in this case, arguably more rugged than MOSFETs would be, particularly since there was no need to include a "snubber" network to suppress switching transients.  It's also worth noting that while standard MOSFETs would work fine for a 12 volt supply, you'd have to be sure to select "low gate threshold" devices to work efficiently at 6 volts or lower - not really an issue with the bipolar Darlington transistors shown here.

This circuit is simple enough that it was wired onto a piece of phenolic prototyping board, snapped down to a size that will nicely fit into the original can that housed the vibrator.  To complete the construction, the top of the can - which was originally removed by careful filing and prying - was glued into its base using "shoe goo" - a rubber adhesive - keeping the board protected, but also allowing it to be easily disassembled in the future should modification/repair be necessary.

To be sure, the Internet is lousy with this same sort of circuit, but this version has worked very well.

What about the center tap version of the solid state vibrator?

You might ask yourself:  "What if we don't want to rewire a 6 volt radio for 12 volts?"  As noted previously, the boost transformer in the radio had its center tap connected to the "hot" side - which, in this case, would have been the negative terminal (because many vehicles had 6 volt, positive-ground electrical systems at the time).  This circuit could easily be modified for that:  You'd need only "half" an "H" bridge, and the resistors driving the transistors would be changed to a lower value - perhaps 2.2k.  Whether the radio was positive ground or negative ground, and whether the center tap was grounded or "hot", would dictate whether you need the PNP or NPN half of the H-bridge.

(If you have a specific need, feel free to contact me by leaving a comment.)

* * * 

[End]

 

Improving the thermal management of the RX-888 (Mk2)


Figure 1:
The RX888 showing the "top" and RF connectors.  While
the heat sinks attached to the sides are visible, the large one
on the "bottom" plate are not.
Click on the image for a larger version.
The RX-888 Mk2 SDR is a USB3-based software-defined receiver that, unlike many others, is JUST an analog-to-digital converter (with a bit of low-pass filtering and adjustable attenuation and amplification) coupled to a USB 3 PHY chip.  With a programmable sample rate and a 65-ish MHz low-pass filter, it is capable of simultaneously inhaling the entire spectrum from a few tens of kHz to about 60 MHz when run at a sample rate of 130 Msps - a rate which pretty much "maxes out" the USB 3 interface.

(Note:  There is also a frequency converter on board which will take up to a 10 MHz swath of spectrum between about 30 and 1800 MHz and shift it to a lower frequency within range of the A/D converter - but that's not part of this discussion.)

The purpose of this post is to discuss the thermal management of the RX-888 Mk2 which, in two words, can be described as  "marginal" and "inconsistent".

Please note:

Despite the impression that the reader might get about the RX-888 (Mk2)'s thermal design and potential reliability, I would still consider it to be an excellent device at a good price - warts and all.

Its performance is quite good, and since it lacks the FPGA that many other direct-sampling SDRs use, it is quite "future proof" in the sense that support for this receiver - and others like it that will no doubt appear soon - will be based on code running on the host computer (typically a PC or SBC) rather than on an internal FPGA that requires specialized tools and knowledge for development and is limited by its own capacity.

If you think that an FPGA is needed, consider this:  For a few "virtual" receivers using "conventional" DSP techniques (e.g. HDSDR, SDR-Radio, etc.) a moderate Intel i7 is sufficient, and if using an optimized signal processing program like ka9q-radio along with a modest Intel i5, hundreds of virtual receivers covering the entire HF spectrum can be managed - but these are topics for another discussion.

In other words:  If you need a fairly simple, modestly-priced device to receive multiple RF channels, it is well worth getting an RX-888 (Mk2) and performing some simple modifications to improve its durability.  We can hope that future versions of this - and similar devices - will take these observations into account and produce even better hardware.

What's the problem?

There are scattered anecdotal reports of RX-888s (both the original and Mk2) simply "dying" after some period of time.  For most of these reports there are few details other than comments to this effect in various forums (e.g. little detailed analysis), but this was apparently enough of a problem with the original version of the RX-888 that "improved" thermal management is one of the features its seller notes for the Mk2.  (I do not have an original RX-888, but I would expect that the same general techniques could be applied to it as well.)

In short, here are a few comments regarding the thermal management of the RX-888 Mk2:

  • DO NOT run it outside its case.  There is a compressible thermal pad between the exposed metal pad below the A/D converter and the case that is intended to transfer heat to the case; without this in place, the A/D converter and surrounding components can exceed 100C at moderate ambient temperatures.  If you plan to shuck the case, you should be aware of this and make appropriate arrangements to draw away heat by similar means. 

Figure 2:
Showing the paper double-sided "sticky tape" used to mount
the heat sinks.  Despite improper materials, these work "less
badly" than expected, but it's best to re-attach them properly.
Click on the image for a larger version.

  • The heat sinks are held on by double-sided tape.  The heat sink on the A/D converter appears to be attached with some sort of thermal tape like that seen in Raspberry Pi heat sink kits, but those on the exterior of the case (one on each side, another on the top) are held on with standard, paper-based double-sided tape:  People have reported these falling off with handling.  Additionally, because both the case and heat sinks are extruded, their surfaces are not flat:  All of the RX-888 (Mk2) units that I have had a gap between the heat sink and the case through which a sheet of paper could be slid, meaning that the heat sinks should be flattened a bit and/or attached using a material that will work as a thermally-conductive void filler.
  • The thermal pad may not be adequate.  Unless the small-ish thermal pad is placed precisely in its correct location, it will not be effective in its thermal transfer.  Additionally, these pads require a bit of compression between the board and the case to be effective, and the board fits somewhat loosely in the slot into which the PCB slides, so thermal contact may be inconsistent - more on this shortly.
  • Other components get very hot.  Next to the A/D converter are the 3.3 and 1.8 volt linear regulators which run very hot.  While this may be OK, they are next to electrolytic capacitors which - if run very warm - will have rather short lifetimes.  While it is unknown if this is the case here, many regulators will become unstable (oscillate) if their associated capacitors degrade with lower capacitance and/or increased ESR (Equivalent Series Resistance) and if oscillation occurs due to capacitor degradation, this is likely to make the device unusable until the components are replaced.

Figure 3:
The top of the RX888 board.  The ADC's heat sink was
removed for the photo, but glued in place later to improve
its thermal transfer.
Click on the image for a larger version.

  • The FX3 USB interface chip can get very warm.  This chip is right next to the A/D converter.  There are anecdotal reports (again, nothing confirmed) that this particular chip can suffer reliability problems when running near its maximum rated temperature:  Whether this is due to a failure of the silicon or (more likely) a mechanical failure of a solder connection on its BGA (ball grid array) as a result of thermal cycling remains to be seen, but either one could explain one of the RX-888's reported failure modes of no longer appearing to be the expected type of USB device, making the unit non-functional even though it seems to enumerate - albeit improperly.

Several different people have made spot measurements of the temperatures within an RX-888 and come up with different results, further indicating inconsistency in the efficacy of the passive cooling and showing the inherent difficulty in making such measurements - but here are a few comments that are likely relevant:

  • Unless you need coverage >30 MHz, do not run a sample rate higher than 65-70 Msps.  As with most devices, more current (and higher heat dissipation) will occur at a higher sample rate so keeping it well below its maximum (around 130 Msps) will reduce heating and potentially improve the lifetime.   If you do run at 65-70 Msps, it is recommended that a 30 MHz low-pass filter be installed as this will prevent aliasing due to this lower rate and the fact that the RX-888 Mk2 has only a 60 MHz low-pass filter internally.
  • At normal "room" temperatures (68F/20C) the thermal properties of the RX-888 Mk2 are likely "Okay", particularly if run at just 65-70 Msps - but increasingly marginal above this.  On several samples, the internal temperature of the A/D converter and other components was fairly high, but not alarmingly so, although this seemed to vary among samples (e.g. some seemed worse than others.)  Since thermal resistance can be characterized as a temperature rise above ambient, as the ambient temperature increases the component temperatures will rise by the same amount - meaning that if the unit is in a hot location - or placed such that it will become warm (convective air movement across the heat sinks is restricted, or it is in/near the hot air flow of other equipment) - the thermal stresses of the components also increase.
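The aliasing concern in the first bullet can be sketched numerically:  With an ideal sampler, any input above half the sample rate "folds" back into the 0-to-fs/2 range.  A minimal Python illustration (the 50 MHz and 88 MHz inputs are hypothetical examples, not measurements from the unit):

```python
def alias_frequency(f_in_mhz: float, fs_mhz: float) -> float:
    """Fold an input frequency into the first Nyquist zone (0..fs/2)."""
    f = f_in_mhz % fs_mhz        # sampling is periodic in the sample rate
    if f > fs_mhz / 2:
        f = fs_mhz - f           # the upper half of each zone folds back
    return f

# With a 65 Msps sample rate and no 30 MHz low-pass filter, a strong
# 50 MHz signal would appear as a false signal at 15 MHz:
print(alias_frequency(50.0, 65.0))   # -> 15.0

# ...and an 88 MHz FM broadcast signal would land at 23 MHz:
print(alias_frequency(88.0, 65.0))   # -> 23.0
```

This is why a 30 MHz low-pass filter is recommended when running at the lower sample rate:  The internal 60 MHz filter alone will not stop such signals from landing in the HF range.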

Again, the reader should be cautioned that the reported inconsistency between units (e.g. the efficacy of the thermal pad) may mean that the above advice may not apply to specific units that have, say, a misplaced thermal pad or extra "slop" in the spacing between the board and the case which reduces the compression of the pad causing extra thermal resistance.

"Board slop" doesn't help: 

Figure 4:
Measuring the "board slop" in the mounting rails.  As noted
in the text, the board's looseness was nearly 1 mm - the far
extent of which exceeds the 5mm thickness of the pad.
Click on the image for a larger version.

On this latter point (e.g. "slop" in the board position), with the covers removed I measured a variance of 0.170-0.205" (4.32-5.207mm) from the board to the case due to looseness of the board's fit in the rail on one of my RX-888s.  Of the three units that I have to measure, this was the worst - but not by much, as the photo (Figure 4) from another unit shows.

Considering that the thermal pad is nominally 5.0mm thick, this means that the board MAY not be conducting heat to the case if the gap is closer to 5.2mm.  Also, considering the fact that the thermal pad will work better when it is compressed, it would be a very good idea - if possible - to reduce this gap - more on this later.

I also observed that with the USB end plate fitted, it happened to push the board "down" (e.g. reduced the gap between the board and the case) by about 0.02" (0.5mm) - and since this is the end of the board closest to the A/D converter chip, it likely reduces the gap there by about 0.015" (0.38mm) owing to geometry (e.g. the fact that the A/D converter is located away from the edge.)  If desired, this fact could be exploited by adding a shim to the top of the USB connector and filing the bottom a bit to allow the end plate to push "down" on the board a bit, better-compressing the thermal pad and potentially reducing its thermal resistance. 

Figure 5:
The screwdriver tip points to where the end plate is pushing
down on the connector and board to reduce board-to-case
distance to better-compress the pad.
Click on the image for a larger version.
On the opposite end of the board, the RF connectors fit rather loosely in their mounting holes meaning that one could, in theory, move the connectors to the "bottom" of their holes and tighten the nuts on the SMA connectors.  This would not be advisable without adding a washer of appropriate thickness between the plate and the SMA connector as the connectors themselves are not right at the edge of the circuit board and firmly tightening the nuts would likely bend/break them loose.

Before getting out the file, however, I suggest considering the methods/modifications mentioned below to improve the thermal performance of the RX-888 (Mk2) in several other ways.

Ways to improve the thermal performance:

There are two ways to improve the thermal performance and reduce the temperature of the onboard components.

Add another heat sink and a fan

A "brute force" approach would be to move more air through and around the unit using a small fan.  If you do this I would recommend two minor modifications:

  • Glue the heat sink to the A/D converter.  As noted earlier, the heat sink on the A/D converter is held on by tape, but I would recommend that this be removed from the heat sink and the chip itself (using a bit of paint thinner or alcohol to remove residue) and that it be reattached using thermally-conductive epoxy rather than conventional "clear" epoxy.  This epoxy is readily available at the usual places (Amazon, etc.) but it should be noted that the gray (not clear!) "JB Weld" epoxy (available at auto-parts and "big box" stores) also has reasonable thermal conductivity and works quite well in this application.  Do NOT use an adhesive like "super glue" as it is not void-filling by its nature and it is unlikely to endure the heat.
  • Add a heat sink to the FX3 chip.  This chip - next to the A/D converter - should also be cooled, and a small heat sink - such as that which comes with a Raspberry Pi heat sink kit - may be attached.  Again, I would recommend thermally-conductive epoxy rather than the supplied double-sided sticky tape.

As for the fan mounting, several people have simply removed both side plates and fabricated the attachment for a small fan (say, 20x20mm to 30x30mm) on the side with the USB connector to blow air through the case on both sides of the board.  Others have temporarily removed the board from the case and put holes in the case (on the side with the labels) into which a fan is mounted.

Either of these will be quite effective - but since these are not passive cooling, the failure of a fan could result in excess heat.

Improve passive cooling by using a much larger thermal pad

This is likely the favored approach as it does not depend on a fan - which will have a finite useful lifetime, and the failure of which could result in immediate overheating in certain circumstances.  There are two parts to this approach:

Replace the thermal pad. 

At reasonable ambient temperatures I believe that the heat sinks on the RX-888 are of adequate size, provided that they are open for air flow and not placed in the heat exhaust of other equipment.

As noted, the thermal pad is seemingly marginal and it is only large enough to draw heat away from the A/D converter - an issue that may be exacerbated by the board-to-case spacing mentioned above.  Improper placement of this pad will prevent it from conducting heat from the A/D converter - the major heat producer - to the case, resulting in heating of that and adjacent components.

Figure 6:
A piece of 45mm x 65mm thermal pad on the bottom of the
board.  This piece is large enough to cover all heat-
generating components.
Click on the image for a larger version.
It is also likely that the thermal pad material supplied with the unit is of lower thermal conductivity than other materials that are available (to save cost!) so the use of better thermal material and a larger pad will draw more heat away from all of the heat-producing components on the board and conduct it to the heat sink.

A suitable pad material is the Laird A15340-01 which may be found at Digi-Key (link here).  This material has roughly half the thermal resistance (e.g. better thermal conductivity) of other common pad materials and it is suitably "squishy" in that it will form around components and help fill small voids as it does so.

Unfortunately, this material is somewhat expensive in that it's available only as a rather large piece - about $32 (at the time of posting - not including shipping) for one that is 22.8x22.8cm square - but that is enough to modify several RX-888s, and even at $32 it's still a reasonable price to pay for improved reliability of a $150-$200 device!  If you do this, it's recommended that you get with others to split the cost of the pad - but be sure to keep the pad - or any pieces that you cut from it - in a zip-bag or clean plastic cling film to prevent its surface from being contaminated with dirt and dust.  If you post this pad material to someone else, be sure to protect it between two pieces of cardboard to prevent it from being mangled.

Figure 7:
The new pad, installed, as viewed from the
end with the USB connector, near the ADC
and FX3 USB interface chip.
Click on the image for a larger version.

A rectangular piece of thermal pad 45mm x 65mm will cover the bottom of the board where there are heat-generating components and ensure superior heat transfer to the case.  Since this material is a bit "sticky", it may be a bit difficult to get it installed as it will be resistant to sliding, but a very light coating of white heat-sink grease on the side of the pad facing the heat sink material will provide sufficient lubrication to allow it to slide as the board is inserted along its mounting rails.

This process is fairly messy, so if you plan to add a connector for an external clock input, I would suggest that you do so at the time that you install the new pad as you will probably not want to repeat the process unnecessarily.

Remount the heat sinks.

As noted earlier, the four heat sinks (two on the "bottom" side opposite the label and one on each side) are held on by double-sided paper tape.  It is recommended that these be removed - along with any tape residue (best done with paint thinner and/or alcohol) - and be reattached with thermal epoxy.

Figure 8:
An RX888 (Mk2) in the process of gluing on the side heat
sinks, using a vise for clamping.  Alternatively, weight may
be placed on the heat sink(s) while the epoxy cures to
compress it and squeeze out excess - but note that until it
cures that the heat sinks may slide slowly out of position
if one isn't careful.
Click on the image for a larger version.

As noted previously, the heat sinks do not sit flat against the case, so it would be a good idea to assure that the mating surfaces are reasonably flat to maximize thermal conductivity by drawing the case and the mating surfaces of the heat sinks across 800-grit sandpaper (using a flat piece of metal or glass as a substrate) - taking care to prevent metal particles from getting onto the board or inside the case:  It would be best to remove the board and do this prior to the installation of the new thermal pad, and to wash any such particles from the case before reassembly.

Once the mating surfaces have been flattened and cleaned, reattach the heat sinks one-at-a-time using thermal epoxy (or the gray "JB-Weld") - preferably compressing them in a vise or with a clamp to squeeze out as much adhesive as possible.

It's worth noting that even if you don't go through the trouble of flattening the heat sinks and the surface of the case, the use of a void-filling adhesive will certainly offer far more efficient thermal transfer than the original double-sided paper sticky tape with its rather large void between the two surfaces.

Out of curiosity I measured the difference in temperature between the heat sinks stuck on with double-sided tape and the exposed portion of the case right next to the heat sink and it was found to be about 3-5F (1.7-2.8C) - surprisingly good, actually.

Before and after thermal measurements

Figure 9:
Two RX888 Mk2's with reattached heat sinks, ready for a 
bit of clean-up and final assembly.
Click on the image for a larger version.
Using a thermal infrared camera and verifying with a thermocouple, temperature measurements were made of various components with an RX-888 operating at 130 Msps at an ambient temperature of 74F (23C) after 10 minutes of operation.  The readings were as follows:


With the original thermal pad, end plates removed - heat sink cooling by convection only:

ADC:  175F (79C)

FX3 (USB interface): 155F (68C)

Capacitor near 3.3 volt regulator:  145F (63C)

3.3V Regulator:  170F (77C)

1.8V Regulator:  178F (81C)

 

With Laird 45mm X 65mm pad - heat sink cooling by convection only:

ADC: 145F (63C)

FX3: 130F (54C)

Capacitor near 3.3 volt regulator:  125F (52C)

3.3V Regulator:  145F (63C)

1.8V Regulator:  150F (66C)

Note:  There is another capacitor near the 1.8 volt regulator, but its temperature could not be readily measured while the board was installed in the case; other measurements made outside the case indicate that its temperature was at least as high as that of the capacitor near the 3.3 volt regulator.

Results and comments:

The replacement of the original thermal pad with one that is 45mm X 65mm in size to cover the bottom of the board where there are active components has resulted in a very significant heat reduction:  As with all electronics, reducing the temperature of the components will increase the operational lifetime.

Considering that one can use - as a guideline - the temperature rise above ambient, we can make some estimations as to what will happen if the modified RX-888 (Mk2) is operated at a higher temperature.  

For example, if we consider 212F (100C) to be the maximum allowed case temperature of any of the components, we can see that with the original thermal pad this limit would be reached by the A/D converter at an ambient temperature of around 111F (44C) - a temperature that one could reasonably expect during the summer in a room without air conditioning.  In contrast, with the larger pad the ADC's temperature would likely be closer to 185F (85C) in the same environment.
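The rise-above-ambient reasoning can be sketched numerically.  Using the measured figures above (taken at 23C ambient) and assuming - as a first-order model for passive cooling - that the rise above ambient stays constant:

```python
# Estimate component temperature at a new ambient temperature, assuming
# the rise above ambient stays constant (first-order passive-cooling model).
MEASURED_AMBIENT_C = 23.0

def temp_at_ambient(measured_temp_c: float, new_ambient_c: float) -> float:
    rise = measured_temp_c - MEASURED_AMBIENT_C   # thermal rise is fixed
    return new_ambient_c + rise

# ADC with the original pad (79C at 23C ambient) in a 44C room:
print(temp_at_ambient(79.0, 44.0))   # -> 100.0 (right at the assumed limit)

# ADC with the larger Laird pad (63C at 23C ambient) in the same room:
print(temp_at_ambient(63.0, 44.0))   # -> 84.0 (close to the ~85C noted)
```

This is only a sketch - real thermal resistance varies somewhat with temperature and airflow - but it shows how the before/after measurements translate to hot-room conditions.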

With a small amount of air moving across the heat sinks, their temperature rise would also be lower, further reducing internal temperature - and even though it isn't strictly necessary, it wouldn't hurt to use a small fan - even on a modified RX-888 (Mk2) to cool it even more, and feel confident that it will still survive should that fan fail.

Finally, I would again remind the reader that I consider the RX-888 (Mk2) to be an excellent-performing and extraordinarily flexible device and well worth extra trouble to make it better!

* * *

This page stolen from ka7oei.blogspot.com

 

[End]



Measuring signal dynamics of the RX-888 (Mk2)


As a sort of follow-up to the previous posting about the RX-888 (Mk2) I decided to make some measurements to help characterize the gain and attenuation settings.

The RX-888 (Mk2) has two mechanisms for adjusting gain and attenuation:

  • The PE4312 attenuator.  This is (more or less) right at the HF antenna input and it can be adjusted to provide up to 31.5dB of attenuation in 0.5dB steps.
  • The AD8370 PGA.  This PGA (Programmable Gain Amplifier) can be adjusted to provide a "gain" from -11dB to about 34dB.

Note:

While this blog posting has specific numbers related to the RX-888 (Mk2), its general principles apply to ALL receivers - particularly those operating as "Direct Sampling" HF receivers.  A few examples of other receivers in this category include the KiwiSDR and Red Pitaya - to name but two.

Other article RX-888 article:

I recently posted another article about the RX-888 (Mk2) discussing the thermal properties of its mechanical construction - and ways to improve it to maximize reliability and durability.  You can find that article here:  Improving the thermal management of the RX-888 (Mk2) - link.


* * * * *

Taking measurements

To ascertain the signal path properties of an RX-888 (Mk2), I set its sample rate to 64 Msps and - using both the "HDSDR" and "SDR Radio" programs (under Windows - because it was convenient) and a known-accurate signal generator (Schlumberger Si4031) - made the measurements at 17 MHz which follow:

Gain setting (dB)   Noise floor (dBm/Hz)   Noise floor (dBm in 500 Hz)   Apparent clipping level (dBm)
      -25                  -106                      -79                          >+13
       +0                  -140                     -113                            +3
      +10                  -151                     -124                            -8
      +20                  -155                     -128                           -18
      +25                  -157                     -130                           -23
      +33                  -158                     -131                           -31

Figure 1:  Measured performance of an RX-888 Mk2.  Gain mode is "high" with 0dB attenuation selected.

For convenience, the noise floor is shown both in "dBm/Hz" and in dBm in a 500 Hz bandwidth - which matches the scaling used in the chart below.  As the programs that I used have no direct indication of A/D converter clipping, I determined the "apparent" clipping level by noting the amplitude at which one additional dB of input power caused the sudden appearance of spurious signals.  Spot-checking indicated that the measured values at 17 and 30 MHz were within 1 dB of each other on the unit being tested.
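The two noise-floor columns in Figure 1 differ only by the bandwidth term, 10*log10(500) = about 27 dB.  A quick sketch of the conversion:

```python
import math

def dbm_per_hz_to_dbm_in_bw(dbm_per_hz: float, bw_hz: float) -> float:
    """Convert a noise density (dBm/Hz) to total noise power (dBm) in bw_hz."""
    return dbm_per_hz + 10 * math.log10(bw_hz)

# The 0 dB gain row of Figure 1:  -140 dBm/Hz in a 500 Hz bandwidth:
print(round(dbm_per_hz_to_dbm_in_bw(-140, 500)))   # -> -113
```

The same conversion reproduces every row of the table (e.g. -158 dBm/Hz becomes -131 dBm in 500 Hz).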

Determining the right amount of "gain"

It should be stated at the outset that most of the available range of gain and attenuation provided by the '4312 and '8370 are completely useless to us.  To illustrate this point, let's consider a few examples.

Consider the chart below:

Figure 2:  ITU chart showing various noise environments versus frequency.

This chart - from the ITU - shows predicted noise floor levels - in a 500 Hz bandwidth - that may be expected at different frequencies in different locations.  Anecdotally, it is likely that in these days of proliferating switch-mode power supplies that we really need another line drawn above the top "Residential" curve, but let's be a bit optimistic and presume that it still holds true these days.

Let us consider the entry in Figure 1 showing the gain setting of 0dB.  If we look at the "Residential" curve, above, we see that at 30 MHz it indicates a value very close to the -113dBm value in the "dBm in 500 Hz" column.  This tells us several things:

  • Marginal sensitivity.  Because the noise floor of the RX-888 (Mk2) and that of our hypothetical RF environment are very close to each other, we may not be able to "hear" our noise floor at 30 MHz (e.g. the 10 meter amateur band).  One would need to do an "antenna versus no antenna" check of the S-meter/receiver to determine if the former causes an increase in signal level:  If not, additional gain may be needed to be able to hear signals that are at the noise floor.
  • More gain may not help.  If we do perform the "antenna versus no antenna" test and see that with the antenna connected we get, say, an extra S-unit (6dB) of noise, we can conclude that under those conditions that more gain will not help in absolute system sensitivity.

Thinking about the above two statements a bit more, we can infer several important points about operating this or any receiver in a given receive environment:

  • If we can already "hear" the noise floor, more gain won't help.  In this situation, adding more gain would be akin to listening to a weak and noisy signal and expecting that increasing the volume would cause the signal to get louder - but not the noise.  
  • More gain than necessary will reduce the ability of the receiver to handle strong signals.  The HF environment is prone to wild fluctuations and signals can go between well below the local noise floor and very strong, so having any more gain than you need to hear your local noise floor is simply wasteful of the receiver's signal handling capability.  This fact is arguably more important with wide-band, direct-sampling receivers where the entire HF spectrum impinges on the analog-to-digital converter, rather than a narrow section of a specific amateur band as is the case in "conventional" analog receivers.

Let us now consider what might happen if we were to place the same receiver in an ideal, quiet location - in this case, let's look at the "quiet rural" (bottom) line on the chart in Figure 2.

Again looking at the value at 30 MHz, we see that our line is now at about -133dBm (in 500 Hz) - but if we have our RX-888 gain set at 0 dB, we are now ((-133) - (-113) = ) 20 dB below the noise floor.  What this means is that a weak signal - just at the noise floor - is more than 3 S-units below the receiver sensitivity.  This also means that a receiver that may have been considered to be "Okay" in a noisy, urban environment will be quite "deaf" if it is relocated to a quiet one.
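That "20 dB below the noise floor" figure can be put into S-units with the common 6 dB-per-S-unit convention - a one-line sketch:

```python
# Sensitivity deficit in a "Quiet Rural" location at 30 MHz, using the
# common 6 dB-per-S-unit convention.
ambient_floor_dbm = -133   # ITU "Quiet Rural" at 30 MHz, 500 Hz BW
rx_floor_dbm = -113        # RX-888 (Mk2) at 0 dB gain, 500 Hz BW

deficit_db = rx_floor_dbm - ambient_floor_dbm
print(deficit_db, "dB =", round(deficit_db / 6, 1), "S-units")  # -> 20 dB = 3.3 S-units
```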

In this case we might think that we would simply increase our gain from 0 dB to +33dB - but you'll notice that even at that setting, the sensitivity will be only -131dBm in 500 Hz - still a few dB short of being able to hear the noise in our "antenna versus no antenna" test.

Too much gain is worse than too little!

At this point I refer to the far-right column in Figure 1 that shows the clipping level:  With a gain setting of +33dB, we see that the RX-888 (Mk2) will overload at a signal level of around -31dBm - which translates to a signal with a strength a bit higher than "S9 + 40dB".  While this sounds like a strong signal, remember that this signal level is the cumulative TOTAL of ALL signals that enter the antenna port.  Thinking of it another way, this is the same as ten "S9+30dB" signals or one hundred "S9+20dB" signals - and when the bands are "open," there will be many times when this "-31dBm" signal level is exceeded from strong shortwave broadcast signals and lightning static.
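The "cumulative total" point can be sketched:  Powers expressed in dBm add in the linear (milliwatt) domain, not in dB.  A minimal Python illustration, using the common HF convention of S9 = -73 dBm:

```python
import math

def total_dbm(signals_dbm):
    """Total power (dBm) of several simultaneous signals on one antenna port."""
    total_mw = sum(10 ** (p / 10) for p in signals_dbm)   # sum in milliwatts
    return 10 * math.log10(total_mw)

S9 = -73.0  # dBm - the common HF S-meter convention

# Ten S9+30dB signals sum to the same power as a single S9+40dB signal:
print(round(total_dbm([S9 + 30] * 10), 1))    # -> -33.0 (i.e. S9+40dB)

# ...as do one hundred S9+20dB signals:
print(round(total_dbm([S9 + 20] * 100), 1))   # -> -33.0
```

Either of these aggregate loads is already within a couple of dB of the -31dBm clipping level at maximum gain.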

In the case of too little gain, only the weakest signals - those below the receiver's noise floor - will be affected, but if the A/D converter in the receiver is overloaded, ALL signals - weak or strong - are potentially disrupted as the converter no longer provides a faithful representation of the applied signal.  When the overload source is one or more strong transmissions, a melange of all signals present is smeared throughout the receive spectrum consisting of many mixing products, but if the overload is a static crash, the entire receive spectrum can be blanked out in a burst of noise - even at frequencies well removed from the original source of the static.

Most of the adjustment range is useless!

Looking carefully at the "noise floor" columns of Figure 1, you may notice something else:  Going from a gain of 0 dB to 10 dB, the noise floor "improves" (is lower) by about the same amount as the gain increase - but if you go from 25 dB gain to 33 dB gain we see that our noise floor improves by only 1 dB, while our overload threshold worsens by the same eight dB as our gain increase.

What we can determine from this is that for practical purposes, any gain setting above 20 dB will result in very little improvement in receiver sensitivity while causing a dramatic reduction in the ability of the receiver to handle strong signals.

Based on our earlier analysis in a noisy environment, we can also determine that a gain setting lower than 0 dB will make our receiver too insensitive to hear the weakest signals:  The gain setting of -25dB shown in Figure 1, with a receive noise floor of -79dBm (500 Hz) - which is about S8 - is an extreme example of this.

Up to this point we have not paid any attention to the PE4312 attenuator as all measurements were taken with it set to minimum.  The reason for this is quite simple:  The noise figure (which translates to the absolute sensitivity of a receiver system) is determined by the noise generation of all of the components.  As reason dictates, if you have some gain in the signal path, the noise contribution of the devices after that gain has a lesser effect - but any loss or noise contribution prior to the gain will directly increase the noise figure.

Note:

For examples of typical HF noise figure values, see the following articles:

Based on the articles referenced above, having a receiver system with a noise figure of around 15dB is the maximum that will likely permit reception at the noise floor of a quiet 10 meter location.  If you aren't familiar with the effects of noise figure - and loss - in a receive signal path, it's worth playing with a tool like the Pasternack Enterprises Cascaded Noise Figure Calculator (link) to get a "feel" of the effects.

I do not have the ability to measure the precise noise figure of the RX-888 (Mk2) - and if I did do so, I would have to make such a measurement using the same variety of configurations depicted in Figure 1 - but we can know some parameters about the worst-case:

  • Bias-Tee:  Estimated insertion loss of 1dB
  • PE4312:  Insertion loss of 1.5dB at minimum attenuation
  • RF Switch (HF/VHF) 1dB loss
  • 50-200 Ohm transformer:  1dB loss
  • AD8370 Noise figure:  8dB (at gain of 20dB)

The above sets the minimum HF noise figure of the RX-888 (Mk2) at about 12.5dB with an AD8370 gain setting of 20dB - but this does not include the noise figure of the A/D converter itself - which would be difficult to measure using conventional means.

One important aspect about system noise figure is that once you have loss in a system, you cannot recover sensitivity - no matter how much gain or how quiet your amplifier may be!  For example, if you have a "perfect" 20 dB gain amplifier with zero noise and you place a 10 dB attenuator in front of it, you have just turned it into an amplifier with a 10 dB noise figure and 10 dB of gain - and there is nothing that can be done to improve it, other than to get rid of the loss in front of the amplifier.

Similarly, if we take the same "perfect" amplifier - with 20dB of gain - and cascade it with a receiver with a 20dB noise figure, the calculator linked above tells us that we now have a system noise figure of 3 dB since, even with 20dB of gain preceding it, our receiver still contributes noise!
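The cascade arithmetic above can be reproduced directly with the standard Friis noise-figure formula - the same math behind the Pasternack calculator.  A sketch (stage values in dB; this just re-derives the two examples above):

```python
import math

def cascaded_nf_db(stages):
    """Friis formula.  'stages' is an ordered list of (gain_db, nf_db) pairs."""
    f_total = 0.0
    g_running = 1.0            # linear gain of all stages seen so far
    for i, (gain_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)               # linear noise factor
        if i == 0:
            f_total = f                      # first stage counts in full
        else:
            f_total += (f - 1) / g_running   # later stages are diluted by gain
        g_running *= 10 ** (gain_db / 10)
    return 10 * math.log10(f_total)

# A "perfect" (0 dB NF) 20 dB amplifier ahead of a 20 dB NF receiver:
print(round(cascaded_nf_db([(20, 0), (0, 20)]), 1))    # -> 3.0

# A 10 dB attenuator (gain -10 dB, NF 10 dB) ahead of the perfect amplifier:
print(round(cascaded_nf_db([(-10, 10), (20, 0)]), 1))  # -> 10.0
```

Note how the loss ahead of the first gain stage adds to the noise figure dB-for-dB, while noise contributed after the gain is divided down by that gain.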

If we presume that the LTC2208 A/D converter in the RX-888 has a noise figure of 40dB and no gain (a "ballpark" value assuming an LSB of 10 microvolts - a value that probably doesn't reflect reality) our receive system will therefore have a noise figure of about 22dB.

What this means is that in most of the ways that matter, the PE4312 attenuator is not really very useful when the RX-888 (Mk2) is being used for reception of signals across the HF spectrum, in a relatively quiet location, on an antenna system with no additional gain.

Where is the attenuator useful?

From the above, you might be asking under what conditions would the built-in PE4312 attenuator actually be useful?  There are two instances where this may be the case - and this would be applied ONLY if you have been unable to resolve overload situations by setting the gain of the AD8370 lower.

  • In a receive signal path with a LOT of amplification.  If your receive signal path has - say - 30dB of amplification (and if it does, you might ask yourself "why?") a moderate amount of attenuation might be helpful.
  • In a situation where there are some extremely strong signals present.  If you are near a shortwave or mediumwave (AM broadcast) transmitter that induces extremely strong signals in the receiver that cause intractable overload, the temporary use of attenuation may prevent the receiver from becoming overloaded to the point of being useless - but such attenuation will likely cause the complete loss of weaker signals.  In such a situation, the use of directional antennas and/or frequency-specific filtering should be strongly considered!

Improving sensitivity

Returning to an earlier example - our "Quiet Rural" receive site - we observed that even with the gain setting of the RX-888 (Mk2) at maximum, we would still not be able to hear our local noise floor at 30 MHz - so what can be done about this?

Let us build on what we have already determined:

  • While sensitivity is slightly improved with higher gain values, setting the gain above 20dB offers little benefit while increasing the likelihood of overload.
  • In a "Quiet Rural" situation, our 30 MHz noise floor is about -133dBm (500 Hz BW) which means that our receiver needs to attain a lower noise floor than this:  Let's presume that -136dBm (a value that is likely marginal) is a reasonable compromise.

With a "gain" setting of 20dB we know that our noise floor will be around -128dBm (500 Hz) and we need to improve this by about 8 dB.  For straw-man purposes, let's presume that the RX-888 (Mk2) at a gain setting of 20dB has a noise figure of 25dB, and see what it takes for an amplifier that precedes the RX-888 (Mk2) to lower that to 17dB or so using the Pasternack calculator above:

  • 10dB LNA with 7 dB noise figure:  This would result in a system noise figure of about 16 dB - which should do the trick.

Again, the above presumes that there is NO loss (cable, splitters, filtering) preceding the preamplifier - and the presumed noise figure of 25dB for the RX-888 (Mk2) at a gain setting of 20 is a bit of a "SWAG" - but it illustrates the issue.

Adding a low-noise external amplifier also has another side-effect:  By itself, with a gain setting of +33, the RX-888 (Mk2)'s overload point is -31dBm.  If we reduce the gain of the RX-888 to 20dB the overload point rises to -18dBm, and adding the external 10dB gain amplifier effectively brings it back to -28dBm - still 3 dB better than if we had turned the RX-888's gain all the way up!

Taking this a bit further, let's presume that we use, instead, an amplifier with 3dB noise figure and 8 dB gain:  Our system noise figure is now about 17dB, but our overload point is now -26dBm - even better!
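The overload arithmetic works the same way in reverse - any gain ahead of the receiver subtracts dB-for-dB from the antenna-referred overload point.  Using the figures from the text:

```python
def referred_overload_dbm(rx_overload_dbm, preamp_gain_db):
    """Overload point referred to the antenna jack: external gain ahead of
    the receiver subtracts dB-for-dB from the receiver's own overload point."""
    return rx_overload_dbm - preamp_gain_db

# RX-888 at a gain setting of 20 dB: overload at about -18 dBm at its input.
with_10db_amp = referred_overload_dbm(-18.0, 10.0)   # -28 dBm
with_8db_amp = referred_overload_dbm(-18.0, 8.0)     # -26 dBm
```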

Adding appropriate amounts of external gain has an additional effect:  The RX-888 (and all other SDRs) are computer/network-connected devices with the potential of ingress of stray signals from connected devices (computers, network switches, power supplies, etc.).  The use of external amplifiers can help override (and submerge) such signals, and if proper care is taken in choosing the amount of external amplification and the gain/attenuation settings within the receiver, superior performance in terms of sensitivity and signal-handling capability can be the result.

Additional filtering

Though only mentioned in passing so far:  Running a wideband, direct-sampling receiver of ANY type (be it RX-888, KiwiSDR, Red Pitaya, etc.) connected to an antenna asks a lot of even 16 bits of conversion!  If you happen to be in a rather noisy, urban location, the situation is a bit better in the sense that you can reduce receiver gain and still hear "everything there is to hear" - but if you have a very quiet location that requires extra gain, the same strong signals that you were hearing in the noisy environment are just as strong in the quiet environment.

Here are a few suggestions for maximizing performance under the widest variety of situations:

  • Add filtering for ranges that you do not plan to cover.  In most cases, AM band (mediumwave) coverage is not needed and may be filtered out.  Similarly, it is prudent to remove signals above the range in which you are interested.  For the RX-888 (Mk2), if you run its sampling rate at just 65 MHz or so, you should install a 30 MHz low-pass filter to keep VHF and FM broadcast signals out.
  • Add "window" filtering for bands of interest.  If you are interested only in amateur radio bands, there are a lot of very strong signals outside the bands of interest that will contribute to overload of the A/D converter.  It is possible to construct a set of filters that will pass only the bands of interest - but this does not (yet?) seem to be a commercial product.  (Such a product may be available in the near future - keep a lookout here for updates.)
  • Add a "shelving" filter.  If you examine the graph in Figure 2 you will notice that as you go lower in frequency, the noise floor goes UPWhat this means is that at lower frequencies, you need less receiver sensitivity to hear the signals that are present - and it also means that if you increasingly attenuate those lower frequencies, you can remove a significant amount of RF energy from your receiver without actually reducing the absolute sensitivity.  A device that does just this is described in a previous blog article "Revisiting the limited-attenuation high-pass filter - again (link)".  While I do not offer such a filter personally, such a device - along with an integrated 30 MHz low-pass filter - may be found at Turn Island Systems- HERE.

Conclusions:

  • The best HF weak-signal performance for the RX-888 (Mk2) will occur with the receiver configured for "High" gain mode, 0 dB attenuation and a gain setting of about 20dB.  Having said this, you should always do the "antenna versus no antenna" test:  If you see more than a 6-10dB increase in the noise level at the quietest frequency, you probably have too much gain.  Conversely, if you don't see/hear a difference, you probably need more gain - taking care in doing so.
  • For best HF performance of this - or any other wideband, direct-sampling HF SDR (RX-888, KiwiSDR, Red Pitaya, etc.) additional filtering is suggested - particularly the "shelving" filter described above.
  • In situations where the noise floor is very low (e.g. a nice, quiet receive location) many direct-sampling SDRs (RX-888, KiwiSDR, Red Pitaya) will likely need additional gain to "hear" the weaker signals - particularly on the higher HF bands.  While some of these receivers offer onboard gain adjustment, the use of external high-performance (low-noise) amplification (along with filtering and careful adjustment of the devices' gain adjustments) will give improved absolute sensitivity while helping to preserve large-signal handling capability.
  • Because the RX-888 is a computer-connected device, there will be ingress of undesired signals from the computer and the '888's built-in circuitry.  The use of external amplification - along with appropriate decoupling (e.g. common-mode chokes on the USB cable and connecting coaxial cables) can minimize the appearance of these signals.

 

This page was stolen from ka7oei.blogspot.com.

[End]

 


Resurrecting my FE-5680A Rubidium frequency reference


Fig 1:
The Hammond 1590 aluminum case
housing the FE-5680A rubidium
oscillator and other circuitry - the
markings faded by time and heat.
Click on the image for a larger version.
Recently I was getting ready for the October 14, 2023 eclipse, so I pulled out my two 10 MHz rubidium frequency references (doesn't everyone have at least one?) as I would need an accurate and (especially) stable frequency reference for transmitting:  The details of what, why and how will be discussed in a post to be added in the near future.

The first of these - my Efratom LP-101 - fired up just fine, despite having seen several years of inactivity.  After letting it warm up for a few hours I dialed it in against my HP Z3801 GPSDO and was able to get it to hold to better than 5E-11 without difficulty.

My other rubidium frequency reference - the FEI FE-5680A - was another matter:  At first, it seemed to power up just fine:  I was using my dual-trace oscilloscope, feeding the 'Z3801 into channel 1 and the '5680A into channel 2 and watching the waveforms "slide" past each other - and when they stop moving (or move very, very slowly) you know things are working properly:  See Figure 2, below, for an example of this.
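The arithmetic behind this "sliding waveform" comparison is simple:  One full cycle of slip in T seconds means the two sources differ by 1/T Hz:

```python
def fractional_offset(slide_seconds, f0_hz=10e6):
    """Two nominally-identical sources that 'slide' one full cycle past each
    other in T seconds differ by 1/T Hz - a fractional offset of 1/(T * f0)."""
    return 1.0 / (slide_seconds * f0_hz)

# One cycle of slide in 20 minutes between two 10 MHz sources:
offset = fractional_offset(20 * 60)   # about 8.3e-11
```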

That did happen for the '5680A - but only for a moment:  After a few 10s of seconds of the two waveforms being stationary with respect to each other, the waveform of the '5680A suddenly took off and the frequency started "searching" back and forth, reaching only as high as a few Hz below exactly 10 MHz and swinging well over 100 Hz below that.

My first thought was something along the lines of "Drat, the oven oscillator has drifted off frequency..."

Fig 2:
Oscillogram showing the GPS reference (red)
and the FE-5680A (yellow) 10 MHz signals
atop each other.  Timing how long it takes for the
two waveforms "slide" past each other (e.g. drift
one whole cycle) allows long-term frequency
measurement and comparison.
Click on the image for a larger version.

As it turns out, that was exactly what had happened.

Note:  I've written a bit more about the aforementioned rubidium frequency references, and you can read about them in the links below:


Oscillator out of range

While it is the "physics package"(the tube with the rubidium magic inside) that determines the ultimate frequency (6834683612 Hz, to be precise) it is not the physics package that generates this frequency, but rather another oscillator (or oscillators) that produce energy at that 6.834682612 GHz frequency, inject it into the cavity with the rubidium lamp and detect a slight change in intensity when it crosses the atomic resonance.

In this unit, there is a crystal oscillator that does this, using digital voodoo to produce that magic 6.834682611 GHz signal to divine the hyperfine transition.  This oscillator is "ovenized" - which is to say, the crystal and some of the critical components are under a piece of insulating foam, and attached to the crystal itself is a piece of ceramic semiconductor material - a PTC (positive temperature coefficient) thermistor - that acts as a heater:  When power is applied, it produces heat - but when it gets to a certain temperature its resistance increases, reducing the current consumption and the thermal input, and the temperature eventually stabilizes.

Because we have the rubidium cell itself to determine our "exact" frequency, this oven and oscillator need only be "somewhat" stable intrinsically:  It's enough simply to have it "not drift very much" with temperature, as small amounts of frequency change can be compensated, so neither the crystal oven - nor the crystal contained within - needs to be "exact".

Fig 3:
The FE-5680A itself, in the lid of the
case of the 1590 box to provide heat-
sinking.  As you can see, I've had this
unit open before!
Click on the image for a larger version.
What is required is that this oscillator - which is "pullable" (that is, its precise frequency is tuned electronically) - be capable of covering the exact frequency required within its tuning range:  If this can't happen, it cannot be "locked" to the comparison circuitry of the rubidium cell.

The give-away was that as the unit warmed up, it did lock, but only briefly:  After a brief moment, it suddenly unlocked as the crystal warmed up and drifted low in frequency, beyond the range of the electronic tuning.

Taking the unit apart, I quickly spotted the crystal oscillator under the foam.  Powering it up again, I kept the foam in place and watched it lock - and then unlock again:  Lifting the foam, I touched the hot crystal with my finger to draw heat away and the unit briefly re-locked.  Monitoring with a test set, I adjusted the variable capacitor next to the crystal and quickly found the point of minimum capacitance (highest frequency) and, after replacing the foam, the unit re-locked - and stayed in lock.

Bringing it up to frequency

This particular '5680A is probably about 25 years old - having been a pull from service (likely at a cell phone site) and eventually finding its way onto EvilBay as surplus electronics.  Since I've owned it, it's also seen other service - having been used twice in geostationary satellite service as a stable frequency reference, adding another 3-4 years of time to its "on" time.

As quartz crystals age, they inevitably change frequency:  In general, they tend to drift upwards if they are overdriven and slowly shed material - but this practice is pretty rare these days - so they seem to tend to drift downwards in frequency with normal aging of the crystal and nano-scale changes in the lattice that continue after the quartz is grown and cut:  Operating at elevated temperature - as in an oven - tends to accelerate this effect.

By adjusting the trimmer capacitor and noting the instantaneous frequency (e.g. adjusting it mechanically before the slower electronic tuning could take effect) I could see that I was right at the ragged edge of being able to net the crystal oscillator's tuning range, so I needed to raise the natural frequency a bit more.

If you need to lower a crystal's frequency, you have several options:

  • Place an inductor in series with the crystal.  This will lower the crystal's in-circuit frequency of operation, but since doing so generally involves physically breaking an electrical connection to insert a component, this can be rather awkward to do.
  • Place a capacitor across the crystal.  Adding a few 10s of pF of extra capacitance can lower a crystal's frequency by several 10s or hundreds of ppm (parts per million), depending on the nature of the crystal and the circuit.

Fig 4:
The tip of the screwdriver pointing at the added 2.2uH
surface-mount inductor:  It's the black-ish component
at sort of a diagonal angle, wired across the two
crystal leads.
Click on the image for a larger version.

Since the electrical "opposite" of a capacitor is an inductor, the above can be reversed if you need to raise the frequency of a crystal:

  • Insert a capacitor in series with the crystal.  This is a very common way to adjust a crystal's frequency - and it may be how this oscillator was constructed.  As with the inductor, adding this component - where none existed - would involve breaking a connection to insert the device - not particularly convenient to do.
  • Place an inductor across the crystal.  Typically the inductance required to have an effect will have an impedance of hundreds of ohms at the operating frequency, but this - like the addition of a capacitor across a crystal to lower the frequency - is easier to do since we don't have to cut any circuit board traces.

With either method of tweaking the resonance of the oscillator circuit, you can only go so far:  Adding reactance in series or parallel will eventually swamp the crystal itself, potentially making its oscillation unreliable - and even if that doesn't happen, the "Q" is reduced, potentially degrading the quality of the signal produced.  Furthermore, taking this to an extreme can reduce overall stability, as the circuit becomes more temperature-sensitive to the added capacitor/inductor than to the crystal alone.
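For a rough feel of the numbers (the motional and holder capacitances below are typical textbook values, not measurements of this particular crystal), the approximate pull of a crystal operated against a load capacitance C_L is:

```python
def pull_ppm(c_m_pf, c0_pf, cl_pf):
    """Approximate pull above series resonance (in ppm) for a crystal with
    motional capacitance C_m and holder capacitance C0, operated against a
    load capacitance C_L:  df/f ~= C_m / (2 * (C0 + C_L))."""
    return 1e6 * c_m_pf / (2.0 * (c0_pf + cl_pf))

# Hypothetical textbook values: C_m = 0.02 pF, C0 = 4 pF.
at_20pf = pull_ppm(0.02, 4.0, 20.0)     # about 417 ppm above series resonance
at_30pf = pull_ppm(0.02, 4.0, 30.0)     # about 294 ppm
shift = at_30pf - at_20pf               # adding 10 pF lowers frequency ~120 ppm
```

This matches the "several 10s or hundreds of ppm" figure quoted above, and also shows why the effect saturates:  As C_L grows, each additional picofarad buys less and less pull.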

In theory, I could have placed a smaller fixed capacitor in series with the trimmer capacitor - or used a lower-value capacitor - but I chose, instead, to install a fixed-value surface-mount inductor in parallel with the crystal.  Prior to doing this I checked to see if there was any circuit voltage across the crystal, but there was none:  Had I seen voltage, adding an inductor would have shorted it out and likely caused the oscillator to stop working, and I would have either reconsidered adding a series capacitor somewhere or, more likely, placed a large-value (1000pF or larger) capacitor in series with the inductor to block the DC.

"Swagging" it, I put a 2.2uH 0805 surface-mount inductor across the crystal and powered up the '5680A and after a 2-3 minute warm-up time, it locked.   After it had warmed up for about 8 minutes I briefly interrupted the power and while it worked to re-establish lock I saw the frequency swing nearly 100 Hz below and above the target indicating that it was now more less in the center if its electronic tuning range indicating success!  As can be seen from Figure 4, there is likely enough room to have used a small, molded through-hole inductor instead of a surface-mount device.
Fig 5:
The crystal is under the round disk (the PTC
heater) near the top of the picture and the
adjustment capacitor is to the right of the
crystal.
Click on the image for a larger version.

With a bit of power-cycling and observing the frequency swing while the oscillator was hot, I was able to observe the electronic tuning range and, in so-doing, increase the capacitance of the trimmer capacitor very slightly from minimum, indicating that I now had at least a little bit of extra adjustment room - but not a lot.  Since this worked the first time I didn't try a lower value of inductance (say, 1uH) to further raise the oscillator frequency, leaving well-enough alone.

Buttoning everything back up and putting it back in its case, everything still worked (always gratifying!) and I let the unit "burn in" for a few hours.

Comparing it to my HP Z3801 GPS disciplined oscillator via the oscilloscope (see Figure 2), it took about 20 minutes for the phase to "slide" one entire cycle (360 degrees), indicating that the two 10 MHz signal sources are within better than 1E-10 of each other - not too bad for a device that was last adjusted over a decade ago and has seen about 15000 operational hours since!
 
This page stolen from ka7oei.blogspot.com
 
[END]
 

Remote (POTA) operation from Canyonlands National Park (K-0010)


As I am wont to do, I recently spent a week camping in the "Needles" district of Canyonlands National Park.  To be sure, this was a bit closer to "glamping" in the sense that we had a tent, a flush toilet a few hundred feet away, plenty of food, solar panels for power and didn't need to haul our gear in on our backs - at least not any farther than between the vehicle(s) and the campsite.

While I did hike 10s of miles during the week, I didn't hike every day - and that left a bit of "down time" to relax and enjoy the local scenery.

As a first for me - even though I have camped there many times and have even made dozens of contacts over the years on HF - I decided to do a real POTA (Parks On The Air) activation.  In the days before departure, I finally got around to signing up on the pota.app web site and just before I left the area of cell phone coverage (there is none at all anywhere near where we were camping) I scheduled an activation to encompass the coming week as I had no idea exactly when I would be operating or on what bands.

Figure 1:
The JPC-7 loaded dipole at 10', backgrounded by red rock.
Click on the image for a larger version.

* * *

It wasn't until the day after I arrived that I finally had time to operate.  As it was easiest and most convenient to do so, I deployed my "modified" JPC-7 loaded dipole antenna (an antenna I'll describe in greater detail in a future post) affixing it atop a tripod light stand that could be telescoped to about 10 feet (3 meters) in height - attaching one of its legs to the swing-out grill of the fire pit to prevent it from falling over.  Being only about 10 feet from the picnic table, it offered a relatively short cable run and when it came time to tune the antenna, I simply disconnected it from the input of the tuner, connected it to my NanoVNA and adjusted the coils:  In so-doing, I could change bands in about two minutes.

The radio that I usually used was my old FT-100 - typically running at 50 watts on CW, 100 watts on SSB, but I would occasionally fire up my FT-817  and run a few contacts on that as well.  As you would expect, the gear was entirely battery-powered as there is not a commercial power line within 10s of miles of this place:  Often, one of my batteries would be off being charged from a solar panel, requiring that I constantly rotate through them.

* * *

For reasons of practicality - namely the fact that I would be operating in (mostly) daylight - and for reasons related to antenna efficiency, I mostly operated on 30 meters and higher.  Because we were outside, a screen would have been very difficult to see, so I logged on a piece of paper - also convenient because this method required no batteries!  The very first contact - a Park-to-Park - occurred on 15 meter SSB, but I quickly QSY'ed down to 17 meters and worked a few dozen stations on CW - breaking in my "CW Morse" paddle for the first time on the air:  It would seem that my scheduling the activation and my CW being spotted by the Reverse Beacon Network caused the notice to go out automatically, whereupon I was quickly pounced on.

In using this paddle for the first time I quickly discovered several things:

  • I've seen others using this paddle by holding it in their hand - but I was completely unable to do that:  I would get into the "zone" while sending and inevitably put my fingers on the tension adjustment screws of the "dit" and "dah" paddles, causing me to send random elements:  At first I thought that something was amiss - perhaps RF getting into the radio - but one of the other folks I was with (who are also hams) pointed out what I was doing.
  • Since my CW Morse paddle has magnets in the base - and since the picnic table's top was aluminum - I stuck it to the bottom of a cast-iron skillet which solved the first problem, but I quickly discovered that the bottom of a well-used skillet is really quite smooth and lubricated with a fine layer of carbon.  What this meant was that not only did I have to use my other hand to keep the key from sliding around, I started looking like the carbon-covered operators of high-power Poulsen Arc transmitters of a century ago:  My arm and hand quickly got covered with a slight residue of soot!
  • During contacts, I would randomly lose the "dit" contact.  I presumed that this was from dust getting into the contacts (I was sitting outside!) as it usually seemed to "fix" itself when I would lean over and blow into the paddle, but in one instance when this didn't work at all I wiggled/rotated the 3.5mm TRS jack on the back and it started working again.  I'm thinking that the issue was just a flaky contact on the jack.

At some point I'll need to figure out a better means of holding this paddle down - perhaps a small sheet of steel with bumpers and rubber feet - or simply learn to use the paddle with a much lighter touch!

Figure 2:
Operating CW from the picnic table, the paddle on a skillet!
Click on the image for a larger version.

With a few dozen CW contacts under my belt, I readjusted the antenna and QSYed down to 20 meter SSB, where I worked several pages of stations, my voice getting a bit hoarse before I handed the microphone over to Tim, KK7EF, who continued working the pileup under my callsign.

* * *

After a while, we had to shut down as we needed the picnic table to prepare dinner - but this wasn't the last bit of activation:  Over the next few days - when time was available - I would often venture out on 40, 30, 20 and 17 meter CW - occasionally braving 17 meter SSB:  I generally avoided 20 meter SSB as the band generally seemed to be a bit busy - particularly during the weekend when some sort of activity caused the non-WARC bands to be particularly full.

* * *

By the end of the trip, I had logged about 387 total contacts - roughly 2/3 of them being CW.  When I got home I had to transcribe the paper logs onto the computer and learned something doing this:  If you do such a transcription, try to avoid doing so late at night when you are tired - and always wait until the next day - whether you were tired or not - to go back and re-check your entries BEFORE uploading the logs to LOTW, eQSL and/or the POTA web site!  Being tired, I hadn't thought the above through very well and later had to go back and make corrections and re-upload.


This page stolen from ka7oei.blogspot.com

[END]


Multi-band transmitter and monitoring system for Eclipse monitoring (Part 1)


It should not have escaped your attention - at least if you live in North America - that there are/have been two significant solar eclipses in the recent past and near future:  One that occurred on October 14, 2023 and another eclipse in April, 2024.  The path of "totality" of the October eclipse happened to pass through Utah (where I live) so it is no surprise that I went out of my way to see it - just as I did back in 2012:  You can read my blog entry about that here.

 Figure 1:
The eclipse in progress - a few minutes
before "annularity".
(Photo by C. L. Turner)
I will shortly produce a blog entry related to my activities around the October 14, 2023 eclipse as well.

The October eclipse was of the "annular" type, meaning that the moon was near-ish apogee:  The subtended angle of its disk was insufficient to completely block the sun owing to the moon's greater-than-average distance from Earth.  Unlike a total solar eclipse, there is no time during an annular eclipse when it is safe to look at the sun/moon directly without eye protection.  The sun was mostly blocked, however, meaning that those in the path of "totality" experienced a rather eerie local twilight with shadows casting images of the solar disk:  Around the periphery of the moon it was possible to make out the outline of lunar mountains - and those unfortunate enough to stare at the sun during this time would receive a ring-shaped burn to their retina.

From the aspect of a radio amateur, however, the effects of a total and an annular solar eclipse are largely identical:  The diminution of the "D" layer and partial recombination of the "F" layers of the ionosphere cause what are essentially nighttime propagation conditions during the daytime - geographically limited to those areas under the lunar shadow.

In an effort to help study these sorts of effects - and to (hopefully) better-understand the propagation changes - a number of amateurs went (and are going) out into the field - in or near the path of "totality" - to set up simultaneous, multi-band transmitters.

Producing usable data

Having "Eclipse QSO Parties" where amateur radio operators make contacts during the eclipse likely goes back nearly a century - the rarity of a solar eclipse making the event even more enigmatic.  In more recent years amateurs have been involved in "citizen science" where they make observations by monitoring signals - or facilitate the making of observations by transmitting them - and this will be happened during the October eclipse and should also happen during the April event as well.

While doing this sort of thing is just plain "fun", a subset of this group is of the metrological sort (that's "metrology", not "meteorology"!) and endeavors to impart on their transmissions - and observations of received signals - additional constraints that are intended to make this data useful in a scientific sense - specifically:

  • Stable transmit frequencies.  During the event, the perturbations of the ionosphere will impart on propagated signals Doppler shift and spread:  Being able to measure this with accuracy and precision (which are NOT the same thing!) adds another layer of extractable information to the observations.
  • Stable receivers.  As with the transmitters, having a stable receiver is imperative to allow accurate measurement of the Doppler shift and spread.  Additionally, being able to monitor the amplitude of a received signal can provide clues as to the nature of the changing conditions.
  • Monitoring at multiple frequencies.  As the ionospheric conditions change, their effects at different frequencies also change.  In general, the loss of ionization (caused by darkness) reduces propagation at higher frequencies (e.g. >10 MHz) while, with lessened "D" layer absorption at lower frequencies (<10 MHz), propagation there is enhanced.  With the different effects at different frequencies, being able to simultaneously monitor multiple signals across the HF spectrum can provide additional insight.

To this end, the transmission and monitoring of signals by this informal group have established the following:

  • GPS-referenced transmitters.  The transmitters will be "locked" to GPS-referenced oscillators to keep the transmitted frequencies both stable and accurate to milliHertz.
  • GPS referenced receivers.  As with the transmitters, the receivers will also be GPS-referenced to provide milliHertz accuracy and stability.

With this level of accuracy and precision the uncertainties related to the receiver and transmitter can be removed from the Doppler data.  For generation of stable frequencies, a "GPS Disciplined Oscillator" is often used - but very good Rubidium-based references are also available, although unlike a GPS-based reference, the time-of-day cannot be obtained from them.

Why this is important:

Not to demean previous efforts in monitoring propagation - including that which occurs during an eclipse - but unless appropriate measures are taken, their contribution to "real" scientific analysis can be unwittingly diminished.  Here are a few points to consider:

  • Receiver frequency stability.  One aspect of propagation on HF is that the signal paths between the receiver and transmitter change as the ionosphere itself changes.  These changes can be on the order of Hertz in some cases, but these changes are often measured in 10s of milliHertz.  Very few receivers have that sort of stability and the drift of such a receiver can make detection of these Doppler shifts impossible.
  • Signal amplitude measurement.  HF signals change in amplitude constantly - and this can tell us something about the path.  Pretty much all modern receivers have some form of AGC (Automatic Gain Control) whose job it is to make sure that the speaker output is constant.  If you are trying to infer signal strength, however, making a recording with AGC active renders meaningful measurements of signal strength pretty much impossible.  Not often considered is the fact that such changes in propagation also affect the background noise - which is also important to be able to measure - and this, too, is impossible with AGC active.
  • Time-stamping recordings.  Knowing when a recording starts and stops with precision allows correlation with others' efforts.  Fortunately this is likely the easiest aspect to manage as a computer with an accurate clock can do so automatically (provided that one takes care to preserve the time stamps of the file, or has file names that contain such information) - and it is particularly easy if one happens to be recording a time station like WWV, WWVH, WWVB or CHU.
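To see why milliHertz-level stability matters, note that the frequency resolution of a simple FFT analysis is the sample rate divided by the number of points - so a 1000-second observation can resolve 1 mHz, but only if the receiver itself doesn't drift by more than that over the observation.  A small simulation (illustrative numbers only):

```python
import numpy as np

fs = 100.0                  # I/Q sample rate of a narrow channel, Hz
seconds = 1000              # observation length: bin spacing = 1/seconds = 1 mHz
n = int(fs * seconds)
t = np.arange(n) / fs
# Simulated carrier offset 50 mHz from a nominal 10 Hz tone:
iq = np.exp(2j * np.pi * 10.050 * t)
spectrum = np.abs(np.fft.fft(iq))
freqs = np.fft.fftfreq(n, 1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]    # recovers 10.050 Hz to within 1 mHz
```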

In other words, the act of "holding a microphone up to a speaker" or simply recording the output of a receiver to a .wav file with little/no additional context makes for a curious keepsake, but it makes the challenge of gleaning useful data from it more difficult.

One of our challenges as "citizen scientists" is to make the data as useful as possible to us and others - and this task has been made far easier with inexpensive and very good hardware than it ever has been - provided we take care to do so.  What follows in this article - and subsequent parts - are my reflections on some possible ways to do this:  These are certainly not the only ways - or even the best ways - and even those considerations will change over time as more/different resources and gear become available to the average citizen scientist. 

* * *

How this is done - Receiver:

The frequency stability and accuracy of MOST amateur transceivers is nowhere near good enough to provide usable observations of Doppler shift on such signals - even if the transceiver is equipped with a TCXO or other high-stability oscillator:  Among the few radios that can do this "out of the box" are some of the Flex transceivers equipped with a GPS disciplined oscillator.

To a certain degree, an out-of-the-box KiwiSDR can do this if properly set up:  With a good, reliable GPS signal and when placed within a temperature-stable environment (e.g. a temperature change of 1 degree C or so during the time of the observation) they can be stable enough to provide useful data - but there is no guarantee of such.

To remove such uncertainty a GPS-based frequency reference is often applied to the KiwiSDR - often in the form of the Leo Bodnar GPS reference, producing a frequency of precisely 66.660 MHz.  This combination produces both stable and accurate results.  Unfortunately, if you don't already have a KiwiSDR, you probably aren't going to get one as the original version was discontinued in 2022:  A "KiwiSDR 2" is in the works, but there's no guarantee that it will make it into production, let alone be available in time for the April, 2024 eclipse.

Figure 2:
The RX-888 (Mk2) - a simple and relatively inexpensive
box that is capable of "inhaling" all of HF at once.
Click on the image for a larger version.

The RX-888 (Mk2)

A suitable work-around has been found to be the RX-888 (Mk2) - a simple direct-sampling SDR - available for about $160 shipped (if you look around).  This device has the capability of accepting an external 27 MHz clock (if you add an external cable/connector to the internal U.FL connector provided for this purpose), with which it becomes as stable and accurate as the external reference.

This SDR - unlike the KiwiSDR, the Red Pitaya and others - has no onboard processing capability as it is simply an analog-to-digital converter coupled with a USB3 interface, so it takes a fairly powerful computer and special processing software to be able to handle a full-spectrum acquisition of HF frequencies.

Software that is particularly well-suited to this task is KA9Q-Radio (link).  Using the "overlap and save" technique, it is extraordinarily efficient in processing the 65 megasamples-per-second of data needed to "inhale" the entire HF spectrum.

KA9Q-Radio can produce hundreds of simultaneous virtual receivers of arbitrary modes and bandwidths which means that one such virtual receiver can be produced for each WSPR frequency band:  Similar virtual receivers could be established for FT-8, FT-4, WWV/H and CHU frequencies.  The outputs of these receivers - which could be a simple, single-channel stream or a pair of audio in I/Q configuration - can be recorded for later analysis and/or sent to another program (such as the WSJT-X suite) for analysis.

Additionally, using the WSPRDaemon software, the multi-frequency capability of KA9Q-Radio can be further-leveraged to produce not only decodes of WSPR and FST4W data, but also make rotating, archival I/Q recordings around the WSPR frequency segments - or any other frequency segments (such as WWV, CHU, Mediumwave or Shortwave broadcast, etc.) that you wish.

Comment:  I have written about the RX-888 in previous blog posts:

  • Improving the thermal management of the RX-888 (Mk 2) - link 
  • Measuring signal dynamics of the RX-888 (Mk 2) - link

Full-Spectrum recording

Yet another capability possible with the RX-888 (Mk2) is the ability to make a "full spectrum" recording - that is, write the full sample rate (typically 64.8 Msps) to a storage device.  The result is files of about 7.7 gigabytes per minute of recording that contain everything that was received by the RX-888, with the same frequency accuracy and precision as the GPS reference used to clock the sample rate of the '888.
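The quoted storage rate is easy to sanity-check - assuming 16-bit (2-byte) samples, which is an assumption on my part about how the data is written:

```python
SAMPLE_RATE = 64.8e6          # samples per second from the RX-888
BYTES_PER_SAMPLE = 2          # assuming 16-bit (2-byte) real samples

bytes_per_minute = SAMPLE_RATE * BYTES_PER_SAMPLE * 60
print(f"{bytes_per_minute / 1e9:.1f} GB per minute")   # prints "7.8 GB per minute"
```

That works out to about 7.8 GB (decimal gigabytes) per minute - consistent with the "about 7.7 gigabytes" figure above, and nearly half a terabyte per hour of recording.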

What this means is that there is the potential that these recordings can be analyzed later to further divine aspects of the propagation changes that occurred during, before and after the eclipse - especially by observing signals that one may not have initially thought to consider:  This also can allow the monitoring of the overall background noise across the HF spectrum to see what changes during the eclipse, potentially filling in details that might have been missed on the narrowband recordings.

Because such a recording contains the recordings of time stations (WWV, WWVH, CHU and even WWVB) it may be possible to divine changes in propagation delay between those transmit sites and the receive sites.  If a similar GPS-based signal is injected locally, this, too, can form another data point - not only for the purposes of comparison of off-air signals, but also to help synchronize and validate the recording itself.

By observing such a local signal it would be possible to time the recording to within a few 10s of nanoseconds of GPS time - and it would also be practical to determine if the recording itself was "damaged" in some way (e.g. missed samples from the receiver):  Even if a recording is "flawed" in some way, knowing the precise location and duration of the missing data allows this to be taken into account and, to a large extent, permits the data "around" it to still be useful.
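The achievable timing resolution follows directly from the sample rate:  At 64.8 Msps each sample spans a bit over 15 ns, so aligning a recording to a locally-injected GPS-derived signal to within a sample or two lands squarely in the "few tens of nanoseconds" range:

```python
SAMPLE_RATE = 64.8e6                     # samples per second
ns_per_sample = 1e9 / SAMPLE_RATE        # nanoseconds spanned by one sample
print(f"{ns_per_sample:.2f} ns per sample")   # prints "15.43 ns per sample"
```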

Actually doing it:

Up to this point there has been a lot of "it's possible to" and "we have the capability of" mentioned - but pretty much everything mentioned so far was used during the October, 2023 eclipse.  To a degree, this eclipse is considered to be a rehearsal for the April 2024 event in that we would be using the same techniques - refined, of course, based on our experiences.

While this blog will mostly refer to my efforts (because I was there!) there were a number of similarly-equipped parties out in the field and at home/fixed stations, transmitting and receiving, and it is the cumulative effort - and especially the discussions of what worked and what did not - that will be valuable in preparation for the April event.  Not to be overlooked, this also gives us valuable experience with propagation monitoring overall - an ongoing effort using WSPRDaemon - where we have been looking for/using other hardware/software to augment/improve our capabilities.

In Part 2 I'll talk about the receive hardware and techniques in more detail.





Observations, analysis and field use of the JPC-7 portable "dipole" antenna


Figure 1:
The JPC-7 and its original set of components in the case.  On
the left is a zippered section with the balun, strap, feedpoint
and mounting hardware for the elements.  On the right
can be seen the two telescoping sections, the two loading
coils and the four screw-together mast sections.
Click on the image for a larger version.
The JPC-7 (apparently by BD7JPC) is a portable dipole antenna - somewhat similar to the "Buddipole" - in that it is tripod-mounted, with telescoping elements that can be oriented horizontally.  Both use loading coils to increase the electrical length of the antenna, allowing them to operate down to 40 meters in their standard configuration.

I was able to get mine, shipped, via Ali Express for about US$170, but it is also sold domestically (in the U.S.) from a number of vendors - sometimes under the brand name of "Chelegance".

A portable antenna is not the same as a "home" antenna

As you might expect, this antenna is intended for portable use - and easy-to-assemble, quickly-deployable antennas are not likely to offer high performance compared to their "full-sized, high up in a tree" counterparts that you might have at your home QTH.  Rather, this antenna's height is limited by the tripod on which it is mounted - which, for the lower bands, where its height above ground is definitely below 1/4 wavelength, is likely to put it squarely in the "NVIS" (Near Vertical Incidence Skywave) category - that is, an antenna with a rather high radiation angle that better-favors nearer stations rather than being a DX antenna.

Additionally, its element length as-shipped (with the two screw-in sections and the telescoping whip fully-extended, sans coil) is 125" (3.175 meters) - approximately a quarter-wavelength at 22 MHz, near the 15 meter band - meaning that for all HF amateur bands 15 meters and below it requires the addition of the coils' inductance to resonate the two elements.  Being a loaded antenna - with a small-ish aperture and with coil losses - means that its efficiency IS going to be less than that of its full-sized counterpart (e.g. a half-wave dipole).
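As a quick check on those numbers - this is the free-space figure only, and end effects plus the taper of the telescoping whip pull the practical resonance down a few percent, toward the quoted 22 MHz:

```python
C = 299_792_458          # speed of light, m/s
ELEMENT_M = 3.175        # one element, fully extended, no coil (125")

f_quarter_hz = C / (4 * ELEMENT_M)
print(f"{f_quarter_hz / 1e6:.1f} MHz")   # prints "23.6 MHz"
```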

Of course, the entire reason for using a "portable" antenna is to enjoy the convenience of an antenna that is quick to deploy and fairly easy to transport - and anyone doing this knows (or should know) that one must often sacrifice performance when doing this!

Having said this, after using the JPC-7 in the field several times I've found that it holds up pretty well against a similar "full size" antenna (e.g. dipole) on the higher bands (20 and up) while on 40 meters, subjective analysis indicates that it's down by "about an S-unit".  For SSB (voice) operation, this is usually tolerable under reasonable conditions and for digital or CW, it may hardly be noticeable.

Figure 2:
The components included with the JPC-7 - except the
strap and the manual.
Click on the image for a larger version.

What is included with the JPC-7:

  • Four aluminum mast sections.  These are hollow tubes with (pressed in?) in screw fittings on the ends - one male and the other female, both with M10-1.5 coarse threads that may be assembled piece-by-piece into a mast/extension.  End-to-end these measure 13-3/16"(33.5cm) each, including the protruding screw - 12-3/4"(32.4cm) from flat to flat.  These are 3/4"(1.9cm) diameter.  There are two of these sections per element to achieve the  125"(3.175 meter) length of each.
  • Telescoping sections.  These are stainless steel telescoping rods that are 13-1/8"(33.4cm) long including the threaded stud (12-7/8" or 32.7cm without) when collapsed and 99-11/16"(8' 3-11/16" or 253.2cm) when fully extended - not including the stud.
As with all stainless-steel telescoping whips, it is strongly recommended that you lubricate the sections as soon as you receive them.  As with about every telescoping whip you will ever see, these sections are "stainless on stainless" and as with many friction surfaces between the same type of metal, they will eventually gall and become increasingly difficult to operate as they scratch each other.  I use PTFE (Teflon) based "Super Lube" for this purpose as it does not dry out and become gummy as normal distillate oils like "3-in-1" or "household" do.  Do not use "lubricants" like "WD-40" as these aren't actually lubricants in the traditional sense in that they tend to evaporate and leave a varnish behind.  If the sections do get stiff over time, a buffing with very fine steel wool and/or very fine (1000 or higher) grit sandpaper followed by wiping down and lubricating may help loosen them.
  • Adjustable coils.  These are constructed of what appears to be thermoplastic or possibly nylon with molded grooves for the wire.  This unit is connected to the others via a male threaded stud on the bottom and female threads on the top, both being M10-1.5 like everything else.
The form itself is 4-1/2"(11.4cm) long not including the stud and 1-11/16"(4.3cm) diameter - wound with 34 turns of #18 (1mm) stainless steel wire with an inside diameter of approximately 1.66"(4.21cm) over a length of about 2.725"(6.92cm).  It has a slider with a notched spring that makes contact with the coil and this moves along a stainless steel rod about 0.12"(3mm) diameter that is insulated at the top, meaning that as the slider is moved down, the inductance of the coil is increased.  I suggest that a drop of lubricant (I recommend the PTFE-based "Super Lube" as it doesn't dry and get gummy) be applied to the slider to make it easier to adjust and to minimize the probability of galling.
 
The coils have painted markings indicating "approximate" locations of the tap for both 20 and 40 meters when the telescoping section is adjusted as described in the manual.  These coils are wound with 1mm diameter 316 stainless steel wire:  The maximum inductance is a bit over 20uH and the DC resistance is about 4 ohms - more on this later.
Figure 3:
A close-up of the feedpoint mount showing the
brass inserts and index pins.  The holes in the knurled
knobs are sized to receive the miniature banana plugs
from the balun.
Click on the image for a larger version.

  • Feedpoint mount.  This is a heavy plastic piece molded about pieces of brass into which the elements/coils are threaded.  There are three 10mm x 1.5mm female threads into the brass inserts plus another female thread of larger size (1/2" NPT) into which the aluminum 5/8" gaffer stud mount is screwed.  On the surfaces with the brass inserts and the 10mm x 1.5mm female threads are a series of index holes into which the element mounts (described below) are seated to allow the elements to be adjusted at various angles.  Electrical connection is made via holes to receive 2.5mm miniature banana plugs (visible in Figure 3) which contact the adjacent 10mm x 1.5mm female thread bodies.
  • Element mounts.  These are two heavy-duty nickel-plated brass adapters that are held to the feedpoint mount via 10mm x 1.5mm screws with large handles - both included.  Into the mounting surfaces are holes to receive index pins, allowing the elements to be rotated to various angles - from a horizontal dipole to a "Vee" configuration - and even to an "L" with one element vertical and the other horizontal.  It can also be configured with just a single element as a plain vertical if one so-chooses - the counterpoise/ground needing to be supplied by the user.  These may be seen in Figure 8, below.
  • 5/8" stud (gaffer) mount.  As mentioned earlier, this kit includes a male 5/8" stud mount commonly found on photographic lighting tripods.  The other side of this has 1/2" NPT pipe threads that screw into the feedpoint mount.  This piece is shown in Figure 4.

Figure 4:
5/8 stud mount adapter to be used with
lighting tripods.  The "other" side is a 1/2 inch
NPT pipe thread that screws into the feedpoint mount.
Click on the image for a larger version.

  • 1:1 balun.  This appears to be a "voltage" balun, with DC continuity between the "balanced" and "unbalanced" sections and across the windings themselves.  This is in contrast to a "current" type balun that would typically consist of feedline, twisted pair or two conductors wound as a common-mode choke on a ferrite core. More on this later.
  • Hook-and-loop ("Velcro") strap for the balun.  This is used to attach the balun to the mast to prevent the weight of the coax and balun from pulling on the feedpoint mount.  This strap appears to be generic and doesn't really fit the balun too well unless it is cinched up, so I zip-tied it in place to keep the two together. 
  • Padded carrying case.  This zippered case is about 14" x 9"(35.5x23cm) with elastic loops to retain the above antenna components and a zippered "net" pocket to contain the counterpoise/radial cable kit and the instructions.  There is ample room in this case to add additional components such as coaxial cable - and enhancements to the antenna, as discussed below.  
  • Instruction manual.  The instructions included with this antenna are only somewhat better than typical "Chinese English" - apparently produced with the help of an online translator rather than someone with intimate knowledge of the English language, resulting in a combination of head-scratching, laughter and frustration when trying to make sense of them.  Additionally, the instructions that came with my antenna included those for the JPC-12 vertical as well, printed on the reverse side of the manual.
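As a sanity check on the coil dimensions given above, Wheeler's well-known approximation for a single-layer air-core solenoid (dimensions in inches, result in microhenries) lands close to the measured maximum inductance - the 0.85" mean radius is my estimate from the quoted 1.66" inside diameter plus the 1mm wire:

```python
def wheeler_uH(radius_in, length_in, turns):
    """Wheeler's approximation for a single-layer air-core solenoid."""
    return (radius_in**2 * turns**2) / (9 * radius_in + 10 * length_in)

L_uH = wheeler_uH(radius_in=0.85, length_in=2.725, turns=34)
print(f"{L_uH:.1f} uH")   # prints "23.9 uH"
```

This is in the ballpark of the measured "bit over 20 uH", with the difference plausibly down to the slider's usable range and the crudeness of the approximation.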

Construction and build quality

About a year ago I purchased a JPC-12 vertical antenna and it shares many of the same components as this antenna - the only real differences are that this antenna comes with two telescoping whips and loading coils, the center mount for the elements, a 1:1 balun, and the 5/8" stud adapter for the center mount.

Many of these components are the same as supplied with the JPC-12 vertical:  The loading coils, the telescoping whips, and the screw-together antenna sections.  In other words, if you have both antennas, you can mix-match parts to augment the other.  You can, in fact, buy kits of parts for either antenna to supply the missing pieces to convert from one to the other.

Mechanically, this antenna seems to be quite well built:  During use, I have no sense of anything being "about to come apart" or "just barely good enough".  I suspect that the designers of this antenna did so iteratively, and the end product is a result of some refinement over time.  The only fragile parts are the telescoping whips, but these things are, by definition, fragile, no matter who makes them!

How it is mounted

This antenna does NOT come with any tripod or other support, but it offers three ways of being mounted:

  • 1/2" NPT threads.  The center support, as the primary mounting, has female 1/2" NPT threads.
  • 5/8" male stud mount.  This antenna comes with a machined aluminum mount (seen in Figure 4) that screws into 1/2" NPT threads in the center support that is a 5/8" stud mount - sometimes referred to as a "Gaffer" or "Grip" mount - of the sort found everywhere on tripods used for holding photographic lights.
  • 10mm x 1.5mm thread.  If you want to configure this antenna as a dipole, you also have the option of using a 10mm x 1.5mm thread that is on the side opposite the female threads into which the 5/8" stud mount screws.  While this thread isn't particularly common in the U.S.A., it would seem that this is a common size for portable antennas everywhere else in the world and hardware of this size is available at larger U.S. hardware stores.  As this mounting point may be used as part of the antenna (when configured in an "L" shape or as a vertical-only), it has the same threads as the screw-in element sections.

Figure 5:
A homebrew double-female 5/8 stud adapter.  These adapters
have 3/8" threads and were attached using a thread
coupler.  This piece was necessary as both the antenna and my
tripod have male 5/8" stud mounts on them!
Click on the image for a larger version.

For me the 5/8" male stud mount is the most useful as I happen to have on hand an old gaffer tripod (light stand) of this sort - but there's a catch:  It, too, has a 5/8" male stud mount!  It would seem that these tripods come both ways - with either a male or female 5/8" mount - but for less than US$15 I was able to construct a "double-female" adapter that solved the problem.  From Amazon, I ordered two 5/8" female stud to 3/8"-16 adapters and coupled them together with a 3/8" thread coupler as seen in Figure 5.  The only "trick" with this was that I had to sort through my collection of flat washers to find the combination of thicknesses that resulted in both knobs facing the same direction when they were tightened.

Frequency coverage

This antenna is advertised to cover 40 through 6 meters - and this is certainly true:  When the four supplied mast sections are installed (two per side) the lowest frequency at which it can be resonated with the telescoping rods at full extension and the inductors set at maximum is around 6.7-6.8 MHz - well below the entirety of the 40 meter band.

On 40 meters, the 2:1 VSWR bandwidth was typically around 150 kHz:  A 2:1 VSWR is about the maximum mismatch at which most modern radios will operate at full power before SWR "foldback" occurs.  Of course, if your radio has a built-in tuner - even one with a limited range - you will certainly be able to make the radio "happy" across the entire 40 meter band without fussing with the antenna.

On the other extreme, with the minimum coil inductance and the two telescoping rods at maximum the resonant frequency was about 21.7 MHz:  This means that for all bands 15 meters and lower, you will need the inductors - but for 12 meters and up you can omit them entirely (which is recommended!), bringing the antenna to resonance solely by adjusting the length of the telescoping sections.

Tuning the antenna

This may be where some people have issues.  I am very comfortable using a NanoVNA:  I have several of these as they are both cheap and extremely useful - the only down-side really being that their screens are not easily viewed in direct sunlight - but simply standing with my back to the sun was enough to make it usable as all one is trying to see is the trace on the screen rather than any fine detail.

The biggest advantage of the NanoVNA over a traditional antenna analyzer is that you get the "big picture" of what is going on:  You can instantly see where the antenna is resonant  - and how good the match may be.  More importantly, you can see at a glance if the antenna is tuned high (too little inductance) or too low (too much inductance) and make adjustments accordingly whereas using a conventional antenna analyzer will require you to sweep up and down:  Still do-able, but less convenient.

Tuning is somewhat complicated by two factors:

  • There are two coils to adjust - and they must both be pretty close to each other in terms of adjustment to get the best match.  Simply looking at the coils one can "eyeball" the settings of the slider/contact to get them very close to each other - something that becomes easier with practice.
  • The "resolution" of the inductors' adjustments is limited by the fact that one can make adjustments by one turn at a time.  At 20 meters and higher, being able to only adjust inductance one turn at a time is likely to result in the best match being just above or below the desired frequency.  At lower frequencies (lots of turns) - say 40 and 30 meters - you can likely get 2:1 or better by adjusting the coil taps alone, but at higher frequencies you will likely need to tune for the best match just below the frequency of interest and then shorten the telescoping rods slightly to bring it right onto frequency.

 Once I'd used the antenna a few times I found that I could change bands in 2-3 minutes as I would:

  • Lower the antenna to shoulder height so that the coils and telescoping rods may be reached.  If you had previously shortened the telescoping elements for fine-tuning a band you should reset them to full length.
  • Set the NanoVNA to cover the span from the frequency to which the antenna is already tuned to the frequency where I want to go:  If I was setting it up for the first time I would set the 'VNA to cover above and below the desired frequency by 5 MHz or so, so that I could see the resonant point even when it was far off-frequency.  After using it a few times you will remember about where the coil taps need to be set for a particular band.
  • On the NanoVNA I would then set a marker to the desired frequency.
  • I would then "walk" both coils up/down to the desired frequency while watching the 'VNA.  As the tuning of the elements interact, you may have to iterate a bit to get the VSWR down.  Again, you may have to tune for best match at a frequency just below the target frequency and then shorten the telescoping sections.
  • I would raise the mast to full height again.  I noticed a slight increase in resonant frequency when raising the antenna - particularly on the lower bands (40 and 30 meters) - on the order of 50 kHz on 40 meters.  Usually, this doesn't matter, but with a bit of practice/experience you'll be able to compensate for this while tuning.
  • A match of 2:1 or better was easily obtained - but don't expect to get a 1:1 match all of the time as the only adjustments are those of resonating the elements.  Practically speaking, there is no performance difference between a 2:1 and 1:1 match unless your radio's power drops back significantly:  An antenna tuner could be used, but this will surely insert more loss than having a modest mismatch!
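The point that a 2:1 match costs essentially nothing is easy to quantify:  The fraction of power reflected is set entirely by the VSWR, and at 2:1 the resulting mismatch loss is only about half a dB:

```python
import math

def mismatch_loss_db(vswr):
    """Power lost to reflection for a given VSWR, in dB."""
    gamma = (vswr - 1) / (vswr + 1)      # magnitude of the reflection coefficient
    return -10 * math.log10(1 - gamma**2)

print(f"{mismatch_loss_db(2.0):.2f} dB")   # prints "0.51 dB"
```

Half a dB is far below what anyone can hear - and likely less than what a lossy antenna tuner would insert to "fix" it.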

Figure 6:
As with almost any inductor adjustable using sliders, care
should be taken to assure that only one turn is being touched
by the contact, as shown.
Click on the image for a larger version.
All of that sounds complicated - and it may be the first time doing it - but I found it to be very quick and easy, particularly after even just a little bit of practice!

Carefully adjusting coil taps

If you look very carefully at the sliding coil taps you'll notice that, when very carefully adjusted, they will contact just one turn of wire - but it is almost easier for the contact spring to bridge two turns of wire, shorting them together.  When this happens the inductance will go down slightly and you may see the resonance go up in frequency unexpectedly.  Additionally, the shorting of two turns can also reduce the "Q" (and efficiency) of the coil slightly.

If you are aware of this situation - which can occur with nearly all tapped inductors adjusted with a slider - you can start to "feel" when the slider bridges two turns of the coil and avoid its happening as you make the adjustments.

* * *

Suggested modifications/additions:

All electrically-short antennas that require series inductance for tuning to resonance - like this one - will lose efficiency due to losses in the coil, but this can be offset - at least somewhat - by increasing the length of the elements themselves.  One of the easiest ways to do this is to purchase a couple of extra screw-on mast sections:  The addition of one on each side will increase the total length by about 25"(64cm) and allow a slight decrease in the required inductance - resulting in slightly lower loss and increase the aperture of the antenna slightly.  These additional screw-on sections are typically available from the sellers of the antenna for between US $10 and $15 each but are often called something like "Dedicated lengthened vibrator for JPC-7 (JPC-12)" or similar due to quirks of the translation.

Figure 7:
The elements may be lengthened by clipping a lead to each
end of the telescoping sections, reducing the amount of
needed inductance - and also allowing resonance on lower
bands - in this case, 60 meters.
Click on the image for a larger version.

While adding two additional sections will bring the resonant frequency down to about 5.7 MHz with full inductance and extension of the telescoping sections, the antenna can be made to cover 60 meters by clipping short (18" or 46cm) jumper leads to the very ends of the antenna elements and letting them hang down.  In testing it on the air, the signals were about 1 or 2 "S" units below a full-sized dipole, but still quite good for a fairly compact antenna that was close to the ground in terms of wavelength.

Of course these leads can be used on any band for which the coils are needed, lowering the required inductance and reducing losses:  As it is the parts of the antenna carrying the most RF current that radiate the vast majority of the signal - and since those portions will always be the sections right near the coils for this type of antenna - adding these drooping wires at the ends won't appreciably affect the antenna pattern or its polarization.

As there is plenty of room to do so in the zipper case, I have since added two extra sections and two "clip leads" permanently into the kit.

Finally, I would order at least two extra telescoping sections as these are the most fragile parts of the antenna kit.  These can also be ordered from the same folks that sell the antennas for US $12-$16 each and are typically referred to as something like "304 stainless steel 2.5M whip antenna for PAC-12 JPC7 portable shortwave antenna".

The reason for ordering two of them is that if the antenna falls over, both whips are likely to be damaged (ask me how I know!):  The cost of getting two extra whips is likely to be less than the cost of fuel for even a modest road trip to wherever you are going, so their price should be kept in perspective.  As the zippered case for the antenna has plenty of extra elastic loops inside, there is ready storage for these two extra whips with no modification.

A word of caution:  However you store them, do not allow the telescoping whips to lie loosely:  If they bash into something else they may be easily dented, which can make it impossible for them to be extended/retracted.  For this reason they should be secured in the elastic strap, or individually in tubes or padded cases.

Note:  There are also available much heavier and longer telescoping whips with the same M10x1.5 thread that would easily allow 60 meter coverage:  I have not tried these to see how well they would work, mechanically, or if it would even be a good idea to do so (e.g. extra stress on the tubes, coils, mounting point - or how stable such a thing might be on a tripod).

Figure 8:
The mounting of the balun, just below the feedpoint mount.
The index holes allow flexibility in the orientation, the
connection being made by 2.5mm banana plugs.
Here, the antenna is shown with the elements configured
one hole higher than "flat", forming a lazy "Vee"
shape as seen in Figures 9 and 10.
Click on the image for a larger version.

Additional comments:

"To vee, or not to vee"

The feedpoint mount has a number of indexed holes that allow the elements to be mounted in a variety of configurations, from flat, in a number of "Vee" configurations, or even an "L" or vertical configuration.  

Personally, I use the flattest "Vee" configuration as seen in Figures 8, 9 and 10.  This configuration keeps the drooping ends of the telescoping whips higher than the feedpoint and helps clear any local obstacles (trees!)  - and just looks cool!

As can be seen in Figure 8, the connection between the balun and the feedpoint is made by plugging 2.5mm miniature banana plugs into the brass receptacles on the feed.  Shown in the photo are connections to the two sides, typically used for a dipole arrangement, but the third, unused connection on the top could be used to hold an element horizontal while one of the side connections hold it vertical - more on the use of this antenna as a vertical in the next section.

It should be no surprise that these 2.5mm miniature banana plugs are quite small and fragile and if one isn't careful - say, by allowing the weight of the balun to be supported by the wires rather than using the hook-and-loop strap - they can be broken.  For this reason I ordered a pack of ten 2.5mm banana plugs from Amazon and made a pair of short (4", 10cm) leads - one end with a small alligator clip and the other with a 2.5mm banana plug - to allow me to make a temporary connection should one get broken off in the field - something that could torpedo an activation if you didn't have spare parts! 

Operating as a vertical antenna

Because of the flexibility of the mounting point, it is possible to use this same kit as a vertical antenna with the second element as a resonant (rod) ground "plane" if - due to space or personal preference - emitting a signal with a vertically-polarized component is desired.  While this will certainly "work", if you do plan to operate with vertical polarization it's recommended that you add several (2 or more) wire "radials" or counterpoises.

Because of the included balun (more on this in a moment) the coaxial feedline itself will not act as an effective part of the counterpoise network so rather than connecting additional radials to the shield, the ends of the wire should be clamped under the washer/bolt that holds the horizontally-configured element in place.  Of course, one need not use the balun and connect the coaxial cable directly, but if you choose this option you will be on your own to supply the means to make such a connection.

For best results with the fewest number of radials, choosing lengths that are odd-number quarter wavelengths long (1/4, 3/4, 5/4) and keeping them elevated a foot (30cm) or more off the ground is suggested as this will help minimize "ground" losses.  Having said this, almost no matter what you do, you will probably be able to radiate a useful amount of signal:  Operating CW or digital modes offers an improvement in "talk" capability owing to their efficiency - but if you are planning to operate SSB, it's worth taking a bit of extra time and effort to maximize performance.

Would I operate this antenna in "vertical" mode?  While I don't have plans to do so, I have purchased an extra ground stake of the sort used on the JPC-12 vertical, and the short banana plug/clip lead jumpers that I made could be used to make a temporary connection directly to a coaxial connector.

Nature of the balun

The supplied balun has a 1:1 impedance ratio and has DC connection between the input and output - but since there is a DC connection between all of the conductors, it is more than a simple current balun (e.g. transmission line wound on ferrite).  As the balun seems to work well, I have no reason to break it open to figure out what's inside, but I did a bit of "buzzing" of the connections with a meter to measure inductance and here are the results:

  • Between coax shield and center conductor:  16.9uH
  • Between red and black (on antenna side):   16.9uH
  • Between center coax and black:  38.5uH
  • Between center and red:  3.4uH
  • Between Shield and black:  3.4uH
  • Between Shield and red:  3.4uH
  • The DC resistance between any combination of the leads is well under 1 ohm.

What does this tell us?  The inductance readings of about 16.9uH indicate that this may be a voltage balun providing about 500 ohms of inductive reactance at 5 MHz - more than enough for reasonable efficiency.  The interesting reading is the inductance between the center coaxial connection and the black wire which is only twice the inductance of the input or output windings:  If there was a direct connection between one of the coax and one of the output wires this would imply twice the number of turns and four times the inductance - but since it is only twice, this indicates that the total number of turns in the "center coax to black" route is about sqrt(2) (or 1.414x) as many turns as the primary/secondary - or there is another inductor in there.
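Because the inductance of a winding on a common core scales as the square of its turns, the measured values can be converted into approximate turns ratios - a minimal sketch of the reasoning above, valid only under the assumption that the windings share the same core path:

```python
import math

L_winding = 16.9e-6     # measured: coax side, and red-to-black, henries
L_series = 38.5e-6      # measured: center-coax to black, henries

# L is proportional to N^2, so the turns ratio is the square root
# of the inductance ratio.
turns_ratio = math.sqrt(L_series / L_winding)
print(f"{turns_ratio:.2f}")   # prints "1.51"
```

A direct series-aiding connection of two equal windings would give twice the turns and four times the inductance (a ratio of 2.0); the measured 1.51 - near sqrt(2) - is what points to something other than a simple tapped connection inside.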

Figure 9:
The JPC-7 backgrounded by red rock during a POTA
operation at K-0010.
Click on the image for a larger version.

While I'm sure that the balun is very simple, its exact configuration/wiring escapes me at this time.

Coil losses

As mentioned earlier, the coil is wound with 18 AWG (1mm diameter) type 316 stainless steel wire.  Fortunately, this wire appears to be austenitic - which is to say that it is not of the variety that is magnetic and thus has a permeability of unity:  Were it magnetic, this would negatively impact performance significantly.

Knowing the diameter of the coil form and the fact that there are 34 turns, we know that the total length of the wire used is approximately 180 inches (457cm), and measurement shows that the stainless steel coil has a total DC resistance of about 4 ohms.  Using Owen Duffy's online skin effect calculator (link) - and assuming 1mm diameter 316 stainless - we can calculate the approximate RF resistance, including skin effect (the tendency for RF to flow on the outer skin of a conductor rather than through its entire cross-section), versus frequency:

  • 3.5 MHz = 5.2 ohms
  • 7 MHz = 7.2 ohms
  • 14 MHz = 9.6 ohms
  • 28 MHz = 13.6 ohms
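
Those numbers can be roughly reproduced with the textbook skin-depth formula - a simplified model that treats the current as flowing in a surface layer one skin depth thick and ignores the proximity effect between turns, assuming a resistivity of about 7.4E-7 ohm-meters for 316 stainless:

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space, H/m
RHO_316 = 7.4e-7       # resistivity of 316 stainless, ohm*m (approximate)

def rf_resistance(length_m, dia_m, f_hz, rho=RHO_316):
    # Skin depth: delta = sqrt(rho / (pi * f * mu0)) - valid when
    # delta is much smaller than the wire radius.
    delta = math.sqrt(rho / (math.pi * f_hz * MU0))
    # Resistance of a surface layer one skin depth thick:
    return rho * length_m / (math.pi * dia_m * delta)

for f_mhz in (3.5, 7, 14, 28):
    r = rf_resistance(4.57, 1e-3, f_mhz * 1e6)
    print(f"{f_mhz:>4} MHz: {r:.1f} ohms")
```

The results land within about 10% of the calculator's figures above; the remaining difference is likely proximity effect and resistivity tolerance, which the online calculator handles more carefully.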

If we make the very broad assumption that the feedpoint resistance at each coil is about 25 ohms (the two in series being around 50 ohms), we can see that in this hypothetical situation about a third of the total resistance could be due to the coil.  Since P = I²R - and if we presume that the current is consistent throughout the coil (it probably is not) - the power loss is proportional to the resistance, implying that about 1/3rd of the total power is lost in the coil.  In practical terms, a 33% power loss amounts to only about 1.8dB of signal reduction - a fraction of one "S" unit, so this loss may go unnoticed under typical conditions.

In operation, we would be unlikely to need all - or even most - of the turns of the coil on the higher bands, so overall coil losses will drop as the required loading inductance at those frequencies is also significantly reduced.  Since we actually use only about 2/3 of the turns of the coil on 40 meters, the loss resistance there is more likely on the order of 5 ohms rather than 7.2, reducing the loss even further.

Note:  K6STI's "coil" program - Link - calculates the loss for this coil as being closer to 8 than 5 ohms - a bit higher than the simple loss calculation of Owen Duffy's wire calculation and likely more representative of in-situ measurements.

When operating on 40 meters with 100 watts, the coils definitely do get quite warm - but not dangerously so and thus I would presume that the very rough estimates above are likely in the ballpark.

By comparison, the calculated DC resistance of the same length of 18 AWG bare copper wire is under 0.5 ohms, and its RF resistance due to skin effect is around 2 ohms at 28 MHz and about an ohm at 7 MHz - roughly a 7:1 difference from the stainless wire.  If the above analysis is in any way close to correct - and presuming that the feedpoint resistance of the individual coil stayed at 25 ohms (it probably won't) - our losses at 7 MHz when using the full coil (again, we don't!) would drop from about 30% to less than 5%.

As a consequence, if the coil were wound with bare copper or silver-plated wire, I would expect not only that the antenna would become narrower than its present 40 meter 2:1 VSWR bandwidth of about 150 kHz - which would make it trickier to tune - but also that the feedpoint resistance would drop, possibly increasing the VSWR at the feedpoint.  From a practical standpoint, even a modest antenna tuner capable of handling only a 3:1 mismatch should be able to cope with this, but it is likely that some of the gains from using lower-loss wire would be offset by increased feedline mismatch loss and loss within the tuner - both of which could easily exceed 3dB in a portable set-up with moderately long, small-diameter coax.

Would it be worth rewinding the coil with readily-available 18 AWG (1mm dia.) silver-plated or bare copper wire?  Maybe.  I obtained another adjustable coil from the same vendor and rewound it with 1mm diameter (18 AWG) silver-plated copper wire of the sort sold for making jewelry.  I am in the process of running comparisons/tests and will post a blog entry about that in the future.

Final comments

Figure 10:
Operating 20 meter CW from POTA entity K-6085, with the
Conger mountains and the JPC-7 dipole in the background.
Click on the image for a larger version.

Is this an antenna that is worth getting?  I would have to say "yes".

Remembering that you will also need to supply a suitable tripod mount (e.g. an inexpensive "light stand"), this antenna is quite portable and, with a bit of practice, quick to set up and adjust.  Unlike a vertical antenna, it doesn't need a set of ground radials, and the antenna itself will likely be up and above everyone's heads when it is deployed.

Best used on the higher bands (20 and higher) its efficiency will be quite good - certainly equal to or better than a typical mobile antenna.   As this is a large-ish antenna on a tripod, be sure to weigh down the legs and/or attach simple guying to it to prevent it from blowing over in the wind or being knocked over by tripping over the coax:  I can attest personally that the latter can easily happen!

I also have the JPC-12 vertical (which will be discussed in a future post) and I find this antenna - the JPC-7 loaded dipole, that is - far more convenient to use, as it needs no radial system.  This is particularly true if you plan to change bands several times during an operation - quite likely on the higher bands as propagation varies over the course of a few hours - since best performance from the vertical requires adjusting the radials as well as the antenna itself (although it would probably work "just fine" with the radials left at maximum length).  Another advantage of this being a (largely) horizontally-polarized antenna is that in an urban environment it is likely to intercept less noise on receive than a vertical - and it is more inconspicuous in its deployment than a taller vertical.

For the lower bands (40 and 30 meters) the JPC-7 works quite well - particularly if one operates CW or digital modes.  As mentioned, it can also work competently on 60 meters with the addition of extra element length - by purchasing extra rods and/or simply attaching "drooping" wires to the ends of the telescoping rods.

Over the course of several POTA and related activations I have made about 500 contacts with this antenna on the bands 60 through 15 meters - on CW and voice.  I'm sure that the antenna works well on 12, 10 and 6 meters as well, but I just haven't tried it on those bands.

Overwhelmingly, the sense has been "If I can hear them, they can hear me" with this antenna, as I have worked quite a few QRP and DX stations that I could barely copy above the band's natural QRN level.  Admittedly, at some of these times I was on the receiving end of the frenzy - being the activator during POTA operation - but there were many times when I had to stop operating not because I ran out of people to work, but because I ran out of time.

* * * * *

This page stolen from ka7oei.blogspot.com

[End]


A simple VHF notch cavity from scraps of (large) Heliax


In a previous post I discussed how a band-pass "cavity" could be constructed from a chunk of 1-5/8" Heliax (tm) cable (a link to that article is here).  This is the follow-up to that article.

Figure 1:
The dual notch filter assembly - installed at the
repeater.
Click on the image for a larger version.

Notch versus band-pass

As the name implies, a "notch" cavity (or filter) removes only a specific frequency, ideally leaving all others unaffected, while a "band pass" cavity does the opposite - it passes only a specific frequency.  This being the real world, neither type of filter is perfect - which is to say that the "width" of the notch or pass response is not infinitely narrow, nor is the filter perfectly inert at frequencies other than where it is supposed to work:  The notch filter will have some effect away from its frequency of rejection, and a pass cavity will let through off-frequency energy.

The degree to which it is imperfect is significantly determined by the "Q" (quality factor) of the resonator, and in general, the bigger the cavity (the diameter of the conductors and of the container surrounding them) the better the performance - "narrowness" in the case of the notch filter, and "width" and low loss in the case of the band-pass cavity.

The use of large-ish coaxial cable as compared to smaller cable (like RG-8 or similar) is preferred as it will be "better" at everything that is important - but even a cavity constructed from 1-5/8" coax will be significantly inferior to a relatively small 4" (10cm) diameter commercial cavity.  There are many instances, however, where "good" is "good enough".
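To get a rough feel for why Q matters, the 3 dB bandwidth of a single resonator is just its center frequency divided by its loaded Q.  The Q values below are purely illustrative guesses - not measurements of any particular cavity:

```python
def notch_3db_width_khz(f0_mhz, loaded_q):
    # 3 dB bandwidth of a single resonator: BW = f0 / Q_loaded,
    # returned here in kHz for a frequency given in MHz
    return f0_mhz * 1e3 / loaded_q

# Illustrative loaded Q values only - a small coax "cavity" would sit
# toward the low end, a large commercial cavity toward the high end:
for q in (200, 500, 2000):
    print(f"Q = {q:>4}: 3 dB width ~ {notch_3db_width_khz(145.0, q):.0f} kHz")
```

The higher the loaded Q, the narrower the response - which is exactly why the larger-diameter cavity performs better at separating closely-spaced frequencies.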

Case study:  Removal of APRS/packet transmitter energy from a repeater input

As noted in the article about the band-pass cavities linked above, a typical repeater duplexer - even though it may have the words "band" and "pass" on the label and in the literature - RARELY has an actual, true "band-pass" response.  In other words, a true "bandpass" cavity would have tens of dB of attenuation 20 MHz away from its tuned frequency - but most duplexers found on amateur repeaters will actually be down only 6-10 dB or so, meaning that off-frequency signals (FM broadcast, services around 150-174 MHz, TV transmitters) will hit the receiver nearly unimpeded.  When I tell some repeater owners of this fact, I'm often met with skepticism ("The label says 'band-pass'!") - but these days, with inexpensive NanoVNAs available for well under $100, they can check it for themselves - and likely be disappointed.

Many clubs have replaced their old Motorola, GE or RCA repeaters from the 70s and 80s with more modern amateur repeaters (I'm thinking of those made by Yaesu and Icom) and found that they were suddenly plagued with overload and IMD (intermod).  The reason for this is simple:  The old gear typically had rather tight helical resonator front-end filters while the modern gear is essentially a modified mobile rig - with a "wideband" receiver - in a box.  In this case, the only real "fix" would be the installation of band-pass cavities on the receive and transmit paths in addition to the existing duplexer.

In the case of APRS sharing a radio site, the problem is different:  Both are in the amateur band and it may be that even a "proper" pass cavity may not be enough to adequately reject the energy if the two frequencies are close to each other.  In this case, the scenario was about as good as it could be:  The repeater input was at 147.82 MHz - almost as far away as it could be from the 144.39 APRS frequency and still be in the amateur band.

What made this situation a bit more complicated was the fact that there was also a packet digipeater on 145.01 MHz - a bit closer to the repeater input.  Since 145.01 is about 600 kHz away from the 144.39 APRS frequency, just one notch wouldn't be quite enough to do the job:  We would need TWO.

Is it the receiver or transmitter?

Atop this was another issue:  Was it our receiver that was being desensed (overloaded) by these packet transmitters, or were those transmitters generating broadband noise across the 2 meter band, effectively desensing the repeater's receiver?

We knew that the operators of the packet stations did not have any cavity filtering on their own gear and were reluctant to spend the time, effort and money to install it unless they had compelling reason to do so.  Rather than just sit at a stalemate, we decided to do due diligence and install notch filtering on the receiver to answer this question - and give the operators of the packet gear a compelling reason to take action if it turned out that their transmitters were the culprit.

A simple notch cavity:

Suitable pass cavities are readily available for purchase new from a number of suppliers and used from auction sites - they are also pretty easy to make from copper and aluminum tubing - if you have the tools.  Because of the rather broad nature of a typical pass cavity, temperature stability is usually not much of an issue in that its peak could drift hundreds of kHz and only affect the desired signal by a fraction of a dB.

Another material that could be used to make reasonable-performance pass cavities is larger-diameter hardline or "Heliax"(tm).  Ideally, something on the order of 1-5/8" or larger would be used owing to its relative stiffness and unloaded "Q" and either air or foam dielectric cable may be used, the main difference being that the "Q" of the foam cable will be slightly lower and the cavity itself will be somewhat shorter.

Figure 2:
Cutting the (air core) cable to length
Refer to the calculator on the KF6YB web page, linked
at the end of this article.
Click on the image for a larger version.

The "Heliax notch cavity" described here can be built with simple hand tools, and it uses a NanoVNA for tuning and final adjustment.   While its performance will not be as good as a larger cavity, it will - in many cases - be enough to attenuate strong, out-of-band signals that can degrade receiver performance.

Using 1-5/8" "Heliax":

Note:  For an online calculator to help determine the length of cable to use, see the link to KF6YB's site at the end of this article.

The "cavity" described uses 1-5/8" air-core "Heliax" - and it is necessary for the inner conductor to be hollow to accommodate the coupling capacitors.  Most - but not all - cable of this size and larger has a hollow center conductor.  Cable of larger diameter than 1-5/8" should work fine - and is preferred - but smaller cable may not be practical in situations where the notch and desired frequency are closely spaced, for reasons of unloaded "Q".  If the center conductor is solid, or if its inside diameter cannot accommodate the coupling capacitors (described later on), you will have to improvise their construction using either a discrete variable capacitor or a small "sleeve" capacitor external to the piece of cable, similar to the coupling capacitors described below.
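For a starting length, a shorted quarter-wave stub is simply a quarter of the in-cable wavelength.  The sketch below assumes a velocity factor of 0.92 for air-dielectric Heliax - that figure is an assumption (check your cable's data sheet; the KF6YB calculator linked at the end of this article is the better reference).  Note that, as described below, the stub is deliberately cut a bit shorter than this so that the coupling capacitor can pull the resonance down onto frequency:

```python
def quarter_wave_inches(f_mhz, velocity_factor=1.0):
    # Free-space wavelength in meters is 300 / f(MHz); divide by 4 for
    # a quarter wave, scale by the cable's velocity factor, and
    # convert meters to inches.
    return (300.0 / f_mhz) / 4.0 * velocity_factor * 39.3701

print(f"Free space       : {quarter_wave_inches(146.0):.1f} in")
print(f"Air Heliax (est.): {quarter_wave_inches(146.0, 0.92):.1f} in")
```

With the assumed velocity factor this comes out a bit over the 17-18" lengths used in the article - consistent with cutting the resonator short so that it starts out resonant above the 2 meter band.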

Preparing the "shorted" end:

For 2 meters, a piece of cable 18" long was cut.  For the air-dielectric cable, it's recommended that one cuts it gently with a hand saw rather than a power tool, as the latter can "snag" and damage the center conductor.

Figure 3:
The "shorted" end of the stub with the slits bent to the middle
and soldered to the center conductor.
This end should be covered with electrical tape and/or
RTV/silicone to keep out insects/dirt.
Click on the image for a larger version.

For the "cold" (e.g. shorted) end, carefully (using leather gloves) remove about 3/4" (19mm) of the outer jacket and then clean the exposed copper shield with a wire brush, abrasive pad and/or sandpaper.  With this done, use a pair of tin snips to cut slots about 1/2" (12mm) deep and 1/4" (6mm) wide around the perimeter.  Once this is done, use a pair of needle-nose pliers to remove every other tab, resulting in a "castellated" series of slots.  At this point, using a pair of diagonal pliers or a knife, cut away some of the inner plastic dielectric so that it is about 1/2" (12mm) away from the end of the center conductor.

Now, clean the center conductor so that it is nice and shiny and then bend the tabs that were cut inwards so that they touch the center conductor.  Using a powerful soldering iron (I used a 150 watter) or soldering gun - and, perhaps a bit of flux - solder the shield tabs to the center conductor all of the way around.  It's best to do this with the section of coax laying on its side so that hot solder/metal pieces do not end up inside the coax - particularly if air-core cable is used.  If you used acid-core flux, carefully remove it before proceeding.

With one end of the cable shorted you can trim back any protruding center conductor and file any sharp edges - again taking care to avoid getting bits of metal inside the cable or embedded in the foam.  At some point, you should cover the shorted end with RTV (silicone) and/or good-quality electrical tape to prevent contamination by dust or insects.

Preparing the "business" end:

Figure 4:
This shows how the tube for the coupling capacitor is placed.
This photo is from the band-pass version with two tubes.
Click on the image for a larger version.
At this point, the chunk of coax should be trimmed again, measuring from the point where the center conductor is soldered to the shield:  For air-core trim it to 17" (432mm) exactly and for foam core, trim it to 16-1/8" (410mm).  Again, using a sharp knife and gloves, remove about 3/4"(19mm) of the outer jacket and, again, clean the outer conductor so that it is bright and shiny.

Making coupling capacitors:

We now need to make a capacitor to couple the energy from the coaxial cable to the center resonator and for this, we could use either a commercially-made variable capacitor (an air-type up to about 20pF) or we could make our own capacitors:  I chose the latter.

Using RG-8 center for the coupling capacitor

For this, I cut a 4" (100mm) length of solid-dielectric RG-8 coax, pulled out the center conductor and dielectric, and threw the rest away.  I then fished around in my box of hardware and found a piece of hobby brass tubing into which the center of the RG-8 fit snugly.  If you wish, you can use foam-dielectric RG-8 center instead, but you may need to make the coupling capacitor slightly longer as the foam's dielectric constant is lower - and with it, the capacitance per unit length.

I then soldered this tubing inside the center conductor/resonator:  This offers good mechanical stability, preventing the piece of coax cable dielectric from moving around.

Using RG-6 center for the coupling capacitor:

While RG-8 and brass tubing is nice to use, I have also built these using the center of inexpensive RG-6 foam type "TV" coaxial cable and a small piece of soft copper water tubing.  This type of capacitor is fine for receive-only applications, but it is not recommended for more than a few watts:  The aforementioned RG-8 capacitor is better for that.

For this, I cut a 3"(75mm) long piece of RG-6 foam TV coaxial cable and from it, I removed and kept the center conductor and dielectric - removing any foil shield and then stripping about 1/2"(12mm) of foam from one end of each piece.

At this point, you'll need some small copper tubing:  I used some 1/4" O.D. soft-drawn "refrigerator" tubing, cutting a 2" (50mm) length and carefully straightening it out.  To cut this, I used a rotary pipe-cutting tool, which slightly swaged the ends - but this worked to advantage:  As necessary, I opened up the cut end with the deburring blade of the rotary cutter just enough to allow the inner dielectric of the RG-6 to slide in and out, with a bit of friction to hold it in place.

Figure 5:
The PC Board plate soldered to the end of the coax.  This
is from the band-pass version, but you get the idea!
Click on the image for a larger version.

Using a hot soldering iron or gun, solder the tube for the coupling capacitor inside the Heliax's center conductor, the end flush with the end of the center conductor:  A pair of sharp needle-nose pliers to hold it in place is helpful in this task.

Making a box:

On the "business" (non-shorted) end of the piece of cable we need to make a simple box with a solid electrical connection to the outer shield, to which we can mount the RF connectors with good mechanical stability.  For the 1-5/8" cable, I cut a piece of 0.062" (1.58mm) thick double-sided glass-epoxy circuit board material into a 3" (75mm) square and, using a ruler, drew lines between the opposite corners to form an "X" to find the center.

Using a drill press, I used a 1-3/4"(45mm) hole saw to cut a hole in the middle of this piece of circuit board material, using a sharp utility knife to de-burr the edges and to enlarge it slightly so that it would snugly fit over the outside of the cable shield:  You will want to carefully pick the size of hole saw to fit the cable that you use - and it's best that it be slightly undersized and enlarged with a blade or file than oversized and loose.

Figure 6:
Bottom side of the solder plate showing the
connection to the coax.
Click on the image for a larger version.

After cleaning the outside of the coaxial cable and both sides of the circuit board material, solder it to the (non-shorted) end on both sides of the board, almost flush with just enough of the shield protruding through the top to solder it.  For this, a bit of flux is recommended, using a high-power soldering iron or gun - and it's suggested that it first be "tacked" into place with small solder joints to make sure that it is positioned properly.

When positioning the box, rotate it such that the two "capacitor tubes" that were soldered into the center conductor are parallel with one of the sides of the square - this to allow symmetry to the connectors:  This is depicted in Figure 8 where the left-hand and right-hand tubes (more or less) line up with their respective coaxial connectors.

Adding sides and connectors:

With the base of the box in place, cut four sides, each being 1-3/8"(40mm) wide and two of them being 3"(75mm) long and the other two being 2-1/2"(64mm) long.  First, solder the two long pieces to the top, using the shorter pieces inside to space and center them - and then solder the shorter pieces, forming a five-sided (base plus four sides) box atop the piece of cable.

Figure 7:
A look inside the box showing the connection to the center of
the capacitor, the "tuning" strips and ceramic trimmer.
Click on the image for a larger version.
Resonator adjustment capacitor:

You will need to be able to make slight adjustments to the frequency of the center conductor of the Heliax resonator.  If all goes well, you will have cut the coaxial cable to be slightly short - meaning that it will resonate entirely above the 2 meter band.  The installation of the coupling capacitor will lower that frequency significantly - but it should still be above the frequency of interest so a means for "fine tuning" is necessary.

Figure 7 shows two strips of copper:  One soldered to the center conductor (the sleeve of the coupling capacitor, actually) and another soldered to the inside of the Heliax shield.  These two plates are then moved closer together or farther apart to effect fine-tuning:  Closer = lower frequency, farther = higher frequency.  Depending on how far you need to lower the frequency, you can make these "plates" larger or smaller - or, if you can't quite get low enough in frequency with just one set of "plates", you can install another set.
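The tuning plates are just a small parallel-plate capacitor, which is why moving them together lowers the frequency:  capacitance rises as the gap shrinks.  A minimal sketch - the plate area and spacings are made-up illustrative values, not measurements from this filter:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def plate_cap_pf(area_cm2, gap_mm):
    # Ideal parallel-plate capacitance (ignores fringing fields),
    # returned in picofarads
    return EPS0 * (area_cm2 * 1e-4) / (gap_mm * 1e-3) * 1e12

for gap in (4, 2, 1):  # gap in millimeters - illustrative values
    print(f"gap {gap} mm: {plate_cap_pf(2.0, gap):.2f} pF")
```

Even a fraction of a picofarad is enough to move a high-Q resonator noticeably, which is why the plates give a useful fine-tuning range.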

It is recommended that you do NOT install the copper strips for tuning just yet:  Go through the steps below before doing so.

If your resonant frequency is too low - don't despair yet:  It's very likely that you'll have to reduce the coupling capacitor a bit (e.g. pull it out of the tubing and/or cut it a bit shorter) and this will raise the frequency as well.

How it's connected:

A single notch cavity is typically connected on a signal path using a "Tee" connector as can be seen in Figure 1:  At the notch's resonant frequency, the signal is literally "shorted out", causing attenuation.  

As can be seen in Figure 7, there is only one connector (BNC type) on our PC board box - but we could have easily installed two BNC connectors - in which case we would run a wire from one connector to the center capacitor as shown and then run another wire from the capacitor to the other connector.

Adjusting it all:

For this, I am presuming that you have a NanoVNA or similar piece of equipment:  Even the cheapest NanoVNA - calibrated according to the instructions - will be more than adequate in allowing proper adjustment and measurement of this device.

Using two cables and whatever adapters you need to get it done, put a "Tee" connector on the notch filter and connect Channel 0 on one side of the Tee and Channel 1 on the other side of the Tee and put your VNA in "through" mode.  (Comment:  There are many, many web pages and videos on how to use the NanoVNA, so I won't go through the exact procedure here.)

Configure the VNA to sweep from 10 MHz below to 10 MHz above the desired frequency and you should see the notch - hopefully near the intended frequency:  If you don't see the notch, expand the sweep farther and if you still don't see the notch, re-check connections and your construction.

At this point, "zoom in" on the notch so that you are sweeping, say, from 2 MHz below to 2 MHz above and carefully note the width and depth of the notch.  Now, pull out the center capacitor (the one made from the guts of RG-8 or RG-6 cable) a slight amount:  The resonant frequency will move UP when you do this.

The idea here is to reduce the coupling capacitance to the point where it is optimal:  If you started out with too much capacitance in the first place, the depth of the notch will be somewhat poor (20dB or so) and it will be wider than desirable.  As the capacitance is reduced, it should get both narrower and deeper.  At some point - if the coupling capacitance is reduced too much - the notch will no longer get narrower, but the depth will start to get shallower. 

Comment:  You may need to "zoom in" with the VNA (e.g. narrow the sweep) to properly measure the depth of the notch.  As the VNA samples only so many points, it may "miss" the true shape and depth of the notch as it gets narrower and narrower.

The "trick" with this step is to pull a bit of the coax center out of the coupling capacitor and check the measurement.  If you need to pull "too much" out (e.g. there's a loop forming where you have excess) then simply unsolder the piece, trim it by 1/4-1/2"(0.5-1cm), reinstall, and then continue on until you find the optimal coupling.

As you approach the optimal coupling, be sure to leave yourself a little bit of adjustment room - the ability to push in/pull out a bit of the capacitor for subsequent fine tuning.

At this point your resonant (notch) frequency will hopefully be right at or higher than your target frequency:  If it is too low, you may need to figure out how to shorten the resonator a bit - something that is rather difficult to do.  If you already added the "capacitor plates" for fine-tuning as mentioned above, you may need to adjust them to reduce the capacitance between the ground and the center conductor and/or reduce their size.

Presuming that the frequency is too high (which is the desirable state), you will probably need to add the copper capacitor strip plates as described above, and seen in Figure 7.  You should be able to move the resonant frequency down toward your target by moving the plates together.  Remember:  It is the proximity of the plate connected to the center conductor of the resonator to the grounded plate that does the tuning!  If you can't get the frequency low enough, you can add more strips to the center conductor - but you will probably want to remove the coupling capacitor (e.g. the coax center conductor) first, to prevent melting it when soldering.

Optimizing for "high" or "low" pass:

As described above, the notch will be more or less symmetrical - but in most cases you will want a bit of asymmetry - that is, you'll want the effect of the notch to diminish more on one side than the other.  Doing this allows you to place the notch frequency (the one to block) and the desired frequency (the one that you want) closer together without as much attenuation.

Figure 8:
The simplest form of the "high pass" notch, used during
initial testing of the concept - See the results in Figure 9.
Click on the image for a larger version.

"High-pass" = Parallel capacitor

In our case - with the higher of the two notches at 145.01 MHz and the desired signal at 147.82 MHz - we want the attenuation to diminish rapidly above the notch frequency to avoid attenuating the 147.82 signal.  This may be done by putting a capacitor between the center of the coupling capacitor and ground:  A careful look at Figure 7 will reveal a small ceramic trimmer capacitor.

This configuration is more clearly seen in Figure 8:  There, we have the simplest - and kludgiest - possible form of the notch filter where you can see two ceramic trimmer capacitors connected across the center coupling capacitor and the center pin of the BNC connector.  Off the photo (to the upper-left) was the connection to a "tee" connector and the NanoVNA.  If you just want to get a "feel" for how the notch works and tunes, this mechanically simple set-up is fine - but it is far too fragile and unstable for "permanent" use.

For 2 meters, a capacitor that can be varied from 2-35pF or so is usually adequate - the higher the capacitance, the more effect there is on the asymmetry - but at some point (with too much capacitance) losses and filter "shape" will start to degrade, particularly with inexpensive ceramic and plastic trimmer capacitors.  Ideally, an air-type variable capacitor is used, but an inexpensive ceramic trimmer will suffice for receive-only applications - and if the separation is fairly wide, as is the case here.  For transmit applications, the air trimmer - or a high-quality porcelain type - is recommended.

"Low-pass" = Parallel inductor

While the parallel capacitor will shift the shape of the notch's "shoulders" for "low notch/high pass" operation, the use of a parallel inductor will cause the response to become "low pass/high notch" where the reduced attenuation is below the notch frequency.  If we'd needed to construct a notch filter to keep the 147.22 repeater's transmit signal out of the 145.01 packet's receiver, we would use a parallel inductor.

It is fortunate that an inductor is trivial to construct and adjust.  For 2 meters, one would start with 4-5 turns wound on a 3/8" (10mm) drill bit using solid-core wire of about any size that will hold its shape:  12-18 AWG (2-1mm diameter) copper wire will do.  Inductance can be reduced by stretching the coil and/or removing turns.  As with the capacitor, this adjustment is iterative:  Reducing the inductance makes the asymmetry more pronounced, and with lower inductance the desired frequency and the notch frequency can be placed closer together - but decrease the inductance too much and loss will increase.
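For a starting point on the coil's value, Wheeler's single-layer solenoid approximation is handy.  The dimensions below (coil diameter and winding length) are guesses for a close-wound coil on a 3/8" bit, not measurements from the filter described here:

```python
import math

def wheeler_uh(dia_in, length_in, turns):
    # Wheeler's approximation for a single-layer air-core solenoid:
    # dimensions in inches, result in microhenries.
    return (dia_in ** 2) * (turns ** 2) / (18 * dia_in + 40 * length_in)

l_uh = wheeler_uh(0.42, 0.4, 5)           # guessed coil dimensions
x_l = 2 * math.pi * 146e6 * l_uh * 1e-6   # reactance at 146 MHz
print(f"L ~ {l_uh:.2f} uH, X_L at 146 MHz ~ {x_l:.0f} ohms")
```

Stretching the coil increases the winding length term in the denominator, reducing inductance - which is exactly the iterative adjustment described above.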

Comment:  The asymmetry of the "pass" and "notch" is why some of the common repeater duplexers have the word "pass" in their product description:  It simply means that on one side of the notch or the other the attenuation is lower to favor receive/transmit.

Results:

Figure 9:
VNA sweep of one of the prototype notch filters
depicted in Figure 8.  This shows the asymmetric nature of
the notch and "pass" response when a parallel capacitor is
used.
Click on the image for a larger version.

Figure 9 shows the sweep of one of the notch filters on a NanoVNA screen.

The blue trace shows the attenuation plot:  At the depth of the notch (marker #1) we have over 24dB of attenuation, which is about what one can expect from a notch cavity simply "teed" into the NanoVNA's signal path.

We can also see the asymmetry of the blue trace:  Marker #2, a few MHz above the notch, shows how rapidly the attenuation decreases - to less than 0.5dB - while below the notch frequency the attenuation remains higher near the notch.

Again, if we'd placed an inductor across the circuit rather than a capacitor, this asymmetry would be reversed and we'd have the lower attenuation below the notch frequency.

Note:  This sweep was done with the configuration depicted in Figure 8 to test how well everything would work.  Once I was satisfied that this notch filter could be useful, I rebuilt it into the more permanent configuration and tuned it properly, onto frequency.

Putting two notches together:

Because we needed to knock down both 144.39 and 145.01 MHz, we can see from Figure 9 that we'd need two notch filters cascaded to provide good attenuation without affecting the 147.82 MHz repeater input frequency.  A close look at Figure 1 will reveal that these two filters are, in fact, cascaded - the signal from the antenna (via the receiver branch of the repeater's duplexer) coming in via one of the BNC Tees and going out to the receiver via another.

The cable between the two notches should be an electrical quarter wavelength - or an odd multiple thereof (e.g. 3/4, 5/4) to maximize the effectiveness of the two notches together.  A quarter wave transmission line has an interesting property:  Short out one end and the impedance on the other end goes very high - and vice-versa.  To calculate the length of a quarter-wave line we can use some familiar formulas:

300/Frequency (in MHz) = Wavelength in meters

If we plug 145 MHz into the above equation we get a length of 2.069 meters.

Since we are using coaxial cable, we need to include its velocity factor.  Since the 1/4 wave jumper is foam-type RG-8X, we know that its velocity factor is 0.79 - that is, the RF travels at 79% of the speed of light through the cable, meaning that the electrical wavelength in the cable is shorter than the wavelength in free space, so:

2.069 * 0.79 = 1.63 meters

(Solid dielectric cable - like many types of RG-8 and RG-58 - will have a velocity factor of about 0.66, making a 1/4 wave even shorter!)

Since this is a full wavelength, we divide this length by 4 to get the electrical quarter-wavelength:

1.63 / 4 = 0.408 meters (16.09")

As it turns out, the velocity factor of common coaxial cables can vary by several percent - but the length of a quarter-wave section is pretty forgiving:  It can be as much as 20% off in either direction without causing too much degradation from the ideal - but it's good to be as precise as possible.  When determining the length of the 1/4 wave jumper, one should include the length to the tips of the connectors, not just the length of the cable itself. 
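The arithmetic above can be collected into a small helper - a sketch in Python, with the function name being my own:

```python
def quarter_wave_length_m(freq_mhz: float, velocity_factor: float) -> float:
    """Electrical quarter-wavelength of coax, in meters (measure to connector tips)."""
    free_space_wavelength = 300.0 / freq_mhz               # wavelength in meters
    electrical_wavelength = free_space_wavelength * velocity_factor
    return electrical_wavelength / 4.0

# Foam RG-8X (VF = 0.79) at 145 MHz - the case worked in the text:
length = quarter_wave_length_m(145.0, 0.79)
print(f"{length:.3f} m ({length / 0.0254:.1f} inches)")    # -> 0.409 m (16.1 inches)
```

The same helper works for solid-dielectric cable by simply plugging in a velocity factor of about 0.66.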

Figure 10:
The response of the two cascaded notch filters - one tuned to
144.39 and the other to 145.01 MHz.
Click on the image for a larger version.

Because we know that the notch filters present a "short" at their tuned frequency, that means that the other end of a 1/4 wave coax at that same point will go high impedance - making the "shorting" of the second cavity even more effective.  In testing - with the two notches tuned to the same frequency, the total depth of the notch was on the order of 60dB - significantly higher than the sum of the two notches individually - their efficacy improved by the 1/4 wave cable between them.
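The "short becomes open" behavior described above follows from the quarter-wave impedance-inversion relationship, Zin = Z0²/Zload.  As a minimal sketch (valid only for a lossless line and purely resistive loads):

```python
def quarter_wave_input_impedance(z0: float, z_load: float) -> float:
    """Impedance seen looking into a lossless quarter-wave line terminated in z_load."""
    return (z0 * z0) / z_load

# A notch cavity at resonance looks nearly like a short at its tee - a quarter
# wave away, that near-short is transformed to a very high impedance:
print(quarter_wave_input_impedance(50.0, 1.0))     # -> 2500.0 ohms (nearly open)
print(quarter_wave_input_impedance(50.0, 2500.0))  # -> 1.0 ohm (nearly shorted)
```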

As we needed to "stagger" the two notches, the maximum depth was reduced but, as can be seen in Figure 10, the result is quite good:  Markers 1 and 2 show 144.39 and 145.01 MHz, respectively, with more than 34 dB of attenuation, while Marker 3 at the repeater input frequency of 147.82 MHz shows an attenuation of just 0.79dB - not too bad for a homebrew filter made from scrap pieces!

Comment:  If you are wondering if the 0.79dB attenuation was excessive, consider the following:  Many repeaters are at shared sites with other users and equipment - in this case, there were two other land-mobile sites very nearby along with a very large cell site.  Because of this, there is a bit of excess background noise generated that is out of the control of the amateur repeater operator - but this also means that the ultimate sensitivity is somewhat limited by this noise floor.  Using an "Iso-Tee", it was determined that the sensitivity of this repeater - even with coax, duplexer and now notch filter losses - was "site noise floor" limited by a couple of dB, so the addition of this filter did not have an effect on its actual sensitivity.

Putting it together:

Looking again at Figure 1, you will notice that the two notch filters are connected together mechanically:  Short pieces of PVC "wood" (available from the hardware store) were cut, a hole saw was used to make two holes in each piece, and the pieces were slipped over the ends and secured with RTV ("Silicone") adhesive.

Rather than leaving the tops of the PC board boxes open where bugs and debris might cause detuning, they were covered with aluminum furnace tape which worked just as well as soldering a metal lid would have - plus it was cheap and easy!

Did it work?

At the time we installed the filter, the packet stations were down, so we tested the efficacy of the filter by transmitting at high power on the two frequencies alternately.  Without the filter, a bit of desense was noted in the receiver, but this was absent with it inline.

With at least 34dB of attenuation at either packet frequency, we were confident that the modest amount of desense (on the order of 10-15dB - enough to mask weak signals, but not strong ones) would - IF it was caused by receiver overload - be completely solved by attenuating those signals by a factor of over 2000.  If the filter had no effect at all, we would know that it was, in fact, the packet transmitters generating noise.
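As a sanity check on that "factor of over 2000", converting decibels of attenuation back to a power ratio is a one-liner:

```python
def db_to_power_ratio(db: float) -> float:
    """Convert an attenuation in dB to the equivalent power ratio."""
    return 10.0 ** (db / 10.0)

print(round(db_to_power_ratio(34.0)))   # -> 2512, the "over 2000" figure above
```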

Some time later the packet stations were again active - still causing a bit of desense - but this was not unexpected:  We had not been sure whether the cause of the desense was the repeater's receiver being overloaded or noise from the packet transmitter, but because the amount of desense was the same after adding the notch filter, we can conclude that the source was, in fact, noise from the packet transmitter.

Having done due diligence and installed these filters on our receiver, we could then report back to the owner of the packet transmitters what we had done and more authoritatively request that they install appropriate filtering on their transmitters (notch or pass cavities - preferably the latter) in order to be good neighbors, themselves.

* * *

"I have 'xxx' type of cable - will it work?"

The dimensions given in this article are approximate, but should be "close-ish" for most types of air and foam dielectric cable.  While I have not constructed a band-pass filter with much smaller cable like 1/2" or 3/4", it should work - but one should expect somewhat lower performance (e.g. not-as-narrow band-pass with higher losses) - but it may still be useful.

Because of the wide availability of tools like the NanoVNA, constructing this sort of device is made much easier and allows one to characterize both its insertion loss and response as well as experimentally determining what is required to use whatever large-ish coaxial cable that you might have on-hand.

"Will this work on (some other band)?"

Yes, it should:  Notch-only filters of this type were constructed for a 6 meter repeater - and depending on your motivation, one could also build such things for 10 meters or even the HF bands!

It is likely that, with due care, one could use these same techniques on the 222 MHz and 70cm bands, provided that one keeps in mind their practical limitations.

 

 * * *

Related articles:

  • A 2-meter band-pass cavity using surplus Heliax - link - This article describes constructing a simple band-pass filter using 1-5/8" Heliax. The techniques used in that article are the same as those applied here.
  • Second Generation Six-Meter Heliax Duplexer by KF6YB - link  - This article describes a notch type duplexer rather than pass cavities, but the concerns and construction techniques are similar.
  • When Band-Pass/Band-Reject (Bp/Br) Duplexers really aren't bandpass - link - This is a longer, more in-depth discussion about the issues with such devices and why pass cavities should be important components in any repeater system.

 

* * *

This page stolen from ka7oei.blogspot.com

[End]


"TDOA" direction finder systems - Part 2 - Determining signal bearing from switching antennas in software.


Note:

This is a follow-up to a Part 1 blog post on this topic where we discuss in general how "rotating" (or switched) antennas may be used to determine the apparent bearing of a transmitter.  It is recommended that you read Part 1 FIRST - you can find it at:  "'TDOA' direction finder systems - Part 1 - how they work, and a few examples." - LINK.

In part 1 (linked above) we discussed a simple two-element "TDOA" (Time Difference Of Arrival) system for determining the bearing to a transmitter.  This method takes advantage of the fact that - under normal conditions - one can presume the incoming signal to be a wave "front" which, like ripples in water from a very distant source, "sweeps" over the receiver in lines that are at a right angle to the direction from the transmitter.  Note that in this discussion, most of the emphasis will be placed on how it is done in the analog domain with switching antennas, as this can help provide a clearer picture of what is going on.

Why this works

If we are using a two-antenna array, we can divine a difference between the arrival times at the two antennas, as this drawing - stolen from part 1 of this article - illustrates:

Figure 1:
A diagram showing how the "TDOA" system works.
Click on the image for a larger version.

 

As illustrated in the top portion of the above illustration, the wave front "hits" the two elements at exactly the same time so, in theory, there is no difference between the signals from each of these elements.  In the bottom portion of the illustration, we can see that the wave front will hit the left-most element first and the RF will be out of phase at the second element (e.g. one element will "see" the positive portion of the wave and the other will see the negative portion of the wave).

If we constrain ourselves to having just ONE receiver, you might ask how one could use the signals from two antennas.  The answer is that one switches between the two antennas electronically - typically with diodes.  If the two signals are identical in their time of arrival - and the lengths of coaxial cable between each antenna and the switch are equal - there will be no disturbance in the received signal when we switch between the two antennas, and we know that the signal is likely broadside to our two-antenna array.

If the signal is NOT broadside to the array, there will be a "glitch" in the waveform coming out of our receiver when we switch our antenna.  Because we are using an FM receiver - which detects modulation by observing the frequency change caused by audio modulation - we can also detect that "glitch".  To understand how this works, consider the following:

Recall the "Doppler Effect", where the pitch of a car's horn is higher than its true pitch when the car is moving toward the observer - and lower when it moves away:  It is only at the instant that the car is closest to the observer that the pitch heard is the actual pitch of the horn.

Now, consider this same thing when we look at the lower diagram of Figure 1.  If we switch from the left-hand antenna to the right-hand antenna, we have effectively moved away from the transmitter and, for an instant, the frequency of the received signal was lower because - from the point of view of the receiver on the end of the coax cable - the antenna moved away from the transmitter.  Because changes in frequency cause the voltage coming out of the receiver's detector to go up and down by a corresponding amount, we will get a brief "glitch" from having changed the frequency for an instant when our antenna "moved".

If we then switch back from the right-hand antenna to the left-hand antenna, we have suddenly moved it closer to the transmitter and, again, we shift the frequency - but in the opposite direction, and the glitch we get in the receiver is opposite as well.

We can see the glitching of this signal in the following photo, also stolen from "Part 1" of this article:

Figure 2:
Example of the "glitches" seen on the audio of a receiver connected to a TDOA system that switches antennas.

The photo in Figure 2 is an oscilloscope trace of the audio output of the FM receiver connected to the TDOA unit:  In it, we can see a positive-going "glitch" when we switch from one antenna to the other, and a negative-going glitch when we switch back again.

If we have a simple circuit that is switching the antennas back-and-forth - and it "knows" when this switch happens, we can determine several things:

  • When the two antennas are broadside to the transmitter.  If we have the situation depicted in the top drawing of Figure 1, both antennas are equidistant and there will be NO glitches detected.
  • When antenna "A" is closer to the transmitter.  If we arbitrarily assign one of the antennas as "A" and the other as "B", we can see - by way of our "thought experiment" above - that if antenna "A" is closer to the transmitter than "B", our frequency will go DOWN for an instant when we switch from "A" to "B" - and vice-versa when it switches back.  Let us say that this produces the pattern of "glitches" that we see in Figure 2.
  • When antenna "B" is closer to the transmitter.  If we take the above situation and rotate our two-antenna array around 180 degrees, antenna "B" will be closer to the transmitter than "A" and when our switch from "A" to "B" happens, our frequency will go UP for an instant when it does so - and vice-versa.  In that case, our oscilloscope will show the glitches depicted in Figure 2 upside-down.

In other words, by looking at the polarity of the glitches from our receiver, we can tell if the transmitter is to our left or to our right.  We can also infer a little bit about how far to the left or right our transmitter is by looking at the amplitude of the glitches:  If the signal is off the side of the antenna as depicted in the lower part of Figure 1, the glitches will be at their strongest - and the amplitude of the glitches will diminish as we get closer to having the array broadside to the transmitter as depicted in the top part of Figure 1.

There is an obvious limitation to this:  Unless we sweep the antenna back and forth, all we can tell is whether the transmitter is to our left or right.

Walking about with an antenna like this, it is easy to sweep back and forth and, with some practice, one can infer whether the transmitter is to the left or right and in front or behind - but if you have a fixed antenna array (one that is not moving) or if you are in a vehicle where its orientation is fixed with respect to the direction of travel, this becomes inconvenient as you cannot tell if the transmitter is in front or behind.

Adding more antennas

Suppose that we want to know both "left and right" and "front and back" at the same time - in that case, you would be correct in presuming that this can be done by adding one more antenna and doing some switching among them.  Consider the case in Figure 3, below:

Figure 3:
A 3-antenna vertical array, with elements A, B and C.  A right-angle is formed between antennas "A" and "B" and "A" and "C".   Also see Figure #4.
Click on the image for a larger version.
 

In Figures 3 and 4 we have three vertical antennas - separated by less than 1/4 wavelength at the frequency of interest - and we also have two transmitters located 90 degrees apart from each other.  Note that these antennas are laid out in a "three-sided square" - that is, if you were to draw lines between "A" and "B" and between "A" and "C", they would form a precise right angle.

We know already from our example in Figure 1 that if we are receiving Transmitter #1 we will get our "glitch" if we switch between antennas "A" and "B" - but since antennas "A" and "C" are the same distance from Transmitter #1, we will get NO glitch.

Similarly, if we are listening to Transmitter #2 and we switch between antennas "A" and "C", we will get a glitch as "C" is closer to the transmitter than "A" - but since antennas "A" and "B" are the same distance, we would get no glitch.

From this example we can see that if we have three antennas, we can switch them alternately to resolve our "Left/Right" and "Front/Back" ambiguity at all times.  For example, let us consider what happens in the presence of Transmitter #2:

  • Switch from antenna "A" to antenna "B":  The antennas are equidistant from Transmitter #2, so there is no glitch.
  • Switch from antenna "A" to antenna "C":  We get a glitch in our received audio when we do this because antenna "C" is closer to Transmitter #2 than antenna "A".  Furthermore, we can tell by the polarity of the glitch that antenna "C" is closer to the transmitter.

Let us now presume that our array in Figures 3 and 4 was atop a vehicle with the front of the vehicle pointed toward the left - toward Transmitter #1:  With just the above information we would know that Transmitter #2 was located precisely to our right - and that if we wanted to drive toward it, we would need to make a right turn.

Figure 4:
A 3-antenna vertical array, with elements A, B and
C as viewed from the top.
Click on the image for a larger version.

Bearings in between the antennas

What if there were a third transmitter (Transmitter #3 in Figure 4) located halfway between Transmitter #1 and Transmitter #2 and we were still in our car, pointed at Transmitter #1?  You would be correct in presuming that:

  • Switching between Antenna "A" and "B" would indicate that the unknown transmitter would be to the front of the car.
  • Switching between Antenna "A" and "C" would indicate that the unknown transmitter would be to the right of the car.
  • We get "glitches" when switching between either pair of antennas (A/B and A/C) - but these "glitches" are at a lower amplitude than if the transmitter were in the direction of Transmitter #1 or Transmitter #2.

Could it be that if we measured the relative amplitude and polarity of the glitches we get from switching the two pairs of antennas (A/B and A/C) that we could infer something about the bearing of the signal?

The answer is YES.

By using simple trigonometry we can figure out - by comparing the amplitude of the glitches and noting their relative polarity - the bearing of the transmitter with respect to the antenna array - and the specific thing we need is the inverse function "ArcTangent".

If you set your "Wayback" machine to High School, you will remember that you could plot a point on a piece of X/Y graph paper and, relative to the origin, use the ratio of the X/Y values to determine the angle of a line drawn between that point and the origin.  As it turns out, there is a function in many computer languages that is useful in this case - namely the "atan2()" function, into which we put our "x" and "y" values.

Figure 5:
Depiction of the "atan2" function and how to get the angle, θ.
This diagram is modified from the Wikipedia "atan2"
article - link.

Click on the image for a larger version.
Let us consider the drawing in Figure 5.  If you remember much of your high-school math, you'll remember that if straight-up is zero degrees and the right-pointing arrow is 90 degrees, the "mid-point" between the two would naturally be 45 degrees.

What you might also remember is that if you were to drop a line between the dot marked as (x,y) in Figure 5 and the "x" axis - and draw another line between it and the "y" axis - those lines would, for a point at 45 degrees, be the same length.

By extension, you can see that if you know the "x" and "y" coordinates of the dot depicted in Figure 5 - and "x" and/or "y" can be either positive or negative - you can represent any angle.

Referring back to Figure 2, recall that you will get a "glitch" when you switch antennas that are at different distances from the transmitter - and further recall that in Figures 3 and 4 that you can use the switching between antennas "A" and "B" to determine if the transmitter is in front or behind the car - and "A" and "C" to determine if it is to the left or right of the car.

If we presume that the "y" axis (up/down) is front/back of the car and the "x" axis is right/left, we can see that if we have an equal amount of "glitching" from the A/B switch ("y" axis) and the A/C switch ("x" axis) - and both of these glitches go positive - we would then know that the transmitter was 45 degrees to the right of straight ahead.

Similarly, suppose that our "A/B" ("y" axis) glitch was very slightly negative - indicating that the signal was behind us - and that our "A/C" glitch was strongly negative, indicating that it was far to our left:  This condition is depicted with the vector terminating in point "A" in Figure 5, showing that the transmitter was, in fact, to the left and just behind us - perhaps at an angle of about 260 degrees.
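A few lines of Python can turn the two glitch amplitudes into a bearing - a minimal sketch in which the axis conventions (front/back on "y", left/right on "x", and the glitch signs) are assumptions chosen to match the discussion above:

```python
import math

def bearing_degrees(x_glitch: float, y_glitch: float) -> float:
    """Compass-style bearing: 0 degrees = straight ahead, 90 degrees = to the right.

    x_glitch: net glitch amplitude from the left/right (A/C) antenna pair
    y_glitch: net glitch amplitude from the front/back (A/B) antenna pair
    """
    # Note the argument order: atan2(x, y) gives the compass convention,
    # as opposed to the usual math convention of atan2(y, x).
    return math.degrees(math.atan2(x_glitch, y_glitch)) % 360.0

print(bearing_degrees(1.0, 1.0))             # equal positive glitches -> 45.0
print(round(bearing_degrees(-1.0, -0.15)))   # far left, slightly behind -> 261
```

The second case corresponds to the "about 260 degrees" example above:  A strongly negative left/right glitch with a slightly negative front/back glitch lands the vector just behind and to the left.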

Using 4 antennas

The use of three antennas isn't common - particularly with an "L" (right-angle) arrangement - but one could do that.  What is more common is to arrange four antennas in a square and "rotate" them using diode switches, with one antenna being active at a given instant.  Consider the diagram of Figure 6.

Figure 6:
A four antenna arrangement.
Click on the image for a larger version.

In this arrangement we have four antennas arranged in a perfect square - and this time we are going to switch them in the following pattern:

    A->B->C->D->A

Now let us suppose that we are receiving Transmitter #1 - so we would get the following "glitch" patterns on our receiver:

  • A->B:  Positive glitch
  • B->C:  No glitch
  • C->D:  Negative glitch
  • D->A:  No glitch

As expected, going from "A" to "B" results in a glitch that we'll call "positive" as antenna "B" is farther away from the transmitter than "A" - but when we "rotate" to the other side and switch from "C" to "D" - because we are going to an antenna that is closer - the glitch will have the opposite polarity of the one we got when we switched from "A" to "B", but both glitches will have the same amplitude.

Since antenna pairs B/C and A/D are the same distance from the transmitter we will get no glitch when we switch between those antennas.

As you can see from the above operation, every time we make one "rotation" we'll get four glitches - but they will be in equal and opposite pairs - which is to say that A->B and C->D are one pair with opposite polarity, and B->C and D->A are the other pair with opposite polarity.  If we take the measured voltage of these pairs of glitches and subtract each set, we will end up with vectors that we can throw into our "atan2" function to get a bearing - and what's more, since we are getting the same information twice (the equal-and-opposite pairs), this serves to increase the effective amplitude of the glitch overall, helping it stand out better from modulation and noise that may be on the received signal.
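The pair-subtraction described above can be sketched as follows - a hypothetical illustration in which the sign conventions and axis assignments are assumptions (a real unit is calibrated against a transmitter at a known bearing):

```python
import math

def bearing_from_glitches(g_ab: float, g_bc: float, g_cd: float, g_da: float) -> float:
    """Combine the four per-switch glitch voltages into a compass bearing.

    Opposite switch pairs carry the same information with inverted sign, so
    subtracting each pair doubles the useful signal before the atan2 step.
    """
    y = g_ab - g_cd     # front/back component (A->B vs. C->D pair)
    x = g_da - g_bc     # left/right component (D->A vs. B->C pair)
    return math.degrees(math.atan2(x, y)) % 360.0

# Transmitter broadside to the A side: positive A->B glitch, negative C->D glitch,
# nothing from the other pair - the bearing comes out at 0 degrees:
print(bearing_from_glitches(1.0, 0.0, -1.0, 0.0))   # -> 0.0
```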

Similarly, if we were receiving a signal from Transmitter #3 (in Figure 6) we could see that being at a 45 degree angle, each of our four glitches would have the same strength but differing polarities - with the vector pointing in that direction.

A typical four-antenna ARDF unit will "spin" the antenna at anywhere between 300 and 1000 Hz - the lower rates being preferable as the switching tone and its harmonics are better-contained within the 3 kHz voice bandwidth of a typical communications-type FM receiver.

Figure 7:
Montreal "Doppler 3" with compass rose,
digital bearing indication and adjustable switched-
capacitor band-pass filter running "alternate"
firmware (see KA7OEI link below).
Click on the image for a larger version.

Improving performance - filtering

As can be seen in the oscillogram of Figure 2, the switching glitches are of pretty low amplitude - and they are quite narrow meaning that they are easily overwhelmed by incidental audio and - in the case of weaker signals - noise.  One way to deal with this is to use a very narrow audio band-pass filter - typically something on the order of a few Hz to a few 10s of Hz wide.

In the analog world this is typically obtained using a switched-capacitor filter - the description of which would be worthy of another article - but it has the advantage of its center frequency being set by an external clock signal:  If the same clock signal is used for both the filter and to "spin" the antenna, any frequency drift is automatically canceled out.

It is also possible to use a plain, analog band-pass filter using op amps, resistors and capacitors - but these can be problematic in that these components - particularly the capacitors - are prone to temperature drift which can affect the accuracy of the bearing, often requiring repeated calibration:  This problem is most notable during summer or winter months when the temperature can vary quite a bit - particularly in a vehicle.

By narrowing the bandwidth significantly - to just a few Hz - it is far more likely that the energy getting through it will be only from the antenna switching and not incidental audio.
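A digital equivalent of such a band-pass filter can be sketched with a standard "RBJ cookbook" biquad - here the 500 Hz switching tone, 8 kHz sample rate and Q value are arbitrary assumptions for illustration, not figures from any particular unit:

```python
import math

def bandpass_biquad(f0_hz: float, q: float, fs_hz: float):
    """RBJ-cookbook band-pass biquad coefficients (unity gain at f0)."""
    w0 = 2.0 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    # Normalized (b0, b1, b2, a1, a2)
    return (alpha / a0, 0.0, -alpha / a0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0)

def filter_samples(coeffs, samples):
    """Direct-form I difference equation."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# A narrow (Q = 30) filter at a 500 Hz switching tone, 8 kHz sample rate:
fs = 8000.0
coeffs = bandpass_biquad(500.0, 30.0, fs)
tone  = [math.sin(2 * math.pi * 500.0 * n / fs) for n in range(8000)]
voice = [math.sin(2 * math.pi * 1500.0 * n / fs) for n in range(8000)]
rms = lambda s: (sum(v * v for v in s[4000:]) / 4000) ** 0.5
# The switching tone passes at near-unity gain; the off-frequency tone is
# heavily attenuated, so this ratio is much greater than 1:
print(rms(filter_samples(coeffs, tone)) / rms(filter_samples(coeffs, voice)))
```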

There is another aspect related to narrow-band filtering that can be useful:  Indicating the quality of the signal.  In the discussions above, we are presuming that opposite pairs of antennas will yield equal-and-opposite "glitches" (e.g. A->B and C->D are mirror images, and B->C and D->A are also mirror images) - but in the case of multipath distortion - where the received signal can come from different directions due to reflection and/or refraction - this may not be the case.  If the above "mirroring" effect does not hold, this causes changes in the amplitude of the tone at the antenna spin rate (the "switching tone"), which can include the following:

  • The switching tone can decrease overall due to a multiplicity of random wave fronts arriving at the antenna array.
  • The switching tone's frequency can double if each antenna's slightly-different position is getting a different portion of a multipath-distorted wave front.
  • The switching tone can be heavily frequency-modulated by the rapidly-changing wave fronts.

If you have ever operated VHF/UHF from a moving vehicle, you have experienced all three of the above to a degree:  It's likely that you have stopped at a light or a sign, only to find that the signal to which you were listening faded out and/or got distorted - only to appear again if you moved your vehicle forward or backward even a few inches/centimeters.  Imagine this happening to four antennas in slightly different locations on the roof of your vehicle!

Each of the above causes the switching tone in the receiver to be disrupted - and the worse the disruption, the less of the signal will get through the narrow filter.  Of course, having a good representation of the antenna's switching tone does not automatically mean that it is going to indicate a true bearing to the transmitter - you could be receiving a "clean" reflection - but at least you can detect - and throw out - obviously "bad" information!

Improving performance - narrow sampling

In addition to - or instead of - narrow-band filtering, there's another method that could be used, and that is narrow sampling.  Referring to Figure 2 again, you'll note that the peaks of the glitches are very narrow.  While the oscillogram of Figure 2 was taken from the speaker output of the receiver, many radios intended for packet use also include a discriminator output for use with 9600 baud and VARA modes, which carries a more "pristine" version of this signal.

Because we can know precisely when this glitch arrives (e.g. we know when we switch the antenna - and we can determine by observation when, exactly, it will appear on the radio's output), we can grab the amplitude of this pulse within a very narrow window and thus reject much of the audio content and noise that can interfere with our analysis.

Further discussion of this technique is beyond the scope of this article, but it is discussed in more detail here.
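Even so, the basic idea of gated sampling can be sketched in a few lines - a hypothetical illustration in which the sample buffer, switch timing and window parameters are all assumed:

```python
def gated_glitch_amplitude(samples, switch_indices, delay, width):
    """Average the audio samples in a narrow window following each antenna switch.

    samples:        list of discriminator-audio samples
    switch_indices: sample indices at which the antenna was switched
    delay:          samples between the switch and the glitch peak (found by
                    observing the receiver's actual delay, as described above)
    width:          window width in samples (kept small to reject voice/noise)
    """
    totals = []
    for idx in switch_indices:
        window = samples[idx + delay : idx + delay + width]
        if window:
            totals.append(sum(window) / len(window))
    # Average across switches to further suppress uncorrelated audio and noise
    return sum(totals) / len(totals) if totals else 0.0
```

Because only a few samples per switch are examined, most of the voice modulation and noise that fall outside the window simply never enter the measurement.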

Improving performance - vector averaging

If you have ever used a direction-finding unit with an LED compass rose, you'll note that in areas of multipath the bearing seems to go all over the place - but if you look very carefully (and are NOT the one driving) you may notice something interesting:  Even in areas of bad multipath, there is likely to be a statistical weight toward the true bearing rather than a completely random mess.  This is a very general statement and it refers more to those instances where signals are blocked by local ground clutter rather than a strong reflection from, say, a mountain, which may be more consistent in its "wrongness".

While the trained eye can often spot a tendency from seemingly-random bearings, one can bring math to the rescue once again.  Because we are getting our signal bearings by inputting vectors into the "atan2" function, we could also sum the individual "x" and "y" vectors over time and get an average.  
 
This works in our favor for at least two reasons:
  1. It is unlikely that even multipath signals are entirely random.  As signals bounce around from urban clutter, it is likely that there will be a significant bias in one particular direction.
  2. Through vector averaging, the relative quality of a signal can be determined.  If you get a "solid" bearing with consistently-good signals, the magnitude of the x/y vectors will be much greater than that from a "noisy" signal with a lot of variation.

In the case of #1, it is often the case that, while driving through a city among buildings, the bearing to a transmitter will be obfuscated by clutter - but being able to statistically reduce "noise" may help to provide a clue as to a possible bearing.

In the case of #2, being able to determine the quality of the bearing can, through experience, indicate to you whether or not you should pay attention to the information that you are getting:  After all, getting a mix of good and bad information is fine as long as you know which is the bad information!

Typically one would use a sliding average consisting of a recent history of samples.  If one uses the "vector average" method described above, it is more likely that poor-quality bearings will have a lesser influence on the result.
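The vector-averaging idea above can be sketched as follows - a minimal illustration in which the optional amplitude weighting and the "quality" figure are my own framing of points #1 and #2:

```python
import math

def average_bearing(bearings_deg, amplitudes=None):
    """Vector-average a history of bearings.

    Returns (mean_bearing_deg, quality), where quality is the length of the
    summed unit vectors divided by the total weight: near 1.0 for consistent
    bearings, near 0 for random scatter.  Optional per-sample amplitudes give
    stronger glitches more influence.
    """
    if amplitudes is None:
        amplitudes = [1.0] * len(bearings_deg)
    # Sum the x (sine) and y (cosine) components of each bearing vector
    x = sum(a * math.sin(math.radians(b)) for b, a in zip(bearings_deg, amplitudes))
    y = sum(a * math.cos(math.radians(b)) for b, a in zip(bearings_deg, amplitudes))
    n = sum(amplitudes)
    mean = math.degrees(math.atan2(x, y)) % 360.0
    quality = math.hypot(x, y) / n if n else 0.0
    return mean, quality

# Scattered-but-biased bearings still average near the true direction:
mean, q = average_bearing([80.0, 95.0, 85.0, 100.0, 90.0])
print(round(mean), round(q, 2))   # -> 90 0.99
```

Summing vectors rather than raw angles also sidesteps the 359-to-1 degree "wraparound" problem that breaks a naive arithmetic average.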

Antenna switching isn't ideal

Up to this point we have been talking about using a single receiver with a multi-antenna array that sequentially switches individual antennas into the mix - but electronic switching of the antennas is not ideal for several reasons:

  • The "modulation" due to the antenna switching imparts sidebands on the received signals.  Because this switching is rather abrupt, this can mean that signals 10s and 100s of kHz away can raise the receive system noise floor and decrease sensitivity.
  • The switching itself is quite noisy in its own right and can significantly reduce the absolute sensitivity of the receive system.  For this reason, only "moderate-to-strong" signals are good candidates for this type of system.
  • In the presence of multipath, the switching itself can result in the signal being more highly disrupted than normal.  This isn't too much of a problem since it is unlikely that one could get a valid bearing in that situation, anyway, but it can still be mitigated with filtering as described above.
If one is actively direction-finding with gear like this, it should not be the only tool in their toolbox:  Having a directional antenna - like a small Yagi - and suitable receiver (one with a useful, wide-ranging signal level meter) is invaluable both for situations where the signal may be too weak to be reliably detected with a TDOA system and when you are so close to it that you may have to get out of the vehicle and walk around.

Doing this digitally

There is something to be said about the relative simplicity of an analog TDOA system:  You slap the antennas on the vehicle, perform a quick calibration using a repeater or someone with a handie-talkie, and off you go.  To be sure, a bit of experience is invaluable in helping you to determine when you should and should not trust the readings that you are getting - but eventually, if the signal persists, you will likely find the source of the signal.

These days there are a number of SDR (Software-Defined Radio) systems - namely the earlier Kerberos and more recent Kraken SDRs.  Both of these units use multiple receivers that are synchronized from the same clock and use in-built references for calibration.

The distinct advantage of having a "receiver per antenna" is that one need not switch the antennas themselves, meaning that the noise and distortion resulting from the electronic "rotation" is eliminated.  Since the antennas are not switched, a different - yet similar - approach is required to determine the bearing of the signal - but if you've made it this far, it's not unfamiliar:  The use of "atan2" again.  One can take the vector difference of the signal between adjacent antennas and get some phasing information - and since we have four antennas, we can, again, get two equal and opposite pairs (assuming no multipath) of bearing data.

If you have signals from two adjacent antennas - let's say "A" and "B" from Figure 6 - we already know that the phasing will differ depending on whether the signal hits "A" or "B" first, and this can be used in conjunction with the opposite pair of antennas ("C" and "D") to derive one of our vectors:  A similar approach can be taken with the other opposite pairs - B/C and D/A.
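As a rough sketch of this "atan2 again" idea - with one coherent receiver per antenna and with the antenna spacing, wavelength and arrival angle entirely made up for illustration - the bearing falls out of the phase differences between the two opposite antenna pairs:

```python
import numpy as np

# Hypothetical sketch:  four antennas A, B, C, D on a circle at
# 0/90/180/270 degrees, one phase-coherent receiver per antenna
# (no switching).  The phase difference between opposite antennas
# gives the cosine and sine components of the bearing.

WAVELENGTH = 2.0   # meters (roughly the 2 meter band) - assumed
RADIUS = 0.17      # array radius in meters; keeps 2*k*r below pi

def bearing_deg(x_a, x_b, x_c, x_d):
    """Estimate bearing from one complex sample per antenna."""
    # angle(x * conj(y)) is the phase of x relative to y
    cos_term = np.angle(x_a * np.conj(x_c))  # proportional to cos(bearing)
    sin_term = np.angle(x_b * np.conj(x_d))  # proportional to sin(bearing)
    return np.degrees(np.arctan2(sin_term, cos_term)) % 360

# Simulate a plane wave arriving from 130 degrees - no noise, no multipath:
k = 2 * np.pi / WAVELENGTH
theta = np.radians(130)
pos = {'a': (RADIUS, 0), 'b': (0, RADIUS), 'c': (-RADIUS, 0), 'd': (0, -RADIUS)}
x = {n: np.exp(1j * k * (px * np.cos(theta) + py * np.sin(theta)))
     for n, (px, py) in pos.items()}

print(round(bearing_deg(x['a'], x['b'], x['c'], x['d'])))  # -> 130
```

Note that this toy version only works when the antenna spacing is small enough that the pair-to-pair phase difference cannot exceed a half cycle - exactly the same spacing constraint that applies to the switched-antenna units described earlier.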

This has the potential to give us better-quality bearings - but the same sorts of averaging and noise filtering must be done on the raw data, as this approach has no real advantage over the analog system in areas where there is severe multipath:  It boils down to how the unit does its filtering and signal quality assessment and, more importantly, how you, the operator, interpret the data based on experience gained from having used the system enough to have become familiar with it.

As far as absolute sensitivity goes between a Kerberos/Kraken SDR and an analog unit - that's a bit of a mixed bag.  Without the switching noise, the absolute sensitivity can be better, but in urban areas - and particularly if there is a strong signal within the passband of the A/D converter (which has only 8 bits) - the required AGC may reduce the gain to the point where weaker signals disappear.
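A back-of-envelope calculation shows why the 8-bit converter is the pinch point - the instantaneous dynamic range of an ideal N-bit quantizer is roughly 20*log10(2^N), about 6 dB per bit, so a strong in-passband signal eats the budget from the top and AGC gain reduction pushes weak signals below the bottom bit:

```python
import math

def adc_span_db(bits):
    # ~6.02 dB per bit for an ideal quantizer (ignores dithering,
    # oversampling gain and other real-world effects)
    return 20 * math.log10(2 ** bits)

print(round(adc_span_db(8)))   # -> 48
```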
 
There are other possibilities when it comes to SDR-based receivers - for example, the SDRPlay RSPduo has a pair of receivers within it that can be synchronized with each other:  One of these units with a pair of magnetic loops can be used to effect a digital version of an old-fashioned goniometer!  This has the advantage of relative simplicity and can take advantage of the relatively high performance of the RSP compared to the RTL-SDR.

Finally, there exist multi-site TDOA systems where the signals are received and time-stamped with great precision:  By knowing when, exactly, a signal arrives and then comparing this with the arrival time at other, similar sites it is (theoretically) possible to determine the location of origin - a sort of "reverse GPS" system.  This system has some very definite practical limits related to dissemination of receiver time-stamping and the nature of the received signal itself and would be a topic of a blog post by itself!
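The principle behind these multi-site systems can be sketched with a toy example - the receiver positions, source position and brute-force solver below are all made up for illustration, and a real system needs sub-microsecond time synchronization and a far better solver:

```python
import itertools
import math

# Toy multi-site TDOA ("reverse GPS"):  three receivers at known
# spots time-stamp the same signal; a brute-force grid search finds
# the point whose predicted arrival-time differences best match.

C = 299.792458  # propagation speed, meters per microsecond

RX = [(0.0, 0.0), (10000.0, 0.0), (0.0, 10000.0)]  # receiver positions (m)
TX = (6000.0, 3000.0)                              # "unknown" source (m)

def toa(p, rx):
    """Time of arrival at rx for a signal from p, in microseconds."""
    return math.dist(p, rx) / C

# "Measured" arrival-time differences, relative to receiver 0:
meas = [toa(TX, r) - toa(TX, RX[0]) for r in RX]

def residual(p):
    pred = [toa(p, r) - toa(p, RX[0]) for r in RX]
    return sum((m - q) ** 2 for m, q in zip(meas, pred))

# Coarse 100 m grid search over the whole area:
best = min(itertools.product(range(0, 10001, 100), repeat=2), key=residual)
print(best)  # -> (6000, 3000)
```

Each time difference constrains the source to a hyperbola; the grid search is just a crude way of intersecting them, and with only three receivers the intersection can be ambiguous in unlucky geometries.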

Equipment recommendations?
 
My "go to" ARDF unit for in-vehicle use is currently a Montreal "Doppler 3" running modified firmware (written by me - see the link to the "KA7OEI ARDF page" below) with four rooftop antennas.  Having used this unit for nearly 20 years, I'm very familiar with its operation and have used it successfully many times to find transmitters - both for fun and for "serious" use (e.g. stuck transmitter, jammer, etc.)
 
This unit has the advantage of being "grab 'n' go" in that it takes only a few seconds to "boot up" and it has a very simple, intuitive compass rose display. I believe that its performance is about as good as it can possibly be with a "switched antenna" type of ARDF unit:  For the most part, if a signal is audible, it will produce a bearing.

A disadvantage of this unit to some would be that it's available only in the form of a circuit board (still available from FAR Circuits - link) which means that the would-be builder must get the parts and put it together themselves.

"Pre-assembled" options for this type of unit include the MFJ-5005 which can sometimes be found on the used market and several options from the former Ramsey Electronics - along with the Dick Smith ARDF unit:  Information on these units may be found on the K0OV page linked below.

Another possible option is the "Kraken SDR":  I have yet to use one of these units, but I'm considering doing so for evaluation and comparison - which I will report here if I am actually able to do so.

Final words

This (rambling) dissertation about TDOA direction finding hopefully provides a bit of clarity when it comes to understanding how such things work - but there are a few things common to all systems that cannot really be addressed by the method of signal processing - analog or digital:
  • Bearings from a single fixed location should be suspect.  Unless you happen to have an antenna array atop a tall tower or mountain, expect the bearing that you obtain to be incorrect - and even if you do have it located in the clear, bogus readings are still likely.
  • Having multiple sources of bearings is a must.  Having more than one fixed location - or, better yet, one or more sources of bearings from moving vehicles - is very useful in that it dramatically decreases the uncertainty.
  • The most important information is often just knowing the direction in which you should start driving.  Expecting to be able to locate a signal with a TDOA system with any reasonable accuracy is unrealistic.  It is often the case that when a signal appears, the most useful piece of information is simply knowing in which direction - to the nearest 90 degrees - one should start looking.
  • The experience of the operator is paramount.  No matter which system you are using, its utility is greatly improved with familiarity with its features - and most importantly, its limitations.  In the real world, locating a signal source is often an exercise in frustration as it is often intermittent and variable and complicated by geography.  No one should reasonably expect to simply purchase/build such a device and have it sit on the shelf until the need arises - and then learn how to use it!

 * * *

Related links:

  • K0OV's Direction Finding page - link - By Joe Moell, this covers a wide variety of topics and activities related to ARDF.
  • WB2HOL's ARDF Projects - link - This page has a number of simple, easy to build antenna/DF projects.
  • KrakenSDR page - link - This is the product description/sales page for the RTL-SDR based VHF/UHF SDR.

 

This page stolen from ka7oei.blogspot.com




Remote (POTA) operation from the Conger Mountain BLM Wilderness Area (K-6085)


It is likely that - almost no matter where you were - you were aware that a solar eclipse occurred in the Western U.S. in the middle of October, 2023.  Wanting to go somewhere away from the crowds - but along the middle of the eclipse path - we went to an area in remote west-central Utah in the little-known Conger Mountains.

Clint, KA7OEI operating CW in K-6085 with Conger
mountain and the JPC-7 loaded dipole in the background.
Click on the image for a larger version.

Having lived in Utah most of my life, I hadn't even heard of this mountain range even though I knew of the several (nearly as obscure) ranges surrounding it.  This range - which is pretty low altitude compared to many nearby - peaks out at only about 8069 feet (2460 meters) ASL and is roughly 20 miles (32km) long.  With no incorporated communities or paved roads anywhere nearby we were, in fact, alone during the eclipse, never seeing any other sign of civilization:  Even at night it was difficult to spot the glow of cities on the horizon.

For the eclipse we set up on BLM (Bureau of Land Management) land which is public:  As long as we didn't make a mess, we were free to be there - in the same place - for up to 14 days, far more than the three days that we planned.  Our location turned out to be very nice for both camping and our other intended purposes:  It was a flat area which lent itself to setting up several antennas for an (Amateur) radio propagation experiment, it was located south and west of the main part of the weather front that threatened clouds, and its excellent dark skies and seeing conditions were amenable to setting up and using my old 8" Celestron "Orange tube" C-8 reflector telescope.

(Discussion of the amateur radio operations during the eclipse is part of another series of blog entries - the first of which is here:  Multi-band transmitter and monitoring system for Eclipse monitoring (Part 1) - LINK)

Activating K-6085

Just a few miles away, however, was Conger Mountain itself - invisible to us at our camp site owing to a local ridge - surrounded by the Conger Mountain BLM Wilderness Area, which happens to be POTA (Parks On The Air) entity K-6085 - and it had never been activated before.  Owing to the obscurity and relative remoteness of this location, this is not surprising.

Even though the border of the wilderness area was less than a mile away from camp as the crow flies, the maze of roads - which generally follow drainages - meant that it was several miles driving distance, down one canyon and up another:  I'd spotted the sign for this area on the first day as our group had split up, looking for good camping spots, keeping in touch via radio.

Just a few weeks prior to this event I spent a week in the Needles District of Canyonlands National Park where I could grab a few hours of POTA operation on most days, racking up hundreds of SSB and CW contacts - the majority being the latter mode (you can read about that activation HERE).  Since I had already "figured it out" I was itching to spend some time activating this "new" entity and operating CW.  Among the others in our group - all but one of whom are also amateur radio operators - was Bret, KG7RDR, who was also game for this:  His plan was to operate SSB at the same time, on a different band.  As we had satellite Internet at camp (via Starlink) we were able to schedule our operation on the POTA web site an hour or so before we were to begin operation.

In the late afternoon of the day of the eclipse both Bret and I wandered over, placing our stations just beyond the signs designating the wilderness study area (we read the signs - and previously, the BLM web site - to make sure that there weren't restrictions against what we were about to do:  There weren't.) and several hundred feet apart to minimize the probability of QRM.  While Bret set up a vertical, non-resonant end-fed wire fed with a 9:1 balun suspended from a pole anchored to a Juniper, I was content using my JPC-7 loaded dipole antenna on a 10' tall studio light stand/tripod.

Bret, KG7RDR, operating 17 Meter SSB - the mast and
vertical wire antenna visible in the distance.
Click on the image for a larger version.
Initially, I called CQ on 30 meters but I got no takers:  The band seemed to be "open", but the cluster of people sending out just their callsign near the bottom of the band indicated to me that attention was being paid to a rare station, instead.  QSYing up to 20 meters I called CQ a few times before being spotted and reported by the Reverse Beacon Network (RBN) and being pounced upon by a cacophony of stations calling me.

Meanwhile, Bret cast his lot on 17 meters and was having a bit more difficulty getting stations - likely due in part to the less-energetic nature of 17 meter propagation at that instant, but also due to the fact that unlike CW POTA operation where you can be automatically detected and "spotted" on the POTA web site, SSB requires that someone spot your signal for you if you can't do it yourself:  Since we had no phone or Internet coverage at this site, he had to rely on someone else to do this for him.  Despite these challenges, he was able to make several dozen contacts.

Back at my station I was kept pretty busy most of the time, rarely needing to call CQ - except, perhaps, to refresh the spotting on the RBN and to do a legal ID every 10 minutes - all the while making good use of the narrow CW filter on my radio.

As it turned out, our choice to wait until the late afternoon to operate meant that our activity spanned two UTC days:  We started operating at the end of October 14 and finished after the beginning of October 15th meaning that with a single sitting, each of us accomplished two activations over the course of about 2.5 hours.  All in all I made 85 CW contacts (66 of which were made on the 14th) while Bret made a total of 33 phone contacts.

We finally called it quits at about the time the sun set behind a local ridge:  It had been very cool during the day and the disappearance of the sun caused it to get cold very quickly.  Anyway, by that time we were getting hungry so we returned to our base camp.

Back at camp - my brother and Bret sitting around
the fake fire in the cold, autumn evening after dinner.
Click on the image for a larger version.

My station

My gear was the same as that used a few weeks prior when I operated from Canyonlands National Park (K-0010):  An old Yaesu FT-100 equipped with a Collins mechanical CW filter feeding a JPC-7 loaded dipole, powered from a 100 amp-hour Lithium-Iron-Phosphate battery.  This power source allowed me to run a fair bit of power (I set it to 70 watts) to give others the best-possible chance of hearing me.

As you would expect, there was absolutely no man-made noise detectable from this location as any noise that we would have heard would have been generated by gear that we brought, ourselves.  I placed the antenna about 25' (8 meters) away from my operating position, using a length of RG-8X as the feedline, placing it far enough away to eliminate any possibility of RFI - not that I've ever had a problem with this antenna/radio combination.

I did have one mishap during this operation.  Soon after setting up the antenna, I needed to re-route the cable which was lying on the ground among the dirt and rocks, and I instinctively gave it a "flip" to try to get it to move rather than trying to drag it.  The first couple of "flips" worked OK, but every time I did so the cable at the far end was dragged toward me:  Initially, the coax was dropping parallel with the mast, but after a couple of flips it was at an angle, pulling with a horizontal vector on the antenna, and the final flip caused the tripod and antenna to topple, the entire assembly crashing to the ground before I could run over and catch it.

The result of this was minor carnage in that only the (fragile!) telescoping rods were mangled.  At first I thought that this would put an end to my operation, but I remembered that I also had my JPC-12 vertical with me which uses the same telescoping rods - and I had a spare rod with that antenna as well.  Upon a bit of inspection I realized, however, that I could push an inch or so of the bent telescoping rod back in and make it work OK for the time being - and I did so, knowing that this would be the last time that I could use it.

The rest of the operating was without incident, but this experience caused me to resolve to do several things:

  • Order more telescoping rods.  These cost about $8 each, so I later got plenty of spares to keep with the antenna.
  • Do a better job of ballasting the tripod.  I actually had a "ballast bag" with me for this very purpose, but since our location was completely windless, I wasn't worried about it blowing over.
  • If I need to re-orient the coax cable, I need to walk over to the antenna and carefully do so rather than trying to "flip" it to get it to comply with my wishes.

* * *

Epilogue:  I later checked the Reverse Beacon Network to see if I was actually getting out during my initial attempt to operate on 30 meters:  I was, having been copied over much of the Continental U.S. with reasonably good signals.  I guess that everyone there was more interested in the DX!

P.S.  I really need to take more pictures during these operations!




Reducing RFI (Radio Frequency Interference) for a POE (Power Over Ethernet) camera or access point


One of the (many) banes of the amateur radio operator's existence is often found at the end of an Ethernet cable - specifically a device that is being powered via "Ethernet":  It is often the case that interference - from HF through UHF - emanates from such devices.

Figure 1:
POE camera with both snap-on ferrites installed -
including one as close to the camera as possible -
and other snap-on/toroids to suppress HF through VHF.
Click on the image for a larger version.

Why this happens

Ethernet by itself is usually relatively quiet from an (HF) RF standpoint:  The base frequency of modern 100 megabit and gigabit Ethernet is typically above much of HF, and the data lines are coupled via transformers, making them inherently balanced and less prone to radiate.  Were this not the case, the integrity of the data itself would be strongly affected by the adjacent wires within the cable - or even by metallic objects near which the cable was routed - as it would radiate a strong electromagnetic field, and any such coupling would surely affect the signal by causing reflections, attenuation, etc.

This is NOT the case with power that is run via the same (Ethernet) cable.  Typically, this power is sourced by a switching power supply - too often one that is not filtered well - and worse, the device at the far end of the cable (e.g. a camera or WiFi access point - to name two examples) is built "down to a cost" and itself contains a switching voltage converter with rather poor filtering that is prone to radiation of RF energy over a wide spectrum.  Typically lacking effective common-mode filtering - particularly at HF frequencies (it would add expense and increase bulk) - the effect of RF radiating from the power-conducting wires in an Ethernet cable can be severe.

Even worse than this, Ethernet cables are typically long - often running in walls or ceilings - effectively making them long, wire antennas, capable of radiating even at HF.  The "noisy" power supply at one or both ends of this cable can act as transmitters.

What to do

While some POE configurations convey the DC power on the "spare" conductors in an eight conductor cable (e.g. the blue and brown pairs), some versions use the data pairs themselves (often using center-tapped transformers in the Ethernet PHY) meaning that it may not be easy to filter just the DC power.

While it is theoretically possible to extract the power from the Ethernet cable, filter it and reinsert it on the cable, the various (different) methods of doing this complicate matters, and doing so - particularly if the DC is carried on the data pairs - can degrade data integrity by requiring the data to transit two transformers, incurring potential signal attenuation, additional reflections and degraded frequency response - to name just a few.  Doing this is further complicated by the fact that the method of power conveyance varies, and you may not know which method is used by your device(s).

It is possible to subject the entire cable and its conductors to a common-mode inductance to help quash RFI - but this must be done carefully to maintain signal integrity.

Comment: 

Some POE cameras also have a coaxial power jack that permits it to be powered locally rather than needing to use POE.  I've observed that it is often the case that using this local power - which is often 12-24 volts DC (depending on the device) - will greatly reduce the noise/interference generated by the camera and conducted on the cable - provided, of course, that the power supply itself is not a noise source.  Even if a power supply is used near the camera, I would still suggest putting its DC power cable through ferrite devices as described below to further-reduce possible emissions.

Ferrite can be your friend

For VHF and UHF, simple snap-on ferrites can significantly attenuate the conduction of RF along the cable, but these devices are unlikely to be effective at HF - particularly on the lower bands - as they simply cannot add enough impedance at lower frequencies.

To effectively reduce the conduction of RF energy on HF, one could wrap the Ethernet cable around a ferrite toroidal core, but this is often fraught with peril - particularly with gigabit Ethernet cable - as tight radius turns can distort the geometry of the cable, affect the impedance and cause cross-coupling into other wire pairs.  If this happens, one often finds that the Ethernet cable doesn't work reliably at gigabit speeds anymore (being stuck at 100 or even 10 megabits/second) or starts to "flap" - switching between different speeds or slowing down due to retransmissions on the LAN.

One type of Ethernet cable that is quite resistant to geometric distortion caused by wrapping around a toroidal core is the flat Ethernet cable (sometimes erroneously referred to as "CAT6" or "CAT7").  This cable is available as short jumpers around 6 feet (2 meters) long and, with the aid of a female-female 8P8C (often called "RJ-45") coupler, can be inserted into an existing Ethernet cable run.  As it is quite forgiving of being wrapped around ferrites, this flat cable can be pre-wound with such devices and inserted at the Ethernet switch end and/or the device end at a later time.  I have found that with reasonable quality cable and couplers this does not seem to degrade the integrity of the data on the LAN cable - at least for moderate lengths (e.g. 50 feet/15 meters or less) - your mileage may vary with very long cable runs.

A double-female "splice" connector will be required to insert the jumpers - described below - into an existing Ethernet run and, unfortunately, it's sometimes difficult to find known-good quality devices that will not degrade the connection, so testing the splice on a Gig-Ethernet before you install it somewhere is a good idea.

Practical examples

Best attenuation across HF

Figure 2:
Three toroids wound on "flat" Ethernet cable.  An FT114-43
is used on each end with an FT114-31 in the middle.
Click on the image for a larger version.
Using a test fixture with a VNA, I determined that for best overall attenuation across the entire HF spectrum I needed three ferrite toroids on the 2 meter long flat Ethernet jumper.  All three of these were FT-114 size (1.14", 29mm O.D.) with the first and last being of material type 43 and the center being type 31:  Both types 31 and 43 offer good impedance to low HF but 43 is more effective on the higher bands - namely 10 and 6 meters:  The three toroids, separated by a few inches/cm, offer better all-around rejection from 160 meters through 10/6 meters than just one.  Having said this, it is unrealistic to expect more than 20dB or so of attenuation to be afforded by ferrite devices at high HF/low VHF - "because physics".

One might be tempted to use the more-available FT-240 size of toroids (2.4", 60mm O.D.) but this is unnecessarily large, comparatively fragile and expensive:  While one can fit more turns on the larger toroid, one hits the "point of diminishing returns" (e.g. little improvement with additional turns) very quickly owing to the nature of the ferrite and coupling between turns.  The FT-114 size is the best balance as it can accept 6-8 turns with the cable's connector installed - and more than 6-8 turns is rapidly approaching the point of diminishing returns for a single ferrite device, anyway.
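The "diminishing returns" effect can be caricatured numerically.  The sketch below uses a deliberately simplified model - an ideal inductance that grows as turns squared, in parallel with a parasitic winding capacitance that grows with each turn - and the A_L and capacitance figures are assumptions for illustration, not measured data for any real core:

```python
import math

# Cartoon model of a common-mode choke:  inductance ~ N^2, shunted by
# winding capacitance that grows with turns.  Real ferrite is lossy and
# frequency dependent, so treat these numbers as illustrative only.

A_L = 900e-9          # henries per turn^2 (assumed)
C_PER_TURN = 0.6e-12  # parasitic capacitance added per turn (assumed)

def choke_impedance(turns, freq_hz):
    w = 2 * math.pi * freq_hz
    z_l = complex(0, w * A_L * turns ** 2)           # ideal inductor, ~N^2
    z_c = complex(0, -1 / (w * C_PER_TURN * turns))  # winding capacitance
    return z_l * z_c / (z_l + z_c)                   # parallel combination

def cm_attenuation_db(turns, freq_hz, z_line=100.0):
    # insertion loss of the choke in a common-mode loop of z_line ohms
    z = abs(choke_impedance(turns, freq_hz))
    return 20 * math.log10((z_line + z) / z_line)

# At 7 MHz the attenuation climbs with turns - then falls again once
# the winding capacitance pulls the self-resonance below the band:
for n in (4, 8, 12, 16):
    print(n, round(cm_attenuation_db(n, 7e6), 1))
```

With these made-up values the attenuation peaks around a dozen turns and then drops - the same qualitative behavior that makes piling extra turns on one core unproductive in practice.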

In bench testing with a fixture, it was found that three toroids on a piece of flat Ethernet cable provided the best, overall attenuation across HF and to 6 meters - significantly better than any combination of FT114 or FT240 toroids of either 43 or 31 mix alone:  Figure 2, above, shows what this looks like.  Two FT114-43 and one FT114-31 toroid were used - the #31 toroid being placed in the center, providing the majority of series impedance at low HF and a #43 at each end being more effective at higher HF through 6 meters.

To construct this, the flat Ethernet cable was marked with a silver marker in the center and four turns were wound from each end, in turn, for a total of eight turns on the FT114-31.  Placing an FT114-43 at 12 inches (30cm) from each end and winding seven turns puts each FT114-43 fairly close to its connector, allowing the installation of one or two snap-on ferrites very close to the connector if it is determined that more suppression is required at VHF frequencies.  Small zip-ties (not shown in Figure 2) are used to help keep the turns from bunching up too much and also to prevent the start and stop turns from getting too close to each other:  Do not cinch these ties up enough to distort the geometry of the Ethernet cable as that could impact speed - particularly when using gigabit Ethernet.

It is important that, as much as possible, one NOT place a "noisy" cable in a bundle with other cables or to loop it back onto itself - both of which could cause inadvertent coupling of the RFI that you are trying to suppress into the other conductors - or to the far side of the cable you are installing.

Best attenuation at VHF and HF

If you are experiencing interference from HF through VHF, you will need to take a hybrid approach:  The use of appropriate snap-on and toroidal ferrite devices.  While snap-on ferrite devices are not particularly useful for HF - especially below about 20 MHz - they can be quite effective at VHF, which is to be expected as that is the purpose for which they are typically designed.  Similarly, a ferrite toroid such as that described above - particularly with type 43 or 31 material - will likely have little effect on VHF radiation - particularly in the near field.

Figure 3:
A combination of a snap-on device with an extra turn looped
through it and two ferrites to offer wide-band suppression
from HF through VHF.
Click on the image for a larger version.

Figure 3 shows such a hybrid approach with a snap-on device on the left and two toroids on the right to better-suppress a wider range of frequencies.  In this case it is important that the snap-on device be placed as close to the interference source as possible (typically the camera) as even short lead lengths can function as effective antennas at VHF/UHF.  You may also notice that the snap-on has two turns through its center as this greatly improves efficacy.

Doing this by itself is not likely to be as effective in reducing radiation at VHF/UHF from the cable itself, often requiring the placement of additional ferrite devices.  Figure 1 shows the installation of several snap-on devices placed as close to the POE camera as physically possible - mainly to reduce radiation at VHF and UHF as at those frequencies even a few inches or centimeters of cable emerging from the noise-generating device can act as an effective antenna.

Determining efficacy

During the installation of these devices on my POE cameras I was more interested in how much attenuation would be afforded at VHF:  Since I'd already used the "chokes on a flat cable" approach like that in Figures 2 and 3 I knew that this would likely be as effective as was practical - but because the VHF/UHF noise could be radiated by comparatively short lengths of "noisy" cable, I needed to be able to quantify that what I did made a difference - or not.

Figure 4:
The cable in Figure 3 installed, but not yet
tucked into place as depicted in Figure 1.
(This does not show the snap-on ferrites installed
where the wire exits the camera housing.)
The female-female RJ45/8P8C "splice" can be
seen in the upper-left corner of the picture.
Click on the image for a larger version.

For HF this was quite simple:  I simply tuned my HF receiver to a frequency where I knew that I could hear the noise from the cameras and compared S-meter readings with the system powered up and powered down.  This approach is best done at a time during which the frequency in question is "dead" or at least weak (e.g. poor propagation) - 80/40 meters during the midday and 15/10 meters at night is typical.

For VHF this required a bit more specialized equipment.  My "Go-To" device for finding VHF signals - including noise - is my VK3YNG DF sniffer which has extremely good sensitivity - but it also has an audible "S-meter" in terms of a tone that rises with increasing signal level.  Switching it to this mode and placing it and its antenna at a constant distance fairly close to the device being investigated allowed me to "hear" - in the form of a lower-pitched tone - whether or not the application of a ferrite device made a difference.

Slightly less exotic would be an all-mode receiver capable of tuning 2 meters such as the Yaesu FT-817 or Icom IC-706, 703 or 705.  In this case the AM mode would be selected and the RF gain control adjusted until the noise amplitude audibly decreases:  This step is important as, without it, the AGC in the receiver would simply compensate for a decrease in noise and hide the fact that the signal level changed.  By listening for a decrease in the noise level one can "hear" when installing a snap-on ferrite made a difference - or not.

One cannot use a receiver in FM mode for this as an FM detector is designed to produce the same amount of audio (including noise) at any signal level:  A strong noise source and a weak one will sound exactly the same.  It's also worth noting that the S-meter on a receiver in FM mode - or on an FM-only receiver - is typically terrible in the sense that its indications typically start with a very low signal and "peg" the meter at a signal that isn't very strong at all, which means that if you try to use one, you'll have to situate the receiver/antenna such that you get a reading that is neither full-scale nor at the bottom of the scale to leave room for the indication of change.

Of course, a device like a "Tiny SA" (Spectrum Analyzer) could be used to provide a visual indication, using the "Display Line", markers and stored traces to allow a quick "before and after" indication.  As mentioned above, one would want to place the antenna and the receiving device (an actual receiver or spectrum analyzer) fairly close to the device being investigated - but keep it in the same location during the entire time so that one can get meaningful "before and after" readings.

Conclusion

With the use of ferrites alone, one should not expect to be able to completely suppress radiation of RF noise from an Ethernet cable - the typical maximum to be reasonably expected is on the order of about 20dB (a bit over 3 "S" units).  In a situation where the POE device is very close to the antenna, it may not be possible to knock the interference down to the point of inaudibility.
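The "a bit over 3 S units" figure follows directly from the IARU convention of 6 dB per S-unit, as a one-liner shows:

```python
# IARU Region 1 recommendation:  one S-unit = 6 dB (below S9).
def db_to_s_units(db):
    return db / 6.0

print(round(db_to_s_units(20), 1))  # -> 3.3
```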

The most effective use will be for noise sources at some distance from the receive antenna - particularly if a long cable is used that may act as an antenna.

Be prepared to install appropriate ferrite devices at both ends of the cable as it's often the case that not only does the POE device itself (camera, wireless device) radiate noise, but so does the POE switch:  No-name brand POE power supplies and switches are themselves often very noisy, and the proper course of action would be to first swap out the supply with a known-quiet device before attaching ferrite.

As every interference situation is unique, your mileage may vary - and the best road to success is being able to quantify whether the changes you have made have made things better or worse.




Repairing a dead Kenwood TS-850S


Recently, a Kenwood TS-850S - a radio from the mid-early 1990s - crossed my workbench.  While I'm not in the "repair business", I do fix my own radios, those of close friends, and occasionally those of acquaintances:  I've known this person for many years and have many mutual friends.

If you are familiar with the Kenwood TS-850S to any degree, you'll also know that they suffer from an ailment that has struck down many pieces of electronic gear from that same era:  Capacitor Plague.

Figure 1:
The ailing TS-850S.  The display is normal - except
for the frequency display showing only dots.  This error is
accompanied by "UL" in Morse.
Click on the image for a larger version.
This isn't the same "Capacitor Plague" of which you might be aware where - particularly in the early 2000s - many computer motherboards failed due to incorrectly formulated electrolytic capacitors, but rather early-era surface-mount electrolytic capacitors that began to leak soon after they were installed.

The underlying cause?

While "failure by leaking" is a common occurrence in electronics, this failure is somewhat different in many aspects.  At about this time, electronic manufacturers were switching over to surface-mount devices - but one of the later components to be surface-mounted were the electrolytic capacitors themselves:  Up to this point it was quite common to see a circuit board where most of the components were surface-mount except for larger devices such as diodes, transistors, large coils and transformers - and electrolytic capacitors - all of which would be mounted through-hole, requiring an extra manufacturing step.

Early surface-mount electrolytic capacitors, as it turned out, had serious flaws.  In looking at the history, it's difficult to tell what aspect of their use caused the problem - the design and materials of the capacitor itself or the method by which they were installed - but it seems that whatever the cause, subjecting the capacitors themselves to enough heat to solder their terminals to the circuit board - via hot air or infrared radiation - was enough to compromise their structural integrity.

Whatever the cause - and at this point it does not matter who is to blame - the result is that over time, these capacitors have leaked electrolyte onto their host circuit boards.  Since this boron-based liquid is somewhat conductive and mildly corrosive in its own right, it is not surprising that as surface tension wicks this material across the board, it causes devastation wherever it goes, particularly when voltages are involved.

The CAR board - the cause of "display dots"

In the TS-850S, the module most susceptible to leaking capacitors is the CAR board - a circuit that produces multiple, variable-frequency signals that feed the PLL synthesizer and several IF (Intermediate Frequency) mixers.  Needless to say, when this board fails, so does the radio.

The most obvious symptom of this failure is when damage to the board is so extensive that it can no longer produce the needed signals - and if one particular synthesizer (out of four on the board) fails, the frequency display disappears - replaced with just dots - and the letters "UL" are sent in Morse code to indicate the PLL's "Unlock" condition.

Figure 2:
The damaged CAR board.  All but one of the surface-mount
electrolytic capacitors has leaked corrosive fluid and damaged
the board.  (It looked worse before being cleaned!)
Click on the image for a larger version.
Prior to this failure, the radio may have started going deaf and/or its transmitter output may have been dropping as the other three synthesizers - while still working - gradually lose output, but this may be indicative of another problem as well - more on this later.

Figure 2 shows what the damaged board looks like.  Actually, it looked a bit worse than that when I first removed it from the radio - several pins of the large integrated circuits were stained black.  As you can see, there are black smudges around all but one of the electrolytic capacitors where the corrosive liquid leaked out, getting under the green solder mask and even making its way between power supply traces, where the copper was literally being eaten away.

The first order of business was to remove the board and throw it in the ultrasonic cleaner.  Using a solution of hot water and dish soap, the board was first cleaned for six minutes - flipping it over during the process - and then, very carefully, paper towels and compressed air were used to remove the water.

Figure 3:
The CAR board taking a hot bath in soapy water in an
ultrasonic cleaner.  This removes not only debris, but spilled
electrolyte - even that which has flowed under components.
Click on the image for a larger version.
At this point I needed to remove all of the electrolytic capacitors:  Based on online research, it is common for all of them to leak, but I was lucky in that the one unit that had not failed (a 47uF, 16 volt unit) "seemed" OK, while all of the others (all 10uF, 16 volt) had disgorged their contents.

If you look at advice online, you'll see that some people recommend simply twisting the capacitor off the board as the most expedient removal procedure, but I've found that doing so with electrolyte-damaged traces often results in ripping those same traces right off the board - possibly due to thinning of the copper itself and/or some sort of weakening of the adhesive.

My preferred method is to use a pair of desoldering tweezers - more or less a soldering iron with two prongs that heats both pins of the part at once, theoretically allowing its quick removal.  While many capacitors are easily removed with this tool, some are more stubborn:  During manufacture, drops of glue were used under the part to hold it in place prior to soldering, and this sometimes does its job too well, making the part difficult to remove.  Other times, the capacitor will explode (usually just a "pop") as it is being heated, oozing out more corrosive electrolyte.

With the capacitors removed, I tossed the board in the ultrasonic cleaner for another cycle in the same warm water/soap solution to remove any additional electrolyte that had come off - along with debris from the removal process.  It is imperative when repairing boards with leaking capacitors that all traces of electrolyte be completely removed, or damage will continue even after the repair.

At this point one generally needs to don magnification and carefully inspect the board.  Using a dental pick and small blade screwdriver, I scraped away loose board masking (the green overcoating on the traces) as well as bits of copper that had detached from the board:  Having taken photos of the board prior to capacitor removal - and with the use of the Service Manual for this radio, found online - I was confident that I could determine where, exactly, each capacitor was connected.

When I was done - and the extent of the damage was better-revealed - the board looked to be a bit of a mess, but that was the fault of the leaking capacitors.  Several traces and pads in the vicinity of the defunct capacitors had been eaten away or fallen off - but since these capacitors are pretty much placed across power supply rails, it was pretty easy to figure out where they were supposed to connect.

Figure 4:
The CAR board, reinstalled for testing.
Click on the image for a larger version.
As the mounting pads for most of these capacitors were damaged, I saw no point in replacing them with more surface-mount capacitors.  Instead, I installed through-hole capacitors on the surface, laying them down as needed for clearance - and since these new capacitors have long leads, those leads could be used to "rebuild" the traces that had been damaged.

The photo shows the final result.  Different-sized capacitors were used as necessary to accommodate the available space, but the result is electrically identical to the original.  It's worth noting that these electrolytic capacitors are in parallel with surface-mount ceramic capacitors (which seem to have survived the ordeal) so the extra lead length on these electrolytics is of no consequence - the ceramic capacitors doing their job at RF as before.  After successful testing of the board, dabs of adhesive were used to hold the larger, through-hole capacitors to the board to reduce stress on the solder connections under mechanical vibration.
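Why the extra lead length doesn't matter can be shown with a quick reactance comparison.  The component values below are illustrative assumptions - not measurements from the board:

```python
import math

def x_c(f, c):
    """Capacitive reactance in ohms at frequency f (Hz) for capacitance c (F)."""
    return 1 / (2 * math.pi * f * c)

def x_l(f, l):
    """Inductive reactance in ohms at frequency f (Hz) for inductance l (H)."""
    return 2 * math.pi * f * l

# Assumed illustrative values - not taken from the TS-850S service manual:
c_cer = 10e-9    # a parallel SMD ceramic bypass, assumed 10 nF
c_el = 10e-6     # the replacement 10 uF electrolytic
l_lead = 20e-9   # a couple cm of extra capacitor lead, roughly 20 nH

# At the CAR board's ~8.4 MHz, the ceramic does the RF bypassing...
print(f"8.4 MHz: ceramic bypass ~{x_c(8.4e6, c_cer):.1f} ohms")

# ...while the electrolytic matters only at low frequencies, where the
# extra lead inductance is utterly negligible:
print(f"1 kHz: electrolytic ~{x_c(1e3, c_el):.1f} ohms, "
      f"extra lead ~{x_l(1e3, l_lead) * 1e3:.2f} milliohms")
```

At RF the nearby ceramic presents only a couple of ohms, so it dominates; the electrolytic's job is at low frequencies, where a few tens of nanohenries of lead is a fraction of a milliohm.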

Following the installation of the new capacitors, the board was again given two baths in the ultrasonic cleaner - one using the soap and water solution, the other just plain tap water.  Again the board was patted dry, carefully blown out with compressed air to remove all traces of water from under the components, and then allowed to air dry for several hours.

Testing the board

After using an ohmmeter to make sure that the capacitors all made their proper connections, I installed the board in the TS-850S and... it didn't work:  I was again greeted with a "dot" display and a Morse "UL".

I suspected that one of the "vias" - a point where a circuit trace passes from one side of the board to the other through a plated hole - had been "eaten" by the errant electrolyte.  Wielding an oscilloscope, I quickly noted that only one of the synthesizers was working - the one closest to connector CN1 - and this told me that at least one control signal was missing from the rest of the chips.  Probing with the scope I soon found that the serial data signal ("PDA") used to program the synthesizers "stopped" beyond the first chip, and a bit of testing with an ohmmeter showed that somewhere along its run the signal had been interrupted - no doubt at a via that had been eaten away by electrolytic action.

Figure 5:
Having done some snooping with an oscilloscope, I noted
that the "PDA" signal did not make it past the first of the
(large) synthesizer chips.  The white piece of #30 Kynar
wire-wrap wire was used to jump over the bad board "via"
Click on the image for a larger version.

The easiest fix was to use a piece of small wire - I used #30 Kynar-insulated wire-wrap wire - to jumper from where this control signal was known to be good to a point where it was not (a length of about an inch/two cm).  I was immediately rewarded with all four synthesizer outputs on the correct frequencies, tuning as expected with the front-panel controls.

Low output

While all four signals were present and on their proper frequencies - indicating that the synthesizers were working correctly - I soon noticed, using a scope, that the second synthesizer's output at about 8.3 MHz was only about 10% of its expected amplitude.  A quick test of the transmitter indicated that the RF output was only about 15 watts - far below the expected 100 watts.

Again using the 'scope, I probed the circuit, comparing the results with the nearly identical third synthesizer (which was working correctly), and soon discovered that the amplitude dropped significantly through a pair of 8.3 MHz ceramic filters.

The way that synthesizers 2 and 3 work is that the large ICs synthesize outputs in the 1.2-1.7 MHz area and mix them with a 10 MHz source derived from the radio's reference to yield signals around 8.375 and 8.83 MHz, respectively - but this mix results in a spectrally very ugly signal, full of harmonics and undesired products.  With the use of these ceramic bandpass filters - similar to the 10.7 MHz filters found in analog AM and FM radios - these signals are "cleaned up" to yield the desired output over the several kiloHertz that they vary, depending on the bandpass filter and the settings of the front panel "slope tune" control.
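A bit of arithmetic shows how the difference mix lands on the two filter frequencies.  In this sketch, the exact VCO frequencies assigned to each synthesizer are assumptions chosen to produce the outputs stated above:

```python
# Difference mixing on the TS-850S CAR board.  The VCO frequencies below are
# assumptions chosen to land on the 8.375/8.83 MHz outputs described above.
f_ref = 10.0  # MHz, derived from the radio's reference oscillator

vcos_mhz = {"synthesizer 2": 1.625, "synthesizer 3": 1.17}

products = {name: {"difference": f_ref - f_vco, "sum": f_ref + f_vco}
            for name, f_vco in vcos_mhz.items()}

for name, p in products.items():
    # The difference product is the wanted signal; the sum product (and the
    # harmonics of everything) must be removed by the ceramic bandpass filter.
    print(f"{name}: keep {p['difference']:.3f} MHz, "
          f"filter out {p['sum']:.3f} MHz and harmonics")
```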

Figure 6:
The trace going between C75 and CF1 was cut and a bifilar-
wound transformer was installed to step up the impedance
from Q7 to that of the filter:  R24 was also changed to 22
ohms - providing the needed "IF-7-LO3" output level at J4.
Click on the image for a larger version.

The problem here seemed to be that the two 8.3 MHz ceramic filters (CF1, CF2) were far more lossy than they should have been.  Suspecting a bad filter, I removed them both from the circuit board and tested them with a NanoVNA:  While their passband "shape" seemed OK, their losses were each about 10 dB higher than is typical of these devices, indicating that they were slowly degrading.  A quick check online revealed that filters on these particular frequencies were not available anywhere (they were probably custom devices, anyway) so I had to figure out what to do.
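Those measurements are consistent with the low output noted earlier - two cascaded filters, each about 10 dB over-lossy, add up to roughly 20 dB.  A quick back-of-the-envelope conversion:

```python
# Two cascaded ceramic filters (CF1 and CF2), each with ~10 dB of excess
# loss, and the voltage ratio that total excess corresponds to.
extra_loss_per_filter_db = 10.0   # measured excess loss of each filter
n_filters = 2

total_excess_db = extra_loss_per_filter_db * n_filters
voltage_ratio = 10 ** (-total_excess_db / 20)  # dB to voltage ratio

print(f"{total_excess_db:.0f} dB of excess loss -> output at "
      f"{voltage_ratio:.0%} of the expected amplitude")
```

Twenty dB is a factor of ten in voltage - in line with the roughly 10% amplitude observed at the synthesizer 2 output.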

Since the "shape" of the individual filters' passbands was still OK - a few hundred kHz wide - all I needed was to get more signal.  While I could have kludged another amplifier into the circuit to make up for the loss, I decided, instead, to reconfigure the filter matching.  Driving the pair of ceramic filters is an emitter-follower buffer amplifier (Q7), the output of which is rather low impedance - well under 100 ohms - but these types of filters typically "want" around 300-400 ohms, and in this circuit the match was made using a series resistor - specifically R24.  This method of "matching" the impedance is effective but very lossy, so changing to a more efficient matching scheme would allow me to recover some of the signal.

Replacing the 330 ohm series resistor (R24) with a 22 ohm unit and installing a bifilar-wound transformer (5 turns on a BN43-2402 binocular core) wired as a 1:4 step-up transformer (the board trace between C75 and CF1 was cut and the transformer connected across it) brought the output well into the proper amplitude range - and with this success, I used a few drops of "super glue" to hold the transformer to the bottom of the board.  It is important to note that I "boosted" the amplitude of the signal prior to the filtering:  doing so after the filtering - with its very low signal level - might have amplified spurious signals as well, a problem this method avoids.
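To get a feel for how much signal the transformer recovers over the series resistor, here is a rough calculation.  The source and filter impedances are assumptions (the text gives only "well under 100 ohms" and "300-400 ohms"), and the transformer is treated as ideal, so treat the result as an estimate:

```python
import math

# Assumed values - not from the service manual:
r_src = 20.0     # ohms, emitter-follower output impedance (assumption)
r_filt = 360.0   # ohms, ceramic filter input resistance (assumption)

# Original circuit: a 330 ohm series resistor (R24) "matches" by brute force.
# Voltage at the filter per volt of open-circuit source voltage:
v_resistive = r_filt / (r_src + 330.0 + r_filt)

# Modified circuit: 22 ohm series resistor plus an ideal 1:4 impedance
# (1:2 turns) step-up transformer, which doubles the voltage and
# quadruples the source-side impedance seen by the filter.
v_xfmr = 2.0 * r_filt / (4.0 * (r_src + 22.0) + r_filt)

gain_db = 20 * math.log10(v_xfmr / v_resistive)
print(f"resistive: {v_resistive:.2f} V/V, transformer: {v_xfmr:.2f} V/V "
      f"(~{gain_db:.1f} dB recovered)")
```

With these assumed impedances the change recovers somewhere in the neighborhood of 8-9 dB - a good chunk of the filters' excess loss.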

Rather than using a transformer I could have also used a simple L/C impedance transformer (a series 2.2uH inductor with a 130pF capacitor to ground on the "filter side" would have probably done the trick) but the 1:4 transformer was very quick and easy to do.
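For the curious, the standard low-pass L-network equations land close to those values.  Assuming a ~50 ohm source and a ~360 ohm filter impedance (both assumptions - the text specifies neither exactly):

```python
import math

# Low-pass L-network stepping a low source impedance up to the ceramic
# filter's input impedance at the synthesizer 2 frequency.
# Both impedances are assumptions consistent with the text, not measurements.
f = 8.375e6              # Hz, synthesizer 2 output frequency
r_low, r_high = 50.0, 360.0

q = math.sqrt(r_high / r_low - 1)   # network Q set by the impedance ratio
x_l = q * r_low                     # series inductive reactance, ohms
x_c = r_high / q                    # shunt capacitive reactance, ohms

l_uh = x_l / (2 * math.pi * f) * 1e6      # series inductor, microhenries
c_pf = 1 / (2 * math.pi * f * x_c) * 1e12 # shunt capacitor, picofarads
print(f"Q = {q:.2f}: series L ~= {l_uh:.2f} uH, shunt C ~= {c_pf:.0f} pF")
```

With these assumed impedances the equations give roughly 2.4 uH and 131 pF - the same neighborhood as the 2.2 uH / 130 pF suggested above.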

With the output level of synthesizer #2 (as seen at connector CN4) now up to spec (actually 25% higher than indicated on the diagram in the service manual) the radio was again capable of full transmit output power, and the receiver's sensitivity was also improved - not surprising, considering that the low output would have starved mixers in the radio's IF.

A weird problem

After all of this, the only thing not working properly is "half" of the "Slope Tune" control:  On USB the "Low Cut" works - as does the "High Cut" on LSB - but the "High Cut" does not work as expected on USB, and the "Low Cut" does not work as expected on LSB.  With the settings that do NOT work properly, I hear the effect of the filter being adjusted (e.g. the bandwidth narrows), but the radio's tuning does not track the adjustment as it should.  What's common to both of these "failures" is that they both relate to the high-frequency side of the IF filters in the radio - the effect being "inverted" on LSB.

I know that the problem is NOT the CAR board or the PLL/synthesizer itself, as these are being properly set to frequency.  What seems NOT to be happening is this:  for the non-working adjustments, the radio's CPU is not adjusting the radio's tuning to track the shift of the IF frequency to keep the received signal in the same place - which seems like more of a software problem than a hardware one.  Using the main tuning knob or the RIT one can manually compensate, but that is obviously not how it's expected to work!

In searching the Internet, I see scattered mentions of this sort of behavior on the TS-850 and 950, but no suggestions as to what causes it or what to do about it:  I have done a CPU reset of the radio and disconnected the battery back-up to wipe the RAM contents, but to no avail.  Until/unless this can be figured out, I advised the owner to set the affected control to its "Normal" position.

Figure 7:
The frequency display shows that the synthesizer is now
working properly - as did the fact that it outputs full power
and gets good on-the-air signal reports.
Click on the image for a larger version.

Final comments

Following the repair, I went through the alignment steps in the service manual and found that the radio was slightly out of alignment - particularly with respect to settings in the transmit output signal path - possibly adjusted during previous servicing to compensate for the low output due to the dropping level from the CAR board.  Additionally, the ALC didn't seem to work properly - being out of adjustment - resulting in distortion on voice peaks with excessive output power.

With the alignment sorted, I made a few QSOs on the air, getting good reports - and when I used a WebSDR to record my own transmissions, they sounded fine as well.

Aside from the odd behavior of the "Slope Tune" control, it seems to work perfectly.  I'm presently convinced that this must be a software - not a hardware - problem, as all of the related circuits function as they should but don't seem to be "told" what to do.

* * * * *



[END]





