ElectroOptical Innovationshttps://electrooptical.net/News/2022-08-05T01:19:40.609354+00:00Silicon Photomultiplier Module Design2021-01-25T16:30:28+00:002022-08-05T01:19:40.609354+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/silicon-photomultiplier-module-design/<p>Internal Developments<br/><br/>In the last year or two we've been doing a lot of work aimed at replacing photomultiplier tubes (PMTs) in instruments, using <em>avalanche photodiodes</em> (APDs) and <em>silicon photomultipliers</em> (SiPMs). SiPMs are arrays of single-photon detectors, so they're also known as <em>multi-pixel photon counters</em> (MPPCs). Our main application areas include biomedical instruments such as flow cytometers and microplate readers, which have to measure low light levels very precisely but don't need the ultralow dark current of PMTs. (Follow-on articles will talk about our SiPM work in airborne lidar and SEM cathodoluminescence, as well as on improving the performance of actual PMTs.)<br/><br/>PMTs have been around since the 1930s, and remain the undisputed champs for the very lowest light levels. We love PMTs, but we have to admit that they're delicate and not that easy to use—they tend to be bulky, they need high voltage, and they need regular replacement. Most of all, PMTs are very expensive.</p>
<p><br/>We've been working with several customers on developing products using Hamamatsu, Broadcom, and On Semi (formerly SensL) SiPMs. They have different strengths, but all three series are excellent devices that have far better linearity in analog mode than we initially expected. (There's a fair amount of doom-and-gloom about that in the specialized technical literature.)<br/><br/>Our first product design used the Hamamatsu S13362, and can go from counting single photons to working in analog in dim room lights, with just the twist of a knob. Subsequently we've had the opportunity to do a couple of devices for time-of-flight lidar using On Semi's MicroFCs, which we developed from our existing IP. Recently we've been consulting on microplate and flow cytometry applications. What all these applications have in common is a move away from PMT-based designs toward the newer solid-state option.</p>
<p><br/>These applications are challenging enough without having to develop the photodetection hardware. With so much customer interest, we've been focusing on developing a series of SiPM modules that act as drop-in replacements for traditional PMT modules, including all their nice features such as wide-range voltage-controlled gain, ±5 V input, and selectable bandwidths from DC–200 kHz to DC–300 MHz. Our existing designs are available on a flexible licensing model that generates considerable savings compared with either purchased PMT modules or internally-funded development, and gives you complete control over your supply chain.<br/><br/>Because these technologies are new, we can provide customized proof-of-concept (POC) demos showing how they work in your exact application. We've delivered prototypes and POCs in as little as one week at low cost, so you can make a real-world engineering evaluation without sacrificing a lot of budget or schedule.<br/><br/>For more information on our SiPM/MPPC designs, or help with your low-light measurements, send us an <a href="mailto:pcdhobbs@electrooptical.net">email</a> or give us a call at <a href="tel:914-236-3005">+1 914 236 3005</a>—we're interested in solving your detection and system problems.</p>Signal to Noise Ratio and You, Part 22021-01-24T13:05:47+00:002022-01-20T13:29:48.986793+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/signal-to-noise-ratio-and-you-part-2/<p>In <a href="https://electrooptical.net/News/digital-lock-in-principles/">Part 1</a>, we discussed ways to get better measurements by improving the <em>signal to noise ratio</em> (SNR), and saw that although it was often a win to measure more slowly and use lowpass filters, going too far actually makes things worse, because of the way noise concentrates at low frequency. Here we introduce a more sophisticated approach that generally works better: the <em>lock-in amplifier.</em></p>
<!-- In building an ultrasensitive instrument, we're always fighting to improve our signal-to-noise ratio (SNR). The SNR is the ratio of signal power to noise power in the measurement bandwidth, and is limited by noise in the instrument itself and the noise of any background signals, such as the shot noise of the background light or the slight hiss of a microphone. </p>
<p>If the signal is weak, it will have proportionally more noise, so that the apparatus has to be designed to get rid of as much noise as possible. There are a number of ways to do this. The best is to get more signal or reduce the noise, for instance by increasing the laser power and using a <a href="/articles/lc120d-ultraquiet-diode-laser-system/">laser noise canceller</a>, but eventually we hit a practical limit. At that point, we're left with several options, all of which boil down to filtering in one form or another.</p>
<p>Filters can be hardware or software, but their job is to pass the desired signal frequencies and reject noise at other frequencies. Of course some of the noise lands on top of our signal and so makes it through the filter anyway.</p>
<p>A low-pass filter passes frequencies below its cutoff and attenuates higher ones. If the signal is concentrated below the cutoff frequency, the filter rejects the high-frequency noise while preserving the signal (and the low-frequency noise, of course). By slowing down the measurement, for example by reducing the scan speed, the bandwidth of the signal's frequency spectrum can be reduced and the filter made correspondingly narrower.</p>
<p> A problem with this simple approach is that in most cases there's a concentration of noise at low frequencies (near DC), so filtering doesn't help as much as one might expect--in fact, it's not uncommon for the noise to get <em>worse</em> as the measurement gets slower, which is rather unintuitive. It's because there is a lower limit to the signal spectrum as well as an upper. If we're taking 1000 measurements, each with an averaging time of a millisecond, then the signal spectrum is predominantly contained between 1 Hz and 1 kHz. A measurement that takes a second doesn't contain much signal information or noise between 0 Hz (DC) and 1 Hz. Slowing it down to one measurement per hundred seconds reduces the lower cutoff to (1/100) Hz and the upper cutoff to 10 Hz. That narrows the bandwidth, all right, but interestingly it typically makes the noise worse rather than better. Let's look at why.</p>
<p>To find the total noise, we have to add up the noise contributions at all frequencies in the filter passband. In other words, the total noise power is the integral of the noise power spectral density (PSD). The low frequency noise PSD often goes like 1/<em>f</em>, whose integral is ln(<em>f</em>). Thus if the passband is between <em>f</em><sub>1</sub> and <em>f</em><sub>2</sub>, the total noise goes as ln(<em>f</em><sub>2</sub>) - ln(<em>f</em><sub>1</sub>) = ln(<em>f</em><sub>2 </sub>/ <em>f</em><sub>1</sub>). Because the ratio <em>f</em><sub>2 </sub>/ <em>f</em><sub>1</sub> is the same in both the fast and slow measurements, the 1/<em>f</em> noise is also the same—sacrificing a factor of 100 in speed hasn't improved things at all. In fact, since things like thermal drifts rise more steeply than 1/<em>f</em>, going slower is likely to make things worse in real cases. So lowpass filtering can help, but only up to a point.</p>
-->
<p>We were considering a typical <em>baseband</em> signal, one that goes from near DC to some much higher frequency. Audio is a familiar example, with a bandwidth usually quoted as 20 Hz to 20 kHz. To escape the low-frequency noise, we need to move our signal up in frequency, out of baseband. In lock-in detection we make the signal periodic in time at some <em>carrier</em> frequency <em>f<sub>c</sub></em> chosen to be several times higher than the required bandwidth. This is generally pretty easy to do, as we'll see, and doing so ensures that none of the signal we care about remains near DC. Our noise rejection filter now needs to be a narrow bandpass centered at <em>f<sub>c</sub></em>, so as to reject both low- and high-frequency noise. We'll also need some means of measuring the amplitude and phase of the AC signal. That's more complicated, of course, but with this setup we can narrow the bandwidth as much as we like and still get the full SNR improvement. A lock-in amplifier is a device for making such narrow-band AC measurements conveniently. It's basically a radio that measures the phase and amplitude of its input, so that we recover a lowpass-filtered version of the baseband modulation signal that we care about, with no 1/<em>f</em> noise pollution to worry about. At this point we need to geek out a little bit and talk about <em>modulation</em>, which is what we mean by moving the signal away from baseband.</p>
<p>An AC signal that passes through a narrowish filter can be looked at as a sine wave with some amplitude and phase: <em>g(t) = A</em> cos(2π <em>f t</em> + <em>φ</em>), where the signal information is contained in slowish variations of <em>A</em> and <em>φ</em>, the amplitude and phase (the modulation). This is familiar from broadcast radio: you can send music and speech program material over the air by encoding it as amplitude modulation (AM) or frequency modulation (FM). AM changes the heights of the peaks of the sinusoidal carrier wave in response to the audio signal (<em>A</em> varies), while FM changes the position of the peaks in time (<em>φ</em> varies). FM maps the baseband signal <em>s(t)</em> onto the instantaneous frequency, so d<em>φ</em>/d<em>t</em> is proportional to <em>s(t)</em>. In <em>phase modulation</em> (PM), which is less common in radios but more useful in measurements, the signal maps directly: <em>φ</em> is proportional to <em>s(t)</em>. The two are collectively known as <em>angle modulation</em>.</p>
<p>All types of modulation widen the carrier spectrum, forming <em>sidebands</em> above and below <em>f<sub>c</sub></em> that carry the signal information. It's generally preferable to talk about AM and PM, especially in discussions of noise, because in PM a flat baseband spectrum produces flat sidebands, whereas in FM it doesn't. That makes PM much easier to think about.</p>
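<p>A quick NumPy sketch makes the sideband picture concrete. The frequencies and modulation index here are purely illustrative choices, not numbers from any instrument:</p>

```python
import numpy as np

# An AM carrier: fc = 10 kHz modulated at fm = 500 Hz with index m = 0.5.
# (1 + m*cos(2*pi*fm*t)) * cos(2*pi*fc*t) expands to a carrier plus
# two sidebands of amplitude m/2 at fc - fm and fc + fm.
fs, fc, fm, m = 100_000, 10_000, 500, 0.5
t = np.arange(0, 1.0, 1 / fs)                 # 1 s record -> 1 Hz FFT bins
g = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spec = np.abs(np.fft.rfft(g)) / t.size        # one-sided magnitude / N
# spec peaks at 10 000 Hz (carrier, 0.5) and at 9 500 / 10 500 Hz
# (sidebands, 0.125 each); it is essentially zero elsewhere.
```

<p>Because the record is exactly one second, each FFT bin is 1 Hz wide and the carrier and sidebands land exactly on bins, so the three spectral lines stand out with no leakage.</p>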
<p>A carrier with both AM and PM can be written as <em>g</em>(<em>t</em>) = <em>A</em>(<em>t</em>) cos( 2<em>π</em> <em>f<sub>c</sub> t</em> + <em>φ</em>(<em>t</em>) ), where <em>A</em> and <em>φ</em> are slowly varying compared with <em>f<sub>c</sub></em>. From trigonometry, we know that</p>
<p>cos(<em>a+b</em>) = cos <em>a</em> cos <em>b</em> - sin <em>a</em> sin <em>b</em>, or in this case, cos( 2<em>π</em> <em>f<sub>c</sub> t</em> + <em>φ</em> ) = cos( 2<em>π</em> <em>f<sub>c</sub> t</em> ) cos <em>φ</em> - sin( 2π <em>f<sub>c</sub> t</em> ) sin <em>φ</em> (PM)</p>
<p>Thus by measuring the amplitudes of the sine and cosine components of the signal, we can recover its phase. Rearranging the same trigonometric identity shows us how to do this:</p>
<p>cos<em> a</em> cos <em>b</em> = ( cos(<em>a-b</em>) + cos(<em>a+b</em>) ) / 2 and</p>
<p>sin <em>a</em> sin <em>b</em> = ( cos(<em>a-b</em>) - cos(<em>a+b</em>) ) / 2.</p>
<p>Thus if we multiply our signal by <em>local oscillator</em> (LO) signals sin(2<em>π f<sub>c</sub> t</em>) and cos(2<em>π f<sub>c</sub> t</em>), we get</p>
<p><em>I = A</em> cos <em>φ</em> cos(2<em>π f<sub>c</sub> t</em>) cos(2<em>π f<sub>c</sub> t</em>) = (<em>A</em>/2) cos <em>φ</em> [cos(0) + cos(4<em>π f<sub>c</sub> t</em>)], which, with the constant factor of 1/2 absorbed into the gain, is <em>I = A</em> cos <em>φ</em> + (a signal near 2<em>f<sub>c</sub></em>), and</p>
<p><em>Q = A</em> sin <em>φ</em> sin(2<em>π f<sub>c</sub> t</em>) sin(2<em>π f<sub>c</sub> t</em>) = (<em>A</em>/2) sin <em>φ</em> [cos(0) - cos(4<em>π f<sub>c</sub> t</em>)], which, with the constant factor of 1/2 absorbed into the gain, is <em>Q = A</em> sin <em>φ</em> + (another signal near 2<em>f<sub>c</sub></em>).</p>
<p>Lowpass filtering gets rid of the 2<em>f<sub>c</sub></em> components of <em>I</em> and <em>Q</em> and rejects noise exactly as our narrow bandpass filter would, with the same tradeoff of bandwidth <em>vs.</em> measurement speed but without the excess low-frequency noise. Baseband signals <em>I</em> and <em>Q</em> are the so-called <em>in-phase</em> and <em>quadrature phase</em> signals. (You can think of "quadrature" as referring to the signal shifted a quarter cycle, though it actually comes from an old term for integration: sin <em>x</em> is the integral of cos <em>x</em>.) (The LO is the same signal we'll use to modulate the measurement (using <em>e.g.</em> an optical chopper or something more intelligent), so there's no problem there.)</p>
<p>Thus the procedure of multiplying by the sine and cosine phases of the carrier converts the modulated carrier into a pair of baseband signals containing both the amplitude and phase information. Because of the lowpass filtering, the exact waveform of the modulated wave (sine, square, or something else) doesn't matter much--only sinusoidal components sufficiently close to <em>f<sub>c</sub></em> contribute. This property of sines and cosines is called <em>orthogonality</em>. Very often only one of the two is of interest, usually <em>I</em>, but one can also recover <em>A</em> and <em>φ</em> easily:</p>
<p><em>A</em>= √( <em>I</em><sup> 2</sup> + <em>Q</em><sup> 2</sup> ) and <em>φ</em> = tan<sup>-1</sup>(<em>Q / I </em>).</p>
<p>(One has to worry about a few other things when computing <em>φ</em>, such as which quadrant it's in, whether you're dividing by zero, and whether it needs unwrapping to avoid ambiguities of multiples of 2<em>π</em>.) The multiplications also of course produce the cross terms, proportional to</p>
<p>cos(2<em>πf<sub>c</sub> t </em>) sin(2<em>πf<sub>c</sub> t </em>) = 1/2 sin(4<em>π f<sub>c</sub> t </em>),</p>
<p>but these have no baseband component and so get filtered out as well, showing that the sine and cosine components are orthogonal even though their frequencies are the same. </p>
<p>The sine and cosine LO signals can be derived from a reference frequency that you supply, or generated internally. Generally this reference is the same source used to generate the AC modulation of the measured signal, but it'll still work even if the two are different (the frequency error will show up as a ramp in <em>φ</em>(<em>t</em>), of course).</p>
<p>So that's the general principle of how lock-in amplifiers can improve our SNR by narrowing the measurement bandwidth while avoiding the low-frequency noise. In Part 3 we'll look at how that's done, in both analog and digital lock-in amplifiers.</p>
<!--
<p>A lock-in is basically a radio that measures the phase and amplitude of its input using two multipliers, one for I and one for Q, with the sine and cosine LO signals derived from a reference frequency that you supply. Generally this reference is the same source used to generate the AC modulation of the measured signal. There are two basic kinds of lock-ins: analog, where the multipliers and filters are physical circuits, and digital, where the signal is first digitized and the multipliers and narrow filters are done numerically by software or programmable logic. Either way, the orthogonality of sines and cosines is what makes lock-ins work. A fine but important point is that accurate digitization requires that the signal first pass through an analog filter to prevent high frequency junk from appearing at lower frequencies, a phenomenon called _aliasing_. This is familiar from moiré patterns in bridge railings and fences seen from the highway, or the tendency of stagecoach wheels in old Western movies to appear to rotate slowly backwards instead of quickly forwards. If the digitizer is sampling at f_s samples per second, the antialiasing filter has to reject frequencies above f_s/2, the so-called _Nyquist frequency_. (This requirement follows from the sampling theorem.) Real-world antialiasing filters are not infinitely sharp, so they have to start rolling off sooner than that. The maximum useful signal frequency is thus a bit below Nyquist, typically by 20%-30%. Lock-ins are of course amplifiers as well; the amplification is mostly done ahead of the multipliers, and is generally range-switched rather than continuously variable like a volume control. It's a lot easier to make the amplifier quiet and stable that way, and those things matter a great deal in a lock-in. Because the signal of interest is often very much smaller than the wideband noise, lock-ins have to have a lot of _dynamic reserve_. 
Dynamic reserve is the ratio of the maximum allowable (signal + noise) to the full-scale signal amplitude on a given range, and is often a factor of 100 to 10,000 (40 to 80 decibels). The smallness of the desired signal is why the amplifiers have to be so stable and quiet, and the multipliers and the digitizer as well. (Minor problems in the digitizer system become very objectionable for this reason--in my experience nobody gets their first digital lock-in design quite right, because they aren't paranoid enough about this.) This is more than compensated for by the massive increase in multiplication accuracy and stability afforded by computer arithmetic compared with analog multiplier chips. Thus if it's done properly, a digital lock-in is better than an analog one, other things being equal. Done badly, it can easily be much worse. Quantization Noise ~~~~~~~~~~~~~~ Digital lock-ins also exhibit quantization noise, which requires a bit of explanation. An M-bit digitizer measuring a voltage V produces an M-bit binary fraction F = V/V<sub>ref</sub>, where V<sub>ref</sub> is the reference voltage supplied to the digitizer. Digitizers come in various resolutions, usually between 10 and 24 bits. A plot of the output code vs. input voltage thus looks like a staircase, ideally a perfectly straight staircase with perfectly equal tread widths. The analog section of the digitizer contributes noise like any normal circuit, but in addition the digitizing operation introduces _quantization noise_, the inaccuracy inherent in converting a continuously-variable voltage into one of those 2<sup>M</sup> discrete steps. This is inherently a complicated thing to model, but we're saved by Widrow's theorem, which says that as long as the signal is at least a few steps in amplitude, the digitizing operation can be accurately modelled by a noiseless digitizer acting on a signal with added uniformly-distributed (white) noise of amplitude N = V<sub>ref</sub> 2<sup>-M</sup> / &radic;12. 
Mathematically the digital signal behaves just like a slightly noisier version of the analog one. Interestingly, the wideband noise provides an important benefit by exercising a much wider range of digitizer steps than the signal alone, effectively smoothing out minor irregularities in the staircase. (Pseudorandom noise is sometimes added in analog and subtracted again digitally to ensure that this happens in a known way, a procedure called _dithering_.) Sampling Rate ~~~~~~~~~~~~~ Because the antialiasing filter is not adjustable, the digitizer must be run at a sufficiently-fast f<sub>s</sub> that the out-of-band components and higher harmonics of the modulation are attenuated enough that they don't reduce the accuracy of the measurement. There is thus little advantage in adjusting f<sub>s</sub>, so it's generally fixed in a given instrument. That means that even near the upper signal frequency limit the digitizer samples more than twice per cycle of the input signal, and at lower frequency many more times than that. Digitizers, Averaging, and Widrow's Theorem ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lock-ins use adjustable lowpass filters on I and Q to allow the user to trade off noise rejection vs. measurement speed. These lowpass filters form an average of the sampled values of I and Q. Averaging P samples of the signal reduces the filter bandwidth to f_s/(2P) and reduces the noise amplitude by a factor of 1/sqrt(P), so using narrower filters will reduce noise but require a slower measurement. Traditionally, lock-ins have used 1- or 2-pole RC analog filters, which work very similarly to simple digital filters used for continuous averaging of sampled data; the stored average (analog or digital) undergoes a slow exponential decay with time and new information is added to replace it. In this way, choosing a 1-s time constant results in the output being a moving exponential average of the past 4 or 5 seconds' worth of data. 
The Stanford Research Systems SRS 850 Digital Lock-In Amplifier ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The SRS 850 works in the above fashion, with a few additional details. It uses an 18-bit digitizer, a fixed sampling frequency of 256 kHz, and an antialiasing filter cutoff of 108 kHz. It allows a maximum reference frequency of 102.4 kHz. Its digital filters can be adjusted for time constants from 10 us to 30 ks (8.33 hours). It forms the LO signals by computing sines and cosines digitally to an accuracy of 24 bits, the word size of its internal digital signal processor. This is the same relative precision as an IEEE-standard 32-bit floating-point number, which has a 24-bit significand (23 stored bits plus an implicit leading bit) and an 8-bit exponent. To do this, it must advance the numerical phase by 2 pi f_ref/f_s per sample. The reference frequency is continually measured and the numerical phase step and phase offset adjusted so that the positive-going zero-crossing of the cosine LO coincides with the positive-going zero-crossing of the reference. This is an example of a _digital phaselocked loop_ (DPLL).</p>
-->Technology: Low Noise Thermoelectric Cooler (TEC) Controllers2020-10-29T14:06:35+00:002022-01-20T13:30:56.872586+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/technology-low-noise-thermoelectric-cooler-tec-controllers/<h3>Thermoelectric (Peltier) Coolers</h3>
<p>A thermoelectric cooler is a solid-state device made from two alumina ceramic plates with an array of metallized pillars in between. The pillars are also ceramic--they're made of alternating <em>p</em>-type and <em>n</em>-type bismuth telluride (Bi<sub>2</sub>Te<sub>3</sub>) semiconductors, alloyed with antimony telluride (<em>p</em>-type) or bismuth selenide (<em>n</em>-type), and connected in series electrically. The Peltier effect makes them electric-powered solid-state heat pumps. (Thermocouples work the other way round, via the Seebeck effect, but the physics is the same.)</p>
<p>A recent <a href="https://doi.org/10.34133/2020/4361703">review paper</a> gives an interesting look at the state of the art and the many open questions in thermoelectric research. (You wouldn't think that solid-state beer fridges had such interesting physics inside, but they do.)</p>
<p><a href="https://commons.wikimedia.org/wiki/File:Peltierelement.png" title="Wikimedia Commons"><img alt="Peltier element (Wikimedia)" height="280" src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Peltierelement.png/1280px-Peltierelement.png" width="509"/></a></p>
<p>(Source: <a href="https://commons.wikimedia.org/wiki/File:Peltierelement.png">Wikimedia Commons</a>. Note that real TECs have an even number of pillars so that both wires attach to the hot side, avoiding a massive heat leak through the heavy copper wire.)</p>
<p>They're easy to use: apply a current in one direction and heat moves from the top to the bottom side; switch directions and heat flows the other way. There are some finer points, of course:</p>
<ul>
<li>They need current-source biasing, because their DC resistance is low and they produce a fairly large thermocouple (Seebeck) voltage proportional to the temperature difference between the hot and cold plates. That makes voltage biasing less stable.</li>
<li>They're not that efficient--due to resistive (<i>I<sup>2</sup>R</i>) heating and heat conduction along the pillars, the hot side puts out a lot more heat than the cold side takes in. (How much more depends on the temperature drop, but it's at least 3×.)</li>
<li>They're mechanically fragile, especially in the shear direction, so you have to keep the cold plate lightweight and apply a nice big compressive preload via nylon screws.</li>
<li>If you put an ordinary on/off thermostat on one, it will die very rapidly from thermal fatigue. Linear control is key.</li>
<li>Pulse-width modulation (PWM) will reduce performance, because it generates more <i>I<sup>2</sup>R</i> heating than analog linear control does. Ordinary Class-AB linear control is somewhat wasteful, because a lot of heat is dissipated in the driver amp, which we have to get rid of without warming up the box too much in the process.</li>
</ul>
<p> For instrument use, we have to keep in mind one much less well-known property of TECs:</p>
<ul>
<li>There's an astounding amount of capacitance between the TEC elements and the cold plate. </li>
</ul>
<p>I just put a piece of 1-inch self-adhesive copper tape on the cold side of a typical 30-mm Marlow TEC, and measured 67 pF from there to the wiring, about 10 pF/cm<sup>2</sup>. A PWM (Class D) driver would put a few volts at maybe 100 kHz across that, with probably 30-ns edges. The resulting charge injection spikes would be around <br/>(3 V / 30 ns) × 67 pF ≈ 7 mA,<br/>and maybe much worse. There are a lot of synchronous buck regulators out there whose switching edges are considerably faster than a nanosecond, which would put the spikes up near an amp.</p>
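<p>The back-of-envelope arithmetic is just <em>i</em> = <em>C</em> d<em>V</em>/d<em>t</em>, which a few lines of Python reproduce. The 0.3-ns edge in the second estimate is an illustrative stand-in for "considerably faster than a nanosecond":</p>

```python
# Charge-injection spike through the TEC's parasitic capacitance:
# i = C * dV/dt.  C, dV, and the 30 ns edge are the measured/quoted
# numbers from the text; the 0.3 ns edge is an illustrative assumption.
C = 67e-12         # measured cold-plate capacitance, farads
dV = 3.0           # PWM voltage swing, volts
t_edge = 30e-9     # switching edge time, seconds

i_spike = C * dV / t_edge      # ≈ 6.7 mA, i.e. "around 7 mA"
i_fast = C * dV / 0.3e-9       # ≈ 0.67 A for a subnanosecond edge
```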
<p>Fast spikes that large will reliably make a mess of an ultrasensitive measurement, and that puts us in a dilemma. PWM control is too noisy, and even the usual one-or-two-section LC filter probably won't be enough for a low-noise laser or photoreceiver. On the other hand, old-timey Class-AB linear control wastes power and generates even more heat. What to do? One of EOI's building-blocks is a <em>Low Noise </em><em>Class-H TEC Driver,</em> which gives the best of both worlds.</p>
<h3>Class-H Amplifiers</h3>
<p>A Class-H amplifier is a linear amp running off a fast-responding switching power supply. The supply voltage is maintained just high enough for the linear amp to work properly, maybe 0.1 V of headroom. If the TEC needs 3 A at 1.3 V, the supply runs at 1.4 V instead of probably 5 V for a pure linear controller. That cuts the power dissipation in the linear amp from (5 V − 1.3 V) × 3 A ≈ 11 W down to 300 mW, a saving of 97%. In addition, our design draws on decades of experience in low-noise analog electronics, and so is able to cut the remaining switching spikes down by over 100 dB, to levels that are hard to measure. Since the TEC itself is dissipating 4 W or so, this is an excellent tradeoff. (If you're interested in all this amplifier-class business, I recommend this <a href="https://circuitcellar.com/wp-content/uploads/2019/10/2013-12-015-Lacoste.pdf">Circuit Cellar article</a> by Robert Lacoste.) Our linear amps are class-AB, with a proprietary reactive-feedback topology that drops the headroom requirement to absolute rock bottom while maintaining very high spike rejection.</p>
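<p>The dissipation comparison is worth spelling out, since it's the whole point of Class-H. Using the numbers quoted above (a TEC drawing 3 A at 1.3 V):</p>

```python
# Dissipation in the linear pass element: P = (V_supply - V_load) * I.
# TEC operating point (3 A at 1.3 V) and rail voltages are the numbers
# quoted in the text.
I_tec = 3.0        # TEC current, amps
V_tec = 1.3        # TEC voltage, volts

P_linear = (5.0 - V_tec) * I_tec    # plain linear from a 5 V rail: ~11 W
P_class_h = (1.4 - V_tec) * I_tec   # tracking rail, 0.1 V headroom: ~0.3 W

saving = 1 - P_class_h / P_linear   # ~0.97, the 97% saving quoted above
```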
<p>The Low-Noise Class-H TEC driver comes in two versions, both of which supply very quiet, stable current-mode drive and millikelvin temperature stability. The simpler one is best for detection systems, which require cooling well below ambient temperature but don't need heating. Thus the driver's output works in one quadrant, with its output voltage and current both positive. It's simple, inexpensive, and takes up only 1 square inch of board space including the switching supply. (It's best if the switcher is on the other side of the ground plane--we actually use one of those subnanosecond switchers, the LMR23630, because it's small and efficient, and its noise doesn't hurt us.)</p>
<p>On the other hand, stabilized diode lasers generally work near room temperature, and so need both cooling and heating. Furthermore, lasers get turned off and on, and some are modulated, so that the thermal load on the cold plate is highly variable. This requires four-quadrant operation in general. Thus our more advanced TEC controller uses a symmetric current-conveyor topology that easily copes with whatever the laser is doing. The cost increase is only about 25%, and the board space about another half square inch, so either one can easily fit in a tight package with equally tight cooling and cost constraints. </p>
<p>Because of our engineering ethos and long experience in instrument design, we achieve this high performance and small size without using a lot of fancy parts, resulting in very low BOM cost. Any licensing cost is a small fraction of the money saved on the parts, so everybody wins.</p>
<p>We've used these building blocks in several products, from our ultraquiet laser driver to SEM cathodoluminescence detectors based on MPPCs [also called silicon photomultipliers (SiPMs)] and avalanche photodiode detectors (APD or SPAD) for biomedical systems. </p>
<p>Give us a call if you have an application we might be able to help with!</p>
<p></p>BEOS outtakes: Photographic Film2018-03-20T14:35:17+00:002022-02-02T18:09:42.922908+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/photographic-film/<p>From the cutting room floor at <em>Building Electro-Optical Systems, Third Edition</em>:</p>
<p><strong>Photographic Film</strong><br/> Okay, okay. Photographic film isn't a detector of the sort we've been discussing. Film is so out of fashion, so inconvenient. It needs messy chemicals. Getting it to be highly sensitive requires all sorts of 1960s alchemy such as pre-flashing and hypersensitizing in a forming gas or hot hydrogen atmosphere. Why do we care about it at all, in these days of 4k x 4k CMOS imagers?<br/><br/> There are two reasons. The first is that a telescope has a lot more than 4k x 4k resolvable spots. The Palomar Schmidt has a 6.6° square field. At 1 arcsecond resolution, its plates were digitized at 23040 pixels square for the Digital Palomar Observatory Sky Survey (DPOSS). That's 530 Mpel, which is a <em>lot</em> of imager chips, but just one photographic plate. The plate can be digitized later on a scanning microdensitometer that also has many more than 4k x 4k resolvable spots. Even an ordinary 35-mm camera produces images equivalent to 30 Mpel--and that's real pixels, not Marketing Megapixels (TM) (see Section 3.9.14). The defect density in photographic film is lower than in IC imagers, too, and it makes a nice archival record that is guaranteed to represent the measurement data well. </p>
<p>Photographic film has a power-law response over a huge range of signals. The <em>contrast exponent gamma</em> can be anywhere from 4 down to 0.5; low values compress the dynamic range and make bright and dim objects visible simultaneously. Using low-contrast developers such as <a href="https://iopscience.iop.org/article/10.1086/128407/pdf" target="_blank">POTA</a>, photographic film can record images whose dynamic range approaches 10<sup>6</sup>:1, optical, <em>e.g.</em> a bomb flash and its surroundings, which is a task beyond any silicon imaging sensor whatever.(1)<br/><br/>Film has two sorts of noise: <em>grain,</em> which is analogous to the digital nibblies from CCD pixels, and <em>fog</em>, which is analogous to dark current. Fog is due to a few grains being rendered developable by a few loose electron/hole pairs in the emulsion, and contributes random noise in the same way.</p>
<p><br/>The second reason to talk about film is that some <a href="https://doi.org/10.1038/47223">modern alchemy</a> has got photographic film up to a QE of 1.0, and a multiplication gain of 2, so that a single photon can expose a grain of silver halide. This is the quantum efficiency of the best CCDs, so there's no waste of photons any more. The trick is to add formate ions to the emulsion to scavenge all the excess holes without increasing the fog. Unlike other hypersensitizing tricks, this one works at room temperature and is stable indefinitely.</p>
<p>(1) POTA was invented by Marilyn Levy of the Army's Photo-Optical Technical Area at Ft. Monmouth NJ (hence the name). She was working on improving aerial reconnaissance photography, where there's often lots of light but very deep shadows.</p>
<p>Next: The Hurter-Driffield Curve</p>Thermal Runaway Found Useful2018-02-24T20:51:39+00:002022-01-20T12:06:28.957916+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/thermal-runaway-found-useful/<p><a href="https://electrooptical.net/static/oldsite/www/sed/TemperatureBalancer.png">This odd circuit </a> is an <em>on-chip temperature balancer</em> that uses thermal runaway to force N transistor arrays to all run at the same temperature. BJT dissipation goes up at low temperature, with very high gain. Here's its <a href="https://electrooptical.net/static/oldsite/www/sed/TemperatureBalancerSteppResp100usPerDIv.tif">step response.</a><br/><br/></p>