MATRIX FAQs
DC POWER SUPPLY
1.What is a DC power supply?

A DC power supply is a device that converts mains alternating current (AC) into stable direct current (DC) output. Depending on specific applications, it can be of the regulated type (maintaining a constant output voltage) or unregulated type.
2.What is an adjustable power supply?
An adjustable power supply allows users to adjust the output voltage and/or current within a specified range, suitable for applications requiring different power levels during testing or prototyping.
3.What is a programmable DC power supply?
A programmable DC power supply enables users to remotely set and control voltage, current, and other parameters via software or interfaces such as USB, GPIB, or Ethernet, making it ideal for automated testing applications.
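As a rough illustration of such remote control, the sketch below uses Python with PyVISA to talk to a generic SCPI-style programmable supply over LAN. The VISA address and command names (VOLT, CURR, OUTP, MEAS:VOLT?) are assumptions based on common SCPI usage; the exact syntax for a given model should be taken from its programming manual.

```python
# A minimal remote-control sketch, assuming a SCPI-compatible supply reachable
# over LAN via PyVISA. The VISA address and command names are examples only;
# consult the instrument's programming manual for the exact syntax.
import pyvisa

rm = pyvisa.ResourceManager()
# Hypothetical address; use rm.list_resources() to find the real one.
psu = rm.open_resource("TCPIP0::192.168.1.100::INSTR")

print(psu.query("*IDN?"))        # identify the instrument
psu.write("VOLT 12.0")           # set the output voltage to 12 V (assumed command)
psu.write("CURR 1.5")            # set the current limit to 1.5 A (assumed command)
psu.write("OUTP ON")             # enable the output
print(psu.query("MEAS:VOLT?"))   # read back the measured output voltage
psu.write("OUTP OFF")
psu.close()
```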
4.What is a linear power supply?
A linear power supply converts AC voltage to a lower DC voltage using a transformer, then regulates it with a linear voltage regulator. It offers stable output with low noise but has lower efficiency, primarily due to heat dissipation.
5.What is a bidirectional power supply?
A bidirectional power supply can both supply power to a Device Under Test (DUT) and absorb power from it. This feature is particularly important in applications such as battery testing, renewable energy systems, and development of electric vehicle (EV) powertrains.
6.What is a switching power supply?
A switching power supply efficiently converts AC or DC power using high-frequency switching regulators. Compared to linear power supplies, they are smaller, lighter, and more energy-efficient but may generate electromagnetic noise.
7.Can a DC power supply simulate a battery?
Some programmable DC power supplies are specifically designed as battery simulators. By setting parameters such as internal resistance, voltage drop, and discharge characteristics, these power supplies can simulate battery behavior. This function is especially useful for testing battery-powered devices under different charge-discharge conditions, ensuring accurate and repeatable test results without using actual batteries.
8.Which industries use DC power supplies?
DC power supplies are widely used in various industries, including:
Electronics and semiconductor testing: For circuit verification, component characteristic analysis, and fault diagnosis.
Automotive and electric vehicles (EVs): Critical for battery testing, powertrain validation, and inverter testing.
Aerospace and defense: For high-precision testing of avionics, radar, and satellite systems.
Medical equipment: Supporting the development and testing of implantable and portable medical electronics.
Renewable energy: Essential for testing solar panels, fuel cells, and energy storage systems.
9.What is the difference between a DC power supply and an AC power supply?
A DC power supply outputs direct current (DC), providing stable voltage for testing and powering electronic devices, circuit boards, and components. An AC power supply outputs alternating current (AC), simulating electricity from the grid or generators. AC power supplies are typically used to test products powered by mains electricity, such as household appliances and industrial equipment.
10.Why are ripple and noise in a DC power supply important?
Ripple and noise refer to undesirable fluctuations in the output voltage of a DC power supply. Low ripple and noise are particularly critical for applications requiring clean, stable power, such as precision analog circuits, RF testing, and medical instruments. Excessive noise may interfere with sensitive components, leading to inaccurate test results, degraded signal quality, or abnormal device operation. Choosing a power supply with low ripple and noise ensures reliable performance in critical applications.
11.What key parameters should be considered when selecting a DC power supply?
When selecting a DC power supply, the following five parameters are critical: (1) dynamic response, (2) parallel or series operation capability, (3) load regulation, (4) line regulation, (5) ripple.
12.What is the function of remote sense in a power supply?
The remote sense function of a power supply compensates for the voltage drop in the test leads so that the voltage at the load matches the set value more accurately; the compensation capability is typically around 2V.
13.What causes noise, and how to reduce or eliminate noise from the power supply to the DUT?
Noise is a high-frequency pulse train generated by the power supply itself, produced by the sharp edges that occur as the switching devices turn on and off. It can be reduced by adding a ferrite bead or by improving the grounding arrangement.
14.If an ohmmeter is unavailable, is there a simple way to determine if a power supply has voltage output?
Short-circuit the positive and negative output terminals of the power supply, set a valid current value, and turn on the output. If the display shows an actual output voltage of 0V and a current equal to the set value, the power supply is producing output.
15.How to achieve negative voltage supply using a DC power supply?
(1) In one case, reverse-connect the positive and negative terminals of the power supply to the DUT.
(2) In another case, short-circuit the negative terminal of the first channel and the positive terminal of the second channel, then connect them to the COM terminal of the DUT input.
16.What is dual-frequency?
A dual-frequency switching power supply uses different switching frequencies in working mode and standby mode, reducing standby losses and saving energy.
17.What do the five-fold protections refer to?
Overvoltage protection, overcurrent protection, overtemperature protection, overload protection, and short-circuit protection.
18.What are the differences between switching power supplies and linear power supplies?
① A switching power supply chops DC into high-frequency pulses, stores the energy in inductors and capacitors, and releases it as required, using the characteristics of these components to regulate the output voltage or current. A linear power supply has no high-frequency pulses or energy-storage components; it uses the linear region of its pass element and immediate feedback to keep the voltage and current stable when the load changes.
② Switching power supplies can step down or step up voltage; linear power supplies can only step down voltage.
③ Switching power supplies have high efficiency; linear power supplies have low efficiency.
④ Linear power supplies have fast control speed and low ripple; switching power supplies have higher ripple.
19.What serial port options are available?
RS232, RS485, and USB serial ports are available; 110V/220V input switching is also offered as an option. Communication protocols are provided for the optional serial ports.
20.Can the input voltage be switched?
110V/220V switching is achievable but requires additional installation.
21.Which button is the one-key lock, and what function does it lock?
The 【Lock】 key locks the keyboard input function to prevent accidental parameter changes.
22.Can 100 groups of voltage and current be stored/recalled?
Some models can set 0–99, totaling 100 groups of quick parameters: voltage, current, and timing duration.
23.Why is the product described as high-precision?
The common precision in the market is 10mV, while ours is 1mV, offering higher precision.
24.Why is the current fluctuating significantly?
It is likely that the load resistance is changing while the power supply voltage remains stable. To rule out issues with the power supply, ensure the connected load resistance is constant. Short-circuit the positive and negative output terminals with output clips—if the current stabilizes, the power supply is functioning normally.
25.If the user sets the voltage to 24V, what is an appropriate overvoltage protection setting?
It is recommended to set it 2–3V higher than the normal operating voltage. This ensures no interference with normal use while providing a certain level of protection.
26.When controlling the power supply via a PC, can the PC control it with arbitrary, free-form commands?
No. Controlling the rack power supply requires sending commands in accordance with its communication protocol.
27.How does a DC power supply switch between constant current and constant voltage modes?
It switches automatically. Based on the preset current limit, the supply enters constant current mode when the load current reaches the set value, and remains in constant voltage mode as long as it does not.
28.There is voltage but no current output?
If voltage is displayed but no current, it may be due to a poorly connected load or excessively large load resistance, resulting in an actual output current too small to be detected. Therefore, further check if the output port has normal voltage output. If normal, perform a short-circuit test to verify if the current functions properly.
29.Can our linear power supply set a current ramp-up?
For example, a multi-segment program: ramp up to 3A in 10 minutes, hold 3A for 60 seconds, then ramp up to 10A in 4 minutes, and hold 10A for 10 seconds.
This can be achieved through automated control via LIST programming.
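Native LIST programming syntax differs from model to model, so as an illustration only, the sketch below approximates the multi-segment current profile described above by stepping the current setpoint from a PC over a SCPI connection; the VISA address and the CURR/OUTP commands are assumptions to be checked against the instrument's manual.

```python
# Illustrative only: approximates the multi-segment current profile above by
# stepping the current setpoint from a PC. Native LIST programming syntax is
# model-specific; the VISA address and CURR/OUTP commands are assumptions.
import time
import pyvisa

SEGMENTS = [            # (target current in A, ramp time in s, hold time in s)
    (3.0, 10 * 60, 60),
    (10.0, 4 * 60, 10),
]
STEP_S = 1.0            # update the setpoint once per second

rm = pyvisa.ResourceManager()
psu = rm.open_resource("TCPIP0::192.168.1.100::INSTR")  # hypothetical address
psu.write("OUTP ON")

current = 0.0
for target, ramp_s, hold_s in SEGMENTS:
    steps = int(ramp_s / STEP_S)
    delta = (target - current) / steps
    for _ in range(steps):              # linear ramp to the target current
        current += delta
        psu.write(f"CURR {current:.3f}")
        time.sleep(STEP_S)
    time.sleep(hold_s)                  # hold at the target current

psu.write("OUTP OFF")
psu.close()
```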
30.What is the approximate voltage rise and fall time when using external analog control?
The voltage rise time is approximately 20ms. It can be measured by enabling the output via external analog control, setting the voltage to 0V, then changing the voltage setting to 12V—the 0–12V ramp-up time is approximately 20ms. The voltage fall time is approximately 300ms (no-load condition), measured by setting the output voltage to 12V, enabling the output, then changing the voltage setting to 0V and capturing the fall time.
31.What is the meaning of voltage and current indicators for load regulation?
The voltage figure is the change in output voltage between no-load and full-load conditions; the current figure is the difference between the full-load current and the current after overload current limiting takes effect.
32.When choosing a communication interface, which has better stability: LAN or GPIB?
The most stable interface is LAN. LAN ports feature fault tolerance, anti-interference capability, and high transmission rates. LAN uses two protocols, TCP and UDP, with TCP providing the advantages described here. The lower LAN layers (network and data link) include handshake and verification mechanisms. During data transmission from a PC, a checksum is calculated and sent with the data; the receiving end recalculates the checksum, and if the two match, the transmission is deemed error-free. If an error is detected (e.g., garbled data, incorrect values, or an interrupted transmission), the lower layers trigger automatic retransmission. Thus, LAN has better fault tolerance and anti-interference performance than other interfaces. GPIB lacks checksum and handshake mechanisms, so interference may result in transmission of stale data or abnormal values (as reported by some users). However, with genuine GPIB cables, such issues are theoretically rare.
33.What test equipment do R&D users of switching power supplies need to evaluate and measure the performance of their products and components?
During R&D debugging and testing, switching power supplies focus on electrical performance metrics such as line regulation, load regulation, input inrush current, efficiency testing, output ripple testing, rated parameters, dynamic characteristic testing, input limit testing, and overvoltage/overcurrent protection. Common equipment includes AC/DC power supplies, electronic loads, power analyzers, multimeters, and oscilloscopes. During selection, high-performance, high-precision power supplies/electronic loads are recommended.
34.What is leakage current in a switching power supply? How to reduce it?
Leakage current in a switching power supply refers to unintended current flow from the input terminal to ground or other reference points, caused by incomplete insulation of internal components.
Leakage current is typically a small current induced by capacitive coupling or poor insulation, which may pose potential risks to human safety—especially when touching conductive parts or in cases of improper grounding. Therefore, reducing leakage current is crucial for ensuring power supply safety.
Methods to reduce leakage current:
Improve insulation materials and design: Optimize insulation materials and structural design to ensure excellent insulation performance, minimizing leakage current.
Use high-quality components: Select high-grade electronic components (e.g., capacitors, insulation materials) to reduce potential sources of leakage.
Conduct rigorous insulation testing: Perform strict insulation and withstand voltage tests during production to ensure reliable insulation under normal operating conditions.
Optimize grounding and shielding: Design rational grounding and shielding structures to reduce current coupling between the power supply and ground.
Compliance with standards: Ensure the switching power supply meets relevant safety standards and certifications, such as those established by the International Electrotechnical Commission (IEC).
Note that reducing leakage current is an important aspect of power supply safety but not the sole consideration. Other factors like overload protection, overtemperature protection, and overvoltage protection must also be comprehensively addressed to ensure the safe and reliable operation of the switching power supply.
35.What is line regulation of a switching power supply?
- Line regulation of a switching power supply refers to the stability of its output voltage when the input voltage varies.
When the input voltage fluctuates (e.g., due to grid voltage variations), the switching power supply adjusts its output to maintain stability. Line regulation quantifies the degree of output voltage change in response to input voltage variations, usually expressed as a percentage, calculated as follows:
Line regulation = [(Vo_max – Vo_min) / Vo_nom] × 100%
Where:
Vo_max = the maximum output voltage observed over the specified input voltage range;
Vo_min = the minimum output voltage observed over the specified input voltage range;
Vo_nom = rated output voltage of the switching power supply.
For example, if a switching power supply has a rated output voltage of 12V, an output voltage of 11.8V at maximum input voltage, and 12.2V at minimum input voltage, its line regulation is:
Line regulation = (12.2V – 11.8V) / 12V × 100% = 3.33%
A smaller line regulation indicates better output voltage stability when the input voltage changes, helping connected devices or circuits operate normally under varying input conditions.
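As a quick check of the arithmetic, the snippet below evaluates the formula above with the worked example; the same calculation applies to load regulation (question 42), using the maximum and minimum output voltages over the load range.

```python
# Line regulation from the worked example above; the same formula applies to
# load regulation (question 42), using max/min output over the load range.
def regulation_percent(v_out_max, v_out_min, v_out_nominal):
    """Regulation as a percentage of the nominal output voltage."""
    return (v_out_max - v_out_min) / v_out_nominal * 100.0

# 12 V nominal output; 12.2 V at minimum input, 11.8 V at maximum input
print(f"Line regulation = {regulation_percent(12.2, 11.8, 12.0):.2f}%")  # 3.33%

# 12 V nominal output; 12.1 V at minimum load, 11.9 V at maximum load
print(f"Load regulation = {regulation_percent(12.1, 11.9, 12.0):.2f}%")  # 1.67%
```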
36.What is ripple noise in a switching power supply? What factors affect it?
Ripple noise in a switching power supply refers to high-frequency variations or fluctuations in the output voltage or current. It typically exists as an AC signal superimposed on the DC output and can be observed and measured using tools like oscilloscopes. Ripple noise may interfere with certain sensitive electronic devices and systems, so it needs to be controlled within specific limits in some applications.
Factors influencing ripple noise in switching power supplies:
Switching frequency: Ripple noise is related to the switching frequency—the rate at which switching devices in the power supply toggle. Generally, higher switching frequencies lead to increased high-frequency components in the ripple noise.
Output filtering: Switching power supplies usually employ filter circuits to reduce output ripple noise. These circuits use components like capacitors and inductors to filter out high-frequency components, and their design and performance directly affect the ripple noise level.
Load current variations: Ripple noise is also influenced by changes in load current. Large or sudden changes in load current can cause an increase in output voltage or current ripple noise.
Switching devices and circuit design: Ripple noise characteristics vary with the selection of switching devices and circuit topology. Choosing appropriate switching devices and circuit structures during design can help reduce ripple noise.
In summary, ripple noise in switching power supplies refers to high-frequency variations in output voltage or current, affected by factors such as switching frequency, output filtering, load current changes, and the selection of switching devices and circuit design.
37.How to reduce harmonic currents in a switching power supply?
To reduce harmonic currents in a switching power supply, the following methods can be adopted:
- Filters: Using appropriate filters can reduce harmonic currents at the output. Common filters include LC and LCL filters, which attenuate high-frequency harmonic components, thereby reducing the amplitude of harmonic currents.
- Resonant circuits: Adding resonant circuits at the output can effectively reduce harmonic currents by transferring them to other frequencies through resonance.
- Modulation techniques: Employing suitable modulation techniques (e.g., PWM—Pulse Width Modulation) can adjust the conduction time of switching devices, reducing the amplitude of harmonic currents.
- Optimized design: Optimizing circuit topology and component parameters (e.g., rational selection of inductors, capacitors, and transformers) during design can minimize harmonic current generation.
- Compliance with standards: Adhering to relevant electrical safety and EMC (Electromagnetic Compatibility) standards is also important, as these standards specify limits for power supply harmonic currents to ensure they do not exceed acceptable levels.
Note that reducing harmonic currents is a complex engineering task requiring comprehensive consideration of circuit design, filtering techniques, modulation methods, and standard compliance. In practice, these methods should be combined based on specific scenarios, and guidance from professional power supply design engineers or experts is advisable.
38.What do PWM and PFM mean in switching power supplies? What are their respective advantages and disadvantages?
PWM stands for Pulse Width Modulation. It regulates the average output power by varying the pulse width of the signal while keeping the period constant. In PWM, the cycle remains unchanged, but the pulse width adjusts according to the amplitude of the input signal. This modulation method is widely used in power electronics, motor drives, and other fields.
PFM stands for Pulse Frequency Modulation. It adjusts the average output power by changing the pulse frequency while keeping the pulse width constant. The frequency varies with the amplitude of the input signal, making PFM suitable for power management systems and low-power applications.
Advantages:
PWM:
- High precision: Enables accurate output control with high resolution through pulse width adjustment.
- Lower noise: High-frequency pulse output helps reduce noise.
- Wide applicability: Well-suited for power electronics (e.g., motor drives, inverters) with strong adaptability.
PFM:
- Low quiescent power consumption: Adapts pulse frequency to load demands, achieving low quiescent current—ideal for low-power applications.
- Strong anti-interference capability: Sensitive to input signal changes, allowing rapid frequency adjustment to adapt to environmental variations.
Disadvantages:
PWM:
- Complex output filtering: High-frequency pulses require filtering to obtain a smooth output.
- Electromagnetic interference: High-frequency pulses may cause EMI issues, requiring suppression measures.
PFM:
- Low efficiency at light loads: Wide frequency adjustment ranges under light loads can lead to reduced efficiency.
- Large output ripple: Frequency variations may result in significant output ripple, requiring additional filtering.
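To make the PWM description above concrete, the sketch below shows the basic relationship PWM relies on (average output ≈ duty cycle × input voltage) for a fixed switching period; the 24V input and 100kHz period are illustrative values only, not tied to any particular controller.

```python
# Illustrative sketch of the relationship PWM relies on: with a fixed period,
# the average output tracks the duty cycle (pulse width / period).
def pwm_average_voltage(v_in, pulse_width_us, period_us):
    duty_cycle = pulse_width_us / period_us
    return duty_cycle * v_in

V_IN = 24.0        # illustrative input voltage (V)
PERIOD_US = 10.0   # fixed switching period, i.e. 100 kHz

for width_us in (2.5, 5.0, 7.5):            # PWM varies the pulse width
    v_avg = pwm_average_voltage(V_IN, width_us, PERIOD_US)
    print(f"duty {width_us / PERIOD_US:.0%} -> average output {v_avg:.1f} V")

# PFM, by contrast, keeps the pulse width fixed and varies the period
# (frequency) to change the delivered average power.
```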
39.What factors affect the EMC of a switching power supply?
Switching frequency: The operating frequency affects the level of radiated and conducted electromagnetic interference. Higher frequencies typically lead to higher radiation peak frequencies, demanding stricter EMC design.
Power supply design: Overall design and circuit layout impact EMC performance. Good design considers grounding layout, signal shielding, and filter usage to reduce interference.
Input and output filtering: Input and output filters play a key role. Input filters suppress high-frequency noise on power lines, while output filters reduce noise on output lines.
Grounding design: Proper grounding is critical for EMC, providing effective common-mode and differential-mode noise suppression, reducing ground noise, and enhancing anti-interference capability.
Component selection: Choosing appropriate components (e.g., low-noise devices, shielding materials, and high-frequency oscillation suppressors) affects EMC performance.
PCB layout: Rational PCB (Printed Circuit Board) layout reduces interference between signal loops, avoiding loop and antenna effects.
Environmental interference: Other electronic devices and electromagnetic fields in the environment may interfere with the switching power supply (e.g., nearby wireless communication devices or electromagnetic radiation sources).
EMC testing and certification: Conducting EMC tests and obtaining certifications (in line with international standards) ensures compliance in specific electromagnetic environments.
These factors are interrelated, and controlling them is crucial for ensuring good EMC performance, reducing interference to surrounding equipment, and enhancing the power supply’s own anti-interference capability.
40.What factors are related to the efficiency of a switching power supply?
The efficiency of a switching power supply is influenced by several key factors:
- Efficiency of switching devices: Main switching devices (e.g., MOSFETs, diodes) introduce switching losses during conduction and turn-off. Selecting high-efficiency devices improves overall efficiency.
- Switching frequency: Determines switching speed and losses. Higher frequencies reduce switching time and losses, improving efficiency.
- Efficiency of control circuits and feedback loops: Efficient control and feedback circuits minimize energy loss during monitoring and regulation, enhancing overall efficiency.
- Output load: Efficiency may decrease under large load variations or light loads. Proper load matching and design improve efficiency.
- Conversion losses: Include switching losses of devices, magnetic losses of inductors, and losses in filter capacitors. Optimizing these reduces conversion losses.
- Heat dissipation: Poor heat dissipation or high ambient temperatures increase losses. Good thermal design maintains lower operating temperatures, improving efficiency.
- Input-output voltage difference: Larger differences increase losses in switching devices and conversion components, reducing efficiency.
To improve efficiency, measures such as selecting high-efficiency devices, optimizing control/feedback loops and thermal design, and matching appropriate loads should be taken, considering specific requirements and design conditions.
41.What are the possible reasons for low output voltage of a switching power supply?
- Low input voltage: The output voltage of a switching power supply typically depends on the input voltage. If the input voltage is too low, the output voltage will decrease accordingly. This may be caused by input power supply faults, grid voltage fluctuations, or excessive power line impedance.
- Overload: The output voltage is affected by the load. If the load exceeds the rated capacity of the switching power supply, the power supply may fail to provide sufficient current, resulting in a voltage drop.
- Control circuit failure: The control circuit, which regulates switching devices, can cause abnormal operation of switches (e.g., failure to turn on/off properly) if faulty, affecting output voltage stability and accuracy.
- Feedback loop issues: Switching power supplies use feedback loops to maintain stable output voltage. Problems such as excessive deviation, damaged feedback components, or abnormal regulation circuits in the feedback loop can lead to low output voltage.
- Faulty switching components: Aging, damage, or overload of switching devices (e.g., transistors, diodes) can impair output voltage stability and level.
- High temperature: Poor heat dissipation or high ambient temperatures can increase internal temperatures, degrading component performance and stability, thereby reducing output voltage.
These are common causes of low output voltage. For troubleshooting, inspect these aspects, perform repairs, or consult professionals.
42.What is load regulation of a switching power supply?
Load regulation of a switching power supply refers to the stability of its output voltage when the load changes.
When the load connected to the power supply varies, the switching power supply adjusts its output to maintain stability. Load regulation quantifies the degree of output voltage change with load variations, expressed as a percentage:
Load regulation = [(Vo_max – Vo_min) / Vo_nom] × 100%
Where:
Vo_max = the maximum output voltage observed over the specified load range;
Vo_min = the minimum output voltage observed over the specified load range;
Vo_nom = rated output voltage of the switching power supply.
For example, if a switching power supply has a rated output voltage of 12V, an output of 11.9V at maximum load, and 12.1V at minimum load, its load regulation is:
Load regulation = (12.1V – 11.9V) / 12V × 100% = 1.67%
A smaller load regulation indicates better output voltage stability during load changes, ensuring reliable operation of connected devices or circuits under varying load conditions.
43.How does altitude affect switching power supplies?
At high altitudes, thin air and reduced oxygen levels can alter the operating conditions of electronic components. To ensure reliability and safety, increasing the safety distance between components in switching power supplies may be necessary.
Safety distance refers to insulation gaps or electrical clearances on electronic components, designed to prevent arcing or breakdown between components under normal or abnormal conditions (e.g., overvoltage, overcurrent). These distances are determined by equipment design requirements, operating voltage, and environmental conditions.
In high-altitude environments, thin air reduces insulation performance. Increasing safety distance helps reduce electric field strength between components, lowering the risk of breakdown.
Additionally, high altitudes may cause elevated temperatures or inadequate heat dissipation. High temperatures can stress insulation materials, reducing their performance. Thus, increasing safety distance also improves heat dissipation, helping maintain components within acceptable temperature ranges.
Note that increasing safety distance is not universally applicable. Specific safety distances must be evaluated based on equipment requirements and relevant standards. For equipment used at high altitudes, refer to design guidelines and specifications to ensure safety and reliability.
44.What is the dynamic load characteristic of a switching power supply? What factors is it related to?
The dynamic load characteristic of a switching power supply describes how its output voltage and current change under varying load conditions and its ability to respond to load changes, reflecting stability and performance during load variations.
It is closely related to the following factors:
- Load change rate: A key factor in dynamic performance. When the load changes suddenly, the power supply must quickly adjust output voltage and current to meet new requirements, involving response speed and stability under rapid load changes.
- Output voltage adjustment time: The time required for the output voltage to stabilize after a load change. Shorter adjustment times indicate faster adaptation to load changes, ensuring stable output.
- Load capacity and range: Related to the designed maximum load power (load capacity) and the range of loads within which the power supply operates stably (load range). The power supply should maintain stable output within these parameters.
- Feedback control mechanism: The design and performance of feedback loops (which monitor output voltage and adjust accordingly) directly affect the power supply’s response capability and stability during load changes.
In summary, dynamic load characteristics encompass the power supply’s ability to adjust output voltage/current during load changes, the adjustment time, and adaptability to load capacity/range, influenced by load change rate, adjustment time, load parameters, and feedback control mechanisms.
45.What are harmonics in a switching power supply?
Harmonics in a switching power supply refer to non-fundamental frequency components present in the output current or voltage. Switching power supplies operate by converting input voltage into high-frequency pulse signals using high-frequency switching devices (e.g., MOSFETs), which are then filtered and regulated to obtain the desired output. The switching operation of these devices introduces high-frequency harmonic components in the output current or voltage.
Harmonics are typically expressed as multiples of the fundamental frequency, such as the 2nd harmonic (2×fundamental frequency), 3rd harmonic (3×fundamental frequency), and 4th harmonic (4×fundamental frequency). These harmonics distort current or voltage waveforms and may interfere with other electrical equipment and power systems.
Harmonic currents arise from the nonlinear characteristics of switching devices and the frequency response of circuit components. To meet electrical safety and electromagnetic compatibility (EMC) requirements, harmonic currents must be limited through filter design, modulation techniques, and optimized circuit design.
46.What are the common topologies of switching power supplies?
Common topologies of switching power supplies include:
- Buck Converter (step-down topology): A basic topology that reduces input voltage to a lower output via switch operation, suitable for applications requiring output voltage lower than input.
- Boost Converter (step-up topology): Increases input voltage to a higher output via switch operation, suitable for applications requiring output voltage higher than input.
- Buck-Boost Converter (step-up/step-down topology): Can either increase or decrease input voltage, offering a wide output range, with conversion controlled by switch operation.
- Flyback Converter: An isolated topology using a transformer for input-output isolation, with voltage conversion via switch control, suitable for applications requiring isolation.
- Cuk Converter: Can step voltage up or down via alternating charging/discharging of an inductor, providing a wide output range.
- Forward Converter: An isolated topology similar to the flyback converter but with output current transferred through a forward transformer winding.
- Full-Bridge Converter: Uses a four-switch bridge structure for voltage conversion, capable of high power output and high conversion ratios.
- Half-Bridge Converter: Similar to the full-bridge topology but with two switches (typically one high-side and one low-side), offering high efficiency and compact size for medium-power applications.
- Push-Pull Converter: Uses two coupled windings and periodic switching for voltage conversion, providing high power conversion capability.
- Series Resonant Converter: Utilizes resonant circuit characteristics for voltage conversion, operating at high frequencies with high efficiency and small size.
Each topology has unique working principles, features, and applications. Selection depends on input-output voltage range, power requirements, efficiency needs, and design constraints.
47.What does EMC mean in switching power supplies?
EMC stands for Electromagnetic Compatibility, referring to the ability of electronic devices, systems, or components to coexist and operate normally in a shared electromagnetic environment without causing or being affected by harmful interference.
In modern society, we are surrounded by electronic devices (e.g., computers, phones, TVs, wireless communication systems) that emit electromagnetic radiation and are sensitive to their environment. Electromagnetic interactions between devices may cause interference, impairing performance, reliability, and safety.
The goal of EMC in switching power supplies is to ensure interoperability and compatibility between the power supply and the system in an electromagnetic environment, minimizing interference and failures. It involves two aspects:
EMI (Electromagnetic Interference): Electromagnetic radiation emitted by the device during operation, which may interfere with other equipment. Controlling EMI involves circuit design, layout, and shielding to reduce radiated and conducted interference.
EMS (Electromagnetic Susceptibility): The device’s sensitivity to external electromagnetic fields, which may cause errors or performance degradation. Enhancing EMS involves anti-interference design, electromagnetic shielding, filtering, and grounding.
EMC implementation requires compliance with international standards (e.g., IEC 61000 series) to ensure compatibility in electromagnetic environments. By addressing EMC, interference is reduced, reliability and interoperability are improved, and normal system operation is ensured.
48.Why does the accuracy of my instrument not match the specifications when I turn it on after receiving it?
Ensure the instrument has been warmed up for at least 30 minutes and is operating within a temperature range of 20–30°C. These are necessary conditions for the instrument to maintain stability and meet specifications.
49.What is Ramp Time?
Ramp Time refers to the time required for the test voltage to gradually increase from 0V to the rated test voltage. It is typically used in DC withstand voltage testing of capacitive loads to avoid excessive instantaneous charging current caused by sudden voltage application to capacitive DUTs, which could lead to measurement misjudgment.
50.What are the differences between switching power supplies and linear power supplies? How to choose?
Switching power supplies generally have higher power density, providing greater power in the same volume, with the characteristics of small size and light weight.
Linear power supplies offer better ripple and noise performance and fast recovery characteristics. However, compared to switching power supplies of the same power, they are larger and heavier.
51.Precautions for DC power supply ripple testing
When testing the output ripple of a DC power supply, a linear (non-switching) load must be connected. Oscilloscope settings should follow the ripple-testing requirements, typically: AC coupling, a 20MHz bandwidth limit, and a 1:1 probe.
52.Explanation of DC power supply operating modes (CV/CC)
One is the Constant Voltage (CV) mode, operating according to the characteristics of a constant voltage power supply.
The other is the Constant Current (CC) mode, operating according to the characteristics of a constant current power supply.
The operating mode of a DC power supply depends on three parameters: the set voltage, set current, and load resistance.
53.What is the difference between line regulation and load regulation?
Line regulation refers to the influence of input voltage on output voltage, while load regulation refers to the influence of load changes on output voltage.
54.What is the difference between CC mode and a constant current source?
CC mode (Constant Current mode) is a fixed current mode. When the loop current exceeds the set value, the power supply enters CC mode, limiting the current to the set value while the voltage may drop. A constant current source is designed to output a stable current as its core goal, with higher current stability accuracy and a wider load adaptation range (some constant current sources can maintain constant current within a range of 0Ω to several thousand ohms).
55.Precautions for using the SENSE function?
The sense terminal must not be left floating; it must be short-circuited with the source terminal at the load end.
56.Explanation and application precautions of four-wire output for power supplies?
When the current exceeds 5A, the sense function is generally available. The main role of sense is to compensate for voltage drops caused by line losses. It should be noted that during use, the sense terminal must be short-circuited with the source terminal at the load end and must not be left floating.
57.Output modes of DC power supplies: Constant Voltage (CV) mode and Constant Current (CC) mode
Most DC power supplies provide a set voltage output, referred to as Constant Voltage (CV) mode, which is more accurately described as “setting a fixed output voltage”. In this mode, the current of the power supply varies with the load.
However, this varying current cannot exceed the maximum designed current of the power supply (determined by application and cost design goals) or the maximum set current (to protect the circuit under test). Thus, once the load current exceeds either of these values, the power supply automatically switches from CV mode to Constant Current (CC) mode.
Take a DC power supply with a maximum designed voltage/current of 30V/3A as an example: When the load does not exceed 3A, the power supply operates in CV mode, with the set output voltage ranging from 0 to 30V (based on the minimum setting resolution). For instance, with an output of 12V, as long as the load does not exceed 3A, the ideal output voltage is a fixed 12V (deviations are affected by specifications such as output accuracy, ripple/noise, line regulation, and load regulation), while the current value varies with the load.
Once the load exceeds 3A (according to Ohm’s law, 12V/3A = 4Ω; loads below 4Ω will draw more than 3A), since the power supply can only provide 3A, it switches to CC mode with a current of 3A, and the output voltage varies with the load.
The relationship between load, current, and resistance is as follows: Generally, the reason the current does not reach the set current is that the resistance is not small enough.
A large load = large current = small resistance; a small load = small current = large resistance.
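The sketch below, using the 30V/3A supply example with a 12V setpoint, shows the crossover logic described above for a purely resistive load; it is a simplified model for illustration, not a description of any specific instrument.

```python
# Simplified CV/CC crossover model for a resistive load, using the 30 V / 3 A
# supply example above with a 12 V setpoint (loads below 12 V / 3 A = 4 ohms force CC).
def operating_point(v_set, i_set, r_load):
    """Return (mode, output voltage, output current)."""
    i_cv = v_set / r_load                  # current the load would draw in CV mode
    if i_cv <= i_set:
        return "CV", v_set, i_cv           # voltage held, current follows the load
    return "CC", i_set * r_load, i_set     # current limited, voltage drops

for r_load in (10.0, 4.0, 2.0):            # load resistance in ohms
    mode, v, i = operating_point(12.0, 3.0, r_load)
    print(f"R = {r_load:>4} ohm -> {mode} mode, {v:.1f} V, {i:.2f} A")
# 10 ohm -> CV, 12.0 V, 1.20 A;  4 ohm -> CV, 12.0 V, 3.00 A;  2 ohm -> CC, 6.0 V, 3.00 A
```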
Methods for setting the maximum current:
- Knob setting: The output must be short-circuited to view the current reading. It is recommended to first turn the knob to the 0 current position. If the knob is at the maximum current position, a spark may occur when short-circuiting the output (which may be startling). Slowly turn the knob from the 0 current position to the desired maximum set current.
- Button setting: Simply input the desired maximum set current using the numeric and unit buttons.
58.Under what conditions does a power supply activate Overvoltage Protection (OVP)?
The Overvoltage Protection (OVP) of a power supply is designed to protect the Device Under Test (DUT) or circuit under test by preventing excessive voltage from being applied to it.
Here are three possible scenarios for OVP activation:
- Scenario 1: The user forgets the OVP setting from a previous project, and the voltage required for the current test project is higher than the OVP setting.
Example: The OVP was set to 12.5V in the previous project, while the current project requires 15V. The set output voltage exceeds the 12.5V OVP setting, so the power supply activates the protection mechanism and stops output.
- Scenario 2: The output of the power supply exceeds the OVP setting due to compensation after connecting remote sense.
Example: The operating voltage of the circuit is 12V, with OVP set to 12.5V. Excessive line loss causes a 0.6V voltage drop on the line, resulting in only 11.4V at the DUT. The power supply activates compensation to 12.5V, making the DUT voltage 11.9V. Further compensation exceeds the 12.5V OVP setting, so the power supply activates the protection mechanism and stops output.
- Scenario 3: Because of the inductance of the test leads, LC resonance of stray components in the leads can cause the instantaneous voltage during output switching or programmed voltage changes to exceed the OVP setting.
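A simplified numeric sketch of Scenario 2, using the figures from the example above: with remote sense, the supply must raise the voltage at its own terminals above the setpoint to cover the lead drop, and OVP monitors that terminal voltage.

```python
# Simplified numeric sketch of Scenario 2, with the values from the example:
# remote sense makes the supply raise its terminal voltage to cover the lead
# drop, and OVP monitors the terminal voltage.
V_SET = 12.0    # desired voltage at the DUT (remote sense target)
V_DROP = 0.6    # voltage lost in the leads at the operating current
OVP = 12.5      # overvoltage protection threshold at the output terminals

v_terminal_needed = V_SET + V_DROP        # 12.6 V would be needed at the terminals
if v_terminal_needed > OVP:
    print(f"Terminals reach the {OVP:.1f} V OVP limit; the DUT sees only "
          f"{OVP - V_DROP:.1f} V, and further compensation trips OVP.")  # 11.9 V
```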
59.What do programming resolution, display resolution, and readback resolution of a programmable DC power supply refer to?
Programming resolution refers to the minimum voltage and current that can be set via the keypad.
Display resolution refers to the minimum voltage and current that can be displayed on the power supply’s screen.
Readback resolution refers to the minimum voltage and current that can be read back internally by the device.
60.How does a power supply achieve constant voltage or constant current output?
Constant voltage output: Assume the load resistance is RL, and the ratio of the set voltage to the set current is RC. When RL > RC, the power supply operates in constant voltage mode.
Constant current output: Assume the load resistance is RL, and the ratio of the set voltage to the set current is RC. When RL < RC, the power supply operates in constant current mode.
AC Power Supply
1.What is an AC power supply? What are the characteristics of its current?

An AC power supply is a power source that outputs alternating current, where the magnitude and direction of the output current change periodically over time, with a common waveform being a sine wave. For example, the mains electricity used in daily life is a typical AC power supply. In China, the mains frequency is 50Hz, meaning the current direction changes 100 times per second (twice per cycle). This periodic change feature gives AC power supplies significant advantages in long-distance transmission and voltage transformation.
2.What is the most essential difference between an AC power supply and a DC power supply?
The most essential difference lies in whether the current direction changes. The current direction output by an AC power supply changes periodically, while the current direction output by a DC power supply remains constant. Additionally, the voltage of an AC power supply changes periodically over time, while the voltage of a DC power supply is usually stable. In terms of transmission, AC power can be easily transformed via transformers, facilitating long-distance high-voltage transmission to reduce losses; DC power is more common in short-distance power supply scenarios such as electronic devices and small appliances.
3.What are the common types of AC power supplies?
Common AC power supplies include mains electricity (i.e., grid power, widely used in households and industries), AC generators (such as diesel generators, hydroelectric generators, wind turbines, which convert mechanical energy into electrical energy to generate alternating current), AC voltage-stabilized power supplies (used to stabilize mains voltage and avoid voltage fluctuations affecting equipment), and some special-purpose AC power supply devices (such as adjustable AC power supplies used in laboratories, which can adjust output voltage and frequency).
4.What do the rated voltage and rated current of an AC power supply mean?
Rated voltage refers to the standard voltage value output by an AC power supply during normal operation. It is the voltage specified in the power supply design and the voltage required for electrical equipment to work normally. Rated current refers to the maximum current value that an AC power supply can output stably for a long time under the rated voltage. When using electrical equipment, its operating voltage and current should match the rated voltage and rated current of the AC power supply. If the equipment voltage exceeds the power supply’s rated voltage, it may damage the power supply or the equipment; if the equipment current exceeds the power supply’s rated current, it may cause the power supply to overload, leading to faults such as overheating.
5.What impact does the frequency of an AC power supply have on electrical equipment?
The frequency of an AC power supply has a significant impact on electrical equipment, especially inductive loads (such as motors, transformers, etc.). For motors, frequency directly affects their speed. Within a certain range, the higher the frequency, the faster the motor speed (following the relationship that speed is proportional to frequency). If the frequency does not meet the motor’s design requirements, it may cause abnormal motor speed, reduced efficiency, increased heat generation, or even damage. For transformers, too low a frequency will increase core loss and cause severe heating, affecting service life; too high a frequency may accelerate the aging of insulation materials. In addition, the clock circuits and control circuits of some electronic devices may also be affected by frequency, resulting in abnormal operation.
6.What is the normal voltage fluctuation range of an AC power supply?
Under normal circumstances, the voltage of an AC power supply (mainly referring to mains electricity) will fluctuate to a certain extent. In China, the specified fluctuation range of mains voltage is ±10% of the rated voltage, that is, the normal fluctuation range of 220V mains electricity is 198V-242V. Most electrical equipment can work normally within this range. If the voltage fluctuation exceeds this range, it may affect the normal operation of the equipment. For example, too low a voltage may cause the equipment to fail to start or run unstable; too high a voltage may burn out internal components of the equipment. For equipment sensitive to voltage, an AC voltage-stabilized power supply is usually required to stabilize the voltage.
7.What is the power factor of an AC power supply? What is its significance?
The power factor of an AC power supply is the ratio of active power (useful power) to apparent power (total power) in an AC circuit, ranging from 0 to 1. The power factor reflects the energy utilization efficiency of the power supply. The closer the power factor is to 1, the higher the electrical energy utilization efficiency, the smaller the reactive power (power used to establish magnetic fields and not do work externally), and the smaller the loss on the transmission line. Improving the power factor can reduce the waste of power resources and reduce the burden on the power supply system. Therefore, in industrial production, improving the power factor is an important measure to save electrical energy.
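As a small worked example of the definition above (the 800W and 1000VA figures are illustrative only):

```python
# Worked example of the definition above (illustrative numbers only):
# power factor = active power / apparent power.
import math

P = 800.0     # active (useful) power in W
S = 1000.0    # apparent power in VA

pf = P / S
Q = math.sqrt(S**2 - P**2)    # reactive power in var (for sinusoidal quantities)
print(f"Power factor = {pf:.2f}, reactive power = {Q:.0f} var")  # 0.80, 600 var
```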
8.When using an AC power supply, why is it important to pay attention to grounding?
Grounding when using an AC power supply is mainly for safety. When there is a leakage fault inside electrical equipment, the current will flow into the ground through the grounding wire, preventing electric shock accidents when the human body touches the equipment. In addition, grounding can reduce electromagnetic interference of electrical equipment and improve the stability of equipment operation. Especially in some precision electronic equipment, good grounding is an important condition to ensure the normal operation of the equipment.
9.What consequences can a short circuit in an AC power supply cause? How to avoid it?
When a short circuit occurs in an AC power supply, a huge short-circuit current will be generated, causing a sharp increase in heat in the power supply line, which may burn out wires and power supply equipment, and even cause a fire. At the same time, a short circuit may cause the grid voltage to drop sharply in an instant, affecting the normal operation of other equipment. To avoid short-circuit accidents, it is necessary to ensure that the wiring of electrical equipment is correct and the insulation is good, do not connect wires randomly, and install appropriate fuses or circuit breakers in the circuit. When a short circuit occurs, the fuse will blow or the circuit breaker will trip, which can quickly cut off the power supply and protect the circuit and equipment.
10.How do I choose between a single-phase or three-phase AC power supply?
It mainly depends on the power and type of your equipment. For small household appliances such as TVs, refrigerators, and lamps, a single-phase 220V AC power supply is sufficient; for high-power industrial equipment such as motors and large machine tools, a three-phase 380V AC power supply is usually required, as it provides more stable high-power output and is better suited to driving three-phase motors. Tell us the power rating and model of the equipment and we can recommend a specific supply.
11.What power rating should I choose when buying an AC power supply?
It is recommended that you choose a power supply with a power 20%-30% higher than the rated power of the equipment. For example, if the rated power of the equipment is 1000W, a 1200-1300W power supply is better. This is because the equipment may have a power peak at the moment of startup. Leaving some margin can avoid power supply overload, extend the service life, and cope with occasional power fluctuations of the equipment.
12.What should I pay attention to when using a new AC power supply for the first time?
When using it for the first time, first check whether the input voltage of the power supply matches your grid voltage. For example, if the power supply is marked for 220V input, do not connect it to 380V. Then, when connecting the equipment, ensure the live, neutral, and ground wires are connected correctly; the ground wire in particular must be properly connected to ensure safety. Before starting, adjust the output voltage to the value required by the equipment, then switch on the equipment to avoid damage from excessive voltage.
13.Does the DC mode of an AC power supply have the constant current (CC) output of a DC power supply?
The DC mode of an AC power supply does not have the constant current (CC) output of a DC power supply.
The DC mode of an AC power supply can only provide fixed voltage (CV) output. When a DC power supply is overloaded, it will switch to constant current (CC) output, while the DC mode of an AC power supply will stop output when overloaded.
Electronic Loads
1.What is an electronic load?
Compared to a traditional fixed load, an electronic load is built from devices with adjustable parameters, so its load characteristics (such as current, voltage, resistance, and power) can be set and modified as needed, making it widely applicable.
2.Why is the current fluctuating significantly?
It is likely that the load resistance is changing while the power supply voltage remains stable. To rule out issues with the power supply, ensure the connected load resistance is constant. Short-circuit the positive and negative output terminals with output clips—if the current stabilizes, the power supply is functioning normally.
3.What should be noted when measuring current with an electronic load?
When measuring current with an electronic load, the sampling (shunt) resistor used places high demands on accuracy and must be essentially free of temperature drift. It is recommended to measure under constant-temperature conditions, for example using water circulation to keep the temperature stable.
4.In addition to CC and CV modes, what other modes do DC electronic loads have, and what applications are they used for?
DC electronic loads also have operating modes such as CR, CR-LED, CW, CV+CC, CR+CC, etc. The CW mode can be used for constant power discharge testing of batteries; CV+CC is applicable to testing scenarios of charging piles or on-board chargers, where it works in CV mode while limiting the maximum current drawn to avoid triggering the overcurrent protection of the product; CR+CC is often used for testing voltage limiting, current limiting characteristics, constant voltage accuracy, and constant current accuracy of on-board chargers, preventing overcurrent protection of the charger. CR-LED is specifically used for testing the load-carrying characteristics of LED constant current sources.
5.Which operating modes of power supplies do the CC and CV modes of electronic loads measure, respectively?
The CC mode of an electronic load is used to test the constant voltage mode of a power supply, and the CV mode of an electronic load is used to test the constant current mode of a power supply.
6.Can DC electronic loads be used in CV mode after being paralleled?
If DC electronic loads are directly paralleled at their output terminals, they can only operate in CC mode.
Some models can operate in CV mode after being configured in master-slave parallel operation.
7.Do DC electronic loads themselves have ripple, and what is the parameter?
DC electronic loads themselves introduce essentially no ripple; the typical specification is ripple < 10mV.
8.How is the error range of resistance specifically calculated in the CR mode of a DC electronic load?
The range of resistance readback values for a DC electronic load is:
Rmin = 1 / (1/R + (1/R)×0.01% + 0.08), Rmax = 1 / (1/R − (1/R)×0.01% − 0.08)
i.e., the error is specified on the conductance (1/R).
9.Decoding the zero-voltage (0 Volts) startup of electronic loads
The basic component of an electronic load is a MOSFET, which is a voltage-controlled variable resistor.
An electronic load is an important device for testing the transient response of a power supply. If we simplify the electronic load to a MOSFET, when this MOSFET is connected to a power supply, the voltage of the power supply must reach the Vds voltage of the MOSFET for current to flow. Therefore, from a basic structural perspective, an electronic load cannot start at zero voltage.
If your application requires zero-voltage startup, how can this be achieved?
It only requires connecting a power supply in series to offset the Vds voltage to enable startup at zero voltage.
Possible applications:
Supercapacitors, fuel cells
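A minimal numeric sketch of the series-offset idea from the previous answer; the minimum operating voltage and offset values are assumptions for illustration, since the actual figure is model-specific.

```python
# Numeric sketch of the series-offset idea from the previous answer; the
# minimum operating voltage of the load input and the offset value are
# illustrative assumptions (the real figure is model-specific).
V_MIN_LOAD = 1.0    # assumed minimum voltage the load input needs to conduct
V_OFFSET = 1.2      # series offset supply, chosen above V_MIN_LOAD

for v_dut in (0.0, 0.5, 2.0):               # DUT voltages in volts
    v_at_load = v_dut + V_OFFSET            # offset supply adds in series
    status = "can sink current" if v_at_load >= V_MIN_LOAD else "below minimum"
    print(f"DUT at {v_dut:.1f} V -> load input sees {v_at_load:.1f} V ({status})")
```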
LCR Meter
1.What is an LCR meter, and what is its function?
An LCR meter, which stands for Inductance (L), Capacitance (C), and Resistance (R) meter, is a professional device used to accurately measure the parameters of these three basic electronic components. In the electronics manufacturing industry, it can be used to inspect whether component parameters on the production line meet standards, ensuring product quality; in maintenance scenarios, it helps technicians quickly determine if components on faulty circuit boards are damaged; in scientific research experiments, it provides researchers with precise component parameters, aiding in circuit design and optimization. For example, when developing a new type of power supply, an LCR meter is needed to accurately measure inductance and capacitance values to ensure stable power supply performance.
2.Functions of an LCR meter and correct usage methods
When using an LCR meter to measure component parameters, the key issue is measurement error. Besides the internal error of the LCR meter itself, there are various other contributions, one of which is the error introduced by the connection to the test sample. Different models of LCR meters support different connection methods; generally speaking, the more elaborate the connection method (i.e., the more terminals it uses), the more accurate the measurement can be.
3.What is the basic accuracy of an LCR meter? What is the actual accuracy?
The basic accuracy of an LCR meter refers to the accuracy achieved under optimal testing conditions. Generally, basic accuracy does not include possible errors from external sources, such as test fixtures or test leads; and this accuracy is obtained when the LCR meter is under specific parameter conditions such as the most suitable test signal, frequency, and the slowest measurement speed.
The actual accuracy of an LCR meter refers to the accuracy it can provide under actual measurement parameter requirements. Possible factors affecting the actual accuracy of an LCR meter include, in addition to the aforementioned test signal, frequency, and test speed, the loss factor (D) of the DUT, the internal resistance or range of the LCR meter, etc.
4.Why is four-wire measurement used in resistance testing? What is the difference from two-wire measurement?
When testing small resistances, four-wire (Kelvin) measurement is generally used to avoid the influence of the test leads' own resistance on the result. As a rule of thumb, when the resistance is above about 1kΩ, the lead resistance can be ignored and two-wire measurement can be used.
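The short sketch below illustrates why lead resistance matters in a two-wire measurement; the 50mΩ-per-lead figure is an assumed, typical value for illustration.

```python
# Why lead resistance matters in a two-wire measurement: the meter reads the
# DUT plus both leads. The 50 mOhm-per-lead figure is an assumed, typical value.
def two_wire_reading(r_dut, r_lead_each=0.05):
    return r_dut + 2 * r_lead_each          # both leads carry the test current

for r_dut in (0.1, 1.0, 1000.0):            # ohms
    measured = two_wire_reading(r_dut)
    error_pct = (measured - r_dut) / r_dut * 100
    print(f"R = {r_dut:>7} ohm -> two-wire reads {measured:.3f} ohm ({error_pct:.2f}% high)")

# Four-wire (Kelvin) measurement senses the voltage directly across the DUT,
# so the lead resistance drops out of the result.
```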
5.Can an LCR meter measure non-ideal components?
Yes, it can. Most components in practical applications are non-ideal. For example, inductors have internal resistance, and capacitors have equivalent series resistance and equivalent parallel resistance. An LCR meter can measure and calculate the equivalent parameters of these non-ideal components through specific algorithms. For instance, when measuring an inductor, in addition to providing the inductance value, it can also measure its equivalent series resistance; when measuring a capacitor, it can obtain parameters such as capacitance value, equivalent series resistance, and equivalent parallel resistance. These non-ideal parameters are crucial in analyzing circuit performance and troubleshooting, as they help you more accurately understand the actual working state of the components.
Withstand Voltage Tester
1.What is a withstand voltage tester, and in which scenarios is it mainly used?
A withstand voltage tester, whose full name is the withstand voltage testing instrument, is also known as an electrical insulation strength tester, dielectric strength tester, etc. It is an instrument used to detect the insulation performance of electrical equipment. Its basic principle is to apply a voltage higher than the normal working voltage to the insulator of the tested equipment and evaluate the insulation performance by measuring the leakage current generated. In the electrical equipment manufacturing industry, it can be used to detect whether the insulation performance of products on the production line meets the standards, ensuring product quality; in the power system, it can conduct regular insulation testing on high-voltage electrical equipment (such as transformers, circuit breakers) to prevent faults; in scientific research experiments, it can help researchers test the insulation characteristics of new materials, providing data support for the research and development of new materials. For example, in the production process of mobile phone chargers, a withstand voltage tester is needed to detect their insulation performance to ensure the safety of users.
2.What is a withstand voltage test?
A withstand voltage test, or high-voltage (HIPOT) test, is used to verify the quality and electrical safety of a product against requirements set by international safety organizations such as UL, CE, VDE, CSA, and TUV. The test is conducted by applying a high voltage for a specified time to the power input terminals of the product while it is not powered on (but with its switch turned on), to confirm whether it withstands the high-voltage level specified by safety organizations for that product type.
3.What is the capacity of a withstand voltage tester, and how is it calculated?
The capacity of a withstand voltage tester generally refers to its AC output power. It is therefore calculated as the maximum AC output voltage multiplied by the maximum AC output current.
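For example (illustrative numbers only, not the rating of a specific model), a tester with a maximum AC output of 5kV at 100mA has a capacity of 500VA:

    V_max = 5000.0   # maximum AC output voltage, V (assumed example)
    I_max = 0.1      # maximum AC output current, A (assumed example)
    capacity_VA = V_max * I_max
    print(f"capacity = {capacity_VA:.0f} VA")   # 500 VA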
4.How to determine the test voltage used when performing a withstand voltage test on a product?
Ideally, relevant safety standards’ test requirements should be obtained before the test.
For general non-standard withstand voltage tests, the recommended voltage = 1000V + 2 times the working voltage (test time 60 seconds). For example, for a product working at 120V, the recommended test voltage is at least 1240V.
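A minimal sketch of this rule of thumb (the 120V and 230V working voltages are example inputs only):

    def recommended_hipot_voltage(working_voltage):
        # Non-standard rule of thumb from above: 1000 V + 2 x working voltage,
        # applied for 60 seconds.
        return 1000 + 2 * working_voltage

    for vw in (120, 230):
        print(f"{vw} V product -> at least {recommended_hipot_voltage(vw)} V")
    # 120 V -> 1240 V, 230 V -> 1460 V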
5.How to decide whether to use AC or DC withstand voltage test when performing a withstand voltage test on a product?
A withstand voltage test can be AC, DC, or even both, depending entirely on the specific product type; generally speaking, AC withstand voltage tests are more often specified as mandatory. Consumer electrical products, for example, are more often powered from AC than from DC. Therefore, at a minimum, the product should undergo a withstand voltage test of the same type as the power source it most commonly uses.
6.What is the difference between AC withstand voltage test and DC withstand voltage test?
An AC withstand voltage test cannot fully charge a capacitive load: regardless of how long the AC voltage is applied, the resulting current flows immediately and continues for as long as the voltage is present.
A DC withstand voltage test charges the capacitance of the DUT, so at the moment the DC voltage is applied an instantaneous charging-current spike can be seen; this charging current decays once the capacitance of the DUT is fully charged.
Generally speaking, there is a fixed ratio between the voltage values used in AC and DC withstand voltage tests, namely DC = AC × 1.414. For example, if a DUT is tested at 2kV AC and the test is changed to DC, a DC withstand voltage of 2kV × 1.414 = 2.83kV must be used.
7.What is an Insulation Resistance (IR) test?
Insulation resistance is a measurement of the quality of insulating materials, and its testing method is very similar to the withstand voltage test. Similarly, when the electrical product is not powered on, a DC voltage up to 500V (or a maximum of 1000V) is applied to the two insulated points that need to be tested. The insulation resistance (IR) test usually gives a resistance value in megohms (MΩ). A typical judgment method is that the value of insulation resistance (IR) must not be lower than a certain megohm (MΩ) value.
8.What is the difference between a withstand voltage test and an insulation resistance test?
The insulation resistance (IR) test is a qualitative test that provides an indication of the relative quality of the insulation system. It is usually performed with 500V or 1000V DC voltage, and the result is measured in megohms (MΩ).
The withstand voltage (Hi-Pot) test is a quantitative test that also applies high voltage to the DUT, but the applied voltage is higher than that of the insulation resistance (IR) test; and it can be performed under AC or DC voltage depending on different requirements. The result is measured in milliamperes (mA) or microamperes (µA).
9.What is Continuity Check?
Continuity Check refers to using a small current to confirm that an electrical connection exists between the product's chassis and the ground terminal of the power line. This check only gives the tester a preliminary confirmation of the product's grounding condition.
10.What is a Ground Bond test?
A Ground Bond test measures the impedance between the frame (chassis) of the DUT and its ground terminal; its purpose is to ensure that, if the product develops a fault, its protective grounding path can carry the fault current. The test method is to pass a large DC current or AC RMS current (up to 42A) directly through the grounding circuit to determine the impedance of that circuit.
11.What is the difference between Continuity Check and Ground Bond test?
Continuity Check only verifies that there is an electrical connection between the power line grounding point and the conductive surface of the product.
The Ground Bond test not only confirms the existence of the grounding connection but also confirms that the connection can withstand a higher current. Most standards require that the resistance between the power line grounding point and the conductive surface of the product must not exceed 0.1Ω.
12.What is an arc?
An arc refers to the sparking (lightning-like) phenomenon that occurs when high voltage passes through the insulation system and the voltage jumps from one conductor surface to another.
13.How to determine the test voltage for a withstand voltage test?
The determination of the test voltage for a withstand voltage test needs to follow the safety standards or regulations of the tested product. Relevant safety standards specify the voltage and test time for the withstand voltage test. In most test standards, the voltage for the withstand voltage test is: when the working voltage is between 42V and 1000V, the test voltage is twice the working voltage plus 1000V, and the test time is 1 minute.
14.The difference between AC withstand voltage test and DC withstand voltage test
An AC withstand voltage test can apply the test voltage immediately, and essentially no discharge is needed after the test. A DC withstand voltage test can test products with capacitive loads, but the DUT must be discharged after the test.
Oscilloscope
1.What is an oscilloscope?
An oscilloscope is an instrument that visually displays waveforms showing how signal amplitude changes over time. It is a comprehensive signal characteristic tester and a basic type of electronic measuring instrument. It can not only display the waveform of the measured signal but also measure parameters such as the signal’s amplitude and frequency.
Oscilloscopes are divided into analog oscilloscopes, digital storage oscilloscopes, mixed-signal oscilloscopes, virtual digital oscilloscopes, etc.
Among them, digital storage oscilloscopes, abbreviated as DSO (Digital Storage Oscilloscopes), store signals in the form of digital encoding. RIGOL’s DS series are all digital oscilloscopes.
Mixed-signal oscilloscopes, often called MSO (Mixed Signal Oscilloscopes), are most notable for their ability to perform mixed measurements of digital and analog signals, providing great convenience for popular embedded development.
2.What is the concept of triggering?
To synchronize the sweep with the measured signal, certain conditions are set. The measured signal is continuously compared with these conditions, and a sweep is started only when the signal meets them, so that the sweep rate equals that of the measured signal or bears an integer relationship to it. This technique is called "triggering", and the conditions are referred to as "trigger conditions".
Trigger modes include auto trigger, normal trigger, single trigger, etc.
There are many forms of trigger modes. The most commonly used and basic one is edge triggering; others include pulse width triggering, slope triggering, video triggering, and alternate triggering. In digital signals, there are also pattern triggering and duration triggering, etc.
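As a purely conceptual sketch (not how any specific oscilloscope implements its trigger circuitry), edge triggering can be thought of as starting a sweep whenever the signal crosses a chosen trigger level in a chosen direction:

    import math

    def find_rising_edge_triggers(samples, level):
        # Return sample indices where the signal crosses 'level' going upward.
        # This is an illustrative software model; real oscilloscopes do this
        # in dedicated trigger circuitry while acquiring.
        triggers = []
        for i in range(1, len(samples)):
            if samples[i - 1] < level <= samples[i]:
                triggers.append(i)
        return triggers

    # Example: five cycles of a sine wave sampled at 1000 points.
    wave = [math.sin(2 * math.pi * 5 * t / 1000) for t in range(1000)]
    print(find_rising_edge_triggers(wave, 0.0))   # roughly one index per period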
3.What is the capture rate of an oscilloscope?
The capture rate of an oscilloscope refers to the number of waveforms captured by the oscilloscope per unit time, usually expressed in waveforms per second (wfms/s, an abbreviation of waveforms/s).
4.Why does the output not change when switching the output impedance between high impedance and 50 ohms?
This is because the physical output impedance of the signal source is fixed at 50 ohms, and the output impedance setting only adjusts the magnitude of the signal source’s output value through software.
First, when the output impedance setting is switched, the amplitude setting changes accordingly. For example, a 1kHz, 2Vpp square wave set with high output impedance becomes a 1kHz, 1Vpp setting when switched to 50 ohms. However, if the oscilloscope's input impedance is still set to high impedance when observing this signal, the measured result will still be a 1kHz, 2Vpp square wave, because the oscilloscope's input impedance does not match the signal source's output impedance. In that case, the oscilloscope's input impedance should be set to 50 ohms to obtain the correct result: a 1kHz, 1Vpp square wave.
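A minimal sketch of the voltage division behind this behavior (the 2Vpp internal setting is the assumed example from above):

    def loaded_amplitude(v_open_circuit, z_source=50.0, z_load=1e6):
        # The generator's fixed 50-ohm source impedance forms a divider with
        # whatever load impedance is connected at its output terminals.
        return v_open_circuit * z_load / (z_load + z_source)

    v_oc = 2.0   # Vpp that the generator produces internally (assumed example)
    print(f"into high impedance: {loaded_amplitude(v_oc, z_load=1e6):.3f} Vpp")   # ~2.0 Vpp
    print(f"into 50 ohms       : {loaded_amplitude(v_oc, z_load=50.0):.3f} Vpp")  # 1.0 Vpp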
5.What is the function of external triggering?
An external trigger source can be used to trigger on a third channel while acquiring data on two channels. For example, an external clock or a signal from the circuit under test can be used as the trigger source.
6.What is the bandwidth of an oscilloscope?
Bandwidth is a physical quantity that characterizes the frequency range of signals an oscilloscope can observe, usually expressed in MHz or GHz. The bandwidth of an oscilloscope is generally defined as the frequency at which the amplitude measured by the oscilloscope drops to 70.7% (-3dB) of the signal's true amplitude. The reference signal here is usually a sine wave, which contains only a single frequency component.
Insufficient bandwidth of the oscilloscope will lead to attenuation of waveform amplitude and waveform distortion.
The required bandwidth of the oscilloscope = the highest frequency component of the measured signal × 5
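A worked example of this rule of thumb (the 50MHz signal is an assumed input):

    def required_scope_bandwidth(highest_signal_frequency_hz):
        # Rule of thumb from above: bandwidth >= 5 x the highest frequency
        # component of the measured signal.
        return 5 * highest_signal_frequency_hz

    f_max = 50e6   # assumed highest frequency component of the signal, Hz
    print(f"recommended bandwidth >= {required_scope_bandwidth(f_max) / 1e6:.0f} MHz")   # 250 MHz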
7.What impact does insufficient sampling rate have on testing and measurement?
Aliasing of the measured waveform;
Loss of waveforms;
Affecting the measurement of amplitude and edges of high-frequency signals.
8.What is the real-time sampling rate of an oscilloscope?
The real-time sampling rate corresponds to real-time sampling.
The real-time sampling rate is a physical quantity that characterizes the sampling capability of an oscilloscope. It refers to the number of points (sample, abbreviated as Sa) collected by the oscilloscope per unit time (1 second), and is usually expressed in MSa/s or GSa/s.
9.Can an oscilloscope measure current?
Yes, it can. A current probe compatible with the BNC interface can be used, and at the same time, a current-voltage correspondence table should be requested from the probe manufacturer for manual conversion.
10.When testing a 485 communication device with the probe set to 1X, communication errors occur. Why?
Because the input impedance of the probe in the 1X mode is 1MΩ, which is smaller than the 10MΩ in the 10X mode. Moreover, this impedance is connected in parallel to the measured system, reducing the system impedance, affecting voltage division, and causing communication failures.
11.What is the definition of the real-time bandwidth of a digital oscilloscope?
The real-time bandwidth of a digital oscilloscope, also known as single-shot bandwidth, refers to the bandwidth value of the oscilloscope during real-time/single-shot sampling, which is greatly affected by the sampling rate.
The real-time bandwidth of a digital oscilloscope is mainly limited by two aspects: the analog bandwidth of the oscilloscope and the sampling rate.
The analog bandwidth of the oscilloscope is determined by the bandwidth of the front-end amplifier; the real-time bandwidth of the oscilloscope ≤ (sampling rate / 5). In summary, the real-time bandwidth of a digital oscilloscope is the smaller value between the analog bandwidth and (sampling rate / 5).
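A minimal sketch of this relationship (the 350MHz analog bandwidth and 1GSa/s sampling rate are assumed example values):

    def realtime_bandwidth(analog_bw_hz, sample_rate_sa_s):
        # Real-time bandwidth is the smaller of the analog front-end bandwidth
        # and (sampling rate / 5), per the rule of thumb above.
        return min(analog_bw_hz, sample_rate_sa_s / 5)

    bw = realtime_bandwidth(350e6, 1e9)   # assumed values
    print(f"real-time bandwidth ~ {bw / 1e6:.0f} MHz")   # 200 MHz, sample-rate limited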
12.If an oscilloscope is labeled as 60MHz, can it be understood that it can measure up to 60MHz at most?
A 60MHz bandwidth oscilloscope does not mean it can measure 60MHz signals well. According to the definition of oscilloscope bandwidth, if a 60MHz sine wave with a peak-to-peak value of 1V is input to a 60MHz bandwidth oscilloscope, a signal of approximately 0.7V will be seen on the oscilloscope.
13.What is the waveform capture rate of an oscilloscope, and what impact does it have?
The waveform capture rate refers to the number of waveforms captured by the oscilloscope per unit time. A digital oscilloscope does not display waveforms continuously: between one acquisition and the next it must process and draw the captured data, and signals arriving during this interval, called the dead time, are not shown. Waveforms occurring within the dead time cannot be observed, which leads to waveform loss. The higher the capture rate, the shorter the dead time and the lower the probability of missing waveforms. Therefore, when observing transient or infrequent signals, using an instrument with a higher waveform capture rate increases the probability of catching the waveform of interest, that is, it improves testing efficiency.
14.When measuring a 10MHz sine wave with a 100MHz bandwidth oscilloscope, why are the results different when the probe is set to 1X and 10X?
The actual testing bandwidth is the comprehensive bandwidth of the system composed of the oscilloscope bandwidth and the probe. The bandwidth of the probe in 1X mode is usually only 6MHz, so there will be significant attenuation when measuring a 10MHz waveform. Therefore, the result when the probe is set to 10X (with full bandwidth) is correct. It is worth noting that the system bandwidth of a 250MHz oscilloscope combined with a 250MHz probe is less than 250MHz. Therefore, selecting a suitable probe is extremely important for oscilloscope testing.
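One common approximation (valid for roughly Gaussian frequency responses) combines the scope and probe bandwidths as sketched below; it illustrates why a 250MHz probe on a 250MHz oscilloscope yields a system bandwidth below 250MHz:

    import math

    def system_bandwidth(scope_bw_hz, probe_bw_hz):
        # Gaussian-response approximation:
        # 1/BW_sys^2 = 1/BW_scope^2 + 1/BW_probe^2
        return 1.0 / math.sqrt(1.0 / scope_bw_hz**2 + 1.0 / probe_bw_hz**2)

    print(f"{system_bandwidth(250e6, 250e6) / 1e6:.0f} MHz")   # ~177 MHz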
15.How to measure mains electricity with a digital oscilloscope?
There are two key points for measuring mains electricity:
①. Ensure that the peak-to-peak value of the mains voltage is within the oscilloscope’s measurable range; otherwise, the complete waveform cannot be displayed. The RMS value of mains electricity is 220V, so its peak-to-peak value is about 622V. A digital oscilloscope with a maximum vertical scale of 10V/div, used with a 10X probe, can display signals with a peak-to-peak value of up to 800V (see the worked example after the notes below);
②. Ensure that the ground clip of the passive probe is connected to the ground wire (not the neutral wire!). The neutral wire can carry a voltage (which can be verified with a digital multimeter), and connecting the probe’s ground clip to it would directly cause a short circuit.
Points to note when measuring mains electricity:
①. It is recommended to use a 100:1 passive probe to ensure the service life of the digital oscilloscope;
②. The ground clip of the probe must be connected to the ground wire, and must not be connected to the neutral wire or live wire to prevent short circuits;
③. If the budget allows, use a high-voltage differential probe (no need to worry about grounding issues during testing) or a high-voltage passive probe with an isolation transformer for the oscilloscope;
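A worked example behind point ① above (the 8 vertical divisions are an assumed screen layout; 10V/div and the 10X probe are taken from the answer above):

    import math

    v_rms = 220.0                       # mains RMS voltage, V
    v_pp = 2 * math.sqrt(2) * v_rms     # peak-to-peak value, ~622 V

    divisions = 8                       # assumed number of vertical divisions
    volts_per_div = 10.0                # maximum vertical scale, V/div
    probe_attenuation = 10              # 10X passive probe
    max_displayable_pp = divisions * volts_per_div * probe_attenuation   # 800 V

    print(f"mains peak-to-peak ~ {v_pp:.0f} V, display limit {max_displayable_pp:.0f} V")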
16.Why is it inaccurate to measure high-frequency signals when the probe is in 1X mode? How should it be measured?
Because the bandwidth of the probe in 1X mode is usually only about 6MHz, high-frequency signals are significantly attenuated. When measuring high-frequency signals, use the 10X mode and avoid using the long ground-wire clip, whose distributed inductance causes ringing and reflections; a grounding spring is recommended instead. To measure the output of a BNC interface, a probe-to-BNC adapter or a BNC coaxial cable can be used for the test connection.
17.How to measure power supply ripple with an oscilloscope?
Power supply ripple refers to small-amplitude oscillating waveforms generated by switching and rectification processes in switching power supplies, with amplitudes often at the mV level. Measuring ripple is equivalent to measuring a high-frequency AC waveform with a large DC offset, and in practical applications, its peak-to-peak value is emphasized. The following are some precautions:
①. The oscilloscope uses AC coupling and enables bandwidth limitation;
②. The probe is set to 1X mode to ensure signal fidelity;
③. The probe uses a grounding spring instead of a grounding wire to reduce high-frequency reflections;
④. Contact or be as close as possible to the test point to avoid introducing external noise interference.
18.What is floating ground testing? How to perform floating ground testing?
A floating ground signal means that no point in the signal system has an electrical connection with the reference point, and the reference point is usually the earth, hence the name “floating ground”. When testing floating ground signals, since all points in the system may have voltage relative to the ground, using general testing methods may cause a short circuit due to the connection of the probe’s ground clip, burning electrical equipment, and even endangering the operator’s safety in severe cases. The measurement of floating ground signals can refer to the measurement of differential signals, mainly including the following methods:
A. When high accuracy is not required, two channels can be used. Touch the tips of the two probes to the two test points, connect their ground clips together, and then use the oscilloscope’s math function to subtract one channel’s waveform from the other’s; the resulting waveform is the measured waveform. Note that the minuend and subtrahend must be set correctly; otherwise, the result may be inverted.
B. When high accuracy is required, there are two methods: 1. Use a differential probe for measurement; 2. Use an isolation transformer to isolate the oscilloscope’s power supply. When using an isolation transformer, it is best to use only one channel; otherwise, the result may be inaccurate due to the common ground of the two channels.
19.What is the difference between the oscilloscope's automatic frequency measurement and hardware frequency counter measurement?
The oscilloscope’s software measurement calculates and measures the content currently displayed on the screen. When the oscilloscope is in auto-trigger mode, the displayed waveform is constantly refreshing. For some regular signals, the parameters change little or not at all after each waveform refresh, so the fluctuation range of the oscilloscope’s reading is not large. However, if the signal contains a lot of noise or is not a periodic waveform, the numbers displayed on the oscilloscope will keep jumping. At this time, pressing “STOP” will show that the measured value stabilizes after the waveform stops. The hardware frequency counter, on the other hand, obtains results by counting and calculating the triggers of the input signal through the internal hardware circuit of the oscilloscope. Therefore, for the same signal, moving the trigger level will lead to different test results displayed by the hardware frequency counter. If the signal is superimposed with a lot of noise, the hardware frequency counter’s display will also keep jumping because the triggering is irregular. Users need to choose the appropriate measurement method according to the actual situation.
20.What is the equivalent sampling rate of an oscilloscope?
The equivalent sampling rate corresponds to equivalent sampling.
Equivalent sampling, also known as repetitive sampling, is an effective technical means to make up for the insufficient real-time sampling rate.
Equivalent sampling is only effective for periodic or repetitive signals.
21.Knowledge related to probes
There are many types of oscilloscope probes with different performances, such as high-voltage probes, differential probes, active high-speed probes, etc., with prices ranging from several hundred RMB to ten thousand US dollars. The main factors determining the price of a probe are its bandwidth and functions. The probe is the part of the oscilloscope that contacts the circuit; a good probe can provide the fidelity required for testing. To achieve this, even passive probes must contain many passive component compensation circuits (RC networks) inside.
22.What is memory depth?
Memory depth refers to the number of waveform sample points that can be stored in the oscilloscope’s memory, usually expressed in pts (abbreviation of “points”).
Waveform storage time = memory depth / sampling rate
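A worked example of this relationship (the 10Mpts depth and 1GSa/s rate are assumed values, not a specific model’s specification):

    memory_depth_pts = 10e6   # assumed memory depth, points
    sample_rate = 1e9         # assumed real-time sampling rate, Sa/s
    storage_time_s = memory_depth_pts / sample_rate
    print(f"waveform storage time = {storage_time_s * 1e3:.0f} ms")   # 10 ms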
23.How long can an oscilloscope store waveforms, and what parameters does it depend on?
The waveform storage time of an oscilloscope = memory depth / real-time sampling rate; it is proportional to the memory depth and inversely proportional to the sampling rate. In practice, how long an oscilloscope can record is determined by the current time-base setting and the memory-depth selection, because at a given time-base setting the sampling rate is fixed and also depends on whether long memory is enabled.
24.What is the rise time of an oscilloscope?
Rise time is usually defined as the time it takes for a signal to rise from 10% to 90% of the rising edge.
The rise time of an oscilloscope is directly related to its bandwidth, with the relationship as follows:
Empirical formula: T_rise = 0.35 / oscilloscope bandwidth (for bandwidth below 1GHz)
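For example (assumed bandwidth values):

    def scope_rise_time(bandwidth_hz):
        # Empirical relationship, valid for oscilloscopes below about 1 GHz.
        return 0.35 / bandwidth_hz

    for bw in (100e6, 500e6):
        print(f"{bw / 1e6:.0f} MHz bandwidth -> rise time ~ {scope_rise_time(bw) * 1e9:.1f} ns")
    # 100 MHz -> 3.5 ns, 500 MHz -> 0.7 ns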
25.What is trigger holdoff?
Trigger holdoff refers to temporarily disabling the oscilloscope’s trigger circuit for a set period of time (the holdoff time). During this period, the oscilloscope will not trigger even if the signal contains points that meet the trigger conditions. Holdoff is mainly used to obtain a stable display of complex waveforms that repeat over a long period but contain, within that period, many points that would satisfy the trigger condition.
26.What is the difference between AC effective value and root mean square (RMS)?
The root mean square (RMS) value is also called the effective value. It is calculated by squaring the signal, averaging the squares, and then taking the square root. If an alternating current and a direct current pass through resistors of the same resistance and generate the same amount of heat in the same time (one cycle), the value of that direct current is called the effective value of the alternating current.
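A minimal numeric sketch of “square, average, square root” using one cycle of a sine wave (the 311V peak and 1000 samples per cycle are assumed values):

    import math

    n = 1000                 # assumed samples per cycle
    v_peak = 311.0           # assumed peak voltage (roughly 220 V RMS mains)
    samples = [v_peak * math.sin(2 * math.pi * i / n) for i in range(n)]

    # Square, average over one cycle, then take the square root.
    rms = math.sqrt(sum(v * v for v in samples) / n)
    print(f"RMS ~ {rms:.1f} V (expected {v_peak / math.sqrt(2):.1f} V)")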