
CRY8124 Advanced Acoustic Imaging Camera

$550.00
Buy on Amazon

Free Shipping

24/7 Customer Service
30-Day Return
2-Year Worry-Free Warranty
One-on-One Expert Support


Introducing CRYSOUND's cutting-edge acoustic imaging camera, revolutionizing industrial inspections with advanced capabilities. The CRY8124 acoustic imaging camera excels at pinpointing leaks, identifying electrical partial discharge, and detecting mechanical deterioration. Setting a new standard for sensitivity and efficiency, the CRY8124 boasts 200 microphones (the most in the industry). The device detects smaller leaks and partial discharges from a greater distance than any other handheld acoustic imaging camera on the market.

The CRY8124 reporting software allows offline analysis, editing, and report generation, simplifying regular maintenance routines.

Elevate the effectiveness of your industrial inspections with CRYSOUND's acoustic cameras.

Key Applications

⚡ Electrical Systems

Partial discharge detection in transformers, switchgear, insulators, and high-voltage equipment

💨 Compressed Air Systems

Leak detection in pneumatic systems, air compressors, pipelines, and industrial gas lines

🏭 Industrial Facilities

Noise source identification, equipment diagnostics, and predictive maintenance

🔧 Mechanical Systems

Bearing failure detection, valve leakage, steam trap inspection, and rotating machinery analysis

Product Highlights
The Best Performance in the Industry

With 200 microphones, 100 kHz bandwidth, and the fastest processor in the industry, the CRY8124 can pinpoint smaller leaks and partial discharges at a greater distance than any other system.

The CRY8124 keeps operators at a much safer distance, avoiding exposure to toxic gases. Benefiting from the increased detection distance, the camera also covers a wider area, improving test efficiency by more than 4x.

Detects leaks at twice the distance of the first generation

Locates a 2.7 bar, 0.0029 L/min leak at a 10 m distance

Doubles the Detectability of Sound Sources in Noisy Environments

In most field inspections, background noise is the biggest factor limiting the performance of acoustic imaging cameras. The CRY8124 introduces a powerful new algorithm, 'HyperVision.'

HyperVision mitigates mutual interference between different sources. The signals of the target sound source are emphasized, and previously unrecognized sound sources are accurately identified and presented.

Thermal & Acoustic Imaging Display Simultaneously

Integrating infrared technology with an acoustic camera significantly boosts its capabilities and efficiency. This combination enables real-time, simultaneous visualization of both thermal and acoustic images, ensuring that no detail goes unnoticed. By eliminating the need for separate thermal and acoustic inspections, it streamlines workflows and enhances overall inspection efficiency. This integration is especially valuable in high-temperature scenarios, such as with steam traps, where precise temperature measurement aids in informed decision-making.

Resolutions of 640×512 and 384×288 available

Measurement range from -20 to 500℃

Ergonomic Design & Premium Display

Weighing just 3 lbs (1.4 kg), this device is encased in a high-quality soft rubber coating that provides an excellent tactile feel and ensures a secure grip. The ergonomically designed handle enables comfortable single-handed operation, enhancing user convenience and safety in various scenarios.

The display features a stunning 8-inch LCD screen with a resolution of 1920x1200, delivering sharp, detailed visuals. With a brightness of 600 nits, the screen remains easily readable even in direct sunlight, ensuring reliable performance across different lighting conditions.

Rubber-molded and weighs only 3 lbs (1.4 kg)

1920x1200, 8-inch LCD Display

New Software for Leak Detection & Partial Discharge (PD)

The CRY8124 acoustic imaging camera boasts a fully revamped software interface that simplifies the detection of gas and vacuum leaks, partial discharge (PD) localization, and PD type identification. This user-friendly interface enables operators to swiftly and accurately identify the sources of leaks, minimizing downtime and boosting operational efficiency. Additionally, its advanced PD detection features provide precise localization and classification of partial discharge types, ensuring the reliability and performance of electrical assets.

Locates and quantifies gas/vacuum leaks

Pinpoints PD and recognizes the type of PD

Up to 10 Hours of Operation with Replaceable Battery

The CRY8124 acoustic imaging camera comes with advanced smart replaceable batteries, enhancing convenience and extending operational time. Each battery provides up to 5 hours of continuous use, ensuring uninterrupted inspections.

With an extra spare battery included, the device supports a full day of testing. The standard package features two smart batteries and a smart battery charger for effortless battery management. LED indicators on each battery show the current charge level, allowing users to easily monitor battery status and plan their work efficiently.

Enhancing Efficiency with Bluetooth® & Wi-Fi Connectivity

With Bluetooth® connectivity, users can pair wireless headsets with the camera, avoiding the risks associated with wired headsets in busy industrial environments. This feature also enables seamless data transfer to smartphones, facilitating rapid and efficient sharing of inspection data with enterprise applications. Additionally, Wi-Fi capability enhances the camera’s versatility by allowing easy data transfer to PC-based reporting software.

Connect to Bluetooth headphones and phone apps

Transfer data to PC wirelessly

Advanced Reporting Software

The CRY8124 acoustic imaging camera is paired with advanced reporting software that significantly enhances user experience and efficiency. The software boasts a newly redesigned, intuitive interface, making navigation and operation easier than ever. Wireless data transfer capabilities allow users to seamlessly send inspection data from the camera to the software, eliminating the hassle of cables. Additionally, the software supports infrared data analysis and provides powerful tools for data post-processing, enabling comprehensive investigations and the creation of detailed reports.

Redesigned user interface

Wireless data transfer

Generates reports for acoustics and infrared data

Technical Specifications
Microphone array
200-channel MEMS microphone
Frequency range
2 kHz - 100 kHz
SPL range
28 - 132 dB
Minimum Detectable Leak
0.0029 L/min at 2.7 bar from 10 m; 0.0028 L/min at 1.9 bar from 0.5 m
Test distance
0.2 - 200 m (1 - 656 ft)
Camera FOV
66°
Focal length
4.3 mm (0.17 inches)
Camera resolution
13 MP
Digital zoom
1x - 6x
Fill light
4 × LED
Resolution
1920 × 1200
Size
8 inches
Touchscreen
Capacitive touchscreen
Brightness
600 nits, auto and manual adjustment
Storage
64 GB internal, 64 GB external TF card
Data format
.jpg (picture), .mp4 (video), .wav (audio), .cdat (data)
Video length
10 minutes
Data export
USB-C,Wi-Fi,TF card
Battery capacity
6600 mAh @ 7.2 V
Battery type
Smart battery with indicator, replaceable
Battery life
Up to 5 hours
Size
270 × 190 × 51 mm (10.6 × 7.5 × 2.0 inches)
Weight
1.4 kg (3 lbs)
Wi-Fi
802.11a/b/g/n/ac
Bluetooth
BT 5.2
GNSS
GPS+BDS+GLONASS+GALILEO+QZSS
Operating temp.
-20 to +50 ℃, 10 to 95% RH, non-condensing
Storage temp.
-20 to +70 ℃, 10 to 95% RH, non-condensing
IP rating
IP54
Warranty
2 years
Safety
IEC 61010-1
EMC
IEC 61326-1
Vibration
2 g, IEC 60068-2-6
Shock
25 g, IEC 60068-2-27
Drop test
1.2 m (4 ft)
Function
Multi-point imaging, directional focus, distance measurement, leak volume estimation, PRPD spectrum, type recognition, picture labeling, report export, etc.
USB-C 1
USB 3.0 for charging, HDMI, data export
USB-C 2
USB 2.0 for data export, USB sensor
3.5 mm audio jack
Headphone output
TF card slot
External storage
SIM card slot
For inserting a 4G/5G cellular network card
Analog input
4 channels, 20 Hz - 100 kHz, IEPE, phantom power supply
Language
Chinese, English, Korean, French, Japanese, Russian, German, Italian, EU Spanish, Portuguese, Hungarian, Dutch, Cambodian, Vietnamese, Turkish, Polish
Main Features

Related Products

Sound Level Meter

A²B Microphone Testing: A Practical Measurement Setup and Workflow

As A²B microphones and sensors are increasingly adopted in automotive applications, the demand for reliable testing in both R&D and production is also growing. This article explains why A²B testing matters, highlights the advantages of A²B over traditional analog cabling in terms of interconnect and scalability, outlines key measurement KPIs (such as frequency response, THD+N, phase/polarity, and SNR), and presents a typical test-bench setup along with the corresponding solution configuration.

Why A²B Microphone and Sensor Testing Matters

In-cabin audio is no longer just "music playback". Modern vehicles depend on high-performance acoustic sensing for hands-free calling, in-cabin communication, voice assistants, ANC/RNC, and more, and these features increasingly rely on multiple microphones and even accelerometers deployed around the cabin. ADI notes that the rapid expansion of audio-, voice-, and acoustics-related applications is a key trend, and that new digital microphone and connectivity approaches are enabling broader adoption. To deliver consistent performance, teams need a test workflow that is repeatable across different node positions, harness lengths, and configurations, without turning every debug session into a custom project.

The Interconnect Shift: From Shielded Analog Cables to Digital A²B

Historically, scaling microphone counts often meant scaling shielded analog cabling, which adds weight, cost, and integration burden, sometimes limiting these features to premium vehicle segments. A²B (Automotive Audio Bus) addresses that interconnect problem by enabling a scalable, networked digital audio architecture with deterministic behavior, exactly what timing-sensitive acoustic applications need. Figures 1(a) and 1(b) show how such a design may be realized with a traditional analog system and a digital A²B system, respectively.

Figure 1: (a) Analog system design with analog mic elements (shielded wires). (b) Digital system design with digital mic elements (A²B technology and UTP wires).

What You'll Measure: Key A²B Microphone KPIs

- Frequency Response (FR)
- THD+N
- Phase / polarity (and channel-to-channel consistency for arrays)
- SNR
- AOP (if required by your program/spec)

Typical Block Diagram: What the Bench Looks Like

At CRYSOUND, we provide more than just the CRY580 A²B interface. We offer a full automotive audio testing solution, including audio acquisition cards, microphones and sensors, acoustic sources, custom fixtures, acoustic test boxes, and vibration shakers, delivering a complete and streamlined testing experience.

Figure 2: Testing block diagram, including the use of the latest OpenTest Audio Test & Measurement Software (https://opentest.com)

Solution BOM List

The value of end-to-end delivery: reducing system integration time and minimizing coordination costs between multiple suppliers. We cover everything from R&D to production line testing.

Figure 3: BOM list of the solution

If you'd like to learn more about A²B testing, please fill out the Get in touch form below and we'll reach out shortly.
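As a concrete illustration of one KPI from the list above, here is a minimal THD+N sketch in pure Python. Everything in it is an assumption for illustration only (a synthetic 1 kHz capture with 1% injected third harmonic, a single-bin DFT to isolate the fundamental); it is not part of the CRYSOUND or OpenTest toolchain.

```python
import math

def thd_n(samples, fs, f0):
    """Estimate THD+N: everything-but-the-fundamental relative to the total signal.

    The fundamental is extracted by correlating against sin/cos at f0
    (a single-bin DFT), then subtracted; the residual holds harmonics + noise.
    """
    n = len(samples)
    w = 2 * math.pi * f0 / fs
    re = sum(s * math.cos(w * i) for i, s in enumerate(samples)) * 2 / n
    im = sum(s * math.sin(w * i) for i, s in enumerate(samples)) * 2 / n
    fundamental = [re * math.cos(w * i) + im * math.sin(w * i) for i in range(n)]
    residual = [s - f for s, f in zip(samples, fundamental)]
    rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    return rms(residual) / rms(samples)

# Synthetic mic capture: 1 kHz tone plus 1% third harmonic (illustrative values)
fs, f0 = 48000, 1000
sig = [math.sin(2 * math.pi * f0 * i / fs)
       + 0.01 * math.sin(2 * math.pi * 3 * f0 * i / fs)
       for i in range(fs // 10)]  # 100 ms, an integer number of cycles
print(f"THD+N ≈ {20 * math.log10(thd_n(sig, fs, f0)):.1f} dB")
```

For the synthetic 1% distortion this works out to roughly -40 dB; a real bench would add windowing, band limiting, and weighting per the program's spec.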

Abnormal Noise Detection: From Human Ears to AI

With the rapid growth of consumer audio products such as headphones, loudspeakers and wearables, users’ expectations for “good sound” have moved far beyond simply being able to hear clearly. Now they want sound that is comfortable, clean, and free from any extra rustling, clicking or scratching noises. However, in most factories, abnormal noise testing still relies heavily on human listening. Shift schedules, subjective differences between operators, fatigue and emotional state all directly impact your yield rate and brand reputation. In this article, based on CRYSOUND’s real project experience with AI listening inspection for TWS earbuds, we’ll talk about how to use AI to “free human ears” from the production line and make listening tests truly stable, efficient and repeatable.

Why Is Audio Listening Test So Labor-Intensive?

In traditional setups, the production line usually follows this pattern: automatic electro-acoustic test + manual listening recheck. The pain points of manual listening are very clear:

- Strong subjectivity: Different listeners have different sensitivity to noises such as “rustling” or “scratching”. Even the same person may judge inconsistently between morning and night shifts.
- Poor scalability: Human listening requires intense concentration, and it’s easy to become fatigued over long periods. It’s hard to support high UPH in mass production.
- High training cost: A qualified listener needs systematic training and long-term experience accumulation, and it takes time for new operators to get up to speed.
- Results hard to trace: Subjective judgments are difficult to turn into quantitative data and history, which makes later quality analysis and improvement more challenging.
That’s why the industry has long been looking for a way to use automation and algorithms to handle this work more stably and economically, without sacrificing the sensitivity of the “human ear.”

From “Human Ears” to “AI Ears”: CRYSOUND’s Overall Approach

CRYSOUND’s answer is a standardized test platform built around the CRYSOUND abnormal noise test system, combined with AI listening algorithms and dedicated fixtures to form a complete, integrated hardware–software solution.

Key Characteristics of the Solution:

- Standardized, multi-purpose platform: Modular design that supports both conventional SPK audio / noise tests and abnormal noise / AI listening tests.
- 1-to-2 parallel testing: A single system can test two earbuds at the same time. In typical projects, UPH can reach about 120 pcs.
- AI listening analysis module: By collecting good-unit data to build a model, the system automatically identifies units with abnormal noise, significantly reducing manual listening stations.
- Low-noise test environment: A high-performance acoustic chamber plus an inner-box structure control the background noise to around 12 dBA, providing a stable acoustic environment for the AI algorithm.

In simple terms, the solution is: one standardized test bench + one dedicated fixture + one AI listening algorithm.

Typical Test Signal Path

Centered on the test host, the unified “lab + production line” chain looks like this:

1. PC host → CRY576 Bluetooth Adapter → TWS earphones
2. Earphones output sound, captured by the CRY718-S01 Ear Simulator
3. The signal is acquired and analyzed by the CRY6151B Electroacoustic Analyzer
4. The software calls the AI listening algorithm module, performs automatic analysis on the WAV data and outputs a PASS/FAIL result

Fixtures and Acoustic Chamber: Minimizing Station-to-Station Variation

Product placement posture and coupling conditions often determine test consistency.
The solution reduces test variation through fixture and chamber design, fixing the test conditions as much as possible:

- Fixture: soft rubber shaped recess. The shaped recess ensures that the earbud is always placed against the artificial ear in the same posture, reducing position errors and test variation. The soft rubber improves sealing and prevents mechanical damage to the earphones.
- Acoustic box: inner-box damping and acoustic isolation. This reduces the impact of external mechanical vibration and environmental noise on the measurement results.

Professional-Grade Acoustic Hardware (Example Configuration)

- CRY6151B Electroacoustic Analyzer: frequency range 20 Hz–20 kHz, low background noise and high dynamic range, integrating both signal output and measurement input.
- CRY718-S01 Ear Simulator Set: meets relevant IEC / ITU requirements. Under appropriate configurations and conditions, the system’s own noise can reach the 12 dBA level.
- CRY725D Shielded Acoustic Chamber: integrates RF shielding and acoustic isolation, tailored for TWS test scenarios.

AI Algorithm: How Unsupervised Anomaly Detection “Recognizes the Abnormal”

Training Flow: Only “Good” Earphones Are Needed

CRYSOUND’s AI listening solution uses an unsupervised anomalous sound detection algorithm. Its biggest advantage is that it does not require collecting many abnormal samples in advance; only normal, good units are needed to train a model that “understands good sound”.

In real projects, the typical steps are as follows:

1. Prepare no fewer than 100 good units.
2. Under the same conditions as mass production testing, collect WAV data from these 100 units.
3. Train the model using these good-unit data (for example, 100 samples of 10 seconds each; training usually takes less than 1 minute).
4. Use the model to test both good and defective samples, compare the distribution of the results, and set the decision threshold.
5. After training, the model can be used directly in mass production.
Prediction time for a single sample is under 0.5 seconds. In this process, engineers do not need to manually label each type of abnormal noise, which greatly lowers the barrier to introducing the system into a new project.

Principle in Brief: Let the Model “Retell” a Normal Sound First

Roughly speaking, the algorithm works in three steps:

1. Time-frequency conversion: convert the recorded waveform into a time-frequency spectrogram (like a “picture of the sound”).
2. Deep-learning-based reconstruction: use the deep learning model trained on “normal earphones” to reconstruct the time-frequency spectrogram. For normal samples, the model can more or less “reproduce” the original spectrogram. For samples containing abnormal noise, the abnormal parts are difficult to reconstruct.
3. Difference analysis: compare the original spectrogram with the reconstructed one and calculate the difference along the time and frequency axes to obtain two difference curves. Abnormal samples will show prominent peaks or concentrated energy areas on these curves.

In this way, the algorithm develops a strong fit to the “normal” pattern and becomes naturally sensitive to any deviation from that pattern, without needing to build a separate model for each type of abnormal noise. In actual projects, this algorithm has already been verified in more than 10 different projects, achieving a defect detection rate of up to 99.9%.

Practical Advantages of AI Listening

- No dependence on abnormal samples: No need to spend enormous effort collecting various “scratching” or “electrical” noise examples.
- Adapts to new abnormalities: Even if a new type of abnormal sound appears that was not present during training, as long as it is significantly different from the normal pattern, the algorithm can still detect it.
- Continuous learning: New good-unit data can be continuously added later so that the model can adapt to small drifts in the line and environment over the long term.
- Greatly reduced manual workload: Instead of “everyone listening,” you move to “AI scanning + small-batch sampling inspection,” freeing people to focus on higher-value analysis and optimization work.

A Typical Deployment Case: Real-World Practice on an ODM TWS Production Line

On one ODM’s TWS production line, the daily output per line is on the order of thousands of sets. In order to improve yield and reduce the burden of manual listening, they introduced the AI abnormal-noise test solution:

- Test method. Before: 4 manual listening stations, abnormal noises judged purely by human listeners. After: 4 AI listening test systems, each testing one pair of earbuds.
- Manpower configuration. Before: 4 operators (full-time listening). After: 2 operators (loading/unloading + rechecking abnormal units).
- Quality risk. Before: missed defects and escapes due to subjectivity and fatigue. After: during pilot runs, AI system results matched manual sampling; stability improved significantly.
- Work during pilot stage. Before: define manual listening procedures. After: collect samples, train the AI model, set thresholds, and validate feasibility via manual sampling.
- Daily line capacity (per line). Before: limited by the pace of manual testing. After: about 1,000 pairs of earbuds per day.
- Abnormal-noise detection rate. Before: missed defects existed, not quantified. After: ≈ 99.9%.
- False-fail rate (good units misjudged). Before: affected by subjectivity and fatigue, not quantified. After: ≈ 0.2%.

On this line, AI listening has essentially taken over the original manual listening tasks. Not only has the headcount been cut by half, but the risk of missed defects has been significantly reduced, providing data support for scaling the solution across more production lines in the future.
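The reconstruct-and-compare principle described earlier can be sketched in a few lines of pure Python. This is a deliberately simplified stand-in: the deep learning reconstruction model is replaced by the average good-unit spectrogram, the "earbuds" are synthetic 440 Hz recordings, and the mean + 3-sigma decision threshold is an illustrative choice, not CRYSOUND's production algorithm.

```python
import math, random

def spectrogram(samples, frame=256):
    """Crude time-frequency map: per-frame single-bin DFT magnitudes (8 low bins)."""
    frames = []
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame]
        mags = []
        for k in range(1, 9):  # a few low bins are enough for the sketch
            w = 2 * math.pi * k / frame
            re = sum(s * math.cos(w * i) for i, s in enumerate(chunk))
            im = sum(s * math.sin(w * i) for i, s in enumerate(chunk))
            mags.append(math.hypot(re, im) / frame)
        frames.append(mags)
    return frames

def score(samples, template):
    """Anomaly score: worst frame/bin deviation from the 'reconstructed' spectrogram."""
    spec = spectrogram(samples)
    return max(abs(m - t) for fr, tf in zip(spec, template) for m, t in zip(fr, tf))

def train(good_units):
    """'Model' = average good spectrogram; threshold = mean + 3 sigma of good scores."""
    specs = [spectrogram(u) for u in good_units]
    template = [[sum(s[t][k] for s in specs) / len(specs)
                 for k in range(len(specs[0][0]))] for t in range(len(specs[0]))]
    scores = [score(u, template) for u in good_units]
    mu = sum(scores) / len(scores)
    sd = math.sqrt(sum((x - mu) ** 2 for x in scores) / len(scores))
    return template, mu + 3 * sd

random.seed(0)
tone = lambda i: math.sin(2 * math.pi * 440 * i / 48000)
good = [[tone(i) + random.gauss(0, 0.01) for i in range(2048)] for _ in range(20)]
template, thresh = train(good)

ok = [tone(i) + random.gauss(0, 0.01) for i in range(2048)]
ng = [tone(i) + (0.4 if 900 < i < 950 else 0) for i in range(2048)]  # injected click
print("NG flagged:", score(ng, template) > thresh)
```

Even with this crude "reconstruction", the injected click produces a deviation far above anything seen in the good-unit training scores, which is the same asymmetry the deep model exploits.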
Deployment Recommendations: How to Get the Most Out of This Solution

If you are considering introducing AI-based abnormal-noise testing, you can start from the following aspects:

1. Plan sample collection as early as possible. Begin accumulating “confirmed no abnormal noise” good-unit waveforms during the trial build / small pilot stage, so you can get a head start on AI training later.
2. Minimize environmental interference. The AI listening test station should be placed away from high-noise equipment such as dispensing machines and soldering machines. By turning off alarm buzzers, defining material-handling aisles that avoid the test stations, and reducing floor vibration, you can effectively lower false-detection rates.
3. Keep test conditions consistent. Use the same isolation chamber, artificial ear, fixtures and test sequence in both the training and mass-production phases, to avoid model transfer issues caused by environmental differences.
4. Maintain a period of human-machine coexistence. In the early stage, you can adopt a “100% AI + manual sampling” strategy, and then gradually transition to “100% AI + a small amount of DOA recheck,” in order to minimize the risks associated with deployment.

Conclusion: Let Testing Return to “Looking at Data” and Put People Where They Create More Value

AI listening tests, at their core, are an industrial upgrade: from experience-based human listening to data- and algorithm-driven testing. With standardized CRYSOUND test platforms, professional acoustic hardware, product-specific fixtures and AI algorithms, CRYSOUND is helping more and more customers transform time-consuming, labor-intensive and subjective manual listening into something stable, quantifiable and reusable.

If you’d like to learn more about abnormal-noise testing for earphones, or are planning to try AI listening on your next-generation production line, please use the “Get in touch” form below. Our team can share recommended settings and an on-site workflow tailored to your production conditions.

Abnormal Noise Testing Explained: Principle, Method, and Configuration

In our previous blog post, "Abnormal Noise Detection: From Human Ears to AI", we discussed the key pain points of manual listening, introduced CRYSOUND's AI-based abnormal-noise testing solution, outlined the training approach at a high level, and showed how the system can be deployed on a TWS production line. In this post, we take the next step: we'll dive deeper into the analysis principles behind CRYSOUND's AI abnormal-noise algorithm, share practical test setups and real-world performance, and wrap up with a complete configuration checklist you can use to plan or validate your own deployment.

Challenges Of Detecting Anomalies With Conventional Algorithms

In real factories, true defects are both rare and highly diverse, which makes it difficult to collect a comprehensive library of abnormal sound patterns for supervised training. Even well-tuned, sometimes highly customized, rule-based algorithms rarely cover every abnormal signature. New defect modes, subtle variations, and shifting production conditions can fall outside predefined thresholds or feature templates, leading to missed detections (escapes).

In the figures below, we compare two WAV files that we generated manually.

Figure 1: OK WAV
Figure 2: NG WAV

You can see that conventional checks, such as frequency response, THD, and a typical rub & buzz (R&B) algorithm, can hardly detect the injected low-level noise defect; the overall curve difference is only ~0.1 dB. In a simple FFT comparison, the two WAV files do show some discrepancy, but in real production conditions the defect energy may be even lower, making it very likely to fall below fixed thresholds and slip through. By contrast, in the time-frequency representation, the abnormal signature is clearly visible, because it appears as a structured pattern over time rather than a small change in a single averaged curve.
Figure 3: Analysis results

Principle Of AI Abnormal Noise Algorithm

CRYSOUND proposes an abnormal-noise detection approach built on a deep-learning framework that identifies defects by reconstructing the spectrogram and measuring what cannot be well reconstructed. This breaks through key limitations of traditional rule-based methods and, at the principle level, enables broader and more systematic defect coverage, especially for subtle, diverse, and previously unseen abnormal signatures.

The figure below illustrates the core workflow behind our training and inference pipeline.

Figure 4: Algorithm Flow Principle

During model training, we build the algorithm following the workflow below.

Figure 5: Algorithm Judgment Principle

How To Use And Deploy The AI Algorithm

Preparation

First, prepare a low-noise measurement microphone or low-noise ear simulator and a microphone power supply, so that you can capture subtle abnormal signatures while providing stable power to the mic.

Figure 6: Low-Noise Measurement Microphone

Next, you'll need a sound card to record the signal and upload the data to the host PC.

Figure 7: Data Acquisition System

Third, use a fixture or positioning jig to hold the product so that placement is repeatable and every recording is taken under consistent conditions.

Finally, ensure a quiet and stable acoustic environment: in a lab, an anechoic chamber is ideal; on a production line, a sound-insulation box is typically used to control ambient noise and keep measurements consistent.

Figure 8: Anechoic Room
Figure 9: Anechoic Chamber

Model Development

First, create a test sequence in SonoLab, select "Deep Learning" and apply the setting. Next, select the appropriate AI abnormal-noise algorithm module and its corresponding API.

Figure 10: Sequence Interface 1

Then open Settings and specify the model type, as well as the file paths for the training dataset and test dataset.
Click Train and wait for the model to finish training (training time depends on your PC's hardware).

Figure 11: Sequence Interface 2

During training, the status indicator turns yellow. Once training is complete, it switches to green and shows a "Training completed" message.

Figure 12: Sequence Interface 3

Finally, place your test WAV files in the specified test folder and run the sequence. The model will start automatically and output the analysis results.

Test Case

Figure 13: Test Environment
Figure 14: Test Curve

System Block Diagram

Figure 15: System Block Diagram 1
Figure 16: System Block Diagram 2

Equipment

More technical details are available upon request; please use the "Get in touch" form below. Our team can share recommended settings and an on-site workflow tailored to your production conditions.
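The contrast between an averaged curve and a time-resolved view can be reproduced with a toy signal: a clean tone versus the same tone with a faint 5 ms buzz injected. The 1 kHz carrier, 6 kHz defect, and frame length are illustrative assumptions, not the signals used in the figures above.

```python
import math

FS = 48000

def rms_db(x):
    return 20 * math.log10(math.sqrt(sum(v * v for v in x) / len(x)))

def bin_mag(x, f):
    """Single-bin DFT magnitude at frequency f (Goertzel-style correlation)."""
    w = 2 * math.pi * f / FS
    re = sum(s * math.cos(w * i) for i, s in enumerate(x))
    im = sum(s * math.sin(w * i) for i, s in enumerate(x))
    return 2 * math.hypot(re, im) / len(x)

# 200 ms of a clean 1 kHz tone, and a copy with a faint 5 ms buzz at 6 kHz
ok = [math.sin(2 * math.pi * 1000 * i / FS) for i in range(FS // 5)]
ng = list(ok)
for i in range(4000, 4000 + FS // 200):       # 5 ms defect burst
    ng[i] += 0.05 * math.sin(2 * math.pi * 6000 * i / FS)

# Averaged, single-number view: the overall level barely moves
print(f"overall level difference: {abs(rms_db(ng) - rms_db(ok)):.3f} dB")

# Time-frequency view: per-frame energy at 6 kHz exposes the transient
frame = 480  # 10 ms frames; 1 kHz and 6 kHz both land on exact bins
ok_track = [bin_mag(ok[s:s + frame], 6000) for s in range(0, len(ok), frame)]
ng_track = [bin_mag(ng[s:s + frame], 6000) for s in range(0, len(ng), frame)]
print(f"6 kHz magnitude, worst frame: OK={max(ok_track):.4f}  NG={max(ng_track):.4f}")
```

The overall level difference prints as essentially 0 dB, while the worst-frame 6 kHz magnitude jumps from near zero to about 0.025 for the defective unit, the same effect the spectrogram comparison above illustrates.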

An Open Platform For Intelligent Sound Imaging

In the fields of acoustic research and industrial inspection, sound is no longer just a signal to be "heard", but information that can be "seen". How to visualize, analyze, and quantify sound has been a long-standing pursuit for research institutions and engineers alike. Today, leveraging its deep expertise in acoustics, CRYSOUND has launched the new SonoCam Pi product series: not just an acoustic camera, but an open acoustic platform, redefining the future of acoustic measurement and imaging.

Making Acoustic Experiments Simpler And More Efficient

In recent years, microphone arrays have been rapidly adopted in acoustic research. However, research institutions commonly face the following challenges:

- Traditional systems are expensive and offer a limited number of channels.
- Array design and algorithm development are complex and time-consuming.
- In-house array development lacks mature supply chains and integrated hardware-software support.

To address these challenges, CRYSOUND, leveraging nearly 30 years of expertise in acoustic testing and signal processing, has developed the SonoCam Pi platform: an affordable, open, and programmable acoustic solution. It enables researchers, engineers, and university students to enter the world of acoustic imaging and algorithm validation more quickly, flexibly, and cost-effectively.

An Acoustic Development Platform For Research And Industry

Hardware Highlights: Large Arrays & Multi-Geometry Adaptability

- 208-channel MEMS microphone array, supporting replacement and customization.
- Array diameters of 30 cm / 70 cm / 110 cm, enabling easy switching between near-field and far-field measurements.
- Wideband response from 20 Hz to 20 kHz, suitable for both precision lab testing and on-site measurements.
- Modular design, allowing rapid deployment and flexible expansion.

SonoCam Pi product appearance

Software Ecosystem: Open APIs & Algorithm Freedom

- Provides an API for 208-channel raw audio waveform data.
- Comes with a MATLAB acoustic imaging algorithm Demo App for rapid algorithm validation.
- Built-in acoustic imaging algorithms including Far-field Beamforming and Near-field Acoustic Holography.
- Supports secondary development, enabling users to build customized acoustic analysis tools.

In short, SonoCam Pi is not just a hardware device; it is a complete platform for acoustic algorithm development and experimental validation.

From Lab To Factory: Applications Of SonoCam Pi

Acoustic Drone Detection
Powered by array-based localization and identification algorithms, SonoCam Pi can accurately capture the acoustic signature of drones, enabling reliable low-altitude acoustic detection to support security monitoring and site security.

Acoustic Research & Algorithm Development
Research institutions can leverage SonoCam Pi's 208-channel raw-data API and MATLAB demo tools to rapidly validate research algorithms such as Far-field Beamforming and Near-field Acoustic Holography.

Sound Propagation Path Analysis
Supports directional analysis of both structure-borne and airborne sound propagation, helping researchers and engineers more intuitively understand the transmission mechanisms of noise sources.

Automotive NVH Noise Inspection
By combining beamforming and acoustic holography techniques, SonoCam Pi can quickly pinpoint interior and exterior noise sources, visualize acoustic radiation, and support NVH optimization as well as overall vehicle sound quality improvement.

Open · Efficient · Intelligent: A New Start For Acoustic Research

Whether for algorithm validation in university laboratories or noise diagnostics in industrial environments, SonoCam Pi has become a new-generation acoustic tool for both research and engineering practice, thanks to its outstanding performance, comprehensive ecosystem, and high level of openness.
It makes acoustic measurement more portable, more intelligent, and more open—not only enabling users to see sound, but also empowering researchers to reshape the way sound is understood. SonoCam Pi is more than an acoustic camera; it is an acoustic application ecosystem platform. As technology and acoustic algorithms continue to evolve, CRYSOUND will keep advancing SonoCam Pi, enabling acoustic imaging to unlock new potential across more fields and working hand in hand with research and industrial users to explore the limitless possibilities of the acoustic world. If you'd like to learn more about the applications of CRYSOUND's SonoCam Pi, or discuss the most suitable solution for your needs, please contact us via the form below. Our sales or technical support engineers will get in touch with you shortly.
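To give a feel for what can be built on top of raw multi-channel waveforms such as those the SonoCam Pi API exposes, here is a minimal delay-and-sum far-field beamformer in pure Python. The 7-microphone line array, 2 kHz tone source, and 5-degree angle grid are illustrative assumptions, a much smaller setup than the real 208-channel array.

```python
import math

C = 343.0     # speed of sound (m/s)
FS = 48000    # sample rate
F = 2000.0    # test tone frequency (Hz)
# Hypothetical 7-microphone line array, 5 cm pitch (for readability only;
# the real SonoCam Pi array has 208 channels in a ring geometry)
MIC_X = [-0.15 + 0.05 * m for m in range(7)]

def simulate(angle_deg, n=512):
    """Plane-wave tone arriving from angle_deg (0 degrees = broadside)."""
    s = math.sin(math.radians(angle_deg))
    return [[math.sin(2 * math.pi * F * (t / FS - x * s / C)) for t in range(n)]
            for x in MIC_X]

def steered_power(chans, angle_deg):
    """Delay-and-sum: undo each mic's expected delay, sum, measure output power."""
    s = math.sin(math.radians(angle_deg))
    out = [0.0] * 400
    for x, ch in zip(MIC_X, chans):
        shift = int(round(x * s / C * FS))   # steering delay in whole samples
        for t in range(400):
            out[t] += ch[t + shift + 50]     # +50 margin keeps indices valid
    return sum(v * v for v in out) / len(out)

chans = simulate(25.0)                        # simulated source at +25 degrees
best = max(range(-60, 61, 5), key=lambda a: steered_power(chans, a))
print("estimated direction:", best, "deg")
```

Scanning steering angles and picking the maximum output power recovers the source direction; production acoustic imaging adds many more channels, two-dimensional scanning, fractional-sample delays, and frequency-domain weighting.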

Get in touch

If you are interested in our products or have questions, book a demo and we will be glad to show you how the product works, discuss which solutions it can be part of, and explore how it might fit your needs and organization.
