How to add low-overhead, low-power audio processing to SoC design


If you are planning to design an SoC with digital audio, or are already working on such a project, a few minutes spent on this article can help you balance the key technical parameters, avoid heading down the wrong design path, and improve the odds of a successful tape-out, which can save weeks of development time. We will walk through the design alternatives, the key factors to evaluate, and the reasoning behind the final choice.

History of consumer audio products

The history of consumer audio products goes back about 130 years. For the first 100 of those years, audio playback relied entirely on analog technology; digital products of that era were bulky, expensive, and far too complex for consumer use. It was not until 1982, with the arrival of the CD and the CD player, that the situation changed completely, and vinyl record players seemed to disappear almost overnight.

At the same time, the rapid spread of the personal computer brought consumer audio into close contact with digital technology. Disk- and flash-based digital music players such as the iPod replaced tape devices. Where are audio products headed next? Toward ever more realistic sound reproduction. Reproduction technology has evolved continuously, from early mono to stereo to multi-speaker 3D surround sound. In just a few years, home theater systems have progressed from 5.1 to 10.1 channels. Each technology leap has brought more channels, higher sampling rates, and greater processing requirements.

Increasingly complex audio codec formats demand ever more performance, which has traditionally been met by raising the processor's clock speed. But a higher clock rate brings its own problems: more power consumption and more heat, which in turn mean a larger battery and a cooling fan, adding product cost and audible fan noise. During SoC design, a higher processor clock also makes timing closure more difficult. All of these issues deserve the designer's attention.
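As a rough reminder of why clock frequency is so expensive (this is the standard CMOS dynamic-power approximation, not a figure taken from this article), switching power grows linearly with frequency and quadratically with supply voltage:

P_{\mathrm{dyn}} \approx \alpha \cdot C_{\mathrm{eff}} \cdot V_{DD}^{2} \cdot f

where \alpha is the switching activity, C_{\mathrm{eff}} the effective switched capacitance, V_{DD} the supply voltage, and f the clock frequency. An architecture that meets its audio performance target at a lower f, and possibly a lower V_{DD}, therefore saves power twice over.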

To date, the best-selling digital audio player is the mobile phone. Beyond making calls, today's phones are in effect small multimedia terminals that play audio, video, and games with complex sound effects. These features demand powerful audio processing, yet power consumption must be kept as low as possible: consumers expect long standby times and want the built-in MP3 player to run for hours while still delivering excellent sound quality.

Along with audio, video has also gone digital, and most video playback devices today support a variety of audio standards. Car audio is digitizing as well: CD players became standard equipment in cars years ago, and more recently, with the popularity of high-fidelity satellite broadcasting, digital radios have entered automotive electronics. GPS navigation devices need text-to-speech conversion, and some also double as personal media players.

Audio codec

The codec format is the core element of every digital audio application. It defines how an analog audio signal is digitized and compressed into a bit stream, and how that bit stream is decompressed and converted back into an analog audio signal.

The most popular compression format in consumer products is MP3, introduced in 1991 as part of the MPEG-1 standard. The first MP3 players appeared in the mid-1990s. Vendors have since promoted other formats offering higher fidelity at lower bit rates, but MP3 still plays a central role among audio standards, and almost every consumer product today supports it.

Most audio codec algorithms use lossy compression to reduce the required bit rate. Lossy compression lowers the bandwidth needed for transmission and storage, and with it the cost, which is why vendors rely on it. The trade-off is degraded audio quality, and the degree of degradation depends on the algorithm. Given enough processing power, the loss can be kept below what the human ear can distinguish, so researchers continue to refine their algorithms. Different applications balance audio quality against bandwidth according to their own needs, which is why so many digital audio codec formats coexist.
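To make the bandwidth trade-off concrete, the short calculation below compares raw CD-quality PCM with a typical lossy bit rate. The 128 kbps figure is just a common MP3 setting used for illustration, not a number taken from this article.

#include <stdio.h>

int main(void)
{
    /* CD-quality PCM: 44,100 samples/s, 16 bits per sample, 2 channels */
    const double sample_rate_hz  = 44100.0;
    const double bits_per_sample = 16.0;
    const double channels        = 2.0;

    const double pcm_kbps = sample_rate_hz * bits_per_sample * channels / 1000.0;

    /* A typical lossy rate, e.g. 128 kbps MP3 (illustrative only) */
    const double mp3_kbps = 128.0;

    printf("Uncompressed PCM : %.1f kbps\n", pcm_kbps);           /* about 1411 kbps */
    printf("128 kbps MP3     : %.1f kbps\n", mp3_kbps);
    printf("Compression ratio: %.1f : 1\n", pcm_kbps / mp3_kbps); /* about 11 : 1 */
    return 0;
}

Roughly an 11:1 reduction is what makes flash-based players and delivery over limited links practical at all.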

When you start thinking about how to implement an audio codec algorithm, you have four basic alternatives.

1. Implement the codec in software on a general-purpose processor, for example an MP3 player program running on a PC.

2. Implement the codec in dedicated hardware, the approach taken by early portable MP3 players.

3. Implement the codec in software on a DSP processor.

4. Implement the codec in software on an audio-dedicated processor, built by extending a general-purpose processor.

In option 1, a general-purpose processor implements all system functions, including the user interface, I/O, and the digital audio codec. This approach has several advantages. First, the audio codec is just another piece of software running on the processor, so the only hardware overhead is a small amount of additional instruction memory. Second, because the codec is implemented in software, multiple codec formats can be supported. Finally, when a new codec format appears, a software upgrade is all that is needed to support it.

The shortcomings of this scheme are equally obvious. Digital audio is sensitive to momentary glitches, because the human ear picks up very small errors; in this respect audio is more demanding than video. A single wrong pixel usually goes unnoticed, but an audio dropout does not. When a general-purpose processor handles encoding and decoding while also running other tasks, its cycles are not dedicated to audio, which raises the probability of such glitches.
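The real-time constraint behind those glitches comes down to simple arithmetic: every output buffer must be ready before the previous one finishes playing. The numbers below (one MP3 frame per buffer, and two made-up decode times) are purely illustrative.

#include <stdio.h>

int main(void)
{
    const double sample_rate_hz   = 44100.0;
    const double frames_per_buf   = 1152.0;  /* samples in one MP3 frame             */
    const double decode_ms_idle   = 8.0;     /* decode time when the CPU is free     */
    const double decode_ms_loaded = 30.0;    /* decode time when other tasks preempt */

    /* The buffer must be decoded before its playback time elapses. */
    const double deadline_ms = 1000.0 * frames_per_buf / sample_rate_hz; /* ~26.1 ms */

    printf("Playback deadline per buffer: %.1f ms\n", deadline_ms);
    printf("Idle CPU   : %.1f ms -> %s\n", decode_ms_idle,
           decode_ms_idle <= deadline_ms ? "OK" : "underrun (audible glitch)");
    printf("Loaded CPU : %.1f ms -> %s\n", decode_ms_loaded,
           decode_ms_loaded <= deadline_ms ? "OK" : "underrun (audible glitch)");
    return 0;
}

Missing the deadline even once produces an audible dropout, which is why a processor whose cycles are shared with the rest of the system is a risk.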

In addition, most general-purpose processors lack audio-specific instructions and cannot execute audio codecs efficiently, so the only way to meet the performance requirement is to raise the clock speed and execute more instructions per unit time.

Implementing the codec in hardware

In option 2, a relatively low-performance processor is paired with dedicated audio hardware, and all audio processing is done by that hardware. Typically the hardware is attached to the system bus as a peripheral: the processor pushes audio samples to it over the bus, or the codec block fetches them directly from memory via DMA. Either way, the system bus is shared.
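To picture how such a peripheral is driven, here is a minimal driver-style sketch. The register layout, names, and bit assignments are invented for illustration; a real decoder block would define its own.

#include <stdint.h>

/* Hypothetical register map of a memory-mapped MP3 decoder peripheral. */
typedef struct {
    volatile uint32_t src_addr;   /* physical address of the compressed bitstream */
    volatile uint32_t length;     /* number of bytes the block fetches via DMA    */
    volatile uint32_t ctrl;       /* bit 0: start decode                          */
    volatile uint32_t status;     /* bit 0: busy                                  */
} mp3_hw_regs_t;

#define MP3_CTRL_START   (1u << 0)
#define MP3_STATUS_BUSY  (1u << 0)

/* Hand one compressed buffer to the hardware and wait for it to finish. */
void mp3_hw_decode(mp3_hw_regs_t *regs, uint32_t bitstream_phys, uint32_t nbytes)
{
    regs->src_addr = bitstream_phys;   /* the block pulls the data itself via DMA */
    regs->length   = nbytes;
    regs->ctrl     = MP3_CTRL_START;

    while (regs->status & MP3_STATUS_BUSY)
        ;                              /* real firmware would block on an interrupt */
}

Note that both the DMA traffic and the register accesses still travel over the shared system bus.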

The advantage of dedicated codec hardware is that, for a given codec format, its area and power consumption are the lowest of any of these approaches, which is why MP3 players of the mid-1990s took this route. The disadvantage is that every codec format needs its own hardware block. In the design of Figure 1, supporting three codec formats requires three separate hardware blocks. In designs that require multi-format audio, and almost all current SoC designs do, this approach loses its advantage. Furthermore, if a codec algorithm is upgraded or a bug is found, fixing it requires re-spinning the entire SoC; the error cannot be corrected with a software update. Likewise, adding a new codec format means designing a new hardware block, integrating it into the system, and re-spinning the chip.

Figure 1: In a scenario using dedicated codec hardware, each codec format requires additional hardware modules to support it.

In option 3, the audio codec is implemented in software on a general-purpose DSP processor, with a separate host processor handling system control (by "general-purpose DSP" we mean a DSP that has not been optimized specifically for audio). This approach has many advantages. First, a DSP has hardware multipliers, which greatly improve the efficiency of audio encoding and decoding. Second, because the codec runs in software, multi-format audio can be supported with only a small amount of additional memory. Implementing a new codec algorithm likewise requires only new software, not a chip re-spin, which extends the product's life cycle.
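The reason a hardware multiplier matters is that codecs spend most of their cycles in multiply-accumulate loops like the fixed-point filter below. Each loop iteration maps onto a single MAC on a DSP, whereas a processor without a multiplier has to synthesize it from shifts and adds. The coefficients and samples are illustrative values in Q15 format.

#include <stdint.h>
#include <stdio.h>

/* Fixed-point FIR filter in Q15: the inner multiply-accumulate is exactly
   the operation a DSP's hardware multiplier accelerates. */
static int16_t fir_q15(const int16_t *x, const int16_t *coef, int taps)
{
    int32_t acc = 0;                        /* wide accumulator */
    for (int i = 0; i < taps; i++)
        acc += (int32_t)x[i] * coef[i];     /* one MAC per tap */
    return (int16_t)(acc >> 15);            /* scale back to Q15 */
}

int main(void)
{
    /* 4-tap moving average: each coefficient is 0.25 in Q15 (0.25 * 32768 = 8192). */
    const int16_t coef[4] = { 8192, 8192, 8192, 8192 };
    const int16_t x[4]    = { 1000, 2000, 3000, 4000 };

    printf("filtered sample = %d\n", fir_q15(x, coef, 4));   /* prints 2500 */
    return 0;
}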

The DSP approach also has drawbacks. Most DSP C compilers are relatively inefficient, so system control is usually not run on the DSP; a general-purpose processor is still needed for that. Moreover, neither 16-bit nor 32-bit DSP processors are ideal for audio. Although most current codecs work with 16-bit samples, intermediate results need extra headroom to avoid rounding errors, so a 16-bit DSP runs into trouble on complex audio algorithms. Double-precision arithmetic avoids the problem but is inefficient and demands a higher clock speed, while a 32-bit DSP leaves much of its width unused. In practice, a 24-bit DSP is the best fit for audio algorithms.
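A simplified model of what the working width costs: the gain stage below keeps its intermediate result to either 24 or 16 significant bits before writing it back. The sample and gain values are made up, and this is generic C rather than code for any particular DSP.

#include <stdint.h>
#include <stdio.h>

/* Apply a Q15 gain to a 24-bit sample, then truncate the result to
   'keep_bits' of precision to model a narrower working width. */
static int32_t scale_with_width(int32_t sample, int16_t gain_q15, int keep_bits)
{
    int64_t y = ((int64_t)sample * gain_q15) >> 15;   /* wide intermediate */
    int drop  = 24 - keep_bits;                       /* bits lost to the narrower width */
    return (int32_t)((y >> drop) << drop);
}

int main(void)
{
    const int32_t in   = 0x123457;   /* a 24-bit sample */
    const int16_t gain = 0x4000;     /* 0.5 in Q15      */

    printf("24-bit working width: %d\n", scale_with_width(in, gain, 24));
    printf("16-bit working width: %d (low-order detail rounded away)\n",
           scale_with_width(in, gain, 16));
    return 0;
}

Every stage of a complex codec repeats this kind of rounding, so the errors accumulate; 24 bits of working precision keeps them below audibility for 16-bit material without resorting to slow double-precision arithmetic.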

Audio-dedicated RISC processor

Based on the above considerations, we propose a fourth option: an audio-dedicated processor. By extending a general-purpose processor with audio-specific features, the design keeps the efficiency of the C compiler while executing audio codecs efficiently. Figure 2 shows a system that encodes and decodes with an audio-dedicated processor.

Figure 2: System for encoding and decoding with an audio dedicated processor.

This design has many advantages. Audio-specific extensions let the processor run audio algorithms far more efficiently, delivering the required performance at a lower clock frequency and therefore with significantly lower system power consumption. Like the DSP-based solution, it handles multi-format audio codecs easily and can support new codec algorithms through software upgrades. The main obstacle is that audio-dedicated processors are still a new concept to many designers, so let us introduce one.

The Tensilica Hi-Fi2 audio processing engine is configured and extended from the 32-bit Tensilica Xtensa RISC processor and executes audio processing tasks very efficiently. As the name suggests, the Hi-Fi2 engine is Tensilica's second-generation audio processor. Its most important extension is a pair of 24-bit hardware multipliers that greatly accelerate audio computation.

The multipliers alone, however, are not what reduces the cycle count of an audio algorithm. The reduction comes from the Hi-Fi2 engine's ability to execute one or two instructions per cycle and from its wide registers, 48 or 56 bits each, which can hold a pair of 24-bit sample values. These register files allow stereo audio data to be processed efficiently. In total, Tensilica added some 300 audio-specific instructions to the Xtensa RISC processor, creating a flexible and efficient audio algorithm processing engine.
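To picture what a paired register buys, the sketch below packs a left/right pair of 24-bit samples into one wide word, the way such a register holds both channels. An engine like Hi-Fi2 can then operate on both channels with a single instruction, whereas the plain C here has to unpack and handle each half separately. This is generic illustrative C, not Tensilica intrinsics.

#include <stdint.h>
#include <stdio.h>

/* Sign-extend a 24-bit field held in the low bits of an unsigned word. */
static int32_t sext24(uint32_t v)
{
    return (int32_t)(v ^ 0x800000u) - 0x800000;
}

/* Pack a stereo pair of 24-bit samples into one 48-bit-wide value. */
static uint64_t pack_stereo(int32_t left, int32_t right)
{
    return ((uint64_t)((uint32_t)left  & 0xFFFFFFu) << 24) |
            (uint64_t)((uint32_t)right & 0xFFFFFFu);
}

int main(void)
{
    uint64_t reg = pack_stereo(0x123456, -0x000200);

    /* A SIMD-style audio engine would scale both halves at once;
       plain C unpacks each 24-bit field and processes it on its own. */
    int32_t left  = sext24((uint32_t)(reg >> 24) & 0xFFFFFFu);
    int32_t right = sext24((uint32_t) reg        & 0xFFFFFFu);

    printf("left = %d, right = %d\n", left / 2, right / 2);   /* both channels scaled by 0.5 */
    return 0;
}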

An excellent audio SoC solution is not just high-performance hardware; it also needs codec software, and schedule pressure may rule out writing your own. Some audio codec source can be found on the Internet, but it is generally unoptimized and inefficient, and licensed codecs such as the Dolby formats are not freely available at all. The popular audio codec algorithms have already been ported to Tensilica's Hi-Fi2 audio processing engine, and the list of supported formats keeps growing, as shown in Figure 3. All of these codecs are written in C: the processor's base RISC instructions and audio extension instructions let programmers keep working in C, which improves software maintainability without sacrificing performance.

Figure 3: The current popular audio codec algorithms have been ported to Tensilica's Hi-Fi2 audio processing engine.

The Hi-Fi2 audio processing engine is a set of instruction extensions for the Xtensa LX2 processor, built with Tensilica's configurable-processor technology. Tensilica used these extensions to configure the Diamond 330Hi-Fi audio processor, so every codec built on the Hi-Fi2 engine also runs on the 330Hi-Fi. Tensilica's Hi-Fi audio processing engines have been used in a wide range of products and validated on different process nodes, with shipments reaching tens of millions of units. The main application today is still mobile phones, but the technology is expanding into video products, consumer broadcast, ultraportable PCs, and other fields.
