Application of Embedded Wireless Video Surveillance System Based on ARM9

Introduction

With the advent of high-performance, low-power embedded CPUs and highly reliable network operating systems, computation-intensive applications such as video telephony, video conferencing, and remote video surveillance have become feasible on embedded devices. Traditional video surveillance systems based on coaxial cable are structurally complex, unstable, unreliable, and expensive, so remote web-based video monitoring systems built around embedded network video servers have emerged. In the embedded wireless video surveillance system described here, a high-performance ARM9 chip serves as the microprocessor and drives video4linux (V4L) to acquire video data from a USB camera. The captured video data is JPEG-compressed and then transmitted over a 2.4 GHz wireless link under the control of the ARM9 chip. The receiving module passes the video data to the video transmission module, which submits it to the video application server over a serial port or the network. Finally, the video application server reassembles the received compressed data frames into a video image, completing the wireless video surveillance chain.

1 System Structure

The entire system consists of five modules: a video acquisition terminal, a 2.4 GHz wireless transmission module, a 2.4 GHz wireless reception module, a video transmission module, and a video application server. The video acquisition terminal comprises a central control and data processing center built around the S3C2410X and a USB camera data acquisition unit. The central control and data processing center is mainly responsible for controlling the capture terminal and compressing the video images; the data to be transmitted is encoded and sent to the nRF2401 wireless transmitter module through the SIO.

The video transmission module also centers on an S3C2410X-based control and data processing unit and provides a MAC interface and a UART interface for delivering video data to the video application server. Its processing center mainly performs the following tasks: the nRF2401 submits the received video data to the SIO module, the S3C2410X first decodes the SIO data, and the video data is then transmitted to the video application server through the UART interface or the MAC interface.

The video application server receives video data from a serial port or a network interface, and recomposes and composites the video data into a video image.

1.1 Hardware Structure of the Video Acquisition Terminal

This design makes full use of the S3C2410X's integrated on-chip resources; only SDRAM, NAND flash, a 4×4 array keyboard, USB host, Ethernet interface, RS232 interface, JTAG, and power modules need to be added externally. The video acquisition terminal is one of the core modules of the entire system, and its main tasks are video capture and image compression.

2 Video Acquisition Module Design and Implementation

The video capture module is the core of the entire video acquisition terminal. It performs video capture by invoking V4L (video4linux) and the imaging device driver through the embedded Linux operating system. V4L is the foundation of image handling in Linux, both on the desktop and in embedded systems: it is a set of APIs in the Linux kernel that supports imaging devices. With an appropriate video capture card and driver, V4L can provide image acquisition, AM/FM radio reception, image codec support, channel switching, and other functions. V4L is currently used mainly in video streaming and embedded video systems, across a wide range of applications such as distance learning, telemedicine, video conferencing, video surveillance, and video telephony. V4L is a two-layer architecture, with the V4L driver on top and the imaging device driver underneath.

In the Linux operating system, external devices are managed as device files, so operations on external devices become operations on device files. The video device file resides in the /dev/ directory, typically as /dev/video0. After the camera is connected to the video acquisition terminal through the USB interface, video data can be acquired by calling the V4L APIs in the program to read the device file /dev/video0. The main steps are as follows:

1) Open the device file: int v4l_open(char *dev, v4l_device *vd){} opens the image source device file;

2) Initialize the picture: int v4l_get_picture(v4l_device *vd){} obtains the input image information;

3) Initialize the channels: int v4l_get_channels(v4l_device *vd){} obtains the information of each channel;

4) Set the channel norm: int v4l_set_norm(v4l_device *vd, int norm){} sets the norm for all channels;

5) Map the device address: v4l_mmap_init(v4l_device *vd){} returns the address for storing image data;

6) Initialize the mmap buffer: int v4l_grab_init(v4l_device *vd, int width, int height){};

7) Synchronize video capture: int v4l_grab_sync(v4l_device *vd){};

8) Capture video: int device_grab_frame(){}.

Through the above operations, camera video data can be captured into memory. The data collected in memory can either be saved as a file, or compressed, packed into data packets, and transmitted to the data processing center over the network. This design uses the latter approach: the captured video data is first JPEG-compressed, then packed into data packets and transmitted to the video application server for processing.

3 Video Compression Module Design

Because the video data collected by the acquisition module carries a large amount of information, transmitting it directly over the network would burden the transmission system and greatly reduce transmission efficiency. For this reason, this design uses the JPEG (Joint Photographic Experts Group) compression algorithm to compress the video data. JPEG is a compression standard for color, monochrome, multi-grayscale, continuous-tone still digital images, and it is the international standard for still digital image compression. It is suitable not only for still image compression but also for intra-frame compression of TV image sequences. Since JPEG is a full-color image standard, its main processing stages include color model conversion, the discrete cosine transform (DCT), rearrangement of the DCT results, quantization, and encoding.

In this design, the most basic JPEG algorithm is used. The main steps are: first, data redundancy is exposed by the discrete cosine transform (DCT); second, the DCT coefficients are quantized using a quantization table; finally, Huffman variable-length coding encodes the quantized DCT coefficients to minimize their entropy. Experiments show a good compression effect, with an image compression ratio of about 70%.

4 nRF2401 Wireless Transmitter and Receiver Module Design

This design uses the nRF2401 2.4 GHz wireless transceiver chip to complete the wireless transmission of the video data. The nRF2401 is a single-chip RF transceiver operating in the 2.4 GHz to 2.5 GHz ISM band, with a built-in frequency synthesizer, power amplifier, crystal oscillator, and modem. Its output power and communication channel parameters are configurable in software. The built-in DuoCeiver receiver enables the nRF2401 to receive data on two different channels simultaneously with the same antenna, which provides favorable conditions for the transmission of video data.

The nRF2401 mainly performs the following operations when sending and receiving data:

1) Initialize the transmitter and receiver: configure the I/O ports, enable the transmitter/receiver, start the counters, and so on;

2) Configure the transmitter/receiver: first enter configuration mode, then configure the transmitter/receiver, and finally enable the transceiver function;

3) Assemble transmit packets and process received packets;

4) Send/receive data: complete the packet transmission/reception operation;

5) Read the A/D conversion result: after the A/D conversion completes, read the result data and start a new conversion.

5 Video Transmission Module Design

After the video transmission module receives the video data submitted by the wireless receiving module, it can transmit it to the video application server through a serial port or a network interface; this design adopts the network interface. At present, most video transmission over the Internet uses UDP, which provides connectionless, unreliable delivery: because the receiver performs only a simple integrity check on each UDP packet and discards erroneous ones, transmission is fast. However, to improve the accuracy of data transmission and avoid the tedious additional acknowledgment logic that UDP would require, this design uses the connection-oriented, reliable transport protocol TCP. The communication process between the video transmission module and the video application server is shown in Figure 3.

6 Video Application Server: Video Display Module Design and Implementation

The video application server uses Borland C++ Builder 6.0 to compose the monitoring video (if the video application server runs Linux, Kylix can provide the same functionality). BCB's Image class supports pixel-accurate image processing and can display BMPs, drawings, custom graphics, and so on. Therefore, after video data is received from the network through the socket API, each received JPEG image is first converted to a BMP and then handed to an Image object, which processes the video data, generates the video image, and displays it.

7 Conclusion

This paper presents an embedded wireless video surveillance system based on the ARM9 S3C2410X. The embedded Linux operating system is used for video capture, compression, and packetization; wireless data transmission is performed by the nRF2401 transmitter and receiver modules; and the video data is finally delivered from the video transmission module to the video application server over a TCP/IP network, completing the wireless video surveillance system. Since the system's core work is performed by a high-performance embedded processor, it has the advantages of simple structure, stable performance, and low cost, and it has broad application prospects in fields such as wireless video monitoring of oilfields and oil and gas wells, and smart homes.
