Exact Sum-Rate Analysis of MIMO Broadcast Channels with Random Unitary Beamforming
Hong-Chuan Yang, Peng Lu, Hyung-Ki Sung, Young-Chai Ko. IEEE Transactions on Communications, Vol. 59, No. 11, 2011.
<P>Random unitary beamforming (RUB) is an attractive transmission scheme for MIMO broadcast channels because of its ability to achieve high sum-rate capacity with limited feedback. In this letter, we derive exact analytical expressions for the ergodic sum rate of MIMO broadcast channels with RUB. The analysis is facilitated by a complete statistical characterization of the ordered beam SINRs for a user, which can find application in many related problems.</P>
Smart Self-Checkout Carts Based on Deep Learning for Shopping Activity Recognition
Hong-Chuan Chi, Muhammad Atif Sarwar, Yousef-Awwad Daraghmi, Kuan-Wen Liu, Tsi-Ui ?k, Yih-Lang Li. Korea Institute of Communication Sciences (KICS), APNOMS 2020, Vol. 2020, No. 09.
Fast and reliable communication plays a major role in the success of smart shopping applications. In a "Just Walk Out" shopping scenario, a video camera is installed on the cart to monitor shopping activities and transmit images to the cloud for processing so that items in the cart can be tracked and checked out. This paper proposes a prototype of a smart shopping cart based on image-based action recognition. Firstly, deep learning networks such as Faster R-CNN, YOLOv2, and YOLOv2-Tiny are utilized to analyze the content of each video frame. Frames are classified into three classes: No Hand, Empty Hand, and Holding Items. The classification accuracy of Faster R-CNN, YOLOv2, and YOLOv2-Tiny is between 90.3% and 93.0%, and the processing speed of the three networks reaches up to 5 fps, 39 fps, and 50 fps, respectively. Secondly, based on the sequence of frame classes, the timeline is divided into No Hand intervals, Empty Hand intervals, and Holding Items intervals. The accuracy of action recognition is 96%, and the time error is 0.119 s on average. Finally, we categorize the events into four cases: No Change, Placing, Removing, and Swapping. Even accounting for the correctness of item recognition, the accuracy of shopping event detection is 97.9%, which is higher than the minimal requirement to deploy such a system in a smart shopping environment. A demo of the system and a link to download the dataset used in the paper are available under "Smart Shopping Cart Prototype" at this URL: https://hackmd.io/abEiC83rQoqxz7zpL4Kh2w.
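The abstract's second step, dividing the timeline into intervals from the per-frame class sequence, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `segment_timeline` function, the run-merging logic, and the 39 fps frame rate (borrowed from the reported YOLOv2 throughput) are all assumptions for the sake of the example.

```python
# Hypothetical sketch of timeline segmentation: consecutive frames with the
# same class label (No Hand / Empty Hand / Holding Items) are merged into
# contiguous intervals. The paper's exact method may differ.
from itertools import groupby

FPS = 39  # assumed frame rate, taken from the reported YOLOv2 speed


def segment_timeline(frame_labels, fps=FPS):
    """Group consecutive identical frame classes into (label, start_s, end_s) intervals."""
    intervals = []
    idx = 0
    for label, run in groupby(frame_labels):
        n = len(list(run))  # length of this run of identical labels
        intervals.append((label, idx / fps, (idx + n) / fps))
        idx += n
    return intervals


# Example: a short synthetic label sequence from the frame classifier.
labels = (["No Hand"] * 10 + ["Empty Hand"] * 5 +
          ["Holding Items"] * 8 + ["No Hand"] * 4)
for label, start, end in segment_timeline(labels):
    print(f"{label}: {start:.2f}s - {end:.2f}s")
```

Event categorization (No Change, Placing, Removing, Swapping) would then be decided from transitions between these intervals combined with item recognition, as the abstract describes.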