[Info] Mobile Deep Learning Resources

Board: DataScience | Author: aa155495 (冷月狂刃) | Time: 2018/03/04 18:46 (edited) | Score: 8 (8 upvotes, 0 downvotes, 2 neutral)
10 comments, 9 participants, thread 1/1
I've recently been working on models suited to running on embedded devices or phones, so here's a roundup of related research resources.

===================================================
Survey paper

A Survey of Model Compression and Acceleration for Deep Neural Networks [arXiv '17]
https://arxiv.org/abs/1710.09282

--------------------------------------------------------
Lightweight models

1. MobileNetV2: Inverted Residuals and Linear Bottlenecks: Mobile Networks for
   Classification, Detection and Segmentation [arXiv '18, Google]
   https://arxiv.org/pdf/1801.04381.pdf
2. NASNet: Learning Transferable Architectures for Scalable Image Recognition
   [arXiv '17, Google] (Note: the Google AutoML paper)
   https://arxiv.org/pdf/1707.07012.pdf
3. DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices
   [AAAI '18, Samsung]
   https://arxiv.org/abs/1708.04728
4. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile
   Devices [arXiv '17, Megvii]
   https://arxiv.org/abs/1707.01083
5. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision
   Applications [arXiv '17, Google]
   https://arxiv.org/abs/1704.04861
6. CondenseNet: An Efficient DenseNet using Learned Group Convolutions [arXiv '17]
   https://arxiv.org/abs/1711.09224

------------------------------------------------------------
System

1. DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision
   Applications [MobiSys '17]
   https://www.sigmobile.org/mobisys/2017/accepted.php
2. DeepEye: Resource Efficient Local Execution of Multiple Deep Vision Models
   using Wearable Commodity Hardware [MobiSys '17]
   http://fahim-kawsar.net/papers/Mathur.MobiSys2017-Camera.pdf
3. MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU [EMDL '17]
   https://arxiv.org/abs/1706.00878
4. DeepSense: A GPU-based deep convolutional neural network framework on
   commodity mobile devices [WearSys '16]
   http://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=4278&context=sis_research
5. DeepX: A Software Accelerator for Low-Power Deep Learning Inference on
   Mobile Devices [IPSN '16]
   http://niclane.org/pubs/deepx_ipsn.pdf
6. EIE: Efficient Inference Engine on Compressed Deep Neural Network [ISCA '16]
   https://arxiv.org/abs/1602.01528
7. MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing
   Under Resource Constraints [MobiSys '16]
   http://haneul.github.io/papers/mcdnn.pdf
8. DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded
   Devices with the DeepX Toolkit [MobiCASE '16]
9. Sparsification and Separation of Deep Learning Layers for Constrained
   Resource Inference on Wearables [SenSys '16]
10. An Early Resource Characterization of Deep Learning on Wearables,
    Smartphones and Internet-of-Things Devices [IoT-App '15]
11. CNNdroid: GPU-Accelerated Execution of Trained Deep Convolutional Neural
    Networks on Android [MM '16]
12. fpgaConvNet: A Toolflow for Mapping Diverse Convolutional Neural Networks
    on Embedded FPGAs [NIPS '17]

--------------------------------------------------------------
Quantization (model compression)

1. The ZipML Framework for Training Models with End-to-End Low Precision: The
   Cans, the Cannots, and a Little Bit of Deep Learning [ICML '17]
2. Compressing Deep Convolutional Networks using Vector Quantization [arXiv '14]
3. Quantized Convolutional Neural Networks for Mobile Devices [CVPR '16]
4. Fixed-Point Performance Analysis of Recurrent Neural Networks [ICASSP '16]
5. Quantized Neural Networks: Training Neural Networks with Low Precision
   Weights and Activations [arXiv '16]
6. Loss-aware Binarization of Deep Networks [ICLR '17]
7. Towards the Limit of Network Quantization [ICLR '17]
8. Deep Learning with Low Precision by Half-wave Gaussian Quantization [CVPR '17]
9. ShiftCNN: Generalized Low-Precision Architecture for Inference of
   Convolutional Neural Networks [arXiv '17]
10. Training and Inference with Integers in Deep Neural Networks [ICLR '18]

------------------------------------------------------------
Pruning (model compression)

1. Learning both Weights and Connections for Efficient Neural Networks [NIPS '15]
2. Pruning Filters for Efficient ConvNets [ICLR '17]
3. Pruning Convolutional Neural Networks for Resource Efficient Inference [ICLR '17]
4. Soft Weight-Sharing for Neural Network Compression [ICLR '17]
5. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained
   Quantization and Huffman Coding [ICLR '16]
6. Dynamic Network Surgery for Efficient DNNs [NIPS '16]
7. Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware
   Pruning [CVPR '17]
8. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression [ICCV '17]
9. To prune, or not to prune: exploring the efficacy of pruning for model
   compression [ICLR '18]

---------------------------------------------------------------
Approximation

1. Efficient and Accurate Approximations of Nonlinear Convolutional Networks [CVPR '15]
2. Accelerating Very Deep Convolutional Networks for Classification and
   Detection (extended version of the above)
3. Convolutional neural networks with low-rank regularization [arXiv '15]
4. Exploiting Linear Structure Within Convolutional Networks for Efficient
   Evaluation [NIPS '14]
5. Compression of Deep Convolutional Neural Networks for Fast and Low Power
   Mobile Applications [ICLR '16]
6. High performance ultra-low-precision convolutions on mobile devices [NIPS '17]

Posting the paper roundup first; I'll sort out the rest when I have time.

--
※ Posted from: PTT (ptt.cc), from: 114.25.14.8
※ Article URL: https://www.ptt.cc/bbs/deeplearning/M.1520160416.A.80C.html
※ Edited: aa155495 (114.25.14.8), 03/04/2018 18:50:42
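The MobileNet-family papers in the lightweight-models section all lean on depthwise separable convolutions, which split a standard k×k convolution into a per-channel depthwise convolution followed by a 1×1 pointwise convolution. A minimal sketch of where the savings come from, counting multiplies (the layer shape below is an illustrative example, not taken from the papers):

```python
def standard_conv_mults(k, c_in, c_out, h, w):
    # k x k standard convolution: every output channel mixes all input channels
    return k * k * c_in * c_out * h * w

def depthwise_separable_mults(k, c_in, c_out, h, w):
    depthwise = k * k * c_in * h * w   # one k x k filter per input channel
    pointwise = c_in * c_out * h * w   # 1x1 conv mixes channels
    return depthwise + pointwise

# Example: 3x3 conv, 32 -> 64 channels, on a 112x112 feature map
std = standard_conv_mults(3, 32, 64, 112, 112)
sep = depthwise_separable_mults(3, 32, 64, 112, 112)
print(std / sep)  # ~7.89x fewer multiplies
```

For a 3×3 kernel this matches the 1/N + 1/k² reduction factor reported in the MobileNet paper (close to 9× for large output channel counts N).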
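The quantization papers above share a common baseline: mapping float weights to low-bit integers with a scale factor. A minimal sketch of symmetric 8-bit linear quantization, for orientation only; the papers differ mainly in how the scale/clipping is chosen and whether quantization happens during training:

```python
def quantize_int8(weights):
    # Symmetric linear quantization: scale maps the largest magnitude to 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the integer codes.
    return [v * scale for v in q]
```

Storing `q` as int8 plus one float scale per tensor gives the ~4x size reduction over float32 that these papers take as a starting point.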
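Likewise, most of the pruning papers build on magnitude pruning as in "Learning both Weights and Connections": zero out the smallest-magnitude weights, then retrain. A minimal sketch over a flat weight list (real implementations work per-layer on tensors and interleave pruning with fine-tuning):

```python
def magnitude_prune(weights, sparsity):
    # Zero out the `sparsity` fraction of weights with the smallest magnitude.
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight; ties at the
    # threshold are also pruned.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]
```

Example: `magnitude_prune([0.5, -0.1, 0.3, -0.02], 0.5)` keeps the two largest-magnitude weights and zeros the rest.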

03/04 21:50, 3F: Upvoted. My master's thesis was on CNN acceleration, and I've gone through quite a few of these papers; happy to trade notes XD

03/05 11:59, 5F: Useful, upvoted.

03/08 15:43, 7F: Useful.

03/17 07:48, 10F: Just found another survey: https://arxiv.org/abs/1803.04311
Article ID (AID): #1QcywWWC (DataScience)