Deep Learning Reading Group: SqueezeNet

This paper introduces a small CNN architecture called “SqueezeNet” that achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.

By Abhinav Ganesh, Lab41.

The next paper from our reading group is by Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. It introduces a small CNN architecture called “SqueezeNet” that achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. As you may have noticed from one of our recent posts, we’re really interested in learning more about the compression of neural network architectures, and this paper really stood out.

It’s no secret that much of deep learning is tied up in the hell that is parameter tuning. This paper makes a case for increased study of convolutional neural network design in order to drastically reduce the number of parameters you have to deal with. Unlike our previous post on “deep compression”, this paper proposes making a network sma…
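To give a feel for where the savings come from: in the full paper, SqueezeNet's building block is the "Fire module", which first squeezes the input with cheap 1x1 convolutions and then expands with a mix of 1x1 and 3x3 convolutions. The back-of-the-envelope sketch below compares the parameter count of one such module against a plain 3x3 convolution of the same width; the specific channel sizes are illustrative choices for this sketch, not figures quoted in this excerpt.

```python
# Rough parameter-count sketch of the Fire-module idea from the SqueezeNet
# paper. Channel sizes below are illustrative, not taken from the article.

def conv_params(c_in, c_out, k):
    """Weights plus biases for a k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def fire_params(c_in, s1x1, e1x1, e3x3):
    """A Fire module: a 1x1 "squeeze" layer feeding parallel 1x1 and 3x3 "expand" layers."""
    squeeze = conv_params(c_in, s1x1, 1)
    expand = conv_params(s1x1, e1x1, 1) + conv_params(s1x1, e3x3, 3)
    return squeeze + expand

# A plain 3x3 convolution mapping 128 -> 128 channels:
plain = conv_params(128, 128, 3)     # 147,584 parameters
# A Fire module with the same 128-channel input and output width:
fire = fire_params(128, 16, 64, 64)  # 12,432 parameters
print(plain, fire, round(plain / fire, 1))  # roughly a 12x reduction
```

Because the 3x3 expand filters see only the narrow squeezed representation, most of the 9x cost of a 3x3 kernel is paid on far fewer input channels, which is where the bulk of the reduction comes from.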

kdnuggets.com