
5 reasons to choose PyTorch for deep learning

PyTorch is definitely the flavor of the moment, especially with the recent 1.3 and 1.4 releases, which brought a host of performance improvements and more developer-friendly support for mobile platforms. But why should you choose PyTorch over other frameworks like MXNet, Chainer, or TensorFlow? Let’s look at five reasons that add up to a strong case for PyTorch.

Before we get started, a plea to TensorFlow users who are already typing furious tweets and emails even before I begin: Yes, there are also plenty of reasons to choose TensorFlow over PyTorch, especially if you’re targeting mobile or web platforms. This isn’t intended to be a list of reasons that “TensorFlow sucks” and “PyTorch is brilliant,” but a set of reasons that together make PyTorch the framework I turn to first. TensorFlow is great in its own ways, I admit, so please hold off on the flames.

PyTorch is Python

One of the primary reasons people choose PyTorch is that the code they look at is fairly simple to understand; the framework is designed and assembled to work with Python rather than push up against it. Your models and layers are simply Python classes, and so is everything else: optimizers, data loaders, loss functions, transformations, and so on.
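
As a minimal sketch of what that means in practice (the class name and layer sizes here are arbitrary, chosen just for illustration):

import torch
from torch import nn

# A model is just a Python class subclassing nn.Module.
class TinyClassifier(nn.Module):
    def __init__(self, in_features=784, hidden=128, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.body(x)

model = TinyClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # optimizers: also plain classes
loss_fn = nn.CrossEntropyLoss()                           # loss functions: also plain classes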

Because PyTorch runs in eager mode rather than building a static execution graph like traditional TensorFlow (yes, TensorFlow 2.0 does offer eager execution, but it’s a touch clunky at times), it’s very easy to reason about your custom PyTorch classes, and you can debug with TensorBoard or with standard Python techniques, all the way from print() statements to generating flame graphs from stack trace samples. This all adds up to a very friendly welcome for those coming into deep learning from other data science frameworks such as Pandas or Scikit-learn.
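
To make that concrete, here is a tiny, hypothetical module showing that print() fires mid-forward with real tensor values, with no session or graph compilation step in the way:

import torch
from torch import nn

class Debuggable(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        # Eager mode: forward() is ordinary Python, so print()
        # (or pdb.set_trace()) sees concrete tensor values here.
        print("input shape:", tuple(x.shape), "mean:", x.mean().item())
        return self.linear(x)

out = Debuggable()(torch.randn(3, 4))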

PyTorch also has the plus of a stable API that has seen only one major change from the early releases to version 1.3 (the merging of Variables into Tensors). While that record is helped along by the framework’s young age, it does mean that the vast majority of PyTorch code you’ll see in the wild is recognizable and understandable no matter which version it was written for.

PyTorch comes ready to use 

While the “batteries included” philosophy is definitely not exclusive to PyTorch, it’s remarkably easy to get up and running. Using PyTorch Hub, you can get a pre-trained ResNet-50 model with just…
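
As a rough sketch, a Hub call for a pre-trained ResNet-50 looks something like this, using the standard torchvision entry point:

import torch

# Fetch a pre-trained ResNet-50 through PyTorch Hub; 'pytorch/vision'
# and 'resnet50' are torchvision's published repo and entry point.
model = torch.hub.load('pytorch/vision', 'resnet50', pretrained=True)
model.eval()  # switch to inference mode before making predictions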


