Today, PyTorch officially introduced GPU support for Apple's ARM M1 chips. This is an exciting day for Mac users out there, so I spent a few minutes tonight trying it out in practice. In this short blog post, I will summarize my experience and thoughts with the M1 chip for deep learning tasks.

Back at the beginning of 2021, I happily sold my loud and chunky 15-inch Intel MacBook Pro to buy a much cheaper M1 MacBook Air. It's been a fantastic machine so far: it is silent, lightweight, super-fast, and has terrific battery life. When I was writing my new book, I noticed that it didn't only feel fast in everyday use, but it also sped up several computations. For example, preprocessing the IMDb movie dataset took only 21 seconds instead of 1 minute and 51 seconds on my 2019 Intel MacBook Pro. Similarly, all scikit-learn-related workflows were much faster on the M1 MacBook Air! I could even run small neural networks in PyTorch in a reasonable time (various multilayer perceptrons and convolutional neural networks for teaching purposes).

Even though the M1 MacBook is an amazing machine, it is really not feasible to train modern deep neural networks on it. I recall making a LeNet-5 runtime comparison between the M1 and a GeForce 1080Ti and finding similar speeds. It really can't handle anything beyond LeNets. However, I should note that I compiled PyTorch myself back then, as an early adopter, and I could only utilize the M1 CPU in PyTorch.
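With today's release, the GPU can finally be targeted directly. As a first smoke test, here is a minimal sketch of how one might check for the new backend and move a small model onto the M1 GPU; it assumes a PyTorch build that ships the MPS (Metal Performance Shaders) backend, and the `Linear` layer and tensor shapes are purely illustrative:

```python
import torch

# Prefer the Metal (MPS) backend if this PyTorch build supports it
# and the machine exposes an Apple-silicon GPU; otherwise fall back
# to the CPU, which is all earlier M1 builds could use anyway.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Using device: {device}")

# A tiny illustrative model and batch, moved onto the selected device.
model = torch.nn.Linear(100, 10).to(device)
x = torch.randn(32, 100, device=device)
out = model(x)
print(out.shape)  # torch.Size([32, 10])
```

On builds without MPS support, the snippet simply falls back to the CPU, so it is safe to run on any machine.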