
Question Running accelerated DL on M1/M2 chipset Macs

6 Posts
3 Users
2 Likes
62 Views
peterleong
Posts: 8
Member
Topic starter
(@peterleong)
Eminent Member
Joined: 2 months ago

Does anyone have advice on how to set up PyTorch or TF2 on M1/M2 chipset Macs with acceleration?

5 Replies
Syak
Moderator
(@syakyr)
Joined: 2 years ago

New Member
Posts: 1

I would recommend the setup provided by @laurenceliew for best performance, but if you really need to run TF2 and PyTorch on M1/M2 chips, I recommend reading up on the following sites:

TF2 on Metal:
removed link

Pytorch (Nightly) on Metal:
removed link
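Once you have installed a build with the Metal backend, a quick sanity check like the one below confirms PyTorch can actually use the GPU. This is a minimal sketch, not from the linked guide; it assumes a recent PyTorch that ships the `torch.backends.mps` module, and it falls back to CPU when PyTorch is not installed at all, so it is safe to run at any stage of the setup:

```python
import importlib.util

def pick_torch_device() -> str:
    """Return 'mps' when Apple's Metal backend is usable, else 'cuda' or 'cpu'."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # PyTorch not installed yet
    import torch
    mps = getattr(torch.backends, "mps", None)  # absent on older PyTorch builds
    if mps is not None and mps.is_available():
        return "mps"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

device = pick_torch_device()
print(f"Using device: {device}")
# Once torch is installed, tensors can then be placed with e.g. torch.ones(3, device=device)
```

On an M1/M2 Mac with a working nightly build this should print "mps"; anything else means the Metal backend is not being picked up.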

I highly recommend using miniconda to manage dependencies instead of pip, since building packages such as numpy and pandas from source can be a hassle. This is taken from the TF2 on Metal site:

Download and install the Conda env (removed link):

chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh   # make the installer executable
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh         # run the Miniforge installer
source ~/miniforge3/bin/activate                  # activate the base conda environment
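After installing TensorFlow per the linked guide, you can check whether TF2 actually sees the Metal GPU. This is a minimal sketch of my own, not from the guide; it returns an empty list when TensorFlow is not installed yet, so it works at any point in the setup:

```python
import importlib.util

def list_tf_gpus():
    """Return the GPU devices TensorFlow can see (empty list if TF is absent)."""
    if importlib.util.find_spec("tensorflow") is None:
        return []  # TensorFlow not installed in this environment
    import tensorflow as tf
    return tf.config.list_physical_devices("GPU")

gpus = list_tf_gpus()
print(f"Visible GPUs: {gpus}")
# On an M1/M2 Mac with the Metal plugin installed, this should list one GPU device.
```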

Hope this helps.

Laurence Liew
Posts: 83
Admin
(@laurenceliew)
Estimable Member
Joined: 6 months ago

Save the headache: get an Intel x86 PC + NVidia GPU - a standard AI/ML/DL setup - and set up the PC at home. With today's broadband speed/5G, just remote back into your PC for your AI/ML workloads.

This is my current setup: MacBook Air -> WireGuard VPN back home -> Intel PC + NVidia GPU over RDP. Very usable.

I get the best of both worlds: long battery life on the laptop, and a standard Windows/Linux AI desktop powered by a beefy GPU.

peterleong
Member
(@peterleong)
Joined: 2 months ago

Eminent Member
Posts: 8

@laurenceliew Beowulf thanx. Actually I am asking on behalf of many Mac enthusiasts who were sold on how cool (in Celsius) the new Mx chips are.

peterleong
Member
(@peterleong)
Joined: 2 months ago

Eminent Member
Posts: 8

And they are exclusively Mac users.

Laurence Liew
Admin
(@laurenceliew)
Joined: 6 months ago

Estimable Member
Posts: 83

@peterleong haha... same situation for the AMD enthusiasts... until they find out their peers can run the same AI/ML code 10-100X faster on similarly spec'd Intel systems... all because of Intel MKL.

You CAN run on AMD and M1/M2 systems - but the software ecosystem for AI/ML is a lot weaker, and you have to build (re-compile) a lot of the core libraries yourself. If you enjoy doing such stuff - please go ahead.

If you want to just fire and forget, and focus on your AI/ML code, the easiest path is to use a well-supported ecosystem of hardware and software tooling for AI/ML workloads today.

Use the right tool for the job.