How to Install Python for AI: Windows & Mac Beginner Guide (2026)
New to AI? This step-by-step guide shows you how to install Python on Windows or Mac, set up a virtual environment, and install the essential AI libraries to start building in under 15 minutes.

If you've been trying to follow AI tutorials — running local models, using the Claude API, building with PyTorch — you've probably hit the same first wall: Python needs to be set up correctly before any of it works. Most guides skip over this part.
This guide doesn't. By the end you'll have Python installed, a virtual environment running, and the core AI libraries ready to go on Windows or Mac. It takes about 15 minutes.
Why You Need Python for AI
Almost every AI library — PyTorch, HuggingFace Transformers, the Anthropic SDK, the OpenAI SDK — is written in Python. Running local AI models, building chatbots, or experimenting with APIs all require a working Python environment.
The good news: Python is free, easy to install, and runs on everything.
Step 1: Download Python
Go to python.org/downloads and download the latest stable version (3.12 or higher as of 2026).
Windows:
- Download the .exe installer
- Run it and check ✅ "Add Python to PATH" before clicking Install — this is the most commonly missed step and causes "python is not recognized" errors
- Click "Install Now"
Mac:
- Download the .pkg installer and run it
- Or use Homebrew if you have it installed: brew install python
- Apple ships its own Python 2.7 on older Macs — you want the fresh python.org version
Linux:
Most Linux distros include Python 3 already. Verify with python3 --version. If not installed: sudo apt install python3 python3-pip (Debian/Ubuntu).
Step 2: Verify the Installation
Open a terminal (Command Prompt or PowerShell on Windows, Terminal on Mac) and run:
Windows:
python --version
Mac / Linux:
python3 --version
You should see something like Python 3.12.3. If you see an error on Windows, Python wasn't added to PATH — re-run the installer and check that box.
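You can also check the version from inside Python itself, which is handy when several interpreters are installed. A minimal sketch using the standard sys module (the 3.10 threshold reflects what most current AI libraries expect):

```python
import sys

# sys.version_info is a named tuple: (major, minor, micro, ...)
print("Running Python", sys.version.split()[0])

# Most current AI libraries expect Python 3.10 or newer
if sys.version_info < (3, 10):
    print("Warning: this Python may be too old for recent AI libraries")
else:
    print("Python version looks good for AI work")
```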
Need help opening a terminal? See our terminal beginner's guide — it covers everything from opening Command Prompt to navigating directories.
Step 3: Create a Virtual Environment
A virtual environment is an isolated Python installation for your project. This is important: different AI projects need different library versions, and virtual environments prevent them from conflicting with each other.
Think of it like a separate toolbox for each project.
Create the environment:
# Windows
python -m venv ai-env
# Mac / Linux
python3 -m venv ai-env
This creates a folder called ai-env in your current directory.
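As an aside, venv is itself a standard-library module, so the same step can be driven from Python code. A small sketch that builds a throwaway environment in a temporary directory (with_pip=False skips bundling pip to keep the demo fast; real projects usually want with_pip=True, which is the command-line default):

```python
import os
import tempfile
import venv

# Create a throwaway environment in a temp directory
target = os.path.join(tempfile.mkdtemp(), "ai-env")
venv.EnvBuilder(with_pip=False).create(target)

# Every venv contains a pyvenv.cfg file describing the base interpreter
print(os.path.exists(os.path.join(target, "pyvenv.cfg")))  # → True
```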
Activate it:
# Windows (Command Prompt)
ai-env\Scripts\activate
# Windows (PowerShell)
.\ai-env\Scripts\Activate.ps1
# Mac / Linux
source ai-env/bin/activate
When activated, your terminal prompt changes to show (ai-env) at the start. That means everything you install from now on goes into this environment, not your global Python.
To deactivate (when you're done working): just type deactivate.
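Besides the prompt change, Python itself can tell you whether it is running inside a virtual environment: in a venv, sys.prefix points at the environment while sys.base_prefix still points at the global install. A minimal sketch:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix differs from sys.base_prefix;
    # in a global interpreter the two are identical.
    return sys.prefix != sys.base_prefix

print("Running inside a virtual environment:", in_virtualenv())
```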

Step 4: Install pip and Upgrade It
pip is Python's package manager — it installs libraries. It comes bundled with Python, but you should upgrade it first:
# Windows
python -m pip install --upgrade pip
# Mac / Linux
python3 -m pip install --upgrade pip
Step 5: Install Core AI Libraries
With your virtual environment active, install the libraries you'll need most often:
pip install torch transformers anthropic openai numpy pandas
Here's what each one does:

torch (PyTorch) — the foundation for most deep learning work and local AI models. It's a large download (~1 GB), so be patient.
transformers — HuggingFace's library for loading pre-trained AI models. Powers tools like local LLMs, image classifiers, and speech recognition models.
anthropic — the official SDK for the Claude API. If you want to build Claude-powered apps, this is what you need.
openai — OpenAI's SDK for GPT-5.4 and related APIs.
numpy — mathematical operations on arrays. Almost every AI library depends on it.
pandas — loads and manipulates datasets. Essential for any data prep work before training or fine-tuning.
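Once the installs finish, numpy and pandas are the quickest to take for a spin. A tiny hedged example (the column names below are made up purely for illustration):

```python
import numpy as np
import pandas as pd

# numpy: fast math on whole arrays at once
scores = np.array([0.2, 0.8, 0.5])
print(scores.mean())  # → 0.5

# pandas: tabular data with labeled columns (names are illustrative)
df = pd.DataFrame({"prompt": ["hi", "bye"], "tokens": [2, 3]})
print(df["tokens"].sum())  # → 5
```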
What About CUDA? (GPU Acceleration)
If you have an NVIDIA GPU and want to run models faster, you need to install a CUDA-compatible version of PyTorch instead of the default CPU version:
# Check PyTorch's website for the exact command for your CUDA version
# Example for CUDA 12.1:
pip install torch --index-url https://download.pytorch.org/whl/cu121
Go to pytorch.org/get-started/locally to generate the exact command for your setup.
Not sure how much VRAM your GPU has or if it can run AI models? See our VRAM guide for AI before proceeding.
Apple Silicon (M1/M2/M3/M4): PyTorch supports Metal (Apple's GPU API) natively. The standard pip install torch works — no CUDA needed.
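Whichever build you install, most scripts then pick the fastest available device at runtime. The usual priority order (CUDA, then Apple's MPS backend, then CPU) boils down to a few lines; here is a framework-agnostic sketch where the boolean flags stand in for torch.cuda.is_available() and torch.backends.mps.is_available():

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    # Prefer an NVIDIA GPU, then Apple Silicon's Metal backend, then CPU.
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# Flags stand in for torch.cuda.is_available() / torch.backends.mps.is_available()
print(pick_device(False, True))   # → mps (e.g. an M-series Mac)
print(pick_device(False, False))  # → cpu
```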
Step 6: Verify Everything Works
Run a quick sanity check:
# Save this as test_ai.py and run: python test_ai.py
import torch
import numpy as np
print("PyTorch version:", torch.__version__)
print("NumPy version:", np.__version__)
print("GPU available:", torch.cuda.is_available()) # True if CUDA GPU found
print("MPS available:", torch.backends.mps.is_available()) # True if Apple Silicon
print("Setup complete!")
Windows: python test_ai.py
Mac / Linux: python3 test_ai.py
If all imports succeed and you see version numbers, you're ready.
Step 7: Install a Code Editor
You can write Python in any text editor, but VS Code makes it significantly easier:
- Download VS Code from code.visualstudio.com
- Open VS Code and install the Python extension (search "Python" in the Extensions panel)
- Open your project folder, and VS Code will automatically detect your virtual environment
The Python extension gives you syntax highlighting, inline error detection, and a debugger — all helpful when you're just starting out.
What to Build Next
With Python set up, here are natural starting points:
Use the Claude API — our Claude API tutorial walks you through your first API call with 10 lines of Python. The anthropic package you just installed is all you need.
Run local AI models — see our LM Studio guide for the easiest path to running models on your own hardware without any API costs.
Run Gemma 4 — Google's open-weight model works out of the box in Python with the transformers library. See our Gemma 4 setup guide.
For a no-code option to build Claude-powered chatbots without writing Python, CustomGPT lets you create a custom AI assistant from your own content in minutes.
Troubleshooting Common Issues
"python is not recognized" (Windows): Python wasn't added to PATH. Uninstall Python, run the installer again, and check the PATH box at the bottom of the first screen.
"pip: command not found" (Mac): Use pip3 instead of pip, or run python3 -m pip.
"Permission denied" on Mac: This usually means you're installing outside a virtual environment. Activate your venv first — installs there never need elevated permissions. Failing that, add --user to pip install commands; avoid sudo pip3 install.
Torch install is very slow: PyTorch is about 1GB. It's normal. Let it run.
"ModuleNotFoundError" after installing a package: Your virtual environment isn't activated. Run the activate command again from Step 3.
Key Takeaways
- Download Python from python.org — on Windows, check "Add to PATH" during install
- Use python3 on Mac, python on Windows (after correct install)
- Always create a virtual environment per project — prevents library conflicts
- Install: pip install torch transformers anthropic openai numpy pandas to cover 90% of AI use cases
- NVIDIA GPU: install CUDA-compatible PyTorch from pytorch.org/get-started
- Apple Silicon: standard PyTorch install uses Metal automatically
- VS Code + Python extension = best beginner setup
FAQ
Do I need the latest Python version?
Python 3.10 or higher works for most AI libraries. The latest stable 3.12 is fine. Avoid Python 3.13 for now — some AI packages haven't caught up.
Can I use Anaconda instead of a virtual environment?
Yes. Anaconda is a popular alternative that bundles Python, package management (conda), and many scientific libraries. For pure AI work, a standard Python + virtualenv setup is lighter and more flexible, but Anaconda works too. This guide focuses on the standard approach.
Do I need a GPU to do AI with Python?
No. CPU works for learning, small models, and API-based AI (Claude, GPT-5.4). A GPU becomes important when running larger local models. See our VRAM guide for what hardware matters.
What's the difference between pip and conda?
pip is Python's built-in package manager. conda is Anaconda's alternative that handles both Python packages and non-Python dependencies. If you're using standard Python (not Anaconda), use pip.
How do I know if my virtual environment is active?
Your terminal prompt shows (ai-env) (or whatever you named it) at the start of the line when it's active.
Can I use Python on Windows Subsystem for Linux (WSL)?
Yes, and some developers prefer it for AI work. Follow the Linux steps in this guide inside your WSL terminal. See our terminal beginner's guide for help setting up WSL.
How do I uninstall a package?
pip uninstall package-name — works inside an active virtual environment.

Alex the Engineer • Founder & AI Architect
Senior software engineer turned AI Agency owner. I build massive, scalable AI workflows and share the exact blueprints, financial models, and code I use to generate automated revenue in 2026.
Related Articles

How to Use the Claude API: Beginner's Guide to Building with Anthropic (2026)
The Claude API lets you build AI apps, chatbots, and automation tools using one of the most capable models available. This step-by-step guide walks beginners through setup, first API call, and choosing the right model.

OpenAI GPT-5.4-Cyber: What It Is and Who Can Use It (2026)
OpenAI just launched GPT-5.4-Cyber, a cybersecurity-focused AI model with fewer restrictions for security professionals. Here's what it does, how it compares to Claude Mythos, and how to get access.