Lately, I’ve been running deep learning experiments across different computing clusters. Every time I switch to a new server, I have to go through a series of setup steps to get my environment ready. To avoid repeating the same work from scratch each time, I decided to document my routine here. This post mainly serves as a personal checklist, but it might also be useful to others facing similar tasks. I’ll keep it updated whenever I add new steps to the routine.
> [!NOTE]
> [Updated 10/25] Recently, I found a useful AI tool built into the Mac terminal Warp. I can use it to generate commands and scripts, check the environment of a new server, analyze errors, and automate pipelines. It is particularly helpful when working on a remote server, so the routines in this post have been replaced by this wonderful AI tool. I also recommend giving it a try!
## Connect to GitHub Account

- Generate an SSH key:

  ```bash
  ssh-keygen -t ed25519 -C "your_email@example.com"
  ```

- After generating the key, display it with:

  ```bash
  cat ~/.ssh/id_ed25519.pub
  ```

- Add the public key to GitHub:
  - Navigate to Settings > SSH and GPG keys.
  - Click “New SSH key”, then paste the copied content.

- Test the SSH connection:

  ```bash
  ssh -T git@github.com
  ```

  If prompted, type `yes` and press Enter.
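On a fresh server the key sometimes isn't offered automatically when you test the connection. A minimal sketch for loading it into ssh-agent, assuming the default key path from the steps above:

```bash
# Start an ssh-agent for this shell session and load the newly generated key.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# List loaded keys to confirm it was added.
ssh-add -l
```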
## Set Up a Conda Environment

- Create a new environment:

  ```bash
  conda create -n myenv python=3.10
  ```

- Use a faster pip mirror:

  ```bash
  pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
  pip config set install.trusted-host pypi.tuna.tsinghua.edu.cn
  ```
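A quick sanity check that the new environment and the mirror work together; the package name below is just an example:

```bash
# Activate the new environment and install something small through the configured mirror.
conda activate myenv
pip install numpy

# Show the active pip configuration, including the index URL set above.
pip config list
```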
## Set Up Hugging Face Mirror

Set the endpoint for the current session and persist it in `~/.bashrc`:

```bash
export HF_ENDPOINT="https://hf-mirror.com"
echo 'export HF_ENDPOINT="https://hf-mirror.com"' >> ~/.bashrc
```

Then reload your shell:

```bash
source ~/.bashrc
```
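A quick way to check that downloads actually go through the mirror, assuming `huggingface_hub` (which provides `huggingface-cli`) is installed in the environment; the model ID is just an example:

```bash
# huggingface_hub reads HF_ENDPOINT, so this download should go through the mirror.
echo "$HF_ENDPOINT"
huggingface-cli download gpt2 config.json
```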