Tutorial on installing io.net environment on Windows


Step-by-step tutorial to install io.net on Windows and connect new devices

io.net Cloud is a state-of-the-art decentralized computing network that allows machine learning engineers to access distributed cloud clusters at a fraction of the cost of comparable centralized services.

Modern machine learning models often rely on parallel and distributed computing. Leveraging multiple cores across multiple systems is critical for optimizing performance and scaling to larger datasets and models. Training and inference are not simple tasks running on a single device; they often involve a coordinated network of GPUs working together.

[Screenshot]

Unfortunately, as demand for GPUs in the public cloud grows, obtaining distributed computing resources comes with several challenges. Some of the most prominent are:

  • Limited availability: Accessing hardware through cloud services such as AWS, GCP, or Azure often takes weeks, and popular GPU models are frequently unavailable.
  • Limited choice: Users have little choice in GPU hardware, location, security level, latency, and other options.
  • High cost: Good GPUs are very expensive, and projects can easily spend hundreds of thousands of dollars per month on training and inference.

io.net solves this problem by aggregating GPUs from underutilized sources such as independent data centers, crypto miners, and crypto projects such as Filecoin and Render. These resources are combined in the Decentralized Physical Infrastructure Network (DePIN), giving engineers access to vast amounts of computing power in a system that is accessible, customizable, cost-effective, and easy to implement.

io.net documentation: https://developers.io.net/docs/overview

Tutorial on installing io.net and connecting new devices

First, go to cloud.io.net and log in to io.net using your Google account or X account.

[Screenshot]

1. Navigate to WORKER from the drop-down menu

[Screenshot]

2. Connect new devices

Click "Connect new device"

[Screenshot]

3. Select a supplier

Select the supplier you wish to group the hardware under

[Screenshot]

4. Name your device

Add a unique name for your device, ideally in a format similar to: My-Test-Device

[Screenshot]

5. Select the operating system (OS)

Click on the "Windows" field

[Screenshot]

6. Device type

If you select GPU Worker and your device does not have a GPU, setup will fail

[Screenshot]

7. Docker and Nvidia driver installation

Follow the steps in our Docker, CUDA, and NVIDIA driver installation documentation

[Screenshot]

8. Run Docker commands

Run this command in the terminal and make sure Docker Desktop is running in the background

[Screenshot]
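
Before pasting the command from the io.net dashboard, it can help to confirm that the Docker daemon is actually reachable from your terminal (an optional sanity check, not part of the official io.net steps):

# Prints client and server details; if the "Server" section shows an error, Docker Desktop is not running
docker info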

9. Waiting for connection

While you're waiting for the new device to connect, keep clicking Refresh.

[Screenshot]


Docker installation on Windows

First, you need to enable virtualization in the BIOS.

To check whether it is enabled, open Task Manager, go to the Performance tab, and look here:

[Screenshot]
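
If you prefer a command-line check, the same information can usually be read from the firmware via a standard WMI property (a quick sketch in PowerShell; the property may read False on some systems once Hyper-V is already running, so treat Task Manager as the authoritative view):

# Returns True when virtualization is enabled in the firmware
(Get-CimInstance Win32_Processor).VirtualizationFirmwareEnabled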

If it is not enabled, follow these steps:

  1. Enable virtualization technology in your BIOS or UEFI settings. You need to enter the computer's BIOS or UEFI configuration menu during the boot process; the exact steps vary depending on the make and model of your computer.
  2. Install WSL 2 by opening PowerShell as administrator. To do this, search for "PowerShell" in the Start menu, right-click "Windows PowerShell," and then select "Run as administrator."
  3. Run the following command to enable the WSL feature in Windows 10/11:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
  4. Then, enable the Virtual Machine Platform feature in the same PowerShell window by running the following command:
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
  5. Finally, set WSL 2 as the default version (you may need to restart your computer first):
wsl --set-default-version 2
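
To confirm that WSL is installed and that version 2 is the default, you can run a quick check in the same PowerShell window (on recent Windows builds; the exact output varies, and the second command reports that no distributions are installed if you have not added one yet):

# Shows the overall WSL state and the default WSL version
wsl --status
# Lists installed Linux distributions and the WSL version each one uses
wsl -l -v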

Download Docker:

Visit the Docker website: https://www.docker.com/products/docker-desktop/ and click "Download for Windows":

[Screenshot]

Run the installation process and restart the machine after installation is complete:

[Screenshot]

Start Docker Desktop and enable WSL 2 integration in Docker's settings:

[Screenshot]

Verify the installation by opening CMD and typing the following:

docker --version

You should then see output similar to the following:

Docker version 24.0.6, build ed223bc

That's it. You have Docker installed and ready.
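
If you want an end-to-end check that Docker Desktop and the WSL 2 backend can actually run containers (optional; this pulls a tiny test image from Docker Hub), you can run:

# Downloads and runs the official hello-world image, then removes the container afterwards
docker run --rm hello-world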

NVIDIA driver installation on Windows

  1. To check if you have the correct driver, open a command line on your Windows PC (Windows key + R, type cmd) and enter the following: nvidia-smi. If you encounter the following error message:

C:\Users>nvidia-smi
'nvidia-smi' is not recognized as an internal or external command, operable program or batch file.

This means you do not have the NVIDIA drivers installed. To install them, follow these steps:

  2. Visit the NVIDIA website: https://www.nvidia.com/download/index.aspx. Enter your GPU name and click Search:

[Screenshot]

  3. Click the Download button for the NVIDIA driver for your GPU and Windows version.

[Screenshot]

  4. Once the download is complete, start the installation, select the first option, and click "Agree and Continue".

[Screenshot]

  5. After the installation is complete, restart your computer to ensure the new NVIDIA driver is fully integrated into your system.
  6. After the computer restarts, open Command Prompt (Windows key + R, type cmd) and type the following command:

nvidia-smi

  7. You should see results like this:

[Screenshot]

That's it. You have the correct NVIDIA drivers installed and ready.
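
If you want to record exactly which driver version is active (handy when reporting issues later), nvidia-smi can print just the GPU name and driver version. This uses the standard query flags; the quotes keep the comma-separated field lists intact when run from PowerShell:

# Prints the GPU name and driver version, one line per GPU
nvidia-smi "--query-gpu=name,driver_version" "--format=csv,noheader"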

Download the CUDA toolkit (optional)

  1. Visit the NVIDIA CUDA toolkit download page: https://developer.nvidia.com/cuda-downloads

[Screenshot]

  2. Select your operating system (e.g. Windows).
  3. Select your architecture (usually x86_64 for 64-bit Windows).
  4. Download the exe (local) installer. After downloading the file, run the installer:

[Screenshot]

  5. Follow the installation process.
  6. Then, verify the installation. Open Command Prompt (Windows key + R, type cmd) and type the following command:

nvcc --version

  7. You should get output similar to the following:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

That's it. You have installed and prepared the CUDA toolkit.
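
If nvcc is not recognized, first open a new Command Prompt or PowerShell window, because the installer updates PATH only for new sessions. You can also check whether the compiler is on PATH and whether it sits in the default install folder (the v11.8 folder below is an assumption based on the sample output above; adjust it to the CUDA version you installed):

# Shows where nvcc resolves from, or errors if it is not on PATH
Get-Command nvcc
# Returns True if the compiler exists in the default install location for CUDA 11.8
Test-Path "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc.exe"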

Please note that we are installing a container roughly 20 GB in size that contains all the packages required for the ML application. Everything happens inside the container; nothing is transferred from the container to the file system.

Troubleshooting guide

Troubleshooting Guide for GPU-enabled Docker Platforms

Verify the Docker setup on Linux and Windows

  • To verify that your setup is working properly, execute:
docker run --gpus all nvidia/cuda:11.0.3-base-ubuntu18.04 nvidia-smi
  • The output should be similar to that of nvidia-smi.
  • This command checks whether Docker is utilizing your GPU correctly.

Stop the platform

Windows (using PowerShell):

docker ps -a -q | ForEach { docker rm $_ }
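
Note that docker rm refuses to remove containers that are still running. If the command above complains about running containers, one option (plain Docker usage, not an io.net-specific command) is to stop everything first and then remove it:

# Stop all running containers, then remove all containers
docker ps -q | ForEach { docker stop $_ }
docker ps -a -q | ForEach { docker rm $_ }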

Erratic uptime on Windows?

Make sure the DHCP lease time on your router is set to more than 24 hours. Then open the Group Policy Editor in Windows and enable the specified settings in the following order (a complementary power-setting command is sketched after the list):

  • Navigate to Computer Configuration in Group Policy Editor.
  • In Computer Configuration, find the Administrative Templates section.
  • In the Administrative Templates section, navigate to System.
  • In the System menu, select Power Management.
  • Finally, visit the "Sleep Settings" subsection in "Power Management".
  • In the "Sleep settings" submenu, activate the "Allow network connections during connected standby (on battery)" and "Allow network connections during connected standby (plugged in)" options.
  • Please make sure to adjust these configurations accordingly to get the desired results.
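
In addition to the Group Policy settings above, a simpler mitigation that is often worth trying (our suggestion, not an official io.net requirement) is to stop Windows from going to sleep at all while the worker is running:

# Disable automatic sleep while on AC power (0 = never)
powercfg /change standby-timeout-ac 0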

Which ports need to be exposed on the firewall for the platform to function properly? (Linux and Windows)
TCP: 443, 25061, 5432, 80
UDP: 80, 443, 41641, 3478
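
To check whether traffic on one of these ports is being blocked, PowerShell's Test-NetConnection can attempt a TCP connection for you. The hostname below is only a placeholder, since this guide does not name a specific test endpoint; substitute the host you are trying to reach:

# Attempts a TCP connection to the given host on port 443 and reports success or failure
Test-NetConnection -ComputerName <host-to-test> -Port 443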

How do I verify that the program started successfully?

You should always see 2 Docker containers running when you run the following command in PowerShell (Windows) or a terminal (Linux):

docker ps
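
If the default docker ps output is hard to scan, you can narrow it down to names, images, and status using Docker's standard formatting syntax (nothing io.net-specific):

# Lists container name, image, and status in a compact table
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"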

If there are no containers or only 1 container is running after executing the docker run -d… command on the website:

Stop the platform (see command guide above) and restart the platform using the website command again
If this still doesn't work, please contact our Discord community for help: https://discord.com/invite/kqFzFK7fg2

Devices supported by io.net

GPU 
Manufacturer    GPU Model
NVIDIA          A10
NVIDIA          A100 80G PCIe NVLink
NVIDIA          A100 80GB PCIe
NVIDIA          A100-PCIE-40GB
NVIDIA          A100-SXM4-40GB
NVIDIA          A40
NVIDIA          A40 PCIe
NVIDIA          A40-8Q
NVIDIA          GeForce RTX 3050 Laptop
NVIDIA          GeForce RTX 3050 Ti Laptop
NVIDIA          GeForce RTX 3060
NVIDIA          GeForce RTX 3060 Laptop
NVIDIA          GeForce RTX 3060 Ti
NVIDIA          GeForce RTX 3070
NVIDIA          GeForce RTX 3070 Laptop
NVIDIA          GeForce RTX 3070 Ti
NVIDIA          GeForce RTX 3070 Ti Laptop
NVIDIA          GeForce RTX 3080
NVIDIA          GeForce RTX 3080 Laptop
NVIDIA          GeForce RTX 3080 Ti
NVIDIA          GeForce RTX 3080 Ti Laptop
NVIDIA          GeForce RTX 3090
NVIDIA          GeForce RTX 3090 Ti
NVIDIA          GeForce RTX 4060
NVIDIA          GeForce RTX 4060 Laptop
NVIDIA          GeForce RTX 4060 Ti
NVIDIA          GeForce RTX 4070
NVIDIA          GeForce RTX 4070 Laptop
NVIDIA          GeForce RTX 4070 SUPER
NVIDIA          GeForce RTX 4070 Ti
NVIDIA          GeForce RTX 4070 Ti SUPER
NVIDIA          GeForce RTX 4080
NVIDIA          GeForce RTX 4080 SUPER
NVIDIA          GeForce RTX 4090
NVIDIA          GeForce RTX 4090 Laptop
NVIDIA          H100 80G PCIe
NVIDIA          H100 PCIe
NVIDIA          L40
NVIDIA          Quadro RTX 4000
NVIDIA          RTX 4000 SFF Ada Generation
NVIDIA          RTX 5000
NVIDIA          RTX 5000 Ada Generation
NVIDIA          RTX 6000 Ada Generation
NVIDIA          RTX 8000
NVIDIA          RTX A4000
NVIDIA          RTX A5000
NVIDIA          RTX A6000
NVIDIA          Tesla P100 PCIe
NVIDIA          Tesla T4
NVIDIA          Tesla V100-SXM2-16GB
NVIDIA          Tesla V100-SXM2-32GB

CPU

Manufacturer    CPU Model
Apple           M1 Pro
Apple           M1
Apple           M1 Max
Apple           M2
Apple           M2 Max
Apple           M2 Pro
Apple           M3
Apple           M3 Max
Apple           M3 Pro
AMD             QEMU Virtual CPU version 2.5+