Getting started (or not). TVM on Windows.

Scott Jin
4 min read · Jan 1, 2025


[TVM logo]

While it is always more convenient to learn about machine learning on Linux, as in all my previous stories I would like to keep using Windows.

After my previous stories about running an ONNX graph by myself in C, I’m thinking about moving into the “AI compiler” space. In short, it seems that I should learn “TVM”, “XLA”, “Glow”, and how LLVM works underneath them. Going by popularity, Apache TVM looks like a good place to start, but it soon turns out that even building it is quite complicated.

In this story, my original aim was to build Apache TVM with LLVM. At the end of the day, it only partially works: I can get TVM Relay to work, but AutoTVM, the cherry on top, does not work and stalls, eventually surfacing this error:

Traceback (most recent call last):
File "d:\libraries\tvm\python\tvm\autotvm\measure\measure_methods.py", line 624, in run_through_rpc
with module_loader(remote_kwargs, build_result) as (remote, mod):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "d:\libraries\tvm\python\tvm\autotvm\measure\measure_methods.py", line 692, in __call__
yield remote, remote.load_module(os.path.split(build_result.filename)[1])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\libraries\tvm\python\tvm\rpc\client.py", line 178, in load_module
return _ffi_api.LoadRemoteModule(self._sess, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\libraries\tvm\python\tvm\_ffi\_ctypes\packed_func.py", line 245, in __call__
raise_last_ffi_error()
File "d:\libraries\tvm\python\tvm\_ffi\base.py", line 481, in raise_last_ffi_error
raise py_err
tvm._ffi.base.TVMError: Traceback (most recent call last):
File "D:\libraries\tvm\src\runtime\rpc\rpc_endpoint.cc", line 439
RPCError: Error caught from RPC call:

Traceback (most recent call last):
File "D:\libraries\tvm\src\runtime\rpc\rpc_endpoint.cc", line 439
RPCError: Error caught from RPC call:
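
For reference, what I was running when it stalled is essentially the standard AutoTVM matmul tuning example. Below is a minimal sketch of it; the template name, shapes, trial count, and log file name are my own choices, and it assumes a TVM build where te and autotvm are still available.

from tvm import te, autotvm

# A tunable matmul template, following the official AutoTVM tutorial.
@autotvm.template("demo/matmul")  # template name is arbitrary
def matmul(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)

    y, x = s[C].op.axis
    k = s[C].op.reduce_axis[0]

    # Define the search space: how to tile the two spatial axes.
    cfg = autotvm.get_config()
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)

    # Apply the chosen configuration to the schedule.
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, k, yi, xi)
    return s, [A, B, C]

task = autotvm.task.create("demo/matmul", args=(512, 512, 512, "float32"), target="llvm")

# Even LocalRunner measures candidates through TVM's local RPC machinery,
# which is where my Windows run gets stuck.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5),
)

tuner = autotvm.tuner.RandomTuner(task)
tuner.tune(
    n_trial=10,
    measure_option=measure_option,
    callbacks=[autotvm.callback.log_to_file("matmul_tuning.log")],
)

For me, the candidates compile fine, but the measurement step hangs and eventually ends in the RPC error shown above.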

Thus, I would say it is not worth it to learn TVM on Windows.

Still, I will write down the consolidated commands for building both LLVM and Apache TVM, in case I have to pick this up again sometime, and in case anyone is interested.

Building LLVM

Reference: https://llvm.org/docs/GettingStartedVS.html

Generally, following the official instructions will work, but I got mine installed into C:\Program Files (x86), which makes it useless! The space in the path will break the TVM build later on.

So here are the revised commands; only one thing is different.

Use VS 2019 or later (I use 2022) with administrator access.

Go (cd) to the directory where you want to download the git files.

regsvr32 "%VSINSTALLDIR%\DIA SDK\bin\msdia140.dll"
regsvr32 "%VSINSTALLDIR%\DIA SDK\bin\amd64\msdia140.dll"

pip install psutil
git clone https://github.com/llvm/llvm-project.git llvm

cmake -S llvm\llvm -B build -DLLVM_ENABLE_PROJECTS=clang -DLLVM_TARGETS_TO_BUILD=X86 -Thost=x64 -DCMAKE_INSTALL_PREFIX=c:\libraries\llvm
exit

Do remember to open Visual Studio as administrator to build the solution, and build the INSTALL project so LLVM actually ends up in the prefix you chose.

The most important thing is to explicitly say where to install it. Then add the installed bin directory to the PATH environment variable; this will make building TVM slightly easier.

Building TVM

I will assume you have cmake and Python ready. Personally, I like to use cmake-gui for some extra help.

We first get the source files and create a build directory:

git clone --recursive https://github.com/apache/tvm tvm
cd tvm
mkdir build

Then open cmake-gui, point it at the source and build directories, and change some settings.

Set “CMAKE_INSTALL_PREFIX” to the directory where you want to install it, maybe next to LLVM.

Since we have put LLVM's bin directory on the PATH, we can set

USE_LLVM llvm-config --link-static

Then click configure and generate.

In the past (https://discuss.tvm.apache.org/t/unofficial-autotvm-on-windows-guide/4711), the advice was that changing “USE_OPENMP” to gnu helps when TVM loops forever, which is exactly what I am stuck on, but it doesn’t work in my case.

After that, click Configure and Generate again. Finally, build the generated solution with Visual Studio run as administrator, or from the command line:

cmake --build build --config Release -- /m

Go to the root of the TVM repository and run

pip install -e .\python

to install the Python module.
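
Once that succeeds, a quick way to check that the LLVM-backed Relay path works (the part that did work for me) is a small sketch like the one below; the conv2d shapes are arbitrary, and tvm.support.libinfo() is only used to confirm the build flags.

import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Confirm the build actually picked up LLVM.
print(tvm.support.libinfo().get("USE_LLVM"))

# Build a tiny Relay module: a single conv2d.
x = relay.var("x", shape=(1, 3, 32, 32), dtype="float32")
w = relay.var("w", shape=(8, 3, 3, 3), dtype="float32")
y = relay.nn.conv2d(x, w, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# Run it on the CPU through the graph executor.
dev = tvm.cpu(0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("x", np.random.rand(1, 3, 32, 32).astype("float32"))
rt.set_input("w", np.random.rand(8, 3, 3, 3).astype("float32"))
rt.run()
print(rt.get_output(0).shape)  # expect (1, 8, 32, 32)

If this runs and prints the expected output shape, the Relay side of the build is fine.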

This is the extent of what I have found. I would suggest sticking to WSL for experiments, though maybe not for building Apache TVM itself, since LLVM takes a humongous amount of time to build on a virtual machine and the optimization will not work that well there.

Thank you for reading this story!


Written by Scott Jin

Graduate student from Taiwan in Computer Science at the University of California, Riverside. Passionate about HPC, ML, and embedded software development.
