A build.zig for llama.cpp, with Vulkan.
You can use llama.cpp from Zig projects.
You can also cross-compile llama.cpp to different targets.
Supported targets are:
- Linux x86_64
- Linux aarch64
- Windows x86_64
- Windows aarch64
Supported backends are:
- CPU
- Vulkan
Other targets and backends can be added over time, given test devices.
- Random x86_64 running Linux: all good.
- Random x86_64 running Windows: all good.
- Raspberry Pi 5 (aarch64 Linux): CPU works; Vulkan compiles but fails to run, apparently due to lack of memory.
- Surface Pro X SQ2 (aarch64 Windows): CPU works; Vulkan compiles but does not run due to a missing feature.
- Termux (aarch64 Android/Linux): CPU works; Vulkan compiles but does not run.
All you need is Zig installed. All dependencies are pulled and compiled.
You can compile with:

```
zig build install
```

You can choose the backend used:

```
zig build install -Dbackend=vulkan
zig build install -Dbackend=cpu # default
```

And choose a target architecture and OS:

```
zig build install -Dtarget=x86_64-linux
```

The first compilation can take several minutes on some platforms.
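As a rough illustration of how options like `-Dbackend` work, here is a minimal sketch of declaring them in a `build.zig`. The enum and option names below are assumptions for illustration, not this repository's actual code:

```zig
const std = @import("std");

// Hypothetical backend enum; illustrative only.
const Backend = enum { cpu, vulkan };

pub fn build(b: *std.Build) void {
    // Standard flags: -Dtarget=..., -Doptimize=...
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // User-facing option, selected with `-Dbackend=vulkan` etc.
    const backend = b.option(Backend, "backend", "compute backend") orelse .cpu;

    _ = .{ target, optimize, backend }; // wiring into compile steps omitted
}
```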
Add it as a dependency to your project:

```
zig fetch --save git+https://github.com/diogok/llama.cpp.zig
```
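The fetch command records the dependency in your `build.zig.zon`. Assuming the package name matches the `llama_cpp_zig` used below, the saved entry looks roughly like this (the hash is computed and pinned for you):

```zig
// In build.zig.zon (sketch; the real hash is written by `zig fetch --save`):
.dependencies = .{
    .llama_cpp_zig = .{
        .url = "git+https://github.com/diogok/llama.cpp.zig",
        .hash = "...", // content hash pinned by zig fetch
    },
},
```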
Then build and link the library in your `build.zig` like:

```zig
const llama_cpp_dep = b.dependency("llama_cpp_zig", .{
    .target = target,
    .optimize = optimize,
    .backend = backend, // e.g. `.vulkan` or `.cpu`
});

const llama_cpp_lib = llama_cpp_dep.artifact("llama_cpp");
your_module.linkLibrary(llama_cpp_lib);
```

Refer to [src/demo.zig] for a usage example.
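For a flavor of what calling into the linked library looks like, here is a minimal sketch using llama.cpp's C API through `@cImport`. It assumes `llama.h` is on the include path via the artifact above, and the exact C signatures vary between llama.cpp versions, so treat [src/demo.zig] as the authoritative example:

```zig
const std = @import("std");
const c = @cImport(@cInclude("llama.h"));

pub fn main() void {
    // Initialize whichever backend the library was built with (CPU or Vulkan).
    c.llama_backend_init();
    defer c.llama_backend_free();

    // Report the capabilities compiled into the linked library.
    std.debug.print("{s}\n", .{std.mem.span(c.llama_print_system_info())});
}
```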
MIT