Mooncake.jl


The goal of the Mooncake.jl project is to produce an AD package written entirely in Julia, which improves over ForwardDiff.jl, ReverseDiff.jl, and Zygote.jl in several ways, and is competitive with Enzyme.jl. Please refer to the docs for more info.

Getting Started

Check that you're running a version of Julia that Mooncake.jl supports. See the SUPPORT_POLICY.md file for more info.

There are several ways to interact with Mooncake.jl. The most direct is Mooncake.value_and_gradient!!, which exposes the native API and allows a prepared gradient cache to be reused across calls. For example, it can compute the gradient of a function mapping a Vector{ComplexF64} to a Float64:

import Mooncake as MC

# A function mapping a Vector{ComplexF64} to a Float64.
f(x) = sum(abs2, x)
x = [1.0 + 2.0im, 3.0 + 4.0im]

# Build a reusable gradient cache, then compute the value and the gradient of f at x.
cache_friendly = MC.prepare_gradient_cache(f, x; friendly_tangents=true)
val, grad = MC.value_and_gradient!!(cache_friendly, f, x)

You should expect MC.prepare_gradient_cache to take a little while to run, since it performs the one-off setup work, but MC.value_and_gradient!! itself is fast and can be called repeatedly with the same cache. For additional details, see the interface docs.
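
As a quick illustration (a sketch only, reusing f, x, and MC from the snippet above), you can time the two steps to see where the cost lives:

@time cache = MC.prepare_gradient_cache(f, x)   # one-off setup cost
MC.value_and_gradient!!(cache, f, x)            # the first call may still compile a little
@time MC.value_and_gradient!!(cache, f, x)      # subsequent calls with the same cache are fast

You can also interact with Mooncake.jl via DifferentiationInterface.jl: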

import DifferentiationInterface as DI

# Reverse-mode AD. For forward-mode AD, use `AutoMooncakeForward()`.
backend = DI.AutoMooncake()
prep = DI.prepare_gradient(f, backend, x)  # one-off preparation, analogous to prepare_gradient_cache
DI.gradient(f, prep, backend, x)           # fast path that reuses `prep`
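
If you also need the primal value, DifferentiationInterface provides DI.value_and_gradient, which can reuse the same prep object (a minimal sketch continuing the snippet above):

val, grad = DI.value_and_gradient(f, prep, backend, x)  # value of f(x) and its gradient in one call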

We generally recommend interacting with Mooncake.jl through DifferentiationInterface.jl, although this interface may lag behind Mooncake in supporting newly introduced features.
